IFMBE Proceedings Series Editor: R. Magjarevic
Volume 32
The International Federation for Medical and Biological Engineering (IFMBE) is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational.

The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, healthcare technology and management. Through its 60 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Herbert Voigt; Vice-President: Ratko Magjarevic; Past-President: Makoto Kikuchi; Treasurer: Shankar M. Krishnan; Secretary-General: James Goh
http://www.ifmbe.org
Previous Editions:

IFMBE Proceedings SBEC 2010, "26th Southern Biomedical Engineering Conference SBEC 2010, April 30 – May 2, 2010, College Park, Maryland, USA", Vol. 32, 2010, Maryland, USA, CD
IFMBE Proceedings WCB 2010, "6th World Congress of Biomechanics (WCB 2010)", Vol. 31, 2010, Singapore, CD
IFMBE Proceedings BIOMAG2010, "17th International Conference on Biomagnetism Advances in Biomagnetism – Biomag2010", Vol. 28, 2010, Dubrovnik, Croatia, CD
IFMBE Proceedings ICDBME 2010, "The Third International Conference on the Development of Biomedical Engineering in Vietnam", Vol. 27, 2010, Ho Chi Minh City, Vietnam, CD
IFMBE Proceedings MEDITECH 2009, "International Conference on Advancements of Medicine and Health Care through Technology", Vol. 26, 2009, Cluj-Napoca, Romania, CD
IFMBE Proceedings WC 2009, "World Congress on Medical Physics and Biomedical Engineering", Vol. 25, 2009, Munich, Germany, CD
IFMBE Proceedings SBEC 2009, "25th Southern Biomedical Engineering Conference 2009", Vol. 24, 2009, Miami, FL, USA, CD
IFMBE Proceedings ICBME 2008, "13th International Conference on Biomedical Engineering", Vol. 23, 2008, Singapore, CD
IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological Engineering", Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, "4th Kuala Lumpur International Conference on Biomedical Engineering", Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, "14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, "7th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, "IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solution for Latin America Health", Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings ICEBI 2007, "13th International Conference on Electrical Bioimpedance and the 8th Conference on Electrical Impedance Tomography", Vol. 17, 2007, Graz, Austria, CD
IFMBE Proceedings MEDICON 2007, "11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007", Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006, "Kuala Lumpur International Conference on Biomedical Engineering", Vol. 15, 2006, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006, "World Congress on Medical Physics and Biomedical Engineering", Vol. 14, 2006, Seoul, Korea, DVD
IFMBE Proceedings Vol. 32
Keith E. Herold, William E. Bentley, and Jafar Vossoughi (Eds.)
26th Southern Biomedical Engineering Conference SBEC 2010 April 30 – May 2, 2010 College Park, Maryland, USA
Editors

Keith E. Herold, Ph.D.
University of Maryland
Dept. of Bioengineering
Glenn L. Martin Hall 2181
College Park, MD 20742, USA
E-mail: [email protected]

William E. Bentley
University of Maryland
Fischell Dept. of Bioengineering
College Park, MD 20742, USA
E-mail: [email protected]

Jafar Vossoughi
Biomed Research Foundation
Olney, MD 20832, USA
E-mail: [email protected]
ISSN 1680-0737
ISBN 978-3-642-14997-9
e-ISBN 978-3-642-14998-6
DOI 10.1007/978-3-642-14998-6

Library of Congress Control Number: 2010932014

© International Federation for Medical and Biological Engineering 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permissions for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE).

Typesetting: Scientific Publishing Services Pvt. Ltd., Chennai, India
Cover Design: deblik, Berlin

Printed on acid-free paper

springer.com
Preface
The 26th Southern Biomedical Engineering Conference was hosted by the Fischell Department of Bioengineering and the A. James Clark School of Engineering from April 30 to May 2, 2010. The conference program consisted of 168 oral presentations and 21 poster presentations, with approximately 250 registered participants, of whom about half were students. The sessions were organized along topical lines, with student papers interspersed among those of more senior investigators. A Student Competition resulted in several Best Paper and Honorable Mention awards. The 32 technical sessions ran in 6–7 parallel tracks.

This Proceedings is a subset of the papers submitted to the conference. It includes 147 papers organized in topical areas. Many thanks go out to the paper reviewers, who significantly improved the clarity of the submitted papers. We greatly appreciate the opportunity to team with the International Federation for Medical and Biological Engineering (IFMBE), which endorsed the conference and made this Proceedings possible through its relationship with Springer. In addition, the endorsement by the Biomedical Engineering Society (BMES) provided excellent visibility for the SBEC through listings on the BMES website.

The National Cancer Institute (NCI) of the U.S. National Institutes of Health sponsored Session 33, a special Memorial Session for a long-time NCI program manager, Dr. James W. Jacobson, who died recently. The special session topic was Technologies for Cancer Diagnostics, and four of the papers from that session are included in this Proceedings. NCI's support is gratefully acknowledged. This session was made possible by Dr. Avraham Rasooly at NCI, who organized and promoted the session and who was responsible for the best concentration of science at the conference.

Finally, special thanks go to the National Science Foundation (NSF) for its generous support of this conference series over many years. NSF's support allows the organizers to subsidize student participation, to market the conference to a broader range of potential participants, and, as a result, to achieve a higher overall educational value. We hope that this permanent record of the conference will be a useful tool for researchers in the broad field of biomedical engineering.

SBEC Conference Co-chairs
Keith Herold
William Bentley
Jafar Vossoughi
Table of Contents
Traumatic Brain Injury

Traumatic Brain Injury in Rats Caused by Blast-Induced Hyper-Acceleration . . . . . . . . . . G. Fiskum, J. Hazelton, R. Gullapalli, W.L. Fourney
1
Early Metabolic and Structural Changes in the Rat Brain Following Trauma in vivo Using MRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Xu, J. Zhuo, J. Racz, S. Roys, D. Shi, G. Fiskum, R. Gullapalli
5
Principal Components of Brain Deformation in Response to Skull Acceleration: The Roles of Sliding and Tethering between the Brain and Skull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Teresa M. Abney, Y. Aaron Feng, Robert Pless, Ruth J. Okamoto, Guy M. Genin, Philip V. Bayly
9
Investigations into Wave Propagation in Soft Tissue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.F. Valdez, B. Balachandran
13
Correlating Tissue Response with Anatomical Location of mTBI Using a Human Head Finite Element Model under Simulated Blast Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T.P. Harrigan, J.C. Roberts, E.E. Ward, A.C. Merkle
18
Human Surrogate Head Response to Dynamic Overpressure Loading in Protected and Unprotected Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.C. Merkle, I.D. Wing, J.C. Roberts
22
Blast-Induced Traumatic Brain Injury: Using a Shock Tube to Recreate a Battlefield Injury in the Laboratory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.B. Long, L. Tong, R.A. Bauman, J.L. Atkins, A.J. Januszkiewicz, C. Riccio, R. Gharavi, R. Shoge, S. Parks, D.V. Ritzel, T.B. Bentley
26
Wave Propagation in the Human Brain and Skull Imaged in vivo by MR Elastography . . . . . . . E.H. Clayton, G.M. Genin, P.V. Bayly
31
Cavitation as a Possible Traumatic Brain Injury (TBI) Damage Mechanism . . . . . . . . . . . . . . . . . . Andrew Wardlaw, Jack Goeller
34
Prognostic Ability of Diffusion Tensor Imaging Parameters among Severely Injured Traumatic Brain Injury Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Joshua F. Betz, Jiachen Zhuo, Anindya Roy, Kathirkamanthan Shanmuganathan, Rao P. Gullapalli
38
Auditory Science

Hair Cell Regeneration in the Mammalian Ear, Is Gene Therapy the Answer? . . . . . . . . . . Matthew W. Kelley
42
Magnetoencephalography and Auditory Neural Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.Z. Simon, N. Ding
45
Voice Pitch Processing with Cochlear Implants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Monita Chatterjee, Shu-Chen Peng, Lauren Wawroski, Cherish Oberzut
49
Transcranial Magnetic Stimulation as a Tool for Investigating and Treating Tinnitus . . . . . . . . . G.F. Wittenberg
53
Bioengineering Education

A Course Guideline for Biomedical Engineering Modeling and Design for Freshmen . . . . . . . . . . W.C. Wong, E.B. Haase
56
Classroom Nuclear Magnetic Resonance System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.L. Zimmerman, E.S. Boyden, S.C. Wasserman
61
The Basics of Bioengineering Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Arthur T. Johnson
65
HealthiManage: An Individualized Prediction Algorithm for Type 2 Diabetes Chronic Disease Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Salim Chemlal, Sheri Colberg, Marta Satin-Smith, Eric Gyuricsko, Tom Hubbard, Mark W. Scerbo, Frederic D. McKenzie
67
Cellular Engineering

Dynamic Movement and Property Changes in Live Mesangial Cells by Stimuli . . . . . . . . . . Gi Ja Lee, Samjin Choi, Jeong Hoon Park, Kyung Sook Kim, Ilsung Cho, Sang Ho Lee, Hun Kuk Park
71

Cooperative Interactions between Myosin II and Cortexillin I Mediated by Actin Filaments during Cellular Deformation . . . . . . . . . . Tianzhi Luo, Douglas N. Robinson
74
Devices

Constitutive Law for Miniaturized Quantitative Microdialysis . . . . . . . . . . C.-f. Chen
77
Non-invasive Estimation of Intracranial Pressure by Means of Retinal Venous Pulsatility . . . . . S. Mojtaba Golzan, Stuart L. Graham, Alberto Avolio
81
Apparatus for Quantitative Slit-Lamp Ocular Fluorometry . . . . . . . . . . José P. Domingues, Isa Branco, António M. Morgado
85
Changes in Viscoelastic Properties of Latex Condoms Due to Personal Lubricants . . . . . . . . . . . . Srilekha Sarkar Das, Matthew Schwerin, Donna Walsh, Charles Tack, D. Coleman Richardson
89
Towards the Objective Evaluation of Hand Disinfection . . . . . . . . . . Ákos Lehotsky, Melinda Nagy, Tamás Haidegger
92
Neural Systems Engineering

In vitro Models for Measuring Charge Storage Capacity . . . . . . . . . . K.F. Zaidi, Z.H. Benchekroun, S. Minnikanti, J. Pancrazio, N. Peixoto
97

Discovery of Long-Latency Somatosensory Evoked Potentials as a Marker of Cardiac Arrest Induced Brain Injury . . . . . . . . . . Dan Wu, Jai Madhok, Young-Seok Choi, Xiaofeng Jia, Nitish V. Thakor
101

In vivo Characterization of Epileptic Tissue with Time-Dependent, Diffuse Reflectance Spectroscopy . . . . . . . . . . Nitin Yadav, Sanghoon Oh, Sanjeev Bhatia, John Ragheb, Prasanna Jayakar, Michael Duchowny, Wei-Chiang Lin
105
Kinematics

Effects of Initial Grasping Forces, Axes, and Directions on Torque Production during Circular Object Manipulation . . . . . . . . . . J. Huang, J.K. Shim
109
Time Independent Functional Training of Inter-joint Arm Coordination Using the ARMin III Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.B. Brokaw, T. Nef, T.M. Murray, P.S. Lum
113
Kinematic Analysis in Robot Assisted Femur Fracture Reduction: Fuzzy Logic Approach . . . . . Wang Song, Chen Yonghua, Ye Ruihua, Yau WaiPan
118
Compensation for Weak Hip Abductors in Gait Assisted by a Novel Crutch-Like Device . . . . . . J.R. Borrelli, H.W. Haslach Jr.
122
Nanotechnology

Measuring in vivo Effects of Chemotherapy Treatment on Cardiac Capillary Permeability . . . . . . . . . . A. Fernandez-Fernandez, D.A. Carvajal, A.J. McGoron
126
Nanoscale “DNA Baskets” for the Delivery of siRNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.C. Zirzow, M. Skoblov, A. Patanarut, C. Smith, A. Fisher, V. Chandhoke, A. Baranova
130
Nanoscale Glutathione Patches Improve Organ Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Homer Nazeran, Sherry Blake-Greenberg
134
Nanoscale Carnosine Patches Improve Organ Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Homer Nazeran, Sherry Blake-Greenberg
138
Multiple Lumiphore-Bound Nanoparticles for in vivo Quantification of Localized Oxygen Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.L. Van Druff, W. Zhou, E. Asman, J.B. Leach
142
Ion-Mobility Characterization of Functionalized and Aggregated Gold Nanoparticles for Drug Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.-H. Tsai, L.F. Pease III, R.A. Zangmeister, S. Guha, M.J. Tarlov, M.R. Zachariah
146
Implants

Quantitative Mapping of Vascular Geometry for Implant Sites . . . . . . . . . . J.W. Karanian, O. Lopez, D. Rad, B. McDowell, M. Kreitz, J. Esparza, J. Vossoughi, O.A. Chiesa, W.F. Pritchard
150
Failure Analysis and Materials Characterization of Hip Implants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.M. Bastidos, S.W. Stafford
154
Nano-Wear-Particulates Elicit a Size and Dose Dependent Response by RAW 264.7 Cells . . . . Mrinal K. Musib, Subrata Saha
158
Viscous Behavior of Different Concentrations of Bovine Calf Serum Used to Lubricate the Micro-textured CoCrMo Alloy Material before and after Wear Testing . . . . . . . . . . Geriel Ettienne-Modeste, Timmie Topoleski
161

Progressive Wear Damage Analysis on Retrieved UHMWPE Tibial Implants . . . . . . . . . . N. Camacho, S.W. Stafford, L. Trueba Jr.
165
Tissue Engineering

Gum Arabic-Chitosan Composite Biopolymer Scaffolds for Bone Tissue Engineering . . . . . . . . . . R.A. Silva, P. Mehl, O.C. Wilson
171

Modification of Hydrogel Scaffolds for the Modulation of Corneal Epithelial Cell Responses . . . . . . . . . . L.G. Reis, P. Pattekari, P.S. Sit
175
Making of Functional Tissue Engineered Heart Valve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.S. Patel, Y.S. Morsi
180
Ties That Bind: Evaluation of Collagen I and α-Chitin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tiffany Omokanwaye, Otto Wilson Jr.
183
Chitosan/Poly(ε-Caprolactone) Composite Hydrogel for Tissue Engineering Applications . . . . . . . . . . Xia Zhong, Chengdong Ji, Sergei G. Kazarian, Andrew Ruys, Fariba Dehghani
188
Disease Modeling

Modeling and Control of HIV by Computational Intelligence Techniques . . . . . . . . . . N. Bazyar Shourabi
192
Mathematical Modeling of Ebola Virus Dynamics as a Step towards Rational Vaccine Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sophia Banton, Zvi Roth, Mirjana Pavlovic
196
Respiratory Impedance Values in Adults Are Relatively Insensitive to Mead Model Lung Compliance and Chest Wall Compliance Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bill Diong, Michael D. Goldman, Homer Nazeran
201
A Systems Biology Model of Alzheimer’s Disease Incorporating Spatial-temporal Distribution of Beta Amyloid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.R. Kyrtsos, J.S. Baras
204
A Mathematical Model of the Primary T Cell Response with Contraction Governed by Adaptive Regulatory T Cells . . . . . . . . . . S.N. Wilson, P. Lee, D. Levy
209
A Mathematical Model for Microenvironmental Control of Tumor Growth . . . . . . . . . . . . . . . . . . . A.R. Galante, D. Levy, C. Tomasetti
213
Assessing the Usability of Web-Based Personal Health Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pedro Gonzales, Binh Q. Tran
217
Drug Delivery

Real Time Monitoring of Extracellular Glutamate Release in Rat Ischemia Model Treated by Nimodipine . . . . . . . . . . E.K. Park, G.J. Lee, S.K. Choi, S. Choi, S.W. Kang, S.J. Chae, H.K. Park
221

Targeted Delivery of Doxorubicin by PLGA Nanoparticles Increases Drug Uptake in Cancer Cell Lines . . . . . . . . . . Tingjun Lei, Supriya Srinivasan, Yuan Tang, Romila Manchanda, Alicia Fernandez-Fernandez, Anthony J. McGoron
224
Cellular Uptake and Cytotoxicity of a Novel ICG-DOX-PLGA Dual Agent Polymer Nanoparticle Delivery System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Romila Manchanda, Tingjun Lei, Yuan Tang, Alicia Fernandez-Fernandez, Anthony J. McGoron
228
Electrospray – Differential Mobility Analysis (ES-DMA) for Characterization of Heat Induced Antibody Aggregates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Suvajyoti Guha, Joshua Wayment, Michael J. Tarlov, Michael R. Zachariah
232
Mechanisms of Poly(amido amine) Dendrimer Transepithelial Transport and Tight Junction Modulation in Caco-2 Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.S. Goldberg, P.W. Swaan, H. Ghandehari
236
Absorbable Coatings: Structure and Drug Elution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Sarkar Das, M.K. McDermott, A.D. Lucas, T.E. Cargal, L. Patel, D.M. Saylor, D.V. Patwardhan
240
Special Topics

A Brief Comparison of Adaptive Noise Cancellation, Wavelet and Cycle-by-Cycle Fourier Series Analysis for Reduction of Motional Artifacts from PPG Signals . . . . . . . . . . M. Malekmohammadi, A. Moein
243
Respiratory Resistance Measurements during Exercise Using the Airflow Perturbation Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Chapain, A. Johnson, J. Vossoughi, S. Majd
247
Comparison of IOS Parameters to aRIC Respiratory System Model Parameters in Normal and COPD Adults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michael Mangum, Bill Diong, Michael D. Goldman, Homer Nazeran
251
Effect of Waveform Shape and Duration on Defibrillation Threshold in Rabbit Hearts . . . . . . . . J. Stohlman, F. Aguel, G. Calcagnini, E. Mattei, M. Triventi, F. Censi, P. Bartolini, V. Krauthamer
254
The Measurement and Processing of EEG Signals to Evaluate Fatigue . . . . . . . . . . . . . . . . . . . . . . . . M.R. Yousefi Zoshk, M. Azarnoosh
258
Modeling for the Impact of Anesthesia on Neural Activity in the Auditory System . . . . . . . . . . . Z.B. Tan, L.Y. Wang, H. Wang, X.G. Zhang, J.S. Zhang
262
Cortical Excitability Changes after Repetitive Self-regulated vs. Tracking Movements of the Hand . . . . . . . . . . S.B. Godfrey, P.S. Lum, C.N. Schabowsky, M.L. Harris-Love
266

What the ENT Wants in the OR: Bioengineering Prospects . . . . . . . . . . D.A. Depireux, D.J. Eisenman
270

An in vitro Biomechanical Comparison of Human Dermis to a Silicone Biosimulant Material . . . . . . . . . . I.D. Wing, H.A. Conner, P.J. Biermann, S.M. Belkoff
274
Telemetric Epilepsy Monitoring and Seizures Aid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Hameed, F. Azhar, I. Shahrukh, M. Muzammil, M. Aamair, D. Mujeeb
278
Spike Detection for Integrated Circuits: Comparative Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Sarje, P. Abshire
282
Effect of Ambient Humidity on the Electrical Conductance of a Titanium Oxide Coating Being Investigated for Potential Use in Biosensors . . . . . . . . . . Jorge Torres, James Sweeney, Jose Barreto
286

Brain Computer Interface in Cerebellar Ataxia . . . . . . . . . . G.I. Newman, S.H. Ying, Y.-S. Choi, H.-N. Kim, A. Presacco, M.V. Kothare, N.V. Thakor
289
Biosensors

Effects of Stray Field Distribution Generated by Magnetic Beads on Giant Magnetoresistance Sensor for Biochip Applications . . . . . . . . . . Kyung Sook Kim, Samjin Choi, Gi Ja Lee, Dong Hyun Park, Jeong Hoon Park, Il Sung Jo, Hun-Kuk Park
293

Electrostatic Purification of Nucleic Acids for Micro Total Analysis Systems . . . . . . . . . . E. Hoppmann, I.M. White
297
Applicability of Surface Enhanced Raman Spectroscopy for Determining the Concentration of Adenine and S-Adenosyl Homocysteine in a Microfluidic System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Omar Bekdash, Jordan Betz, Yi Cheng, Gary W. Rubloff
301
Integration of Capillary Ring Resonator Biosensor with PDMS Microfluidics for Label-Free Biosensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Farnoosh Farahi, Ian White
305
Surface Plasmon-Coupled Emission from Rhodamine- 6G Aggregates for Ratiometric Detection of Ethanol Vapors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Sai Sathish, Y. Kostov, G. Rao
309
Formation of Dendritic Silver Substrates by Galvanic Displacement for Surface Enhanced Raman Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jordan Betz, Yi Cheng, Omar Bekdash, Susan Buckhout-White, Gary W. Rubloff
313
High Specificity Binding of Lectins to Carbohydrate Functionalized Etched Fiber Bragg Grating Optical Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Geunmin Ryu, Mario Dagenais, Matthew T. Hurley, Philip DeShong
317
Oximetry

Oximetry and Blood Flow in the Retina . . . . . . . . . . P. Lemaillet, A. Lompado, D. Duncan, Q.D. Nguyen, J.C. Ramella-Roman
321
Monitoring and Controlling Oxygen Levels in Microfluidic Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . Peter C. Thomas, Srinivasa R. Raghavan, Samuel P. Forry
325
An Imaging Pulse Oximeter Based on a Multi-Aperture Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ali Basiri, Jessica C. Ramella-Roman
329
Fluorescent Microparticles for Sensing Cell Microenvironment Oxygen Levels within 3D Scaffolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miguel A. Acosta, Jennie B. Leach
332
Determination of in vivo Blood Oxygen Saturation and Blood Volume Fraction Using Diffuse Reflectance Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Chen, W. Lin
336
Image Analysis

Fredholm Integral Equations in Biophysical Data Analysis . . . . . . . . . . P. Schuck
340

High-Resolution Autofluorescence Imaging for Mapping Molecular Processes within the Human Retina . . . . . . . . . . Martin Ehler, Zigurts Majumdar, Emily King, Julia Dobrosotskaya, Emily Chew, Wai Wong, Denise Cunningham, Wojciech Czaja, Robert F. Bonner
344
Local Histograms for Classifying H&E Stained Tissues . . . . . . . . . . M.L. Massar, R. Bhagavatula, M. Fickus, J. Kovačević
348
Detecting and Classifying Cancers from Image Data Using Optimal Transportation . . . . . . . . . . . G.K. Rohde, W. Wang, D. Slepcev, A.B. Lee, C. Chen, J.A. Ozolek
353
Nanoscale Imaging of Chemical Elements in Biomedicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.A. Aronova, Y.C. Kim, A.A. Sousa, G. Zhang, R.D. Leapman
357
Sparse Representation and Variational Methods in Retinal Image Processing . . . . . . . . . . . . . . . . . J. Dobrosotskaya, M. Ehler, E. King, R. Bonner, W. Czaja
361
Neuromechanics & Rehabilitation

Optimization and Validation of a Biomechanical Model for Analyzing Running-Specific Prostheses . . . . . . . . . . Brian S. Baum, Roozbeh Borjian, You-Sin Kim, Alison Linberg, Jae Kun Shim
365
Prehension Synergy: Use of Mechanical Advantage during Multi-finger Torque Production on Mechanically Fixed- and Free-Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jaebum Park, You-Sin Kim, Brian S. Baum, Yoon Hyuk Kim, Jae Kun Shim
368
Modeling, Optimizing & Monitoring

Investigating Vortex Ring Propagation Speed Past Prosthetic Heart Valves: Implications for Assessing Valve Performance . . . . . . . . . . Ann Bailey, Michelle Beatty, Olga Pierrakos
372
Transient Heat Transfer in a Dental Prosthesis Implanted in Mandibular Bone . . . . . . . . . . . . . . . M.N. Ashtiani, R. Imani
376
Characterization of Material Properties of Aorta from Oscillatory Pressure Tests . . . . . . . . . . . . . V.V. Romanov, K. Darvish, S. Assari
380
Quasi-static Analysis of Electric Field Distributions by Disc Electrodes in a Rabbit Eye Model . . . . . . . . . . S. Minnikanti, E. Cohen, N. Peixoto
385

Optimizing the Geometry of Deep Brain Stimulating Electrodes . . . . . . . . . . J.Y. Zhang, W.M. Grill
389

Exploratory Parcellation of fMRI Data Based on Finite Mixture Models and Self-Annealing Expectation Maximization . . . . . . . . . . S. Maleki Balajoo, G.A. Hossein-Zadeh, H. Soltanian-Zadeh
393

Computational Fluid Dynamic Modeling of the Airflow Perturbation Device . . . . . . . . . . S. Majd, J. Vossoughi, A. Johnson
397
Biomaterials

Mechanism and Direct Visualization of Electrodeposition of the Polysaccharide Chitosan . . . . . . . . . . Yi Cheng, Xiaolong Luo, Jordan Betz, Omar Bekdash, Gary W. Rubloff
401
Chito-Cotton: Chitosan Coated Cotton-Based Scaffold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . O. Agubuzo, P. Mehl, O.C. Wilson, R. Silva
404
Effects of Temperature on the Performance of Footwear Foams: Review of Developments . . . . . M.R. Shariatmadari, R. English, G. Rothwell
409
A Tissue Equivalent Phantom of the Human Torso for in vivo Biocompatible Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . David M. Peterson, Walker Turner, Kevin Pham, Hong Yu, Rizwan Bashirullah, Neil Euliano, Jeffery R. Fitzsimmons
414
Identification of Bacteria and Sterilization of Crustacean Exoskeleton Used as a Biomaterial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tiffany Omokanwaye, Donae Owens, Otto Wilson Jr.
418
Neural Stem Cell Differentiation in 2D and 3D Microenvironments . . . . . . . . . . . . . . . . . . . . . . . . . . . A.S. Ribeiro, E.M. Powell, J.B. Leach
422
A Microfluidic Platform for Optical Monitoring of Bacterial Biofilms . . . . . . . . . . . . . . . . . . . . . . . . . M.T. Meyer, V. Roy, W.E. Bentley, R. Ghodssi
426
Conduction Properties of Decellularized Nerve Biomaterials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.G. Urbanchek, B.S. Shim, Z. Baghmanli, B. Wei, K. Schroeder, N.B. Langhals, R.M. Miriani, B.M. Egeland, D.R. Kipke, D.C. Martin, P.S. Cederna
430
Reverse Cholesterol Transport (RCT) Modeling with Integrated Software Configurator . . . . . . S. Adhikari
434
Biomechanics

Modeling Linear Head Impact and the Effect of Brain-Skull Interface . . . . . . . . . . K. Laksari, S. Assari, K. Darvish
437
Mechanics of CSF Flow through Trabecular Architecture in the Brain . . . . . . . . . . . . . . . . . . . . . . . . Parisa Saboori, Catherine Germanier, Ali Sadegh
440
Impact of Mechanical Loading to Normal and Aneurysmal Cerebral Arteries . . . . . . . . . . . . . . . . . M. Zoghi-Moghadam, P. Saboori, A. Sadegh
444
Identification of Material Properties of Human Brain under Large Shear Deformation: Analytical versus Finite Element Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.D. Untaroiu, Q. Zhang, A.M. Damon, J.R. Crandall, K. Darvish, G. Paskoff, B.S. Shender
448
Mechanisms of Traumatic Rupture of the Aorta: Recent Multi-Scale Investigations . . . . . . . . . . . N.A. White, C.S. Shah, W.N. Hardy
452
Head Impact Response: Pressure Analysis Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R.T. Cotton, P.G. Young, C.W. Pearce, L. Beldie, B. Walker
456
Imaging

An Introduction to the Next Generation of Radiology in the Web 2.0 World . . . . . . . . . . A. Moein, M. Malekmohammadi, K. Youssefi
459
Novel Detection Method for Monitoring of Dental Caries Using Single Digital Subtraction Radiography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.H. Park, Y.S. Choi, G.J. Lee, S. Choi, K.S. Kim, D.H. Park, I. Cho, H.K. Park
463
Targeted Delivery of Molecular Probes for in Vivo Electron Paramagnetic Resonance Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.R. Burks, E.D. Barth, S.S. Martin, G.M. Rosen, H.J. Halpern, J.P.Y. Kao
466
New Tools for Image-Based Mesh Generation of 3D Imaging Data . . . . . . . . . . P.G. Young, D. Raymont, V. Bui Xuan, R.T. Cotton
470

Characterization of Speed and Accuracy of a Nonrigid Registration Accelerator on Pre- and Intraprocedural Images . . . . . . . . . . Raj Shekhar, William Plishker, Sheng Xu, Jochen Kruecker, Peng Lei, Aradhana Venkatesan, Bradford Wood
473
Assessment of Kidney Structure and Function Using GRIN Lens Based Laparoscope with Optical Coherence Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.W. Chen, J. Wierwille, M.L. Onozato, P.M. Andrews, M. Phelan, J. Borin, Y. Chen
477
Reliability of Structural Equation Modeling of the Motor Cortex in Resting State Functional MRI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Kavallappa, S. Roys, A. Roy, J. Greenspan, R. Gullapalli, A. McMillan
481
Quantitative Characterization of Radiofrequency Ablation Lesions in Tissue Using Optical Coherence Tomography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Wierwille, A. McMillan, R. Gullapalli, J. Desai, Y. Chen
485
Clinically Relevant Hand Held Two Lead EEG Device . . . . . . . . . . E.M. O’Brien, R.L. Elliott
489

A Simple Structural Magnetic Resonance Imaging (MRI) Method for 3D Mapping between Head Skin Tattoos and Brain Landmarks . . . . . . . . . . Mulugeta Semework
493

Frame Potential Classification Algorithm for Retinal Data . . . . . . . . . . John J. Benedetto, Wojciech Czaja, Martin Ehler
496

Raman-AFM Instrumentation and Characterization of SERS Substrates and Carbon Nanotubes . . . . . . . . . . Q. Vu, M.H. Zhao, E. Wellner, X. Truong, P.D. Smith, A.J. Jin
500
A Novel Model of Skin Electrical Injury . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Thu T.A. Nguyen, Ali Basiri, J.W. Shupp, A.R. Pavlovich, M.H. Jordan, Z. Sanford, J.C. Ramella-Roman
504
Design, Construction, and Evaluation of an Electrical Impedance Myographer . . . . . . . . . . . . . . . . K. Lweesy, L. Fraiwan, D. Hadarees, A. Jamil, E. Ramadan
508
The Role of Imaging Tools in Biomedical Research: Preclinical Stent Implant Study . . . . . . . . . . W.F. Pritchard, M. Kreitz, O. Lopez, D. Rad, B. McDowell, S. Nagaraja, M.L. Dreher, J. Esparza, J. Vossoughi, O.A. Chiesa, J.W. Karanian
512
Hard Tissue and Posture

Optimization of Screw Positioning in Mandible during Bilateral Sagittal Split Osteotomy Using Finite Element Method . . . . . . . . . . A. Raeisi Najafi, A. Pashaei, S. Majd, I. Zoljanahi Oskui, B. Bohluli
516
Extraction and Characterization of a Soluble Chicken Bone Collagen . . . . . . . . . . . . . . . . . . . . . . . . . Tiffany Omokanwaye, Otto Wilson Jr., Hoda Iravani, Pramodh Kariyawasam
520
A Model for Human Postural Regulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yao Li, William S. Levine
524
Development of an Average Chest Shape for Objective Evaluation of the Aesthetic Outcome in the Nuss Procedure Planning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K.J. Rechowicz, R. Kelly, M. Goretsky, F. Frantz, S. Knisley, D. Nuss, F.D. McKenzie
528
Sickle Cell and Blood Cell

Sickle Hemoglobin Fiber Growth Rates Revealed by Optical Pattern Generation . . . . . . . . . . Z. Liu, A. Aprelev, M. Zakharov, F.A. Ferrone
532
Sickle Cell Occlusion in Microchannels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Aprelev, W. Stephenson, H. Noh, M. Meier, M. MacDermott, N. Lerner, F.A. Ferrone
536
Engineering Microfluidics Based Technologies for Rapid Sorting of White Blood Cells . . . . . . . . Vinay Raj, Kranthi Kumar Bhavanam, Vahidreza Parichehreh, Palaniappan Sethu
540
Peripheral Arterial Tonometry in Assessing Endothelial Dysfunction in Pediatric Sickle Cell Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K.M. Sivamurthy, C. Dampier, M. MacDermott, M. Meier, M. Cahill, L.L. Hsu
544
Comparison of Shear Stress, Residence Time and Lagrangian Estimates of Hemolysis in Different Ventricular Assist Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K.H. Fraser, M.E. Taskin, T. Zhang, B.P. Griffith, Z.J. Wu
548
Cancer

Drug Resistance Always Depends on the Turnover Rate . . . . . . . . . . C. Tomasetti, D. Levy
552

Design and Ex Vivo Evaluation of a 3D High Intensity Focused Ultrasound System for Tumor Treatment with Tissue Ablation . . . . . . . . . . K. Lweesy, L. Fraiwan, M. Al-Shalabi, L. Mohammad, R. Al-Oglah
556
The Dr. James W. Jacobson Symposium on Technologies for Cancer Diagnostics

Clinical Applications of Multispectral Imaging Flow Cytometry . . . . . . . . . . H. Minderman, T.C. George, K.L. O’Loughlin, P.K. Wallace
560
Multispectral Imaging, Image Analysis, and Pathology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Richard M. Levenson
564
Sensitive Characterization of Circulating Tumor Cells for Improving Therapy Selection . . . . . . . H. Ben Hsieh, George Somlo, Robyn Bennis, Paul Frankel, Robert T. Krivacic, Sean Lau, Janey Ly, Erich Schwartz, Richard H. Bruce
568
Nanohole Array Sensor Technology: Multiplexed Label-Free Protein Binding Assays . . . . . . . . . . J. Cuiffi, R. Soong, S. Manolakos, S. Mohapatra, D. Larson
572
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
577
Keyword Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
581
Traumatic Brain Injury in Rats Caused by Blast-Induced Hyper-Acceleration

G. Fiskum1, J. Hazelton1, R. Gullapalli2, and W.L. Fourney3

1 University of Maryland School of Medicine, Dept. of Anesthesiology and the Shock, Trauma, and Anesthesiology Research Center (STAR)
2 University of Maryland School of Medicine, Dept. of Diagnostic Radiology
3 University of Maryland School of Engineering, Dept. of Mechanical Engineering and the Center of Energetics Concepts Development

Abstract— Well over 100,000 U.S. warfighters in Iraq and Afghanistan have sustained some form of traumatic brain injury. Most of these injuries have been due to exposure to blasts. Of these victims, approximately 20% have been passengers within vehicles that were targets of roadside improvised explosive devices. The hyper-acceleration experienced by these victims can result in exposure to g-forces much greater than those that cause loss of consciousness, a clinical symptom of mild traumatic brain injury. We have developed an experimental paradigm to study the effects of blast-induced hyper-acceleration on laboratory rats to gain insight into mechanisms responsible for brain injury. Our hypothesis is that g-forces in the range of 20–40 g can induce mild brain injury without causing other injuries that are lethal. The preliminary results of brain histology measurements that probe for the degeneration or structural disorganization of neurons support this hypothesis. The significance of these studies is that they could eventually lead to improved designs of military vehicles that better protect against blast-induced neurologic injury. Moreover, the use of accelerometers and other sensors in these experiments could establish thresholds of forces that cause brain injury. Finally, experimental drugs and other conditions could be tested in this paradigm to identify neuroprotective interventions that are specifically effective against blast-induced traumatic brain injury.

Keywords— Explosion, acceleration, traumatic brain injury, neuron.
I. INTRODUCTION

A form of complex traumatic brain injury (TBI) has been identified in armed forces and civilians in Iraq and Afghanistan [1,2]. Approximately 25% of all combat casualties in these military conflicts are caused by TBI, with most of these head injuries caused by explosive munitions such as bombs, land mines, improvised explosive devices and missiles [Defense & Veterans Brain Injury Center web site, www.dvbic.org]. The majority of experimental data has focused on one aspect of these explosions, the blast overpressure [3,4]. Most of these studies used a model in which an air-driven pressure wave was delivered via a long shock tube directly to the immobilized animal's head or body. Very few causative pathologic mechanisms to explain
the CNS injury in this model have been identified, and those identified have had limited description. It has become apparent that blast overpressure is not the only factor in complex, explosive-related closed head injuries. A multitude of physical forces play a role, including blast overpressure, thermal and chemical components, shockwave, and hyper-acceleration of the brain. We hypothesize that this extreme hyper-acceleration, with subsequent rapid deceleration, could be responsible for many aspects of brain injury. This may be especially true for the large number of soldiers injured while driving light armored vehicles over improvised explosive devices, as well as for pedestrians injured in the vicinity of large explosions. The marked effects of rapid acceleration, or g-force (Gz), on the brain have been studied in other models related to flight acceleration. These studies use centrifuge exposure (+4-14 Gz) in rats, and have shown diffuse neuronal degeneration and indicators of cell death throughout the brain [5,6]. Similar histologic changes were seen in neurons and other brain cells in the brains of rhesus monkeys exposed to graded Gz loading (+15-21 Gz) [7]. In addition to histologic cellular changes, investigators have noted significant shearing stresses on blood vessels, which could cause vessel collapse and subsequent restricted blood flow [8]. One study involving graded Gz load (+5-20 Gz) found depression of cerebral energy metabolism that correlated with increasing Gz force [9]. Importantly, acceleration exposure resulted in significant learning deficits in rats [10]. It is interesting to note that these studies of acceleration effects on the brain used Gz of a much smaller scale than soldiers experience during a war-related explosion. The Dynamic Effects Laboratory at UMCP has used small-scale testing to evaluate the loads applied to personnel carriers when a buried explosive detonates beneath them [11,12]. Conditions in these small-scale explosions proved to be extremely reproducible, and very similar to the parameters observed in full-scale testing of explosions at the Army Research Laboratory in Aberdeen, Maryland. Adaptation and scaling of this model to allow animal injury in a similar explosive environment could provide a completely new, clinically relevant model of blast TBI that encompasses many of the physical forces including the extreme hyper-acceleration. Ultimately, use of this model could
allow rapid testing of neuroprotective strategies, with eventual confirmation in a large animal model. This could lead to the discovery of neuroprotective strategies for the many warfighters and civilians suffering mild to severe brain injury. As a first step toward these goals, energetics experts at the University of Maryland School of Engineering in collaboration with neuroscientists at the University of Maryland School of Medicine have performed preliminary experiments demonstrating that blast induced hyperacceleration can cause mild TBI in laboratory rats at Gz between 20 and 30 that do not cause lethal injury to other organs.
II. METHODS

A. Blast-Induced Hyper-Acceleration Device

The device consists of an aluminum water tank 3 ft long x 2 ft wide x 2 ft deep in which a platform is located that supports a thick aluminum plate 15 in x 15 in. The plate can travel vertically up to 15 in, guided by poles located in holes in each corner of the plate. The two cylinders secured to the top of the plate house the anesthetized rats, which are wrapped in a thick cotton "blanket" to minimize movement within the cylinders. The cylinders are capped to prevent exposure of the rats to the pressures (<20 psi) generated during the blast in the room in which the device is located. An explosive charge of 0.75 g is placed in the water precisely under the center of the plate at distances that generate precise g-forces (20–50 g), determined previously at the Dynamics Effects Laboratory. When detonated, the explosion causes the plate containing the rats to accelerate upwards extremely rapidly to heights of approximately 4–8 inches; it then drops back down to the original location.

Three separate blast experiments have been conducted, each using 2 male Sprague-Dawley rats under deep ketamine-induced anesthesia. All animal procedures have been approved by the University of Maryland School of Medicine Animal Care and Use Committee. The first blast was conducted using a stand-off distance from the charge to the plate that generated a Gz of 30. Both rats initially survived the experiment and spontaneously recovered from the anesthesia within one hour. At approximately 2 hr, one rat demonstrated respiratory distress and died soon thereafter. The other rat exhibited no signs of distress and was re-anesthetized and euthanized 24 hr later, using perfusion fixation with paraformaldehyde (similar to embalming fluid). The brain was removed and used for histopathology. The second blast experiment was conducted using the same conditions. One rat was euthanized at 24 hr and the other was euthanized at 7 days after the blast. The third experiment used a longer stand-off distance, generating Gz of 22. One rat was euthanized at 24 hr and the other was euthanized at 7 days after the blast.

Fig. 1 Device for conducting blast-induced hyper-acceleration experiments with laboratory rats
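As a rough cross-check on the g-forces quoted above, the plate's ballistic rise can be related to an average acceleration during the launch impulse. The sketch below is not from the paper: it assumes the plate moves ballistically (drag neglected) once the impulse ends, and the 5 ms pulse duration is a hypothetical placeholder, since the actual pulse widths were characterized at the Dynamics Effects Laboratory rather than reported here.

```python
import math

G = 9.81  # standard gravity, m/s^2

def launch_velocity(height_m: float) -> float:
    """Launch velocity implied by a ballistic rise to height_m (drag neglected)."""
    return math.sqrt(2.0 * G * height_m)

def mean_g_load(height_m: float, pulse_s: float) -> float:
    """Average acceleration, in g, if the launch impulse lasts pulse_s seconds."""
    return launch_velocity(height_m) / (G * pulse_s)

# Observed plate rise: roughly 4-8 inches (about 0.10-0.20 m).
for h_in in (4.0, 8.0):
    h_m = h_in * 0.0254  # inches to meters
    # The 5 ms pulse duration is assumed for illustration, not a measured value.
    print(f"{h_in:.0f} in rise: v = {launch_velocity(h_m):.2f} m/s, "
          f"mean load ~ {mean_g_load(h_m, 0.005):.0f} g over a 5 ms pulse")
```

Under these assumptions the 4–8 inch rises correspond to average loads of roughly 30–40 g, consistent with the 20–50 g range cited above; a shorter pulse would imply proportionally higher loads.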
III. RESULTS

An autopsy on the one rat that died within 2 hr after the blast indicated that it had succumbed to a hemothorax due to bleeding from the lungs into the chest cavity. This rat also had a ruptured eardrum. This rat and all other animals exhibited evidence at the time of death of subdural hemorrhage (bleeding on the brain surface). The initial histopathology was based on the degree to which brain sections are stained with silver particles. This method has been used in numerous animal models of acute brain injury, including that caused by direct exposure to blast overpressure generated by a shock tube [13]. Increased staining is indicative of degenerating neurons or their axons, which are the long processes that transmit electrical signals among the billions of neurons present in the brain.
Figure 2 provides an example of the differences we have seen in silver-stained brain sections from a sham control rat compared to rats perfusion fixed 24 hr after being subjected to either 22 or 30 g forces in response to blast-induced hyper-acceleration. Results are qualitatively similar to those reported for rats exposed to blast overpressure using shock or blast tubes, i.e., increased overall staining with the appearance of discrete silver grains. The region of the brain used for this analysis is the hippocampus, which is responsible for learning and memory. Individuals who have suffered from a mild TBI typically experience at least temporary amnesia and occasionally exhibit learning problems.

Fig. 2 Silver stained neurons in the hippocampus of control (A,C) and blast TBI (B,D) rats using the silver staining method specific for the detection of degenerating neuron cell bodies, axons and terminals. Healthy neurons (A) are minimally stained yellow/gold or unstained, with absence of black silver grains inside the cell body or processes (C). In contrast, following bTBI (30 g), damaged neurons become particularly argyrophilic and are darkly stained (B) and impregnated with silver grains (arrows) throughout their cell bodies and processes (D). Images are shown at 4x (A,B) and 100x (C,D)
IV. DISCUSSION

To the best of our knowledge, these experiments represent the first to test for the specific effects of blast-induced hyper-acceleration on the brain. While very preliminary, several tentative conclusions can be made:

1. Adult rats can be subjected to blast-induced hyper-acceleration at Gz less than or equal to 30 with a high survival rate.
2. Within 24 hr after exposure to this paradigm, bleeding from the surface of the brain occurs.
3. The increase in silver staining of the brain sections within 24 hr after the blast strongly suggests the existence of mild traumatic brain injury.
The following outcomes are being pursued by experiments in progress:

1. Dose-dependent relationship between blast-induced hyper-acceleration and brain injury.
2. Determination of which brain areas are relatively sensitive to injury caused by blast-induced Gz.
3. Characterization of time-dependent brain injury over 1 to 30 days after the blast.
4. Time-dependent effects of the blast on behavioral outcome, including motor function, learning, and memory.
5. Fine structural and chemical changes in different brain regions at different times using magnetic resonance imaging and magnetic resonance spectroscopy.

The results of these experiments and outcome measures will provide insight into the pathophysiology of mild TBI induced by blast-induced hyper-acceleration. This information will be used as a guide for testing potentially therapeutic interventions in this model. Conditions or agents that demonstrate neuroprotective efficacy will then be used in other animal models, e.g., those simulating free-field blast exposure. Based on the results of these tests, one or more interventions could be translated to clinical trials and potentially applied to warfighter blast TBI victims. This unique model of warfare-relevant TBI could also be used to validate computational models of brain injury caused by acceleration and other forces associated with blasts. It could also be useful for testing advanced accelerometers and other sensors, whose responses could be directly compared to the presence or absence of brain injury. Finally, these experiments and associated computational models could eventually lead to improved designs of military vehicles that better protect against blast-induced neurologic injury.
ACKNOWLEDGMENT

Supported by an intramural collaborative research grant from the University of Maryland College Park and the University of Maryland School of Medicine.
REFERENCES
1. Okie S (2005) Traumatic brain injury in the war zone. N Engl J Med 352:2043-2047.
2. Hoge CW, McGurk D, Thomas JL, Cox AL, Engel CC, Castro CA (2008) Mild traumatic brain injury in U.S. soldiers returning from Iraq. N Engl J Med 358:453-463.
3. Cernak I, Radosevic P, Malicevic Z, Savic J (1995) Experimental magnesium depletion in adult rabbits caused by blast overpressure. Magnes Res 8:249-259.
4. Cernak I, Savic J, Malicevic Z, Zunic G, Radosevic P, Ivanovic I, Davidovic L (1996) Involvement of the central nervous system in the general response to pulmonary blast injury. J Trauma 40:S100-104.
IFMBE Proceedings Vol. 32
5. Li JS, Sun XQ, Wu XY, Rao ZR, Liu HL and Xie XP. (2002) Influences of repeated lower +Gz exposures on high +Gz exposure induced brain injury in rats. Space Med Med Eng 15: 339-342. 6. Cai Q, Liu HJ, Zhan Z, and Zhu MC. (2000) A study of apoptosis and related gene bcl-2 and p53 expression in hippocampus of rats exposed to repeated +Gz. Space Med Med Eng13: 263-266. 7. Hao JF, Wu B, Zhao DM, Ning ZY, Wang CW and You GX. (2004) A study on pathological changes of brain after high +Gx exposure in rhesus monkey. Space Med Med Eng 17: 171-175. 8. Guillaume AI, Osmont D, Gaffie D, Sarron JC, and Quandieu P. (2002) Physiological implications of mechanical effects of +Gz accelerations on brain structures. Aviat Space Environ Med 73: 171-177. 9. Wu B, Xie BS, You GX, Liu XH, Lu SQ and Huang WF. (2002) Effects of +Gx load on energy metabolism of brain tissue in rats. Space Med Med Eng 15: 406-409. 10. Cao XS, Sun XQ, Wu YH, Wu XY, Liu TS and Zhang S. (2005) Changes of hippocampus somatostatin and learning ability in rats after +Gz exposure. Space Med Med Eng 18: 79-83. 11. Fourney, W.L., Leiste, U., Bonenberger, R., and Goodings, D. (2005) Explosive impulse on plates, FRAGBLAST 9:1-17.
12. Fourney WL, Leiste U, Bonenberger RJ, Goodings D (2005) Mechanism of loading on plates due to explosive detonation. FRAGBLAST 9:205-217.
13. Long JB, Bentley TL, Wessner KA, Cerone C, Sweeney S, Bauman RA (2009) Blast overpressure in rats: recreating a battlefield injury in the laboratory. J Neurotrauma 26:827-840.
Corresponding author: Dr. Gary Fiskum
Institute: Univ. of Maryland School of Medicine, Dept. of Anesthesiology
Street: 685 W. Baltimore St.
City: Baltimore, MD
Country: USA
Email: [email protected]
Early Metabolic and Structural Changes in the Rat Brain Following Trauma in vivo Using MRI
S. Xu1,3, J. Zhuo1,3, J. Racz2, S. Roys1,3, D. Shi1,3, G. Fiskum2,3, and R. Gullapalli1,3
1 Department of Diagnostic Radiology and Nuclear Medicine, 2 Department of Anesthesiology and the Center for Shock Trauma and Anesthesiology Research (STAR), 3 Core for Translational Research in Imaging @ Maryland, University of Maryland School of Medicine, Baltimore, MD 21201, USA
Abstract— Traumatic brain injury (TBI) is characterized by acute physiological changes that may play a significant role in the final outcome for the patient. Understanding tissue alterations at an early stage following TBI is critical for injury management and prevention of more severe secondary damage. In this study we investigated early post-traumatic neurometabolic changes and changes in tissue water diffusion using proton magnetic resonance spectroscopy (1H MRS) and diffusion tensor imaging (DTI) following mild to moderate controlled cortical impact injury in six adult male Sprague-Dawley rats on a 7.0 Tesla animal MRI system. Significant reduction in N-acetylaspartate, glutamate, and choline was observed as early as 3 hours following injury. Lactate continued to increase in the ipsilateral hippocampus even at 5 hours, indicating increased demands for energy closer to the injury site. Such changes were not observed on the contralateral side at 5 hours. Decreased apparent diffusion coefficient and increased fractional anisotropy were observed among regions in close proximity to the impacted region (ipsilateral hippocampus and bilateral thalamus) immediately following TBI, with the ipsilateral hippocampus most affected, followed by the ipsilateral thalamus and contralateral thalamus. Remote regions such as the ipsilateral olfactory area were affected to a lesser degree. At the 4 hour time point a large inter-individual variation was observed, with an overall trend towards recovery in the ipsilateral hippocampus while the thalamus continued to experience significant changes. Combined information from MRS and DTI suggests a distance effect from the site of injury and the existence of a therapeutic window of about 2-4 hours to limit the cascade of events that may lead to secondary injury.

Keywords— magnetic resonance imaging, in vivo, proton magnetic resonance spectroscopy, diffusion tensor imaging, traumatic brain injury.

I. INTRODUCTION

Traumatic brain injury (TBI) occurs when an external mechanical force or pressure (as in the case of blast-related injury) traumatically injures the brain. The primary injury is characterized by acute biophysical, biochemical, and cellular changes that contribute to continuing neuronal damage and lead to permanent or temporary impairment of physical, cognitive, emotional, and behavioral functions (1-3). Experimental models of TBI provide a useful tool for understanding the very early cerebral metabolic changes induced by the damage. Previous in vivo proton magnetic resonance spectroscopy (1H MRS) studies indicated a time evolution of TBI (4, 5). Schuhmann et al (4) showed that total creatine (tCr), N-acetylaspartate (NAA), glutamate (Glu), and choline (Cho) concentrations significantly decreased during the first 24 hours, and then started to increase at 7 days. At the same time, lactate (Lac) increased and reached its peak at 7 days after TBI. Concurrent with metabolic changes, microstructural changes have also been observed, including axonal damage and demyelination (6-9). Such changes in the tissue microenvironment can be observed through the use of diffusion tensor imaging (DTI), which provides information on changes in water homeostasis in axons. In particular, parameters derived from DTI, including the apparent diffusion coefficient (ADC) and fractional anisotropy (FA), provide information on general water mobility within a tissue and on the preferential direction of water transport within the white matter tracts, respectively. Because early neurometabolic and biophysical changes may offer valuable information for clinical neuroprotective treatment, in the present study we investigate the post-traumatic neurometabolic changes at 2-4 hours and 4-6 hours after TBI following a focal controlled cortical impact injury in rats, using in vivo 1H MRS and DTI at 7 Tesla.

II. MATERIALS AND METHODS

A. TBI Model

Six adult male Sprague-Dawley rats (300-350 grams) were subjected to left parietal controlled cortical impact injury (10). After being anesthetized initially with 4% isoflurane, the rats were maintained at 2% isoflurane, and the left parietal bone was exposed via a midline incision in a stereotactic frame. A high-speed dental drill was used to perform a left-sided 5 mm craniotomy centered 3.5 mm posterior and 4 mm lateral to bregma. A 5 mm round impactor tip was accelerated to 5 m/sec with a vertical deformation depth of either 1.0 or 1.5 mm and an impact duration of 50 ms. The bone flap was immediately replaced with dental acrylic, and the scalp incision was closed with 3.0 silk. The experimental protocol was approved by the Committee for the Welfare of Laboratory Animals of the University of Maryland.
B. In Vivo 1H MRS & DTI

All experiments were performed on a Bruker Biospec 7.0 Tesla 30 cm horizontal-bore scanner using Paravision 5.0 software. A Bruker 1H surface coil array was used as the receiver and a Bruker 72 mm linear volume coil as the transmitter. Proton density-weighted MR images were taken using a 2D rapid acquisition with relaxation enhancement (RARE) sequence (TR/TE=5500/9.5 ms) for anatomic reference. A point-resolved spectroscopy (PRESS) pulse sequence (TR/TE=2500/20 ms) was used for data acquisition from a 3 x 3 x 3 mm3 voxel. The voxel covered the immediate pericontusional zone, all layers of the hippocampus, and superior thalamic structures. For each spectrum, 300 acquisitions were averaged for a total of 13 minutes. Spectroscopy data were acquired before injury (baseline), at 3 hours, and at 5 hours after injury. DTI images were obtained using a single-shot spin-echo EPI sequence (TR/TE = 6000/50 ms). The diffusion tensor was sampled in 30 gradient directions with a diffusion sensitivity of 1000 s/mm2 and two averages. DTI data were acquired before injury (baseline), at 2 hours, and at 4 hours after injury. At all times during the experiment, the animal was under 1-2% isoflurane anesthesia with 1 L/min oxygen administration. Respiration was monitored and the animal was maintained at 36-37 °C throughout the experiment. 1H MRS data were fitted using the LCModel package (11), and only metabolites with standard deviations (SD%) < 20 were included for further analysis. The in vivo mean metabolite concentrations relative to tCr at each time point were subjected to further analysis. DTI (ADC and FA) maps were generated using FDT (FMRIB's Diffusion Toolbox, Oxford, UK). Regions of interest (ROIs) were drawn manually using ImageJ v1.38x (Wayne Rasband, NIH, Bethesda, MD). Paired t-tests were used for statistical comparisons.
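The ADC and FA maps named above follow standard closed-form definitions: ADC is the mean of the three diffusion-tensor eigenvalues, and FA is their normalized dispersion. The following sketch is not part of the original study (the function name and example eigenvalues are ours); it only illustrates the quantities that FDT computes voxel-wise after fitting the tensor to the 30-direction data.

```python
import numpy as np

def adc_and_fa(eigvals):
    """ADC (mean diffusivity) and FA from the three eigenvalues
    of a fitted diffusion tensor (eigenvalues in mm^2/s)."""
    l1, l2, l3 = eigvals
    adc = (l1 + l2 + l3) / 3.0
    num = np.sqrt((l1 - adc)**2 + (l2 - adc)**2 + (l3 - adc)**2)
    den = np.sqrt(l1**2 + l2**2 + l3**2)
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return adc, fa

# Example: an anisotropic, white-matter-like tensor (values illustrative)
print(adc_and_fa((1.7e-3, 0.4e-3, 0.3e-3)))
```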
III. RESULTS

The insets in Fig. 1 (a and b) show the voxels from which spectra were obtained, overlaid on coronal proton density-weighted images from the two regions of the brain. The corresponding spectra obtained 3 hours after TBI are shown for these regions in Fig. 1 c and d, respectively. The in vivo 1H spectra demonstrate good spectral resolution and sensitivity on both the pericontusional side and the contralateral side. Among the several metabolite ratios, NAA/tCr, Glu/tCr, and Cho/tCr demonstrated significant changes over the five hours following injury, as shown in Fig. 2.

Fig. 1 Localized in vivo 1H spectra and corresponding anatomic images of a TBI rat at 3 hours after injury. a and b, MR images of the pericontusional and contralateral sides; the boxes indicate the spectroscopy voxels (3x3x3 mm3). c and d, MR spectra from the pericontusional and contralateral voxels: γ-aminobutyric acid (GABA), creatine (Cr), glutamate (Glu), glutamine (Gln), glycerophosphorylcholine (GPC), lactate (Lac), myo-inositol (Ins), N-acetylaspartate (NAA), phosphocreatine (PCr), phosphorylcholine (PCh), and taurine (Tau). M1 and M2 are macromolecules
No statistically significant differences were found in glutamine, myo-inositol, or taurine concentrations among the three time points in either the pericontusional voxel or the contralateral voxel. There was also no significant difference in these metabolite ratios between the two sides. Significant reductions in NAA of 32% and 33% were observed in the pericontusional voxel at 3 hours and 5 hours after TBI, respectively, compared to baseline. Although the contralateral voxel also exhibited a significant reduction in NAA, this reduction was much smaller than on the pericontusional side. Although lower, the concentration of NAA at the 5 hour time point was not significantly different from the 3 hour time point. In addition to NAA, our results showed that Glu significantly decreased at 3 hours after TBI in the pericontusional voxel, compared to the baseline
(0.922 ± 0.137 vs. 1.155 ± 0.202, p<0.03) and the contralateral side (0.922 ± 0.137 vs. 1.12 ± 0.08, p<0.04). As with NAA, we did not observe a significant difference between 3 hours and 5 hours in the Glu level on the pericontusional side. Cho in the pericontusional voxel was significantly lower than on the contralateral side (0.166 ± 0.014 vs. 0.179 ± 0.013, p<0.05) at 3 hours after TBI. Although the Lac signal was undetectable at baseline, varying levels (0.115 - 2.098) of increased Lac signal intensity were observed in the pericontusional voxel at 3 hours and 5 hours after the injury, but not in the corresponding contralateral voxel. Fig. 3 shows proton density-weighted axial and coronal slices (a and b) and FA and ADC maps (c) of a representative rat after injury. Fig. 4 shows average ADC and FA values from the six ROIs shown in Fig. 3 at 2 hours and 4 hours after injury. At 2 hours after injury, ADC was significantly reduced and FA increased (p<0.05) in the ipsilateral hippocampus. FA was also increased (p<0.05) in the ipsilateral thalamus. Although non-significant, a trend for ADC reduction was observed bilaterally in the thalamus and in the ipsilateral olfactory region (p<0.1). For FA, an increasing trend (p<0.1) was also observed for the contralateral hippocampus, while a decreasing trend (p<0.1) was observed in the olfactory region. At 4 hours after injury, a significant increase in FA (p<0.05) was observed in the ipsilateral hippocampus and the thalamus.
An increasing trend (p<0.1) in FA was observed in the contralateral hippocampus and thalamus. Although DTI metrics remained abnormal relative to baseline in both the hippocampus and the thalamus at the 4 hour time point, this outcome was considerably more variable across individuals than at the 2 hour time point.
Fig. 3 Images of a representative rat at 2 hours after TBI showing the extent of the injury (yellow contour). (a) T2-weighted axial slice. (b) T2-weighted coronal slices showing the placement of the ROIs (blue contour). (c) Coronal FA and ADC maps
Fig. 4 Regional ADC and FA values at 2 hours and 4 hours after TBI for the ipsilateral (hip_ips) and contralateral (hip_con) hippocampus, ipsilateral (tha_ips) and contralateral (tha_con) thalamus, and ipsilateral (of_ips) and contralateral (of_con) olfactory region. Error bars show standard error (standard deviation/√sample size). (* indicates p < 0.05; # indicates p < 0.1)
Fig. 2 Neuro-metabolic (NAA, Glu, and Cho) levels in the TBI rat before injury (baseline), at 3 hours, and at 5 hours after injury (* indicates p<0.05)

IV. CONCLUSIONS
This study shows that there exists a temporal window of brain vulnerability after TBI in the rat, which is in line with previous studies (4). Furthermore, our investigation demonstrates that the neurometabolic changes following TBI associated with NAA, Glu, and Cho may be most pronounced as early as three hours after the injury. We found decreased ADC and increased FA in regions in close proximity to the impacted region (ipsilateral hippocampus and bilateral thalamus) immediately following TBI, with the ipsilateral hippocampus most affected, followed by the ipsilateral thalamus and contralateral thalamus. Remote regions such as the ipsilateral olfactory area were affected to a lesser degree.
A reduction in ADC with a concomitant increase in FA suggests that regions near the site of injury experience cytotoxic edema, which may result from a disruption of water homeostasis caused by the breakdown of the Na-ATP pumps that regulate water in the cells. The increase in FA reflects increased directional mobility of water and may result from a mass effect of the tissue at the region of impact, where the white matter tracts are compacted together in a manner that limits mobility perpendicular to the fibers. However, at the 4 hour time point, a large inter-individual variation was observed, with an overall trend towards recovery of ADC in the hippocampus and the olfactory region, while the thalamus was still worsening significantly. The tendency towards normalcy at the 4-6 hour point in the DTI data, and the indication of no further significant damage at the 4-6 hour point in the spectroscopy data, indicate a temporal window of about 3 hours for planning interventions that might limit secondary damage to the brain. Further studies with histological correlation are needed to confirm the imaging findings.
ACKNOWLEDGMENT

This study was supported by the National Institutes of Health (award #1S10RR019935).
REFERENCES
1. Lenzlinger PM, Morganti-Kossmann MC, Laurer HL, McIntosh TK (2001) The duality of the inflammatory response to traumatic brain injury. Mol Neurobiol 24:169-181.
2. Morganti-Kossmann MC, Rancan M, Stahel PF, Kossmann T (2002) Inflammatory response in acute traumatic brain injury: a double-edged sword. Curr Opin Crit Care 8:101-105.
3. Zhang X, Chen Y, Jenkins LW, Kochanek PM, Clark RS (2005) Bench-to-bedside review: Apoptosis/programmed cell death triggered by traumatic brain injury. Crit Care 9:66-75.
4. Schuhmann MU, Stiller D, Skardelly M, Bernarding J, Klinge PM, Samii A, Samii M, Brinker T (2003) Metabolic changes in the vicinity of brain contusions: a proton magnetic resonance spectroscopy and histology study. J Neurotrauma 20:725-743.
5. Vagnozzi R, Tavazzi B, Signoretti S, Amorini AM, Belli A, Cimatti M, Delfini R, Di Pietro V, Finocchiaro A, Lazzarino G (2007) Temporal window of metabolic brain vulnerability to concussions: mitochondrial-related impairment--part I. Neurosurgery 61:379-389.
6. Alsop DC, Murai H, Detre JA, McIntosh TK, Smith DH (1996) Detection of acute pathologic changes following experimental traumatic brain injury using diffusion-weighted magnetic resonance imaging. J Neurotrauma 13:515-521.
7. Arfanakis K, Haughton VM, Carew JD, Rogers BP, Dempsey RJ, Meyerand ME (2002) Diffusion tensor MR imaging in diffuse axonal injury. AJNR Am J Neuroradiol 23:794-802.
8. Huisman TA, Sorensen AG, Hergan K, Gonzalez RG, Schaefer PW (2003) Diffusion-weighted imaging for the evaluation of diffuse axonal injury in closed head injury. J Comput Assist Tomogr 27:5-11.
9. Mamere AE, Saraiva LAL, Matos ALM, Carneiro AAO, Santos AC (2009) Evaluation of delayed neuronal and axonal damage secondary to moderate and severe traumatic brain injury using quantitative MR imaging techniques. AJNR Am J Neuroradiol 30:947-952.
10. Robertson CL, Puskar A, Hoffman GE, Murphy AZ, Saraswati M, Fiskum G (2006) Physiologic progesterone reduces mitochondrial dysfunction and hippocampal cell loss after traumatic brain injury in female rats. Exp Neurol 197:235-243.
11. Provencher SW (2001) Automatic quantitation of localized in vivo 1H spectra with LCModel. NMR Biomed 14:260-264.
Principal Components of Brain Deformation in Response to Skull Acceleration: The Roles of Sliding and Tethering between the Brain and Skull
Teresa M. Abney1, Y. Aaron Feng1, Robert Pless2, Ruth J. Okamoto1, Guy M. Genin1, and Philip V. Bayly1,3
1 Department of Mechanical, Aerospace and Structural Engineering, 2 Department of Computer Science and Engineering, and 3 Department of Biomedical Engineering, Washington University in St. Louis, Saint Louis, USA
Abstract— The relationship between skull acceleration and brain injury is not well understood, in large part because of the challenge of visualizing the brain’s mechanical response in vivo. This difficulty also complicates the validation of computational mechanics predictions. Our dynamic magnetic resonance (MR) imaging suggests an important role for the attachments between brain and skull. Here, we present an MRI-based method for identifying the dominant modes of brain displacement relative to the skull during angular acceleration of the head, and apply it to study brain/skull interactions in live volunteers. The approach was to estimate dynamic intracranial displacement fields from a sequence of tagged MR images of the brain and skull, then identify dominant displacement modes using principal component (PC) analysis. After verifying the method through analysis of a simulated 2-D vibrating plate and MR images of a cylindrical gel phantom, the method was applied to show that the dominant mode of brain/skull interaction is one of sliding arrested by brain/skull meninges in a few specific regions. Keywords— Tagged MR imaging, TBI, skull-brain interactions, principal component analysis.
I. INTRODUCTION What happens to the brain when the skull accelerates? This question is central to understanding the most common forms of mild traumatic brain injury, but it is still not completely answered (Bailey and Gudeman 1989; Ommaya et al. 2002; Spaethling et al., 2007). While computer models with high anatomic accuracy promise insight into the brain’s response to skull acceleration (Zhang et al. 2004; Levchakov et al., 2006; Cloots et al., 2008; Takhounts et al., 2008; Ji et al., 2009), validation of these models and identification of appropriate brain/skull boundary conditions are challenging because of difficulties in measuring accurate dynamic displacement fields in the human brain in vivo (e.g. Ji et al., 2009). Our focus in this article is extraction of quantitative, statistical information on the dynamic displacement of the brain relative to the skull in human volunteers during angular head acceleration.
Four classes of experimental data exist from which brain/skull mechanical interactions can be estimated. The first class consists of quasi-static data obtained during image-guided neurosurgery (e.g., Ji et al., 2009). These data provide estimates of boundary conditions, obtained through solution of inverse problems, that are combined with a computational model to update three-dimensional maps of the brain as the brain is manipulated during surgery. While these data are useful for estimating effects of distant boundaries on displacement of a tumor mass during surgery, they are not optimized for the distinct task of predicting dynamic effects of brain/skull interactions. The second class of data is obtained through the bi-planar X-ray approach of Hardy et al. (Hardy et al., 2001; Hardy et al., 2007), in which neutrally buoyant markers embedded in a cadaver head are tracked during high-speed impact between the cadaver head and a relatively rigid surface. Some insight into brain/skull boundary conditions can be gained from these data: Zou et al. (2007) observed that the markers displaced as if connected to a rigid body when observed during lower levels of acceleration, and as if connected to a deformable body at higher levels of acceleration. While the observation of brain sliding relative to the skull in vivo is supported by magnetic resonance (MR) imaging observations of the brain's responses to both mild acceleration and quasi-static deformations (Ji et al., 2004; Bayly et al., 2005; Ji & Margulies, 2007; Sabet et al., 2008), care must be taken when extrapolating data from cadavers to living humans. These data have been applied to validate computational models of the human head, but the sparse distribution of markers limits quantification of boundary conditions to uniform, averaged relations (Kleiven & Hardy, 2002). The third class of data provides indirect measurements of important inputs to computational models of brain biomechanics through magnetic resonance elastography. In these techniques, micron-amplitude shear waves are imaged, and tissue mechanical properties can be estimated from the spatial-temporal displacement data (Kruse et al., 2008).
However, these methods have not yet been exploited to identify brain/skull boundary conditions or to quantify the brain's structural response to these vibrational loads. The fourth approach is the tagged MR approach of Bayly et al. (Bayly et al., 2004; Bayly et al., 2005; Sabet et al., 2008). In this approach, displacement fields within the brains of human volunteers are tracked as the volunteers move their heads inside the core of an MR scanner. Displacement fields are interpreted through a modified version of the harmonic phase (HARP) algorithm of Osman et al. (2000) (Bayly et al., 2004), providing accurate estimates of dynamic strain fields (Bayly et al., 2008). Comparison of intracranial strain fields to solutions applying prescribed displacement boundary conditions (e.g. Massouros and Genin, 2008) suggests that regions of both fixation and sliding exist at the boundary between the brain and skull. However, the approach has not been used to obtain displacement fields, because brain displacement is not meaningful unless it is described relative to the skull. Estimation of the motion of the skull from MR data sets presents an additional challenge, since tagged MR images contain relatively little contrast in bony skull tissue, and adjacent soft tissue (scalp) moves relative to the skull. In this article, we identify the dominant modes of brain displacement relative to the skull through a method based upon principal component analysis of tagged MR images. Our method involves an algorithm for aligning a series of sequential dynamic MR images to determine the rigid body motion of the skull, a HARP-based analysis to identify displacement fields in the brain relative to the skull, and principal component analysis to identify the dominant modes of displacement.
II. METHODS

A. Analytical Methods

Three steps were involved in converting sequences of dynamic MR images into PCs of displacement fields: (1) identification of rigid body translations and rotations of the skull, (2) identification of displacement fields within the images, and (3) identification of the PCs of the displacement fields.

B. Image Alignment

In sequences of images for which the reference body (skull) underwent rigid body motion relative to the image frame of reference (the MR scanner), rigid body motion of the skull was estimated using an autocorrelation technique we developed, and then subtracted before subsequent displacement estimation.
C. Estimation of Displacement Fields

Temporary magnetic "tag lines" were imposed on MR images of a gel cylinder and of three human volunteers by applying RF pulses in combination with magnetic gradients. These were analyzed to estimate displacement fields. Phase contours of the dominant spatial frequencies in the tagging pattern were tracked using a modification of the harmonic phase (HARP) method (Osman et al., 2000) by Bayly et al. (2004).

D. Principal Component (PC) Analysis

PCs of the estimated displacement vectors were calculated for each of the time frames and at each of the intersection points of tag lines. Singular value decomposition was performed. The full matrix of displacements at all times and locations, $Q$, was represented as $Q = U \lambda \phi$, where $U$ is a coefficient matrix, $\phi$ is a matrix of eigenvectors, and $\lambda$ is a diagonal matrix of eigenvalues. Each displacement array $Q_k$ was written as the weighted sum of eigenvectors at each time frame $k$:

$$Q_k = \sum_{j=1}^{K} a_j^k \phi_j$$

where the modal coefficient $a_j^k = a_j(t_k)$ is the temporal variation representing the contribution of PC $\phi_j$ at each of the $K$ time frames.

E. Experimental Methods

Displacement fields were estimated from dynamic MR images of a viscoelastic gelatin cylinder subjected to an angular deceleration pulse, and of the heads of human volunteers undergoing similar turning deceleration about the axis of the spine. The MR images were acquired as described elsewhere (Bayly et al., 2008). Briefly, a gelatin MR phantom was prepared in a relatively rigid cylindrical container (inner diameter 56 mm) and placed in an MR-compatible rotation device that imparted a prescribed, repeatable rotation to the specimen that was stopped by a repeatable mild impact. Motion of the cylinder triggered a fast gradient-echo MR imaging sequence (FLASH 2D) in a 1.5T MR scanner (Sonata, Siemens, Malvern, PA). The sequence superimposed "tag lines" over the normal MR image of the cylinder. These sinusoidal variations in brightness, which move with material points in the cylinder, served as non-invasive markers for tracking motion. For human experiments, the mechanical device used to rotate the gelatin cylinder was adapted to hold a volunteer's head and apply a prescribed, repeatable rotation and stopping force. All protocols were approved by the Washington University Human Research Protection Office.
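As a concrete illustration of the decomposition Q = Uλφ described above, the following sketch (not from the original paper; the array shapes, names, and synthetic data are our assumptions) recovers the PC shapes, the modal coefficients a_j(t_k), and the fraction of variance captured by each PC via singular value decomposition.

```python
import numpy as np

def principal_components(Q):
    """Decompose Q (K time frames x M displacement samples) as in
    Q = U*lam*phi: rows of phi are spatial PC shapes, and a[k, j]
    is the modal coefficient of PC j at time frame k."""
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    phi = Vt                        # PC (eigenvector) shapes
    a = U * s                       # modal coefficients a_j(t_k)
    var_explained = s**2 / np.sum(s**2)
    return a, phi, var_explained

# Synthetic stand-in: 20 time frames, 500 tag intersections x 2 components
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 1000))
a, phi, var = principal_components(Q)
print(var[:4])      # fraction of variance in the first four PCs
```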
Fig. 1 (A) Quiver plots for the first four principal components (PCs) of the gel cylinder. Plots are scaled differently to highlight detail; magnitude differs between PCs. (B) Associated Lagrangian strains
Fig. 3 (A) Rigid body motion of the skull was removed prior to PC analysis (Top: original images and Bottom: rotated images)
III. RESULTS

The method was applied to experimental MR observations of a rotating viscoelastic (gelatin) cylinder. Rigid body motion was subtracted to emphasize displacement of the gelatin relative to the cylindrical container. Estimated displacement fields from MR images of the rotating cylinder were then analyzed using PC analysis (Figure 1A). Strain fields for the first four PCs (Figure 1B) showed intuitive distributions, with shear strain fields in radial coordinates resembling the Bessel functions expected in a linear viscoelastic solution to this problem (Bayly et al., 2008). Analysis of the temporal variation in amplitudes of the PCs indicated that in cases of degenerate modes (similar frequency of oscillation, Figure 2A), the expected modal information could appear mixed between modes. Superposition of degenerate PCs 2 and 3 yielded displacement patterns analogous to the Bessel functions expected in the analytical solution for this problem (Figure 2B).
Fig. 2 (A) Modal coefficients of principal components 2 and 3 overlaid to show their similar temporal variation. (B) Linear combinations of principal components 2 and 3 (the difference, left, and sum, right) yield intuitive radial patterns
Rigid body translation and rotation of the skull were subtracted from images of the deforming human brain (Figure 3) before PCs were analyzed (Figure 4A). Strain fields corresponding to the first four PCs differed fundamentally from those observed in the gel cylinder (Figure 4B).
Fig. 4 (A) Quiver plots of the first four PCs of the mechanical response of the brain to a rotational deceleration (shown from left to right). Plots are scaled independently to highlight detail within each of the PCs. (B) Lagrangian strain fields corresponding to each of the first four PCs of the displacement field
IV. CONCLUSIONS

PC analysis shows that the dominant mode of displacement was one of rotation of the brain relative to the skull for the individuals tested. The first PC of brain displacement differed from that of the gel cylinder in two important ways: first, the brain clearly slides relative to the skull, while the gel did not slide relative to its outer shell; second, the brain's displacement was non-symmetric, with the anterior and posterior regions having significantly different contributions to the first mode, while the gel exhibited axisymmetry. Although the method is primarily intended for computing displacement fields, the strain fields associated with the PCs of the displacement provide some insight as well. Clearly, strain fields associated with the brain's PCs of displacement do not resemble the simple Bessel functions observed in the gel cylinder, indicating a central role of boundary conditions and/or internal structures in the brain's mechanical response.
Alternating tensile and compressive components in the strain field associated with the first PC are suggestive of effects of internal meninges. With boundary conditions taken into consideration, these results suggest strongly that anatomical features like meninges and vessels may be important factors in the brain's low-acceleration mechanical response, and thus in mild traumatic brain injury. The model problem studied highlights limitations of the method. In cases of degenerate modes, the associated PCs bleed into one another. Additionally, at the outermost boundaries of the cylinder images, small amounts of slipping appear in PC2 and PC3 (Figure 1). This highlights the importance of considering amplitudes in addition to shapes of PCs: since the first PC accounted for ~80% of the variance in the dataset, the slipping apparent in the higher PCs was of small magnitude. This magnitude provides an estimate of the error expected when using the method.
ACKNOWLEDGMENT

Financial support was provided by NIH RO1 NS055951 and by a National Science Foundation Fellowship to TMA. We are grateful for the technical assistance of Richard Nagel in acquiring tagged MR images.
REFERENCES

Bailey BN, Gudeman SK (1989) Minor head injury. In: Becker DP, Gudeman SK (eds) Textbook of Head Injury. Saunders, Philadelphia, pp 308-318.
Bayly PV, Ji S, Song SK, Okamoto RJ, Massouros P, Genin GM (2004) Measurement of strain in physical models of brain injury: a method based on HARP analysis of tagged magnetic resonance images. J Biomech Eng 126:523-528.
Bayly PV, Cohen TS, Leister EP, Ajo D, Leuthardt EC, Genin GM (2005) Deformation of the human brain induced by mild acceleration. J Neurotrauma 22:845-856.
Bayly PV, Massouros PG, Christoforou E, Sabet A, Genin GM (2008) Magnetic resonance measurement of transient shear wave propagation in a viscoelastic gel cylinder. J Mech Phys Solids 56:2036-2049.
Cloots RJ, Gervaise HM, van Dommelen JA, Geers MG (2008) Biomechanics of traumatic brain injury: influences of the morphologic heterogeneities of the cerebral cortex. Ann Biomed Eng 36:1203-1215.
Hardy WN, Foster CD, Mason MJ, Yang KH, King AI, Tashman S (2001) Investigation of head injury mechanisms using neutral density technology and high-speed biplanar X-ray. Stapp Car Crash J 45:337-368.
Hardy WN, Mason MJ, Foster CD, Shah CS, Kopacz JM, Yang KH, King AI, Bishop J, Bey M, Anderst W, Tashman S (2007) A study of the response of the human cadaver head to impact. Stapp Car Crash J 51:17-80.
Ji S, Zhu Q, Dougherty L, Margulies SS (2004) In vivo measurements of human brain displacement. Stapp Car Crash J 48:1-12.
Ji S, Margulies SS (2007) In vivo pons motion within the skull. J Biomech 40:92-99.
Ji S, Roberts DW, Hartov A, Paulsen KD (2009) Brain-skull contact boundary conditions in an inverse computational deformation model. Med Image Anal 13:659-672.
Kleiven S, Hardy WN (2002) Correlation of an FE model of the human head with local brain motion--consequences for injury prediction. Stapp Car Crash J 46:123-144.
Kruse SA, Rose GH, Glaser KJ, Manduca A, Felmlee JP, Jack CR Jr, Ehman RL (2008) Magnetic resonance elastography of the brain. NeuroImage 39:231-237.
Levchakov A, Linder-Ganz E, Raghupathi R, Margulies SS, Gefen A (2006) Computational studies of strain exposures in neonate and mature rat brains during closed head impact. J Neurotrauma 23:1570-1580.
Ommaya AK, Goldsmith W, Thibault L (2002) Biomechanics and neuropathology of adult and paediatric head injury. Br J Neurosurg 16:220-242.
Sabet AA, Christoforou E, Atlin B, Genin GM, Bayly PV (2008) Deformation of the human brain induced by mild angular head acceleration. J Biomech 41:307-315.
Spaethling JM, Geddes-Klein DM, Miller WJ, von Reyn CR, Singh P, Mesfin M, Bernstein SJ, Meaney DF (2007) Linking impact to cellular and molecular sequelae of CNS injury: modeling in vivo complexity with in vitro simplicity. Prog Brain Res 161:27-39.
Takhounts EG, Ridella SA, Hasija V, Tannous RE, Campbell JQ, Malone D, Danelson K, Stitzel J, Rowson S, Duma S (2008) Investigation of traumatic brain injuries using the next generation of simulated injury monitor finite element head model. Stapp Car Crash J 52:1-31.
Zhang L, Yang KH, King AI (2004) A proposed injury threshold for mild traumatic brain injury. J Biomech Eng 126(2):226-236.
Zou H, Schmiedeler JP, Hardy WN (2007) Separating brain motion into rigid body displacement and deformation under low-severity impacts. J Biomech 40:1183-1191.
Author: Teresa M. Abney
Institute: Washington University in St. Louis
Street: 1 Brookings Drive
City: Saint Louis
Country: USA
Email: [email protected]
Investigations into Wave Propagation in Soft Tissue
M.F. Valdez and B. Balachandran
Department of Mechanical Engineering, University of Maryland, College Park, MD 20742

Abstract–– In this article, the authors investigate wave propagation in soft-tissue matter with the aim of understanding the following: i) the influence of nonlinear material properties of soft tissue on its mechanical response when subjected to transient loading and ii) the variation of the mechanical response with respect to the frequency and amplitude of the loading. To aid these investigations, reduced-order models are constructed taking into account available experimentally determined brain tissue properties, and these models are used to study one-dimensional wave propagation in brain tissue fiber bundles. These investigations could help in furthering the understanding of wave phenomena in a skull-brain system subjected to transient loadings.

Keywords— Brain fibers, nonlinear visco-elastic material, wave propagation.
I. INTRODUCTION

Traumatic brain injury (TBI) is a physical injury to the brain, which may be caused by penetrating and non-penetrating impacts. The latter include those associated with automobile accidents, falls, sports injuries, and blasts from explosive devices. In the combat arena, blast-induced traumatic brain injury (bTBI) may result from a head being impacted by high-speed fragments, impacts of the head with a rigid surface after the body has been launched by a blast, impacts of the brain with skull structures following a blast, and direct exposure of the head to a blast. Although brain injury mechanisms associated with direct (collision) and indirect (those due to sudden acceleration/deceleration) impacts have been widely studied [1, 2], little is known about the inherent injury mechanism associated with bTBI. Blast loading transients involve time scales of the order of 10^-3 to 1 msec, depending on the intensity of the explosion [3]. Common head impacts are characterized by time scales of about 10 to 16 msec [4]. The threshold for mild TBI, based on sports injury data, is roughly 50 g acceleration. On the other hand, the differential pressure acting on the human body due to a blast wave traveling at sonic speed can produce accelerations up to 300 g. Strain rates ranging from 15 sec-1 to 21 sec-1 have been associated with the occurrence of axonal injuries. With the much shorter durations of blast loadings, higher strain rates are to be expected.
II. BRAIN INJURY MECHANISMS AND RELATED MODELING

Many theories of brain concussion due to direct or indirect impact have been proposed. The so-called "shear strain theory" [5] and the "cavitation theory" [6] are well known in the field. Several studies [7] suggest that low-speed impacts activate brain injury mechanisms based on relative motions between the skull and the brain; such relative motions form the basis for the cavitation and shear strain theories, which belong to the first group of mechanisms. On the other hand, at high impact speeds, brain deformation is expected to play a primary role, and the injury mechanisms associated with deformations belong to the second group. Blast wave loadings belong to the second group, as short time scales characterize them. Hence, the classical theories of brain concussion fail to explain brain injury mechanisms associated with strain and stress wave propagation inside the brain, which can occur in the absence of relative motions between the skull and brain. Though research on blast-related brain injury has seen a recent spurt, the study of mild traumatic brain injury caused by direct and indirect impacts started over 60 years ago. Several models have been developed to isolate and understand the physics of the phenomena. Usually, in these models, a small number of variables and parameters are included and simplified geometries are considered, so that closed-form solutions and/or relatively simple numerical solutions can be obtained. Such model studies allow one to understand particular aspects of a phenomenon in isolation. For example, models have been developed to study the coup-contra-coup phenomenon and the influence of the cerebrospinal fluid (CSF) [8]. If one were to consider additional details, such as geometrical details of the skull, internal membranes, nonlinear material properties, the CSF, and so on, the modeling complexity increases, and a common recourse has been finite element modeling. In studies of blunt impacts to the head, models ranging from two-dimensional finite element models to anatomically detailed finite element models have been developed, with details including those for the scalp, three-layered skull, CSF, interior membranes, and brain. The different finite element models developed have helped confirm some of the mechanisms proposed by classical theories and further the investigation of other related aspects.
As an example, the coup-contra-coup phenomenon (e.g., [9, 10]) is cited. The presence of different energy transmission paths in the skull-brain-neck system has been assessed [11]. Finally, the effects of protective devices such as helmets have also been studied [12, 13]. Recently, Moss et al. [14] proposed skull deformation as a novel mechanism associated with blast-related brain injuries. Nyein et al. [15] concluded that direct interaction and propagation of the shock wave through the skull to the brain is possible. Models that have been developed to study injuries due to interactions between blast waves and the skull-brain system have been limited. However, many of the finite element models developed for studying head impacts can be extended to simulate blast wave interactions with the human head, provided appropriate fluid models, structure models, and fluid-structure interaction schemes are incorporated. The present effort is aimed at helping develop models for understanding the response of the skull-brain system subjected to a blast loading.
III. NONLINEAR VISCO-ELASTIC MODEL FOR BRAIN TISSUE CHARACTERIZATION
Experiments with swine and rat brain tissue in tension, compression, and shear deformation reveal that brain tissue behaves as a nonlinear visco-elastic material [16, 17]. However, the strain rates (about 0.65 sec-1 and lower) at which these experiments have been conducted are usually much lower than those expected due to a blast loading. Brain tissue characterization at large strain rates requires further work. The nonlinearity and heterogeneity of brain tissue can potentially lead to energy localization in the brain matter and, consequently, to brain injury. A nonlinear visco-elastic material behaves as a nonlinear elastic (hyper-elastic) material at both very slow and very fast deformation rates, but with different properties in each case. For intermediate deformation rates, on the other hand, the material exhibits rate-dependent, hysteretic behavior [18]. It is remarked that the type of material properties determined from experiments depends on the type of model used to determine them; in other words, the material moduli determined through experiments are not intrinsic properties but rather model-based properties. This is true of the current work as well. In this work, the authors present a one-dimensional nonlinear visco-elastic model to investigate wave propagation phenomena in a longitudinal brain fiber group. The visco-elastic model is based on the generalized Maxwell model; that is, it consists of a nonlinear spring in parallel with a certain number of Maxwell modes, each composed, in turn, of a nonlinear spring in series with a dashpot, as illustrated in Fig. 1. In this model, following previous studies ([16]), the incompressible hyper-elastic Mooney-Rivlin constitutive law is considered for describing the nonlinear springs. The dashpot behavior is assumed to be linear in the strain rate. The variables γ1, γ2, …, γN are internal variables related to the mechanism of energy dissipation; σ1, σ2, …, σN are the stresses in the corresponding Maxwell modes, and σ∞ is the stress in the nonlinear spring in parallel. The details of the formulation are provided in the authors' recent work [19]. In order to obtain the model parameters, the governing equations of the visco-elastic model are fitted to the available experimental data (data from compression experiments on swine brain tissue [17]). The model identification is carried out using the nonlinear ODE parameter estimation tool (grey-box) from the Matlab® System Identification Toolbox. For simplicity, only one Maxwell mode is used in the visco-elastic model. The comparisons between the experimental data and the predictions of the visco-elastic model with the estimated parameters are shown in Fig. 2 and Fig. 3 for strain rates of 0.64 sec-1 and 0.0064 sec-1, respectively.
Fig. 1 General nonlinear Maxwell’s one-dimensional visco-elastic model
Fig. 2 Comparison between experimental data and predictions of the visco-elastic model with estimated parameters for strain rate of 0.64 sec-1
Fig. 3 Comparison between experimental data and predictions of the visco-elastic model with estimated parameters for strain rate of 0.0064 sec-1
It is noted that the fitted models produce a reasonable estimate of the experimentally observed brain tissue mechanical behavior. However, an important limitation of the model is that its material parameters do not explicitly depend on the strain rate. Therefore, a different set of model parameters is obtained for each strain rate considered. The extension of the model to include explicit dependence on the strain rate is to be considered in future work.
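The identification above was carried out with the grey-box estimator in the Matlab® System Identification Toolbox and Mooney-Rivlin springs. As a rough illustration of the same fitting idea, the sketch below uses a linearized one-mode Maxwell model (an equilibrium spring in parallel with one spring-dashpot branch) under constant strain rate; the parameter names and the synthetic data standing in for the swine-brain measurements are our assumptions, not the authors' values.

```python
import numpy as np
from scipy.optimize import least_squares

# Linearized one-mode Maxwell model at constant strain rate edot:
#   sigma(t) = E_inf*edot*t + eta*edot*(1 - exp(-E1*t/eta))
# (equilibrium spring E_inf; Maxwell branch: spring E1 + dashpot eta)

def model_stress(t, E_inf, E1, eta, edot):
    return E_inf * edot * t + eta * edot * (1.0 - np.exp(-E1 * t / eta))

def fit_maxwell(t, sigma_meas, edot):
    resid = lambda p: model_stress(t, *p, edot) - sigma_meas
    p0 = np.array([1e3, 1e3, 1e2])          # initial guess (Pa, Pa, Pa*s)
    return least_squares(resid, p0, bounds=(0.0, np.inf)).x

# Synthetic data in place of the compression measurements of [17]
edot = 0.64                                  # strain rate, 1/s
t = np.linspace(0.01, 0.5, 50)               # s
sigma = model_stress(t, 800.0, 1200.0, 150.0, edot)
sigma += np.random.default_rng(1).normal(0.0, 5.0, t.size)
print(fit_maxwell(t, sigma, edot))           # recovers ~(800, 1200, 150)
```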
IV. WAVE PROPAGATION IN A BRAIN FIBER

As previously stated, the dynamic behavior of the human brain subjected to a blast loading is likely to be dominated by tissue deformation and stress and strain wave propagation rather than by relative rigid body translation and rotation. Stress and strain waves propagate along certain preferred pathways, and the authors argue that the longitudinal brain fiber groups in the brain may serve as wave propagation pathways. Furthermore, it is argued that the nonlinearity of the brain tissue behavior is likely to affect the wave propagation, leading to regions where energy concentrations occur, resulting in high stress and strain levels. This situation can represent a potential cause of injury. Brain fiber networks are schematically shown in Fig. 4. In order to study wave propagation in the brain tissue matter, a collection of brain fibers is modeled as a continuous longitudinal nonlinear rod.

Fig. 4 Modeling approach to study wave propagation in brain fiber groups and energy localization

A. Nonlinear Hyper-Elastic Rod Study

It can be shown that the equation governing longitudinal wave propagation in nonlinear hyper-elastic rods is given by

$$\rho \frac{\partial^2 u}{\partial t^2} = u_{XX}\left[\frac{\partial^2}{\partial \lambda^2}\Psi(\lambda)\right] \qquad (1)$$

where $\Psi(\lambda)$ is a strain-energy function, $\lambda = 1 + \partial u/\partial X$ is the principal stretch, and $u$ is the displacement. For a nonlinear, incompressible Mooney-Rivlin material in a uni-axial stress state, $\Psi(\lambda)$ takes the form

$$\Psi(\lambda) = c_1\left(\lambda^2 + 2\lambda^{-1} - 3\right) + c_2\left(2\lambda + \lambda^{-2} - 3\right) \qquad (2)$$

For this constitutive equation, the governing equation becomes

$$\rho \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial X^2}\left[2c_1\left(1 + 2\left(\frac{\partial u}{\partial X} + 1\right)^{-3}\right) + 6c_2\left(\frac{\partial u}{\partial X} + 1\right)^{-4}\right] \qquad (3)$$

Recalling that the strain is defined as $e = \partial u/\partial X$, and expanding in a Taylor series for $|u| \ll 1$, we obtain

$$\rho \frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial X^2}\left[6(c_1 + c_2) - 12(c_1 + 2c_2)\frac{\partial u}{\partial X}\right] + \mathrm{H.O.T.} \qquad (4)$$

where H.O.T. stands for higher-order terms. Eq. (4) can be rewritten as

$$\frac{\partial^2 u}{\partial t^2} = c_0^2\, \frac{\partial^2 u}{\partial X^2}\left[1 + 2E_1 \frac{\partial u}{\partial X}\right] \qquad (5)$$

where $c_0^2 = 6(c_1 + c_2)/\rho$ and $E_1 = -(c_1 + 2c_2)/(c_1 + c_2)$. Eq. (5) can be transformed into two first-order partial differential equations in terms of $v = \partial u/\partial t$ and $e = \partial u/\partial X$. Defining $t = \tau L/c_0$ and $X = \xi L$, the non-dimensional version of the equations in terms of the strain and the velocity is

$$\frac{\partial v}{\partial \tau} = \frac{\partial e}{\partial \xi}\left[1 + \frac{2E_1}{L} e\right], \qquad \frac{\partial e}{\partial \tau} = \frac{\partial v}{\partial \xi} \qquad (6)$$

The system (6) is solved numerically through finite difference discretization, and the details of determining this solution are omitted here for brevity. To determine the numerical solution, the following material properties are used [19]: c1 = 641 Pa and c2 = 733 Pa.
The nonlinear rod described by the system (6) is fixed at one end and subjected to a harmonic excitation at the other end according to

$$e\big|_{\xi=1} = e_0 \sin\!\left(\frac{\Omega L}{c_0}\,\tau\right) \qquad (7)$$

The numerically determined strain distributions in time and space are shown in Fig. 5 and Fig. 6 for two values of the forcing frequency, Ω = π s-1 and Ω = 1.7π s-1, and a forcing amplitude e0 = 0.1. It is noted from these results that the loading frequency greatly influences the strain distribution along the rod. For Ω = 1.7π, steepening of the strain waves leading to energy localization is observed in the indicated regions of Fig. 6. For the same amplitude of the external loading, the maximum strain was almost twice that seen for Ω = π. This type of energy localization is a potential source of injury in brain fibers. Hence, an analysis of the frequency content of the external load applied to the brain may be helpful in assessing the possibility of energy localization in the brain fibers, and the potential for damage.

Fig. 5 Strain distribution for Ω = π rad/s

Fig. 6 Strain distribution for Ω = 1.7π rad/s
B. Nonlinear Visco-Elastic Discrete Model Study

As a discretization of the continuous rod treated previously, a mass-spring system is shown in Fig. 7.

Fig. 7 Discrete model of a brain fiber bundle

The masses are linked together by nonlinear visco-elastic elements whose governing equations can be found in [19]. In this article, only three visco-elastic elements are used to discretize a collection of brain fibers. To study the mechanical response to blast waves acting on the skull-brain system, as a first approximation, it is assumed that the shape of the pressure pulse that reaches the brain fibers is similar to that of the incoming blast wave that impacts the skull. However, this may not be true, since a part of the energy carried by the blast wave that reaches the human head is redistributed by wave reflections and by skull and brain tissue deformation before reaching the fibers. A blast-like loading is applied at the leftmost mass of the system, as shown in Fig. 7, and this loading is given by

$$F(t) = F_0\, e^{-t/T}$$

where T is a characteristic time related to the duration of the "impact" or blast loading. The parameters of each of the nonlinear visco-elastic elements are the same as those obtained for the highest strain rate value in Section III. In the following cases, the loading amplitude is assumed to be F0 = 1000 N. Two values of the loading characteristic time are considered in the present study: T1 = π/25 s and T2 = π/100 s. For these two values of the characteristic time, the time variations of the stresses in each of the visco-elastic elements are shown in Fig. 8 and Fig. 9. These results are illustrative of the propagation of the longitudinal stress wave through the elements of the system. Also, it is remarked that the stress in the fiber group reaches its maximum compressive value near the constrained end. This observation is consistent with the observations made with the continuum model (Figs. 5 and 6). This is due to the amplification of the stress as a consequence of the constructive interference between the wave travelling to the right and the wave travelling to the left that resulted from the reflection at the right boundary.
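The element equations of the discrete model are given in [19] and are not reproduced in this paper; as a stand-in, the sketch below integrates a three-mass chain with linear spring-dashpot links under the pulse F(t) = F0 e^(-t/T), free at the loaded left end and attached to a fixed wall at the right, which is enough to exhibit the element-by-element propagation and the compression near the constrained end discussed above. All element constants are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 0.01, 1.0e4, 2.0            # kg, N/m, N*s/m (assumed)
F0, T = 1000.0, np.pi / 100.0         # pulse amplitude (N) and time scale (s)

def element_forces(x, v):
    """Tension (positive) in the three links: m1-m2, m2-m3, m3-wall."""
    xe = np.append(x, 0.0)            # the wall does not move
    ve = np.append(v, 0.0)
    return k * np.diff(xe) + c * np.diff(ve)

def rhs(t, y):
    x, v = y[:3], y[3:]
    f = element_forces(x, v)
    F = F0 * np.exp(-t / T)           # blast-like pulse on the leftmost mass
    a = np.array([F + f[0], f[1] - f[0], f[2] - f[1]]) / m
    return np.concatenate((v, a))

sol = solve_ivp(rhs, (0.0, 0.3), np.zeros(6), max_step=1e-4)
f_hist = np.array([element_forces(sol.y[:3, i], sol.y[3:, i])
                   for i in range(sol.t.size)])
# Most negative (compressive) link force reached over time, per element
print("peak compressive force per element:", f_hist.min(axis=0))
```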
Fig. 8 Stress time history for T1 = π/25 s

Fig. 9 Stress time history for T2 = π/100 s
V. CONCLUDING REMARKS

In this article, the authors have examined wave propagation through soft tissue, and in particular brain fibers, with the aim of furthering the understanding of the mechanical response of brain matter subjected to blast loadings. Reduced-order models such as those presented in the present effort can help uncover energy localization phenomena and form the basis for formulating mechanisms associated with bTBI.
ACKNOWLEDGMENTS

The authors gratefully acknowledge the support received for this work from the Center for Energetic Concepts Development, University of Maryland.
REFERENCES

1. Schmitt K, Niederer P, Muser M, Walz F (2007) Trauma Biomechanics: Accidental Injury in Traffic and Sports. Springer Verlag
2. Voo L, Kumaresan S, Pintar F et al. (1996) Finite-element models of the human head. Med Biol Eng Comput 34:375-381
3. Kambouchev N (2007) Analysis of blast mitigation strategies exploiting fluid-structure interaction. Ph.D. Thesis, Massachusetts Institute of Technology
4. Zhang L, Yang K, King A (2004) A proposed injury threshold for mild traumatic brain injury. Trans ASME 126(2):226-236
5. Holbourn A (1943) Mechanics of brain injury. Lancet, 438-441
6. Gross A (1958) A new theory on the dynamics of brain concussion and brain injury. J Neurosurg 15(5):548-561
7. Zou H, Schmiedeler J, Hardy W (2007) Separating brain motion into rigid body displacement and deformation under low-severity impacts. J Biomech 40:1183-1191
8. Halabieh O, Wan J (2008) Simulating mechanism of brain injury during closed head impact. Biomedical Simulation 5104:107-118
9. Kumaresan S, Radhakrishnan S (1996) Importance of partitioning membranes of the brain and the neck in head injury modeling. Med Biol Eng Comput 34:27-32
10. Huang H, Lee I, Lee S et al. (2000) Finite element analysis of brain contusion: an indirect impact study. Med Biol Eng Comput 38:253-259
11. Zong Z, Lee H, Lu A (2006) A three-dimensional human head finite element model and power flow in a human head subject to impact loading. J Biomech 39:284-292
12. Imielinska C, Przekwas A, Tan X (2006) Modeling of trauma injury. Comput Science ICCS 3226:822-830
13. Pinnoji P, Mahajan P (2007) Finite element modeling of helmeted head impact under frontal loading. Sadhana 32(4):445-458
14. Moss W, King M, Blackman E (2009) Skull flexure from blast waves: a new mechanism for brain injury with implications for helmet design. Phys Rev Lett 103:108702
15. Nyein N, Jerusalem A, Radovitzky R et al. (2008) Modeling blast-related brain injury. 26th Army Science Conference, December 2008
16. Mendis K, Stalnaker R, Adyani S (1995) A constitutive relationship for large deformation finite element modeling of brain tissue. J Biomech Eng 117:279-285
17. Miller K, Chinzei K (2002) Mechanical properties of brain tissue in tension. J Biomech 35:483-490
18. Belytschko T, Liu W, Moran B (2001) Nonlinear Finite Elements for Continua and Structures. Wiley
19. Valdez M, Balachandran B (2010) Wave transmission through soft-tissue matter. Chapter 7, Simulation Based Innovation and Discovery for Energetics Application, in press
Corresponding Author: M. Valdez
Institute: Department of Mechanical Engineering, University of Maryland
Street: 2133 Glenn L. Martin Hall, Building 088
City: College Park, MD 20742
Country: USA
Email: [email protected]
Correlating Tissue Response with Anatomical Location of mTBI Using a Human Head Finite Element Model under Simulated Blast Conditions
T.P. Harrigan1, J.C. Roberts1,2, E.E. Ward1, and A.C. Merkle1
1 The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, USA
2 The Johns Hopkins University Department of Mechanical Engineering, Baltimore, Maryland, USA
Abstract— Mild traumatic brain injury (mTBI) has recently been shown to include deficits in cognitive function that have been correlated to tissue changes within regions of the white matter in the brain. These localized regions show decreased anisotropy in water diffusivity, which is thought to be related to local mechanical damage. However, a specific link between mechanical factors and tissue changes in these regions has not been made. This study is an initial attempt at such a correlation. A human head finite element model, verified against experimental data under simulated blast loading conditions, was used to estimate strains within regions of the brain that are correlated with functional deficits. Strain values from the most anterior and posterior extent of the corpus callosum (the rostrum and the splenium), the right and left anterior and posterior limbs of the internal capsule (ALIC and PLIC), and the left cingulum bundle were calculated under frontal blast loading at overpressure intensities below those typically known to cause injury. Strain peaks of approximately 1 percent were noted in regions associated with cognitive brain injury, indicating that loading conditions involving higher pressures could raise strains to significant levels.

Keywords— TBI, head, finite element model.
I. INTRODUCTION

The increased use of improvised explosive devices in current military conflicts has led to a significant percentage of blast-induced injuries. Nearly 40% of the warfighter injuries sustained between 2001 and 2009 in Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) were attributed to an explosive event [1]. This has led to increased concern about, and focus on, understanding the role of blast exposure in contributing to mild traumatic brain injury (mTBI). Clinically, mTBI is currently diagnosed based on an array of functional deficits as measured by neurological tests. These functional deficits have recently been related to abnormal findings at specific locations in the brain in medical imaging studies [2-4]. Locally, normal white matter in the human brain has a directionally dependent (or anisotropic) water diffusivity due to the oriented nature of the tissue, which is made up primarily of axons connecting different parts of the brain.
The anisotropy in diffusivity can be measured using magnetic resonance imaging (MRI). In cases of mTBI, MR imaging studies have found decreased diffusive anisotropy and increased average diffusivity in focal areas, indicating local damage due to disruption of the solid components of the tissue. A fractional anisotropy (FA) index is used to identify the abnormal region in the white matter of the brain and to quantify the degree of change from normal tissue. While these abnormal findings are correlated with functional deficits in mTBI, a clear connection between mechanical factors and the documented tissue changes has not been made. Recent studies suggest that a number of mechanical factors may contribute to mTBI. Diffuse axonal injury (DAI) can be caused by elevated tissue strain due to high levels of head angular acceleration [5,6]. In recent computational studies, the levels of strain related to axonal damage are approximately 5-15% [7]. It has also been shown that for a given strain level, high strain rates (>90 sec-1) may increase the membrane permeability of spinal cord axons, thus disrupting the neural processes [8]. Alternatively, direct pressure-based hypotheses have been advanced, based on data showing that cell permeability can be changed by high-pressure ultrasound [9]. The objective of this effort is to employ an experimentally verified Human Head Finite Element Model (HHFEM) [10] to study tissue response parameters under blast loading conditions, at anatomical locations where FA measurements indicate that injury could occur. The dynamic overpressure loads applied for the current analysis simulated shock tube conditions used previously to experimentally verify the results of the HHFEM. This effort will report on peak shear and principal strains as the tissue response properties of interest for predicting mTBI.
II. METHODOLOGY
To develop the HHFEM, an initial head mesh was obtained from the VOLPE National Transportation Research Center. This model was modified to include additional structures, such as facial features and a neck transition
component, and the mesh was refined in select areas. The head was then integrated with the Livermore Software Technology Corporation (LSTC) Hybrid III Anthropomorphic Test Device (ATD) neck to form the HHFEM. The integrated model consists of 103874 nodes, with a total of 127902 elements. Key features include the skull, brain, cerebrospinal fluid (CSF), brain stem, facial structure, and neck (Figures 1a-b). The HHFEM was initially evaluated against a physical surrogate, termed the Human Surrogate Head Model (HSHM), which was developed with identical geometry to the HHFEM. The HSHM, composed of biosimulant materials, was used to verify HHFEM response to dynamic overpressure loading conditions [10]. Human tissue material properties for the HHFEM were largely selected based on values reported in the literature. Brain tissue was modeled as viscoelastic, with a short-term shear modulus of 421 kPa, a long-term shear modulus of 7.8 kPa, a decay constant of 0.7 msec^-1, a bulk modulus of 2.19 GPa, and a density of 1.04 g/cc [11]. The CSF was modeled as an elastic fluid with a bulk modulus of 2.19 GPa. Bone was modeled as elastic with a modulus of 8 GPa and a density of 2.086 g/cc. The material over the skull was modeled as elastic with a density of 0.225 g/cc, an elastic modulus of 0.3 GPa, and a Poisson ratio of 0.495. The bottom of the skull and the top of the neck were rigidly attached, with neck properties taken from a 50th percentile Hybrid-III dummy model (LSTC Inc., Livermore, CA). To accurately simulate blast-wave interaction on the HHFEM surface, a three-dimensional computational fluid dynamics (CFD) model was constructed to recreate experimental conditions previously used to load the HSHM. This model included a numerical grid simulating the HHFEM surface (Figure 1c), the shock tube, and the air flow around and within these components. The shock tube was modeled as two separate cylindrical sections with 15 cm diameters. The first section (Driver) was 0.9 m long and had an 861 kPa initialization pressure, while the second section (Driven) was 4.9 m long and was initialized at ambient conditions (0 kPa). To simulate an experimental system, the pressure wave in the Driver was allowed to begin propagation down the Driven section at the start of the simulation. The shock wave that formed as the simulated gas traveled down the tube impacted the HHFEM surface, which was modeled 15 cm from the tube opening. The temporal and spatial distributions of pressure on the HHFEM grid surfaces were collected from the CFD simulation and used as input for the HHFEM. This loose model coupling allowed mapping of the pressure profiles to the HHFEM.
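The shear parameters listed above imply the usual single-term exponential relaxation law for a linear viscoelastic solid; the exact constitutive card used in the solver is not given in the text, so the following is stated as the conventional form rather than quoted from it:

\[
G(t) = G_\infty + (G_0 - G_\infty)\,e^{-\beta t},
\qquad
G_0 = 421\ \mathrm{kPa},\ \ G_\infty = 7.8\ \mathrm{kPa},\ \ \beta = 0.7\ \mathrm{msec}^{-1},
\]

so the effective shear stiffness of the brain tissue relaxes from its short-term toward its long-term value with a time constant of roughly 1.4 msec, well within the duration of the simulated blast loading.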
Fig. 1 (a) HHFEM with Hybrid III neck, (b) HHFEM sagittal cross-section view showing skull, brain, CSF and brain stem, and (c) grid of the head model used for CFD calculations
The Green-Lagrange maximum shear strains and maximum principal strains in the model were calculated at locations in the brain corresponding to regions where FA decreases have been documented in the white matter of patients with known functional deficits [2-4]. These regions were the most anterior and posterior extent of the corpus callosum (the rostrum and the splenium) (Figure 2), the right and left anterior and posterior limbs of the internal capsule (ALIC and PLIC) (Figure 3), and the left cingulum bundle (Figure 2).
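For completeness: with F the deformation gradient, the strain measure named above is

\[
\mathbf{E} = \tfrac{1}{2}\left(\mathbf{F}^{\mathsf{T}}\mathbf{F} - \mathbf{I}\right),
\]

with the maximum principal strain taken as the largest eigenvalue of E and the maximum (tensorial) shear strain as half the difference between its largest and smallest eigenvalues. This is the textbook definition, stated here as a reading aid; at the roughly 1 percent strains reported below it is essentially indistinguishable from the small-strain tensor.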
Fig. 2 Locations in the finite element mesh queried for anterior and posterior corpus callosum strains (green) and left cingulum bundle strains (black)
Fig. 3 Locations in the finite element mesh queried for ALIC and PLIC strains (orange)
III. RESULTS
Shock tube overpressure loading on the HHFEM was simulated for an 861 kPa Driver pressure. It is important to note that the peak pressure exerted on the HHFEM was substantially lower than the driver pressure. The pressure loading taken from the face of the HHFEM is provided in Figure 4. Pressures in all brain locations highlighted in this study trended with the loading pulse, in that they peaked and decayed to less than 20 kPa within the first 5 milliseconds after the arrival of the shock wavefront.
Fig. 4 Pressure loading curve generated from the CFD analysis of the shock tube loading and subsequently coupled to the HHFEM

Fig. 5 Von Mises effective strains in the anterior corpus callosum for 861 kPa driver pressure. Individual lines show data from elements in Figure 2. Strains were higher in inferior (lower) elements than in superior elements
The von Mises effective strains in the anterior and posterior regions of the corpus callosum peaked at approximately 1.2 percent (Figure 5) and 0.8 percent (Figure 6), respectively. Similar to the posterior corpus callosum, the von Mises effective strains in the ALIC, the PLIC, and the left cingulum bundle peaked at approximately 0.8 percent. In all locations, peak strain occurred approximately 10 to 20 milliseconds after the arrival of the shock wavefront. The strains in these observed regions were typically not the highest strains within the brain. Since head rotation due to loading is thought to induce strain within the brain, the global motion of the HHFEM head was tracked to determine translation and rotation of the skull with respect to pressure loading. The system exhibited primarily bi-modal behavior, in which the head moved rearward at a low frequency (approximately 5 Hz) and rotated about the transverse axis at a higher frequency (approximately 35 Hz). The period between strain minima in Figures 5 and 6 (approximately 28 msec) corresponds to a 35 Hz vibration, indicating a potential correlation to transverse rotation.
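The stated correspondence is easy to check: a strain-minima spacing of approximately T = 28 msec implies a vibration frequency of

\[
f = \frac{1}{T} \approx \frac{1}{0.028\ \mathrm{s}} \approx 36\ \mathrm{Hz},
\]

which matches the approximately 35 Hz transverse rotational mode to within the precision of the reported values.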
Fig. 6 Von Mises effective strains in the posterior corpus callosum for 861 kPa driver pressure. Individual lines show data from elements in Figure 2. Strains were higher in inferior (lower) elements than in superior elements
IV. DISCUSSION
In the locations of the brain where FA measures are correlated with clinically significant effects, the values of the von Mises effective strain and the maximum principal and shear strains peak at 10 to 20 milliseconds after the arrival of the shock wavefront in blast loading. This is significantly later than the time at which the peak pressure is seen in these locations. Similarly, significant head rotational velocity was not seen immediately after the initial impact of the blast wave, with anterior-posterior head rotation occurring 10 milliseconds after the wavefront. These observations imply that global head motion is the primary contributor to the peak strains seen in the brain during blast loading. Correlations in frequency response between the transverse head rotation and local strain minima substantiate this theory, but warrant further investigation. Peak strain values under the present loading conditions were approximately 1% at the observed locations. This value is below the level currently associated with mTBI [7]. However, the pressure loading in the present study was selected to correspond to pressure profiles generated and verified in the shock tube environment, and was significantly lower than pressures that may be encountered in theater and that would be expected to cause injury [12]. Additionally, the anatomical areas targeted for this study were based on clinical observations of mTBI in medical imaging studies, which were collected from non-blast injury environments. In the HHFEM, these locations did not reflect the locations of peak strain, which may indicate a different mechanism or region of injury for mTBI from blast loading than from impact loading.

V. CONCLUSION
In this study, an experimentally tested and verified model of the head and neck under blast loading was used to calculate strains in the areas of the brain where white matter injuries of clinical importance have been documented. The peak exposure pressures were less than those that typically produce blast-related injury, and the tissue strains, while significant, were correspondingly below strains that are typically considered injurious. Future refinement of this model, additional verification studies, and ultimate replication of the injurious loading conditions will help to identify injury mechanisms and contribute to the design of injury mitigation strategies.

ACKNOWLEDGMENT
The authors would like to express their appreciation to the Office of Naval Research (ONR) for funding this study under contract No. N0001406PD30002.
REFERENCES
1. DoD Personnel and Procurement Statistics: Personnel & Procurement Reports and Data Files, http://siadapp.dmdc.osd.mil/personnel/CASUALTY/gwot_reason.pdf (2008)
2. Niogi S., Mukherjee P., Ghajar J., Johnson C., Kolster R., Sarkar R., Lee H., Meeker M., Zimmerman R., Manley G. and McCandliss B. (May 2008) Extent of Microstructural White Matter Injury in Postconcussive Syndrome Correlates with Impaired Cognitive Reaction Time: A 3T Diffusion Tensor Imaging Study of Mild Traumatic Brain Injury. AJNR Am. J. Neuroradiol. 29:967-973.
3. Wu T., Wilde E., Bigler E., Yallampalli R., McCauley S., Troyanskaya M., Chu Z., Li X., Hanten G., Hunter J. and Levin H. (2009) Evaluating the Relation Between Memory Functioning and Cingulum Bundles in Acute Mild Traumatic Brain Injury Using Diffusion Tensor Imaging. J. Neurotrauma (doi: 10.1089/neu.2009.1110).
4. Kumar R., Husain M., Gupta R., Hasan K., Haris M., Agarwal A., Pandey C. and Narayana P. (April 2009) Serial Changes in the White Matter Diffusion Tensor Imaging Metrics in Moderate Traumatic Brain Injury and Correlation with Neuro-Cognitive Function. J. Neurotrauma 26:481-495.
5. Gennarelli T. (1993) Mechanisms of Brain Injury. J. Emerg. Med. 11(supp 1):5-11.
6. Meaney D., Thibault L., Smith D., Ross D. and Gennarelli T. (1993) Diffuse Axonal Injury in the Miniature Pig: Biomechanical Development and Injury Threshold. ASME WAM 25:169-175.
7. Marjoux D., Baumgartner D., Deck C., Willinger R. (2008) Head Injury Prediction Capability of the HIC, HIP, SIMon, and ULP Criteria. Accident Analysis and Prevention 40:1135-1148.
8. Shi R., Whitebone J. (2006) Conduction deficits and membrane disruption of spinal cord axons as a function of magnitude and rate of strain. J Neurophysiol 95(6):3384-90.
9. Reinhard M., Hetzel A., Krüger S., Kretzer S., Talazko J., Ziyeh S., Weber J., Els T. (2006) Blood-brain barrier disruption by low-frequency ultrasound. Stroke 37(6):1546-8.
10. Roberts J., Harrigan T., Ward E., Taylor T., Annett M. and Merkle A. (2009) Development of a Human Head-Neck Computational Model for Assessing Blast Injury. ASME International Mechanical Engineering Congress & Exposition (IMECE), Lake Buena Vista, Florida, November 13-19.
11. Zhang L., Yang K. H., King A. I. (2001) Comparison of Brain Responses Between Frontal and Lateral Impacts by Finite Element Modeling. J. Neurotrauma 18(1):21-30.
12. Bowen I.G., Fletcher E.R., Richmond D.R. (October 1968) Estimate of Man's Tolerance to the Direct Effects of Air Blast. Technical Progress Report DASA-2113, Defense Atomic Support Agency, Dept. of Defense, Washington, D.C.
Human Surrogate Head Response to Dynamic Overpressure Loading in Protected and Unprotected Conditions
A.C. Merkle1, I.D. Wing1, and J.C. Roberts1,2
1 The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, USA
2 The Johns Hopkins University Department of Mechanical Engineering, Baltimore, Maryland, USA
Abstract— The ballistic performance of helmets has contributed to increased soldier survivability through the prevention of penetrating injuries. However, the efficacy of helmets in mitigating primary blast-induced traumatic brain injury (bTBI) is unclear. The objective of this effort was to utilize the Human Surrogate Head Model (HSHM) to investigate brain response to shock tube overpressure loading conditions, both with and without personal protective equipment (PPE). The HSHM is a physical surrogate which includes a brain, skull, facial structure and skin, all fabricated using biosimulant materials. The system was mounted to a Hybrid III Anthropomorphic Test Device neck to allow head motion during overpressure exposure. Pressure sensors were embedded along the sagittal plane in the anterior and posterior regions of the biosimulant brain. A series of shock tube tests using driver pressures at four levels (ranging from 420 to 1150 kPa) were conducted to simulate blast loading conditions. Internal pressure response was highly correlated to driver pressure, thus demonstrating the surrogate model's sensitivity to load conditions. Characteristic features observed in the archetypical pressure waveform were used to evaluate differences between test parameters, including the effects of a helmet system on response. Results suggest that the helmet does not alter the initial peak pressure response. However, certain subsequent pressure peaks were found to undergo statistically significant reductions when a helmet was placed on the HSHM. The results of this test series demonstrate the use of a surrogate head system in characterizing the brain response to overpressure loading. Future studies will further evaluate the efficacy of PPE and contribute to the understanding of blast-induced injury mechanisms. Keywords— Brain, Injury, Surrogate, Blast, Head.
I. INTRODUCTION
Explosive mechanisms are responsible for nearly 80% of injuries sustained in current military operations, with approximately 25% of those injuries occurring to the head and neck [1]. Detonation of explosive weapons releases a large amount of energy in a very short period of time, resulting in phenomena including blast waves and fragmentation. Exposure to these explosive events and the injuries sustained from them has generated an increased focus on nonpenetrating, blast-induced traumatic brain injury (bTBI). Studies on both large and small animal models have
confirmed that blast waves can generate cognitive impairment and biochemical changes in the brain [2-3], while observations of human populations exposed to blast estimate that 30% of patients sustain long-term symptoms reflecting central nervous system disorders [4]. Although helmet systems have been developed to reduce the potential for penetrating injuries to the head, the efficacy of these systems in mitigating primary bTBI due to blast waves is unclear. To study the effects of blast waves on the human brain, there is an increased need for an experimental device, constructed with representative human anatomy and biosimulant materials, which allows the measurement of internal response to blast wave loading. This device should be durable, repeatable, and capable of measuring the mechanical response of the brain. The objective of this effort was to develop an instrumented Human Surrogate Head Model (HSHM) to meet this need, and to evaluate its performance when exposed to shock tube overpressure loading conditions. An investigation of pressure response in the brain both with and without personal protective equipment (PPE) was conducted.
II. MATERIALS AND METHODS
A. Human Surrogate Head Model (HSHM)
The HSHM was generated from solid model files describing the major structures of the human head, including the skull, brain, facial structure, and skin. Rapid Prototype (RP) parts were generated from these files for each component of the head. Molds were constructed from the RP parts for each component, with the exception of the skin, which was created by over-molding the skull and facial structures. The biosimulant materials used with these molds included a glass/epoxy mixture (skull), Sylgard silicone gel (brain), and syntactic foam (facial structure). Sensors chosen for implantation were staged within the intracranial space at select locations prior to the skull assembly. Upon completion of the head fabrication, the neck of a Hybrid III Anthropomorphic Test Device (ATD) was integrated to allow head motion during overpressure exposure (Figure 1).
Instrumentation in the HSHM includes two forward-facing miniature pressure sensors (Measurement Specs EPIH, Norfolk, VA) embedded within the brain along the sagittal plane. One sensor was positioned in the anterior region of the brain while the second was placed in the posterior region. High-rate displacement sensors (JHUAPL, custom developed) were also embedded in various locations of the brain [5]. These sensors are capable of measuring both linear and angular displacements for all three principal axes. Although these displacement data were collected simultaneously, the results of this paper will focus on the measured response of the pressure sensors.
B. Experimental Conditions
A shock tube system, designed to simulate blast loading pressure profiles in a laboratory setting, was used to generate a short-duration pressure wave on the HSHM. The system is divided into two sections separated by a thin membrane designed to rupture once a certain pressure differential is reached between the Driver and Driven sections (Figure 2). The Driver section is pressurized with helium gas, which flows into the Driven section once the membrane ruptures. A shock wave forms as the gas travels down the tube toward the tube exit. For this test series, the inner diameter of the tube was 15 cm, and the lengths of the driver and driven sections were 0.9 and 4.9 m, respectively.
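As background from classical one-dimensional shock tube theory (not derived in the paper), the diaphragm pressure ratio \(p_4/p_1\) required to produce a shock of strength \(p_2/p_1\) in ideal gases is

\[
\frac{p_4}{p_1} = \frac{p_2}{p_1}
\left[1 - \frac{(\gamma_4 - 1)\,(a_1/a_4)\left(p_2/p_1 - 1\right)}
{\sqrt{2\gamma_1\left(2\gamma_1 + (\gamma_1 + 1)\left(p_2/p_1 - 1\right)\right)}}\right]^{-2\gamma_4/(\gamma_4 - 1)},
\]

where subscript 4 denotes the Driver gas, subscript 1 the Driven gas, and a the sound speed. Because helium's sound speed \(a_4\) greatly exceeds that of air, a helium driver yields a substantially stronger shock at a given driver pressure, which motivates its use here.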
Fig. 2 Schematic of the shock tube system

Table 1 Test matrix conditions

Test ID   Driver Pressure (kPa)   Helmet?
HSHM04    634                     N
HSHM07    754                     N
HSHM11    425                     N
HSHM14    757                     N
HSHM16    578                     N
HSHM17    593                     Y
HSHM18    597                     Y
Fig. 1 Human Surrogate Head Model mounted on the ATD neck and positioned at the shock tube opening
III. RESULTS
The test series analyzed here encompasses a total of seven exposures of the HSHM to four nominal driver pressure levels of 425, 590, 635, and 750 kPa. The HSHM was mounted on the ATD neck and positioned 15 cm from the opening of the shock tube for each test. The surrogate was evaluated in an "unprotected" (no helmet) and a "protected" (helmet) configuration. Two tests were run with an experimental helmet system on the headform. The test conditions are described in Table 1. All data were collected at 500 kHz using a GaGe CompuScope (GaGe Applied, Lockport, IL) high-speed data acquisition system. Overpressure time histories were collected at three locations in the shock tube (Kulite HKM375, Leonia, NJ), as well as at the two intracranial locations previously described. All reported values are gauge pressures referenced to ambient conditions. Signal post-processing was conducted using MATLAB (The MathWorks, Natick, MA). The sensor signal voltages were processed using a moving average filter with a 20-sample box size (i.e. an effective frequency of 25 kHz) prior to converting the voltages to pressures.
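The smoothing step described above effectively reduces the recorded bandwidth to approximately 25 kHz (500 kHz / 20 samples). The following is a minimal sketch of this post-processing; the original analysis was performed in MATLAB, and the function name and calibration constant here are ours, since sensor sensitivities are not reported:

```python
import numpy as np

FS_HZ = 500e3   # acquisition rate from the text
BOX = 20        # moving-average box size -> ~25 kHz effective bandwidth (FS_HZ / BOX)

def voltage_to_pressure(volts, kpa_per_volt):
    """Boxcar-smooth a raw sensor voltage trace, then scale to pressure (kPa).

    `kpa_per_volt` is a hypothetical calibration constant standing in for the
    unreported sensor sensitivity.
    """
    kernel = np.ones(BOX) / BOX
    smoothed = np.convolve(volts, kernel, mode="same")
    return smoothed * kpa_per_volt
```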
Characteristic pressure response curves at various initial conditions for the anterior sensor in the configuration without helmet are provided in Figure 3. As the input pressure increases, the measured response in the brain also increases, directly correlating brain pressure magnitude with that of the loading conditions. Although the magnitude of the pressure response is influenced, the characteristic shape of the pressure response seems to remain largely unaffected. The response waveform maintains characteristic features for the anterior and posterior brain pressure data between tests. The archetypical waveforms are shown in Figure 4. To reduce this response shape to a set of quantitative
parameters, certain shape features were extracted and tabulated for each test. These features were selected not only to characterize the response, but also to isolate waveform parameters that potentially correlate with injury, including the magnitude and range of positive and negative peaks. Quantification of these features and subsequent statistical analyses were used to determine response differences between tests with and without a helmet. Features selected to describe the waveform include various pressure peaks at key points on the curve (Figure 4). For the anterior pressure trace, the selected parameters of interest were the 1st and 2nd principal peak pressures, as well as the 1st negative peak. For the posterior trace, the selected parameters were the pressure range captured by the initial oscillation (near t = 0.5 ms), as well as the first two positive and negative maxima.
Fig. 3 Comparison of intracranial pressure response (unprotected configuration) at the anterior location for three driver pressure levels (HSHM07 – 754 kPa, HSHM16 – 578 kPa, HSHM11 – 425 kPa)

Fig. 4 Characteristic waveforms with selected features. Anterior response (top): 1st positive peak (P1), 2nd pos. peak (P2), 1st negative peak (P3); Posterior response (bottom): Initial range (P4), 1st neg. peak (P5), 1st pos. peak (P6), 2nd neg. peak (P7), and 2nd pos. peak (P8)
A comparison of the helmet vs. no-helmet case for the anterior and posterior pressure responses is provided in Figure 5. For both test configurations, a similar response pattern was observed, but with differences in certain selected features. Most notably, the initial peak pressure responses (P1, P2, P5 and P6) appear unaffected by the presence of the helmet. However, the secondary peaks (P3, P7, P8) seem to undergo a reduction in magnitude when the helmet is placed on the HSHM. To test the statistical significance of these observations, a linear multivariate analysis of variance (M-ANOVA) was performed using JMP statistical analysis software (SAS, Cary, NC). Independent factors included driver pressure and helmet condition. The dependent response variables studied were the eight pressure waveform features shown in Figure 4. The M-ANOVA identified the features that were found to be significantly affected by the helmet system, and generated a statistical fit to the data using a least squares regression analysis. Figure 6 provides the responses calculated from this analysis and shows that parameters P3, P4, P7 and P8 were significantly influenced by the presence of the helmet system.
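The analysis itself was performed in JMP; purely as an illustration of the model structure, an equivalent per-feature least-squares fit (a univariate slice of the M-ANOVA) can be sketched in Python as follows, where the P3 values are placeholders rather than measured data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Test matrix from Table 1; the P3 feature magnitudes are illustrative only.
df = pd.DataFrame({
    "driver_kpa": [634, 754, 425, 757, 578, 593, 597],
    "helmet":     [0,   0,   0,   0,   0,   1,   1],
    "P3":         [-34.0, -41.0, -22.0, -40.5, -30.0, -18.0, -17.5],
})

# Least-squares fit of one waveform feature on driver pressure and helmet
# condition; the coefficient (and p-value) on `helmet` tests the helmet effect.
fit = smf.ols("P3 ~ driver_kpa + helmet", data=df).fit()
print(fit.summary())
```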
IV. DISCUSSION
The HSHM effectively measured the pressure response in the brain during a series of dynamic overpressure loading conditions. Response differences were evident between the anterior and posterior regions of the brain. The anterior region provided a dominant positive pressure peak followed by a second positive peak before the pressure response was shown to go negative. Posterior response data had a much lower magnitude initial response followed by a negative pressure and subsequent oscillation. Although the magnitude of the HSHM pressure response exhibited sensitivity to increased loading conditions, the waveform characteristics for each sensor location largely remained unchanged. Therefore, the pressure waveform experienced within the brain varies significantly and is largely dictated by anatomical location for a given loading condition. The pressure response observed in the brain is somewhat complex and is not well characterized by a simple peak overpressure. The existence of a characteristic pressure waveform at each location in the brain permitted the identification of prominent features that could be extracted. As the introduction of the helmet system did not affect the
shape of the characteristic waveform, these features could be quantified and compared for protected and unprotected configurations. The statistical analysis determined that the presence of the helmet did not reduce the magnitude of the initial pressure levels experienced in the anterior portion of the brain. However, the helmet was found to significantly reduce the negative phase of the pressure response. For the posterior location, the initial pressure range was decreased, as well as the secondary positive and negative pressure peaks. The dynamic overpressures applied to the HSHM for this test series were predominantly selected to conduct an initial investigation of the HSHM response and sensitivity to test parameters. The overpressure levels imparted to the system are considered to be sub-injurious as determined by the survivability predictions estimated for a human exposed to blast overpressure [6].

Fig. 5 Intracranial pressure response at the anterior (top) and posterior (bottom) locations for helmet (HSHM17) and no helmet (HSHM16) test conditions

Fig. 6 Magnitudes for P1-P8 calculated using least squares regression results and a 689 kPa driver for helmet and no helmet conditions. The 95% confidence intervals and M-ANOVA results are shown (* indicates p<0.05)

V. CONCLUSIONS
The results of this effort have successfully demonstrated the use of a human surrogate head model in measuring the pressure response experienced in the brain during dynamic overpressure loading. Characteristic pressure waveforms were identified for two different intracranial locations, and peak pressures correlated with increased loading conditions, demonstrating device sensitivity to both anatomical location and external loading conditions. This preliminary series of tests indicated an effect of helmet systems on certain pressure responses within the brain of the surrogate device. Future studies are required to further evaluate the influence of the helmet system on pressure response and to determine the implications for blast-induced traumatic brain injury.

ACKNOWLEDGMENT
The authors would like to express their appreciation to the Office of Naval Research (ONR) for funding this study under contract No. N0001406PD30002.

REFERENCES
1. Owens B, Kragh J, et al. (2008) Combat Wounds in Operation Iraqi Freedom and Operation Enduring Freedom. J. Trauma 64:295-299.
2. Baumann R, et al. (2009) An Introductory Characterization of a Combat-Casualty-Care Relevant Swine Model of Closed Head Injury Resulting from Exposure to Blast. J. Neurotrauma 26:841-860.
3. Cernak I, et al. (2001) Ultrastructural and Functional Characteristics of Blast Injury-Induced Neurotrauma. J. Trauma 50:695-706.
4. Cernak I, Savic J. (1999) Blast Injury from Explosive Munitions. J. Trauma 47(1):96-103.
5. Merkle A, Wing I, et al. (2009) Development of a Human Head Surrogate Model for Investigating Blast Injury. ASME IMECE Proceedings, IMECE2009-11807, Orlando, FL.
6. Bowen I, et al. (1968) Biophysical mechanisms and scaling procedures applicable in assessing responses of the thorax energized by air-blast overpressures or by nonpenetrating missiles. Ann NY Acad Sci 152:122-146.
Corresponding author: Andrew C. Merkle
Institute: The Johns Hopkins University Applied Physics Laboratory
Street: 11100 Johns Hopkins Rd
City: Laurel, MD 20723
Country: USA
Email: [email protected]
Blast-Induced Traumatic Brain Injury: Using a Shock Tube to Recreate a Battlefield Injury in the Laboratory
J.B. Long1, L. Tong1, R.A. Bauman1, J.L. Atkins1, A.J. Januszkiewicz1, C. Riccio1, R. Gharavi1, R. Shoge1, S. Parks2, D.V. Ritzel3, and T.B. Bentley1
1 Division of Brain Dysfunction and Blast Injury, Walter Reed Army Institute of Research, Silver Spring, MD, 20910, USA
2 Operations Research and Applications, Fredericksburg, VA 22408, USA
3 Dyn-FX Consulting Ltd., Amherstburg, Ontario, N9V 2T5 Canada
Abstract–– Explosive detonation has been a longstanding battlefield concern for the U.S. Army. Recently, emphasis has shifted to blast injury to the brain since blast has emerged as the predominant cause of neurotrauma in current military conflicts, and its etiology is largely undefined. Using a compressiondriven shock tube to simulate blast, we are assessing the physiological, neuropathological, and neurobehavioral consequences of airblast exposure. Blast generates an air shock front imparting effectively instantaneous increases in static and dynamic pressure conditions. The positioning of the experimental subject within the shock tube greatly influences the exposure conditions and determines the relative contributions of these side-on and dynamic pressures to the injury. The pressure exposures and brain injuries resulting from airblast exposure at different shock tube positions are reviewed. Shock tube exposures provide survivable blast conditions under which striking neuropathological changes can be generated and TBI can be studied. These findings demonstrate that shock tube-generated airblast can cause TBI in rats, and point to the utility of this experimental tool in the development of effective therapies and countermeasures. Keywords–– blast, traumatic brain injury, shock tube, shock wave, simulation, blast overpressure.
I. INTRODUCTION
Blast has emerged as the predominant cause of casualties in Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF), with the majority of these injuries resulting from blast propagated by improvised explosive devices (IEDs). According to the Department of Defense's Defense Manpower Data Center [1], as of Feb 6, 2010, over 63% of US military casualties in OIF and OEF have been caused by explosive device weaponry. Among these casualties, the large majority of neurotrauma patients have closed head (i.e. nonpenetrating) injuries. Recent figures from the Department of Defense Military Health System [2] indicate that as of October 2009, there have been 152,939 medically-diagnosed non-penetrating traumatic brain injuries among service members in the U.S. military since FY2000, which is 98% of the overall TBI total. Other investigators, using post-deployment screening assessments such as the Post Deployment Health Assessment (PDHA) or Post Deployment Health Reassessment (PDHRA), have estimated even higher incidences of
blast TBI (bTBI). For example, a recent RAND report estimates that 320,000 service members or 20% of the deployed force potentially suffer from TBI [3]. A similar summary from the Institute of Medicine estimates the prevalence of bTBI in deployed U.S. warfighters at 22% [4]. Despite claims to the contrary [5], it is widely recognized by leading authorities that bTBI can be produced at all severity levels, independent of a penetrating wound or being bodily thrown [4,6]. The etiology of primary blast-induced TBI is at this point largely undefined, although it appears that it may differ substantially from the penetrating brain injuries caused by ballistics or shrapnel in earlier conflicts. Body armor has made blast injuries survivable; consequently, to a large extent blast-induced nonpenetrating head injuries have emerged among troops who without body armor would have simply been killed in action as a result of injury to more vulnerable organs such as the lung [7]. Collectively, these statistics highlight the urgent need to advance medical care targeting the growing numbers of veterans with disabilities stemming from bTBI. This includes efforts to preclinically define the pathophysiological mechanisms underlying blast-induced TBI, to devise improved means to mitigate the risk of brain injury after blast exposure, and to identify rational therapeutic interventions. In this report, we describe considerations that go into the preclinical modeling of blast TBI in the laboratory and our experimental efforts to simulate blast TBI using a compressed air-driven shock tube. An explosion is caused by the release of energy resulting from the nearly instantaneous chemical conversion of a solid or liquid into a gas [5-10]. The gaseous detonation products expand rapidly from their point of origin and compress the surrounding medium (usually air or water), generating a blast wave. Mechanical and thermal energy are transferred into the surrounding air or water as well as into objects or bodies within proximity to the blast. In air, the most distinctive feature of the blast wave is the supersonic shock front, which is the leading element of the pressure disturbance through which there is a nearly instantaneous change in all gas-dynamic conditions of the air (pressure, density, flow velocity, and temperature). For a typical explosion in a free field, the pressure-time relationship has been described by Friedlander as an initial rapid rise in
pressure, followed by an exponential-like decay which may rebound below the ambient pressure and produce a negative pressure phase, thereafter rising back to baseline [6,10]. The shock front is associated with a blast wind (dynamic pressure) resulting from the kinetic energy imparted to the air as it is traversed by the shock wave [6-10]. Thus, following an explosion, an individual exposed to a blast wave will be exposed to a step increase in static pressure as well as a high velocity wind [6-10]. With time and distance, the peak pressure and velocity of the blast wave weaken. Near the source of the explosion the overpressure decreases approximately with the inverse cube of the distance from the origin, but at greater distances it decays inversely with distance as an acoustic wave [10]. The distinction regarding the incident blast flow conditions (i.e. static and dynamic pressures) and target loading has important implications with regard to the mechanisms for blast injury, imparted loading, and cellular stresses, as well as the proper simulation of blast in the laboratory [8]. Blast waves are often described solely by their peak side-on (or static) overpressure, which is an incomplete characterization that does not represent the target loading pressure. Although the static pressure profile is an important component of blast insult, it is by no means the only relevant energy component, particularly for victims within the area of the fireball, where kinetic energy of the flow dominates [8]. In a shock tube, the side-on pressure measurement commonly used to characterize peak overpressure is the pressure detected on a surface aligned parallel to the blast wave propagation, which does not offer aerodynamic resistance and therefore does not experience the kinetic energy component of the flow. The reflected pressure experienced by a target obstructing the flow may be many-fold higher than the unobstructed static pressure component [8]. In terms of physiological significance to blast TBI, the critical biomechanical loading to the experimental subject is determined from both the static (Ps) and dynamic pressure (Pd) of the blast wave and the geometry of the structure [8,9]. The three static overpressure parameters of greatest importance are the peak value of P (i.e. peak overpressure), its duration, and the impulse (i.e. area under the pressure-time curve); rise time and decay rate are also important for visco-elastic biologic materials. When simulating blast in a shock tube, the relative contributions of Ps and Pd to the imparted loading vary with the position along the long axis of the tube, which can be altered to simulate particular blast exposure conditions. To clearly monitor the potential contributions of both these parameters to experimental bTBI in a subject exposed to airblast at different tube positions, we have devised a rat holder that incorporates piezoresistive gauges to record both side-on and head-on pressures. Preliminary findings with this device are described below along with associated histopathological descriptions of the brain injuries resulting from single or repeated airblast exposures.
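For reference, the Friedlander description above is commonly written in its modified form as

\[
p(t) = P_s\left(1 - \frac{t}{t_d}\right)e^{-b\,t/t_d},
\qquad
I = \int_0^{t_d} p(t)\,dt,
\]

where \(P_s\) is the peak static overpressure, \(t_d\) the positive-phase duration, b a dimensionless decay parameter, and I the positive-phase impulse. These symbols are supplied here for convenience; the cited references use various equivalent notations.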
II. MATERIALS AND METHODS
Sprague Dawley rats (275-325 g) were anesthetized with isoflurane and subjected to survivable blast overpressures that were generated using an air-driven shock tube with Mylar membranes rupturing at predetermined pressure thresholds. The shock tube (Fig. 1) consists of a 2.5 ft long compression chamber that is separated from a 15 ft long expansion chamber by polyester Mylar membranes (DuPont, Wilmington, DE) of different thicknesses. Both chambers are 1 ft in diameter. Using an air compressor, the compression chamber is pressurized with room air, causing the Mylar membrane to rupture at a pressure that is linearly dependent upon the thickness of the Mylar sheet(s) separating the two chambers. Rats were placed in a transverse prone position in a holder secured 2.5 ft within the mouth of the tube. Piezoresistive gauges (Endevco) incorporated into a low-profile aluminum holder (Fig. 2) were used to record the static (i.e. side-on) pressure and dynamic pressure (i.e. blast wind) at multiple positions, including the tube positions where rats were exposed. After airblast or sham handling, rats were quickly removed from the tube and returned to their home cages. In some cases, rats were re-exposed to airblast 24 hours after the initial exposure. Three days after their final blast exposure or sham handling, rats were anesthetized and, following perfusion fixation with a 4% paraformaldehyde solution, brains were sectioned (30 µm) and silver impregnated to study fiber degeneration

Fig. 1 Shock tube

Fig. 2 Rat holder with tip (A) and side (B) gauges
(i.e. axonopathy). Sections were independently scored by 2 blinded observers. Scores (0, 1, or 2) were assigned to specific brain regions based upon the extent of silver impregnation in these neuroanatomical structures.
III. RESULTS
Fig. 3 shows typical pressure tracings recorded with gauges placed 2.5 ft from the mouth of the tube (i.e. 12.5 ft from the Mylar membrane).

Fig. 3 Blast pressure recordings when positioned 12.5 ft from a 1000 µ Mylar membrane. Tip gauge recording in blue, side gauge in green

Table 1 Pressure Readings for Different Membrane Thicknesses at Varying Positions within the Blast Tube
Feet from Membrane   Peak Pressure (psi): Tip   Peak Pressure (psi): Side   Tip Impulse (psi-ms)   Side Impulse (psi-ms)
500 µ Thick Membrane
5      13.07   11.27   140.40   126.39
9      12.80   10.73   135.18   119.99
12.5   14.00   11.47   118.28    92.39
15.5   14.40   10.20    60.00    10.33
1000 µ Thick Membrane
5      20.87   16.74   239.80   204.90
9      20.27   17.13   241.27   203.09
12.5   20.60   16.33   194.40   145.26
15.5   21.17   15.47   130.44    13.76
Values are averaged from 3 recordings
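The tabulated impulses correspond to areas under the measured pressure-time curves. A minimal sketch of that reduction using trapezoidal integration follows; restricting the integral to the positive phase is our assumption, as the exact reduction procedure is not described:

```python
import numpy as np

def impulse_psi_ms(t_ms, p_psi):
    """Impulse (psi-ms) as the area under a gauge pressure-time trace.

    t_ms and p_psi are arrays of sample times (ms) and overpressures (psi),
    e.g. a tip- or side-gauge recording such as those shown in Fig. 3.
    Negative-phase samples are clipped to zero here, which is an assumption.
    """
    return np.trapz(np.clip(p_psi, 0.0, None), t_ms)
```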
Fig. 4 Impulse ratios calculated from the average of 3 tip gauge and side gauge recordings at varied tube positions and Mylar membrane thicknesses. Square – 500 µ thickness, Triangle – 750 µ thickness, Diamond – 1000 µ thickness, and Cross – 1400 µ thickness.
The markedly higher peak pressure and impulse recorded from the tip gauge measuring the 'total pressure' reveal that Ps only partly accounts for the load conditions experienced at this point. As anticipated, Ps and Pd varied somewhat along the axis of the tube (Table 1). In particular, although peak pressures were relatively consistent across locations, pressure impulses measured from both gauges decreased substantially closer to the mouth of the tube and impulse ratios increased (Fig. 4), revealing a greater relative contribution of Pd to biomechanical loading at locations closer to the open end of the tube. The impulse ratio changed dramatically immediately outside the mouth of the tube, where total pressure impulse increased by factors of 100-fold, reflecting an even more pronounced potential biomechanical influence of Pd in the exit-jet flow. The exit-jet flow also includes a quasi-steady "Mach disc" shock which greatly affects specimen response. Positional variations were consistent across all Mylar membrane thicknesses and associated pressures.

Table 2 Fiber degeneration neuropathology scores

          opt   CBW   SuG
Control     0     0     0
1 Blast     2    10     6
2 Blast     9    16     7

Scores (0, 1, or 2) were assigned based upon the level of silver impregnation in injured fibers (0 - absent, 1 - moderate, and 2 - extensive). Left and right scores assigned by 2 blinded observers are summed (N=3 brains per treatment gp). Optic tract (opt), cerebellar white matter (CBW), and superficial grey layer of the superior colliculus (SuG).

Fig. 5 10X (top) and 40X (bottom) photomicrographs of sham-injured (left) and blast-injured (right) rat cerebella. Axonopathy, which is absent in the sham-injured brain, is prominent as darkened, silver-impregnated fibers in the injured cerebellum on the right, which was exposed to 20 psi airblast 2.5 ft from the mouth of the tube

As previously described [12], at approximately 20 psi peak pressure intensities, brains of rats exposed to airblast typically were devoid of any obvious cell loss or injury, and instead most typically showed widespread fiber degeneration that was evident in silver-stained sections throughout all levels of the brain (Fig. 5). The neuropathological changes produced 2.5 ft from the mouth of the tube (12.5 ft from the Mylar membrane) were widespread and were
observed bilaterally, although not uniformly. The fiber degeneration was particularly prominent in the cerebellum, optic tracts, and in commissural fibers of the corpus callosum. Axonopathy was evident at higher magnification in small-caliber fibers as well. Brain injuries were qualitatively similar in subjects exposed to airblast at other tube positions [12; unpublished observations]. In contrast, fiber degeneration was absent in brains of all the sham handled rats (Fig. 5). Fiber degeneration was also clearly more pronounced in brains of rats receiving repeated airblast exposures than in those exposed only once (Table 2).
IV. DISCUSSION
These data confirm that blast simulation conditions vary along the longitudinal axis of the shock tube and represent a first step toward mapping these positional differences. In general, the comparative tip and side gauge recordings reveal the greatest differences in pressure impulses, which become evident as one approaches the mouth of the shock tube and are much more pronounced immediately outside the tube. The relative influences of tube position on these recordings are consistent across Mylar membrane thicknesses and associated driver pressures. As presently configured, the side-on pressures of blast waves generated in this shock tube are of an approximately 12-16 msec duration. Since explosive blasts in the field typically yield shorter blast waves (ranging from 2 to 10 msec), it will be desirable to modify the tube to better simulate this relevant blast waveform. This can be accomplished by altering the ratio of the dimensions of the driver (i.e. compression chamber) relative to the expansion chamber. Through manipulation of these experimental parameters (e.g. driver volume, tube position, Mylar membrane thickness, and type of gas), one can recreate a wide variety of conditions experienced in different blast scenarios, and also comparatively establish which of these parameters (e.g. peak pressure vs. impulse) has the greatest influence on physiological perturbations and injury mechanisms. Rats exposed to approximately 20 psi airblast displayed neuropathological changes that were qualitatively quite similar to those observed in rats and pigs after different exposure conditions [11-13]. In all cases, injury was most prominent as widespread fiber degeneration that was not associated with cell loss or injury. Although the numbers of subjects in each treatment group are small, it is noteworthy that rats receiving two airblast exposures had noticeably greater neuropathological changes than either the single- or sham-exposure groups. The widespread use of explosive weaponry (e.g. IEDs) in OIF and OEF has prompted a surge in biomedical research to address the consequences of blast exposure, the relevant
injury mechanisms, and potential countermeasures. Shock tubes, which have been used for decades in blast biophysics research, are now increasingly employed as a tool for biomedical research as well. As the use of shock tubes and other experimental models of blast expands, it is critical that they be used in a manner that most effectively simulates explosive blast conditions, recognizing that creation of an injury does not constitute validation of an injury model. For example, immediately outside the mouth of a shock tube, conditions are very complex, with high flow gradients such that slight changes in position impart large changes in pressure conditions. Practically all flow energy is converted to a collimated jet at the shock tube exit, yielding extreme dynamic pressure and negligible static pressure, as end-wave rarefaction abruptly reduces static pressure and greatly accelerates flow. As a consequence, despite the appearance of injuries and pathophysiological responses, experimental subjects placed outside the mouth of the shock tube in an attempt to model blast exposure are in all likelihood experiencing extremely different loading and injury phenomena than those resulting from explosive blast, and might also yield disparate neuropathological changes [8,11,14,15]. It is therefore essential that exposure conditions be carefully monitored and considered to validate the fidelity of the experimental model. The measurements of the physical conditions and associated neuropathological outcomes described in this report represent a step toward that objective.
ACKNOWLEDGMENT
The expert technical assistance of Eloyse Fleming, Angela Dalmolin, and Andrea Edwards is gratefully acknowledged. Supported by CDMRP Awards W81XWH-08-2-0018 and W81XWH-08-2-0017.
REFERENCES
1. DMDC at http://siadapp.dmdc.osd.mil/personnel/CASUALTY/castop.htm
2. MHS at http://www.health.mil/Pages/Page.aspx?ID=49
3. Tanielian T, Jaycox LH, eds. (2008) Invisible Wounds of War: Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation
4. Institute of Medicine (2009) Gulf War and Health, Volume 7: Long-Term Consequences of Traumatic Brain Injury. Washington, DC: The National Academies Press
5. Champion HR, Holcomb JB, Young LA (2009) Injuries from explosions: physics, biophysics, pathology, and required research focus. J Trauma 66:1468-77, discussion 77
6. Ling G, Bandak F, Armonda R et al. (2009) Explosive blast neurotrauma. J Neurotrauma 26:815-25
7. Wightman JM, Gladish SL (2001) Explosions and blast injuries. Ann. Emerg. Med. 37:664-678.
8. Benzinger TL, Brody D, Cardin S et al. (2009) Blast-related brain injury: imaging for clinical and research applications: report of the 2008 St. Louis workshop. J Neurotrauma 26:2127-2144
9. Cernak I, Noble-Haeusslein LJ (2010) Traumatic brain injury: an overview of pathobiology with emphasis on military populations. J Cereb Blood Flow Metab 30:255-66.
10. Leung LY, VandeVord PJ, Dal Cengio AL et al. (2008) Blast related neurotrauma: a review of cellular injury. Mol Cell Biomech 5:155-68
11. Long JB, Bentley TL, Wessner KA et al. (2009) Blast overpressure in rats: recreating a battlefield injury in the laboratory. J Neurotrauma 26:827-40.
12. Bauman RA, Ling G, Long L et al. (2009) An introductory characterization of a combat-casualty-care relevant swine model of closed head injury resulting from exposure to explosive blast. J Neurotrauma 26:841-860.
13. Garman R, Jenkins LW, Bauman RA et al. (2009) Blast overpressure injury in rats with body protection produces acute and subacute axonal, dendritic and synaptic neuropathology. J Neurotrauma 26:A-53.
14. Jaffin JH, McKinney L, Kinney RC et al. (1987) A laboratory model for studying overpressure blast injury. J Trauma 27:349-356.
15. Svetlov SI, Prima V, Kirk DR et al. (2010) Morphologic and biochemical characterization of brain injury in a model of controlled blast overpressure exposure. J Trauma, in press.
DISCLAIMER
The opinions or assertions contained herein are the private views of the author, and are not to be construed as official, or as reflecting the views of the Department of the Army or the Department of Defense. Research was conducted in compliance with the Animal Welfare Act and other federal statutes and regulations relating to animals and experiments involving animals, and adheres to principles stated in the Guide for the Care and Use of Laboratory Animals, NRC Publication, 1996 edition.
Joseph B. Long, Ph.D.
Division of Brain Dysfunction and Blast Injury
Walter Reed Army Institute of Research
503 Robert Grant Avenue
Silver Spring, MD 20910
USA
[email protected]
Wave Propagation in the Human Brain and Skull Imaged in vivo by MR Elastography
E.H. Clayton1, G.M. Genin1, and P.V. Bayly1,2
1 Washington University in St. Louis/Department of Mechanical, Aerospace and Structural Engineering, Saint Louis, USA
2 Washington University in St. Louis/Department of Biomedical Engineering, Saint Louis, USA
Abstract— Traumatic brain injuries (TBI) are common, and often lead to permanent physical, cognitive, and/or behavioral impairment. TBI arises in vehicle accidents, assaults, athletic competition, and in battle (due to both impact and blast). Despite the prevalence and severity of TBI, the condition remains poorly understood and difficult to diagnose. Computer simulations of injury mechanics offer enormous potential for the study of TBI; however, computer models require accurate descriptions of tissue constitutive behavior and brain-skull boundary conditions. Lacking such data, numerical predictions of brain deformation remain uncertain. Brain tissue is heterogeneous, anisotropic, nonlinear, and viscoelastic. The viscoelastic properties are particularly important for TBI, which usually involves rapid deformation due to impact. Magnetic resonance elastography (MRE) is a non-invasive imaging modality that provides quantitative spatial maps of biologic tissue stiffness in vivo. MRE is performed by inducing micron-amplitude propagating shear waves into tissue with a surface actuator at steady state while images of the wave motion are acquired using a standard clinical MRI scanner. A custom synchronized MRI pulse sequence, with "motion-sensitizing gradients", is used to encode wave displacements at various time points. Elastograms, or images with contrast corresponding to complex shear modulus (storage and loss modulus), can be computed from the raw spatial-temporal displacement data by inverting the governing equations of motion. Wave images and elastograms can provide fundamental insight into the dynamics of the human brain and skull under rapidly time-varying loads. In this study, we aim to understand in vivo brain motion as the cranium is exposed to acoustic frequency pressure waves (45 Hz). This loading approximates some physical features of blast, albeit at very low levels. Keywords— MR-Elastography, TBI, non-invasive measurement, brain response, blast loading.

I. INTRODUCTION
The US Centers for Disease Control and Prevention (CDC) estimates that each year 1.4 million Americans suffer a traumatic brain injury [1]. The prevalence of TBI is surely related to the variety of ways one can subject one's brain to insult. Most TBI cases are caused by one or more of the following events: direct physical contact with an external rigid body, inertial loading due to linear or angular acceleration of the head, or external pressure loads (i.e. blast wave). Despite its importance, little is known with confidence about the mechanics of TBI. Diffuse Axonal Injury (DAI) is a prominent manifestation of TBI. It has been suggested that DAI is a microstructural process in which neural axons are stressed beyond a limit, initiating a biochemical cascade that leads to axon destruction [2]. While uncertainty remains as to the biochemistry of DAI, understanding in vivo brain motion and brain-skull interaction as the cranium is subjected to external loading is fundamental to understanding TBI and may help explain why micro-structural changes appear where they do. Experimental data are urgently needed to provide estimates of mechanical properties for use in computer simulations. Characterization of brain-skull interactions is equally important; the attachments that transmit force from skull to brain are critical in determining brain deformation [3]. In addition, computer simulations need to be validated before they can be relied upon to illuminate TBI or design countermeasures. This study uses magnetic resonance elastography (MRE), an imaging modality designed specifically to measure oscillatory motion in soft tissue [4]. The objective is to illuminate the mechanical behavior of brain tissue while the skull is subjected to acoustic pressure waves. Results obtained will be applicable for both characterization of mechanical features of the brain and skull, and validation of computer models.
II. METHODS
Four human subjects, aged 18-29 years, were imaged in this study. Experiments were conducted at 1.5T on a MAGNETOM Avanto (Siemens) whole-body clinical scanner in Washington University's Center for Clinical Imaging Research (CCIR) facility. The MRE imaging sequence implemented for this study consists of a specialized gradient-recalled echo (GRE), phase-contrast magnetic resonance imaging sequence. The sequence differs from a traditional
GRE imaging sequence in three ways. First, phase-contrast MR images are acquired, instead of magnitude images. Second, the sequence is triggered to acquire data in synchronization with an external actuator and repeated with time delays to acquire data at several time points. Third, a single-cycle, trapezoidal magnetic gradient is added to the imaging routine. The "motion encoding gradient" (MEG) is responsible for spatially recording tissue motion at the MEG frequency. All procedures were approved by the institutional Human Research Protection Office Internal Review Board to ensure that the rights and welfare of the human research participants were protected. The brain was imaged using a commercial phased-array MRI head coil with the subject in the supine position. Motion was transmitted to each subject's skull by affixing two acoustic paddle actuators (Resoundant™, Resoundant Inc., Rochester, MN) near the left/right temples with Coban™ bandaging (Fig. 1). The paddle actuator geometry is similar to that of a timpani (kettle) drum. It consists of a rigid plastic hemispherical frame with a flexible membrane stretched across the bowl. Flexible plastic tubing acts as a pressure waveguide, connecting each passive paddle actuator to an active driver outside of the MRI scanner magnetic field. The active driver consists of a voice coil actuator enclosed in a cabinet. Pressure waves generated by the active driver are transmitted through the flexible tubing to the passive paddles, causing the paddle's flexible membrane to vibrate on the skull. The paddle actuators were configured to transmit a four-cycle pressure wave at 45 Hz in synchronization with the
MRE imaging sequence. The input excitation amplitude and waveform were quantified by accelerometers attached to the exterior bowl (opposite the paddle-subject contact surface) of each paddle. A typical MRE image acquisition acceleration time history is shown in Fig. 2. The custom gradient-recalled echo MRI pulse sequence contained 45 Hz oscillating magnetic field gradients to encode tissue displacements in the direction of the gradient at the same frequency. A single 3 mm thick trans-axial image slice was acquired with TR: 133.3 ms, TE: 27.5 ms, flip angle: 25˚, and in-plane resolution of 3 mm2. The imaging sequence was repeated three times to record all displacement components (u, v, and w) relative to a laboratory coordinate system.

Fig. 1 Experimental setup of acoustic paddle actuators near the left and right temporal lobes (Coban™ not shown). Note location of accelerometer shown on right paddle

Fig. 2 A half-second time history window of the accelerations produced by in situ paddle actuators; three wave trains of 45 Hz four-cycle acceleration shown
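The encoding underlying this sequence is the standard phase-contrast relation, written out here for the reader (it is not given explicitly in the paper): for a motion-encoding gradient \(\mathbf{G}(t) = \mathbf{G}_0\cos(\omega t)\) applied for N periods \(T = 2\pi/\omega\), and tissue displacement \(\mathbf{u}(\mathbf{r},t) = \mathbf{u}_0(\mathbf{r})\cos(\omega t + \theta)\), the accumulated image phase is

\[
\phi(\mathbf{r}) = \gamma \int_0^{NT} \mathbf{G}(t)\cdot\mathbf{u}(\mathbf{r},t)\,dt
= \frac{\gamma N T}{2}\,\mathbf{G}_0\cdot\mathbf{u}_0(\mathbf{r})\cos\theta(\mathbf{r}),
\]

with \(\gamma\) the gyromagnetic ratio. The phase is thus directly proportional to the displacement component along the gradient, which is why repeating the acquisition with the gradient along each axis recovers u, v, and w.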
III. RESULTS

Experimental data obtained from this study are presented in Fig. 3. Propagating displacement waves were observed with wavelengths of 4–5 cm and propagation speeds of 2–3 m/s. These wavelengths and speeds are consistent with shear motion in a viscoelastic medium with a storage modulus of 3–5 kPa. Dissipation is apparent from the decay in amplitude as the waves propagate into the interior of the brain. Reflection from interior boundaries (for example, at the falx cerebri between the brain hemispheres) is also apparent. Interestingly, while the excitation is almost perfectly symmetric, the brain's response at this frequency is anti-symmetric (Fig. 3). These observations are especially interesting because they are consistent among all four subjects studied to date.

Fig. 3 MRE images of relative x-displacement (left-right) during wave propagation in brain tissue while the skull is subjected to a 45 Hz, 4-cycle acoustic wave train. Data from four human subjects are shown (one per row). Temporal displacement images are shown at four phases (Φi) in the excitation cycle; the rightmost image in each row shows the anatomical trans-axial slice plane
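As a quick consistency check (not part of the paper's analysis), treating the observed motion as plane shear waves in tissue of density near that of water gives

\[
c = \lambda f \approx (0.045\ \mathrm{m})(45\ \mathrm{Hz}) \approx 2.0\ \mathrm{m/s},
\qquad
G' \approx \rho c^{2} \approx (1000\ \mathrm{kg/m^{3}})(2.0\ \mathrm{m/s})^{2} \approx 4\ \mathrm{kPa},
\]

which falls within the 3–5 kPa storage-modulus range quoted above.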
IV. CONCLUSIONS

Simultaneous excitation of the skull at two locations with acoustic paddle actuators leads to propagation of shear waves in brain tissue. The magnitude and phase of these oscillations can be used to validate computer simulations of blast-induced TBI, and to illuminate fundamental mechanical properties of the skull, brain, and associated intracranial anatomy. The current results are limited to 45 Hz excitation. Higher frequencies would provide shorter wavelengths and higher spatial resolution, but wave dissipation is more pronounced. Future work will focus on the frequency dependence of the brain's response and on combining MRE with other imaging modalities to illuminate the effects of anatomy and tissue type on wave propagation.
ACKNOWLEDGMENT

Financial support was provided by NIH R01 NS055951. The authors wish to thank Dr. Agus Priatna of Siemens Medical Solutions for valuable discussions and technical assistance.
REFERENCES
1. Centers for Disease Control and Prevention, Mass Casualty Events http://www.bt.cdc.gov/masscasualties/braininjuriespro.asp 2. Smith D, Meaney D, Shull W (2003) Diffuse axonal injury in head trauma. J Head Trauma Rehabil. 18(4):307-316. 3. Bayly P, Cohen T, Leister E et al. (2005) Deformation of the human brain induced by mild acceleration. J Neurotrauma 22(8):845-856. 4. Muthupillai R, Lomas D, Rossman P et al. (1995) Magnetic resonance elastography by direct visualization of propagating acoustic strain waves. Science 269:1854–1857.
Author: Erik H. Clayton
Institute: Washington University in St. Louis
Street: 1 Brookings Drive
City: Saint Louis
Country: USA
Email: [email protected]
Cavitation as a Possible Traumatic Brain Injury (TBI) Damage Mechanism

Andrew Wardlaw and Jack Goeller

Advanced Technology and Research Corporation, Columbia, MD 21046, USA

Abstract–– Cavitation has been proposed as a damage mechanism for traumatic brain injury. This paper uses simulation of simplified head models to determine the plausibility of cranial cavitation during blast events. Of particular interest is the CSF, which is treated with a cavitation model developed and validated for water. Ellipsoid as well as 3-D head models are considered, consisting of a skull, brain matter, and CSF. Simulations, conducted with a coupled fluid-structure hydrocode, suggest that cranial cavitation will occur during blast events, particularly at the contrecoup site.

Keywords–– TBI, cavitation.
I. INTRODUCTION

Most studies of TBI are concerned with impact or head acceleration; very few deal with trauma from blast. Blast-induced injuries can result from intracranial shock waves that produce regions of extreme pressure without high acceleration. In this paper we focus on regions of tension that may, particularly in the CSF, cause cavitation. Cavitation from negative pressure at the contrecoup site was hypothesized as early as 1948 by Ward [1] and has since been investigated by Gross [2], Suh [3], Hickling and Wenner [4], Engin and Akkas [5], and Lubock and Goldsmith [6], to name but a few. Simple experiments with fluid-filled containers have lent credence to the hypothesis that cavitation may accompany shock loading of the head. More recently, blast loading of the head has been simulated by Ziejewski et al. [7], Chafi et al. [8], and Moss, King, and Blackman [9]. Generally, these works mention cavitation but are more concerned with the overall blast effects. This paper addresses the hypothesis that cavitation occurs during blast loading of the head. Cavitation, at least in CSF or brain matter, is poorly understood, and it is unknown under what conditions these materials cavitate. However, it can be supposed that regions of tension are required. This paper seeks to determine, through hydrocode simulations with simplified models, whether such regions occur. Previously developed cavitation models for water [10] are used to estimate the occurrence of cavitation in the CSF.
II. METHODOLOGY

A challenge in simulating blast loading of the head is accurately imposing the shock load on the head. This is accomplished with the DYSMAS code [11], a coupled fluid-structure code. The fluid and structure components run simultaneously and exchange node and pressure data after each computational step. This allows the air and the CSF to be modeled in an Euler framework and the skull and brain matter to be simulated using a Lagrange approach. Material properties used in the simulations are given in Table 1.

Table 1 Material Parameters

Material   Model          Density (g/cc)   Coefficients
Air        Gamma law      0.0013           γ = 1.4
CSF        Polynomial     1                See [10]
Skull      Elastic        1.4              E = 6.55 GPa, ν = 0.22
Tissue     Viscoelastic   1.04             G∞ = 7.0 kPa, G0 = 40 kPa, β = 400/s, K = 2.19 GPa
Cavitation induced by a blast to the head is assumed to be manifested as bubbly water (i.e., bulk cavitation), with the presence of bubbles allowing water to expand rapidly in regions of tension. Cavitation is modeled by imposing a pressure floor, which similarly allows water to expand quickly without any restraining tension. This model has been successfully used to treat internal and external cavitation in other scenarios [10].
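A minimal sketch of a pressure-cutoff model of this kind is shown below; the −1 bar threshold echoes the "P < −1 bar" legend of Fig. 3 but is an assumption here, as are the array shapes and function names (this is not the DYSMAS implementation or the exact model of [10]).

```python
import numpy as np

# Hypothetical sketch of a pressure-cutoff ("pressure floor") cavitation
# model: cells whose equation-of-state pressure falls below the threshold
# are clamped to the floor and flagged as cavitated.

P_CAV = -1.0e5  # cavitation threshold in Pa (assumed, about -1 bar)

def apply_pressure_floor(p_eos):
    """p_eos: ndarray of equation-of-state pressures (Pa).
    Returns (clamped pressure, boolean mask of cavitated cells)."""
    cavitated = p_eos < P_CAV
    # Clamping to the floor lets the fluid expand in tension regions
    # without sustaining tension, mimicking bulk (bubbly) cavitation.
    return np.where(cavitated, P_CAV, p_eos), cavitated

def cavitated_volume_cc(cavitated, cell_volume_cc):
    # Diagnostic analogous to Figs. 3, 5, and 8: total cavitated volume.
    return cavitated.sum() * cell_volume_cc
```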
III. ELLIPSOID HEAD MODEL

The simplest head model considered is an ellipsoid, which contains a CSF layer next to the skull and interior brain matter, as depicted in Fig. 1. In the simulation shown in Fig. 2, an air shock with an over-pressure of 25 psi moves along the major axis of the ellipse. When it hits the front of the skull, two types of internal shocks are transmitted: one through the CSF layer and brain matter and a second through the skull.

Fig. 1 Ellipsoid head model (19.2 cm major axis, 13.7 cm minor axis, 0.6 cm skull thickness; viscoelastic tissue interior)
Fig. 2 Pressure contours for the ellipsoid head model at A) 0.139 ms, B) 0.249 ms, C) 0.578 ms, and D) 1.048 ms
Although the path around the skull is longer, the sound speed is higher in this material, and both shocks arrive at the contrecoup site at about the same time. This presses the skull outwards, initiating cavitation, which is visible as the white region in Fig. 2B. Meanwhile, the air shock, traveling more slowly than either of the internal shocks, progresses towards the contrecoup site, compressing the skull laterally as it passes. When it reaches the contrecoup site (Fig. 2C), it generates a high pressure and an axial force counter to that of the initial shock strike. The coup and contrecoup loads cause the skull to vibrate and change the length of its major axis. The cavitated volume history from the simulation shown in Fig. 2 is presented in Fig. 3 and has a periodic component that correlates well with the vibration of the ellipsoid.

Fig. 3 Ellipsoid cavitated volume (CSF cavitated fluid and brain volume with P < −1 bar; skull, CSF layer, viscoelastic brain, and CSF ventricle model)

Fig. 4 shows the variation in length of the ellipsoid along its major axis. Peaks in the cavitated volume are marked, and it is evident that these points coincide with times at which the ellipsoid starts to expand along its major axis.

Fig. 4 Change in length of the ellipsoid major axis
The role of skull deformation in generating cavitation can be demonstrated by comparing rigid- and deformable-skull simulations. Because of computational constraints, the model considered consists of a water-filled ellipsoid. The deformable results closely resemble those of Fig. 2. For the rigid skull, the initial shock strike cannot deform the ellipsoid; it can only translate it. At the coup site this generates a shock that moves through the fluid, while at the contrecoup site it causes cavitation. Following the passage of the shock, the skull and interior water travel at a similar velocity, generating only minor cavitation, as shown in Fig. 5.
Fig. 5 Rigid and deformable ellipsoid cavitated volume (water-filled skulls)

IV. 3-D HEAD MODEL

A more realistic head model is shown in Fig. 6. Here the magenta is the skull, the blue is brain matter, and the green accounts for the general contours of the face. The region between the blue and magenta is assumed to be filled with CSF, which provides a long, thin layer along the brain center plane. The region between the green and magenta is excluded from the computation.

Fig. 6 3-D head model

The development of the flow field in the center plane of this model is shown in Fig. 7. Here the cavitated regions are marked in white. Similarities between this and the ellipsoid model are evident: cavitation initially occurs at the contrecoup site and is extinguished when the blast shock arrives at the contrecoup. The cavitated volume history is shown in Fig. 8 and features similar initial cavitation levels. However, the periodic component evident in the ellipsoidal model is missing.

Fig. 7 Shock interaction with the 3-D head model at 0.278 ms, 0.529 ms, and 1.145 ms
V. SUMMARY AND CONCLUSIONS
This paper examines the plausibility of cavitation during blast loading of the head. Numerical simulations using the coupled DYSMAS hydrocode were used to study this question and to predict the formation of tension regions susceptible to cavitation. Cavitation of the CSF was treated using a water cavitation model. Blast loading was applied to an ellipsoid and a 3-D head model, both of which consisted of a skull, CSF layer, and brain matter. In all simulations, regions of tension were formed that would likely have resulted in cavitation. In both models, the loading consisted of a coup shock load followed by a reverse, contrecoup load generated when the blast shock closes on the end of the model. Cavitation initially forms at the contrecoup site in response to the initial blast load. The reverse load tended to suppress cavitation in this location, although cavitation intermittently occurred at other locations throughout the simulation. The ellipsoid model features a periodic component to cavitation formation and collapse that was traced to skull vibration, a feature absent in the 3-D model. Additional numerical experiments indicate that skull deformation significantly impacts the creation of cavitated regions. A rigid skull can only translate in response to a blast, which produces a short-lived cavitated
region at the contrecoup site. Once the skull and its internal components come to velocity equilibrium, cavitation ceases.
Fig. 8 3-D head model cavitated volume history
ACKNOWLEDGEMENT The authors would like to thank Dr. Judah Goldwasser, Program Manager DARPA/DSO for supporting this work. The authors would also like to acknowledge the guidance provided by Drs. Geoffrey Ling, DARPA, and Bruce Lamattina, ARO.
REFERENCES 1. Ward, J.W., L. H. Montgomery, et al., (1948) “A Mechanism of Concussion: A Theory.” Science 107: 349-353.
2. Gross, A.G. (1958) "A new theory on the dynamics of brain concussion and brain injury." J. Neurosurg 15: 548-651. 3. Suh, C.C., J. W. Yang, et al. (1972) "Rarefaction of liquids in a spherical shell due to local radial loads with application to brain damage." J. Biomech. 5:181-189. 4. Hickling, R. and M. L. Wenner (1973) "Mathematical model of a head subjected to an axisymmetric impact." J. Biomech. 6:115-132. 5. Engin, A.E. and N. Akkas (1978) "Application of a fluid-filled spherical sandwich shell as a biodynamic head injury model for primates." Aviat. Space & Environ. Med 49(1): 120-124. 6. Lubock, P. and W. Goldsmith (1980) "Experimental cavitation studies in a model head-neck system." J. Biomech. 13:1041-1052. 7. Ziejewski, M., G. Karami and I. Akhatov (2007) "Selected Biomechanical Issues of Brain Injury Caused by Blasts", Brain Injury Professional 4(1): 10-15. 8. Sotudeh-Chafi, M., G. Karami and M. Ziejewski (2007) "Simulation of Blast-Head Interactions to Study Traumatic Brain Injury," Proceedings of IMECE2007, ICEME2007-41629, Nov. 11-15, 2007. 9. Moss, W. C., King, M. J., and Blackman, E. G. (2009) "Skull Flexure from Blast Waves: A Mechanism for Brain Injury with Implications for Helmet Design", J. Acoust. Soc. Am 125(4): 2650 (April 2009). 10. Wardlaw, A., Jr., Ilamin, R. and Harris, G. (2002) "Cavitation Modeling", Proceedings of the 73rd Shock and Vibration Symposium, Nov. 2002. 11. Wardlaw, A. B., Jr., Luton, J. A., Renzi, J. R., Kiddy, K. C., Mckeown, R. M. (2003) "The Gemini Euler Solver for the simulation of underwater explosions", NSWC IH TR 2500, Nov. 2003.
Authors: Andrew Wardlaw and Jack Goeller
Institute: Advanced Technology and Research Corp.
Street: 6650 Eli Whitney Dr.
City: Columbia, MD 21046
Country: USA
Email: [email protected]
“The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.”
Prognostic Ability of Diffusion Tensor Imaging Parameters among Severely Injured Traumatic Brain Injury Patients

Joshua F. Betz1,2, Jiachen Zhuo1, Anindya Roy2, Kathirkamanthan Shanmuganathan1, and Rao P. Gullapalli1

1 Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD
2 Department of Mathematics and Statistics, University of Maryland Baltimore County, Catonsville, MD
Abstract— Diffuse axonal injury (DAI) represents the most common primary intra-axial form of traumatic brain injury (TBI), comprising approximately half of all such injuries. Patients presenting with DAI follow a highly variable clinical course, with initial status frequently discrepant from long-term neurological outcome. Diffusion tensor imaging (DTI) is sensitive to disruptions in neuronal structure that may not be appreciated on CT or conventional MRI and may serve as an important prognostic imaging marker. In this study, we retrospectively evaluated data from 84 patients to determine whether whole-brain DTI parameters (axial diffusivity λ1, radial diffusivity λPERP, apparent diffusion coefficient ADC, and fractional anisotropy FA) are predictive of clinical outcome as determined by discharge Glasgow Coma Scale (GCS). The first group consisted of 52 severely injured patients (GCS ≤ 8) who either died (n=10), had a poor outcome (n=12), or had a good outcome (n=27). The second group was composed of mildly injured patients (GCS ≥ 14 during the entire hospitalization) who served as the reference group. Whole-brain values of the DTI parameters were measured for each patient, and the measures were compared across groups using non-parametric statistics. Significant differences were found in ADC, λ1, and λPERP between the three outcome groups. Further, these measures were shown to be significantly related to GCS at scan. Using ordinal logistic regression models adjusted for age, gender, and admission GCS, DTI parameters were shown to significantly predict outcomes in severe TBI patients (death, poor outcome, good outcome, or mild injury). Evaluation of TBI patients may be improved using DTI measures, as they correlate well with clinical measures, reflect the severity of injury, and can predict outcome. This method may provide a valuable independent tool to predict clinical outcomes in DAI.

Keywords— magnetic resonance imaging, diffusion tensor imaging, traumatic brain injury, outcome.
I. INTRODUCTION

Traumatic brain injury (TBI) is a physical injury to the brain that can affect anyone at any time of life and includes both penetrating (e.g., bullet wounds) and non-penetrating head wounds (e.g., automobile accidents, falls, sports injuries, blasts from explosive devices). The Centers for Disease Control and Prevention (CDC) estimates that each year approximately 1.5 million Americans survive TBI, among
whom only 230,000 are hospitalized [1, 2]. Additionally, approximately 80,000 new TBI survivors have residual deficits related to their injury, which can lead to permanent, long-term disabilities. TBI carries a significant financial burden, with costs greater than $56.3 billion per year due to health care and loss of work [3-5]. The CDC's National Center for Injury Prevention and Control estimates that 5.3 million U.S. citizens (2% of the population) are living with disability as a result of a traumatic brain injury [6]. These types of injuries involve impacts that cause sudden acceleration-deceleration forces, leading to linear, rotational, or angular shearing injuries at the gray-white matter boundaries [8-10]. The tissue boundaries are especially susceptible because tissues with different physical properties experience different loads, and this difference causes shearing injury of the nerve fibers and tiny vessels along the boundary, tangential to the plane of shift. Such shear mechanisms may cause strain injury to the crossing white matter fibers, and they are also responsible for the ruptured veins typically seen between the skull bone and the brain, leading to subdural hematoma. The extent of this can be quite variable and may also be influenced by the dural folds, the internal structure of the skull, and other factors [11-16]. The injuries may range from primary axotomy, where axonal connectivity is severed, to a minor disruption of the axoplasmic membrane. The effects of such physical insults can be immediate or can manifest over several days following the initial injury. Similarly, the sequelae of the injury can be quite varied and may depend on the severity and the location of the injury, which may lead to secondary injury of the brain. Secondary injury to the brain includes brain swelling, ischemia, cerebral hypotension, edema, elevated intracranial pressure, and altered metabolism [17]. Recently there has been significant interest in the use of diffusion tensor imaging (DTI) for the evaluation of TBI patients [18, 19]. The goal of the present study was to determine whether DTI markers such as axial diffusivity (λ1), radial diffusivity (λPERP), apparent diffusion coefficient (ADC), and fractional anisotropy (FA) provide additional sensitivity beyond the clinical measure Glasgow Coma Scale (GCS) in predicting the outcome of severe TBI patients.
II. MATERIALS AND METHODS

Subjects: The study was approved by the University of Maryland School of Medicine Institutional Review Board. Eighty patients (55 male; age 38.5±17.7 years, range 18–94) who had a closed head injury and who received standard-of-care MRI that included DTI were selected retrospectively for this study. Patients with a GCS of 8 or less at the time of MRI comprised the severe TBI group (n=49), and those with a GCS of 14 or above for the entire duration of their hospitalization formed the mild TBI reference group (n=31). The outcome of each of the severely injured patients was classified as death (n=10), severe GCS at time of discharge (poor outcome; n=12), mild-moderate GCS at time of discharge (good outcome; n=27), or mild TBI (the reference group). Patients were imaged 4.6±9.4 days post admission.

Imaging: All imaging was performed on a 1.5 T Siemens Avanto scanner using a 12-channel head-neck coil. Each of the patients received conventional MRI and DTI. Diffusion tensor images were obtained using a 128×128 matrix over a 23 cm² FOV, with contiguous 2 mm thick slices (3 averages; TE/TR of 95/11200 ms; parallel imaging (GRAPPA) with a reduction factor of 2). A total of 68 axial images were acquired to cover the brain from the apex to the skull base. Diffusion gradients were sensitized in 12 non-collinear directions at an effective b-value of 1000 s/mm².

Image Processing and Analysis: DTI images were exported offline and processed using FDT (FMRIB Diffusion Toolbox, Analysis Group, FMRIB, Oxford, UK). The FA maps of all patients were segmented into gray matter, white matter, and CSF maps using SPM5 (Wellcome Department of Imaging Sciences, University College London, UK). The segmented white-matter images were used to obtain whole-brain white matter ADC, FA, axial diffusivity (λ1), and radial diffusivity (λPERP) values. Summary statistics such as the mean, peak, standard deviation (SD), and coefficient of variation (CV) for each of the above parameters were calculated using MATLAB (Mathworks, Natick, MA). These summary measures were used as predictors in subsequent statistical models.

Statistical Analysis: Nonparametric correlation coefficients (Kendall's tau) were used to examine the relationship between DTI parameters and GCS at admission and at the MRI appointment. Nonparametric one-way ANOVAs were used to compare the DTI parameters across outcome categories. Ordinal logistic regression models adjusted for age, gender, and admission GCS were used to determine whether DTI parameters significantly improved prediction of patient outcome status among severe TBI patients. Models were chosen by best-subset selection using the score criterion. Model improvement was determined by significant reduction of model deviance. Statistical analysis was conducted in R (R Foundation for Statistical Computing, Vienna, Austria) and SAS 9.2 for Windows XP (SAS Corporation, Cary, NC). Statistical significance was judged at p<0.05.
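A minimal sketch of the whole-brain summary computation described above, written in Python rather than the MATLAB used in the study; the eigenvalue-map names, shapes, and white-matter mask are illustrative assumptions, and the histogram-peak ("Pk") statistic is omitted.

```python
import numpy as np

# Hypothetical sketch: whole-brain DTI summary measures from the
# diffusion-tensor eigenvalue maps lam1 >= lam2 >= lam3, restricted
# to a boolean white-matter mask.

def dti_summaries(lam1, lam2, lam3, wm_mask):
    axial = lam1                          # axial diffusivity, lambda_1
    radial = 0.5 * (lam2 + lam3)          # radial diffusivity, lambda_PERP
    adc = (lam1 + lam2 + lam3) / 3.0      # apparent diffusion coefficient
    # Fractional anisotropy from eigenvalue deviations about the mean.
    fa = np.sqrt(1.5 * ((lam1 - adc) ** 2 + (lam2 - adc) ** 2
                        + (lam3 - adc) ** 2)
                 / (lam1 ** 2 + lam2 ** 2 + lam3 ** 2))

    summaries = {}
    for name, img in (("lam1", axial), ("lamPERP", radial),
                      ("ADC", adc), ("FA", fa)):
        vals = img[wm_mask]
        mean, sd = vals.mean(), vals.std()
        summaries[name] = {"Avg": mean, "SD": sd, "CV": sd / mean}
    return summaries
```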
III. RESULTS AND DISCUSSION

Table 1 lists the DTI parameters for each of the four groups used in this study. The distributions of all whole-brain summary measures (average, peak value, standard deviation, and coefficient of variation for each DTI parameter) differed significantly among the outcome groups. The average value of λ1, the peak value of λPERP, and the average value of ADC were positively correlated with the GCS obtained at the time of the MRI. The CV of λ1 and λPERP, and the standard deviation and CV of ADC, were negatively correlated with scan GCS. The variation in the average, standard deviation, and CV of λ1 is shown in Figure 1. Similar trends in correlations were seen between admission GCS and the DTI parameters.

Table 1 Average DTI parameter statistics by group; values are mean (SD)

Group  λ1 Avg         λ1 Pk          λ1 SD          λ1 CV
DEAD   0.972 (0.19)   0.932 (0.21)   0.270 (0.05)   0.293 (0.10)
POOR   0.990 (0.05)   0.985 (0.03)   0.269 (0.05)   0.272 (0.05)
GOOD   1.074 (0.041)  1.026 (0.04)   0.240 (0.02)   0.224 (0.02)
CONT   1.080 (0.026)  1.007 (0.03)   0.236 (0.02)   0.219 (0.01)

Group  λPERP Avg      λPERP Pk       λPERP SD       λPERP CV
DEAD   0.529 (0.13)   0.509 (0.12)   0.188 (0.06)   0.368 (0.10)
POOR   0.526 (0.04)   0.533 (0.03)   0.174 (0.03)   0.332 (0.06)
GOOD   0.567 (0.04)   0.567 (0.04)   0.164 (0.03)   0.292 (0.07)
CONT   0.551 (0.02)   0.559 (0.02)   0.152 (0.02)   0.275 (0.03)

Group  FA Avg         FA Pk          FA SD          FA CV
DEAD   0.410 (0.05)   0.359 (0.05)   0.142 (0.01)   0.348 (0.04)
POOR   0.416 (0.03)   0.357 (0.02)   0.139 (0.01)   0.333 (0.02)
GOOD   0.412 (0.03)   0.366 (0.04)   0.133 (0.01)   0.322 (0.02)
CONT   0.425 (0.01)   0.370 (0.02)   0.135 (0.01)   0.318 (0.01)

Group  ADC Avg        ADC Pk         ADC SD         ADC CV
DEAD   0.675 (0.15)   0.665 (0.16)   0.199 (0.06)   0.307 (0.10)
POOR   0.680 (0.04)   0.705 (0.03)   0.186 (0.04)   0.275 (0.06)
GOOD   0.736 (0.04)   0.731 (0.03)   0.165 (0.3)    0.224 (0.04)
CONT   0.727 (0.02)   0.716 (0.02)   0.15 (0.02)    0.206 (0.02)
In Ordinal Logistic Models predicting outcome category, addition of any λ1 summary measure (average, peak, standard deviation, or coefficient of variation) followed by the CV of ADC had the most effect in significantly reducing the model deviance. Of all the DTI parameters, λPERP had the least effect on the reduction of the model deviance.
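A minimal sketch of this model comparison, assuming statsmodels' ordinal regression in Python rather than the R/SAS procedures used in the study; the column names and the synthetic data frame are illustrative assumptions, not the study data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical sketch: ordered outcome (death < poor < good < mild)
# regressed on age, gender, admission GCS, and one DTI summary measure.
rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "outcome": pd.Categorical(rng.choice(["death", "poor", "good", "mild"], n),
                              categories=["death", "poor", "good", "mild"],
                              ordered=True),
    "age": rng.uniform(18, 94, n),
    "male": rng.integers(0, 2, n),
    "gcs_admit": rng.integers(3, 16, n),
    "lam1_cv": rng.uniform(0.2, 0.35, n),  # a DTI summary measure
})

base = OrderedModel(df["outcome"], df[["age", "male", "gcs_admit"]],
                    distr="logit").fit(method="bfgs", disp=False)
full = OrderedModel(df["outcome"],
                    df[["age", "male", "gcs_admit", "lam1_cv"]],
                    distr="logit").fit(method="bfgs", disp=False)

# Improvement is judged by the reduction in deviance (-2 log-likelihood).
print("deviance reduction:", 2.0 * (full.llf - base.llf))
```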
Nonparametric correlations show that the DTI parameters are strongly related to a commonly used clinical measure, the Glasgow Coma Scale. Correlations were strongest with the Glasgow Coma Scale at the time of the MRI. Receiver operating characteristic analysis showed that the area under the curve improved from 0.74 to 0.88 (Figure 3) when information from DTI was incorporated into the prediction of good outcomes vs. poor outcomes. This indicates that the addition of DTI significantly improves the sensitivity and specificity in predicting outcomes compared to GCS alone.

Fig. 1 λ1 summary measures by outcome group. As injury severity decreases, the average values tended to increase and the variability tended to decrease, leading to a decrease in the coefficient of variation. Similar relationships were seen in all DTI parameters

Fig. 2 Distribution of λ1 Avg by outcome group. The distribution of the DTI parameter values differed significantly across the different outcome groups, showing differences in central tendency and variability as a function of injury severity

Fig. 3 ROC curves for outcome prediction. Addition of DTI parameters to logistic models resulted in significantly higher area under the curve (AUC) and significantly lower model deviance when predicting poor outcome status
A decrease in λ1 and λPERP with the severity of injury indicates that the mobility of water protons is decreased in both the longitudinal and radial directions along most of the axonal fibers at the whole-brain level. While this confirms the reduction in ADC, the decrease in mobility occurs in different directions in such a way that it does not alter the fractional anisotropy. It should be noted that although FA does not decrease significantly with outcome severity, the CV of FA does increase, suggesting a general disruption of water homeostasis resulting from the breakdown of the Na-K pumps that regulate water within cells. Further, MR DTI parameters are highly dependent on intra- and extracellular microenvironmental factors, such as fluid viscosity and the architecture of the cytoskeleton formed by the axon and myelin. High viscosity from cell debris and elevated lipid content within areas of necrosis act as barriers to diffusion and can decrease the λ1 and λPERP values. Intracellular barriers from the collapse of the cytoskeleton and the breakdown of fast transport, which causes organelles to accumulate in the paranodal region, can also hinder diffusion and decrease these parameters. Increasing variability in the DTI of the white matter as a function of injury severity is likely due to the more pervasive nature of severe traumatic brain injury. The current study is limited by the use of whole-brain segmentation, which may not converge in the most severely injured patients. This inherently biases against finding a difference between patients who will succumb to their injuries and those who survive, as those with the most severe
injuries will be removed from the sample by the inability to segment their DTI images. It is important to note that, even in the face of this limitation, DTI markers were sensitive enough to provide information on patient outcomes. Further studies involving region-of-interest (ROI) analyses of DTI measures are underway, which may extend the usefulness of DTI to patients whose injuries prohibit the use of segmentation algorithms. The usefulness of DTI as a prognostic marker may be further extended by longitudinal studies relating DTI findings to long-term biological, psychological, and social outcomes.
IV. CONCLUSIONS

Traumatic brain injury is a prevalent factor in morbidity and mortality worldwide, with significant medical, psychological, social, and economic implications. Existing clinical measures often cannot provide meaningful prognostic information in the acute stage of TBI. Diffusion tensor MRI appears to provide several biomarkers that reflect the physiological condition of the white matter; these correlate with existing clinical measures yet provide more information about a patient's discharge status. Furthermore, these markers can often be obtained within a week of injury and are not sensitive to the wakefulness of the patient. Since injury severity was related to decreases in the means of the DTI measures and increases in their whole-brain variability, the coefficient of variation appears to be a parsimonious way to summarize these changes.
ACKNOWLEDGMENT This study was supported by the Department of Defense (award #W81XWH-08-1-0725, PT075827).
REFERENCES
1. Sosin DM, Sniezek JE, Thurman DJ. Incidence of mild and moderate brain injury in the United States, 1991. Brain Injury 1996;10:47-54. 2. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Unpublished analysis of data from the 1994 National Hospital Discharge Survey, 1998. 3. Sosin DM, Sniezek JE, Waxweiler RJ. Trends in death associated with traumatic brain injury, 1979 through 1992. JAMA 1995;273:1778-80. 4. Max W, MacKenzie EJ, Rice DP. Head injuries: costs and consequences. J Head Trauma Rehabil 1991;6:76-91. 5. Kraus JF. Epidemiology of head injury. In: Cooper PR, editor. Head Injury, 3rd ed. Baltimore: Williams and Wilkins, 1993;1-25. 6. From the CDC website, http://www.cdc.gov. 7. Thurman D, Alverson C, Dunn K, Guerrero J, Sniezek J (1999) Traumatic brain injury in the United States: a public health perspective. J Head Trauma Rehabil 14(6):602–15. 8. Ommaya AK, Gennarelli TA (1974) Cerebral concussion and traumatic unconsciousness. Correlation of experimental and clinical observations of blunt head injuries. Brain 97:633-654. 9. Ommaya AK, Goldsmith W, Thibault L (2002) Biomechanics and neuropathology of adult and pediatric head injury. Br J Neurosurg 16:220-242. 10. Povlishock J, Becker DP, Cheng CL, Vaughan GW (1983) Axonal change in minor head injury. J Neuropathol Exp Neurol 42:225-242. 11. Elson LM, Ward CC (1994) Mechanisms and pathophysiology of mild head injury. Semin Neurol 14(1):8-18. 12. McCrory P, Johnston KM, Mohtadi NG, Meeuwisse W (2001) Evidence-based review of sport-related concussion: basic science. Clin J Sport Med 11(3):160-5. 13. Johnston KM, Ptito A, Chankowsky J, Chen JK (2001) New frontiers in diagnostic imaging in concussive head injury. Clin J Sport Med 11(3):166-75. 14. Graham DI (1996) Neuropathology of Head Injury, in Neurotrauma, Narayan RK, Wilberger JE and Povlishock JT, Eds., W.B. Saunders, Philadelphia, pp. 43-60. 15. Graham DI, Gennarelli TA, and McIntosh TK (2002) Trauma, in Greenfield's Neuropathology, 7th ed., Graham DJ and Lantos PL, Eds., Arnold Press, London. 16. Gentry LR (1994) Imaging of closed head injury. Radiology 191(1):1-17. 17. DeKosky ST, Kochanek PM, Clark RS, Ciallella JR, Dixon CE (1998) Secondary injury after head trauma: subacute and long-term mechanisms. Semin Clin Neuropsychiatry 3(3):176-185. 18. Shanmuganathan K, Gullapalli RP, Mirvis SE, Roys S, Murthy P (2004) Whole-brain apparent diffusion coefficient in traumatic brain injury: correlation with Glasgow Coma Scale score. Am J Neuroradiol 25:539–544. 19. Kumar R, Husain M, Gupta RK, Hasan KM, Haris M, Agarwal AK, Pandey CM, Narayana PA (2008) Serial changes in the white matter diffusion tensor imaging metrics in moderate traumatic brain injury and correlation with neuro-cognitive function. J Neurotrauma 26:581-495.
Hair Cell Regeneration in the Mammalian Ear, Is Gene Therapy the Answer?

Matthew W. Kelley

Section on Developmental Neuroscience, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland

Abstract–– In vertebrates the sensation of hearing is dependent on the presence of mechanosensory hair cells located within the coiled cochlea of the inner ear. The apical surface of each hair cell contains a specialized stereociliary bundle that acts to detect sound-induced pressure waves. In mammals, hair cells are generated only during a finite period in embryogenesis. Therefore, hair cell loss, as a result of either genetic or environmental factors, leads to a permanent loss in hearing acuity. Recent results have identified some of the genes that are instructive for the formation of hair cells. In particular, forced expression of the basic helix-loop-helix transcription factor Atoh1 was found to be sufficient to induce hair cell formation in different cell types within the embryonic inner ear. These findings suggested that expression of Atoh1 within a damaged mammalian inner ear might be sufficient to induce the formation of new hair cells. In fact, studies in mature animals that can spontaneously regenerate hair cells, such as birds, indicate that re-expression of Atoh1 is a key step in this process. In contrast, existing data indicate that Atoh1 is never re-expressed in a mature mammalian ear, regardless of the nature or degree of damage. Based on these observations, it has been suggested that induction of hair cell regeneration might be accomplished through gene therapy. The inner ear is well suited for gene transfer-based treatments, as it is a relatively enclosed space with limited communication with the rest of the body. Preliminary results in animals using both adeno- and adeno-associated virus-based vectors have demonstrated that inner ear tissues can be efficiently transfected. Moreover, in one publication, forced expression of Atoh1 resulted in some degree of recovery in deafened animals. These results, while very promising, will require replication by additional laboratories as well as further study.

Cochlear Hair Cells
In all mammals, the sense of hearing is initiated in the coiled cochlea located in the ventral portion of the inner ear [1]. Extending along the length of the cochlear spiral is a specialized sensory epithelium referred to as the organ of Corti. This remarkable structure contains a number of unique cell types, including two distinct types of mechanosensory hair cells. As their name implies, these cells are exquisite mechanoreceptors that serve as the primary sensory cells in the auditory pathway. Airborne sound pressure waves are relayed through the outer and middle ears to produce a pressure wave that travels along the length of the cochlea. As these waves propagate along the cochlear spiral, groups of hair cells are stimulated
based on their position between the base and apex of the spiral. Each hair cell contains a group of modified microvilli referred to as a stereociliary bundle. All bundles are arranged in a staircase pattern, with the individual stereocilia increasing in length from one end to the other [2]. Sound-generated pressure waves deflect the stereociliary bundle, leading to the opening of an, as yet, unidentified transduction channel. Opening of these channels allows the passage of calcium and potassium into the bundle and from there into the cytoplasm of the hair cell. The influx of positive charge leads to a depolarization of the cell from a resting voltage of approximately -70 mV. Each hair cell acts as a tonic receptor that continuously releases glutamate at synapses between its basal surface and the dendrites of afferent spiral ganglion neurons. Depolarization of the cell increases the rate of neurotransmitter release, leading to an increase in the level of spiral ganglion activity.

Hearing Loss

Hearing loss is prevalent in developed nations as a result of voluntary and involuntary exposure to both chronic and acute noise. Recent studies have indicated that approximately 33% of individuals age 65 or older have a significant degree of hearing loss, and this number increases to greater than 50% in individuals older than 80. Perhaps more alarming, 15% of individuals between the ages of 20 and 69 are thought to have a significant hearing loss [3]. This number will probably increase as the "iPod generation" advances into middle age. In virtually all of these cases, the primary cause of hearing loss is the death of mechanosensory hair cells. As described above, hair cells act as the primary transducers of incoming sound waves, and so their loss leads to progressive decreases in sensitivity. For reasons that are not understood, in mammals once a hair cell is lost, it is not replaced. This has prompted ongoing research to identify factors that either act to inhibit or might promote hair cell regeneration.

Identification of Genes That Regulate Hair Cell Development
In order to identify genes that regulate hair cell formation, researchers have focused on two distinct systems: the developing mammalian inner ear and the inner ears of chickens, which, unlike mammals, can regenerate hair cells throughout life. In mammals, hair cells are only specified
during a finite period in embryogenesis [1]. Several lines of research led to the identification of the transcription factor Atoh1 as a potential key regulator of hair cell formation. Atoh1 is a member of the basic helix-loop-helix (bHLH) family of transcription factors, an ancient family that includes multiple genes in animals as diverse as humans, fruit flies, and nematode worms. In most cases, bHLH transcription factors act to regulate key events in the specification and differentiation of particular cell types [4]. Initial localization studies indicated that Atoh1 begins to be expressed in cells that, while not yet hair cells, will go on to develop as hair cells [5]. Moreover, targeted deletion of Atoh1 in a mouse model resulted in animals with no hair cells whatsoever [6]. It is important to note, however, that an absence of hair cells was not the only defect in these animals, a fact that must be considered in the future development of potential gene therapies. The previous results demonstrated that Atoh1 is necessary for hair cell formation, but in order to be considered as a candidate for gene therapy, Atoh1 must also be sufficient to induce hair cell formation. To test this hypothesis, several different laboratories used gene transfer techniques, including electroporation [7,8,9] and adenovirus-mediated transfection, to induce Atoh1 expression either in the developing organ of Corti or in other cells within the cochlear duct [10]. Initially, in vitro experiments were used to assess the effects of forced expression of Atoh1 in embryonic or early post-natal mouse cochleae. The results demonstrated that both cells within the developing organ of Corti [9] and, surprisingly, cells located within the cochlear duct in regions that would not develop as the organ of Corti [7,8] expressed markers consistent with hair cell formation in the presence of forced Atoh1 expression. As a next step, an adenoviral vector was used to force expression of Atoh1 in the cochlear duct of adult guinea pigs [10]. When the cochleae were analyzed several weeks after infection, cells that expressed some markers of hair cells and that contained structures that resembled rudimentary hair cells were observed. A more recent experiment took the important step of determining whether Atoh1-induced hair cells are functional. To accomplish this, Atoh1 was transfected into embryonic otocyst cells in vivo using a technique called in utero electroporation [11]. Once the animals were born, transfected hair cells were identified based on expression of a fluorescent marker, and electrophysiological techniques were used to assess the ability of these cells to respond to pressure waves. The results indicated that Atoh1-transfected hair cells have physiological properties that are comparable to endogenous hair cells, at least at early post-natal time points. Unfortunately, anatomical changes that occur during the
maturation of the inner ear made it impossible to assess the function of these hair cells in adult animals.

Atoh1 and Spontaneous Hair Cell Regeneration
The results described above strongly suggested that Atoh1 could be a good candidate for regenerative therapies. Further support for this hypothesis was obtained by studying the regulation of the avian version of Atoh1, referred to as catoh1, during hair cell regeneration in chickens. For reasons that are not understood, birds and all other non-mammalian vertebrates are able to regenerate hair cells [12]. Examination of the genes that are upregulated during active hair cell regeneration indicated that catoh1 expression is initiated in the early stages of a regenerative event and that expression is maintained in regenerated hair cells [13]. These results provided further evidence to suggest that Atoh1 is instructive for hair cell formation.

Atoh1 and Gene Therapy
Based on the results presented above, two separate laboratories have attempted to use adenoviral-based gene transfer of Atoh1 to elicit hair cell regeneration in either the cochlea or the vestibular system. While not discussed in this paper, vestibular structures within the inner ear also contain hair cells and also fail to regenerate after injury. In the first study, guinea pigs were deafened using a combination of aminoglycosides and ethacrynic acid. Previous results had demonstrated that this treatment leads to the death of most auditory hair cells [14]. Following this treatment, hearing was assessed in specific animals. In those with a significant hearing loss, an Atoh1-expressing adenovirus was administered to just one inner ear. The animals were allowed to recover for two months, and at that time their hearing was reassessed. Remarkably, many of the animals showed a significant improvement in hearing acuity in the infected ear when compared with either the pre-infection assessment or the assessment of the non-infected side [15]. An anatomical analysis of the ears from these animals suggested the possible presence of regenerated hair cells. However, it is important to note that the experimental design did not allow for a definitive identification of regenerated cells. Studies using vestibular epithelia have obtained similar results, including recovery of function and apparently regenerated hair cells [16]. But, as was the case in the studies of the auditory system, the experimental design did not allow for a definitive identification of new hair cells that had developed as a result of infection with the Atoh1 virus. The results of these studies have provided exciting new evidence indicating both the potential for gene therapy in the inner ear and the possibility that Atoh1 may be an
appropriate candidate gene. However, the development of this technology is just beginning. Important experiments still need to be performed to confirm that Atoh1 truly can induce new hair cells rather than simply protecting damaged hair cells. In addition, confirmation of the results in additional laboratories will be required to ensure that these techniques can be transferred to other locations and, potentially, model systems.
REFERENCES 1. Kelley MW. (2006) Regulation of cell fate in the sensory epithelia of the inner ear. Nat Rev Neurosci 7:837-849. 2. Raphael Y, Altschuler RA. (2003) Structure and innervation of the cochlea. Brain Res Bull 60:397-422. 3. Brigande JV, Heller S. (2009) Quo vadis, hair cell regeneration? Nat Neurosci 12:679-685. 4. Powell LM, Jarman AP. (2008) Context dependence of proneural bHLH proteins. Curr Opin Genet Dev 18:411-417. 5. Lanford PJ, Shailam R, Norton CR, Gridley T, Kelley MW. (2000) Expression of Math1 and HES5 in the cochleae of wildtype and Jag2 mutant mice. J Assoc Res Otolaryngol 1:161-171. 6. Bermingham NA, Hassan BA, Price SD, Vollrath MA, Ben-Arie N, Eatock RA, Bellen HJ, Lysakowski A, Zoghbi HY. (1999) Math1: an essential gene for the generation of inner ear hair cells. Science 284:1837-1841. 7. Zheng JL, Gao WQ. (2000) Overexpression of Math1 induces robust production of extra hair cells in postnatal rat inner ears. Nat Neurosci 3:580-586.
8. Woods C, Montcouquiol M, Kelley MW. (2004) Math1 regulates development of the sensory epithelium in the mammalian cochlea. Nat Neurosci 7:1310-1318. 9. Jones JM, Montcouquiol M, Dabdoub A, Woods C, Kelley MW. (2006) Inhibitors of differentiation and DNA binding (Ids) regulate Math1 and hair cell formation during the development of the organ of Corti. J Neurosci 26:550-558. 10. Kawamoto K, Ishimoto S, Minoda R, Brough DE, Raphael Y. (2003) Math1 gene transfer generates new cochlear hair cells in mature guinea pigs in vivo. J Neurosci 23:4395-4400. 11. Gubbels SP, Woessner DW, Mitchell JC, Ricci AJ, Brigande JV. (2008) Functional auditory hair cells produced in the mammalian cochlea by in utero gene transfer. Nature 455:537-541. 12. Stone JS, Cotanche DA. (2007) Hair cell regeneration in the avian auditory epithelium. Int J Dev Biol 51:633-647. 13. Cafaro J, Lee GS, Stone JS. (2007) Atoh1 expression defines activated progenitors and differentiating hair cells during avian hair cell regeneration. Dev Dyn 236:156-170. 14. Kawamoto K, Sha SH, Minoda R, Izumikawa M, Kuriyama H, Schacht J, Raphael Y. (2004) Antioxidant gene therapy can protect hearing and hair cells from ototoxicity. Mol Ther 9:173-181. 15. Izumikawa M, Minoda R, Kawamoto K, Abrashkin KA, Swiderski DL, Dolan DF, Brough DE, Raphael Y. (2005) Auditory hair cell replacement and hearing improvement by Atoh1 gene therapy in deaf mammals. Nat Med 11:271-276. 16. Staecker H, Praetorius M, Baker K, Brough DE. (2007) Vestibular hair cell regeneration and restoration of balance function induced by math1 gene transfer. Otol Neurotol 28:223-231.
Magnetoencephalography and Auditory Neural Representations

J.Z. Simon1,2 and N. Ding1

1 Department of Electrical & Computer Engineering, University of Maryland, College Park, MD 20815, USA
2 Department of Biology, University of Maryland, College Park, MD 20815, USA
Abstract— Complex sounds, especially natural sounds, can be parametrically characterized by many acoustic and perceptual features, one among which is temporal modulation. Temporal modulations describe changes of a sound in amplitude (amplitude modulation, AM) or in frequency (frequency modulation, FM). AM and FM are fundamental components of communication sounds, such as human speech and species-specific vocalizations, as well as music. Temporal modulations are encoded in at least two ways: temporal coding and rate coding. Magnetoencephalography (MEG), with its high temporal resolution and simultaneous access to multiple auditory cortical areas, is a non-invasive tool that can measure and describe the temporal coding of auditory modulations. We refer to the neural temporal encoding of temporal acoustic modulations as "modulation encoding". For simple, individually presented acoustic modulations, modulation encoding is well described by a simple modulation transfer function (MTF). Even in this simple case, however, the MTF may depend strongly on the type of modulation being encoded (e.g., AM vs. FM, narrowband vs. broadband) or the context in which the modulation is heard (e.g., attended vs. unattended). Here we present a range of different types of modulation encoding employed by human auditory cortex. The simplest examples are for sinusoidally amplitude-modulated carriers of a range of bandwidths (with special emphasis on the modulation rates relevant to speech and other natural sounds: below a few tens of Hz). We provide evidence that the modulation transfer functions are low-pass in shape and relatively independent of bandwidth. When several modulations are applied concurrently, however, the modulation encoding typically, but not always, becomes non-linear: the neural responses occur at the rates of the acoustic modulations but also at cross-modulation frequencies. The physiological occurrence, or not, of these cross terms seems to be in accord with the psychophysical concept of modulation filterbanks.

Keywords— magnetoencephalography, cortex, modulations.
I. INTRODUCTION

Speech and environmental sounds contain slow modulations in both amplitude (AM) and frequency (FM). Speech intelligibility depends strongly on the integrity of modulations within a range of perceptually relevant rates, the same range that drives cortical responses most vigorously as measured by single-unit activity. The neural mechanisms by which this modulation is encoded are important to our
understanding of perception, and to applications such as auditory prostheses. Speech and natural sounds contain simultaneous AM and FM, of various rates and embedded in other sounds, so it is crucial to determine the encoding of compound and noisy modulations as well as simple ones. In humans, neurophysiological studies of temporal methods in auditory coding are generally limited to non-invasive techniques. For these studies, MEG is an appropriate tool, as it has excellent temporal resolution, is silent, and can localize neural sources with suitable resolution (particularly in auditory areas).

A. Modulation Sensitivity

Psychophysically, human listeners attending to spectrally simple broadband sounds are most sensitive to amplitude modulations at a few Hz, peaking near 10 Hz and diminishing substantially by 100 Hz [1]. This result also holds when the spectral complexity of the carrier is increased (see, e.g., [2,3]). Speech intelligibility is strongly dependent on modulation rates below 20 Hz, for both the amplitude modulation [4,5] and frequency modulation components of speech [6].

B. MEG and Temporal Response Properties

Complementary to hemodynamic methods (e.g., fMRI), MEG is more limited in its spatial resolution but virtually unlimited in its temporal resolution (e.g., down to ~1 ms). Since temporal aspects of auditory signals play such a crucial role in both perception and speech processing, MEG is a logical tool to employ for these purposes. Additionally, MEG is silent, unlike fMRI. MEG measures the magnetic fields generated by neuronal current flow, using SQUID-based detectors (Superconducting Quantum Interference Devices) to make high-gain recordings [7]. It is believed that the main source for MEG is current flow in pyramidal cells' apical dendrites [7]. A major benefit over EEG (which measures electric potentials generated by the same neural activity) is high sensitivity to activity within auditory cortex (where pyramidal cell dendrites are parallel to the surface). In contrast, EEG responses mix across cortical hemispheres and are strongest medially (see, e.g., [8]). Another benefit of MEG is that results from data averaged over subjects are typically visible in individual subjects as well.
The auditory steady-state response (SSR) is the neural response to a stationary acoustic stimulus. SSR responses are a rich source of neurophysiological data that have strong potential to make compelling links with animal-based cortical physiology. Auditory SSR latencies from MEG, typically 30–40 ms [9,10], are much better matched to latencies seen in single-neuron studies than the M100 or even the M50. The SSR was first measured in humans using EEG, where the strongest response is typically found at 40 Hz [11]. Rates below 20 Hz are less well explored due to the lower signal-to-noise ratio (SNR) compared to near 40 Hz. Using MEG, Ross et al. [9] found a slowly decreasing plateau from 20 Hz down to 10 Hz, but with increasing error bars. EEG studies give conflicting results, but, with care in rejecting strong but statistically non-significant responses, Picton et al. [8] found a trend that the EEG response increases as the rate decreases below 20 Hz. Our MEG results support this result in much greater detail.
II. METHODS

Fifteen subjects (9 female) in Experiment 1 and ten subjects (5 female) in Experiment 2, all right-handed [12], who reported normal hearing and no history of neurological disorder, listened to the acoustic stimuli while MEG recordings were taken. The procedures were approved by the University of Maryland institutional review board, and written informed consent was obtained from each participant. Subjects were paid for their participation.

In Experiment 1, sinusoidally amplitude-modulated sounds of 2000 ms duration were presented to each subject. Twenty stimuli were made from 5 modulation frequencies (1.5 Hz, 3.5 Hz, 7.5 Hz, 15.5 Hz, and 31.5 Hz) and 4 different carriers (a pure tone at 707 Hz; 1/3 octave pink noise; 1 octave pink noise; and 5 octave pink noise, all centered at 707 Hz) with 100% modulation depth. The four bandwidth conditions are henceforth referred to as 0, 1/3, 1, and 5 octave bandwidths.

In Experiment 2, the stimulus is a tone simultaneously frequency and amplitude modulated. The FM modulation frequency, fFM, is fixed at 37.7 Hz. The maximum frequency deviation from the base carrier frequency, 550 Hz, is 330 Hz. The AM modulation frequency, fAM, is varied, taking on values of 0.3, 0.7, 1.7, 3.1, 4.9, 9.0, and 13.8 Hz. The specific values of fFM and fAM are selected to avoid harmonic overlaps. Stimulus duration is 21 s.
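A minimal sketch of the Experiment 2 stimulus construction under the parameters stated above; the sampling rate, the 100% AM depth, and the choice of fAM = 3.1 Hz are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of the Experiment 2 stimulus: a 550 Hz carrier,
# frequency modulated at 37.7 Hz with 330 Hz peak deviation, and
# sinusoidally amplitude modulated at one of the fAM values used.

fs = 44100.0                          # sampling rate (Hz), assumed
dur = 21.0                            # stimulus duration (s)
fc, f_fm, dev = 550.0, 37.7, 330.0    # carrier, FM rate, peak deviation (Hz)
f_am, m = 3.1, 1.0                    # AM rate; full modulation depth assumed

t = np.arange(int(fs * dur)) / fs
# FM: instantaneous frequency fc + dev*cos(2*pi*f_fm*t), integrated to phase.
phase = 2 * np.pi * fc * t + (dev / f_fm) * np.sin(2 * np.pi * f_fm * t)
am_env = 1.0 + m * np.sin(2 * np.pi * f_am * t)
stimulus = am_env * np.sin(phase)
stimulus /= np.abs(stimulus).max()    # normalize before presentation
```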
Subjects were placed horizontally in a dimly lit magnetically shielded room (Yokogawa Electric Corporation, Tokyo). The signals were delivered to the subjects' ears with 50 Ω sound tubing (E-A-RTONE 3A, Etymotic Research, Inc.), attached to E-A-RLINK foam plugs inserted into the ear canal, and presented binaurally at a comfortable loudness of approximately 70 dB SPL. MEG recordings were conducted using a 157-channel axial gradiometer whole-head system (Kanazawa Institute of Technology, Kanazawa, Japan). Its detection coils form a uniform array on a helmet-shaped surface at the bottom of the dewar, with about 25 mm between the centers of two adjacent 15.5 mm diameter coils. Sensors are axial gradiometers with a 50 mm baseline; field sensitivities are 5 fT/√Hz or better in the white-noise region. Three magnetometers measure the environmental magnetic field. The signals were bandpassed between 0 or 1 Hz and 200 Hz, notch filtered at 60 Hz, and sampled at 500 Hz or 1 kHz. The influence of the 1 Hz high-pass filter on the amplitude and phase of the MEG recordings was corrected. Two denoising techniques were applied off-line: TSPCA [13], to remove external noise (filtered versions of the reference signals), and SNS [14], to remove noise internal to individual gradiometers. TSPCA used a ±100 ms range of filter taps; SNS used 10 channel neighbors. Finally, DSS [15], a blind source separation technique, was applied to preserve phase-locked neural activity. The DSS components are sorted by how much of the response power is phase-locked to the stimulus; only the first component is kept for further analysis in this study. A single current dipole modeled the auditory response in each hemisphere. The lead field was calculated using the complex version of the Sarvas spherical head model [16]. An isotropic sphere model was built for each subject using MEG Laboratory 2.001M (Yokogawa Electric Corporation, Tokyo). The dipole moment is estimated using least squares [17], while the dipole position is estimated using a modified simplex search with clustering [18].
III. RESULTS
A. Experiment 1
The relation between the strength of the neural AM response and the stimulus fAM is commonly known as the modulation transfer function (MTF). Dipole strengths of MEG responses are shown in Fig. 1 as a function of the stimulus AM rate and carrier bandwidth, for each hemisphere. This low-pass pattern is also seen in intracranial recordings [19]. The MEG response power in the right hemisphere is stronger than that in the left hemisphere for 1.5 Hz AM with the pure tone carrier and for 31.5 Hz AM with all carriers (paired t-test, t(13) > 2.2, p < 0.05). The MEG response power is significantly affected by the stimulus AM rate (2-way ANOVA, F(4, 279) > 22, p < 10⁻⁴ for both hemispheres) but not by the stimulus bandwidth. There is a significant interaction between the effect of AM rate and the effect of carrier
bandwidth (F(12, 279) > 2.04, p < 0.03) in the right hemisphere.

Fig. 1 Dipole strengths of MEG responses averaged over subjects. Error bars are standard error over subjects.
Fig. 2 Analysis of the power of the SSR at fAM. The MTF is calculated as the corrected power of the SSR at fAM as a function of the stimulus fAM. Each gray hollow circle represents the corrected power of the SSR at fAM for one subject. The black line marked by triangles shows the grand-averaged MTF. The gray line is the optimal linear fit of the MTF.
B. Experiment 2
An MEG response at the stimulus fAM is observed in all stimulus conditions. The MTF measured by the power of the MEG response at fAM has a low-pass pattern: the power of the MEG response to an AM sound decreases with increasing fAM of that sound. It needs to be clarified, however, whether the low-pass pattern of the MTF results from the stimulus-driven SSR or from background noise. One can estimate the power of the stimulus-driven SSR by subtracting the estimated power of background noise at fAM from the power of the measured MEG signal at fAM. The MTF measured by this corrected power of the SSR at fAM still shows a low-pass pattern and can be modeled as a linear function of fAM measured in Hz (Fig. 2). The slope of the fitted linear function is -0.96 dB/Hz (99% confidence interval, -1.17 to -0.74 dB/Hz). For fAM higher than 1 Hz, the slope of the MTF can also be fitted as -3.6 dB/oct (99% confidence interval, -4.8 to -2.5 dB/oct). Since the slope of the fitted line is significantly negative (p < 0.01), the low-pass pattern of the MTF is statistically significant for fAM lower than 15 Hz. To reduce subject-to-subject variability, the corrected power is normalized before the regression analysis. Even without any correction or normalization, the slope of the MTF is still significantly negative (99% confidence interval, -1.73 dB/Hz to -0.29 dB/Hz). To investigate whether the reduction in the evoked power of the MEG response at fAM is due to a loss of energy in every single trial or a loss of phase locking over trials, we calculated the phase coherence value [20] of the MEG response at fAM over trials. A one-way ANOVA shows that the phase coherence value does not significantly change when the stimulus fAM increases from 0.7 Hz to 13.8 Hz (F(5,54) = 0.84, p > 0.5). Hence, the
low-pass pattern of the evoked power of the SSR at fAM is due to a change in single-trial power rather than a change in phase coherence over trials. Since both the neural response power and the background noise power are strongest at low frequencies, regression analysis was used to show that the signal-to-noise ratio of the neural response at fAM does not significantly increase or decrease as fAM increases (p > 0.6). If the neural response power at fAM, 2fAM and 3fAM is combined, the MTF has a slope of -1.06 dB/Hz (99% confidence interval, -1.30 dB/Hz to -0.82 dB/Hz).
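As an illustration of the noise correction and regression described above, the following sketch (our reconstruction under stated assumptions, not the authors' code) estimates corrected SSR power at fAM by subtracting a background estimate taken from neighboring FFT bins, then fits the MTF slope in dB/Hz. The neighboring-bin noise estimate, the median statistic, and the synthetic evoked response are illustrative choices.

```python
import numpy as np
from scipy.stats import linregress

def corrected_power_db(evoked, fs, f_am, n_nb=10):
    """Power at the f_am bin minus a background estimate from nearby bins."""
    n = len(evoked)
    spec = np.abs(np.fft.rfft(evoked)) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_am)))
    lo = max(k - n_nb, 1)
    neighbors = np.r_[lo:max(k - 1, lo), k + 2:k + n_nb + 2]  # skip bins adjacent to k
    noise = np.median(spec[neighbors])
    return 10 * np.log10(max(spec[k] - noise, np.finfo(float).tiny))

# Fit the MTF slope in dB/Hz across the AM rates of Experiment 2.
fs, dur = 500.0, 21.0
f_ams = np.array([0.3, 0.7, 1.7, 3.1, 4.9, 9.0, 13.8])
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
db = []
for f in f_ams:
    # stand-in evoked response: an SSR-like component that weakens with rate
    x = np.cos(2 * np.pi * f * t) / (1 + f) + 0.05 * rng.standard_normal(t.size)
    db.append(corrected_power_db(x, fs, f))
print(f"slope: {linregress(f_ams, db).slope:.2f} dB/Hz")  # cf. the reported -0.96 dB/Hz
```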
Fig. 3 Analysis of the instantaneous amplitude of the SSR at fFM. The solid black line with triangle markers is the MTF averaged over all subjects. The solid gray line is the optimal linear fit of the MTF, while the dotted gray line with white square markers is the MTF predicted by a model. The power of the instantaneous amplitude for each subject and each condition is shown as a gray hollow circle. The instantaneous amplitude's power at fAM is plotted as the dashed gray line.
One of the primary goals of this work is to examine the interaction between fast modulations and slow modulations. Since the instantaneous amplitude of the SSR at fFM oscillates with fundamental frequency fAM, it is a neural correlate of the stimulus's slow AM. Consequently, the relation between the power of the instantaneous amplitude and the stimulus fAM can also be regarded as an effective MTF. We estimate the power of the instantaneous amplitude as the sum of the power at the first four harmonics of fAM. This MTF (Fig. 3) has a slope of -0.72 dB/Hz (99% confidence interval, -0.95 to -0.49 dB/Hz). When fAM is higher than 1 Hz, the slope of the MTF can also be fitted as -3.0 dB/oct (99% confidence interval, -5.0 to -1.6 dB/oct). For this MTF calculation, the power at each harmonic of fAM was corrected by subtracting the power of background noise at that frequency; the estimate of the power of the instantaneous amplitude was also normalized to reduce subject-to-subject variability. Without any correction or normalization, the MTF slope is still significantly negative (p < 0.01). As the stimulus fAM increases, the power of the MEG response at fFM decreases at 0.86 dB/oct, while the power of the instantaneous amplitude of the SSR at fFM decreases at 3.0 dB/oct. If the SSR at fFM is assumed to be sinusoidally amplitude modulated, the neural AM modulation depth of the SSR can be estimated from the ratio between the power of the instantaneous amplitude of the SSR and the power of the MEG response at fFM. Hence, with the sinusoidal AM assumption, the neural AM modulation depth should decrease at 2.1 dB/oct.
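The instantaneous-amplitude analysis can be sketched in the same spirit (band edges, filter order, and names are our illustrative assumptions): isolate the SSR near fFM with a narrow band-pass filter, take the magnitude of the analytic signal as the instantaneous amplitude, and sum its power over the first four harmonics of fAM.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_power_at_harmonics(x, fs, f_fm=37.7, f_am=3.1, half_bw=5.0, n_harm=4):
    """Power of the SSR's instantaneous amplitude at harmonics of f_am."""
    b, a = butter(4, [(f_fm - half_bw) / (fs / 2), (f_fm + half_bw) / (fs / 2)],
                  btype="bandpass")
    ssr = filtfilt(b, a, x)                   # SSR component near f_fm
    env = np.abs(hilbert(ssr))                # instantaneous amplitude
    n = env.size
    spec = np.abs(np.fft.rfft(env - env.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bins = [int(np.argmin(np.abs(freqs - h * f_am))) for h in range(1, n_harm + 1)]
    return spec[bins].sum()
```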
IV. CONCLUSIONS First, this study characterizes the properties of MEG responses to AM below 30 Hz. The SSR is strongest at the lowest modulation rates and decreases 2-4 dB per octave. For jointly modulated stimuli, the instantaneous amplitude of the SSR at fFM also oscillates with fundamental frequency fAM. Due to these neural interactions, the information in slow AM is simultaneously encoded in neural oscillations at fAM and fFM .
ACKNOWLEDGMENTS
We thank Max Ehrman and Jeff Walker for excellent technical support. This research was supported by the National Institutes of Health (NIH) grant R01DC008342.

REFERENCES
1. Viemeister NF (1979) Temporal modulation transfer functions based upon modulation thresholds. J Acoust Soc Am 66:1364-1380
2. van Zanten GA, Senten CJ (1983) Spectro-temporal modulation transfer function (STMTF) for various types of temporal modulation and a peak distance of 200 Hz. J Acoust Soc Am 74:52-62
3. Chi T, Gao Y, Guyton MC, Ru P, Shamma S (1999) Spectro-temporal modulation transfer functions and speech intelligibility. J Acoust Soc Am 106:2719-2732
4. Steeneken HJ, Houtgast T (1980) A physical method for measuring speech-transmission quality. J Acoust Soc Am 67:318-326
5. Drullman R, Festen JM, Plomp R (1994) Effect of temporal envelope smearing on speech reception. J Acoust Soc Am 95:1053-1064
6. Zeng FG, Nie K, Stickney GS, Kong YY, Vongphoe M, Bhargave A, Wei C, Cao K (2005) Speech recognition with amplitude and frequency modulations. Proc Natl Acad Sci U S A 102:2293-2298
7. Hamalainen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics 65:413-497
8. Picton TW, John MS, Dimitrijevic A, Purcell D (2003) Human auditory steady-state responses. Int J Audiol 42:177-219
9. Ross B, Borgmann C, Draganova R, Roberts LE, Pantev C (2000) A high-precision magnetoencephalographic study of human auditory steady-state responses to amplitude-modulated tones. J Acoust Soc Am 108:679-691
10. Schoonhoven R, Boden CJ, Verbunt JP, de Munck JC (2003) A whole head MEG study of the amplitude-modulation-following response: phase coherence, group delay and dipole source analysis. Clin Neurophysiol 114:2096-2106
11. Galambos R, Makeig S, Talmachoff PJ (1981) A 40-Hz auditory potential recorded from the human scalp. Proc Natl Acad Sci U S A 78:2643-2647
12. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9(1):97-113
13. de Cheveigné A, Simon JZ (2007) Denoising based on time-shift PCA. J Neurosci Methods 165(2):297-305
14. de Cheveigné A, Simon JZ (2008a) Sensor noise suppression. J Neurosci Methods 168(1):195-202
15. de Cheveigné A, Simon JZ (2008b) Denoising based on spatial filtering. J Neurosci Methods 171(2):331-339
16. Sarvas J (1987) Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem. Phys Med Biol 32:11-22
17. Mosher JC, Baillet S, Leahy RM (2003) Equivalence of linear approaches in bioelectromagnetic inverse solutions. IEEE Workshop on Statistical Signal Processing, St. Louis
18. Uutela K, Hamalainen M, Salmelin R (1998) Global optimization in the localization of neuromagnetic sources. IEEE Trans Biomed Eng 45(6):716-723
19. Liegeois-Chauvel C, Lorenzi C, Trebuchon A, Regis J, Chauvel P (2004) Temporal envelope processing in the human left and right auditory cortices. Cereb Cortex 14:731-740
20. Fisher NI (1993) Statistical Analysis of Circular Data. Cambridge University Press, Cambridge
Author: Jonathan Z. Simon
Institute: Electrical & Computer Engineering, University of Maryland
City: College Park, MD 20815
Country: USA
Email: [email protected]
Voice Pitch Processing with Cochlear Implants
Monita Chatterjee1, Shu-Chen Peng2, Lauren Wawroski3, and Cherish Oberzut1
1 Cochlear Implants and Psychophysics Lab, Dept. of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
2 Division of Ophthalmic and Ear, Nose and Throat Devices, Office of Device Evaluation, Center for Devices and Radiologic Health, US Food and Drug Administration, Silver Spring, MD, USA
3 Children's National Medical Center, Washington, DC, USA
Abstract— Cochlear implants today allow many severe-to-profoundly hearing-impaired individuals to hear and understand speech in everyday settings. However, the transmission of voice pitch information via the device is severely limited. Other than limiting their appreciation of music, the lack of pitch information means that cochlear implant patients have difficulty with speaker/gender recognition, intonation and emotion perception, all of which limit speech communication in everyday life. In electrical stimulation via cochlear implants, the fine spectral detail necessary for conveying the harmonic structure of F0 is not available. Although the spectral cues for pitch are lost, the temporal periodicity cue for pitch may still be available to the listener after speech processing. Our previously published results indicate that adult cochlear implant listeners are sensitive to this periodicity cue and are able to use it in a voice-pitch-based intonation identification task. Ongoing experiments also suggest that different mechanisms may play a role in processing the temporal pitch cue when multiple channels are concurrently stimulated, rather than when a single channel is stimulated. Initial experiments with primary-school-aged children who were implanted before the age of five indicate no significant differences between them and their normally hearing peers in performance in the intonation identification task. This suggests that cochlear implants can benefit at least some children with severe-to-profound hearing loss in voice-pitch processing, and points to the potential role of neural plasticity in adaptation to cochlear implants.

Keywords— cochlear implants, voice pitch, modulation, children, intonation.
I. INTRODUCTION In normal hearing, the primary cue for the auditory perception of voice pitch, or fundamental frequency (F0) of spoken utterances, is provided by the detailed harmonic structure found in the acoustic spectrum. Normal cochlear filters are able to represent the harmonics within the voice pitch range with excellent resolution. Thus, the human auditory system is able to function with remarkable precision in tasks involving speaker identification, music processing, speech intonation processing, tonal language perception, and emotion recognition, all of which play major roles in our everyday communication. It has been shown that the
temporal periodicity of the signal, which also contains information about the fundamental frequency, can be utilized by the auditory system in extracting voice pitch; however, this pitch is perceptually not as salient as spectrally determined pitch. In cochlear implants, the primary pitch information available to listeners arises from the temporal periodicity in the envelope: therefore, cochlear implant patients do not have access to the salient pitch information that is so important for speech and music perception. The spectral detail in the peripheral (auditory nerve level) representation is primarily lost to broad spatial fields and large amounts of channel interaction. There is no explicit coding of pitch in the speech processing strategies employed in present-day devices. The processor performs a frequency analysis of the signal, extracts the time-varying envelope from each channel, and stimulates the different electrodes of the implanted array with current pulse trains modulated by the extracted envelope from tonotopically appropriate frequency bands. Pitch information is present to varying degrees in the extracted envelope, depending upon the degree of low-pass filtering applied in the processing stages and the carrier rates of the modulated pulse trains. It is thus apparent that the ability to process amplitude modulations, and to discriminate between different rates of amplitude modulation, is necessary for CI listeners to process the available pitch information in the signal envelope. Here, we present our recent work investigating listeners' ability to discriminate between temporal modulation patterns in a psychophysical task, as well as measures of more real-world performance in a speech intonation identification task.
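The processing chain described above can be sketched as follows. This is a toy, CIS-like illustration, not any manufacturer's algorithm; the band edges, filter orders, and the envelope cutoff are assumptions chosen only to show the analysis-envelope structure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def channel_envelopes(x, fs, edges, env_cutoff=300.0):
    """Band-pass analysis followed by rectification and low-pass smoothing;
    each returned row would modulate one electrode's pulse train."""
    lp = butter(2, env_cutoff / (fs / 2), btype="low")
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
        env = filtfilt(*lp, np.abs(filtfilt(*band, x)))  # rectify, then smooth
        envs.append(np.clip(env, 0.0, None))
    return np.array(envs)

# e.g., an 8-channel analysis of a 1 s signal sampled at 16 kHz:
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t) * np.sin(2 * np.pi * 1000 * t)  # toy input
edges = np.geomspace(200, 7000, 9)                              # 8 bands
print(channel_envelopes(x, fs, edges).shape)                    # (8, 16000)
```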
II. EXPERIMENTAL FINDINGS A. Sensitivity to Temporal Patterns Is Correlated with Performance in F0-Based Speech Intonation In a recent study [1] we showed that adult CI users were able to use F0 cues to determine whether an utterance was question-like or statement-like. The utterance was the word “popcorn”, which had been resynthesized to have 360 different combinations of initial F0 (120 Hz and 200 Hz – 2
levels), F0 contours (rising, falling, flat – 9 levels), intensity patterns (increasing or decreasing from the first to the second syllable – 4 levels) and duration patterns (increasing or decreasing from the first to the second syllable – 5 levels). For the purposes of that particular study, the intensity and duration patterns served as random roves, and listeners' attention to F0 was the focus of the analyses. The proportion of times the listener judged each sample of the word as a question was plotted as a function of the change in F0 from beginning to end (in octaves) to obtain psychometric functions. These functions were converted into cumulative d' scores, and these scores were compared across listener groups. Results showed that CI listeners' performance was significantly poorer than NH listeners' performance in the same task. When the NH listeners were subjected to spectrally degraded samples of the same stimuli, their performance declined to resemble that of the CI listeners. In a parallel psychophysical task, the CI listeners' sensitivity to amplitude modulation rate discrimination was measured on a single electrode. The reference rate was fixed at different values across the voice pitch range (from 50 to 300 Hz), and a 3-interval, forced-choice, adaptive procedure was used to obtain their modulation rate discrimination limens. The CI listeners' performance on this task was found to be significantly correlated with their cumulative d' scores on the intonation recognition task. These results suggest that temporal pattern sensitivity is important for CI listeners in their everyday experience with voice-pitch-based tasks in speech communication. The psychophysical experiments described above, however, were conducted with single-channel stimuli. The experiment described below examines multi-channel stimulation.

B. Sensitivity to Temporal Patterns in Multichannel Stimulation
This experiment was conducted to measure amplitude modulation rate discrimination thresholds in CI listeners in the presence of competing signals on other channels. The signal was applied to one channel, in the presence of "maskers" (competing signals) presented concurrently on two flanking electrodes. The flanking electrodes were located either one, two, three, or four electrodes away from the signal electrode. The experiment was conducted with users of the Freedom or N-24 CI (manufactured by Cochlear Corporation).

Methods
1. Participants: Eight adult CI users participated in these experiments. All were users of devices manufactured by Cochlear Corporation.
2. Stimuli: A custom research interface was used to deliver controlled electrical stimuli directly to specific electrodes in the patient's implanted device. Stimuli were trains of biphasic, charge-balanced current pulses, presented at a carrier pulse rate of 2000 pulses/second. The signal was always presented on electrode 18 (an apical electrode in the cochlea). All stimuli were 300 ms in length. Following the standard interleaved stimulation mode on these devices, the stimuli on the different channels/electrodes were presented concurrently but non-simultaneously with each other, with offset delays of ~0.17 ms. When modulated, pulse trains were sinusoidally amplitude modulated at specific rates. The masker channels were either steady state or amplitude modulated. All modulation depths were fixed at 20% (i.e., modulation index of 0.2).
3. Procedure: The listener's task was to detect a difference between a reference modulation rate of 100 Hz and a (higher) comparison modulation rate. Discrimination thresholds were measured using a standard 3-interval, forced-choice psychophysical procedure with a 2-down, 1-up adaptive method. In each trial, two of the three intervals (randomly chosen) contained the "reference" signal modulated at the base rate, while the third interval contained the "target" signal, which was modulated at the higher rate. The rate of the target signal was adaptively modified using a 2-down, 1-up rule, converging at the 70.7% correct point on the psychometric function (Levitt, 1971). Alongside the signal to be discriminated, the fixed maskers were concurrently present in each interval.

Results
Analysis of the results indicated that masker electrode location did not appreciably influence the results. For the sake of simplicity, therefore, results shown here have been averaged across electrode locations. Figure 1 shows the results obtained when the maskers were coherently modulated at 100 Hz (the same reference rate as the signal), compared with results obtained in the unmasked condition.

Fig. 1 Sensitivity to modulation rate in the presence of 100-Hz modulated maskers and no maskers. Error bars show +/- 1 s.d.
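The adaptive rule in the procedure above can be made concrete with a small simulation. This is a sketch under assumptions: the step size, trial count, and the simulated listener are illustrative, not the study's parameters.

```python
import random

def two_down_one_up(base=100.0, start=140.0, step_db=1.0, n_trials=60,
                    p_correct=lambda r: 0.5 + 0.5 * min(1.0, max(0.0, (r - 100.0) / 20.0))):
    """2-down, 1-up track for the comparison modulation rate (Levitt, 1971).

    Two consecutive correct responses lower the comparison rate toward the
    100 Hz base (harder); one incorrect response raises it (easier). The
    track converges near the 70.7% correct point."""
    rate, run, track = start, 0, []
    for _ in range(n_trials):
        if random.random() < p_correct(rate):       # simulated listener
            run += 1
            if run == 2:                            # two correct in a row
                rate = max(base * 1.001, rate / 10 ** (step_db / 20))
                run = 0
        else:                                       # one incorrect response
            rate *= 10 ** (step_db / 20)
            run = 0
        track.append(rate)
    threshold = sum(track[-10:]) / 10               # mean of the final trials
    return 100.0 * (threshold - base) / base        # Weber fraction, percent

print(f"Weber fraction: {two_down_one_up():.1f}%")
```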
The Weber fraction (ΔF/F, where ΔF is the just-detectable increment in modulation frequency) is plotted in percentage form. The lower the Weber fraction, the more sensitive the listener (i.e., the smaller the change in modulation rate he/she can detect). It is apparent that the modulated maskers produced a lower Weber fraction than the unmasked condition. This difference was found to reach a moderate level of statistical significance in an ANOVA (F(1,13) = 5.629, p = 0.034). In a further analysis, the effects of the relative phases of the masker modulators were measured. Results showed significantly enhanced sensitivity in the conditions in which the masker modulators were in phase with each other, but out of phase with the signal. The effects of other masker modulation rates were also examined. Results showed that the 100 Hz modulation rates on the maskers (i.e., the same rate as the reference signal) produced the greatest sensitivity on the signal channel. Other masker modulation rates and types (8 Hz, 24 Hz, 134 Hz, and steady-state) either had no effect on the results or produced interference. This is shown in Figure 2, which plots the mean Weber fraction (collapsed across masker electrode location) obtained with the different masker types. These results indicate that CI listeners' sensitivity to temporal patterns can be strongly influenced by the presence of competing temporal envelopes on other channels. In particular, envelopes that are modulated at rates close to the reference rate can cause enhancement. Underlying mechanisms are as yet unclear. The pulses on different channels were never simultaneous, but rather interleaved in time. Thus, simultaneous interactions between the electrical stimuli (such as beats) cannot be invoked when interpreting these results.
Fig. 2 Sensitivity to modulation rate in the presence of various types of maskers. Note that 100, 8, etc. denote modulation rates in Hz; SS denotes an unmodulated masker, and SSpeak denotes an unmodulated masker with amplitude at the peak of the corresponding modulated masker. Error bars show +/- 1 s.d.

C. Voice Pitch Processing by Children
Prosodic cues help children to learn spoken language [2, 3]. It is therefore of considerable interest to investigate to what extent early-implanted young children with CIs are able to perceive changes in voice pitch to detect aspects of prosody. Here, we present the results of an initial study conducted with a group of primary-school-aged children. The objective of the study was to quantify the sensitivity of normally-hearing (NH) and cochlear-implanted (CI) children, who were 6-8 years of age and implanted before the age of five, to changes in the F0 contour.

Methods
1. Participants: Twenty normally hearing and eight CI children participated in this study. All children were between 6 and 8 years of age. All CI children had been implanted before the age of 5 years.
2. Stimuli: These were a subset of the "popcorn" database described previously, chosen to have various F0 contours. Intensity and duration, which normally co-vary with F0, remained unchanged. Sounds were presented via loudspeaker at 65 dBA in a soundproof booth.
3. Procedure: The children indicated whether each sample of the utterance "popcorn" sounded like the speaker was "asking" or "telling". For each F0 contour, a child heard eight samples of the word, four with an initial F0 of 120 Hz (male-sounding) and four with an initial F0 of 200 Hz (female-sounding).

Results
Figure 3 shows results obtained with the two groups of children, plotted as the proportion of samples judged to be question-like against the F0 change (from the end of the sample to the beginning) in octaves. These psychometric functions were fitted with a three-parameter sigmoidal function. The data were converted into a cumulative d' measure (a measure of sensitivity based on signal detection theory).

Fig. 3 Results obtained with NH and CI children. Error bars indicate +/- 1 s.e.
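A sketch of the psychometric fit used for Fig. 3 follows. The logistic form, the parameter names, and the example data points are illustrative assumptions; the paper does not specify its three-parameter sigmoid.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, top, x0, k):
    """Proportion of 'question' responses vs. F0 change (octaves)."""
    return top / (1 + np.exp(-(x - x0) / k))

f0_change = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])        # octaves
p_question = np.array([0.05, 0.10, 0.20, 0.45, 0.75, 0.90, 0.95])   # example data
params, _ = curve_fit(sigmoid, f0_change, p_question, p0=[1.0, 0.0, 0.3])
print("asymptote %.2f, midpoint %.2f oct, slope factor %.2f" % tuple(params))
```

The fitted slope factor is what the group comparison in the text examines: a shallower function (larger slope factor) is the pattern interpreted as greater internal noise.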
A two-way, mixed ANOVA conducted on the data showed a significant effect of F0 change [F(4.7, 122.188) = 26.418, p < 0.001], and no significant interaction between F0 change and subject group. The effect of subject group did not reach significance (p = 0.055). Analyses of the slopes of the best-fitting sigmoidal functions did not show significant differences between the two groups. Analyses of the cumulative d' also did not show significant differences between the groups. Thus, these results indicate that the early-implanted CI children in this study are able to perform about as well as their normally hearing peers in this task. A further comparison of the psychometric functions obtained with these children with previously obtained functions [1] from normally hearing adults indicates that the children have shallower psychometric functions. This has been observed in other psychophysical studies with children, and has been interpreted to be indicative of greater internal noise in children than in adults. We note here that in an additional test of the children's speech recognition in noise, the CI children had significant deficits in performance relative to the NH children. Additionally, tests of non-verbal intelligence produced normal average scores in both groups of children. These results suggest that at least some CI children are able to perform as well as their NH counterparts in voice-pitch-based speech intonation recognition. It is possible that neither group of children is proficient in this task at this age. In this case, the NH children should become more adult-like as they develop. Whether the CI children would also develop along the same trajectory or not is as yet unknown. It is not unreasonable to speculate that, being early-implanted, these children were able to gain the benefit of a more plastic brain. Perhaps their auditory system is able to glean the F0 information needed for prosody from the temporal cues in the signal, or from the subtle spectral cues that may be available after speech processing. Further research is clearly needed to shed light on these issues.
III. DISCUSSION Taken together, the studies reviewed here indicate that it is possible for CI listeners to utilize the voice pitch cues available in the temporal envelope of the signal. We have shown that they can perform this task in the context of a sparse psychophysical experiment, as well as in the context of processing the intonation pattern in a spectro-temporally complex speech signal. It is to be noted that the perception of the F0 cue in CI listeners is not necessarily identical to the musical pitch perceived by normally hearing listeners. It is possible that the F0 cue heard by CI listeners does not translate to musical pitch, but perhaps some other auditory sensation (such as roughness or timbre change) which can
be nonetheless utilized in a prosody or intonation processing task. The fact that listeners can use this cue in intonation processing, however, does not mean that they can use the same perceptual cue for the appreciation of melody, which would clearly require the perception of musicality. The results with the children are of considerable interest. Children form a significant portion of the rapidly growing population of CI users. It is therefore of great importance to achieve greater understanding of auditory perception by this unique population, which is developing with the implanted device. The finding that coherently modulated stimuli on other channels can enhance listeners' sensitivity to temporal periodicity cues is consistent with recent work on novel speech processing strategies suggesting that synchronous modulation of pulse trains on multiple channels can enhance listeners' performance on F0-related tasks (Geurts and Wouters, 2001; Vandali et al., 2005). The present results expand on existing knowledge of F0 coding in the electrically stimulated auditory system, and are expected to contribute to improved speech processor designs for CIs in the future.
ACKNOWLEDGMENT This work was funded in part by the MCM Fund for Student Research Excellence awarded to LW, and by NIDCD grant no. R01DC004786 to MC. We are grateful to the participants for their efforts.
REFERENCES
1. Chatterjee, M. and Peng, S.C. (2008) Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition. Hearing Research 235(1-2): 143-156.
2. Levitt, H. (1971) Transformed up-down methods in psychoacoustics. J. Acoust. Soc. Am. 49: 467-477.
3. Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., Amiel-Tison, C. (1988) A precursor of language acquisition in young infants. Cognition 29: 143-178.
4. Crystal, D. (1986) Prosodic development. In P.J. Fletcher & M.A. Garman (eds), Language Acquisition (Cambridge: CUP), 33-48.
5. Geurts, L., Wouters, J. (2001) Coding of the fundamental frequency in continuous interleaved sampling processors for cochlear implants. J. Acoust. Soc. Am. 109: 713-726.
6. Vandali, A.E., Sucher, C., Tsang, D.J., McKay, C.M., Chew, J.W.D. and McDermott, H.J. (2005) Pitch ranking ability of cochlear implant recipients: A comparison of sound-processing strategies. J. Acoust. Soc. Am. 117(5): 3255-3267.

Author: Monita Chatterjee
Institute: Cochlear Implants and Psychophysics Lab, Dept. of Hearing and Speech Sciences, University of Maryland
Street: 0100 LeFrak Hall
City: College Park, MD
Country: USA
Email: [email protected]
Transcranial Magnetic Stimulation as a Tool for Investigating and Treating Tinnitus
G.F. Wittenberg1,2
1 Veterans Affairs Maryland Health Care System/Geriatrics Research, Education and Clinical Care, Baltimore, Maryland, USA
2 Dept. of Neurology, University of Maryland, Baltimore, Maryland, USA
Abstract— Transcranial magnetic stimulation (TMS) is a technique that allows non-invasive, painless stimulation of neuronal structures, based on electromagnetic induction of electric fields in excitable tissue. The technique has been applied to investigate normal physiology and disease states. TMS has therapeutic potential, as repetitive stimulation (rTMS) can produce long-term modulation of neuronal circuits. rTMS has been demonstrated to be an effective treatment for depression and is being developed for use in neurorehabilitation. TMS is delivered by discharge of high-voltage capacitors through copper coils applied to the scalp, with induction of a rapidly changing magnetic field that easily penetrates the skull. Besides brain stimulation, other effects of such a discharge are a clicking noise, stimulation of sensory fibers in the scalp, and heating of the coil and nearby conductors. It may be difficult to distinguish which effects are important in tinnitus investigations, particularly if the coil noise and scalp sensory stimulation are not controlled. Localization of stimulation is another problem in TMS work, as coils may be accurately placed over brain areas but then produce significant magnetic fields over several square centimeters of brain surface. Tinnitus is a condition defined by auditory experience in the absence of internal or external auditory input. While the experience is subjective, there is evidence from functional neuroimaging that it is associated with activation of brain auditory pathways. rTMS can suppress tinnitus when applied over a variety of regions, including primary auditory cortex in the temporal lobe, and can even reduce the tonic activation in auditory regions. There are now a number of clinical trials investigating rTMS methods in tinnitus, and the latest published results will be discussed. It is less clear what TMS can tell us about the underlying pathophysiology of tinnitus, as conflicting results have been obtained regarding measurable abnormalities in cortical physiology.

Keywords— Transcranial magnetic stimulation, cortical physiology, tinnitus, treatment, auditory system.
I. INTRODUCTION Tinnitus is a common clinical condition in which there is a subjective experience of ringing in the ears. It may arise through many different disease processes, with hearing loss being a common factor that triggers the condition. Loss of primary sensory neuron activity may lead to compensatory activity in downstream brain areas, in a kind of homeostatic regulatory process [1]. Whatever the cause, there is evidence that the state of the nervous system includes activity in auditory areas, including portions of the brainstem and auditory cortex. Therefore treatment has focused on changing brain activity.
II. TRANSCRANIAL MAGNETIC STIMULATION
A. History and Mechanisms
TMS is based on Faraday's law of induction, which, as reformulated by Maxwell, states that the electromotive force in a circuit is proportional to the change in magnetic flux over time. This principle was used to stimulate the nervous system over a century ago, with D'Arsonval [2] generally being credited as the first to demonstrate central nervous system stimulation. Early efforts concentrated on the production of phosphenes (perceptions of flashes of light). While experimentation with magnetic stimulation continued throughout the twentieth century, it was only in 1985 that a practical device was perfected that allowed safe, reproducible, relatively focal stimulation [3]. Studies of motor cortical physiology have been one of the mainstays of TMS research, with sensory stimulation being less popular. This is partly because the discharge of electricity through the TMS coil results in vibration of the coil, with resulting sound and tactile sensation on the scalp. Electrical stimulation of scalp sensory nerve endings also occurs. TMS can induce sensory sensations, but these are likely not due to direct stimulation of primary somatosensory cortex; rather, they are thought to be due to feedback from motor cortex. However, paradigms that use TMS to modulate primary somatosensory cortex have more recently been developed [4].

B. Repetitive TMS
The administration of trains of TMS pulses is referred to as repetitive TMS (rTMS), or sometimes as rapid-rate TMS to distinguish it from the slower, but often repetitive, nature of single-pulse TMS experiments. rTMS has been found to modulate the activity of the cortical areas to which it is delivered. To a first approximation, 1 Hz rTMS has lasting inhibitory effects [5] while > 5 Hz rTMS has excitatory effects. These effects last on the order of 15 minutes, with generally somewhat shorter times for the excitatory effects. The safety of different stimulation rates and duty cycles has been established [6, 7], with the principal side effect being induction of seizures. There is no evidence of any kindling-like phenomenon, so that even if seizures occur, epilepsy does not result. Because of a desire to minimize the amount of stimulation, and therefore reduce the risk of seizures, intermittent burst protocols have been developed. The most popular of these is
theta-burst stimulation (TBS), which uses a 50 Hz inter-stimulus interval, but for only three stimuli at a time, with these triplets then repeated at 5 Hz [8]. Such protocols can produce inhibition if performed continuously for on the order of 20-40 seconds, and facilitation if performed intermittently [9]. However, the main benefit of TBS is the reduction in the number of stimuli delivered rather than a comparative benefit in effect size [10].

C. Clinical Uses of rTMS
While many uses of rTMS have been considered, the most successful use is in depression, where stimulators are used to increase activity in the dorsolateral prefrontal cortex [11]. This treatment is now approved for treatment of drug-resistant depression in the U.S. and Canada.
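To make the burst timing of Section II.B concrete, the following sketch generates pulse timestamps for the continuous form of the pattern; the 20 s duration is one of the values cited above, and the function and its names are illustrative rather than any device's interface.

```python
# Sketch (not from the cited studies): timestamps for continuous TBS,
# i.e. three-pulse 50 Hz bursts with burst onsets repeating at 5 Hz.
def tbs_pulse_times(duration_s=20.0, burst_rate_hz=5.0,
                    pulses_per_burst=3, within_burst_hz=50.0):
    times = []
    burst_onset = 0.0
    while burst_onset < duration_s:
        for k in range(pulses_per_burst):
            times.append(burst_onset + k / within_burst_hz)  # 20 ms apart
        burst_onset += 1.0 / burst_rate_hz                   # next burst 200 ms later
    return times

print(len(tbs_pulse_times()), "pulses in 20 s of continuous TBS")  # 300
```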
III. RTMS AS TREATMENT FOR TINNITUS
A. Clinical Trials
Because rTMS can be used to suppress cortical activity, it was a reasonable candidate for treating tinnitus, in which there is constant activity in the cortex related to the conscious perception of a ringing sound, without any external sounds. Table 1 lists some of the studies and reviews related to such treatment. As can be seen, there has been a tremendous explosion of publishing related to rTMS in tinnitus, starting in 2003, and mainly carried out in German, Belgian and British laboratories. This bias may be related to the more common clinical use of TMS in those countries. The first treatments for tinnitus involved rTMS to the temporal cortex, with stimulation frequencies ranging from 1 Hz [12] to 10 Hz [13]. The treatment targets have varied, but often include the left temporal cortex. Recent studies have shown surprisingly durable effects, lasting several months, for a treatment that takes only 10 days [14]. While treatment can be guided by neuroimaging that demonstrates areas of overactivity to be suppressed [12, 15], treatment based on the most frequent locus of overactivity, the left auditory cortex, is also effective [14]. Moreover, neuroimaging has demonstrated the expected suppression of the abnormal activity after rTMS treatment [16]. Perhaps surprisingly, multiple stimulation protocols appear to be effective, including 1-25 Hz stimulation and theta burst stimulation, with a variety of stimulation strengths. Because there is no noticeable effect of stimulation over the auditory cortex, stimulation strength is generally referenced to the threshold for stimulation of muscles when the coil is over the motor cortex.

B. Practical Issues
The TMS coil makes a loud click when pulses of current are passed, and the target of TMS in treatment of tinnitus is the
temporal cortex, which is close to the ear. Therefore, controls for the effects of auditory stimulation are critical. This has been accomplished by use of a sham coil, but the particular electrical and material techniques used to create such a coil vary, and subjects can often detect the sham quality of the stimulation, particularly if they have experienced real TMS. The other practical issue is whether rTMS treatments will need to be repeated indefinitely; the success of rTMS suggests that a more permanent solution for stimulation of the affected area will also be successful [17]. Another option is transcranial direct current stimulation, which is less technically demanding to apply, causes no auditory input, and appears to be equally effective [18].

Table 1 Selected Studies of TMS in Tinnitus Treatment

Year  First Author   Type
2003  Eichhammer P   Case Series
2003  Langguth B     Single subject
2003  Plewnia C      Clinical Trial
2005  Kleinjung T    Clinical Trial – Long-term outcome
2006  Folmer RL      Clinical Trial
2006  Langguth B     Mechanistic Study – PET
2006  Londero A      Review
2006  Richter GT     Single subject
2006  Fregni F       RTMS vs tDCS
2006  Langguth B     Methodology
2006  Londero A      Pilot Clinical Trial (French)
2007  Eichhammer P   Mechanistic Study in Normals
2007  Kleinjung T    Review
2007  De Ridder D    Clinical Trial – burst TMS regimens
2007  Kleinjung T    Clinical Trial – predictors of response
2007  Langguth B     Mechanistic Study
2007  Smith JA       Pilot Clinical Trial
2007  Rossi S        Controlled Clinical Trial
2007  Plewnia C      Clinical Trial
2007  Plewnia C      Clinical Trial – Dose-finding with PET guidance
2008  Kleinjung T    Review
2008  Mennemeier M   Single subject – maintenance therapy
2008  Landgrebe M    Clinical Trial
2008  Kleinjung T    Pilot Clinical Trial – 2 rTMS locations
2008  Lee SL         Pilot Clinical Trial
2008  Khedr EM       Clinical Trial – dose finding
2008  Langguth B     Clinical Trial – Priming
2009  Zazzio M       Controlled Clinical Trial + other treatments
2009  Arfeller C     Controlled Clinical Trial – theta burst rTMS
2009  Meeus OM       Review
2009  Marcondes RA   Controlled Clinical Trial
2009  Mobascher A    Review (German)
2009  Poreisz C      Clinical Trial
2009  Khedr EM       Clinical Trial
2009  Kleinjung T    Clinical Trial + L-dopa
2010  Frank G        Retrospective
IV. CONCLUSIONS RTMS appears to be a promising treatment for tinnitus, and an example in which knowledge of central nervous system activity can be used to design an intervention to treat a disease state.
ACKNOWLEDGMENT Dr. Wittenberg is supported by the Department of Veterans Affairs (Geriatrics Research, Education, and Clinical Center & Rehabilitation Research and Development Program) and Kernan Orthopaedic and Rehabilitation Hospital, Baltimore MD.
REFERENCES
1. Turrigiano GG, Nelson SB (2000) Hebb and homeostasis in neuronal plasticity. Curr Opin Neurobiol 10: 358-364.
2. D'Arsonval MA (1896) Dispositifs pour la mesure des courants alternatifs de toutes fréquences. Comptes Rendus de la Société de Biologie (Paris) 2: 450-451.
3. Barker AT, Jalinous R, Freeston IL (1985) Non-invasive magnetic stimulation of human motor cortex. Lancet 1: 1106-1107.
4. Wolters A, Schmidt A, Schramm A, Zeller D, Naumann M, Kunesch E, Benecke R, Reiners K, Classen J (2005) Timing-dependent plasticity in human primary somatosensory cortex. J Physiol 565: 1039-1052. 10.1113/jphysiol.2005.084954
5. Chen R, Classen J, Gerloff C, Celnik P, Wassermann EM, Hallett M, Cohen LG (1997) Depression of motor cortex excitability by low-frequency transcranial magnetic stimulation. Neurology 48: 1398-1403.
6. Pascual-Leone A, Houser CM, Reese K, Shotland LI, Grafman J, Sato S, Valls-Sole J, Brasil-Neto JP, Wassermann EM, Cohen LG, et al. (1993) Safety of rapid-rate transcranial magnetic stimulation in normal volunteers. Electroencephalogr Clin Neurophysiol 89: 120-130.
7. Chen R, Gerloff C, Classen J, Wassermann EM, Hallett M, Cohen LG (1997) Safety of different inter-train intervals for repetitive transcranial magnetic stimulation and recommendations for safe ranges of stimulation parameters. Electroencephalogr Clin Neurophysiol 105: 415-421.
8. Huang YZ, Rothwell JC (2004) The effect of short-duration bursts of high-frequency, low-intensity transcranial magnetic stimulation on the human motor cortex. Clin Neurophysiol 115: 1069-1075. 10.1016/j.clinph.2003.12.026 9. Talelli P, Greenwood RJ, Rothwell JC (2007) Exploring Theta Burst Stimulation as an intervention to improve motor recovery in chronic stroke. Clin Neurophysiol 118: 333-342. 10.1016/j.clinph.2006.10.014 10. Zafar N, Paulus W, Sommer M (2008) Comparative assessment of best conventional with best theta burst repetitive transcranial magnetic stimulation protocols on human motor cortex excitability. Clin Neurophysiol 119: 1393-1399. 10.1016/j.clinph.2008.02.006 11. Pascual-Leone A, Rubio B, Pallardo F, Catala MD (1996) Rapid-rate transcranial magnetic stimulation of left dorsolateral prefrontal cortex in drug-resistant depression. Lancet 348: 233-237. 12. Langguth B, Eichhammer P, Wiegand R, Marienhegen J, Maenner P, Jacob P, Hajak G (2003) Neuronavigated rTMS in a patient with chronic tinnitus. Effects of 4 weeks treatment. Neuroreport 14: 977980. 10.1097/01.wnr.0000068897.39523.41 13. Plewnia C, Bartels M, Gerloff C (2003) Transient suppression of tinnitus by transcranial magnetic stimulation. Ann Neurol 53: 263266. 10.1002/ana.10468 14. Khedr EM, Rothwell JC, El-Atar A (2009) One-year follow up of patients with chronic tinnitus treated with left temporoparietal rTMS. Eur J Neurol 16: 404-408. 10.1111/j.1468-1331.2008.02522.x 15. Kleinjung T, Eichhammer P, Langguth B, Jacob P, Marienhagen J, Hajak G, Wolf SR, Strutz J (2005) Long-term effects of repetitive transcranial magnetic stimulation (rTMS) in patients with chronic tinnitus. Otolaryngol Head Neck Surg 132: 566-569. 10.1016/j.otohns.2004.09.134 16. Smith JA, Mennemeier M, Bartel T, Chelette KC, Kimbrell T, Triggs W, Dornhoffer JL (2007) Repetitive transcranial magnetic stimulation for tinnitus: a pilot study. Laryngoscope 117: 529-534. 10.1097/MLG.0b013e31802f4154 17. De Ridder D, De Mulder G, Walsh V, Muggleton N, Sunaert S, Moller A (2004) Magnetic and electrical stimulation of the auditory cortex for intractable tinnitus. Case report. J Neurosurg 100: 560-564. 10.3171/jns.2004.100.3.0560 18. Fregni F, Marcondes R, Boggio PS, Marcolin MA, Rigonatti SP, Sanchez TG, Nitsche MA, Pascual-Leone A (2006) Transient tinnitus suppression induced by repetitive transcranial magnetic stimulation and transcranial direct current stimulation. Eur J Neurol 13: 9961001. 10.1111/j.1468-1331.2006.01414.x
Author: George F. Wittenberg
Institute: VAMHCS/GRECC
Street: 10 N Greene St., (BT/18/GR)
City: Baltimore
Country: USA
Email: [email protected]
A Course Guideline for Biomedical Engineering Modeling and Design for Freshmen W.C. Wong and E.B. Haase Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD Abstract–– Johns Hopkins University’s Biomedical Engineering (BME) Department Freshmen Modeling and Design course provides a taste of BME by integrating first-order modeling of physiological systems with quantitative experimentation at a level that freshmen students can understand. It is a team-based course from both instructor and student perspectives, combining lectures with practical and project components. The freshmen teams consist of 5-6 students, each mentored by a faculty adviser, and guided by a graduate student teaching assistant and upperclassmen BME lab managers. Projects are completed and graded as a group. To encourage teamwork and participation, a peer evaluation system is employed, in which a student receives a modified grade based on the group's grade and their personal contributions to the project. For a cohort of about 130 freshmen, typically more than 20 faculty members, 12 graduate student teaching assistants and 14-18 BME upperclassmen lab managers are involved, which is a unique aspect of this course. By putting freshmen students into close contact with many members of faculty and student body, this course aims to serve as a springboard for students to explore the diverse BME landscape, as well as foster a greater awareness of the opportunities of student life on campus. Keywords–– biomedical, engineering, education, freshmen, modeling.
I. INTRODUCTION Freshmen college students face a range of decisions, such as which academic discipline to pursue, which laboratory to work in, which social group to associate with and which extracurricular activities to pursue. Each of these decisions may have a profound impact on their future. Freshmen BME majors at Johns Hopkins not only need to decide whether BME is a suitable discipline for them, they also need to choose a focus area such as cell and tissue engineering, systems biology, biomechanics, biomedical sensors and devices, computational modeling and bioinformatics. This mandatory 2-credit freshmen course in modeling and design helps our students explore different fields through a diverse range of lecture topics and projects. Freshmen are in direct contact with faculty members, graduate students and undergraduates working in laboratories throughout the University. The team-based format builds a social network which helps support them through the rest of their college career, regardless of the decisions they make.
The primary goal of this course is to engage our students as active BMEs from their very first day at Johns Hopkins. Emphasis is placed on developing critical thinking, problem solving, interpersonal and leadership skills that are relevant across a wide range of disciplines, rather than on teaching subject specific knowledge that students will acquire in the subsequent years of their college education. The difficulty in challenge-based teaching is that the students have not learned the skills or information they need before they start their projects, a situation encountered in other freshmen biomedical engineering courses [1]. This course has been designed to provide students with enough guidance to successfully transition from high school to college, while also fulfilling a number of ABET criteria (a), (b), (d), (e) and (g) [2]. In addition, some of the independent projects require equipment design (c) or consideration of ethical concerns (f).
II. ORGANIZATION The typical freshmen class for the Johns Hopkins BME department is approximately 130 students per year. Every freshman is required to attend a one-hour lecture once a week. In addition, freshmen are organized into teams of 5-6 students, resulting in about 25 teams in total. Each team is assigned a faculty member and a graduate student Teaching Assistant (TA). Each team undertakes 5 laboratory modules, in which they design their own experimental protocols, perform experiments in lab and write reports collectively as a team. Every year 14-20 upperclassmen laboratory managers provide on-the-spot guidance to students and ensure safety in the laboratory. The lab managers also serve as mentors to the freshmen, giving advice on course selection and extracurricular activities, which has been shown to benefit both the upperclassmen and the freshmen [3]. In total, over 60 people are involved in teaching this freshmen course. Teams meet with their faculty advisers informally on a biweekly basis. The role of the faculty adviser is to introduce students to an aspect of their scientific research and life at Hopkins, as well as to help students prepare for each laboratory module by going through relevant concepts and ideas. Each faculty adviser is given the choice to structure the meeting and discussion in the manner of their preference. Students are given the freedom to design and implement their own protocols. The role of the TA is to ensure that the students are fully prepared for each laboratory module,
ensure that protocols are safe and scientific, and to grade the laboratory reports. The course director provides each faculty member with a handbook describing the course projects in detail. The handbook also suggests questions faculty may ask to encourage discussion of a specific project. In addition, the course director meets personally with all of the TAs and upperclassmen laboratory managers prior to each of the five projects. During these meetings, the course director goes through the theory and procedures behind each laboratory exercise. These meetings help ensure students have a relatively uniform learning experience and grading criteria.
III. LECTURES
There are usually 14 one-hour lectures during the semester. These lectures are not directly related to the laboratory modules, but are on various pertinent topics such as laboratory safety, engineering design, library resources, etc. Since lectures are scheduled according to availability of the presenters, they vary slightly from year to year. Table 1 is a compilation of topics covered over the years.

Table 1 Bi-weekly lecture topics
1. Laboratory Safety
2. Engineering Design
3. Introduction to the Library
4. Six Flags Trip: Measuring HR and acceleration
5. Introduction to Physiology
6. Sensors
7. Department Chair Presentation
8. You and the IRB
9. Introduction to Statistics
10. Effective Oral and Poster Presentations
11. Undergraduate Research Day
12. Patents, Licensing and Technology
13. Matlab in a Nutshell
14. Design Team Presentations
IV. LABORATORY MODULES
Each laboratory module has a different theme, with three experiments aimed at modeling a certain aspect of the human body (human efficiency, static and dynamic arm, and cardiovascular system), an engineering exercise using foam core material and an independent project. Experiments are typically presented to students in an open-ended manner, with an accompanying list of essential facts and equations provided for guidance. Students are expected, under the guidance of their TA and Faculty Advisor, to
design their own experimental protocol, which will be reviewed by TAs and approved by Lab Managers before the start of each experiment.

A. Model of Human Efficiency
The first project models human efficiency using the simple equation:

Efficiency = output/input     (1)

Students develop this definition even further for humans by realizing that "output" can be measured through work or exercise. Initially the students tend to guess that "input" is food intake. Further questioning helps them to understand that oxygen consumption is a much more accurate measurement of the energy used by the body during a specific period of time. Students use their model to predict how much oxygen they would need to do a precise amount of work. The students design laboratory experiments to determine human efficiency at rest and during exercise by calculating the work done in a repetitive exercise (output) and measuring oxygen use, and consequently energy consumption (input). Oxygen consumption is computed from measurements of the subject's tidal volume obtained using the Biopac data acquisition system. Students are introduced to the noisiness of biological measurements and the necessity of making certain assumptions in the acquisition of data, e.g. negligible anaerobic respiration under controlled exercise conditions. They also learn to make quantitative comparisons to verify possible differences in efficiency due to gender and conditioning (between an athlete and a non-athlete).

B. Model of Static and Dynamic Arm
The second project estimates the force required in an arm muscle using both static and dynamic models. Using a static model, such as the one sketched in Fig. 1, students learn to solve for the force in the deltoid muscle. The maximum possible deltoid muscle force is determined by multiplying the cross-sectional area of the muscle by an average maximum muscle stress of approximately 30 N/cm². Through a combination of estimates and measurements, the students may solve for the force in the deltoid muscle that is required to hold a specific load, depicted as Fload in Fig. 1. Changing the value of the estimated parameters in the models allows the students to determine which variables have the greatest effect on muscle force. The students are usually surprised to discover that the arm length and weight have little effect on the force required by the deltoid muscle. The point and angle of attachment of the deltoid muscle, which vary between males and females, are the most important variables in calculating deltoid force.
IFMBE Proceedings Vol. 32
58
W.C. Wong and E.B. Haase
C. Models of the Cardiovascular System
Fig. 1 Diagram of Free Body Diagram of Static Arm Model Fload = weight held, Farm = weight of arm, Fdelt=force of deltoid muscle, Fshoulder=forces in shoulder muscle. Arm is modeled as a cylinder of constant diameter.
Fig. 2 Arm Model Triceps Force-Length Relationship The force-length relationship has been linearized. Muscle force is calculated as the maximum possible muscle force multiplied by a forcelength factor between 0 and 1. A force-length factor of 1 indicates that the muscle is close to its resting length of lo. A factor close to 0 indicates that the muscle is already very contracted or stretched, and cannot generate its maximum force. In the dynamic arm model, the forearm is propelled by the contraction of the triceps muscle. The contraction force of the triceps is modeled as a function dependent on muscle length and contraction velocity. The differential equation of this system is solved numerically using an Excel spreadsheet, such that students are not required to possess any programming background. The estimated force in the triceps is calculated as the maximum force multiplied by two factors valued between 0 and 1.0; a force-length factor and a force-velocity factor. Figure 2 illustrates the forcelength relationship used in the dynamic arm model. The force-velocity relationship is also linearized in this model. The students can change, or even remove, these muscle relationships in their own models.
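A Python equivalent of the spreadsheet integration might look like the following sketch. The linearized force-length and force-velocity factors mirror the description above, while every constant is an illustrative assumption.

```python
def simulate_dynamic_arm(f_max=1000.0, l0=0.25, inertia=0.05, r=0.02,
                         dt=0.001, t_end=0.1):
    """Euler integration of forearm extension driven by the triceps.

    Muscle force = f_max * (force-length factor) * (force-velocity factor),
    both factors linearized to lie between 0 and 1, as in Fig. 2."""
    angle, omega = 0.0, 0.0                    # joint angle (rad), velocity (rad/s)
    for _ in range(int(t_end / dt)):
        l = l0 - angle * r                     # muscle shortens as the arm extends
        v = omega * r                          # muscle shortening velocity (m/s)
        fl = max(0.0, 1.0 - abs(l - l0) / (0.5 * l0))  # force-length factor
        fv = max(0.0, 1.0 - v / 2.0)                   # force-velocity factor
        torque = f_max * fl * fv * r
        omega += (torque / inertia) * dt       # Euler step for velocity
        angle += omega * dt                    # Euler step for position
    return angle, omega

angle, omega = simulate_dynamic_arm()
print(f"extension after 0.1 s: {angle:.2f} rad")
```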
C. Models of the Cardiovascular System
The third project studies the cardiovascular system through two different approaches: a hydraulic model using tubes and pumps, and an electric model using breadboards, DC current supplies and ohmic resistors. In the constant-pressure hydraulic model, students are introduced to the factors that affect the flow of water through a rigid pipe, such as length and diameter, to derive the flow equation, essentially Poiseuille's law; a numerical illustration of this scaling appears in the sketch following the project descriptions below. Students learn to draw analogies between a hydraulic system and the human circulatory system to estimate the effect of a change in diameter of a blood vessel on flow. In the electric model, basic concepts of flow and resistance in series and parallel circuits are introduced. The students use this model to estimate the change in resistance of the body before and during exercise by measuring the change in mean arterial pressure and heart rate. The most enjoyable aspect of studying the cardiovascular system is a field trip to Six Flags Amusement Park. Previous BME design teams developed the SHARD (Synchronous Heart rate and Acceleration Recording Device) to simultaneously measure heart rate, using a Polar heart rate monitor, and ride acceleration, using perpendicular accelerometers. Students plan experiments on three rides to determine the correlation between heart rate and acceleration through activation of the baroreceptor reflex. Data are analyzed using a Matlab program. Not only is this trip a great team-building exercise, it also provides an opportunity for the students to obtain data outside a traditional laboratory setting.

D. Foam Core Project
One of our students' favorite modules is the foam core project, which is held as a competition between all the student teams. Students are required to design two simple machines which transport a ping pong ball across a distance of 3 meters and back. To make the design exercise more challenging, one of the machines must travel with the ball. The teams are given about a week to come up with the design, and 6 hours to construct the devices using only simple materials such as foam core boards, elastic bands and wooden sticks. Students are also required to give a short presentation on their designs and theoretically estimate the amount of time their machines would take to move the ball. The resultant devices are graded based on the total transit time, originality of the design and the degree of automation of the machine.

E. Independent Project
The final laboratory module is the independent project. Students have 3 weeks to propose a model of a physiological
system and perform a scientific experiment to test their model. The teams present their results at a poster session at the end of the semester, judged by various faculty members, TAs, and the lab managers. At this point in the semester, the freshmen have learned to appreciate the open-endedness of these engineering problems and enjoy having the freedom to design their own project. Some notable past-year projects include:
1. Skin surface area in contact with local cold stimulus on one hand affects intensity of thermoregulatory response in the contralateral hand. This project demonstrated that a cold stimulus in one hand led to vasoconstriction in the contralateral hand. The students concluded that the neural circuits that regulate homeostasis in response to changes in temperature are bilateral.
2. Where is that racket coming from? Effect of binaural cues on sound source localization. This project modeled blind-folded subjects' ability to localize sound at different angles with both ears open, the left ear plugged, and the right ear plugged. The students concluded that sound localization decreased with either ear plugged, and that the locations of sounds from behind the subject were inaccurately predicted whether an ear was plugged or not.
V. GRADING AND ASSESSMENT
Since all laboratory reports are submitted as team work, each team is assigned a team grade. However, to encourage individual participation and a fair distribution of the work load within each group, there is a peer evaluation system in place such that team members grade one another and themselves based on their commitment and contribution to the project. A student who contributes more than their peers to the project receives a grade above the team grade, and vice versa. If all students contributed equally to the project, which is often the case, then each student receives the same team grade. Upon completion of the course, students complete an online survey to provide feedback regarding many different aspects of the course. The comments in Table 2 are in response to the value of the independent project.
VI. CONCLUSIONS
The range of topics covered during the lectures and five modules provides the freshmen with the requisite information to feel confident about their career choice in BME. The team-based format and multi-level teaching style allow the students to develop relationships with peers, faculty, and mentors. The variety of required
outcomes (lab reports, oral presentations, projects, and posters) gives the students experience presenting their work in many different formats. Through this course, freshmen are exposed to the problem-solving and team-building skills that are crucial to a career in BME.

Table 2 Anonymous survey results regarding what the students learned from their independent projects. First 12 of the 98 responses

ID | Response
1 | A basic understanding of how the body can be recreated using free diagrams and simple models
2 | I think i really learned the concept of modeling. I'm glad we had this project and feel good finishing it.
3 | I feel I had learned many things about how biomedical engineers model something in simple way.
4 | I learned from each project about different topics which gave me a more generalized knowledge of the biomedical field.
6 | I feel I have a bit more of an idea of what is involved in biomedical engineering, at least the design aspect of it.
7 | I was exposed to the various fields of BME, and I was introduced to basic modeling. With the independent project, it was really great to be able to design our own experiment, and I learned how to put together a research poster, which I've never done before. The judges were also really helpful in telling us what to do next time
8 | I feel that I learned a great deal about the importance of modeling in the engineering process.
9 | Teamwork, designing/building skills
10 | more insight on what biomedical engineering field is like
11 | A greater understanding of how engineers work together.
12 | I felt that one of the most important things I got out of this was how to work with a group on a collegiate level.
ACKNOWLEDGMENTS
We would like to thank freshmen teams 12 and 16 for the projects described in this paper, specifically R. Chang, J. Fang, P. He, B. Ha, R. Romano, G. Wang, P. Adstamongkonkul, D. Dorfman, A. Harwell, C. Kemper, L. Wu, and W. Zhong, and B. Chapman, J. Jung, E. Kim, A. Mateen, and K. Takach (team 17) for the sketch of their foam core project. We would also like to acknowledge the work of Dr. Robert Susil on the dynamic arm model.
REFERENCES
1. De Jongh Curry, A.L., Eckstein, E.C. (2005) Gait-model for freshmen level introductory course in biomedical engineering. Proc. of the 2005 Am. Soc. for Engineering Education Annual Conference and Exposition
2. Accreditation Board for Engineering and Technology (ABET) (2008) Criteria for accrediting engineering programs effective for the evaluations during the 2009-2010 accreditation cycle, Baltimore
3. Patel, K.V., DeMarco, R., Foulds, R. (2002) Integrating biomedical engineering design into the freshmen curriculum. IEEE Xplore, pp 143-144
Author References and Contacts
Wing Chung Wong, 1 E University Pkwy Apt 603, Baltimore, MD 21218
[email protected]
Eileen Haase, Dept of BME, Clark Hall 318, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218
[email protected]
Classroom Nuclear Magnetic Resonance System
C.L. Zimmerman1, E.S. Boyden2, and S.C. Wasserman2
1 Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
2 Department of Biological Engineering, Massachusetts Institute of Technology
Abstract— A low-field classroom NMR system was developed that will enable hands-on learning of NMR and MRI concepts in a Biological Engineering laboratory course. A permanent magnet system was built to produce a static field of B0 = 0.133 Tesla. A single coil is used in a resonant probe circuit for both transmitting the excitation pulses and detecting the NMR signal. An FPGA is used to produce the excitation pulses and process the received NMR signals. With this system, nuclear magnetic resonance signals can be observed directly in the time domain, and relaxation time constants of glycerin samples can easily be measured. Future work will allow further MRI exploration by incorporating gradient magnetic field coils.
I. INTRODUCTION
The primary motivation for this research is the need for students to understand the principles of NMR, which is the basis for its medical imaging application, MRI. Because MRI is a relatively new imaging technique, there is an abundance of active research in the MRI field, which continues to contribute to the already broad range of MRI functionality. Because of the ubiquity of NMR as the basic principle behind MRI and other technologies, it is an important topic of learning for science and engineering students, particularly in bioengineering. The NMR system presented here achieves pulsed NMR capabilities and time-domain observation. The system will be used in an MIT laboratory course, Biological Instrumentation and Measurement. Further work may extend the system presented here to MRI.
A. Past Work
Given the scientific importance of NMR, it is no surprise that there are many sources of work relevant to this project. For example, [1] describes a small desktop MR system developed using permanent magnets and inexpensive RF integrated circuits at the Magnetic Resonance Systems Lab at Texas A&M. The C-shaped magnetic setup had a static field of 0.21 T and an imaging region of 2 cm. The most relevant past work for this project is an NMR system developed at MIT for an undergraduate physics lab [2]. This NMR system allows students to do pulsed NMR experiments, and has a static field strength of 0.17 T.
A laboratory module similar to the system at the MIT undergraduate physics lab was developed at Northwestern University [3]. However, that system incorporates gradient coils to allow for spatial encoding demonstrations. This research inspired some of the magnetic system development in our project.
B. Background
Atomic nuclei that have "spin" possess an intrinsic magnetic moment. NMR technology is based on the relationship between the magnetic moments of atomic nuclei and external magnetic fields, and the ability to observe that interaction. NMR experiments can be thought of as having two stages: excitation and acquisition. The critical components of NMR are: a large static homogeneous magnetic field (B0), an oscillating excitation field (B1) which is perpendicular to B0, and a coil to measure the precession of the spins (this may be the same coil that was used to generate B1). In the presence of B0, the nuclear spins align with the field (which is generally taken to be in the z direction). During excitation, the nuclear spins are perturbed from alignment. This is done by applying B1 as magnetic field pulses at the Larmor frequency. B1 rotates the sample's magnetization vector, M, creating a "transverse" magnetization component. The amount by which M is rotated is referred to as the tip angle, θ, and depends on the duration and amplitude of B1. When the spins are perturbed from alignment, they exhibit precessional motion. The frequency of precession is referred to as the Larmor frequency, and is linearly dependent on the field strength of B0. The second stage of pulsed NMR involves observing the precession of the spins. The orientation of M can be measured by the interaction of the magnetization with a receive coil. A changing magnetic field in a coil (produced by the precessing magnetic moments) induces an electromotive force. This corresponds to a voltage that may be observed.
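As a quick numerical illustration of the Larmor relation underlying the excitation stage, the snippet below computes the precession frequency for this system's field strength, and a 90-degree pulse width for a hypothetical B1 amplitude (the system's actual B1 is not stated here).

import math

GAMMA = 2.675e8              # proton gyromagnetic ratio (rad s^-1 T^-1)

B0 = 0.133                   # static field of this system (T)
f0 = GAMMA * B0 / (2 * math.pi)
print(f"Larmor frequency: {f0 / 1e6:.2f} MHz")   # ~5.66 MHz, close to the measured 5.668 MHz

B1 = 1e-4                    # hypothetical excitation field amplitude (T)
tau_90 = (math.pi / 2) / (GAMMA * B1)            # tip angle theta = gamma * B1 * tau
print(f"90-degree pulse width at this B1: {tau_90 * 1e6:.0f} us")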
II. SYSTEM DESIGN
A. System Overview
Figure 1 shows a block diagram of the overall system design. An FPGA development kit (Altera Cyclone III) with
A/D and D/A converters is used to create pulses and process the received signal. B0 was created using a permanent magnet circuit (see section III). The FPGA is used to create RF pulses, which are then amplified by a power amplifier (Minicircuits ZHL-3A). The output of the power amplifier is connected to the probe circuit, where these pulses generate B1, the excitation field. After excitation, the voltage generated across the receive coil is amplified by low-noise pre-amps (Minicircuits ZFL500LN). The amplified signal is then sent to the FPGA, where it is down-modulated and low-pass filtered. While the original NMR signal is in the MHz range, mixing allows us to shift the frequency down and eases low-pass filtering of the signal. The down-modulated NMR signal can then clearly be observed on a computer or oscilloscope.
Fig. 1 Block diagram of System Design
B. Transmit Chain
Verilog code was developed to produce the RF pulses with the FPGA; the Verilog modules are shown in fig. 2. The "ConfigurationRegisters" module is used to set the pulse parameters, such as pulse width, spacing, and frequency. While the program is running, the user can use buttons and switches on the development kit to adjust these parameters.
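A behavioral sketch, in Python rather than Verilog, of what the transmit chain produces: an RF carrier at the excitation frequency, gated by a programmable sequence of pulse widths and spacings of the kind held in the configuration registers. The function name, sample rate, and example pulse values are hypothetical.

import numpy as np

def rf_pulse_train(f_rf, widths, spacings, f_sample=50e6):
    # RF carrier gated by a programmable sequence of pulse widths and
    # spacings (in seconds), analogous to the configuration registers.
    t_total = sum(widths) + sum(spacings)
    t = np.arange(int(t_total * f_sample)) / f_sample
    gate = np.zeros(t.size)
    edge = 0.0
    for w, s in zip(widths, spacings):
        gate[(t >= edge) & (t < edge + w)] = 1.0
        edge += w + s
    return t, gate * np.sin(2 * np.pi * f_rf * t)

# Hypothetical 90-180 pulse pair: 20 us and 40 us pulses, 5 ms apart
t, pulses = rf_pulse_train(5.668e6, [20e-6, 40e-6], [5e-3, 0.0])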
Fig. 2 Block diagram of the transmit chain (produces B1)
C. Receive Chain
Figure 3 shows a block diagram of the overall receive chain and the implemented Verilog modules. After being amplified by cascaded pre-amps, the received NMR signal is input to an A/D converter on the development kit. The signal is down-modulated (by the frequency mixer) and filtered by the FPGA before being observed.
Fig. 3 Block diagram of the receive chain (processes the NMR signal)
D. Isolation
The system's probe circuit serves two purposes. It transduces the power amplifier signal into the magnetic excitation field, and transduces the NMR magnetic field into the received electrical signal. A single solenoid in the probe circuit is used for both purposes, so the system must be designed carefully to isolate the transmit and receive signal chains. Isolation prevents the transmitted pulses from damaging the pre-amps, and helps to reduce noise during NMR signal observation [4]. Crossed diode pairs were used for isolation (shown in fig. 4). There is one set of diodes in series after the power amplifier and one set of shunt diodes before the pre-amps. The received NMR signal will be less than 0.6 volts in amplitude (the conducting voltage for the diodes), and the amplitude of the RF pulses is 10-20 volts. Therefore, when the pulses are being transmitted, all of the diodes will be conducting. The result is the following: the series diodes connected to the power amplifier conduct, and the shunt diodes connect the pre-amp input to ground. Therefore, the large RF pulses can generate the excitation field without damaging the pre-amplifiers. We want to observe the NMR signal after the pulses. During this time, none of the diodes are conducting because the NMR signal is too small. Therefore, there is no conducting path to the power amplifier or to ground through the shunt diodes. This technique was used in [3] and [2], and described in [4].
Fig. 4 Isolation is provided by crossed diode pairs
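Returning to the receive chain, the following sketch illustrates the down-modulation and low-pass filtering that the FPGA performs digitally. The filter order, cutoff, intermediate frequency, and sample rate are illustrative choices, not the system's actual values.

import numpy as np
from scipy.signal import butter, filtfilt

def down_modulate(sig, f_rf, f_if, f_sample, cutoff=100e3):
    # Mix the NMR signal at f_rf down to an intermediate frequency f_if,
    # then low-pass filter away the sum-frequency component.
    t = np.arange(sig.size) / f_sample
    mixed = sig * np.cos(2 * np.pi * (f_rf - f_if) * t)   # frequency mixer
    b, a = butter(4, cutoff / (f_sample / 2))             # low-pass filter
    return filtfilt(b, a, mixed)

# e.g. shift a simulated 5.668 MHz FID down to 20 kHz
fs = 50e6
t = np.arange(int(2e-3 * fs)) / fs
fid = np.exp(-t / 1e-3) * np.sin(2 * np.pi * 5.668e6 * t)
baseband = down_modulate(fid, 5.668e6, 20e3, fs)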
III. MAGNETIC CIRCUIT DESIGN
The purpose of the magnetic circuit is to create B0, the static magnetic field of the NMR system. The field needs to
be homogeneous because the Larmor frequency is dependent on the field strength. Although the idea of using an electromagnet to create B0 was considered, it was decided that permanent magnets were a better choice for this classroom application. To create a homogeneous field with permanent magnets, it was necessary to create a closed magnetic circuit. Finite-element modeling software (Comsol and Quickfield) was used to simulate the magnetic circuit designs. The final magnetic circuit is illustrated in figure 5. The field is created by 3" diameter cylindrical NdFeB magnets. The magnetic field is guided by a welded rectangular yoke made out of low-carbon steel (SAE 1018). Cylindrical spacers and slanted pole pieces were used. The most accurate indication of the field strength is the frequency of the resulting NMR signal, which was found to be f0 = 5.668 MHz. This corresponds to a field strength of B0 = 0.133 T.
Fig. 5 (a) Simulated magnetic field lines (b) photograph of the system

IV. PROBE CIRCUIT
The NMR system uses a single coil (a solenoid) as both a transmitter and receiver. The coil creates a magnetic field when driven with a current (Ampere's law). The coil can also detect the NMR signal because the precessing spins generate a voltage across the coil (Faraday's law). A resonant circuit is generally used to detect the NMR signals because it only allows the detection of a narrow frequency band, which can be tuned to the Larmor frequency of the system (this frequency specificity increases SNR). The resonant circuit design is an LC tank circuit, in which the transmit/receive coil serves as the inductor. The solenoid was designed with physical and electrical constraints in mind. It was wound so that an NMR test tube could fit snugly in the coil, and so that the inductance was a reasonable value for the design. AWG 20 wire was tightly wound around a test tube, and epoxy was then used to hold it together. Properties of the coil are shown in Table 1. Note that the inductance and resistance of the coil were measured at the intended operating frequency. This is especially important for the resistance measurement because at high frequencies the skin effect in the wire becomes significant, effectively increasing the resistance.
The design of the probe circuit is a series LC tank circuit, consisting of L, the coil inductance, and Ct, the tuning capacitor. It is necessary to match the input impedance, Zin, of the probe circuit to the other system components to achieve maximum power transfer and SNR. A matching capacitor, Cm, is added in parallel so that the total input impedance at the resonant frequency may be 50 Ω.

Fig. 6 Resonant circuit with tuning and matching capacitors

With fixed values of ω (the Larmor frequency) and L, the values of Ct and Cm were calculated so that at the resonant frequency Zin = 50 Ω (the imaginary part of the input impedance must be zero for resonance):

Ct = 1 / (Lω² − ω√(50R − R²))    (1)

Cm = (ωL − 1/(ωCt)) / (ω[R² + (ωL − 1/(ωCt))²])    (2)

where R is the coil resistance (Table 1).

Table 1 Probe Circuit Values

Property | Value
Larmor Frequency | 5.668 MHz
Coil Inductance @ 5.668 MHz | 3.443 µH
Coil Resistance @ 5.668 MHz | 2.7 Ω
Tuning Capacitor (Ct) | 250 pF
Matching Capacitor (Cm) | 2100 pF

Table 1 shows the calculated capacitance values of the probe circuit, and figure 7 shows simulations of the probe circuit done with LTspice.
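Assuming the reconstructed forms of Eqs. (1) and (2) above, a quick numerical check with the Table 1 coil values:

import math

f0 = 5.668e6                      # Larmor frequency (Hz)
w = 2 * math.pi * f0
L = 3.443e-6                      # coil inductance (H)
R = 2.7                           # coil resistance at f0 (ohm)

X = math.sqrt(50 * R - R**2)      # required reactance of the series branch
Ct = 1 / (L * w**2 - w * X)                 # Eq. (1)
Cm = X / (w * (R**2 + X**2))                # Eq. (2), since wL - 1/(w*Ct) = X
print(f"Ct = {Ct * 1e12:.0f} pF")   # ~252 pF, matching the 250 pF of Table 1
print(f"Cm = {Cm * 1e12:.0f} pF")   # ~2350 pF; Table 1 lists 2100 pF after trimming

The small discrepancy in Cm is consistent with the trimmer adjustment described below.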
Fig. 7 (a) The result of an 'ac analysis' simulation. The resonant peak corresponds exactly to the desired frequency. (b) The result of a 'transient' simulation, which demonstrates impedance matching. The input voltage source is 10 V and has a source impedance of 50 Ω. If the input impedance of the probe circuit is 50 Ω, then the voltage at the input should be half of the source amplitude
Trimmer capacitors with a range of 12 pF to 120 pF were used in addition to larger ceramic capacitors for Ct and Cm. There are several possible sources of parasitic capacitance in the probe assembly that can affect the behavior of the resonant circuit. There may also be slight variation in the desired resonant frequency due to temperature drift of the magnets or the location of the probe in the magnetic field. Having adjustable capacitors lets us account for all of these effects by allowing a 100 pF range of adjustment. When the entire NMR system is implemented, these capacitors can be adjusted until the maximum NMR signal is achieved.
V. RESULTS
The principal result of this research was the demonstration of received NMR signals with sufficient SNR. Figure 8 shows oscilloscope shots of observed NMR signals.

Fig. 8 Oscilloscope shots of an FID curve and an Echo signal

In order to execute pulse sequences to measure time constants, we must first determine the pulse durations that correspond to 90° and 180° tip angles. The mapping of pulse width to tip angle is shown in figure 9.

Fig. 9 This plot represents the magnetization vector, M, being rotated by the excitation field, B1. The x-axis is the pulse width but also represents the tip angle, θ, of M. The y-axis is the FID amplitude resulting from a pulse

After the durations of the 90° and 180° pulses were determined, we were able to execute the pulse sequences used to measure the time constants, T1 and T2. The pulse sequences used were 90-180 (spin-echo) and 180-90 (inversion-recovery). The spin-echo sequence was used to observe transverse relaxation, which is of the form Mxy(t) = M0 e^(−t/T2). The inversion-recovery sequence was used to observe longitudinal relaxation, which is of the form Mz = M0(1 − 2e^(−t/T1)). The acquired data and curve fits are shown in figure 10. T2 was determined to be 10.8 ms and T1 was determined to be 15.8 ms.

Fig. 10 (a) This is a plot of the data acquired using the spin-echo sequence. The amplitude of the echo was measured to give an accurate indication of the amplitude of the transverse magnetization. (b) This is a plot of the data acquired using the inversion-recovery sequence. A two-pulse sequence is necessary to observe the amplitude of the longitudinal relaxation
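A sketch of how T1 and T2 can be extracted by least-squares fits of the relaxation models above; the data here are synthetic stand-ins generated with the reported time constant, not the actual measurements.

import numpy as np
from scipy.optimize import curve_fit

def t2_decay(t, M0, T2):
    return M0 * np.exp(-t / T2)             # spin-echo amplitude vs. echo time

def t1_recovery(t, M0, T1):
    return M0 * (1 - 2 * np.exp(-t / T1))   # inversion-recovery amplitude

# Synthetic echo amplitudes standing in for measured data
t = np.linspace(2e-3, 50e-3, 12)
rng = np.random.default_rng(0)
echoes = t2_decay(t, 1.0, 10.8e-3) + rng.normal(0, 0.02, t.size)

(M0_fit, T2_fit), _ = curve_fit(t2_decay, t, echoes, p0=(1.0, 5e-3))
print(f"T2 = {T2_fit * 1e3:.1f} ms")        # recovers ~10.8 ms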
VI. CONCLUSION AND FUTURE WORK
A functional low-field pulsed nuclear magnetic resonance (NMR) system for benchtop undergraduate laboratory studies was demonstrated. The developed system will be a useful NMR learning tool for students. Fundamental NMR concepts that are difficult to visualize can be easily demonstrated with this system. The use of the FPGA to produce pulses and process the received NMR signal provides broad flexibility. The FPGA can be used to produce complex pulse sequences, and will also be used to drive gradient coils in the future development of this system. The addition of gradient coils is currently in development and will allow positional detection of the sample. The future goal of this research is to develop the NMR system into a classroom MRI system.

REFERENCES
1. Wright S, Brown D, Porter J et al. (2002) A desktop magnetic resonance system. Magnetic Resonance Materials in Physics, Biology and Medicine, Vol. 13, pp 177-185
2. Kirsch J, Newman R. A pulse NMR experiment for an undergraduate physics laboratory. http://web.mit.edu/8.13/www/JLExperiments/JLExp_12AJP.pdf
3. Hayes C, Sahakian A, Yalvac B (2005) An inexpensive laboratory module to teach principles of NMR/MRI. Proceedings of the 2005 ASEE Conference
4. Fukushima E, Roeder S (1981) Experimental Pulse NMR. Addison-Wesley Publishing Company, Inc.
The Basics of Bioengineering Education
Arthur T. Johnson
Fischell Department of Bioengineering, College Park, MD 20742
Abstract— Bioengineering education often tends towards applied biological science. However, engineering is a profession different from the discipline of biological science. This difference should be maintained in undergraduate bioengineering education. A curriculum based upon fundamentals of engineering, science, math, and liberal studies can give students the flexibility they need to master the challenges of future employment.
Keywords— teaching for success, education, undergraduate curriculum, modern biology, curriculum content.
I. INTRODUCTION
Bioengineering can easily be confused with applied biology, but education for bioengineering students also needs to include engineering fundamentals. Engineering differs from science in that it results in satisfactory products and processes through creative activities. Bioengineering education must emphasize both the science and engineering sides of its roots. There is a tendency today for biological science, in particular, to focus on lower hierarchical levels. This reductionism becomes reflected in bioengineering subjects taught to undergraduates. Yet there is still a need for bioengineers to manage production processes, understand package sterilization, and design instruments for medical use. These talents must be developed through exposure to course topics dealing with all biological levels and systems in general. Bioengineering education should aim to produce biological engineers rather than applied biological scientists. Educational experiences for bioengineering students should stress fundamentals, analogical methods, and a broad range of applications. The sciences of physics, chemistry, biology, and engineering science (especially controls, information transfer, and strengths of materials) need to be included. Calculus, differential equations, and mathematical modeling techniques are very important. Engineers, in particular, draw many of their conclusions from model results in a deductive fashion (as opposed to biologists who, like most scientists, draw general principles from accumulated facts by the process of induction). For engineers to develop models correctly, they must be given solid concepts of the ways things work, and then be able to translate these concepts into (usually) mathematical form. Mathematical manipulation skills are sadly lacking in many of our bioengineering undergraduates today.
Students should learn about terms and nomenclature, which are often quite different for sundry application areas and are needed to communicate with specialists in the different fields. Nomenclature, especially in biology, changes rapidly. Despite the emphasis on nomenclature, bioengineering education should emphasize general principles and possible applications rather than fact memorization. Modern biology emphasizes four approaches: 1) inheritance and information legacy, 2) developmental and ecological explanations, 3) phenotypic plasticity and biodiversity, and 4) relationships within interactive networks. Each of these is conducive to bioengineering understanding and application. Biological responses are very dependent upon the surrounding physical, chemical, and biological environment. Modern epigenetics indicates that environments have more effect than previously believed. It has always been known that living beings respond to environmental cues, but it is becoming clearer now that environment can change the genetic legacy of living things. New paradigms for teaching information legacies need to include these epigenetic effects as well as the cultural legacies known as memes. Creative experiences in an engineering context are called designs, and bioengineering educational programs should not skimp on design projects. Especially if design projects are combined with group projects and have an identifiable communications component, bioengineers will learn the essence of engineering involving biological systems. These are important problem-solving skills necessary for the successful practice of bioengineering. It is this ability to use logic to solve difficult problems that is fundamental to the practice of engineering. Not all bioengineering students will further their education in graduate or professional schools. Undergraduate bioengineering education should serve those who seek employment immediately after the bachelor's degree, as well as those who plan to continue their formal educations. Hence, undergraduate bioengineering education should impart additional skills, such as economics, business management, cultural awareness, and communication. The undergraduate experience should expose students to generalities and fundamental thinking. Undergraduates should not become too specialized in either the knowledge
that they possess or the methods they use. The more versatile they are when they graduate, the more likely it is that they will be successful in their professional endeavors. This also includes the ability to work with non-human biological systems, if need be. Lastly, bioengineering education needs to be flexible. Although progress in biology has largely moved from fundamentals to application details, there is still enough new information added to biological science that bioengineering curricula must be able to accommodate major changes every few years. Breakthroughs in physics, mathematics, engineering science, and chemistry are much less likely in the foreseeable future than are major breakthroughs in biology. This means that courses may have to be added, some dropped, and many changed to keep up with the field. More efficient ways must be found to package information in order to deliver necessary knowledge to students. The ways that courses were taught to present faculty a decade ago may not be appropriate for the students they teach today. It would certainly be a mistake to assume that bioengineering education can be completed in four years. Whether
they continue their formal education or not, bioengineering graduates will necessarily gain knowledge as they pursue their careers. Although it is impossible to cover each of the aforementioned fundamentals completely in a four-year curriculum, it is possible to expose students to them in various ways in a coordinated curriculum. Courses that meet curriculum requirements therefore cannot be considered independent of one another. There should be communication among instructors to make them cognizant of the overall goals of the program and each instructor's part in achieving those goals. It was stated in a recent article in the ASEE Prism (Lord, M., 2010, Not What Students Need, Prism 19(5):44-46) that "there's a pretty big gap between what engineers do in practice and what we think we're preparing them for". Because the world of engineering practice is, and will continue to be, dynamic, we need to assure that our graduates are versatile, good communicators, sound in technical fundamentals, and specialists in technical diversity. Building upon that foundation will lead to success in their careers.
HealthiManage: An Individualized Prediction Algorithm for Type 2 Diabetes Chronic Disease Control
Salim Chemlal1, Sheri Colberg1, Marta Satin-Smith2, Eric Gyuricsko2, Tom Hubbard2, Mark W. Scerbo1, and Frederic D. McKenzie1
1 Old Dominion University, Norfolk, VA, USA
2 Eastern Virginia Medical School, Norfolk, VA, USA
Abstract— This paper describes a prediction algorithm for blood glucose in Type 2 diabetes. An iPhone application was developed that allows patients to record their daily blood glucose levels and provides them with relevant feedback, using the prediction algorithm to help control their blood glucose levels. Several methods using theoretical functions were tested to select the most accurate prediction method. The prediction is adjusted with each glucose reading input by the patient, taking into consideration the time of the glucose reading and the time after the patient's last meal, as well as any physical activity. The individualized prediction algorithm was tested and verified with real patient data and also validated using a non-parametric regression method. The accuracy of the prediction results varied across the different approaches and was adequate for most of the methods tested. The predicted results converged toward the patients' actual glucose readings after each additional input reading. The findings of the research were encouraging, and the predictive system provided what we believe to be helpful feedback to control, improve, and take proactive measures to regulate blood glucose levels.
Keywords— Prediction, blood glucose, diabetes, exercise.
I. INTRODUCTION
There are over 23.6 million children and adults in the United States with diabetes [1]. Type 2 diabetes is the most common form of this chronic disease and is now one of the most rapidly growing forms of diabetes. Type 2 diabetes occurs when the body does not produce enough insulin or loses its ability to use insulin efficiently, which results in glucose building up in the blood instead of entering body cells. Uncontrolled diabetes is the leading cause of kidney failure and is directly responsible for harming blood vessels, leading to early heart attacks, stroke, blindness, and a need for amputations. A management strategy for diabetes is keeping blood sugar in a close-to-normal range, preventing unsafe glucose levels. Our objective is to help Type 2 diabetic patients monitor and control their glucose levels based on daily feedback of glucose regulation and compliance. A prediction algorithm has been developed to provide the patient with feedback about ongoing glucose management
based on a predicted model. We also developed an iPhone application that allows patients to record their daily blood glucose levels and provides relevant feedback using the prediction algorithm. Several methods involving functions such as regression, power series, and exponential functions were considered and tested to select the most appropriate and accurate prediction method. In all methods, the prediction is adjusted with each new glucose reading input by the patient. The prediction also takes into account physical activity, considering the fact that even mild exercise may have a significant effect on blood glucose variation; the duration and intensity of exercise are the key factors that contribute to the effect of an activity on glucose level. The individualized prediction algorithm was tested and verified with realistic patient data. The accuracy of the prediction results varied across the different approaches and was adequate for most methods; however, the last method, involving exponential functions, was the most accurate. As expected, the predicted results converged toward the patients' actual glucose readings after each additional input. The results of our selected prediction method were also validated using a non-parametric regression method. An interactive iPhone application was designed to provide patients with valuable feedback based on the prediction model to help track and control their blood glucose levels. Numerous efforts have been made concerning blood glucose prediction, but they were intended for Type 1 diabetic patients or based on continuous glucose monitoring for short-term predictions using data-driven auto-regressive (AR) models [2,3] or simple regression models [4]. Some of the glucose prediction studies for Type 1 included medication dosing decision support and a GUI interface along with the predictive model [5,6]. No such work had previously been done for Type 2 diabetes.
II. MATERIALS AND METHODS
A. Prediction Strategies
Our objective was to develop a glucose prediction algorithm that can provide patients with valuable feedback
based on comparing predicted values with the actual readings. Several methods using different theoretical functions were considered. The first method involved fitting a typical Type 2 diabetic patient's blood glucose curve over a 24-hour period using a high-order polynomial. The polynomial would change over time based on the patient's blood glucose reading inputs. Since such readings are typically collected once or twice a day, a close fit to an individual patient may only be accurate after several weeks of use and adaptation. This method is based on a least squares curve fitting technique and was applied to the initial idealized glucose values and the readings collected each day from the patient. The idealized glucose values were obtained from a combination of sources and need not be representative of a particular individual, but closer initial values would obviously take less time to converge to accurate predictions. For this method, it is assumed that the best-fit curve has the minimal sum of the squared deviations from our dataset,

min Σ_{i=1}^{n} [y_i − (a_0 + a_1 x_i + a_2 x_i² + … + a_m x_i^m)]²    (1)

where a_0, a_1, a_2, …, a_m are the polynomial coefficients, x_i is the glucose reading time, and y_i is the glucose reading. As a second method, the 24-hour prediction period was split into three smaller periods based on the glucose behavior after meals; the three periods were: before and after breakfast, before and after lunch, and before and after dinner. After a meal, the glucose level increases immediately, reaching a peak within approximately 45 min. After this peak, levels fall dramatically, almost as quickly as they rose. When glucose drops back down to the normal range it was in before the meal, which usually occurs within 2 hours after food consumption, it keeps gradually decreasing. Therefore, Lognormal and Weibull functions were considered to represent this last behavior. In this method, Weibull functions were represented as follows,
f(T) = (β/η)(T/η)^(β−1) exp[−(T/η)^β]    (2)
where η and β are the scale and shape parameters, respectively. The three periods of the prediction curve use the same Weibull function, but with different parameters based on the initial idealized glucose curve and any added glucose input readings. The third method was based on the observations and issues encountered in applying the previous methods, making it more logical and reliable. In this method, the 24-hour prediction period is also partitioned into three periods: after breakfast and before lunch, after lunch and before dinner, and after dinner and before breakfast; they are characterized by the same functions, but different parameters. This prediction method also utilizes the idealized glucose values as a starting graph and updates the function parameters after every added input reading. Any food intake causes the glucose level to increase immediately, but the main concern of the prediction is not the increase or peak glucose level after food intake, but the drop and change following a meal and primarily before the next meal. In this method, the drop of glucose after a meal, which starts within 45 minutes after the meal, is assumed to follow an exponential decay. Also, when the glucose level drops back down to near regular levels for an individual patient, it is assumed to follow a linear function that gradually decreases with time. The exponential function is expressed as follows,
y = a + b e^(−x)    (3)

where a and b are the location and rate parameters. It was important to assume a meal time for breakfast, lunch, and dinner in all the above methods. The assumed meal time gets updated with respect to any input reading that is known to be before or after a meal; this again is individualized for each particular patient. For instance, when the assumed meal time increases or decreases by Δt, the prediction curve is shifted left or right. By adjusting the assumed meal time, better accuracy can be achieved, especially if the person is consistent about meal times.
B. Management and Compliance Feedback
Since the goal of the prediction model is to provide reasonably accurate and helpful feedback to the patient, it is critical to have a well-developed and meaningful response mechanism. The flow chart below represents the feedback to be displayed to the patient by comparing the actual input reading (CG) to the predicted reading (PG) at a certain time. The feedback process was developed with the help of the pediatric endocrinologists on our team. The feedback process has two main parts based on the glucose reading input time: before or after a meal. "Before the meal" readings are past the after-meal glucose peak, which occurs within an hour after finishing a meal. "After the meal" readings are within an hour of finishing the meal. Since this is not a recommended time to take readings, most of the feedback based on the predicted glucose levels is on the before-meal side. If the patient's input reading is before the meal, it is compared to the predicted reading from the predictive model as well as to other variables. The variables on the flow chart are given initial values, such as 60 mg/dL, but may be changed by the patient's physician. For the after-meal path, since the rate of blood glucose increase depends highly on the glycemic index of the meal, we are more interested in predicting the individual patient's usual drop and change following a meal and before the next meal.
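A minimal sketch of the third method's update rule and the feedback comparison: an exponential post-meal drop y = a + b e^(−x) whose assumed meal time and parameters are nudged by each new reading. The class and function names, update rules, and numerical constants are hypothetical simplifications of the individualized fitting described above.

import math

class MealPeriodModel:
    # One of the three daily periods in the third method: an exponential
    # glucose drop after an assumed meal time. Constants are illustrative.
    def __init__(self, meal_time_h, a=100.0, b=60.0):
        self.meal_time_h = meal_time_h     # assumed meal time (hours)
        self.a, self.b = a, b              # location and rate parameters

    def predict(self, t_h):
        # Drop assumed to start ~45 min after the meal
        dt = max(t_h - self.meal_time_h - 0.75, 0.0)
        return self.a + self.b * math.exp(-dt)

    def update(self, t_h, reading, alpha=0.3, before_meal=False):
        if before_meal:
            # Shift the assumed meal time toward the observed reading time
            self.meal_time_h += alpha * (t_h - self.meal_time_h)
        else:
            # Nudge the curve toward the observed reading
            self.a += alpha * (reading - self.predict(t_h))

def feedback(current, predicted, low=60.0):
    # Thresholds are initial values a physician may change
    if current < low:
        return "Low: apply the rule of 15"
    return "Above prediction" if current > predicted else "On track"

dinner = MealPeriodModel(meal_time_h=19.0)
print(feedback(current=55.0, predicted=dinner.predict(21.0)))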
Fig. 1 Feedback Process Flow Chart
C. GUI Interface
The predictive model along with the feedback process was implemented in an iPhone application for simple, daily use by patients. When a patient inputs the current glucose reading, the application provides feedback about their glucose level. The patient may also input the time after the last meal with a scroll menu for a more accurate prediction; otherwise, the prediction model uses an assumed last meal time based on previously collected data. The feedback responses are implemented as alert windows, which pop up instantly after saving the current glucose reading. The iPhone application allows easy everyday use of the predictive system; the valuable feedback provided can help patients control their glucose regulation and keep it as close as possible to the range specified by their doctors.
D. Considerations
The prediction algorithm also takes into consideration any physical activity. The relative utilization of fat and carbohydrate during exercise can vary enormously and depends strongly on exercise intensity. In the case of physical activity, the predictive system adjusts the predictions based on the duration and intensity of exercise.
III. RESULTS
The methods utilized in our prediction algorithm were tested with realistic representative data provided by Children's Hospital of The King's Daughters; observations were made based on the long-term individual behavior and the percentage error between the predicted and actual readings. The first method, using a high-order polynomial, represented the typical graph well but was too computationally complex. The polynomial curve was composed of 48 data points recorded every 30 min from the typical graph, which required a number of high-order coefficients for a good representation. Therefore, replacing or averaging an input reading with the predicted one at that time would still not change the shape enough to adjust to further readings. From the observations and results of the first method, we chose to split the graph into three periods and implement a prediction method for each one separately. The method based on Weibull functions represented the rise and drop of glucose after each meal well; however, it did not handle the period after glucose drops back down to the normal range, which has a slower decreasing rate. The third method, which was based on an exponential function, produced results that are accurate enough to drive the feedback process. We started by dividing the 24-hour curve into three parts based on the major meals, as in method two; however, in this method, we were not concerned much about the immediate rise of glucose after a meal. We were more concerned with predicting the glucose drop and behavior after the meal and primarily before the next meal. In this method, if a reading is recorded before a meal, dinner for instance, it does not change the shape of the exponential function used for dinner, but it adjusts the assumed meal time. In other words, the curve is shifted closer to an assumed meal time, which allows the application to estimate the usual meal time of an individual even if it is not entered for a certain day. This sample reading recorded before dinner is also considered an after-meal reading for lunch; feedback is then given by comparing the prediction with the actual reading in the after-lunch period. The sample reading is then added to the prediction model for next time. Hence, readings either shift the glucose curve up or down, or they change the exponential drop by changing its parameters.
Fig. 2 Blood Glucose Prediction of Actual Patient Data
The figure shows a typical Type 2 diabetic patient graph with idealized values (top red curve), samples of real patient readings before and after dinner (blue stars), a final patient prediction graph after 3 weeks of representative data (bottom black curve), and the two actual readings of the day following the prediction (blue squares). After adding the realistic patient data to the typical curve, represented by stars in the figure, the prediction graph slightly changed the assumed meal times and the exponential function parameters. For instance, the typical graph had 19:00 as a starting dinner time; however, the patient readings were recorded right before dinner at an earlier time. Therefore, the prediction slightly shifted its assumed dinner time from 19:00 to an earlier time; the final assumed dinner time after 3 weeks of readings was 18:30. The model was also validated using a non-parametric regression method. The predictive system was implemented on the iPhone as a user-friendly application, making interaction with patients simple and easy. Reliable and helpful feedback is provided instantly upon inputting any new glucose reading. For instance, when the patient's current reading is too low, below 60 mg/dL, the system recommends the rule of 15 (Fig. 3): consume 15 grams of carbohydrates, wait about 15 minutes, then recheck the glucose level.
IV. CONCLUSION
In this paper we presented a prediction algorithm using different methods to predict blood glucose regulation for Type 2 diabetic patients. Feedback responses are then provided based on a comparison between the predicted and actual readings at the time of the reading. The predictive system also takes into consideration any physical activity based on the total exercise duration and intensity.
Fig. 3 iPhone Application Feedback based on the Prediction Model
The methods were tested and verified with realistic representative data, and the performance was assessed by considering the convergence of the typical data to the representative data and the percentage error. The findings of this research were encouraging, and the predictive system provided what we believe to be helpful feedback to control, improve, and take proactive measures to regulate blood glucose levels. The next step would be to test the system with a statistically significant set of actual patient data.
REFERENCES
[1] "Diabetes Statistics," American Diabetes Association, April 2007, http://www.diabetes.org/diabetes-basics/diabetes-statistics/
[2] Bremer T, Gough D (1999) Is blood glucose predictable from previous values? A solicitation for data. Diabetes 48:445-451
[3] Gani A, Gribok AV, Rajaraman S, Ward WK, Reifman J (2009) Predicting subcutaneous glucose concentration in humans: data-driven glucose modeling. IEEE Trans Biomed Eng, in press
[4] Sparacino G, Zanderigo F, Corazza S, Maran A, Facchinetti A, Cobelli C (2007) Glucose concentration can be predicted ahead in time from continuous glucose monitoring sensor time-series. IEEE Trans Biomed Eng 54(5):931-937
[5] Albisser AM (2005) A graphical user interface for diabetes management that integrates glucose prediction and decision support. Diabetes Technol Ther 7:264-273
[6] Albisser AM, Baidal D, Alejandro R, Ricordi C (2005) Home blood glucose prediction, clinical feasibility and validation in islet cell transplant candidates. Diabetologia, in press

Salim Chemlal, Old Dominion University, Norfolk, USA
E-mail: [email protected]
Dynamic Movement and Property Changes in Live Mesangial Cells by Stimuli
Gi Ja Lee1,2, Samjin Choi1,2, Jeong Hoon Park1,2, Kyung Sook Kim1,2, Ilsung Cho1,2, Sang Ho Lee3, and Hun Kuk Park1,2,*
1 Department of Biomedical Engineering, College of Medicine, Kyung Hee University
2 Healthcare Industry Research Institute, Kyung Hee University
3 Dept. of Nephrology, College of Medicine, Kyung Hee University, Seoul 130-701, Korea
Abstract— Atomic force microscopy (AFM) has become an important device for non-invasive imaging of various cells and biological materials. A major advantage of AFM compared to conventional optical and electron microscopes is its convenience: sample preparation for AFM does not require special coating or vacuum, and AFM can examine samples even under aqueous conditions. Although AFM was originally used to obtain the surface topography of a sample, it can also precisely measure the interactions between its probe tip and the sample surface from force-distance measurements. Glomerular mesangial cells (MC) occupy a central position in the glomerulus. It is known that MC can control not only glomerular filtration, but also the cell response to local injury, including cell proliferation and basement membrane remodeling. It has been reported that an increase in angiotensin II through activation of the renin-angiotensin-aldosterone system (RAAS) causes abnormal MC function. In this study, we observed structural and mechanical changes to MC after Ang II treatment using AFM. Real-time imaging of live cells showed that dynamic movement of the cells was stimulated by angiotensin II injection. Simultaneously, changes in the stiffness and adhesion force of MC caused by angiotensin II and an angiotensin II inhibitor (telmisartan) were revealed using force-distance curve measurements.
Keywords— AFM, Mesangial cell, Real-time imaging, force-distance analysis, RAAS.
I. INTRODUCTION
Atomic force microscopy (AFM) has become an important tool for non-invasive imaging of various cells and biological materials since its invention in 1986 by Binnig et al [1]. The major advantages of AFM over conventional optical and electron microscopes for imaging cells include the fact that no special coating or vacuum is required and that imaging can be done in all environments: air, vacuum, or aqueous conditions. AFM imaging of live cells under physiological conditions is more complicated and challenging even for experts, because cells are soft and easily detached from the substrate. To prevent the detachment of cells during AFM imaging, many researchers have utilized cell fixation methods such as chemical fixatives, micropipettes, trapping by agar, and the pores of filters [2-4]. However, as a result of the fixation, artifacts and depressions were reported during the sample preparation or the measurement process [5]. Murphy M.F. et al. reported that successful imaging of live human cells using AFM was influenced by many variables, including cell culture conditions, cell morphology, surface topography, scan parameters, and cantilever choice [6]. The glomerular mesangial cell (MC) occupies a central anatomical position in the renal glomerulus. The MC not only can control glomerular filtration, but may also be involved in the response to local injury, such as cell proliferation and basement membrane remodeling [7]. Angiotensin II, a potent vasoconstrictor, has a key role in renal injury and in the progression of chronic renal disease of diverse causes [8]. In this study, we performed imaging of live MC by contact mode AFM. From real-time imaging of live cells, we measured the dynamic movement and mechanical changes of cells caused by stimuli such as drug injection.
II. METHODOLOGY
A. Cultured Mesangial Cells
Sprague-Dawley (SD) rats (150~200 g) were used for glomerular cell culture. Glomeruli were isolated from their kidneys by the common sieving method through serial steel meshes. Completely purified glomeruli were collected with a micropipette and used for primary culture. Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 20% fetal bovine serum (FBS), 10 mg/mL bovine insulin, 4 mmol/L glutamine, antibiotic-antimycotic, and 5.5 mg/mL human transferrin was used as the primary cell culture medium. Cells were identified as MCs by their spindle shape in phase contrast microscopy, as well as positive staining with anti-smooth muscle actin and negative staining with cytokeratin and common leucocyte antigen antibodies in immunofluorescent microscopy. We used MCs from rats between the 4th and 9th passages.
B. Preparation for AFM Measurement
Contact mode AFM images and force-distance curves were obtained using the Nanostation II™ (Surface Imaging
Systems, Herzogenrath, Germany). Data acquisition and processing were performed with SPIP™ (Scanning Probe Image Processor, version 4.1, Image Metrology, Denmark). Live MCs were scanned at a resolution of 256×256 pixels with a scan speed of 3 lines/s. We used gold-coated silicon cantilevers for contact mode, and the loading force was adjusted to below 2-3 nN. In order to detect real-time cell responses, angiotensin II and telmisartan (Sigma, St. Louis, Missouri, USA) were applied at a concentration of 5 μM. Once a live cell was identified using the imaging mode, locations for force data were selected. After force curve acquisition was completed, a subsequent image was obtained to make sure that the cell had not shifted.
III. RESULTS AND DISCUSSIONS
Figure 1 shows the topography image of live MC in DMEM medium buffered with HEPES. It has been reported that MCs possess some of the morphological characteristics of vascular smooth muscle cells (SMC), such as bundles of actin filaments [9]. As shown in Figure 1, the images exhibited features associated with cytoskeletal structures, such as actin filaments and other filamentous elements.
Fig. 1 AFM topography and deflection images of live mesangial cells in DMEM medium buffered with HEPES

Figure 2 shows the effect of angiotensin II on MC from a time series of deflection images of live MC. The MC gradually contracted towards the center with the passage of time after angiotensin II addition.

Fig. 2 Time series of deflection images of a live mesangial cell after 1 μM angiotensin II addition: (left) before adding angiotensin II; (right) 15 min after adding angiotensin II

Force curves can provide useful information about the physical properties of a cell [10]. The slope of the extension curve was used to determine the stiffness of the cell. As shown in Figure 3, MC treated with angiotensin II were stiffer than control MC, but MC treated with both angiotensin II and the angiotensin II inhibitor telmisartan were not as stiff as the angiotensin II-treated MC (p < 0.0001).

Table 1 Calculated spring constants of MCs before and at 20 min after Ang II treatment, and at 20 min after Ang II and telmisartan treatment

Cellular spring constant Kmc (N/m) (n=20)
MC before Ang II treatment: 0.031 ± 0.009
MC 20 min after Ang II treatment: 0.109 ± 0.019**
MC 20 min after Ang II & telmisartan treatment: 0.051 ± 0.016*
** p < 0.0001, * p < 0.005
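One common way to turn the extension slope of a force-distance curve into a cellular spring constant is to treat the cantilever and the cell as springs in series. The sketch below uses that approximation with made-up numbers; it is not necessarily the exact procedure used by the authors.

def cell_spring_constant(k_cantilever, slope):
    # Springs in series: 1/slope = 1/k_cantilever + 1/k_cell, where
    # slope is dF/dz of the contact region of the extension curve (N/m).
    return k_cantilever * slope / (k_cantilever - slope)

# Hypothetical values: a 0.06 N/m cantilever and a measured slope of 0.02 N/m
print(f"k_cell = {cell_spring_constant(0.06, 0.02):.3f} N/m")   # 0.030 N/m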
IV. CONCLUSIONS
To our knowledge, this study is the first to image live mesangial cells from the glomerulus by AFM. In order to detect real-time cell responses, we successfully observed the topography changes of MC caused by angiotensin II injection, in particular the cytoskeletal dynamics in MC. Simultaneously, elastic changes of MC caused by angiotensin II and the angiotensin II inhibitor (telmisartan) were revealed using force-distance analysis. From these results, we conclude that the contraction of MC by angiotensin II was effectively blocked by telmisartan.
ACKNOWLEDGMENT
This research was supported by the research fund from Seoul R&BD (grant # CR070054).
REFERENCES
1. Binnig G, Quate CF, Gerber C (1986) Atomic force microscope. Phys Rev Lett 56:930-933
2. Horber JK, Mosbacher J, Haberle W et al. (1995) A look at membrane patches with a scanning force microscope. Biophys J 68:1687-1693
3. Grad A, Ikai A (1995) Method for immobilizing microbial cells on gel surface for dynamic AFM studies. Biophys J 69:2226-2233
4. Kasas S, Ikai A (1995) A method for anchoring round shaped cells for atomic force microscope imaging. Biophys J 68:1678-1680
5. Moloney M, McDonnell L, O'Shea H (2004) Atomic force microscopy of BHK-21 cells; an investigation of cell fixation techniques. Ultramicroscopy 100:153-161
6. Murphy MF, Lalor MJ, Manning FCR et al. (2006) Comparative study of the conditions required to image live human epithelial and fibroblast cells using atomic force microscopy. Microscopy Research and Technique 69:757-765
7. Schlondorff D (1987) The glomerular mesangial cell: an expanding role for a specialized pericyte. FASEB J 1:272-281
8. Klahr S, Morrissey J (1998) Angiotensin II and gene expression in the kidney. Am J Kidney Dis 31:171-176
9. Elger M, Drenckhahn D, Nobiling R et al. (1993) Cultured rat mesangial cells contain smooth muscle α-actin not found in vivo. AJP 142:497-509
10. Volle CB, Ferguson MA, Aidala KE et al. (2008) Quantitative changes in the elasticity and adhesive properties of Escherichia coli ZK1056 prey cells during predation by Bdellovibrio bacteriovorus 109J. Langmuir 24:8102-8110
The corresponding author:
Author: Hun Kuk Park
Institute: Kyung Hee University
Street: 1 Hoegi-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [email protected]
Cooperative Interactions between Myosin II and Cortexillin I Mediated by Actin Filaments during Cellular Deformation
Tianzhi Luo1 and Douglas N. Robinson1,2,3
1 School of Medicine/Department of Cell Biology, Johns Hopkins University, Baltimore, USA
2 School of Medicine/Department of Pharmacology and Molecular Sciences, Johns Hopkins University, Baltimore, USA
3 School of Engineering/Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, USA
Abstract— A mechanosensory system consisting of non-muscle myosin II, cortexillin I, and actin filaments has recently been identified. During cellular deformation, myosin II and cortexillin I cooperatively accumulate in highly deformed regions in response to the applied stress, and the extent of accumulation increases with increasing stress. The cooperativity is suggested to be mediated by the actin filaments. The accumulation of these proteins increases the mechanical resistance of the cells against the external load, leading to diminishing deformation.
Keywords— Mechanosensing, Myosin, Actin, Actin crosslinking protein.
I. INTRODUCTION
Cells are capable of sensing mechanical stimuli and translating them into biochemical signals, which enables the cells to adapt to their physical surroundings by remodeling their cytoskeletal architectures, activating various signaling pathways, and changing their gene expression [1, 2]. These phenomena involve two essential processes, mechanosensing and mechanotransduction. In these processes, force or deformation needs to be transmitted from the outside environment to the proteins and organelles inside the cells. The actin cytoskeleton, composed of actin filaments, myosin motors, and actin-crosslinking proteins (ACLPs), plays a critical role in force propagation and in the response to deformation. Recently, we discovered a new mechanosensing phenomenon in which myosin II and the ACLP cortexillin I cooperatively accumulate in highly deformed regions of dividing Dictyostelium cells, as shown in Fig. 1, and the extent of accumulation increases with increasing force, as shown in Fig. 2 [3, 4]. In addition, the length of the cell in the pipette decreases when the proteins start to accumulate in micropipette aspiration experiments (not shown here). This observation is a typical example of cells protecting themselves by reinforcing their local cytoskeleton in response to external forces.
II. RESULTS AND DISCUSSION
One possible mechanism of the cooperative accumulation is that the binding of myosin to actin enhances cortexillin binding to the actin filament. There are two features of myosin binding to the actin filaments. The first is that the binding lifetime increases with the external force, which explains the positive proportionality between the accumulation and the applied forces shown in Fig. 2 and the accelerated accumulation shown in Fig. 3. The second is that myosin alone is able to bind to actin filaments cooperatively and the corresponding transient curves have a sigmoidal shape [5, 6], which is thought to be the origin of the observed cooperative accumulations. It was proposed that the cooperative binding of two neighboring myosins to a common actin filament is attributable to their elastic interactions mediated by the actin filament. Single-molecule measurements demonstrated that the binding of cortexillin I to the actin filament is not force-dependent (over a -2 to 2 pN range), suggesting that cortexillin alone does not bind to the actin filament cooperatively [4].
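The first feature above can be illustrated with a generic two-pathway (catch/slip) bond model, in which the unbinding rate is the sum of a force-suppressed and a force-promoted pathway. This is only a sketch: the model form and every constant below are illustrative assumptions, not values from refs. [4-6].

import numpy as np

kBT = 4.1                      # thermal energy at room temperature (pN nm)
k_c, x_c = 50.0, 1.5           # catch pathway: fast unbinding, suppressed by force
k_s, x_s = 0.2, 0.4            # slip pathway: slow unbinding, promoted by force

def lifetime(F):
    """Mean bond lifetime (s) under load F (pN) in the two-pathway model."""
    k_off = k_c * np.exp(-F * x_c / kBT) + k_s * np.exp(F * x_s / kBT)
    return 1.0 / k_off

for F in (0.0, 2.0, 5.0, 10.0):
    print(f"F = {F:4.1f} pN  ->  lifetime = {lifetime(F):.2f} s")

Over this few-piconewton range the lifetime grows with load, which is the qualitative behavior needed to explain the force-dependent accumulation.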
Fig. 1 Myosin accumulation during micropipette aspiration, adapted from ref. [4]

Therefore, the elastic deformation in actin filaments caused by myosin binding facilitates the cortexillin binding,
resulting in the cortexillin accumulation. On the other hand, cortexillin cross-links the actin filaments into a network that allows tension to build up, such that the myosin head can feel the tension and its binding lifetime is then increased. We suspect this kind of cooperative binding might also exist among other ACLPs. We simulated the corresponding two-dimensional reaction-diffusion problems using coarse-grained kinetic Monte Carlo simulations. In our simulations, the diffusion coefficients vary in the range of 0.01~100 μm²/s and the characteristic rates of the binding/unbinding reactions range from 0.01 s⁻¹ to 100 s⁻¹. The essential mechanism we propose is that myosin binding leads to local conformational changes in the actin filament, which facilitate cortexillin binding in nearby regions. As expected, the cooperativity increases with the strength of the elastic interactions. The simulations show that myosins and ACLPs accumulate cooperatively, as shown in Fig. 4. The kinetics of accumulation behaves as a Hill-type function. The corresponding time scale of accumulation is also consistent with that of the experiment when physiological values of myosin and cortexillin are utilized.
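A minimal sketch of such a simulation is a one-dimensional lattice Gillespie (kinetic Monte Carlo) model in which bound myosin raises the cortexillin on-rate at neighboring sites. The lattice size, rates, and coupling factor below are illustrative assumptions, not the parameters used in this work.

import numpy as np

rng = np.random.default_rng(0)

N = 200                          # lattice sites along the filament
k_on_myo, k_off_myo = 1.0, 0.1   # myosin binding/unbinding rates (1/s)
k_on_ctx, k_off_ctx = 0.05, 0.1  # baseline cortexillin rates (1/s)
coupling = 20.0                  # on-rate enhancement next to bound myosin

myo = np.zeros(N, dtype=bool)    # site occupied by myosin?
ctx = np.zeros(N, dtype=bool)    # site occupied by cortexillin?

t, t_end = 0.0, 50.0
ctx_frac = []
while t < t_end:
    free = ~(myo | ctx)
    # Cortexillin binds faster next to bound myosin, mimicking filament
    # deformation propagating to adjacent binding sites.
    near_myo = np.roll(myo, 1) | np.roll(myo, -1)
    rates = np.concatenate([
        np.where(free, k_on_myo, 0.0),
        np.where(myo, k_off_myo, 0.0),
        np.where(free, k_on_ctx * np.where(near_myo, coupling, 1.0), 0.0),
        np.where(ctx, k_off_ctx, 0.0),
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)          # Gillespie waiting time
    kind, site = divmod(rng.choice(rates.size, p=rates / total), N)
    if kind == 0:   myo[site] = True
    elif kind == 1: myo[site] = False
    elif kind == 2: ctx[site] = True
    else:           ctx[site] = False
    ctx_frac.append(ctx.mean())

print(f"final cortexillin occupancy: {ctx_frac[-1]:.2f}")

The sigmoidal rise of the cortexillin occupancy with time in such a model is the signature of the cooperativity discussed above.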
Fig. 3 Kinetics of myosin accumulation, adapted from ref. [4]

Fig. 2 Myosin accumulation in different mutants under different applied pressures, adapted from ref. [4]

The shear modulus and stretch modulus of the actin network are known to depend on the concentrations of actin, myosin, and ACLPs. The accumulation of these proteins enhances the local resistance of the actin cortex to external forces and reduces the local strains. As a result, the cell tends to achieve a less deformed shape, i.e., the cell length in the pipette shrinks. Meanwhile, the force felt by each myosin decreases as the myosin concentration increases. Therefore, on one hand, myosin and cortexillin cooperatively accumulate in response to the applied force; on the other hand, the driving force for the protein accumulation continues to diminish due to the shrinking of the cell length in the pipette and the decreasing force applied to each myosin. Using a coarse-grained molecular dynamics simulation scheme initially developed by Discher [7] and later improved by Li [8], we demonstrate that the local enhancement of cortical stiffness associated with protein accumulation drives the cell to crawl away from the pipette. Based on geometric analysis and certain assumed relationships between the moduli and the protein concentrations, we argue that the rate of cell length change equals the negative of the myosin accumulation rate normalized by the instantaneous myosin concentration.

Fig. 4 Cooperative accumulation of myosin and cortexillin in the kinetic Monte Carlo simulation. The strain energy is caused by myosin binding to actin filaments
This prediction is confirmed by plotting the slopes of the kinetic data from the experiments, as shown in Fig. 5.
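The stated relation, dL/dt = -(dm/dt)/m up to a proportionality constant, can be sanity-checked numerically with a synthetic Hill-type accumulation curve m(t); the curve below is invented for illustration and is not the experimental data of Fig. 5.

import numpy as np

t = np.linspace(0.01, 60, 600)
m = 1 + 3 * t**2 / (10.0**2 + t**2)        # synthetic Hill-type accumulation
dm_dt = np.gradient(m, t)

dL_dt = -dm_dt / m                          # predicted rate of length change
L = 5.0 + np.cumsum(dL_dt) * (t[1] - t[0])  # integrate; assumed L0 = 5 um
print(f"length change during accumulation: {L[-1] - L[0]:.2f} um")

As expected, integrating gives L(t) = L0 - ln(m(t)/m(0)), so the cell length shrinks monotonically while the accumulation proceeds.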
Fig. 5 The derivatives of myosin accumulation (blue circles) and the shrinking of the cell length in the pipette (black squares)
III. CONCLUSIONS
We discovered a mechanosensory system in which the cooperative interactions between myosin and the ACLP are suggested to be mediated by the actin filaments. We performed simulations at the protein level and demonstrated that the actin-filament-mediated interaction indeed reproduces certain key features of the in vivo observations. We also explained the diminishing of deformation during protein accumulation.

ACKNOWLEDGMENT
We acknowledge the support of the National Institutes of Health (Grant #GM066817) and the American Cancer Society (Grant #RSG CCG-114122).

REFERENCES
1. Wang N, Tytell JD and Ingber DE (2009) Mechanotransduction at a Distance: Mechanically Coupling the Extracellular Matrix with the Nucleus. Nature Rev Mol Cell Biol 10: 75-82.
2. Chien S (2007) Mechanotransduction and Endothelial Cell Homeostasis: the Wisdom of the Cell. Am J Physiol Heart Circ Physiol 292: H1209-H1224.
3. Effler JC, Kee S, Berk JM, Tran MN, Iglesias PA, and Robinson DN (2006) Mitosis-Specific Mechanosensing and Contractile Protein Redistribution Control Cell Shape. Curr Biol 16: 1962-1967.
4. Ren Y, Effler JC, Norstrom M, Luo T, Firtel RA, Iglesias PA, Rock RS, and Robinson DN (2009) Mechanosensing through Cooperative Interactions between Myosin II and the Actin Crosslinker Cortexillin I. Curr Biol 19: 1421-1428.
5. Greene LE, and Eisenberg E (1980) Single-myosin crossbridge interactions with actin filaments regulated by troponin-tropomyosin. Proc. Natl. Acad. Sci. USA 77: 2616-2620.
6. Trybus KM and Taylor EW (1980) Kinetic studies of the cooperative binding of subfragment 1 to regulated actin. Proc. Natl. Acad. Sci. USA 77: 7209-7213.
7. Discher DE, Boal DH, and Boey SK (1998) Simulations of the Erythrocyte Cytoskeleton at Large Deformation. II. Micropipette Aspiration. Biophys J 75: 1584-1597.
8. Li J, Dao M, Lim CT, and Suresh S (2005) Spectrin-Level Modeling of the Cytoskeleton and Optical Tweezers Stretching of the Erythrocyte. Biophys J 88: 3707-3719.

The address of the corresponding author:
Author: Tianzhi Luo
Institute: Johns Hopkins School of Medicine
Street: 725 N. Wolfe Street, Physiology 100
City: Baltimore
Country: USA
Email: [email protected]
Constitutive Law for Miniaturized Quantitative Microdialysis
C.-f. Chen
Department of Mechanical Engineering/University of Alaska Fairbanks, Fairbanks, AK 99775-5905
Abstract— Miniaturized microdialysis, a membrane-sampling technique, is needed for monitoring "tough" molecular substances such as neurotransmitters, which exhibit limited diffusivity and fast clearance in the synaptic space. This paper uses non-dimensional analysis and combinatorial simulations to predict the sampling performance of miniaturized microdialysis, prior to rigorously prototyping such small devices. As current microdialysis has a sampling resolution too coarse to meet these needs, one aim of this paper is to understand how miniaturized microdialysis would best improve the sampling performance, and to what degree. Our numerical simulations and curve-fitting extrapolations suggest that improved temporal resolution (at least ten times better) is achievable, while retaining the relative recovery, a key factor for quantitative microdialysis, at an acceptable level. At the limit of theoretical downscaling in microdialysis, the results also suggest the need for new operating principles for miniaturized microdialysis.

Keywords— microdialysis, sampling, miniaturization.
I. INTRODUCTION
Microdialysis is an invasive membrane-sampling technique in which a probe is inserted into tissue in vivo, such that one side of a semi-permeable membrane is in contact with extracellular fluid and the other side is flushed with a dialysis fluid (perfusate) that takes up substances (analytes) from the extracellular fluid through the membrane. When coupled with analytical separation techniques, microdialysis enables online monitoring of targeted bioactive analytes. The ability to continuously sample the extracellular compartment has opened up a wide range of applications of microdialysis in biological sample cleanup [1], observation of metabolic activity in human tissues [2], and monitoring of neurotransmitters in the brain [3] since its first presentation in 1966 [4]. Microdialysis also allows for delivery of compounds into targeted extracellular sites [5]. The sampling performance of microdialysis is usually quantified by relative recovery, the ratio of the steady-state analyte concentration in the perfusate to the true value in the extracellular fluid. Relative recovery is determined by the probe size and perfusate flow rate. The former is related to the temporal resolution of microdialysis: the larger the probe, the longer it takes for the analyte concentration to reach its steady-state value. The relative recovery
increases as the perfusate flow rate decreases because analytes are continuously flushed out for further analysis. The continuous sampling in microdialysis indeed creates an environment in which the analyte can never saturate the probe chamber. Interpretation of microdialysis results is typically indirect, based on proportional changes in analyte concentration, with the flow rate affecting the relative recovery. Current microdialysis, typically with temporal and spatial resolutions of about 600 seconds and 0.1 mm³, respectively [6], is too coarse to sample neurotransmitters such as glutamate. The application to online, real-time monitoring of neurotransmitters, if made successful, would greatly enhance our understanding of their metabolic implications for behavioral stimuli, drug abuse treatment, and pharmaceutical agent development. The fast clearance and short diffusion distances of neurotransmitters impose a challenge to existing microdialysis [7]. Problems associated with the microdialysis devices in use include (relatively) large dead volumes, coarse spatial resolution, and traumatic tissue damage associated with probe implantation. Large cross-sectional areas cause significant tissue damage that can hamper interpretation of results [8]. Poor spatial resolution (the probe is large relative to the area sampled) results in a reduced ability to sample a desired tissue region. Prolonged temporal resolution, in particular, is a concern for glutamate detection because of the presumed rapid clearance and short diffusion distances associated with glutamatergic synapses [9]. Recent work with small carbon fibers [10] suggests miniaturization of microdialysis for better sampling resolution. Advances in microfabrication enable the miniaturization of microdialysis, assuming that the operating principle still holds at the small scale. This paper predicts the performance of miniaturized microdialysis by calculating the relative recovery and temporal resolution (i.e., the time to reach the steady state in sampling). The predictive modeling is based on non-dimensional analysis and combinatorial simulations. The former characterizes the sampling process, while the latter uses various combinations of model parameters for quantification. The results are curve-fitted for predicting the performance of miniaturized microdialysis and extrapolating the limit of miniaturization. Finally, we conclude our work by discussing an important implication of the scaling law in microdialysis.
II. IMPLICIT MODEL FOR QUANTITATIVE MICRODIALYSIS
Microdialysis essentially creates a concentration gradient for sampling by diffusion. When implanted in tissue, the microdialysis probe is continuously supplied with clean perfusate so as to build a concentration gradient rising from within the probe chamber (through the porous membrane) to the extracellular space, allowing molecular particles to diffuse through the membrane into the probe chamber. As particles enter the probe chamber, they are flushed by the perfusate flow to the outlet for further chemical analysis. The chamber thus can never be saturated, allowing new particles to come in. In in vivo applications microdialysis sampling is usually operated at steady state, a balance between convection (attributable to the flushing capacity of the perfusate flow) and diffusion (of the analyte). A two-dimensional model (Fig. 1) is used to quantify microdialysis. This model, in consideration of microfabricated prototypes, is described in rectangular coordinates and differs from the conventional models, which were all described in cylindrical coordinates [11, 12]. The model in Fig. 1 shows a portion of the microdialysis probe in the proximity of the porous membrane, through which analytes diffuse from the extracellular fluid, through the membrane, into the probe chamber. The perfusate fluid flows through the chamber from left to right. (Microdialysis usually uses a syringe pump to drive the flow.) Owing to the small channel size, it is appropriate to model the perfusate flow as Poiseuille flow. A non-slip boundary condition along the interior wall of the channel is imposed. Among the many parameters influencing the relative recovery of microdialysis [11], eight parameters, as highlighted in Fig. 1 and listed in Table 1, pertain to the microdialysis probe. The performance of microdialysis is usually quantified by the relative recovery rr, a dimensionless parameter defined as the ratio between c and c∞. It is an implicit function of the other six parameters:

rr = c/c∞ = f(Vavg, μ, ρ, D, H, A)    (1)

Note that the diffusion coefficients of the analyte in the porous membrane and the extracellular fluid have been excluded from Eq. (1), since this paper is aimed at the scaling effect of probe dimensions on microdialysis performance. We simply applied constant diffusion coefficients DMEM = 108 μm²/s (membrane) and DECS = 367 μm²/s (extracellular fluid) in the simulations. The use of the constants DMEM and DECS in our model is appropriate only for describing an instrumental tune-up of microdialysis devices, such as in vitro microdialysis, as is the content of this work. We also assume that the membrane has an out-of-plane dimension equal to that of the probe. The choice of the out-of-plane dimension will not affect the steady-state distribution of analyte in the chamber because a face diffusion problem in three dimensions is equivalent to a line diffusion problem in two dimensions [13].
Fig. 1 Schematic of microdialysis. The performance is governed by the variables shown

Table 1 Microdialysis parameters and their units

Parameter                              Symbol   Dimension
average speed of perfusate flow        Vavg     Lt-1
dynamic viscosity of perfusate         μ        ML-1t-1
density of perfusate                   ρ        ML-3
coefficient of diffusion of analyte    D        L2t-1
characteristic dimension of channel    H        L
area of semi-permeable membrane        A        L2
concentration of analyte in channel    c        ML-3
concentration of analyte in tissue     c∞       ML-3
III. NONDIMENSIONAL ANALYSIS
The scaling effect on the relative recovery of miniaturized microdialysis can be illustrated by performing non-dimensional analysis on Eq. (1) [14]:

rr = c/c∞ = f(HVavg/D, A/H², HVavgρ/μ)    (2)
The relative recovery is governed by three dimensionless groups: HVavg/D (the Péclet number), A/H² (the membrane-to-channel area ratio), and HVavgρ/μ (the Reynolds number). The above equation sheds light on the physics underlying microdialysis. The steady-state analyte concentration in the probe chamber, as formulated by the relative recovery, is dictated by two competing factors: the diffusivity of the analyte, which increases the concentration, and the drifting speed of the perfusate flow, which decreases the concentration. Smaller probes and larger membrane areas are advantageous for higher relative recovery. A large Reynolds number will depress the relative recovery, which follows from holding μ, Vavg, and ρ constant while varying H.
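For orientation, the three groups can be evaluated at the operating point of Fig. 2. The choices below (H taken as the 20-μm chamber height, an assumed 20-μm out-of-plane width for the membrane area, and Vavg = (2/3)V0 for a parabolic profile) are ours, made only to show the arithmetic.

# Dimensionless groups of Eq. (2) at the Fig. 2 operating point (SI units).
rho = 1000.0                   # perfusate density (kg/m^3), 1 g/cm^3
mu = 0.09e-3                   # dynamic viscosity (Pa s), 0.09 cP per Fig. 2
D = 760e-12                    # analyte diffusivity in the chamber (m^2/s)
H = 20e-6                      # characteristic channel dimension (m)
A = 100e-6 * 20e-6             # membrane area, assumed 20-um out-of-plane width
V_avg = (2.0 / 3.0) * 240e-6   # mean of a parabolic profile with V0 = 240 um/s

print(f"Peclet   HVavg/D      = {H * V_avg / D:.2f}")
print(f"Area     A/H^2        = {A / H**2:.2f}")
print(f"Reynolds HVavg*rho/mu = {H * V_avg * rho / mu:.2e}")

The resulting Reynolds number, a few times 10^-2, falls inside the [0.001, 0.1] band analyzed in Fig. 3.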
IV. CONSTITUTIVE LAW FOR QUANTITATIVE MICRODIALYSIS
To develop an explicit version of Eq. (2) in order to quantify the relation between the relative recovery and the other parameters, we also formulate the following combined diffusion-drift equations to describe the sampling process [14]:

∂c/∂t = DECS ∇²c    in extracellular fluid    (3)
∂c/∂t = DMEM ∇²c    in membrane    (4)
∂c/∂t + vx ∂c/∂x = DCHM ∇²c    in probe chamber    (5)
These equations describe the transport of analytes, in the continuum sense, as they diffuse from within the extracellular fluid, through the semi-permeable membrane, and into the probe chamber in which the perfusate fluid flows. The transport problem was implemented in Matlab using the finite difference method. One typical steady-state distribution of the analyte concentration is shown in Fig. 2, in which the horizontal dimension spans the membrane length. A constant line source in the extracellular fluid has been designated at the bottom of the problem domain. Such an arrangement is suitable for modeling microdialysis in vitro, a scenario of placing a microdialysis probe in a large, well-stirred solution reservoir for instrumental performance quantification. A reflective boundary condition was imposed at the top of the domain, where the chamber wall is. No particle is allowed to accumulate at the left- and right-hand sides of the membrane. The perfusate flow (in the chamber) is modeled as a Poiseuille flow. Since the dimensions considered in this study are all in the sub-millimeter range, we assumed that there is no pressure drop across the chamber in the direction of flow. The time when the distribution of concentration reaches its steady state is detected by monitoring the concentration profile of the analyte at a few places in the chamber. Once all the monitored concentrations fluctuate within a preset range (10⁻⁶ in all our simulations) for 10 consecutive time steps, the simulation is deemed to have entered the steady state and is terminated. In the remaining illustrations the relative recovery is defined as the maximum of the concentrations averaged along each vertical line in the chamber. The same procedure is repeated for another seventy-four combinatorial trials of different values for the six parameters of the three dimensionless groups (Eq. (2)). All the cases simulated have a channel cross-sectional area of less than 2000 μm². For each trial we recorded the corresponding relative recovery and the time to reach the steady state. The results are plotted against the Reynolds number (Re) for seven categories of the membrane-to-channel ratio (p2*) in Fig. 3.
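The study's implementation was in Matlab; the Python sketch below illustrates the same kind of solver under simplified assumptions of our own (explicit FTCS update, upwind advection, zero-gradient side boundaries), using the Fig. 2 geometry and diffusion coefficients.

import numpy as np

# Geometry (um): 10-um ECS, 6-um membrane, 20-um chamber; 100 um long.
dx = 1.0
nx, n_ecs, n_mem, n_chm = 100, 10, 6, 20
ny = n_ecs + n_mem + n_chm
D = np.empty((ny, nx))
D[:n_ecs, :] = 367.0                 # D_ECS (um^2/s)
D[n_ecs:n_ecs + n_mem, :] = 108.0    # D_MEM
D[n_ecs + n_mem:, :] = 760.0         # D_CHM

# Poiseuille profile in the chamber, peak V0 = 240 um/s on the centerline.
V0 = 240.0
y = (np.arange(n_chm) + 0.5) / n_chm
vx = np.zeros((ny, nx))
vx[n_ecs + n_mem:, :] = (4 * V0 * y * (1 - y))[:, None]

dt = 0.4 * dx**2 / (4 * D.max())     # conservative explicit stability limit
c = np.zeros((ny, nx))
for step in range(int(5.0 / dt)):    # run to ~5 s of model time
    c[0, :] = 1.0                    # constant line source, c_inf = 1
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0)
           + np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    adv = vx * (c - np.roll(c, 1, 1)) / dx   # upwind derivative along flow
    c += dt * (D * lap - adv)
    c[-1, :] = c[-2, :]                      # reflective chamber wall (top)
    c[:, 0], c[:, -1] = c[:, 1], c[:, -2]    # no accumulation at the sides

# Relative recovery: maximum of the column-averaged chamber concentration.
rr = c[n_ecs + n_mem:, :].mean(axis=0).max()
print(f"relative recovery ~ {rr:.3f}")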
All the cases simulated are in the low-Reynolds-number regime and have steady-state times shorter than 5 seconds. The data points in the Re range of [0.001, 0.1] segregate into an observable pattern and are thus curve-fitted by power-law functions. The fit curves suggest a Re-dependent design guideline, by which, for a given Re value, the relative recovery may achieve a level defined by the rr* value within a duration defined by the corresponding ss* value. The data points labeled (2), (5), and (6) in Fig. 3 show three cases, per the design guideline, and the corresponding distributions of the steady-state analyte concentration. Data points above the fitted rr* curve, such as Cases (3) and (7), represent design scenarios with higher relative recovery. Data point (1) illustrates an undesirable design, which corresponds to low relative recovery. Data point (4), among other sparsely distributed points located in the very low Re range, illustrates the result of a nearly stopped flow, an impractical setup for continuous-flow-based microdialysis. The problem domains of all seven cases illustrated in Fig. 3 are proportionally scaled to Case (5), in which the dimensions are in micrometers and the color bar (atop) is best viewed in color.
Fig. 2 A typical distribution of steady-state analyte concentration in microdialysis. The domain has an x-dimension of 100 μm, representing the membrane length, and is comprised of three horizontal regions: the 10-μm-thick extracellular space (bottom), the 6-μm-thick membrane (middle), and the 20-μm-high chamber (top). The diffusion coefficients for the three media are DECS = 367 μm²/s, DMEM = 108 μm²/s, and DCHM = 760 μm²/s, respectively. An aqueous fluid (ρ = 1 g/cm³, μ = 0.09 cP at 20 °C) is used, which flows rightwards through the chamber with a parabolic velocity profile whose peak velocity (along the middle streamline) is V0 = 240 μm/s. The concentration level is indicated by the color bar atop. A constant line source is imposed at the bottom of the problem domain with a constant concentration level of 1 (i.e., c∞ = 1). The averaged relative recovery at the steady state is 0.462. The steady state is reached 2.1 seconds after the analyte begins to diffuse from the source line at the bottom of the domain

For microdialysis operated under large Péclet numbers (e.g., Case (5) in Fig. 3), the analyte exhibits weaker diffusivity than convectivity (by flow flushing); thus the analyte has less chance to reach the top portion of the chamber. In contrast, Case (4) is associated with a low Péclet number, at
which the analyte quickly permeates the entire chamber before the perfusate flow flushes it out. Some cases associated with a relatively large Re value (e.g., Case (5)) involve a relatively fast perfusate flow rate (e.g., 1.2 mm/s). The back pressure, an unavoidable issue in any continuous flow and an undesired factor in microdialysis, is greatly amplified in microchannels [14] and would be a bottleneck for miniaturizing the microdialysis technique. Is continuous perfusion still effective and efficient in microchannels? One possible solution is to seek a design that operates microdialysis at very low Reynolds numbers (such as Case (4)).
V. CONCLUSIONS
Microdialysis sampling has been formulated and simulated in this paper, with a focus on the temporal performance of a miniaturized microdialysis probe. The performance is essentially determined by the diffusivity of the analyte and the convectivity of the perfusate flow. The fit curves in Fig. 3 are believed to constitute an optimal design criterion, by which the relative recovery can be best achieved in the shortest time. This hypothesis needs further justification. At very low Re numbers the relative recovery becomes less predictable in our results, suggesting something new that cannot be explained by Eq. (2). It raises a question: "Is stopped flow applicable in miniaturized microdialysis?" The answer to this question, a solution to the back pressure issue, is a key to the success of miniaturizing microdialysis.
Fig. 3 Scaled relative recovery (rr*) and scaled equilibrium time (ss*) vs. Reynolds number (Re). p2* resembles the membrane-to-channel area ratio by p2* = A/(100w) (μm²/μm²) + H/100 (μm/μm), where w is the value of the channel's third dimension (out of the plane). rr* = rr/p2*, and ss* = (ss)V0/sqrt(w/AH) (s·μm/s/μm), where V0 is defined in the Fig. 2 caption. Seven data points, labeled (1)-(7), are chosen to illustrate the associated steady-state distributions of analyte concentration, inset in the rr*-Re plot. All seven illustrations are scaled to the dimensions defined in (5)

REFERENCES
1. Wang P C, DeVoe D L, Lee C S (2001) Integration of polymeric membranes with microfluidic networks for bioanalytical applications. Electrophoresis, 22:3857-3867.
2. Benjamin R K, Hochberg F H, Fox E et al. (2004) Review of microdialysis in brain tumors, from concept to application: first annual Carolyn Frye-Halloran symposium. Neuro-Oncology, 65-74.
3. Bourne J (2003) Intracerebral microdialysis: 30 years as a tool for the neuroscientist. Clin. Exp. Pharmacol. & Physiol., 30:16-24.
4. Bito L, Davson H, Levin E et al. (1966) The concentrations of free amino acids and other electrolytes in cerebrospinal fluid, in vivo dialysate of brain and blood plasma of the dog. J. Neurochem., 13:1057-1067.
5. Drew K L, Ungerstedt U (1991) Pergolide presynaptically inhibits calcium-stimulated release of gamma-aminobutyric acid. J. Neurochem., 57:1927.
6. Watson C J, Venton B J, Kennedy R T (2006) In vivo measurements of neurotransmitters by microdialysis sampling. Anal. Chem., 1391-1399.
7. Drew K L, Pehek E A, Rasley B T et al. (2004) Sampling glutamate and GABA with microdialysis: suggestions on how to get the dialysis membrane closer to the synapse. J. Neurosci. Methods, 140:127-131.
8. Bungay P M, Newton-Vinson P, Isele W et al. (2003) Microdialysis of dopamine interpreted with quantitative model incorporating probe implantation trauma. J. Neurochem., 86:932.
9. Cragg S J, Rice M E (2004) Dancing past the DAT at a DA synapse. Trends Neurosci., 27:270-277.
10. Allen C, Peters J L, Sesack S R et al. (2001) Microelectrodes closely approach intact nerve terminals in vivo, while larger devices do not: A study using electrochemistry and electron microscopy. Monitoring Molecules in Neuroscience. Proceedings of the International Conference on In Vivo Methods, 9th, pp 89-90.
11. Bungay P M, Morrison P F, Dedrick R L (1990) Steady state theory for quantitative microdialysis of solutes and water in vivo and in vitro. Life Sci., 46:105-119.
12. Benveniste H, Hüttemeier P C (1990) Microdialysis: theory and application. Prog. Neurobiol., 35:195-215.
13. Crank J (1975) The Mathematics of Diffusion, 2nd edn. Oxford Univ. Press, Oxford.
14. Chen C, Drew K L (2008) Droplet-based microdialysis: concept, theory, and design consideration. J. Chromatogr. A, 1209:29-38.
Non-invasive Estimation of Intracranial Pressure by Means of Retinal Venous Pulsatility
S. Mojtaba Golzan, Stuart L. Graham, and Alberto Avolio
Australian School of Advanced Medicine, Macquarie University, Sydney, Australia
Abstract— Current techniques used to measure intracranial pressure (ICP) are invasive and require surgical procedures to implant pressure catheters in the brain ventricles. The amplitude of central retinal vein pulsations (RVPa) has been shown to be associated with the pressure gradient between intraocular pressure (IOP) and ICP. When IOP approaches ICP, the pressure gradient drops, leading to cessation of the RVPa. In this study we investigate this relationship and define a new method to estimate ICP non-invasively. Ten healthy subjects (mean age 35±10) with clear medical histories were included in this study. Baseline IOP was measured (Goldmann tonometer) and RVP recorded using the Dynamic Vessel Analyser. IOP was actively decreased using 0.5% Iopidine, and RVP was recorded simultaneously every 15 minutes. Digital signal processing techniques were used to measure the mean RVP peak-to-peak amplitude in each cardiac cycle at different IOP levels. Linear regression equations were used to extract a relation between IOP and RVPa and to estimate the pressure at which the RVPa ceases (i.e., RVPa = 0). At this point ICP equals IOP. IOP and ICP pressure waveforms were simulated in order to estimate ICP continuously. Results show a linear relationship between RVPa and IOP, such that RVP decreases with IOP reduction. Estimated ICP ranged between 2 and 13.7 mmHg, all falling in the normal physiological range (i.e., 0-15 mmHg). Analysis of retinal venous pulsation in accordance with IOP may introduce a novel approach for non-invasive estimation of ICP.

Keywords— Intracranial Pressure, Retinal Venous Pulsations, Non-invasive measurement.
I. INTRODUCTION
The measurement of the absolute value of intracranial pressure (ICP) is important in diagnosing and treating various pathophysiological conditions caused by head trauma, hemorrhage, tumors, and inflammatory diseases. Conventional invasive ICP measurement techniques require surgical passage through the skull bone into the brain ventricles, the parenchyma, or the region between the skull and dura mater to implant a measuring transducer. Such invasive techniques, however, are undesirable, as damage to the sensitive brain tissues may result. Moreover, the invasive nature of the procedures induces a risk of
infection. The cerebrospinal fluid (CSF) produced at the choroid plexus of the brain ventricles circulates around the cranium through the subarachnoid space. The CSF compartment surrounds the optic nerve and extends to the posterior aspect of the globe, right up to the lamina cribrosa at the optic nerve head. The central retinal vein (CRV) is contained within the optic nerve at this point and is therefore subject to the pressure interaction between CSF and IOP. Baurmann [1] originally modeled the loss of pulsations of the CRV with intracranial hypertension. This finding was later supported in a clinical study by Kahn [2]. Levine [3] proposed the constant inflow variable outflow (CIVO) theory, which described the disappearance of the retinal venous pulsations (RVP) during intracranial hypertension. During an increase in cerebrospinal fluid pressure (CSFp, also known as the ICP), the CSF pulsations and the mean CSFp rise [4] and approach the intraocular pulse pressure, decreasing the intravascular pressure gradient over the prelaminar and retrolaminar optic nerve and leading to cessation of the RVPa. Levine's hypothesis was supported by Jacks [5], who suggested that the pulsations occur due to a pressure gradient along the central retinal vein as it traverses the lamina cribrosa. Levin [6] found that these pulsations were present in 87.6% of 146 unselected subjects 20-90 years of age and were absent in 100% of 33 patients with increased intracranial pressure. He concluded that the presence of spontaneous venous pulsations was a reliable indicator of an intracranial pressure below 13-14 mmHg. Various systems and methods for the non-invasive measurement of ICP have been suggested. Among these, several attempts have been made to use the ocular circulation to estimate ICP [7-10]. Existing techniques for non-invasive estimation of ICP, including ophthalmoscopic examination for evidence of papilledema in adults or palpation of the fontanelles and skull sutures in infants, are highly qualitative and do not necessarily correlate directly with ICP measurements. The present study examined the relationship between IOP and RVPa and demonstrates a relation between these two parameters that can be used to estimate ICP non-invasively.
II. METHODS
Ten healthy subjects (35 ± 10 yrs) with no history of eye disease and a normal fundus on ophthalmoscopy, with no vascular changes or signs of raised ICP, were included. Baseline IOP was measured using a Goldmann tonometer. The RVPa was then recorded non-invasively for 100 seconds (inferotemporal vein, 1 disc diameter from the optic disc) using the Dynamic Retinal Vessel Analyser (Imedos, Jena, Germany). IOP was lowered using apraclonidine 0.5% (Alcon, Fort Worth) and was measured every 15 minutes, each time followed by a further 100-second RVPa recording from the same site (i.e., the inferotemporal vein). Heart rate (HR) was also recorded throughout. The mean RVPa was subtracted from the recorded RVPa signal, which was then passed through a low-pass filter with a cut-off frequency of 30 Hz (Fig. 1a). A moving-average algorithm was applied to remove baseline wander (Fig. 1b). The recordings were then rectified (Fig. 1c) and peaks were detected. A threshold of mean peak amplitude ± 0.5 peak amplitude was used to eliminate undesired artefact peaks collected at the time of recording (Fig. 1d). The selected peaks were plotted against the intraocular pressure. According to the CIVO hypothesis [3] and other studies [5] described previously, the RVPa is associated with the pressure difference between the IOP and ICP:

RVPa = K (IOP − ICP)    (1)

where K is a constant. According to equation (1), when venous pulsations cease to be present (i.e., RVPa = 0), then ICP = IOP. Linear regression equations were used to relate changes in the RVPa peaks and IOP; Figure 2 is an example of this relation. Based on the linear regression obtained between RVPa peaks and IOP, it is possible to estimate ICP. K was measured and averaged using an iterative method: 9 subjects were used to define K and 1 subject for testing. Rearranging equation (1):

ICP = IOP − RVPa/K    (2)

According to equation (2), the mean ICP can be estimated using the baseline mean IOP, the mean RVPa, and the measured K. This equation can also be used to estimate the mean ICP of each individual cardiac cycle using the RVPa in that cardiac cycle; the equation is applied to each individual cardiac cycle of the RVPa and the relevant mean ICP is measured in each cycle. ICP waveforms are then simulated using equation (3), taken from [11], with the estimated mean ICP added to the simulated waveform; A and B in equation (3) define the ICP pulse pressure and were chosen as A = 1 and B = 0.5. Figure 3 is a diagram of the methodology discussed above.

Fig. 1 (a-d): a) recorded RVPa passed through a low-pass filter; b) baseline wander removed from the recorded RVPa using a moving-average algorithm; c) signal rectified; d) peaks detected with threshold
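A sketch of this peak-extraction pipeline is given below. The sampling rate, filter order, and the synthetic test signal are assumptions, and scipy's standard filtering and peak-finding routines stand in for the original implementation.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250.0                          # assumed sampling rate (Hz)
t = np.arange(0, 100, 1 / fs)       # 100-second recording
# Synthetic RVPa: ~1.2 Hz cardiac pulsation + slow baseline drift + noise.
rvpa = 8 * np.sin(2 * np.pi * 1.2 * t) + 3 * np.sin(2 * np.pi * 0.05 * t) \
       + np.random.default_rng(1).normal(0, 1, t.size)

# 1) Subtract the mean, then low-pass filter at 30 Hz.
x = rvpa - rvpa.mean()
b, a = butter(4, 30 / (fs / 2), btype="low")
x = filtfilt(b, a, x)

# 2) Remove baseline wander with a moving average (~2 s window).
win = int(2 * fs)
x = x - np.convolve(x, np.ones(win) / win, mode="same")

# 3) Rectify and detect peaks.
x = np.abs(x)
peaks, _ = find_peaks(x, distance=int(0.4 * fs))

# 4) Keep peaks within mean +/- 0.5*mean amplitude to reject artefacts.
amp = x[peaks]
keep = np.abs(amp - amp.mean()) <= 0.5 * amp.mean()
print(f"{keep.sum()} of {peaks.size} peaks retained; "
      f"mean RVPa peak = {amp[keep].mean():.2f} (a.u.)")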
Table 1 Baseline IOP, RVPa and estimated ICP for our subjects

Subject No   Baseline IOP (mmHg)   Mean RVPa (µm)   Estimated ICP (mmHg)
1            16                    9.2              9
2            16                    5.3              9.3
3            12                    12.8             6.4
4            19                    15.6             13.7
5            12                    7.5              8.9
6            15                    12.7             9.2
7            15                    7.8              2.7
8            12                    5.2              2.2
9            14                    19.1             4.9
10           18                    11.2             12.4
Fig. 2 Linear regression line used to relate changes in RVPa peak amplitudes (RVP peaks, µm) to IOP changes (mmHg)
Fig. 3 Overall schematic used to simulate and estimate ICP

Fig. 4 Continuous estimated ICP (mmHg) vs. time (seconds). a) ICP estimated based on RVPa peaks and IOP; the dashed line is the mean ICP estimated from the regression equation (as shown in Fig. 2); the error between the continuous and mean estimated ICP is 3%. b) Second tested subject; the error is 5%
III. RESULTS
Table 1 lists the baseline mean IOP, RVPa, and estimated ICP recorded for all subjects. The average K was 2.18 ± 0.8. The RVPa peaks decreased consistently as IOP decreased. The mean ICP in each cardiac cycle was estimated using equation (3). These values were then added to the simulated ICP waveform. Two of the subjects were used to test the algorithm. Figure 4 shows continuous ICP waveforms for each cardiac cycle (solid line) and the overall mean ICP measured from the linear regression equations (dashed line). The error between the solid line and the dashed line is 3% and 5% in the two panels, respectively. The minimum estimated ICP was 2.2 mmHg and the maximum was 13.7 mmHg. We observed that ICP correlated with the height of the subjects (Height (cm) = -1.55 × ICP (mmHg) + 186.9, R² = 0.4). The mean RVPa for all 10 subjects fell from 10.75 µm at baseline to 3.26 µm at the lowest IOP. Table 2 shows the changes in mean IOP, RVPa, and heart rate at baseline and at the lowest IOP for all subjects (i.e., after 45 minutes).
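The regression step behind these estimates can be sketched as follows: fit the RVPa peaks against IOP, read K off the slope (equation (1)), and extrapolate to RVPa = 0, where ICP = IOP. The IOP/RVPa pairs below are invented for illustration, not measured data.

import numpy as np

iop = np.array([16.0, 14.5, 13.0, 11.5])   # IOP at each 15-min step (mmHg)
rvpa = np.array([9.2, 7.1, 5.3, 3.0])      # mean RVPa peak at each step (um)

K, intercept = np.polyfit(iop, rvpa, 1)    # RVPa = K*IOP + intercept
icp_est = -intercept / K                    # IOP at which RVPa = 0 -> ICP
print(f"K = {K:.2f} um/mmHg, estimated ICP = {icp_est:.1f} mmHg")

# Equation (2) applied per cardiac cycle at baseline IOP:
rvpa_cycles = np.array([9.0, 9.4, 8.8, 9.1])   # per-cycle RVPa (um), invented
print(np.round(iop[0] - rvpa_cycles / K, 1))    # per-cycle mean ICP (mmHg)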
Table 2 Changes of IOP, heart rate and mean RVPa

                   t = 0 minutes (baseline)   t = 45 minutes   P value
Mean IOP (mmHg)    15.5 ± 2.9                 10.8 ± 2.9       p < 0.005
Heart Rate (BPM)   74.2 ± 16.6                74.3 ± 18.1      p > 0.1
Mean RVPa (µm)     10.75 ± 4.9                3.26 ± 1.28      p < 0.0001
IV. DISCUSSION
This study investigated a dynamic relation between RVPa peaks and IOP to estimate ICP continuously and non-invasively. Although the ICP waveforms were simulated artificially, the mean ICP values are estimated from real recorded physiological data, which could reflect any ocular or cerebral abnormalities.
The values estimated for ICP all fell in the normal range (0-15 mmHg) [12]. The findings support previous studies which show that the RVPa arises due to a pressure gradient between IOP and ICP [3, 5]. Different studies [6, 13] showed that these pulsations are present in a high percentage of normal subjects; therefore the proposed method could be a reliable approach to non-invasive ICP estimation. There have been several previous attempts to define the relationship between IOP and RVPa peaks [3, 5, 14]. Parsa [15] suggested that lowering intraocular pressure until venous pulsations cease minimizes the pressure gradient between the mean intraocular and retrolaminar pressure, allowing a clinical estimation of intracranial pressure; however, no clinical studies were performed to verify this. The study recently published by Ren [16] showed that the trans-lamina cribrosa pressure difference (IOP - CSFp) was significantly higher in a high-IOP glaucoma group than in controls, and that there was a trend for higher CSFp in those with higher IOP. The limited data in our study would be consistent with this, in that our estimates for ICP rose with higher IOP in these normal subjects, including in the same individuals tested at different times of day on different days. We recognize that the model does not factor in resting levels of retinal venous pressure (RVP), which could influence the actual pulsation amplitude (i.e., a maximally dilated vein, as may occur in some disease states, may not show the same expansion). In this study on healthy normal subjects at rest, mean RVP was assumed to be constant at normal physiological levels. We conclude that studying the relation of the RVPa amplitude in accordance with IOP could introduce a novel method for estimating ICP non-invasively. Further studies will be directed to validation of this method against invasive measurements of ICP.
ACKNOWLEDGMENT
S. M. Golzan is supported by a Macquarie University Research Excellence Scholarship. Supported in part by a research grant from Novartis (Australia).
REFERENCES
1. Baurmann M, "Ueber die Entstehung und klinische Bedeutung des Netzhautvenenpulses," Zusammenkunft deutschen ophthalmol, vol. 45, 1925, pp. 53-59.
2. Kahn EA, C.G., "The clinical importance of spontaneous retinal venous pulsation," Med Bull (Ann Arbor), vol. 10, no. 16, 1950, pp. 305-308.
3. Levine DN, "Spontaneous Pulsation of the Retinal Veins," Microvascular Research, vol. 56, no. 3, 1998, pp. 154-165.
4. Dardenne DA, Lacheron JM, "Cerebrospinal fluid pressure and pulsatility. An experimental study of circulatory and respiratory influences in normal and hydrocephalic dogs," European Neurology, vol. 4, no. 2, 1969, pp. 193-216.
5. Jacks AS, Miller NR, "Spontaneous retinal venous pulsation: aetiology and significance," J Neurol Neurosurg Psychiatry, vol. 74, 2003, pp. 7-9.
6. Levin B, "The clinical significance of spontaneous pulsations of the retinal vein," Archives of Neurology, vol. 35, 1978, pp. 37-40.
7. Kimberly HH, Shah S, Marill K, Noble V, "Correlation of Optic Nerve Sheath Diameter with Direct Measurement of Intracranial Pressure," Academic Emergency Medicine, vol. 15, no. 2, 2008, pp. 201-204.
8. Geeraerts T, et al., "Non-invasive assessment of intracranial pressure using ocular sonography in neurocritical care patients," Intensive Care Medicine, vol. 34, no. 11, 2008, pp. 2062-2067.
9. Newman WD, et al., "Measurement of optic nerve sheath diameter by ultrasound: a means of detecting acute raised intracranial pressure in hydrocephalus," Br J Ophthalmol, vol. 86, no. 10, 2002, pp. 1109-1113; DOI 10.1136/bjo.86.10.1109.
10. Firsching R, Schutze M, Motschmann M, Baumann WB, "Venous ophthalmodynamometry: a noninvasive method for assessment of intracranial pressure," Journal of Neurosurgery, vol. 93, 2000, pp. 33-36.
11. Linninger AA, Tsakiris C, Zhu DC, Xenos M, Roycewicz P, Danziger Z, Penn R, "Pulsatile Cerebrospinal Fluid Dynamics in the Human Brain," IEEE Transactions on Biomedical Engineering, vol. 52, no. 4, 2005, pp. 557-565.
12. Guyton AC, Hall JE, Textbook of Medical Physiology, Elsevier, 2006.
13. Morgan WH, et al., "Retinal venous pulsation in glaucoma and glaucoma suspects," Ophthalmology, vol. 111, no. 8, 2004, pp. 1489-1494.
14. Donnelly SJ, Subramanian PS, "Relationship of Intraocular Pulse Pressure and Spontaneous Venous Pulsations," American Journal of Ophthalmology, vol. 147, 2009, pp. 51-55.
15. Parsa CF, "Spontaneous venous pulsations should be monitored during glaucoma therapy," Br J Ophthalmol, vol. 86, 2002, p. 1187.
16. Ren R, Jonas JB, Tian G, Zhen Y, Ma K, Li S, Wang H, Li B, Zhang X, Wang N, "Cerebrospinal fluid pressure in glaucoma," American Journal of Ophthalmology, 2009.
Author: S. Mojtaba Golzan
Institute: Macquarie University
Street: 2 Technology Place
City: Sydney
Country: Australia
Email: [email protected]
Apparatus for Quantitative Slit-Lamp Ocular Fluorometry
José P. Domingues1,2, Isa Branco2, and António M. Morgado1,2
1 IBILI – Biomedical Institute for Research on Light and Imaging, Coimbra, Portugal
2 Physics Department, University of Coimbra, Portugal
Abstract— Ocular fluorometry has long been used to measure the presence and concentration of tracers in ocular tissues and fluids and, increasingly, to quantify naturally occurring fluorescence. Early detection of abnormalities in diabetic retinopathy has been a major field of application of this non-invasive technique. A slit-lamp based ocular fluorometer has been developed (US patent 06,013,034) based on a multi-element sensor that quantifies fluorescence along line segments of the ocular globe by electronic scanning. In this paper we describe a new version of the ocular fluorometer hardware and present results on the assessment of its performance. Increased flexibility is offered by the possibility of using different kinds of sensors and cameras. Portability is also improved, as USB/RS-232 communications are used for data and command flow between the computer and the data acquisition system. Finally, measurement performance also improved significantly: a lower level of detection (LLOD) down to 1.5 ng/ml fluorescein equivalent concentration has been reached using conventional NMOS multi-element image sensors, and preliminary results indicate that 0.1 ng/ml can be reached with a cooled CCD camera. The 16-bit analog-to-digital converter used and the low-noise amplification allow full use of the sensors' wide dynamic range, linearity, and resolution. Corneal fluorescence measurements are presented as preliminary in vivo low-light-level quantification results.

Keywords— Ocular Fluorometry, Slit-Lamp, CCD multielement sensors.
I. INTRODUCTION
Diabetic retinopathy (DR) is among the major chronic complications of diabetes mellitus. DR is the leading cause of blindness among the adult population in developed countries [1][2], and early diagnosis is a key factor in defining treatments and new therapies. Ocular fluorometry has long been used as an early diagnostic tool for DR [3][4][5] by quantifying blood-retinal barrier leakage. In the eighties the first commercial ocular fluorometer became available (Fluorotron Master, Ocumetrics, USA), and since then considerable research data have been published relating the amount of sodium fluorescein leakage into the vitreous (after systemic administration of the sodium fluorescein tracer) to the grade of diabetic retinopathy, and even to the stage before DR manifests in diabetic patients [3][4][5][7]. More recently this line of
research has been reinforced using more sophisticated instrumentation, including ocular fundus angiographs [6]. The blood-aqueous barrier (BAB) also proved to be a good indicator of alterations in blood vessel permeability and, consequently, of the possibility of measuring DR progression [3][7][13]. Finally, consistent studies indicate that corneal auto-fluorescence is also a good indicator of metabolic control in diabetic patients and is thus related to DR grading [8][9][11][14]. When measuring naturally occurring fluorescence such as corneal auto-fluorescence, there is no need for tracer injection, which is an enormous advantage since some adverse reactions to fluorescein occur. There is also no need to take blood samples, as in the case of blood-ocular barrier permeability evaluation. Therefore, accurate quantification of corneal auto-fluorescence, which is usually as low as 10 ng/ml fluorescein equivalent concentration, is much needed and opens new possibilities in clinical diagnosis.
II. DESCRIPTION OF THE FLUOROMETER
A. Hardware
A slit-lamp based ocular fluorometer has been developed by our group (US patent 06,013,034, EP 0 656 759 B1) with a multi-element sensor to quantify fluorescence along line segments of the ocular globe by electronic scanning. A clinical study to measure fluorescein leakage into the anterior chamber after systemic administration has already been conducted [13]. A new data acquisition system has been developed to improve sensitivity, measurement resolution, and portability, and to increase programmability. This is achieved by using a dsPIC microcontroller (dsPIC30F6012A, Microchip, USA) together with a 16-bit, 1.25 MSPS ADC, and by allowing the possibility of using cooled CCD cameras. Communication with the PC is done either by USB or RS-232, and a robust power supply for the overall instrument has been developed. The next figure depicts a simplified block diagram. This new hardware setup, and mainly the possibility of using a cooled, highly sensitive CCD camera, represents a crucial advancement in performance (sensitivity and spatial resolution) which will make it possible to address other locations in the eye (cornea and vitreous) and, mainly, to test the possibility of quantitatively evaluating different clinical situations
(corneal auto-fluorescence and its relation to the progression of diabetes, corneal epithelial and endothelial function, effects of contact lenses on the cornea, inflammation follow-up, vitreous fluorophotometry). Of course, it keeps compatibility with a set of lower-grade NMOS multi-element image sensors. The system is configured as an add-on to an ordinary slit-lamp, the most common ophthalmic equipment for anterior segment observation, which gives the system the capability of widespread clinical use (Fig. 1).
Fig. 1 Overall system diagram

B. Optics
A Zeiss 30SL/M slit-lamp was used as the primary optical instrument. It provides a general basic platform, supplying the excitation light source and collecting the fluorescence emission. To reach the best results making use of the slit-lamp basic optics, some additional components can be used. Excitation filters must be selected according to the application; the standard slit-lamp filter set does not usually fit our demands. For preliminary tests we used a band-pass filter (460-490 nm) with peak transmission (90%) at 480 nm. At the emission side we introduced a standard high-pass filter, HP 500 nm. Optical amplification is given by the combination of an objective lens and a focusing lens in a classic two-lens system (M = f2/f1), which can be used in conjunction with the slit-lamp's built-in Galilean system. To further improve image illuminance, the use of a cylindrical lens was tested. With this element we can reduce the optical magnification in the Y direction for better detectivity, keeping the axial image resolution over the one-dimensional array detector. With high-sensitivity camera sensors, narrower excitation bands can be used, improving fluorophore selectivity. Narrower slit widths can also be selected, improving spatial resolution. Lower pixel pitches can, of course, still contribute to better resolution.

C. Software
A graphical user interface (GUI) using MatLab and some software tools have been developed for data acquisition system (DAS) programming, data collection, and data analysis. Use of Microchip™ development tools allows us to use C programming for the on-chip microcontroller program design, where basic functions for hardware control are allocated. Modules for sensor/camera driving and reading, for acquisition synchronism and integration timing, for A/D conversion control, and for some arithmetic and statistical processing are included.
Fig. 2 Aspect of a processing window for calibration purposes
III. RESULTS
A. In Vitro
Preliminary tests were performed using a Hamamatsu C5809 multichannel detector head. This uses a thermoelectrically cooled FFT-CCD sensor with 24 µm pixel size in line-binning operation, using the charge integration method. Figure 3 shows the results of in vitro measurements to determine linearity at low fluorescein concentrations (of the
IFMBE Proceedings Vol. 32
Apparatus for Quantitative Slit-Lamp Ocular Fluorometry
order of corneal auto-fluorescence equivalent values, about 10 ng/ml fluorescein equivalent). Linearity was determined to be 2% (largest percentage deviation of the experimental points from the best line fit). This was done using glass cuvettes filled with 5, 10, 15, and 20 ng/ml fluorescein concentrations with standard measurement settings.
Fig. 3 Linearity of measurements for low fluorescein concentrations

Another important parameter is the lowest level of detection (LLOD), defined as the background value (0 ng/ml) plus twice the standard deviation of its measurement distribution. It was found to be 0.1 ng/ml fluorescein concentration with the cooled CCD camera and 1.5 ng/ml with conventional (non-cooled) NMOS sensors. We also measured the lateral spatial resolution using the USAF 1951 target. Figure 4 depicts one of the results obtained with Group 1, Element 6 (3.56 LP/mm). Measurement resolution depends on the Galilean system's optical amplification and on the focusing lenses used. Of course, pixel-to-pixel pitch and overall optics quality are important. We were able to reach 100 µm resolution.
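A sketch of the LLOD computation under this definition is shown below, converting counts to concentration via the slope of the low-concentration calibration line; all sensor readings here are invented for illustration.

import numpy as np

blank = np.array([100.2, 101.1, 99.6, 100.8, 100.1, 99.9])  # counts at 0 ng/ml
conc = np.array([5.0, 10.0, 15.0, 20.0])                     # ng/ml standards
signal = np.array([152.0, 205.0, 255.0, 309.0])              # mean counts per cuvette

slope, intercept = np.polyfit(conc, signal, 1)               # counts per (ng/ml)
llod_counts = blank.mean() + 2 * blank.std(ddof=1)           # background + 2*sigma
llod = (llod_counts - intercept) / slope
print(f"LLOD ~ {llod:.2f} ng/ml")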
This relates directly to ocular axial resolution, as 90° slit-lamp geometry measurements can be performed in the case of corneal measurements.

B. Preliminary Corneal Measurements
The fluorometer proved to be accurate enough to measure lens auto-fluorescence and anterior chamber fluorescence (after either IV or oral fluorescein administration). Corneal auto-fluorescence has significantly lower intensity and its axial extent is only about 500 µm. We used the already mentioned cooled CCD camera with very low dark charge and tested different measurement geometries, optical amplifications, filter sets, and slit widths to obtain the best results. The signal amplification must also be accurately defined. We were able to perform preliminary in vivo corneal measurements in some volunteers with different measurement setups, but measurement calibration and the definition of standard protocols must proceed and be established. Nevertheless, promising results can already be foreseen. Figure 5 shows measurements with 90° slit-lamp geometry and 1 second integration time. The optical setup was the one already briefly described, in which we used a 65 mm focal length focusing lens and 30× Galilean amplification. The left-end peak is the beginning of the lens auto-fluorescence peak (not shown). Two consecutive scans are shown, with the Y scale representing the sensor output in normalized relative units. Inter-scan reproducibility was found to be 7%.
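Reproducibility of this kind can be computed as the mean coefficient of variation between consecutive scans, as in the sketch below; the two profiles are synthetic stand-ins for the Fig. 5 data.

import numpy as np

scan1 = np.array([0.42, 0.55, 0.61, 0.58, 0.47])   # normalized sensor output
scan2 = np.array([0.45, 0.51, 0.66, 0.54, 0.50])

cv = np.std([scan1, scan2], axis=0, ddof=1) / np.mean([scan1, scan2], axis=0)
print(f"inter-scan reproducibility ~ {100 * cv.mean():.0f}%")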
Fig. 5 In vivo cornea fluorescence measurements (two scans)
Fig. 4 Measurements of lateral spatial resolution with the USAF 1951 test target
Much remains to be done towards calibration standards and geometry/optical settings optimization and, finally, in vivo validation studies to ensure the ability of the instrument to report clinically relevant parameters and support diagnosis.
REFERENCES
1. 2008 National Diabetes Fact Sheet, general information and national estimates on diabetes in the United States, Atlanta, GA: US Department of Health and Human Services, 2008.
2. Hall, Michael, Together We Are Stronger, report presented to the IDF Europe General Assembly, Sep. 2008.
3. Yoshida Akitoshi et al., Permeability of blood-ocular barriers in adolescent and adult diabetic patients. British Journal of Ophthalmology 1993; 77: 158-161
4. Cunha-Vaz JG et al. Blood-retinal barrier permeability and its relation to progression of retinopathy in patients with type 2 diabetes. A four-year follow-up study. Graefe's Arch Clin Exp Ophthalmol (1993) 231: 141-145.
5. Cunha-Vaz, J. G. The blood-ocular barriers: past, present and future, Documenta Ophthalmologica 93: 149-157, 1997.
6. C. Lobo, R. Bernardes, J. Figueira et al. Three-year follow-up study of blood-retinal barrier and retinal thickness alterations in patients with type-2 diabetes mellitus and mild non-proliferative diabetic retinopathy. Arch. Ophthalmol., 122: 211-217, 2004
7. Schalnus, Rainer, Christian Ohrloff, Eckart Jungmann, Kerstin Maaβ, Stephan Rinke, Anette Wagner - Permeability of the blood-retinal barrier and the blood-aqueous barrier in Type I diabetes without diabetic retinopathy: simultaneous evaluation with fluorophotometry, German J. Ophthalmol 2: 202-206, 1993.
8. Stolwijk TR, van Best JA. Corneal autofluorescence by fluorophotometry as indicator of diabetic retinopathy. Invest Ophthalmol Vis Sci 32 (Suppl.): 1067 (1991).
9. van Schaik HJ, Coppens J, van den Berg TJ, van Best JA. Autofluorescence distribution along the corneal axis in diabetic and healthy humans. Exp Eye Res 69(5): 505-10 (1999).
10. Cunha-Vaz J, Domingues JPP, Correia CMBA. Ocular Fluorometer, EP 0 656 759 B1, European patent (1998).
11. van Best J et al. Simple, low-cost, portable corneal fluorometer for detection of the level of diabetic retinopathy, Applied Optics, Vol. 37, No. 19, 4303-4311.
12. Cunha-Vaz J, Domingues JPP, Correia CMBA. Ocular Fluorometer, US patent 06,013,034 (2000).
13. Domingues JPP, Figueira J, Correia CM, Cunha-Vaz JG. Blood-aqueous barrier permeability assessment by ocular fluorescence measurements after oral and IV fluorescein administration, IFMBE Proceedings, Vol. 11. Prague: IFMBE, 2005. ISSN 1727-1983. Editors: Jiri Hozman, Peter Kneppo (Proceedings of the 3rd European Medical & Biological Engineering Conference EMBEC'05, Prague, Czech Republic, 20-25.11.2005). 4752 p.
14. Satoshi Ishito et al. Corneal and lens autofluorescence in young insulin-dependent diabetic patients, Ophthalmologica, 212: 301-5 (1998)
Author: José Paulo Domingues
Institute: Biomedical Institute for Research on Light and Image
City: Coimbra
Country: Portugal
Email: [email protected]
Changes in Viscoelastic Properties of Latex Condoms Due to Personal Lubricants
Srilekha Sarkar Das1, Matthew Schwerin1, Donna Walsh1, Charles Tack2, and D. Coleman Richardson1
1 Office of the Science and Engineering Laboratories, Center for Devices and Radiological Health, Food and Drug Administration
2 Biomedical Engineering, Marquette University
Abstract— Changes in the viscoelastic properties of the latex membrane due to personal lubricant application may be a potential cause of condom slippage during use. In this study, swelling and stress relaxation were studied for natural rubber latex condoms in the presence of personal lubricants. Neither swelling nor stress relaxation occurred in the presence of a water-based lubricant. Marginal swelling occurred with a silicone-based lubricant, compared to the positive control, mineral oil. However, no relaxation was observed when a stress relaxation test was performed after the swelling had reached completion, whereas considerable relaxation occurred when the test was performed during the ongoing swelling process. These results suggest that stress relaxation may increase the risk of condom slippage in the presence of a personal lubricant that induces swelling.

Keywords— Latex condoms, lubricants, slippage, swelling, stress relaxation.
I. INTRODUCTION

Condom slippage, one of the primary modes of condom failure, poses an enormous risk to public health. Elastomeric materials such as latex are known to swell in the presence of various chemicals [1], including those that have been used in personal lubricants (e.g., benzoic acid, propane, and scented oils). Therefore, the likelihood of slippage due to swelling could increase as a result of the application of personal lubricants (PL) during condom use. There currently exists no accepted bench test to assess slippage due to swelling of condom material coated with PL. This study proposes stress relaxation as a possible method for evaluating the risk of slippage of swollen natural rubber latex (NRL) condoms in the presence of PL.
II. MATERIALS AND METHODS

Non-lubricated NRL condoms were tested for swelling in the presence of a water-based PL (WB), a silicone-based PL (SB), and petroleum-based mineral oil (MO). The condoms were also tested for stress relaxation with and without PL.

A. Swelling Test

Rectangular samples were stamped from NRL condoms and their appearance was photodocumented in triplicate: before PL application, and 15 and 30 minutes afterward. Sample dimensions were determined using ImageJ software [2] to quantify swelling. Samples were also weighed before and 30 minutes after PL application to compare changes in mass due to PL uptake.

B. Stress Relaxation Test

An RSA3 dynamic mechanical analyzer (TA Instruments, New Castle, DE) was used to observe stress relaxation in rectangular samples stamped from NRL condoms. Data were recorded at 10% strain on samples without PL, on samples that had been soaked for 15 minutes in each PL, and on samples immediately after PL application, to observe any stress relaxation during the swelling process. The strain was applied to the samples in the circumferential direction of the condom. A sketch of the swelling quantification follows.
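As a rough illustration of the swelling quantification step, the percent change in sample area could be computed as below. This is a sketch under our assumptions (rectangular length-by-width areas from the ImageJ measurements); the paper does not state an explicit formula, and all numbers are hypothetical.

def percent_swelling(dims_before_mm, dims_after_mm):
    # Percent area change of a rectangular sample from its measured
    # length and width (mm). The area-based definition is an assumption,
    # not a formula given in the paper.
    area0 = dims_before_mm[0] * dims_before_mm[1]
    area1 = dims_after_mm[0] * dims_after_mm[1]
    return 100.0 * (area1 - area0) / area0

print(percent_swelling((25.0, 10.0), (31.5, 12.7)))  # hypothetical sample, ~60%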
III. RESULTS

The swelling study indicated that the NRL condom material swelled ~65% due to the application of MO and ~10% due to SB. Figure 1 shows representative samples before and after exposure to MO. Swelling after application of WB was less than 1%, which is within the margin of experimental error and therefore negligible. Furthermore, no appreciable increase in swelling was observed after 15 minutes of exposure to any PL.

The stress relaxation study indicated that stress remained constant for two hours in condom samples without PL (before swelling), as well as in samples that were exposed to PL for 15 minutes prior to the application of the 10% strain (after swelling was complete); see Figure 2. However, condom samples that were tested immediately after the application of MO or SB began stress relaxing within two minutes of the start of the test ("During Swelling" in Figure 2). The same samples continued to stress relax for up to twenty minutes in the case of MO, but stabilized at a lower stress value in the case of SB. Samples treated with WB did not stress relax under any condition.
Fig. 1 Example of swelling after application of mineral oil on NRL condom samples: without mineral oil (left) and after the application of mineral oil (right)
IV. DISCUSSION

The swelling and stress relaxation responses of NRL condom samples were evaluated in the presence of the personal lubricants WB and SB, and in the presence of MO as a positive control. The swelling data showed that SB swelled the condom material only marginally, whereas MO swelled it significantly. WB did not swell the NRL condom material, which could explain the absence of measurable stress relaxation in these samples. No stress relaxation was observed in samples tested after 15 minutes of exposure to MO or SB; we suspect this is because swelling was already complete by the time the 10% strain was applied. However, considerable stress relaxation occurred in the MO and SB samples when the strain was applied immediately after PL application, i.e., during the swelling process. In this case, stress relaxation began within two minutes of application of the PL and the strain.
Fig. 2 Example of stress relaxation curves of NRL condom samples: without mineral oil (top) and after the application of mineral oil (bottom)

V. CONCLUSIONS
The swelling and stress relaxation data suggest that if a test is to capture (quantitatively or even qualitatively) any viscoelastic change in the condom material due to the application of PL, then the test must run until swelling reaches equilibrium; if the test is performed after the completion of swelling, stress relaxation may not be observed. In the current test, a strain was applied in the circumferential direction of the stamped condom samples and the initial stress response was transient. In contrast, a condom in actual use is subjected to stresses and strains along multiple axes throughout the course of use. Such differences between the methodology and in-use conditions are sufficient to support our concern that stress relaxation may increase the risk of condom slippage in the presence of a PL that induces swelling. Additional investigation in this area is therefore needed.
REFERENCES [1] Queslel JP and Mark JE. Swelling equilibrium studies of elastomeric network structure. Springer-Verlag, Advances in Polymer Science 1985; 71: 229-247. [2] http://rsbweb.nih.gov/ij/
DISCLAIMER The mention of commercial products, their source, or their use in connection with the material reported herein is not to be construed as either an actual or implied endorsement of the US Food and Drug Administration.
Corresponding Author: Dr. Srilekha Sarkar Das Division of Chemistry and Materials Sciences, Office of Science and Engineering Laboratories, CDRH/FDA, 10903 New Hampshire Avenue, Building 64, Silver Spring, MD 20993, US;
[email protected].
Towards the Objective Evaluation of Hand Disinfection

Ákos Lehotsky, Melinda Nagy, and Tamás Haidegger

Budapest University of Technology and Economics, Dept. of Control Engineering and Information Technology, Budapest, Hungary
Abstract— The failure of hand disinfection prior to surgical operations is considered to be the leading cause of nosocomial infections worldwide. It contributes to the spread of multi-resistant pathogens and is recognized as having an important role in the development of post-operative complications. To address this, we developed a compact, mobile device to support surgeons with the objective evaluation of hand disinfection. The hardware consists of a metal box with a matte black interior, ultraviolet lighting and a digital camera; image processing and segmentation are performed on a regular notebook. After the hand washing procedure, the surgeons spread UV-reflective powder on their hands, which appears bright under UV light only on sterile surfaces. When the surgeon inserts his or her hands into the box, the camera placed on the top takes an image of the hand for evaluation. The software performs the segmentation and clustering automatically: first, the hand contour is determined from the intensity image, then two clusters are built using a threshold value derived from the average intensity of the region of interest. If the affected area is significant, the system warns the surgeon to wash their hands again. The main advantage of our device is its ability to obtain an objective result on the quality of hand disinfection. It may find its best use in clinical education and training.

Keywords— hand disinfection, objective evaluation of hand washing, image segmentation, clustering.
I. INTRODUCTION

The extensive use of antibiotics (diagnostic and therapeutic) and the expansion of invasive instrumentation have led to an unforeseen problem: the spread of nosocomial infections. These infections generate needless expenses, reduce the quality of life of patients, prolong their recovery and promote the resistance of pathogens against antibiotics. They can cause permanent damage and, in the most serious cases, even death. We need to face the fact that these infections are often caused by medical workers, and it has been shown that at least 30% of them would be preventable [1], [2]. The fight against nosocomial infections should receive more attention, so that current capabilities can be extended through technology-driven solutions.
II. GENERAL NOTIONS ABOUT HAND DISINFECTION

A. Brief Historical Overview

Nightingale began reforming hospital hygiene in the second half of the 19th century. She gained support for her reforms by providing statistics of decreasing death rates in cleaner hospitals. Semmelweis discovered that the incidence of puerperal fever could be drastically cut by the use of hand disinfection in obstetrical clinics; he postulated the theory of washing with "chlorinated lime solutions" in 1847. A scientific explanation of the method became possible only a couple of years later, with the emergence of germ theory. Lister developed the antiseptic technique, which was a major milestone towards modern surgery; he successfully introduced carbolic acid to sterilise surgical instruments and to clean wounds. The real breakthrough came in 1854, when the French chemist Pasteur discovered that diseases were caused by germs. He also proved that introducing dead or weakened germs into the body develops immunity towards the particular disease. Redard advised disinfection of the hand with steam in 1886, and the American Halsted created the first rubber gloves, which significantly contributed to the sterility of the operating theatre. Nowadays, pre-operative hand washing has become an essential standard of surgical care; however, we still lack the equipment to support objective evaluation of hand disinfection [3].

B. Basic Nomenclature

Nosocomial infections are a result of treatment in a hospital or other healthcare service unit, and are not related to the patient's original condition. Infections are considered nosocomial if they first appear 48 hours or more after hospital admission or within 30 days after discharge. They are also referred to as hospital-acquired infections [4]. Disinfection is a process in which most or nearly all microorganisms (whether or not pathogenic) on clothing, hard surfaces, or wounds are killed through the use of chemicals, heat, or ultraviolet rays. Physiologically, the skin is not a sterile surface; there are micro-organisms living on
it. These micro-organisms create the normal flora of the skin, and their composition depends on our age and the environment we live in. The hand is considered to be the most contaminated part of the body in direct contact with the environment. Notably, hospital infections can easily be spread by the medical staff in the case of inadequate sterility [5], [6]. Hand hygiene is a general term that applies to hand washing, antiseptic hand wash, antiseptic hand rub, or surgical hand antisepsis. Its primary aim is to inactivate or damage the transient microflora appearing on the surface of the hand.
Fig. 2 The equipment contains four UV fluorescent light sources to reveal the phosphor-containing disinfection liquid
III. HARDWARE
We created a compact, mobile device for the objective assessment of hand disinfection quality. The equipment consists of a covering box with built-in UV lighting, a digital camera (Nikon Dimage 7) and an attached notebook (Fig. 1). The size of the box was chosen to be 330x330 mm, which provides the opportunity for dual-hand imaging and other expansions in the future. The matte black background helps to reduce disturbing environmental effects during image processing. Fig. 2 shows the UV light sources on the interior side of the box. The camera sits in the middle of the cover, with four 22 W UV fluorescent lamps around it that provide equal illumination on the surface of the hand. Fluorescent lamps without a phosphorescent coating emit ultraviolet radiation at 254 nm due to the peak emission of the mercury within the tube.

Fig. 1 Hardware setup consisting of a black box and digital imaging equipment to support image processing for the objective evaluation of hand disinfection

IV. IMAGE PROCESSING
The main goal was to create a program that automatically segments the digital image of the hand and objectively determines the ratio of the clean (bright) surface to the whole skin area. In order to distinguish the well disinfected parts, we first have to segment the hand in the original image, then classify its pixels accordingly. To provide a numeric result, we calculate the percentage of good pixels over the whole hand. The program relies on the MATLAB Image Processing Toolbox (MathWorks Inc.).

A. Image Acquisition

The picture is taken with a digital camera mounted on the box. In order to speed up processing, the RGB images are downsized to 560x480 pixels. This might affect the accuracy of the algorithm, although the primary focus is to identify consistent regions (a task with a lower resolution requirement). For testing purposes, we intentionally used only partially disinfected hands.

B. Pre-processing
The pre-processing phase begins with noise reduction on the image, realized in two steps. First, we apply a morphological filter (erosion followed by dilation) to remove pixel-level noise; the structuring element used is "ball-shaped", with equal radius and height values (Fig. 3a). Next, the RGB image is transformed into an intensity image with a built-in function, and we can then calculate which parts are to be considered background (based on the histogram). Experiments showed that three gray levels are well distinguishable to the human eye in the intensity image. Instead of computing the histogram in 255 bins, we reduced it to 10. This helps us separate the black background and the shadows through the following steps (sketched in code after the list):
• find the position of the first maximum value of the histogram (Pmax)
• find the grayscale value corresponding to that position: v = Pmax(255/10 − 1)
• every pixel value smaller than v is considered to belong to the background
Thus a pre-processed image of the hand is acquired, with black background and no shadows.

C. Hand Segmentation

Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. There are several known methods for object segmentation in digital images (e.g., region-based segmentation, color-based segmentation, edge detection). Our goal is to determine the contour of the hand in order to cluster only the relevant pixels. First, we detect the edges of the hand using the Sobel approximation to calculate the derivative [7], which returns edges at the points where the gradient is maximal. To get only the hand's contour, we need to find the longest coherent component of the edge-detected image (Iedge). This is returned by the convolution of Iedge with a mask, defined as the 5x5 matrix

mask = [0 0 0 0 0
        0 1 1 1 0
        0 1 1 1 0
        0 1 1 1 0
        0 0 0 0 0]

which performed acceptably for these kinds of image sizes.
This convolution results in a matrix of different image objects. We can determine the longest coherent component, the hand's contour, by labeling these objects. In general, the object of interest is labeled 1, while label 0 marks the background; the rest of the pixels are not taken into consideration. Next, a binary image is acquired, where the pixel value 0 represents the background and the pixel value 1 represents the pixels in the hand's area. The contour image is then filled with ones (normalizing). This way, we managed to segment the hand in a binary image, shown in Fig. 3b. This mask image will be useful in clustering only the pixels inside the hand's area.
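A sketch of this segmentation pipeline (Sobel edges, convolution with the 5x5 mask, labeling, filling). The edge threshold and the scipy-based implementation are our assumptions, not the authors' MATLAB code.

import numpy as np
from scipy import ndimage

def segment_hand(gray):
    # Sobel gradient magnitude -> edge map (threshold value is assumed).
    g = gray.astype(float)
    edges = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1)) > 50

    # Convolve with the 5x5 mask from the text to link nearby edge pixels.
    mask = np.zeros((5, 5))
    mask[1:4, 1:4] = 1
    linked = ndimage.convolve(edges.astype(float), mask) > 0

    # Label the objects, keep the largest coherent component (the hand's
    # contour) and fill it with ones to obtain the binary hand mask.
    labels, n = ndimage.label(linked)
    if n == 0:
        return np.zeros_like(gray, dtype=bool)
    sizes = ndimage.sum(linked, labels, index=np.arange(1, n + 1))
    contour = labels == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_fill_holes(contour)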
Fig. 3 a) Original image taken with the camera. b) Segmented and filled hand to determine the region of interest
D. Building Clusters

The most important part of our experimental work focuses on the efficiency of clustering. We tested two different algorithms: a pixel based and a region based method. In the first case, each point is considered with the same weight; the current implementation does not take into account that a smaller weight should be assigned to pixels closer to the contour. This method is relatively slow, but experiments showed that eight out of ten pictures were reasonably well classified. Iterating through the pixels within the hand's area in the grayscale image, we decide, based on a priori determined thresholds, whether the pixel is in the cluster of admissible points. Pixels associated with high intensity are considered to belong to a well disinfected area of the skin. The thresholds for the two main clusters were set by a professional, based on manually collected statistics from the analysis of several pictures. The results are gathered in Table 1. The middle cluster is an intermediate group containing those pixels which are still acceptable, meaning they would pass the test for hand disinfection.

Table 1 Cluster intervals

Cluster        Min. value (grayscale)   Max. value (grayscale)
Inadmissible   40                       109
Admissible     110                      119
Very good      120                      220
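A sketch of the pixel based scoring with the Table 1 intervals (Python for illustration; the original implementation runs in MATLAB):

import numpy as np

# Cluster intervals from Table 1 (grayscale values)
CLUSTERS = {
    "inadmissible": (40, 109),
    "admissible": (110, 119),
    "very good": (120, 220),
}

def pixel_based_score(gray, hand_mask):
    # Classify every pixel inside the hand mask by the Table 1 intervals;
    # all pixels carry equal weight, as in the current implementation.
    vals = gray[hand_mask]
    return {name: float(((vals >= lo) & (vals <= hi)).sum()) / vals.size
            for name, (lo, hi) in CLUSTERS.items()}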
The other implementation used to determine which parts of the hand are well disinfected was region based classification. First, we need to transform the RGB image into a binary one. This method requires a threshold value: all pixels greater than this value are assigned 1, the others 0. The threshold is determined by a built-in function, and the transformation is executed by a MATLAB function. If the threshold value is not established in a robust way, the method fails to return good results for different images.
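For comparison, a region based sketch; we assume the MATLAB built-in corresponds to Otsu's method (graythresh), which is our guess rather than a detail stated in the paper:

import numpy as np
from skimage.filters import threshold_otsu

def region_based_score(gray, hand_mask):
    # Global threshold (assumed Otsu) -> binary image; the fraction of
    # hand pixels above the threshold is reported as well disinfected.
    t = threshold_otsu(gray[hand_mask])
    return float((gray[hand_mask] > t).sum()) / float(hand_mask.sum())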
V. RESULTS

Based on several recorded images, we were able to test both methods. Neither the pixel based nor the region based clustering method has delivered a perfect solution so far. Results of the fusion of the binary and original images, after executing the region based clustering, are shown in Fig. 4. The left image shows a reasonably good classification; there are few or no regions that should be assigned to the admissible cluster. Fig. 4b is an example of using wrongly chosen thresholds: while the well washed parts of the hand are clustered correctly, admissible and bad regions are mis-categorized. Testing on different images, the efficiency of the built-in functions was acceptable for three out of five images. The results are calculated in approximately 10 seconds.

Fig. 4 Performance of segmentation with different thresholds. a) Appropriately set threshold, resulting in a valid segmentation. b) Choosing a wrong value results in a poor segmentation

The pixel based method determines the three clusters defined in Table 1. Each color (yellow, green, red) corresponds to a cluster: inadmissible, admissible and very good, respectively. We note that there are yellow points along the contour; that is because distance from the contour was not taken into consideration, and each pixel was clustered with the same weight. To compare the results of the two methods, we can analyze the difference between the images in Fig. 4b and Fig. 5b: the pixel based method finds much bigger inadmissible regions than the region based method. The percentage of good regions for this picture was 50.03% with the pixel based clustering, contrary to the 78.38% provided by the region based classification. As far as human eyes can judge, the first result would be accepted as a good classification. The latter method worked fine for all tested pictures; however, a major disadvantage is its higher computational requirement: the results are computed in more than 110 seconds on a regular notebook. To reduce this time, we plan to use more effective algorithms, such as fuzzy C-means clustering, and more powerful computers. The final results for the users are given as the percentage of the skin surface that is well disinfected; an appropriately washed hand should yield a 95-98% result.
Fig. 5 Results with pixel iteration for the same images
VI. DISCUSSION

Based on the preliminary test results, we realized that a combination of the two methods described above would provide better results in defining the good and bad areas of the tested hand. After calculating the binary image containing the good regions, we could re-process this image with pixel based clustering, which would provide the exact locations of the admissible pixels. Such a combination might also provide better computational performance.
VII. CONCLUSIONS

We managed to build a mobile system that can perform the objective evaluation of the hand's disinfection ratio. The equipment is intended to support surgeons and hospital staff in reducing nosocomial infection rates, and it could also be used in medical training. The device provides numerical results for the surgeon, showing the percentage of skin surface adequately disinfected. The workflow is very simple: pictures of the disinfected hand are taken in a UV-lit box, and a regular computer provides the results using a MATLAB program. First, the hand is segmented in the original image, and then the pixels are classified into groups based on intensity. Two clustering methods were tested. Pixel based clustering was significantly slower, but provided better results on arbitrarily washed hands than region based classification. The device can help medical students learn effective hand disinfection.
ACKNOWLEDGMENT The research was supported by the National Office for Research and Technology (NKTH), Hungarian National Scientific Research Foundation grant OTKA CK80316.
REFERENCES

1. Kampf G (2004) The six golden rules to improve compliance in hand hygiene. Journal of Hospital Infection 56(Suppl 2):3-5
2. Pittet D, Mourouga P et al. (2002) Compliance with handwashing in a teaching hospital. Ann Intern Med 126-130
3. Berhe M, Bearman G, Edmond MB (2006) Measurement and feedback of infection control process measures in the intensive care unit: impact on compliance. Am J Infect Control 34(8):537-9
4. Rosenthal VD, Guzman S, Safdar N (2005) Reduction in nosocomial infection with improved hand hygiene in intensive care units of a tertiary care hospital in Argentina. Am J Infect Control 33(7):392-7
5. Boyce JM, Pittet D (2005) Guideline for hand hygiene in health-care settings: recommendations of the Healthcare Infection Control Practices Advisory Committee and the HICPAC/SHEA/APIC/IDSA Hand Hygiene Task Force. MMWR 51:1-44
6. Panhota BR, Al-Mulhim AS, Saxena AK (2005) Contamination of patients' files in intensive care units: an indication of strict handwashing after entering case notes. Am J Infect Control 33(7):398-401
7. Jähne B, Scharr H, Körkel S (1999) Principles of filter design. In: Handbook of Computer Vision and Applications. Academic Press

Author: Tamás Haidegger
Institute: Budapest University of Technology and Economics, Dept. of Control Engineering and Information Technology
Street: Magyar tudosok krt. 2.
City: Budapest
Country: Hungary
Email:
[email protected]
In vitro Models for Measuring Charge Storage Capacity

K.F. Zaidi3, Z.H. Benchekroun1, S. Minnikanti1, J. Pancrazio1, and N. Peixoto1,2

1 Electrical and Computer Engineering Dept. and 2 Krasnow Institute for Advanced Study, George Mason University, Fairfax VA, USA
3 A. James Clark School of Engineering, University of Maryland, College Park, MD
Abstract— The interaction of nervous tissue with electrode surfaces impacts the efficacy of the charge transfer capacity of the electrode. A better understanding of the interface between electrodes and tissue will inform the design of electrically conductive interfaces for implantable electrodes. Here we propose two in vitro models that reproduce the in vivo electrochemical characteristics of iridium oxide (IrOx) electrodes. Cyclic voltammetry (CV) was performed in phosphate buffered saline (PBS) with Ag|AgCl as the reference electrode. The charge storage capacity (CSC) of each electrode was measured three times, in this order: (1) in PBS; (2) in the in vitro model; (3) in PBS, after cleaning. Our proposed in vitro models are sheep brain and ground meat. We then compared the in vitro results against in vivo data. The CSC of the iridium oxide electrode decreased from 199.4 μC (in PBS) to 83.7 μC in sheep brain; after cleaning, the CSC increased to a value lower than the initial CSC. Our in vitro models replicate the data from previously performed in vivo (rat) experiments in terms of the characteristics of the CV and impedance curves. We suggest that the similarity of the tissue environment, which involves proteins and extra-cellular matrix, is responsible for the observed charge loss; the specific factor responsible for changing the CSC will be determined in further studies. Utilization of these models would decrease the use of lab animals to characterize the electrical performance of implantable neural electrodes.

Keywords— brain stimulation, implantable electrodes, charge storage capacity, carbon nanotubes, iridium oxide.
I. INTRODUCTION

Implantable electrodes provide a method of delivering current to the hostile neural tissue environment. Electrical stimulation can be utilized as a treatment for epilepsy, paralysis, and neurological disorders. To facilitate the characterization of implantable electrodes, we considered the development of a model for both brain tissue and in vivo CSC. Brain tissue responses vary according to factors such as implantation and substance interactions, all of which must be taken into consideration [1]. Surface interactions in nervous tissue influence the efficacy of electrodes
and their CSC when implanted [2]. In this study, the electrochemical nature of the electrode-neural tissue interface is compared to our in vitro models. The goal is to develop in vitro models that reproduce the in vivo environment, that is, to construct a simplified model of neural tissue based on the factors that lead to certain electrode behaviors. A major drawback to studying neural tissue behavior is the impracticality of conducting in vivo experiments in rapid succession. If we can provide an experimental model which closely mimics biological tissue, and in particular the brain, we will be able to determine in vivo electrode performance before implantation. Apart from reducing the number of animals needed in any experimental study, this will increase confidence in the predicted electrochemical behavior of electrodes.
II. MATERIALS AND METHODS

We have previously shown that electrochemically deposited IrOx electrodes gain impedance in in vivo models [3]. The gained impedance and loss of CSC allow assumptions to be made regarding the different behaviors predicted by the in vitro and in vivo models [2]. Factors that differentiate in vivo and in vitro models include the mechanical environment [4] and the presence of certain organic elements [5]. The in vitro models consisted of tissue samples: sheep brain and ground meat.

Electrode preparation. The electrodes were fabricated from 250 µm diameter medical grade stainless steel (316L) wire with polyimide insulation. We removed 3 mm of insulation from the tip in order to expose a 2.5 mm2 area for deposition. Prior to the deposition, we chemically cleaned the electrodes in three electrolyte solutions for 20 min each: the electrodes were exposed first to potassium hydroxide, then to distilled water, and finally to 70% ethanol. To remove surface contaminants, we dipped the stripped electrode tips in aqua regia at room temperature for 1 min. Using a commercial potentiostat (REF 600, Gamry Instruments, Warminster, PA), the electrodes were then cleaned through an electrochemical cycling process.
A. Iridium Oxide Film Deposition

Electrochemical iridium oxide (IrOx) was deposited over the clean stainless steel electrodes. A custom designed software and hardware apparatus was used for voltage-controlled cycling (a triangular waveform followed by a square wave, both from 0 to 0.5 V) for 30 minutes. The deposition solution was based on iridium tetrachloride dissolved in oxalic acid and potassium carbonate. Immediately after deposition, the electrodes were sprayed with 70% alcohol. The IrOx electrodes exhibited a CSC several orders of magnitude higher than that of the stainless steel electrodes. Characteristic redox peaks are visible in the CV spectra, indicating the presence of iridium in several valences. A description of the behavior of iridium oxide based neural stimulation techniques can be found in Cogan et al., 2004 [6]; the passage of electric current through implanted IrOx electrodes leads to a documented loss in CSC [2], an observation we will use in our analysis of the in vitro models.
B. Electrochemical Characterization of Electrodes

1. In Vitro Model

The main objective of these experiments was to replicate the in vivo performance of IrOx electrodes with our in vitro models. We measured the CSC in three different stages: first in PBS, then in the in vitro model, then in PBS again. All stimulation protocols were implemented using a commercial potentiostat (REF 600, Gamry Instruments, Warminster, PA); a two-electrode setup was used in all experiments, where the working electrode was IrOx and the reference electrode was Ag|AgCl. The stimulation protocol consists of cyclic voltammetry (CV) [5] and potentiostatic electrochemical impedance spectroscopy (EIS) [5]. The CV section includes 20 preconditioning CVs followed by 5 cycles. Figure 1 depicts a typical CV of an IrOx electrode in phosphate buffered saline (PBS) of pH 7.4, in the in vitro model, and in PBS again. CV was performed at a slow scan rate of 50 mV/s in order to measure the maximum charge that the electrode would be able to deliver. The CSC was obtained by integrating the cathodic current over time for one period of the triangular waveform, and can be visualized as the lower half of the area delimited by one CV cycle. For the potentiostatic EIS shown in Figure 2, an AC potential of 50 mVrms was applied to the working electrode, and a DC bias was set to the open circuit potential (Eoc) to ensure that there would be no current due to the DC polarization.

2. In Vivo Model

The in vivo experiments were conducted in a different study, described in Minnikanti et al., 2010 [2]. Data from the in vitro models were compared with the in vivo model data. Figure 3 shows the CV of an IrOx electrode before, during, and after implantation in a live rat brain; it is followed by Figure 4, where the Bode plots of the impedance modulus of the same electrode are presented.
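A minimal sketch of the CSC computation as described (integration of the cathodic current over one CV period); the function name and the simple rectangle-rule integration are our choices for illustration:

import numpy as np

def charge_storage_capacity(current_A, dt_s):
    # Keep only the cathodic (negative) part of the current over exactly
    # one period of the triangular waveform, then integrate it over time;
    # the result is the delivered charge in coulombs (1 C = 1e6 uC).
    cathodic = np.minimum(np.asarray(current_A, dtype=float), 0.0)
    return abs(cathodic.sum() * dt_s)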
Fig. 1 CV spectrum of a single IrOx electrode in PBS (pre meat) (o), in comparison with the (meat) in vitro model (*), and in PBS (post meat) (+). CV measurements were taken in a two-electrode setup using PBS (pH 7.4) and Ag|AgCl as the reference electrode, at a scan rate of 50 mV/s

C. Data Analysis

In order to compare the in vivo and the in vitro models for electrodes from different deposition runs, we assume here that the CSC of every electrode in PBS, before testing in any target tissue (in vivo or in vitro), is 100%. We then measure the CSC while the electrode is in the tissue. The values are listed in Table 1.
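The normalization behind Table 1 is a simple percent change relative to the electrode's own pre-test value in PBS; a sketch using the single-electrode numbers from the abstract:

def csc_percent(csc_pbs_pre_uC, csc_measured_uC):
    # CSC expressed as a percentage of the pre-test value in PBS,
    # which is defined as 100% for each electrode.
    return 100.0 * csc_measured_uC / csc_pbs_pre_uC

print(csc_percent(199.4, 83.7))  # IrOx electrode from the abstract: ~42% in sheep brain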
III. RESULTS

Our in vitro results show that the behavior of IrOx electrodes in vitro is similar to that observed in the in vivo model. The CSC reversibly decreased when the electrode was implanted in vivo and when inserted into the in vitro preparations. Table 1 presents the percent change of the
CSC in the in vivo and in vitro experiments. The CSC in the in vivo model was reduced to 64%, while the in vitro model decreased to a similar level, 56%. These decreases were reversible for both models upon subsequent exposure to PBS.
Fig. 2 Bode plots of the impedance modulus of a single IrOx electrode in PBS (pre meat) (+), in meat (*) and in PBS (post meat) (o). EIS measurements were taken using a two-electrode setup with Ag|AgCl as the reference electrode
Table 1 Change of CSC percentage of IrOx electrodes when tested in vivo and in vitro

                 PBS (pre model)   In model   PBS (post model)
in vivo (n=7)    100%              63.75%     85.27%
in vitro (n=4)   100%              56.41%     80.45%
IV. CONCLUSION
Fig. 3 CV spectrum of a single IrOx electrode when implanted (*) and when immersed in phosphate buffered saline (pre (o) and post (+) refer to the electrode before being implanted and after being cleaned, following explantation from the brain). Same setup as described for Figure 1
The fact that the in vitro models recapitulate the in vivo findings with the IrOx electrodes suggests that the in vitro model may be an adequate approach for studying electrode performance. The essential differences between the existing in vitro (PBS) and in vivo models are the organic contents, and the physiological responses after electrode implantation and during charge application [5]. Since these tissue samples are able to replicate in vivo (brain) electrode behavior, we believe we could use them further to investigate the factors that are responsible for poor electrode performance. The similarities in electrode performance under these two conditions suggest that the CSC alterations with exposure to tissue may be due to features common to the models, such as extracellular matrices or protein deposition. Further experiments will be necessary to elucidate the molecular basis for the reversible alteration in CSC with tissue exposure. The evaluation of two distinct in vitro models (meat, sheep brain) introduces the possibility of decreasing the use of lab animals for harvesting brains. The similar performance of IrOx in our in vitro and in vivo models suggests that, at least for CSC, one might rely on an in vitro model to facilitate testing. Future work may explore the utility of mathematical modeling using the EIS results to describe the effects of the tissue-electrode interaction.
ACKNOWLEDGMENTS Fig. 4 Bode plots of the impedance modulus of an electrode in PBS (pre rat brain) (+), in rat brain (*) and in PBS (post rat brain) (o). EIS measurements were taken using a two electrode setup with Ag|AgCl as a reference electrode
This work was supported by the Volgenau School of Engineering (George Mason University) and the UAP (Undergraduate Apprenticeship Program).
REFERENCES

1. Polikov V, Tresco PA et al. (2005) Response of brain tissue to chronically implanted neural electrodes. J Neurosci Methods 148:1-18
2. Minnikanti S, Pereira M et al. (2010) In vivo electrochemical characterization and inflammatory response of multiwalled carbon nanotube-based electrodes in rat hippocampus. J Neural Eng 7:016002 (11pp)
3. Abidian MR, Martin DC (2009) Multifunctional nanobiomaterials for neural interfaces. Adv Funct Mater 19:573-85
4. Sharp AA, Ortega AM, Restrepo et al. (2009) In vivo penetration mechanics and mechanical properties of mouse brain tissue at micrometer scales. IEEE Trans Biomed Eng 56(1):45-53
5. Cogan SF (2008) Neural stimulation and recording electrodes. Annu Rev Biomed Eng 10:275-309
6. Cogan SF, Guzelian AA et al. (2004) Over-pulsing degrades activated iridium oxide films used for intracortical neural stimulation. J Neurosci Methods 137(2):141-150
Author: Nathalia Peixoto
Institute: George Mason University, ECE Department
Street: 4400 University Drive MS1G5
City: Fairfax, VA 22030
Country: USA
Email:
[email protected]
Discovery of Long-Latency Somatosensory Evoked Potentials as a Marker of Cardiac Arrest Induced Brain Injury

Dan Wu, Jai Madhok, Young-Seok Choi, Xiaofeng Jia, and Nitish V. Thakor*

Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21029

Abstract— Somatosensory Evoked Potentials (SSEPs) are an established electrophysiological tool for the diagnosis of neurological disorders or injury. We use the SSEP for the prognostication of outcomes after hypoxic-ischemic brain injury. Previous studies in rats with median nerve stimulation have primarily focused on the short-latency SSEP within 30 msec after the stimulus. This study shows that the long-latency SSEP (LL-SSEP), within 30-100 msec, is also of unique importance in monitoring brain injury induced by cardiac arrest (CA) and in predicting long-term recovery. In this study, 16 rats underwent either a 7-min or a 9-min hypoxic CA. The Neurological Deficit Score (NDS) measured at 72 hr post-CA was used to specify good outcome (NDS≥50) and poor outcome (NDS<50). Firstly, the LL-SSEP showed sharp responses to CA insults: a change in the P60 peak in the time-frequency space, in the Shannon entropy in the time domain, and in the wavelet entropy in the frequency domain. Secondly, the LL-SSEP during early recovery had significant prognostic value: the Shannon entropy within 60 min post-CA was higher for the good-outcome group (p-value = 0.02, Student's t-test), and a delayed P60 exclusively predicted poor outcome. Thirdly, the LL-SSEP was significantly different from the short-latency response. Since the LL-SSEP occurs well beyond the time delays for production by the thalamocortical network, it may be an independent cortical response, and may reflect the recovery of cortical neurons. The discovery of the LL-SSEP should have significant clinical potential in assessing the recovery of cortical function after brain injury and should be helpful in understanding the mechanism of thalamocortical arousal.

Keywords— somatosensory evoked potential, long-latency, cardiac arrest, neural monitoring, prognosis, entropy.
I. INTRODUCTION

Somatosensory evoked potentials (SSEPs) consist of a series of waves that reflect sequential activation of neural structures along the somatosensory pathway. Abnormalities in SSEPs indicate dysfunction at certain levels of the pathway. SSEPs are conventionally used for the evaluation of hypoxic-ischemic brain injury from cardiac arrest (CA), where neurological complications have been a leading cause of morbidity and disability after CA [1]. Bilateral loss of the cortical N20 is a strong indicator of cortical injury by CA [2], and the post-CA SSEP helps to predict long-term outcome with very high specificity, but moderate sensitivity [3].
In humans, median nerve evoked short-latency (SL) SSEP components (N13, N20, P25 [4]) that reflect thalamocortical responses and long-latency (LL) components (N35, P45, N70, P95 [4]) that reflect corticocortical responses [5, 6] have been studied. Prior research has examined the LL-SSEP in humans (N70) as a secondary indicator to the N20 to improve prognosis: absence, or a delay ≥ 120 ms, of N70 with N20 present strongly correlates with adverse outcomes [3, 4]. However, animal studies of the LL-SSEP are, to the best of our knowledge, rare, leaving a gap in the translational research of hypoxic-ischemic brain injury. In this study, we investigate the dynamics of the LL-SSEP during CA and early recovery using information theoretic measures and time-frequency analysis. The prognostic value of the LL-SSEP is examined using the entropy measures. We also compare the evolution of the LL-SSEP with that of the SL-SSEP, and discuss the physiological foundations for the difference. The primary goal of this study is to understand the role of the LL-SSEP and its representation of cortical functions. In this way, we hope to gain more insight into the loss and return of SSEPs due to hypoxic-ischemic insults, and into how the recovery process associates with neurological outcomes.
II. EXPERIMENT

The experiment was done on a rat model [7, 8] of cardiac arrest. The protocol was approved by the Institutional Animal Care and Use Committee at the Johns Hopkins University School of Medicine. 16 adult male Wistar rats underwent a 7-min (n=8) or a 9-min (n=8) hypoxic-ischemic cardiac arrest. Five epidural screw electrodes (Plastics One, Roanoke, VA) were implanted over the somatosensory cortex in both the left and right cerebral hemispheres, with a ground in the parasagittal right frontal lobe. The rats were allowed to recover for one week before the experiment. At the beginning of the experiment, the rats were anesthetized and intubated for mechanical ventilation with 1.5% isoflurane in 1:1 N2/O2. A 15-min baseline SSEP was recorded from the bipolar cortical electrodes, followed by a 5-min washout to remove the residual effect of isoflurane. At the end of the washout, CA was initiated by clamping the endotracheal tube to stop mechanical ventilation. The total
duration of asphyxia was calculated from the point when the mean arterial pressure (MAP) fell below 10 mmHg, and lasted for 7 or 9 min. Cardiopulmonary resuscitation (CPR) was performed with external chest compressions and mechanical ventilation until the return of spontaneous circulation (ROSC), indicated by a rise in MAP over 60 mmHg. Bipolar SSEP signals were recorded with Tucker-Davis Technologies (TDT) neurophysiology hardware and software (Alachua, FL). The median nerves were stimulated using two pairs of needle electrodes placed in the right and left distal forelimbs. A 0.6 mA stimulation pulse with a duration of 200 μs was given alternately to the left and right forelimbs at a unilateral stimulation frequency of 0.5 Hz. The first 150 ms of the SSEP were recorded at a sampling rate of 6103.5 Hz. Signals were recorded continuously for 1 hour after the onset of CA; for the next 3 hours, SSEPs were recorded at 15-min intervals. Functional evaluation was performed using the Neurological Deficit Score (NDS), which ranges from 0 to 80, at 24, 48 and 72 hr post-CA.
III. METHODS

A. Time-Frequency Analysis

Time-frequency (TF) representation of SSEP signals has served as an important indicator of hypoxic-ischemic injury and spinal cord injury [9, 10]. The Short Time Fourier Transform (STFT) is a commonly used technique, which is essentially a short-term fast Fourier transform within moving windows to track the temporal-spectral dynamics. In discrete-time STFT, a time series x(n) is split into overlapping windows to reduce artifacts at the boundary. Mathematically, this is expressed as

STFT\{x(n)\} \equiv X_w(t, f) = \sum_{n=-\infty}^{\infty} x(n) w(n-t) e^{-j 2\pi f n}    (1)

where w(n) is a window function. For this study, a Hamming window was chosen. At a sampling rate of 6103.5 Hz, the window size is set to 30 data points, which corresponds to 5 ms, and the overlap is set to 20 data points. The magnitude square of the STFT yields the TF spectrogram of the signal:

spectrum\{x(n)\} \equiv [X_w(t, f)]^2    (2)

B. Entropy Analysis

Entropy, from the information theory viewpoint, is a measure of the uncertainty of a random variable. We used two typical entropy measures: one is the classical Shannon entropy [11], as a measure of the entropy of the amplitude distribution in the time domain; the other is wavelet entropy, which assesses the entropy of the energy distribution over frequency subbands. The wavelet approach is especially suitable for the analysis of short-duration neural signals [12] such as the SSEP.

a) Shannon Entropy (SE): The Maximum Likelihood estimator (MLE) of SE with Miller-Madow bias correction [13] is defined as

\hat{SE}_{MM} = -\sum_{i=1}^{M} p_i \log_2 p_i + (\hat{m}-1)/(2N)    (3)

where p_i is the probability of finding the signal in the ith bin when the full range of the signal is equally divided into M interconnected and non-overlapping bins, and \hat{m} is the number of bins with non-zero probability. We use \hat{SE}_{MM} to compensate for the negative bias of the MLE alone.

b) Wavelet Entropy (WE): To measure the spread of the spectrum of a signal distributed over different frequency subbands, we use the multilevel Discrete Wavelet Transform (DWT) for wavelet decomposition of the SSEP. We choose the Daubechies 5 function as the basis function, because of its similarity in shape to SSEP signals. The wavelet expansion of a signal x(i) is

x(i) = \sum_j \sum_k C_j(k) \psi_{j,k}(i)    (4)

where \psi_{j,k}(i) is the scaled and translated basis function from the mother wavelet with parameters j and k, and C_j(k) is the wavelet coefficient on the jth decomposition level. The energy distribution over the subbands is used for estimating WE as follows:

WE = -\sum_j (E_j / \sum_j E_j) \log_2 (E_j / \sum_j E_j)    (5)

where E_j = \sum_k |C_j(k)|^2 is the energy in the jth subband.
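A compact re-implementation sketch of eqs. (3)-(5), not the authors' code: the bin count M and the decomposition level are assumed values, and N in eq. (3) is taken as the number of samples per sweep.

import numpy as np
import pywt  # PyWavelets

def shannon_entropy_mm(sweep, m_bins=32):
    # Eq. (3): Miller-Madow corrected Shannon entropy of one raw sweep.
    # The signal's full range is divided into m_bins equal bins (M is
    # not stated in the paper; 32 is an assumed value).
    counts, _ = np.histogram(sweep, bins=m_bins)
    n = np.asarray(sweep).size          # N taken as samples per sweep (assumption)
    p = counts[counts > 0] / n
    m_hat = p.size                      # number of bins with non-zero probability
    return -(p * np.log2(p)).sum() + (m_hat - 1) / (2 * n)

def wavelet_entropy(sweep, wavelet="db5", level=5):
    # Eq. (5): entropy of the relative wavelet energy across DWT subbands.
    # Daubechies 5 follows the paper; the level is an assumed value.
    coeffs = pywt.wavedec(np.asarray(sweep, dtype=float), wavelet, level=level)
    energy = np.array([(c ** 2).sum() for c in coeffs])   # E_j per subband
    rel = energy / energy.sum()
    rel = rel[rel > 0]
    return -(rel * np.log2(rel)).sum()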
IV. RESULTS For all of the 16 experiment rats, besides the most prominent short-latency peaks N10 and P15, there was a peak around 60ms in the bipolar SSEPs, which by convention is termed as P60 in this study (see the inlet waveform in Fig. 1). This long-latency peak was different from shortlatency peaks in that its amplitude was lower, and while its latency in each rat remained consistent with time, the latency was inconsistent across rats and varied within 5070ms. A. Evolution of SL- and LL-SSEPs after CA In all experiments, the long-latency peaks recovered soon after CA, while short-latency peaks had low amplitudes until a mean interval of 60min after ROSC. In order to
IFMBE Proceedings Vol. 32
Discovery of Long-Latency Somatosensory Evoked Potentials as a Marker of Cardiac Arrest Induced Brain Injury
quantify the changes of the short- and long-latency peaks after CA, the N10 and P60 peaks were detected for every 20-sweep average and their amplitudes were normalized with respect to baseline for comparison. As shown in Fig. 1, both the N10 and P60 amplitudes dropped instantly upon the onset of asphyxia until they reached a minimum, but they exhibited distinct recovery patterns. The P60 amplitude rebounded close to the baseline within 30 min post-CA and grew to a surprisingly large amplitude during early recovery, with some oscillations in its evolution. In contrast, the short-latency amplitude followed a gradually increasing trend and reached only 50% of the baseline value within the first hour of recovery.
Fig. 1 The evolution of the normalized N10 and P60 amplitudes from the beginning of cardiac arrest (CA) up to 60 min in a 7-min CA experiment. The inset shows a typical baseline SSEP waveform with N10 and P60 marked
B. Monitoring of CA-Induced Brain Injury
STFT was used to track the change of the long-latency SSEP in the time-frequency domain. We took 2-min averages (60 sweeps) during the first 0-32 min of an experiment in which CA started at the 5th minute. For the implementation of the STFT, we applied a 5 ms long moving window with two-thirds overlap, and computed the FFT in each window to obtain the TF spectrogram. As we can see from Fig. 2, there is a concentrated peak in the TF spectrum prior to CA that becomes widely dispersed during CA and gradually returns to the baseline pattern at about 10 min after ROSC. This degree of dispersion can be a visible indicator of hypoxic-ischemic brain injury.

Fig. 2 Time-frequency spectrogram of long-latency SSEP during the first 0-32 min in a 7-min CA experiment. The time of onset of CA in this case is the 5-min mark. Each subplot is the STFT spectrogram of a 2-min averaged SSEP waveform ranging from 30 to 150 ms

The TF spectrum enables us to visualize the change of the LL-SSEP in a temporal-spectral manner. To quantify this change, we used SE for measuring changes in the time domain and WE for measuring changes in the frequency domain. For the same recording as in Fig. 1, SE and WE were computed on each raw sweep and were normalized to [0, 1] with respect to their full spans. As shown in Fig. 3, SE decreased dramatically at the onset of CA and remained low until 10 min post-CA. In contrast, WE increased during CA and decreased towards baseline during recovery.
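For illustration, the spectrogram computation with the stated settings (30-sample ≈ 5 ms Hamming window, 20-sample overlap, 6103.5 Hz sampling) might look like the following sketch; scipy is assumed here as a stand-in for the authors' implementation:

import numpy as np
from scipy import signal

FS = 6103.5  # sampling rate (Hz)

def tf_spectrogram(avg_sweep):
    # Eqs. (1)-(2): STFT with a 30-point Hamming window (~5 ms) and
    # 20-point overlap; the magnitude square gives the TF spectrogram.
    f, t, Z = signal.stft(avg_sweep, fs=FS, window="hamming",
                          nperseg=30, noverlap=20)
    return f, t, np.abs(Z) ** 2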
Fig. 3 The time evolutions of the Shannon entropy (SE) and wavelet entropy (WE) of the long-latency SSEP during the first 0-32 min in a 7-min CA experiment. The entropies are expressed in arbitrary units

C. Prediction of Neurological Outcomes

The LL-SSEP is not only indicative of ischemic brain injury but is also predictive of long-term recovery. The SE turned out to be a promising long-latency measurement for prognosis. We compared the averaged SE between the functional outcomes. The classification of rats into good and poor outcomes was based on the NDS evaluated at 72 hr post-CA, with a cutoff at NDS = 50. The p-value of a two-tailed t-test with an unequal variance assumption was used to evaluate the statistical difference between the two groups.
The results showed that, very early during recovery, SE was significantly larger for the good-outcome group (p-value = 0.022 at 60 min post-CA). This difference became insignificant once SE recovered to near-baseline values. We also examined the P60 amplitude and latency as predictors. The results showed that a delayed P60 within 90 min after ROSC predicted poor outcome with a 0% false positive rate, while early arrival of P60 had no predictive value; the peak amplitude of P60 was unrelated to neurological outcome.
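The outcome comparison reduces to a two-sample test with unequal variances (Welch's t-test); a sketch with hypothetical per-rat entropy values, since the paper reports only the resulting p-value:

import numpy as np
from scipy import stats

# Hypothetical per-rat Shannon entropies at 60 min post-CA, grouped by
# 72-hr NDS outcome (cutoff NDS = 50); the values are illustrative only.
se_good = np.array([0.42, 0.47, 0.51, 0.39, 0.45, 0.48, 0.44, 0.50])
se_poor = np.array([0.28, 0.33, 0.25, 0.31, 0.36, 0.27, 0.30, 0.29])

# Two-tailed t-test with unequal variance assumption, as used in the paper.
t_stat, p_val = stats.ttest_ind(se_good, se_poor, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")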
V. DISCUSSION

We evaluated the value of the LL-SSEP in monitoring CA-induced brain injury and predicting outcomes. We illustrated that the STFT pattern of the long-latency response became dispersed during CA and recovered to a near-baseline pattern around 10 min after ROSC. In agreement with the change in the STFT, the Shannon entropy of the long-latency response dropped during CA and gradually returned around the same time; the wavelet entropy followed an opposite trend, due to the reduction of low-frequency components during CA. The superiority of using the long latency as a marker over the short latency lies in the benefit of using SE and WE on each raw sweep, without needing hundreds of sweeps for signal averaging. The LL-SSEP played a unique role in our data, predicting outcomes well ahead of other electrophysiological markers such as EEG and the SL-SSEP. The prognostic value of the P60 latency is very similar to that of N70 in humans [4, 6], which predicts poor outcomes by absence or delayed latency with 100% specificity, but fails to predict good outcomes. The different recovery patterns of the LL- and SL-SSEPs point towards different neural origins. The quicker recovery of the LL-SSEP suggests a lower vulnerability of the cortical function, given the hypothesis that the SL-SSEP reflects thalamocortical activity and the LL-SSEP reflects corticocortical interaction. More interestingly, in 7 of 16 rats, the P60 amplitudes in early recovery became greater than baseline; we speculate that this may be due to strong oscillations among cortical neurons that play a role in the arousal of the brain. To confirm the origins of the short- and long-latency responses, we will need further experiments with thalamocortical recordings.

VI. CONCLUSION

We present an investigation of the LL-SSEP in hypoxic-ischemic CA and post-CA recovery in a rat model. We define the LL-SSEP as the evoked potential after 30 ms and characterize it by P60. Our results show that the LL-SSEP exhibits remarkable changes after CA in both the time and frequency domains. The entropy of the LL-SSEP was capable of predicting good or poor outcome very early during recovery, and a delayed P60 was a reliable sign of poor outcome. The recovery pattern of the LL-SSEP differentiates its origin from that of the SL-SSEP. We propose that the origin of the LL-SSEP is cortical, and thus the loss and return of the LL-SSEP reflect injury and recovery of cortical activity; however, this remains to be fully elucidated.

ACKNOWLEDGEMENT

We thank Dr. Wei Xiong for conducting the experiments, Dr. Anil Maybhate for technical help, and Alessandro Presacco and Huaijian Zhang for helpful discussions. This work was supported by grants RO1 HL071568 from the National Institutes of Health and 09SDG1110140 from the American Heart Association.
REFERENCES

1. Berek K, Jeschow M, Aichner F (1997) The prognostication of cerebral hypoxia after out-of-hospital cardiac arrest in adults. Eur Neurol 37(3):135-45
2. Ahmed I (1998) Use of somatosensory evoked responses in the prediction of outcome from coma. Clin Electroencephalogr 19(2):78-86
3. Zandbergen EGJ, Hijdra A, Koelman J et al. (2006) Prediction of poor outcome within the first 3 days of postanoxic coma. Neurology 66(1):62-9
4. Madl C, Grimm G, Kramer L et al. (1993) Early prediction of individual outcome after cardiopulmonary resuscitation. Lancet 341(8849):855-8
5. Dermot R, Doherty JSH (2009) The Central Nervous System in Pediatric Critical Illness and Injury: hypoxic ischemic encephalopathy after cardiorespiratory arrest. Springer, London
6. Zandbergen EGJ, Koelman J, Haan RJ et al. (2006) SSEPs and prognosis in postanoxic coma - only short or also long latency responses? Neurology 67(4):583-6
7. Geocadin RG, Ghodadra R, Kimura T et al. (2000) A novel quantitative EEG injury measure of global cerebral ischemia. Clin Neurophysiol 111(10):1779-87
8. Jia X, Koenig MA, Shin HC et al. (2006) Quantitative EEG and neurological recovery with therapeutic hypothermia after asphyxial cardiac arrest in rats. Brain Res 1111(1):166-75
9. Brauna JC, Hanley DF, Thakor NV (1996) Detection of neurological injury using time-frequency analysis of the somatosensory evoked potential. Electroencephalogr Clin Neurophysiol 100(4):310-8
10. Hu Y, Luk KDK, Lu WW (2001) Comparison of time-frequency distribution techniques for analysis of spinal somatosensory evoked potential. Med Biol Eng Comput 39(3):375-80
11. Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27(3):379-423
12. Rosso OA, Blanco S, Yordanova J et al. (2001) Wavelet entropy: a new tool for analysis of short duration brain electrical signals. J Neurosci Methods 105(1):65-75
13. Miller G (1955) Note on the bias of information estimates. In: Information theory in psychology. Free Press, Glencoe
In vivo Characterization of Epileptic Tissue with Time-Dependent, Diffuse Reflectance Spectroscopy

Nitin Yadav1, Sanghoon Oh1,2, Sanjeev Bhatia2, John Ragheb2, Prasanna Jayakar2, Michael Duchowny2, and Wei-Chiang Lin1,2

1 Department of Biomedical Engineering, Florida International University, Miami, FL, USA
2 The Brain Institute, Miami Children's Hospital, Miami, FL, USA
Abstract— According to the Epilepsy Foundation, about 45,000 children under the age of 15 develop epilepsy each year. One of the long-standing challenges in the field of pediatric epilepsy research is to differentiate between normal and epileptic brain areas. This problem is especially important for children with intractable epilepsy, because surgical intervention must be used to alleviate their epileptic symptoms. Optical spectroscopy may be used to assess the pathophysiological characteristics of epileptic brain tissue intraoperatively in a non-destructive manner, which, in turn, allows surgeons to plan an epilepsy surgery. Herein, the study is focused on demonstrating the feasibility of using time-dependent, diffuse reflectance spectroscopy to detect pathophysiological characteristics associated with epileptic brain tissue. In a pediatric patient undergoing epilepsy surgery, time-dependent, diffuse reflectance spectra were measured from the epileptic cortex and the surrounding tissue using a fiber-optic spectroscopy system. The system was designed to acquire diffuse reflectance spectra from 300-1100 nm at a maximum acquisition rate of 33 Hz. Spectral and temporal variations in the acquired spectral sequences were compared to the corresponding electrocorticographic results, and those variations unique to the epileptic cortex were identified using statistical methods. Furthermore, these spectral variations were related to the physiological and morphological characteristics of the brain tissue. The preliminary results obtained from this in vivo study demonstrate the feasibility of differentiating epileptic cortex from normal tissue using time-dependent, diffuse reflectance data.

Keywords— Epilepsy, Time dependent, Diffuse Reflectance, Optical Spectroscopy, Correlation.
I. INTRODUCTION
According to the Epilepsy Foundation, about 200,000 new cases of epilepsy are diagnosed each year [1]. Incidence is highest under the age of 2 and over 65. According to a World Health Organization (WHO) survey, epilepsy accounts for 1% of the global burden of disease, a figure equivalent to breast cancer in women and lung cancer in men [2]. In epilepsy, a group of nerve cells in the cerebral cortex becomes activated simultaneously during a certain period of time; they emit excessive and sudden bursts of electrical energy and induce seizures. Various molecular mechanisms have been identified as potential causes of epileptic seizures thus far. They include (1) malfunctioning of the sodium, potassium and calcium ion channels in the brain, (2) abnormalities in neurotransmitters such as GABA, serotonin and acetylcholine, and (3) brain injuries induced by fever, vaccinations, head trauma, viral infections, and brain tumors.
Epilepsy surgeries are generally performed with a view to freeing patients from intractable epilepsy without compromising their normal neurological functions. An array of technologies is used during epilepsy surgery to assist in localization of the epileptic brain area. In general, imaging technologies such as magnetic resonance imaging (MRI) are used preoperatively; electrocorticography (ECoG) is used intraoperatively [3]. Unfortunately, these existing medical diagnostic and imaging technologies often fail to provide an exact demarcation of the epileptic brain area; hence the outcome of epilepsy surgery is not always optimal.
Optical diagnosis operates on the principle of light absorption and scattering by tissue. Over the past two decades, some studies have shown that diffuse reflectance spectroscopy may detect brain tumors in vivo during brain tumor surgery [4,5,6]. In comparison to other intraoperative surgical guidance technologies, diffuse reflectance spectroscopy has the advantages of being inexpensive, portable, and providing diagnostic results in real time. The primary tissue characteristics detected by diffuse reflectance spectroscopy are the hemodynamic and the compositional/structural characteristics of tissue. Specifically, changes in reflected light between 540 and 580 nm are indicative of the blood oxygenation level [7,8]. In the longer wavelength region (i.e., 650-850 nm), differences in reflected light are indicative of morphological and compositional variations, including fiber, membrane and protein density [9], cell density [10] or cell distribution within the volume of investigation [11]. In this study, the feasibility of using diffuse reflectance spectroscopy to detect an epileptic brain area in an in vivo setup was investigated.
II. MATERIALS AND METHODS
The instrument used for in vivo diffuse reflectance spectral acquisition consisted of an optical probe and a spectroscopic system. The probe contained one source fiber and 5 detection fibers, all with a 300 micron core diameter. A mechanical probe holder was used in this study to eliminate spectral artifacts induced by hand motion. Data acquisition of the spectroscopic system was coordinated by a laptop computer using a LabView program. The system was calibrated using a NIST-traceable, calibrated tungsten halogen light source (LS-1-CAL, Ocean Optics, Dunedin, FL) in order to characterize the responsivity of the spectrophotometer and the transmissivity of the fiber-optic probe. The calibration procedure yielded a set of calibration factors used in the spectral data preprocessing procedure.
Diffuse reflectance spectra were acquired from the brains of pediatric patients during epilepsy surgery. The study was approved by the institutional review boards at Miami Children’s Hospital and Florida International University. Informed written consent was obtained from the parents or guardians of each participant. In this pilot study, ECoG results were used to classify the investigated areas as normal or epileptic. From a single patient, up to 8 sites were selected from the areas within (i.e., epileptic) and outside (normal) the resection zone defined by ECoG. Diffuse reflectance spectra Rdraw(λ,t) were acquired from a single investigated site for approximately 12 seconds at an acquisition rate of ~30 Hz. Boxcar smoothing with a window size of 20 was used to denoise the measured diffuse reflectance spectra. Reference measurements were performed at the end of each patient study to monitor the day-to-day variation in the performance of the instrument. Calibration factors were applied to all measured diffuse reflectance spectra to remove spectral artifacts induced by the instrument itself. In addition, baseline subtraction was performed to eliminate room light interference during the spectral acquisition procedure. During spectral pre-processing, the spectral range of the recorded spectra was reduced to 400 nm - 850 nm with a wavelength resolution of 2 nm. This preprocessing procedure yielded a set of calibrated diffuse reflectance spectra Rdcal(λ,t). To analyze the time domain characteristics of Rdcal(λ,t), Fourier transformation (FT) and correlation coefficient (CC) calculation were used. FT of Rdcal(λ,t), denoted as FT_Rd(λ,f), provided insights into the frequency compositions of Rdcal(λ,t). CC calculation was performed
between a reference signal, Rdcal(λref,t), and a comparison signal, Rdcal(λcomp,t), with a window size of 25. This calculation yielded γ(λref, λcomp, t) representing the temporal correlation between the two signals.
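For illustration, the windowed correlation-coefficient calculation described above can be sketched as follows; the function and array names are assumptions for this example, not code from the study.

```python
import numpy as np

# Sliding-window Pearson correlation between Rdcal at the reference
# wavelength and at a comparison wavelength (window size 25, as above).
def windowed_cc(rd_ref, rd_comp, win=25):
    """rd_ref, rd_comp: 1-D time series of Rdcal(lambda, t) at two wavelengths."""
    n = len(rd_ref) - win + 1
    gamma = np.empty(n)
    for i in range(n):
        a = rd_ref[i:i + win]
        b = rd_comp[i:i + win]
        gamma[i] = np.corrcoef(a, b)[0, 1]  # Pearson r over the window
    return gamma
```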
III. RESULTS AND DISCUSSION
In this pilot study, five pediatric epilepsy patients undergoing single-stage epilepsy surgery were recruited. Their epileptic brain areas were defined by the pre-operative imaging studies as well as the intraoperative ECoG study. In the CC calculation procedure, λref = 800 nm was used. Through comparisons of FT_Rd(λ,f) and γ(λref, λcomp, t) from normal cortex and those from epileptic cortex, some distinct features were observed. As illustrated in Fig. 1, γ(800 nm, λcomp, t) from normal cortex generally showed a two-band characteristic. The band situated between 600 nm and 800 nm showed a strong positive temporal correlation (γ > 0.8); the band situated between 400 and 600 nm showed a strong negative temporal correlation (γ < -0.8). Inspection of FT_Rd(λ,f) from those normal sites revealed that the frequency compositions of Rdcal(λ,t) at all wavelengths were similar. These two features, however, were not noticeable in the data from epileptic cortex, as illustrated in Fig. 2. These observations suggest that time dependent diffuse reflectance signals from normal cortex are very different from those from epileptic cortex, which supports the possibility of using time dependent diffuse reflectance spectroscopy to demarcate an epileptic brain area in vivo. The temporal discrepancies between Rd(λ,t) from normal and epileptic cortex may be attributed to the differences between their hemodynamic and metabolic activities, which are strongly coupled to the neuronal activities of the brain. To evaluate the effects of hemodynamic alterations on γ(800 nm, λcomp, t), several sets of simulated diffuse reflectance spectra with altering blood oxygenation saturation SatO2 (60% to 100%) and blood volume fractions (3% to 5%) were generated using a Monte Carlo simulation model for photon migration. Fig. 3 shows γ(800 nm, λcomp, t) calculated from the simulated Rd(λ,t) with SatO2 oscillating between 60% and 100% periodically, in which multiple strong positive and negative temporal correlation bands are found. Because of the dissimilarity between γ(800 nm, λcomp, t) from in vivo Rdcal(λ,t) and those from simulated Rd(λ,t), we believe the band characteristics observed in the in vivo data cannot be explained by the alterations in brain hemodynamics alone.
Fig. 1 (a) γ(800 nm, λcomp, t) and (b) FT_Rd(λ,f) from normal cortex

Fig. 2 (a) γ(800 nm, λcomp, t) and (b) FT_Rd(λ,f) from epileptic cortex

Fig. 3 The top figure shows simulated Rd(λ,t) with an altering blood oxygenation saturation SatO2 (60% to 100%) and a constant blood volume fraction (5%). The bottom figure shows γ(800 nm, λcomp, t) derived from the simulated Rd(λ,t)
IV. CONCLUSIONS
Herein, we used a pilot in vivo human study to demonstrate the potential utility of time dependent diffuse reflectance spectroscopy in pediatric epilepsy surgery. Distinct frequency and temporal correlation characteristics were observed in the time dependent diffuse reflectance signals from normal and epileptic cortex.
ACKNOWLEDGMENT
This research was supported by the Thrasher Research Fund and the Ware Foundation Research Endowment Fund.
REFERENCES
1. Berg AT (1995) The epidemiology of seizures and epilepsy in children. In: Shinnar S, Amir N, Branski D, editors. Childhood seizures. Basel: S. Karger; p. 1-10
2. Reynolds EH (1990) Changing view of prognosis of epilepsy. BMJ 301:1112-4
3. Lachhwani DK, Pestana E, Gupta A, Kotagal P, Bingaman W, Wyllie E (2005) Identification of candidates for epilepsy surgery in patients with tuberous sclerosis. Neurology 64:1651-1654
4. Lin WC, Toms SA, Jansen ED, Mahadevan-Jansen A (2001a) Intraoperative application of optical spectroscopy in the presence of blood. IEEE Journal of Selected Topics in Quantum Electronics 7:996-1003
5. Lin WC, Toms SA, Johnson M, Jansen ED, Mahadevan-Jansen A (2001b) In vivo brain tumor demarcation using optical spectroscopy. Photochemistry and Photobiology 73:396-402
6. Lin WC, Toms SA, Motamedi M, Jansen ED, Mahadevan-Jansen A (2000) Brain tumor demarcation using optical spectroscopy; an in vitro study. Journal of Biomedical Optics 5:214-20
7. Haglund MM, Hochman DW (2004) Optical imaging of epileptiform activity in human neocortex. Epilepsia 45 Suppl 4:43-7
8. Calderon-Arnulphi M, Alaraj A, Amin-Hanjani S, Mantulin WW, Polzonetti CM, et al. (2007) Detection of cerebral ischemia in neurovascular surgery using quantitative frequency-domain near-infrared spectroscopy. Journal of Neurosurgery 106:283-90
9. Chance B (1998) Near-infrared images using continuous, phase-modulated, and pulsed light with quantitation of blood and blood oxygenation. Ann N Y Acad Sci 838:29-45
10. Gratton E, Toronov V, Wolf U, Wolf M, Webb A (2005) Measurement of brain activity by near-infrared light. Journal of Biomedical Optics 10(1):011008
11. Tomita M, Ohtomo M, Suzuki N (2006) Contribution of the flow effect caused by shear-dependent RBC aggregation to NIR spectroscopic signals. Neuroimage 33:1-10
12. Wang L-H, Jacques SL, Zheng L-Q (1995) MCML - Monte Carlo modeling of photon transport in multi-layered tissues. Computer Methods and Programs in Biomedicine 47:131-146

Corresponding Author: Dr. Wei-Chiang Lin
Department of Biomedical Engineering, Florida International University
10555 W. Flagler Street, EAS 2673
Miami, Florida, USA
[email protected]
Effects of Initial Grasping Forces, Axes, and Directions on Torque Production during Circular Object Manipulation
J. Huang1 and J.K. Shim1,2
1 Department of Kinesiology, University of Maryland, College Park, US
2 Department of Bioengineering, University of Maryland, College Park, US
Abstract— Manipulation of circular objects is a common task in everyday activity. Humans tend to establish a stable grasp before they start to maneuver an object. However, the effect of initial grasping forces on subsequent dynamic performance is not yet clear. In this experiment, subjects were asked to produce different levels of grasping force, followed by a ramp-and-hold torque production task. The safety margin, an extra amount of grasping force that prevents objects from slipping, showed a quicker response with smaller initial grasping force. The resultant force was found to always point from the thumb toward the other fingers, although the vertical component showed distinct behaviors under different wrist postures. It was found that the thumb contributes the largest grasping force, while the index finger contributes similar or larger torque in the counterclockwise direction.
Keywords— multi-digit grasp, safety margin, resultant force, force sharing.
I. INTRODUCTION
Previous studies on hand biomechanics focused on either two-digit/three-digit maneuvers of a customized device [1] or five-digit prehension of a prismatic object [2], i.e., the thumb opposing the fingers. However, research on the dynamic control of circular objects, as in opening a jar, turning a door knob, or throwing a ball, is rare even though such tasks comprise a large portion of our everyday activity.
When manipulating an object, humans are known to exert an appropriate amount of grasping force in order to prevent the object from slipping while not breaking a sometimes fragile target [3]. This phenomenon, termed the safety margin, is an important indicator for industrial ergonomic design. However, it is yet unclear whether, and if so how, humans adjust the safety margin when grasping force varies. Another appealing question comes from the biomechanics perspective. When a free object is grasped and held still, the equilibrium equations have to be satisfied to maintain it. However, what might happen if the object is mechanically fixed and thus no force/torque requirements are imposed?
The purposes of this study, therefore, were to investigate these questions in a task requiring precise production of torque by all digits. First, we investigated the effect of grasping force levels on the safety margin. Then we calculated the resultant forces acting on the object under different hand postures and torque directions. The force and torque sharing patterns among digits were examined as well.
II. METHODS
Subjects. Ten male university students participated in this experiment after giving informed consent, as approved by the Institutional Review Board at the University of Maryland. Their average age and anthropometric data are listed in Table 1. No subject reported a history of neuropathology or external trauma of the upper extremities.

Table 1 Age and anthropometric data of subjects (Mean ± S.D.)
Age (yrs): 26.2 ± 3.7
Weight (kg): 70.1 ± 2.1
Height (cm): 176.3 ± 4.7
Hand length (cm): 19.9 ± 1.9
Hand width (cm): 9.1 ± 0.9
Digit position (°): Index 109.0 ± 12.6; Middle 156.3 ± 11.2; Ring 187.0 ± 8.2; Little 240.8 ± 15.4
Experiment Setup. Five six-component sensors (Nano-17, measuring forces and moments of force in three dimensions; ATI Industrial Automation, Garner, NC, USA) were mounted on a circular aluminum handle. The angular positions of the fingers relative to the thumb were measured with a wooden handle of the same size before the experiment (see Table 1 for details). The handle was mechanically fixed on a tripod.
Fig. 1 Schematic diagram of the experiment apparatus, including the handle and the positions of the five sensors and digits
A total of 30 analog signals from the sensors were routed to two synchronized 12-bit analog-digital converters (PCI-6031 and PCI-6033; National Instruments, Austin, TX, USA). The signals were processed and saved on a desktop computer for online visual feedback (LabView; National Instruments, Austin, TX, USA) and offline data analysis. The sampling frequency was set at 100 Hz.
Procedure. Subjects sat on a chair, with the upper arm abducted about 45° in the frontal plane and flexed about 45° in the sagittal plane (Figure 2). The forearm was aligned parallel to the sagittal line of the subject and secured with two sets of Velcro straps.
Fig. 2 Experiment procedure (ML: medial-lateral; AP: anterior-posterior; CW: clockwise; CCW: counterclockwise)

In the first experiment, subjects were asked to produce maximum torque under each of four conditions: 2 AXIS (medio-lateral: ML and anterio-posterior: AP) x 2 DIRECTION (clockwise: CW and counter-clockwise: CCW). Visual feedback of the maximum torque being generated was displayed on the computer monitor in front of the subjects. The grasping force values corresponding to the maximum torques in the four conditions were also recorded and used in the second experiment. For each condition, subjects performed two trials with at least 3 minutes between trials to prevent muscle fatigue.
In the second experiment, subjects were shown two visual feedback displays on the computer monitor. The left hand side displayed the grasping force feedback in the form of a horizontal line that rose up on the screen as the grasping force increased (not shown in Figure 2). For each of the four conditions, subjects were asked to reach 0%, 10%, 20%, or 30% of the grasping forces obtained from the maximum torque in experiment 1 (grasping phase). For the 0% grasping force level, subjects did not contact the sensors until the grasping phase ended. After three seconds, subjects were asked to focus their attention on the right hand side of the screen. The right hand side of the screen showed the temporal trajectory of the torque being produced on the handle. Subjects were asked to follow the template drawn on the screen (Figure 2). For the first three seconds, subjects did not produce any torque (grasping phase). In the next three seconds, the subjects linearly increased the torque to 20% of their maximum torque for the given direction, axis and initial grasping force (ramp phase). In the last three seconds, subjects held the handle with the same torque magnitude (holding phase). The order of initial grasping force, axis and direction was randomized across subjects. In both experiments, subjects were told not to extend the interphalangeal joints of any digit; such trials were discarded and repeated.
Data analysis. The forces and torques recorded were transformed from the local reference frame to the global reference frame with equation (1); normal and tangential forces were calculated with equations (2) to (4):

$$\begin{bmatrix} F_X^j \\ F_Y^j \end{bmatrix} = \begin{bmatrix} \cos\theta_o^j & -\sin\theta_o^j \\ \sin\theta_o^j & \cos\theta_o^j \end{bmatrix} \begin{bmatrix} F_x^j \\ F_z^j \end{bmatrix} \quad (1)$$

$$M_Y^j = m_y^j + d_o^j \times F_x^j \quad (2)$$

$$F_t^j = M_Y^j / r \quad (3)$$

$$F_n^j = \sqrt{(F_X^j)^2 + (F_Y^j)^2 - (F_t^j)^2} \quad (4)$$

where j stands for each digit (thumb, index, middle, ring, little) and r is the handle radius; the meaning of the other symbols can be found in Figure 3. In practice, the safety margin is normalized to compare across subjects and tasks; j can be an individual digit or the summation over all five digits in equation (5):

$$SM^j = \frac{F_n^j - F_{\mu,n}^j}{F_n^j} = \frac{F_n^j - F_t^j/\mu}{F_n^j} \quad (5)$$

Fig. 3 Forces and torques in local reference frame (LRF, sensor) and global reference frame (GRF, earth)
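As a sketch of how equations (1)-(5) could be applied per digit, assuming a handle radius r and friction coefficient μ (both placeholder values here, not reported above):

```python
import numpy as np

MU = 0.8   # assumed digit-sensor friction coefficient (placeholder)
R = 0.03   # assumed handle radius in meters (placeholder)

def digit_safety_margin(fx, fz, my, theta, d):
    """Sketch of eqs. (1)-(5) for one digit from local sensor readings."""
    FX = np.cos(theta) * fx - np.sin(theta) * fz   # eq. (1), global X force
    FY = np.sin(theta) * fx + np.cos(theta) * fz   # eq. (1), global Y force
    MY = my + d * fx                               # eq. (2), moment about Y
    Ft = MY / R                                    # eq. (3), tangential force
    Fn = np.sqrt(FX**2 + FY**2 - Ft**2)            # eq. (4), normal force
    return (Fn - Ft / MU) / Fn                     # eq. (5), safety margin
```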
Repeated-measures analysis of variance was applied to analyze the effects of AXIS and DIRECTION as well as GRIP LEVEL. The Bonferroni test was used to compare pairwise marginal means.
III. RESULTS
1. Safety margin under different grasping force levels
From Figure 4, we can see that the safety margin summed over all five digits converges faster with smaller initial grasping force. By fitting the curves with single-exponential functions ($SM = a e^{-bx} + c$), the transient coefficient b reveals how fast the safety margin transfers from near 1 to a steady value. For 0% initial grasping force, the safety margin is very noisy during the first few milliseconds, and b was not calculated. MANOVA shows a significant effect of GRIP LEVEL (F(1, 9) = 48.429, p < 0.001). There is no other main effect or interaction.
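A minimal sketch of the single-exponential fit used to extract the transient coefficient b, with synthetic placeholder data rather than the study's recordings:

```python
import numpy as np
from scipy.optimize import curve_fit

def sm_model(t, a, b, c):
    return a * np.exp(-b * t) + c   # SM = a*exp(-b*t) + c

t = np.linspace(0, 6, 600)                   # time (s)
sm = 0.6 * np.exp(-1.5 * t) + 0.3            # synthetic safety margin curve
sm += np.random.normal(0, 0.01, t.size)      # add measurement noise

(a, b, c), _ = curve_fit(sm_model, t, sm, p0=(1.0, 1.0, 0.1))
print(f"transient coefficient b = {b:.2f}")  # larger b = faster convergence
```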
Fig. 4 Safety margin under different initial grasping forces

Fig. 5 Safety margin transient coefficient (mean±SE)

2. Internal and resultant forces
Internal forces are the forces that cancel each other out, while resultant forces are the forces acting on the center of the object, in addition to a wrench. Here we report the results after subjects achieved a stable torque output. The resultant force in the X direction is always larger than zero, i.e., pointing away from the thumb. There are no significant main effects. However, there is a significant interaction between AXIS and GRIP LEVEL (F(3, 27) = 4.19, p = 0.015). When aligned along the ML axis, the Y components show a very small pushing-down force (for CCW) or even change direction to pulling (for CW) compared to those aligned along the AP axis. MANOVA shows significant effects of AXIS and GRIP LEVEL (F(1, 9) = 8.59, p = 0.017; and F(3, 27) = 7.691, p < 0.001), and a marginally significant interaction of AXIS x DIRECTION (F(1, 9) = 5.334, p = 0.046).

Fig. 6 Resultant forces in X (a) and Y (b) directions (mean±SE)

3. Torque and force sharing among digits
Torque and force sharing patterns in the ramp phase changed over time, while they were stable in the holding phase. There is a significant difference among DIGIT (F(4, 36) = 93.899, p < 0.001; T > I = R > M > L). On average, the contributions to the total torque are T = 38.4%, I = 21.3%, R = 17.4%, M = 14.2%, and L = 8.8%, respectively. There are significant interactions of DIGIT x AXIS, DIGIT x GRIP LEVEL, AXIS x GRIP LEVEL, DIGIT x AXIS x GRIP LEVEL, and DIGIT x DIRECTION x GRIP LEVEL. The interaction of AXIS x DIRECTION x GRIP LEVEL is marginally significant (p = 0.09).
Grasping forces were not specified after the grasping phase. However, subjects still maintained similar patterns throughout the ramp and hold phases. DIGIT has a significant effect (T > R = I, I = M, R > M = L). On average, the contributions to the total grasping force are T = 46.3%, R = 18.2%, I = 13.7%, M = 13.3%, and L = 8.5%, respectively. There are significant effects of DIRECTION, DIGIT x AXIS, DIGIT x GRIP LEVEL, and DIGIT x AXIS x GRIP LEVEL.
Fig. 7 Sharing of torque (a) and grasping force (b) among individual digits under different conditions of AXIS and DIRECTION (averaged across different grip levels, mean±SE)

IV. DISCUSSION
1. The relationship between grasping stability and flexibility
It is known that subjects need to apply a certain grasping force passively before the loading force is applied. Whether this preload phase is useful for task improvement remains unclear. From this experiment, we can assume that a smaller amount of preloading seems better able to help the system transition from one state (safety margin in the grasping phase) to another (safety margin in the holding phase). Simultaneously applying grasping force and torque requires a short period of adjustment and hence brings the system into instability (see the large variation during the first few hundred milliseconds in grip condition 0%, when subjects did not have direct contact until the ramp task started). This is undesirable in some cases. In contrast, if subjects have a stable grasp before the task starts, they will have prior knowledge of the shape and texture of the object. This might explain the smooth transition from grasping force production to torque production [4].
2. The role of the thumb in counterbalancing force
The thumb, in many cases, acts opposite to the fingers in a way that counterbalances their net force. In this experiment, when the equilibrium equations are not mandatory, the X component (horizontal) force of the thumb is always larger than the net sum of the finger forces. This is probably because the thumb is the main drive for torque production in either direction (Fig. 7b). The different behaviors of the vertical components when producing torque in opposite directions along the medial-lateral or anterior-posterior axis may have to do with the interaction between the carpal tunnel and the tendons of multifinger muscles. The exact reason is not clear from this study and requires further research.

V. CONCLUSIONS
In the present study, we investigated the effect of initial grasping force levels on dynamic performance when subjects executed a ramp-and-hold task on a mechanically fixed circular object. It was found that (1) the safety margin converges more slowly as initial grasping force increases; (2) the thumb exerts a larger grasping force than the rest of the digits combined; (3) the thumb contributes the largest grasping force under all conditions, while the index finger exerts similar or larger torque in the counterclockwise direction.

REFERENCES
1. Johansson RS, Cole KJ (1994) Grasp stability during manipulative actions. Canadian Journal of Physiology and Pharmacology, 72, 511-524
2. Shim JK, Latash ML, Zatsiorsky VM (2005) Prehension synergies in three dimensions. Journal of Neurophysiology, 93, 766-776
3. Fowler NK, Nicol AC (1999) Measurement of external three-dimensional interphalangeal loads applied during activities of daily living. Clinical Biomechanics (Bristol, Avon), 14, 646-652
4. Henningsen H, Ende-Henningsen B, Gordon AM (1995) Contribution of tactile afferent information to the control of isometric finger forces. Experimental Brain Research, 105(2): 312-7
Time Independent Functional Training of Inter-joint Arm Coordination Using the ARMin III Robot
E.B. Brokaw1,2, T. Nef1,2, T.M. Murray1,2, and P.S. Lum1,2
1 Center for Applied Biomechanics and Rehabilitation Research, National Rehabilitation Hospital, Washington D.C., U.S.A.
2 Biomedical Engineering, The Catholic University of America, Washington D.C., U.S.A.
Abstract— Task specific training is a cornerstone of stroke motor therapy and is thought to contribute to improvement of function in activities of daily living (ADLs). Abnormal synergies after stroke often make the training of proper inter-joint coordination during functional tasks difficult, since patients are inclined to utilize abnormal compensatory movement patterns. Effective inter-joint coordination training may be important for regaining functional use of the limb. We have developed a novel robotic method for retraining inter-joint coordination based on time independent joint based control. An integral component of this method is facilitation of proper patient interaction with the robot, which is essential to maximize gains. While robots can assist completion of movement, patient control of the movement is frequently minimal, leading to little or no effort by the patient during training. To increase patient engagement, we have incorporated feed-forward friction compensation to improve backdriveability, gravity compensation for both the robot and human arms, and a visual interface to improve patient interactions with the robot. Index Terms–– rehabilitation robotics, motor control, stroke recovery.
I. INTRODUCTION
Stroke is the leading cause of long term disability in the United States [1]. Task oriented movement training has been shown to improve both muscle development and performance of ADLs after stroke [2, 3]. Rehabilitation robots are becoming increasingly popular for stroke therapy. The most commonly used modes for robotic therapy are: 1) the subject moving along with the robot, and 2) the subject moving as much as possible unassisted, with the robot finishing the movement [4-6]. Even in controllers that minimize robot interaction to "assistance as needed," patient passiveness is a problem and may contribute to the lower therapy gains shown for robotic therapy when compared to standard therapy [4, 7]. Rehabilitation robotic therapy provides improvement in upper limb motor function and strength after stroke; however, significant gains have generally not been obtained in functional tests [8, 9]. Researchers believe that this could be due to the compensation strategies stroke subjects use to complete functional tasks. Training proper joint coordination is useful for overcoming abnormal synergies and regaining independent
joint control [10]. Studies have shown that abnormal synergies can be decreased with isometric joint training against the constraining pattern and progressive loading [11,12]. Some studies have shown that multi-joint movements are learned as joint coordination ratios and patterns [13, 14], which promotes the idea of joint-space training to help overcome abnormal synergies. A new control method, Time Independent Functional Training (TIFT), for rehabilitation robotics has been developed. TIFT allows the patient to learn the desired functional movement in joint space at his or her own pace while still requiring the patient to actively complete the movement. This system provides guiding joint-space walls to keep the subject close to the ideal joint path and holds the subject’s arm in the trajectory if they stop actively producing the required interjoint coordination. This increases the likelihood of the patient completing the task despite possible fatigue, and may promote relearning of new movement patterns. TIFT uses feed-forward static friction compensation, as well as gravity and viscous friction compensation to minimize the effects of the robot on the subject. Human arm gravity compensation can be adjusted, as well as the joints controlling the trajectory, and the trajectory range of motion in order to scale the difficulty of the therapy session. These parameters are easily adjusted during the therapy session using the settings menu on the visual interface. The visual interface also provides the subject with an avatar with arrows showing how to move their joints to complete the trajectory, the number of repetitions they have done, and the percent of the trajectory the subject has completed. A. ARMin III Robot Hardware The ARMin III rehabilitation robot was used for the development of the TIFT mode. The ARMin III is an exoskeleton robot with six active DOF, which are used to assist the patient with shoulder, elbow, and wrist movements [5, 15]. A passive hand device (HandSOME) has been added to the ARMin III system to assist with finger extension [16]. The HandSOME device allows normal grasp and has an encoder for measuring grasp angle, allowing the subjects to use the TIFT mode with real world objects or virtual ones.
II. TIME INDEPENDENT TRAJECTORY
Three trajectories were developed for ADL task training: 1) putting an object on a shelf, 2) sorting objects, and 3) pouring from a pitcher. These task trajectories were modeled after normal kinematics for proper joint coordination training. All of the tasks require specific joint coordination to be completed, but the range of motion, such as how high the shelf is or the patient's comfortable range of motion, can be adjusted for each subject, providing various difficulty levels that can be tuned to increase the benefits of therapy. Partial to full human arm gravity compensation can be provided and adjusted to grade task difficulty.
Fig. 1 ARMin III robot and HandSOME being used in a functional shelf task with TIFT mode

A. Joint Control and Slave Joints
The ARMin III joints were divided into two categories based on the task: arm joints were either slaved or controlled. The group of controlled joints for the task could be a single joint or many joints. These joints usually have the largest range of motion for the ADL and are used to control the progression through the trajectory. Table 1 shows the joints that would commonly control each of the three tasks that were designed.

Table 1 Possible Control Joints By Task
Task             Control Joints
Shelf            Shoulder Flexion, Elbow Extension
Object Sorting   Shoulder Flexion, Shoulder Abduction
Pouring          Shoulder Internal Rotation, Wrist Pronation

If a subject has trouble moving one of the controlled joints for the task, that joint can be switched to a slave joint. The routine slave joints are less important for the given ADL. For example, the task of putting an object on a shelf requires some external rotation of the shoulder; however, this movement is slight compared to the control joints. In the case of subject weakness, the transfer of a controlled joint to a slaved joint will allow the subject to do more repetitions and have control over the movement, while still providing sensory feedback for the joint that could not be moved. This simplifies the patient's movement pattern and allows therapists to do joint-specific training within the context of a functional task. The shelf task example requires shoulder elevation as well as elbow extension. The therapist can have the subject train the task with both the elbow and shoulder. However, if the subject is not capable of elbow extension due to hypertonia and abnormal synergies but is capable of shoulder elevation, the patient can initiate the movement with the shoulder while the robot forces the elbow to follow the shoulder for training.

B. Trajectory Progression
The joint angles required to complete the task with normal kinematics were recorded and the trajectory was smoothed. The joint angle path was created to take the subject through the task and then back to the starting position as one repetition. The position values over time for each arm joint in the trajectory were normalized to make the range of motion of the joints easily variable with gains. The TIFT mode relies on real-time estimation of where the joints are relative to the desired trajectory and joint space walls around the current joint angle. In TIFT mode, movement within the desired trajectory is allowed without resistance. Movement is not allowed if coordination between joints is not correct for the desired trajectory. A global S value determines the current point on the trajectory for all joints. This establishes the current desired joint angle (pref). A local advancement variable, s, was created for each joint, providing a measure of that joint's progression through its desired pattern. Table 2 summarizes the calculation of Δs, which is dependent upon the current joint angle (pact), the current trajectory position (pref), and the joint angle for a 2% advancement in the local s value (prefplus).

Table 2 Chart of Δs Value Determination
This linear estimation method is required because a given joint angle can be associated with several s values, due to reversals in the joint trajectory. At each time step, the minimum Δs value of all of the patient controlled motors is added to the global S value to determine the new S. Negative Δs values are set to zero to prevent moving backward in the trajectory. This new S value is used to reset the local s values of each joint. All controlled joints need to advance in the proper direction for the S value to advance. The start position was defined at S=0, from S=0 to S=0.5 the robot moved to the end of the task trajectory and for S=0.5 to S=1 the patient moved back to the initial position to restart the trajectory. The S value was used to servo the slave joints to their appropriate position in the trajectory using a PD controller. When the patient controlled motors all reach the end of the trajectory (S=1) the trajectory is reset (S=0) and can be repeated as many times as desired by the patient. C. Boundary Walls for the Trajectory The s value of each control joint was used for the creation of angle boundaries to assist the subject in maintaining the proper joint coordination. The angle boundaries have a deadband of 2o (D), which provides a channel that the subject can move through to complete the trajectory without hitting a wall. The joint-space boundaries were created for each joint as exponential walls with respect to actual angle (qact) from the desired angle on the trajectory (qref). The boundary wall torque was determined as shown in Table 3. Table 3 Chart of tau Value Determination
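A sketch of this update logic, with assumed data structures (the actual controller runs on the ARMin III real-time system):

```python
def update_global_S(S, ds_per_joint):
    """ds_per_joint: local advancement estimates of the controlled joints."""
    # negative values are clipped: moving backward on the trajectory is
    # not allowed, and a stalled joint (ds = 0) halts global progression
    ds_clipped = [max(ds, 0.0) for ds in ds_per_joint]
    S += min(ds_clipped)   # all controlled joints must advance together
    if S >= 1.0:           # end of trajectory: reset for the next repetition
        S = 0.0
    return S
```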
where tau is the boundary wall torque, C is a constant which determines the stiffness of the wall and D is the dead band angle. These boundary walls provide haptic feedback to assist the subject in finishing the trajectory. The walls also prevent the subject from moving backward on the trajectory or advancing within the trajectory with an inappropriate interjoint coordination pattern.
115
The wall stiffness was tuned for each joint. Slaved joints were given no dead band so that these joints track the trajectory perfectly according to the position of the patient controlled joints.
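Since Table 3 itself is not reproduced here, the following is only a plausible sketch of an exponential boundary wall with dead band D and stiffness C, consistent with the description above; the exact expressions and signs are assumptions:

```python
import math

def wall_torque(q_act, q_ref, C=0.5, D=2.0):
    """Angles in degrees; restoring torque applied outside the dead band."""
    err = q_act - q_ref
    if abs(err) <= D:
        return 0.0   # free movement inside the 2-degree channel
    # torque grows exponentially with penetration beyond the dead band,
    # always pushing the joint back toward the desired trajectory
    return -math.copysign(C * math.expm1(abs(err) - D), err)
```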
III. FEED-FORWARD STATIC FRICTION AND GRAVITY COMPENSATION
As mentioned previously, the s value is determined by looking ahead on the trajectory and comparing the current position to the future one. This establishes the desired direction that each motor needs to rotate in order to complete the trajectory. This information is used to apply a small torque in that direction to overcome static friction. This feed-forward friction compensation decreases the interaction force required for the patient to move the robot joint in the desired direction and makes it slightly harder to move in the opposite direction [17]. The motor current values needed to overcome static friction at each joint of the ARMin were determined experimentally. When the joint velocity was greater than zero, the static friction compensation current was removed and viscous friction compensation was used. Feed-forward static friction compensation along with robot gravity and viscous friction compensation greatly improved the backdrivability of the robot.
Arm gravity compensation was used to provide assistance to subjects with weakness. The robot could compensate for all of the subject's arm weight, none of the weight, or anywhere in between. This provides the progressive training that maximizes subject effort while still encouraging subjects by allowing successful movements. Coordinate frames were attached and the transformation matrices with respect to the universal frame were determined for each joint [18]. Dempster's body segment parameters were used to obtain the average center of mass location of the arm with respect to the segment length and the average limb segment mass relative to the total mass of the individual [19]. The vectors to the centers of mass of the limb segments in the universal frame were crossed with the gravity force vectors for these segments (Eqn 1). Ms is the shoulder motor torque, rs is the vector to the center of mass of the upper arm, ms is the mass of the upper arm, and the same variables for e and w are the values for the forearm and hand, respectively.
$$M_s = r_s \times m_s g + r_e \times m_e g + r_w \times m_w g \quad (1)$$

The r vectors are calculated from the configuration of the robot and subject's arm. They are determined using the encoder values of the robot. This allows the motors to provide the appropriate gravity compensation in all arm configurations. The joint torques are given in Eqns 2-4.

$$T_{ExtS} = -0.147 \, S_S \, S_{Ext} \, S_E \, L_f \, m_t \quad (2)$$

$$T_{FlexS} = m_t \left( 0.1196 \, S_S L_u + 0.147 \, L_f \left( S_S C_E + C_S C_{Ext} S_E \right) \right) \quad (3)$$

$$T_{FlexE} = 0.147 \, m_t \, L_f \left( S_{Ext} S_E C_S + S_S C_{Ext} C_E + C_{Ext} C_S S_E \right) \quad (4)$$
S and C designate the sine and cosine of the joint angle (s is shoulder flexion, ext is shoulder internal rotation, e is elbow flexion), mt is the total mass of the individual, Lf is the length of the forearm, Lu is the length of the upper arm. The constants in Eqns 2-4 are from the combination of the Dempster and gravity constants for the segment.
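A direct transcription of Eqns 2-4 as reconstructed above (a sketch only; the garbled source leaves the exact grouping of terms uncertain):

```python
import math

def gravity_comp_torques(s, ext, e, m_t, L_f, L_u):
    """s, ext, e: shoulder flexion, shoulder internal rotation and elbow
    flexion angles in radians; m_t total body mass; L_f, L_u segment lengths."""
    Ss, Cs = math.sin(s), math.cos(s)
    Sx, Cx = math.sin(ext), math.cos(ext)
    Se, Ce = math.sin(e), math.cos(e)
    T_ext_s = -0.147 * Ss * Sx * Se * L_f * m_t                     # Eqn 2
    T_flex_s = m_t * (0.1196 * Ss * L_u
                      + 0.147 * L_f * (Ss * Ce + Cs * Cx * Se))     # Eqn 3
    T_flex_e = 0.147 * m_t * L_f * (Sx * Se * Cs
                                    + Ss * Cx * Ce + Cx * Cs * Se)  # Eqn 4
    return T_ext_s, T_flex_s, T_flex_e
```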
IV. VISUAL INTERFACE
The visual interface software for the ARMin III robot from ETH and the University of Zurich was modified to assist the subjects with completion of the time independent task. The avatar arm is controlled by the encoder values of the ARMin III. Arrows were added to the controlled arm joints to tell the subject the direction the joint needed to move in order to complete the trajectory. These arrows change direction with the trajectory and are removed if a control joint is set as a slave joint. The number of repetitions and the percentage of the trajectory completed are also displayed in order to provide encouragement for the subjects. When the subject completes the trajectory, such as putting an object on the shelf, the interface produces a triumphant chime.
Fig. 2 Visual interface for the shelf task with avatar, direction arrows and trajectory information for patient feedback. The image displays the avatar at the initial position of the task and the respective feedback to the subject

The therapist is able to use the settings menu to switch from the TIFT mode to the time dependent mode, in which the robot will move the subject through the rest of the path if the subject is not capable of finishing the trajectory. The therapist can control the velocity of the time dependent control mode from this settings menu and switch back and forth between the modes easily.

V. DISCUSSION AND CONCLUSIONS
The field of rehabilitation robotics has great potential for increasing therapy outcomes by providing precise repetitions of functional tasks and new training environments. Motivation and subject effort are very important in therapy and can be decreased by excessive robot control of the movement. The TIFT control method has been implemented on the ARMin III robot to provide joint-space trajectory training with scalable assistance. Therapists are given the ability to change the joints controlling the movement, the human arm gravity assistance level, and the range of motion of the trajectory within the TIFT mode. The settings menu allows the therapist to smoothly change between TIFT and the standard time dependent robot therapy mode if needed. The visual interface provides an easy way to change the therapy settings and easy-to-understand feedback about how to interact with the TIFT mode.

ACKNOWLEDGMENT
We would like to thank Marco Guidali and Prof. Dr. Robert Riener from ETH and the University of Zurich for the ARMin III source code, which was the starting point for our software development. This work was supported in part by the U.S. Department of Veterans Affairs and the U.S. Army Medical Research and Materiel Command.

REFERENCES
1. American Heart Association (2009) Heart disease and stroke statistics - 2009 update. Dallas, TX: American Heart Association.
2. B. Kollen, G. Kwakkel, and E. Lindeman, "Functional recovery after stroke: a review of current developments in stroke rehabilitation research," Rev Recent Clin Trials, vol. 1, pp. 75-80, 2006.
3. A. A. Timmermans, H. A. Seelen, R. D. Willmann, and H. Kingma, "Technology-assisted training of arm-hand skills in stroke: concepts on reacquisition of motor control and therapist guidelines for rehabilitation technology design," J Neuroeng Rehabil, vol. 6, no. 1, pp. 1-18, 2009.
4. J. Hidler, D. Nichols, M. Pelliccio, K. Brady, D. Campbell, J. Kahn, and G. Hornby, "Multicenter randomized clinical trial evaluating the effectiveness of the Lokomat in subacute stroke," Neurorehabil Neural Repair, vol. 23, no. 1, pp. 5-13, 2009.
5. M. Guidali, M. Schmiedeskamp, V. Klamroth, R. Riener, "Assessment and training of synergies with an arm rehabilitation robot," IEEE 11th International Conference on Rehabilitation Robotics, 2009, pp. 772-776.
6. P. S. Lum, C. Burgar, M. Van der Loos, P. C. Shor, M. Majmundar, and R. Yap, "MIME robotic device for upper-limb neurorehabilitation in subacute stroke subjects: A follow-up study," J Rehabil Res Dev, vol. 43, pp. 631-642, 2006.
7. E. Wolbrecht, V. Chan, D. Reinkensmeyer, and J. Bobrow, "Optimizing compliant, model-based robotic assistance to promote neurorehabilitation," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 3, pp. 286-296, 2008.
8. J. Mehrholz, J. Kugler, and M. Pohl, "Electromechanical and robot-assisted arm training for improved arm function and activities of daily living after stroke," Cochrane Database of Systematic Reviews 2008, no. 8, 2008.
9. G. Kwakkel, B. J. Kollen, and H. I. Krebs, "Effects of robot-assisted therapy on upper limb recovery after stroke: a systematic review," Neurorehabil Neural Repair, vol. 22, pp. 111-120, 2008.
10. L. Dipietro, H. I. Krebs, S. E. Fasoli, B. T. Volpe, J. Stein, C. T. Bever, and N. Hogan, "Changing motor synergies in chronic stroke," J Neurophysiol, vol. 98, pp. 757-768, 2007.
11. M. D. Ellis, B. G. Holubar, A. M. Acosta, R. F. Beer, and J. P. Dewald, "Modifiability of abnormal isometric elbow and shoulder joint torque coupling after stroke," Muscle Nerve, vol. 32, pp. 170-178, 2005.
12. M. D. Ellis, T. Sukal-Moulton, and J. D. Dewald, "Progressive shoulder abduction loading is a crucial element of arm rehabilitation in chronic stroke," Neurorehabil Neural Repair, vol. 23, pp. 862-869, 2009.
13. J. J. Buchanan, K. Zihlman, Y. U. Ryu, and D. L. Wright, “Learning and transfer of a relative phase pattern and a joint amplitude ratio in a rhythmic multijoint arm movement,” J Mot Behav, vol. 39, no. 1, pp. 49-67, 2007. 14. P. Haggard, K. Hutchinson, J. Stein, “Patterns of coordinated multijoint movement,” Exp Brain Res, vol. 107, pp. 254-266, 1995. 15. T. Nef, M. Guidali, R. Riener, “ARMin III - arm therapy exoskeleton with an ergonomic shoulder actuation,” Applied Bionics and Biomechanics, vol. 6, no. 2, pp. 127-142, 2009. 16. E.B. Brokaw. Hand spring operated movement enhancer (HandSOME) device design for hand rehabilitation after stroke, M.B.E. thesis, Catholic University of America, United States, 2009. 17. T. Nef, and P.S. Lum. “Improving backdrivability in geared rehabilitation robots,” Med & Biol Eng Comput, 2009. 18. J. J. Craig, Introduction to robotics, 3rd ed., Harlow: Pearson Education, 2004. 19. G. Robertson, G. Caldwell, J. Hamill, G. Kamen, and S. Whittlesey. Research methods in biomechanics Windsor, ON: Human Kinetics pp.57, 2004.
Kinematic Analysis in Robot Assisted Femur Fracture Reduction: Fuzzy Logic Approach
Wang Song1, Chen Yonghua1, Ye Ruihua1, and Yau WaiPan2
1 Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, China
2 Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China
Abstract— Femur fracture is a frequently occurring injury that is usually treated surgically. In order to overcome problems involved in these surgeries and improve surgical efficiency, research on automated or semi-automated robot-assisted surgery has been carried out, but very little of the literature has addressed the motion planning of bone fragment realignment, such as velocity and acceleration planning. Fuzzy logic, which uses linguistic rules to manipulate and implement human knowledge, is deemed an efficient approach for dealing with large variability and for overcoming the difficulty of modeling a complex environment that is hard to express with mathematical equations. In this paper, the bone fragment is reconstructed based on CT images. Two fuzzy logic controllers are proposed to handle acceleration planning. The controllers use linguistic rules to coordinate inputs about obstacles, such as soft tissues, nerves and arteries, so that the acceleration for bone fragment reduction can be obtained. Fuzzy controller one evaluates the risk of trauma induced by bone fragment motion, and fuzzy controller two computes acceleration based on the risk of trauma and the reduction path. Velocity is computed from the acceleration. Simulation results show that the risk of trauma increases when the bone fragment approaches obstacles and that, at the same time, the bone fragment velocity decreases smoothly. This is exactly what is desired in bone fragment reduction.
Keywords— Robot-assisted surgery, femur fracture reduction, fuzzy logic, acceleration planning, velocity planning.

I. INTRODUCTION
Femur fracture is a frequent injury [1]. However, the surgical treatment of femur fracture involves many problems, such as a large amount of radiation exposure to the patient and operating staff, and mal-alignment of fracture segments that induces many complications. In order to overcome these problems, many researchers and physicians are attempting to improve the reduction process, and the most promising method is deemed to be robot-assisted reduction surgery, which leads to a considerable reduction in radiation exposure, shorter fluoroscopy times and increased precision of fracture reduction [2].
Much research has been done in the field of robot-assisted surgery for femur fracture. A.E. Graham et al. designed a parallel robot and described the interface including visualization of the fracture, assistance planning and human-robot interaction [3]. Warisawa et al. also developed a fracture reduction robot with a fail-safe mechanism. The robot was mounted to a patient's foot and provided power assistance to help the surgeon realign fragments [4-5]. After some modification of the robot's end-effector, their robot was used to insert a pin into a patient's bone fragment to assist hip fracture reduction surgery [6]. Koo T.K. et al. developed a reposition device to perform closed fracture treatment [7-8]. Ralf Westphal et al. developed a telemanipulated fracture reduction system consisting of a fluoroscopy system, a surgical navigation system, a commercial serial robot RX90 and a control PC [9-11]. Our laboratory has developed a reduction robot, and the reduction system is illustrated in Figure 1.
Fig. 1 The bone fragment reduction system. 1. Fluoroscopy device. 2. Reduction robot. 3. Control PC. 4. Robot controller

Although some research reports can be found in the area of robot-assisted femur fracture surgery, very few concern realigning motion planning, including velocity planning and acceleration planning. Precise and intelligent motion planning of bone fragment reduction is deemed necessary, as it provides a safeguard and prevents unexpected collisions between the bone fragment and critical soft tissues. A useful approach to implementing motion planning is fuzzy logic, because of its robustness in dealing with large variability [12] and its effectiveness in overcoming the difficulty of modeling a complex environment that is hard to express with mathematical equations. In this paper, bone fragment pre-processing is presented first. Initial poses of the fracture fragments and the reduction path are provided pre-operatively. Based on this information, acceleration planning is determined by two fuzzy controllers: fuzzy controller one evaluates the risk of trauma and fuzzy controller two computes the acceleration of the mobile bone fragment. Velocity is computed based on the acceleration.
II. ACCELERATION AND VELOCITY PLANNING
In femur fracture treatment, the proximal fragment is held in position by a clinical fixator and the distal fragment is moved to the correct position. Thus, planning of velocity and acceleration is the computation of the distal fragment's motion.

A. Bone Fragment Pre-processing
A 3D reconstruction of CT images of one femur fracture fragment is shown in Figure 2(a). After segmentation and surface reconstruction of the 3D model, we sample some points on the fracture surface, as shown in Figure 2(b). Applying the least squares method to these 3D points, we find the planar surface that minimizes the sum of distances between the sampled points and the plane. The planar surface is then offset by enough distance to cover all the sampled points and leave some tolerance between the offset surface and the sampled points. The flat surface and offset surface are shown in Figure 2(c). The bone fragment can therefore be modeled as a regular cylinder with a flat offset surface, as shown in Figure 2(d).
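As a sketch, the least-squares plane and its offset can be computed from the sampled points via SVD (point data and tolerance are placeholders):

```python
import numpy as np

def fit_and_offset_plane(points, tolerance=1e-3):
    """points: (N, 3) array sampled from the reconstructed fracture surface."""
    centroid = points.mean(axis=0)
    # the singular vector with the smallest singular value of the centered
    # cloud is the normal of the least-squares plane
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = (points - centroid) @ normal        # signed point-plane distances
    # shift the plane just past the farthest point, plus some tolerance
    offset_point = centroid + (d.max() + tolerance) * normal
    return normal, offset_point
```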
B. Computation of Acceleration
a) Fuzzy Controllers
The motion of the distal fragment is denoted by the motion of the center point of its fracture surface. When approaching the destination, the distal fragment should be moved more and more slowly; thus the distance from the current position of the bone fragment to the destination, denoted d_des, is one parameter used to evaluate acceleration. Along the reduction path there are some critical soft tissues, called obstacles in this paper, which are modeled as spheres enclosing the critical soft tissues. When approaching these obstacles, the distal fragment should also be moved cautiously. Thus, we present another parameter, the risk of trauma, denoted r, to evaluate the probability of injuring these critical soft tissues. To sum up, acceleration is evaluated by fuzzy controller two according to the two input parameters d_des and r. The risk of trauma ranges from zero to one, and a higher value means the distal fragment is more likely to collide with the critical soft tissues. A schematic diagram of the parameters for fuzzy controller two is given in Figure 3(a). Evaluation of the risk of trauma is carried out by fuzzy controller one based on three input parameters: the distance to the most adjacent obstacle, denoted d_obs; the difference angle between the moving direction of the distal fragment and the obstacle direction, denoted a_moving; and the difference angle between the fragment axis and the obstacle direction, denoted a_axis. The parameter definitions for fuzzy controller one are illustrated in Figure 3(b). The coding scheme for acceleration planning is shown in Figure 4.
Fig. 2 Bone fragment modeling. (a) Femur fracture fragment. (b) Sampled points on the reconstructed fracture surface. (c) Offset surface and flat surface. (d) Regular fragment modeling

Fig. 3 Scheme of fuzzy controllers. (a) Fuzzy controller two. (b) Fuzzy controller one

Fig. 4 Scheme of acceleration planning
b) Fuzzification
The membership function shown in Figure 5(a) is used to fuzzify the values of d_obs. The membership function shown in Figure 5(b) is used to fuzzify the values of a_axis and a_moving. Values of d_des are fuzzified by the membership function shown in Figure 5(c), and values of r are fuzzified by the membership function shown in Figure 5(d).
Fig. 5 Membership functions for linguistic variables. (a) "d_obs" (b) "a_axis & a_moving" (c) "d_des" (d) "r"
c) Defuzzification Center of gravity defuzzification method is used to defuzzify acceleration. Output membership functions of r and acceleration are shown as Figure 6.
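A minimal sketch of center-of-gravity defuzzification on a discretized output universe (the membership data here are illustrative, not the paper's):

```python
import numpy as np

def centroid_defuzzify(x, mu):
    """x: output universe; mu: aggregated membership values on x."""
    return np.sum(x * mu) / np.sum(mu)   # center of gravity

x = np.linspace(-1.0, 1.0, 201)                      # acceleration universe
mu = np.maximum(0.0, 1.0 - np.abs((x - 0.2) / 0.4))  # example clipped fuzzy set
print(centroid_defuzzify(x, mu))                     # crisp acceleration value
```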
THEN
d_ obs
a_moving
a_axis
r
Near
Point
Point
V
Near
Point
Slant
V
Near
Point
Deflect
IV
Near
Slant
Point
Near
Slant
Slant
Near
Slant
Deflect
Near
Deflect
Point
Near
Deflect
Slant
II
IF
THEN
d_des
r
acceleration
Any
V
NB
Adjacent
IV
NB
Adjacent
III
NB
IV
Adjacent
II
NS
III
Adjacent
I
NS
III
Near
IV
NS
III
Near
III
NS
Near
II
ZE
Near
Deflect
Deflect
II
Near
I
ZE
Middle
Point
Point
IV
Middle
IV
NS
Middle
Point
Slant
III
Middle
III
ZE
Middle
Point
Deflect
III
Middle
II
PS
Middle
Slant
Point
II
Middle
I
PS
Middle
Slant
Slant
II
Far
IV
NS
Middle
Slant
Deflect
I
Far
III
ZE
Middle
Deflect
Any
I
Far
II
PS
Far
Any
Any
I
Far
I
PB
Very far
IV
NS ZE
Very far
III
Very far
II
PS
Very far
I
PB
C. Computation of Velocity
Equation (1) shows the velocity calculation:

$$V_{i+1} = \sqrt{V_i^2 + 2 a_i \Delta S_{i+1}}, \qquad \Delta S_{i+1} = \sqrt{(x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2 + (z_{i+1}-z_i)^2}, \qquad t_i = \frac{2\,\Delta S_{i+1}}{V_{i+1}+V_i} \quad (1)$$

V_{i+1} is the velocity of the distal fragment, a_i is the acceleration of the distal fragment, and ΔS_{i+1} is the distance of one interval between the points (x_i, y_i, z_i) and (x_{i+1}, y_{i+1}, z_{i+1}); t_i is the time that the distal fragment spends in the interval.
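A sketch of the interval-wise update in equation (1); the waypoints and accelerations are placeholder inputs:

```python
import math

def propagate_velocity(path, accels, v0=0.0):
    """path: list of (x, y, z) waypoints; accels: acceleration a_i per interval."""
    v, times = v0, []
    for (p0, p1), a in zip(zip(path, path[1:]), accels):
        dS = math.dist(p0, p1)                        # interval length
        v_next = math.sqrt(max(v * v + 2 * a * dS, 0.0))
        times.append(2 * dS / (v_next + v) if v_next + v > 0 else 0.0)
        v = v_next                                    # carry velocity forward
    return v, times
```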
III. SIMULATION RESULTS
Results are calculated in MATLAB and shown in Figure 7.
Fig. 7 Simulation results. (a) Reduction path and obstacle. (b) Result of risk of trauma. (c) Result of acceleration. (d) Result of velocity
In Figure 7(b), the risk of trauma reaches high values, about 0.7, at around 6 s, because the distal fragment is nearest to the obstacle at that moment. In Figures 7(c)-(d), the velocity rises slightly and soon drops because of the high risk values at the beginning; after 8 s the bone fragment moves faster owing to the low risk values. When the bone fragment approaches the destination, the acceleration slumps.
IV. DISCUSSION AND CONCLUSIONS
The simulation results show that the motion of the distal fragment can respond to an environment with obstacles. When the bone fragment approaches an obstacle, causing the risk of trauma to climb, the velocity of the bone fragment is slowed down accordingly. After the risk of trauma falls, the velocity can rise again. These results validate the performance of the proposed methods and show that the proposed algorithms can guide the motion of the distal fragment so as to provide a safeguard against unexpected collision between the bone fragment and critical soft tissues, making femur fracture treatment more precise and gentle. These results could be used to control the robotic system in automated femoral fracture reduction.
ACKNOWLEDGMENT
This research is supported by a CRCG grant from The University of Hong Kong.

REFERENCES
1. Zlowodzki M, B. M, Marek D.J, Cole P.A, Kregor P.J (2006) Operative treatment of acute distal femur fractures: systematic review of 2 comparative studies and 45 case series (1989 to 2005). Journal of Orthopaedic Trauma 20:366-371
2. Gebhard F, K. M, Schneider E, Arand M, Kinzl L, Hebecker A, Bätz L (2003) Radiation dosage in orthopedics - a comparison of computer-assisted procedures. Der Unfallchirurg 106:492-497
3. Graham A.E, S. Q. X, Aw K.C, Xu W.L, Mukherjee S (2008) Robotic long bone fracture reduction. In: Bozovic V (ed) Medical Robotics. I-Tech Education and Publishing, Vienna, Austria, pp 85-102
4. Mitsuishi M, S. N, Warisawa S, Ishizuka T, Nakazawa T, Sugano N, Yonenobu K, Sakuma I (2005) Development of a computer-integrated femoral head fracture reduction system. Proceedings of the 2005 IEEE International Conference on Mechatronics, ICM '05, Taipei, Taiwan, pp 834-839
5. Warisawa S, I. T, Mitsuishi M, Yonenobu K, Sugano N, Nakazawa T (2004) Development of a femur fracture reduction robot. IEEE International Conference on Robotics and Automation, New Orleans, LA, United States, pp 3999-4004
6. Sanghyun J, H. K, Hongen L, Junichiro I, Touji N, Mamoru M, Yoshikazu N, Tsuyoshi K, Nobuhiko S, Yuki M, Masahiko B, Satoru O, Takuya M, Isao O, Ichiro S (2008) A robot assisted hip fracture reduction with a navigation system. Medical Image Computing and Computer-Assisted Intervention 11:501-508
7. Koo T.K, C. E, Mak A.F (2006) Development and validation of a new approach for computer-aided long bone fracture reduction using unilateral external fixator. Journal of Biomechanics 39:2104-2112
8. Yoon H.K, N. I, Edmund Y.S.C (2002) Kinematic simulation of fracture reduction and bone deformity correction under unilateral external fixation. Journal of Biomechanics 35:1047-1058
9. Ralf W, S. W, Thomas G, Markus O, Tobias H, Christian K, Friedrich M.W (2009) Automated robot assisted fracture reduction. In: Advances in Robotics Research. Springer, Berlin Heidelberg, pp 1259-1278
10. Ralf W, S. W, Friedrich W, Thomas G, Markus O, Tobias H, Christian K (2009) Robot assisted long bone fracture reduction. International Journal of Robotics Research OnlineFirst 28:1259-1278
11. Ralf W, T. G, Markus O, Jan B, Daniel K, Simon W, Tobias H, Christian K, Friedrich W (2008) Robot assisted fracture reduction. In: Experimental Robotics, vol 39. Springer, Berlin/Heidelberg, pp 153-163
12. Kiwon P, N. Z (2007) Behavior-based autonomous robot navigation on challenging terrain: a dual fuzzy logic approach. IEEE Symposium on Foundations of Computational Intelligence, Honolulu, HI, pp 239-244

Author: WANG Song
Institute: The University of Hong Kong
Street: Pokfulam Road
City: Hong Kong
Country: China
Email: [email protected]
Compensation for Weak Hip Abductors in Gait Assisted by a Novel Crutch-Like Device J.R. Borrelli and H.W. Haslach Jr. University of Maryland/Department of Mechanical Engineering, College Park, USA
Abstract— A novel crutch-like device compensates for weak hip abductors in gait. Many teenagers with weak hip abductors who are able to walk with crutches, an important group being individuals with low-level spina bifida, abandon community ambulation in favor of a wheelchair because walking takes more energy than using a wheelchair. The health benefits of standing and walking as opposed to using a wheelchair include improved bone growth, improved blood circulation, reduced bladder infections, reduced pressure sores and prevention of contractures. The increased energy cost, which is characterized by excessive motions of the pelvis and torso, is attributed to weakened or paralyzed hip abductor muscles. A novel crutch, designed by the authors, lowers the energy cost of walking, which encourages patients with weak hip abductors to continue ambulation. Published research shows that significant work is done in the frontal plane by the hip to stabilize the pelvis in normal gait. In the absence of hip abductors, other motions and muscles are used to inefficiently stabilize the pelvis. The new crutch compensates for weak or absent hip abductor torques by applying a load on the side of the upper torso. A 3D model of crutch walking validates the device and determines the forcing and fitment guidelines. The novel 3D gait model is capable of capturing crutch gait incorporating pelvic tilt, pelvic rotation and pelvic obliquity. The 3D model predicts a reduction in the hip abductor moment required during a gait cycle when using the crutch compared to forearm crutches if the user has very weak hip abductor muscles. The decrease in the required hip abductor moment is attributed to the forcing caused by the crutch, which may reduce excessive lateral displacement of the pelvis and torso. The decrease in the required hip abductor moment reduces the compensating motions that would otherwise be necessary. Keywords— Crutch gait for weak hip abductor, 6-link three-dimensional gait model, pelvic tilt, pelvic rotation and obliquity, inverse dynamics.
I. INTRODUCTION
Although the majority of the work done while walking occurs in the sagittal plane, significant work is done in the frontal plane to stabilize the pelvis during gait in non-disabled individuals [1]. Individuals with weak or paralyzed hip abductor muscles, as occurs in low-level spina bifida, exhibit excessive pelvic and torso motion in the frontal plane, expending more energy during gait than in normal walking. These motions, in addition to excessive knee flexion, increase the energy expenditure of this gait compared to normal gait [2], [3]. Further, these deviations from normal gait put excess stress on the knee, and consequently individuals with weak hip abductors frequently develop knee pain [4]. Cane use has been shown to reduce the moment needed by the hip abductors to stabilize the hip in the frontal plane [5]. Vankoski et al. [6] show a decrease in the tilt and lateral displacement of the pelvis among individuals with low-level spina bifida who use forearm crutches compared to those who do not, resulting in a more "normal" gait. These findings suggest that assistive device use reduces compensatory motions by reducing the required hip abductor moment. However, the lack of assistive devices that completely eliminate unnecessary compensatory motions in gait with weak hip abductors merits a closer look. An assistive device compensates for weak hip abductors by developing a moment about the pelvis in the frontal plane, as the hip abductors otherwise would. A novel crutch, designed by the authors [7], directs the load applied by the user in a unique way that reduces the required force of the stance-side hip abductors even more than conventional assistive devices do. The crutch is evaluated using a three-dimensional dynamic model which incorporates pelvic tilt, rotation and obliquity. The novel three-dimensional walking model is used to investigate crutch gait. The model shows the crutch supplements the hip abductors during gait, potentially reducing the incidence of knee pain by reducing excess pelvic and torso motions in individuals with weak hip abductors.

II. BACKGROUND
Neumann [5] develops a model of the pelvis in the frontal plane in order to estimate the joint reaction force (JRF) between the stance-side femur and the pelvis. The model assumes the JRF of the stance thigh on the pelvis, crutch
force (CF), hip abductor force (HAF) and the body weight (BW) act along a line in the frontal plane, parallel to the horizontal plane, connecting the two ball-and-socket joints of the pelvis. This model predicts almost a 50% reduction in the hip abductor moment necessary to stabilize the pelvis when supporting 10% of the body weight on a contralateral cane while standing still, compared to not using a cane. The model of Neumann [5] is extended here by assuming the JRF, HAF, BW and CF act on a plate which represents the upper body. The reason for using a plate to model the upper body is that the CF can then act at an angle other than vertical. A free body diagram of the upper body in the frontal plane is shown in Fig. 1. Assuming that all of the forces are known, the hip abductor force can be calculated by summing the moments about O, eqn (1).
Fig. 2 Plot of the hip abductor moment necessary to maintain the pelvis in a state of static equilibrium. The black line is with no crutch, the red line is with a crutch loaded with 10% of BW at an angle. The green line uses a narrower crutch stance width than assumed by Neumann. As the angle increases the hip abductor moment decreases
Fig. 1 Free body diagram of the upper body. Skeleton picture taken from [8]. *Five-sixths body weight (5/6 BW) is the weight of the body minus the stance leg
\sum M_O = 0: \quad HAF \cdot D_1 - \tfrac{5}{6} BW \cdot D_2 + CF \cos\beta \cdot (D_2 + D_3) + CF \sin\beta \cdot D_4 = 0  (1)

The distance from the HAF to the JRF, D1, is 4.39 cm; the distance from the JRF to the center of gravity (COG), D2, is 8.64 cm; the horizontal distance from the COG to the point of application of the CF, D3, is 26.36 cm; and the vertical distance from the pelvis to the point of application of the CF, D4, is 53.9 cm, all from Neumann [5]. Assuming that the crutch supports 10% of the total BW, and a BW of 77 kg, the hip abductor moment necessary to stabilize the pelvis, the black line in Fig. 2, is 0.71 Nm/kg-BW. The red line is the hip abductor moment necessary when an assistive device supporting 10% of the user's BW is placed at an angle from the body, which varies from 0 to 0.7 rad. When the device is vertical, the hip abductor moment is reduced by nearly 50%. Assuming a narrower crutch stance width D3 of 17.1 cm, compared to the 26.36 cm of Neumann [5], gives the green line, which predicts a 35% reduction in the required hip abductor moment when the crutch is held vertical. As the assistive device angle increases, the necessary hip abductor moment decreases.
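For illustration, a minimal Python sketch of eqn (1) solved for the hip abductor force, not the authors' code; the distances and loads are those quoted above, while g = 9.81 m/s^2 and the sign conventions are assumptions of the sketch:

import math

G = 9.81  # gravitational acceleration, m/s^2 (assumed)

def abductor_moment_per_kg(mass_kg, cf_frac, beta_rad,
                           d1=0.0439, d2=0.0864, d3=0.2636, d4=0.539):
    bw = mass_kg * G                 # body weight (N)
    cf = cf_frac * bw                # crutch force (N)
    # eqn (1): HAF*D1 = (5/6)BW*D2 - CF*cos(beta)*(D2+D3) - CF*sin(beta)*D4
    haf = ((5.0 / 6.0) * bw * d2
           - cf * math.cos(beta_rad) * (d2 + d3)
           - cf * math.sin(beta_rad) * d4) / d1
    return haf * d1 / mass_kg        # hip abductor moment, Nm per kg body mass

print(abductor_moment_per_kg(77, 0.0, 0.0))   # no crutch: ~0.71 Nm/kg-BW
print(abductor_moment_per_kg(77, 0.1, 0.0))   # vertical crutch, 10% BW: ~49% lower

Sweeping beta_rad from 0 to 0.7 rad reproduces the falling red curve of Fig. 2.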
This model demonstrates that using a wide crutch stance decreases the role of the hip abductors while standing still. The hip abductor moment required for static equilibrium decreases further as the crutch stance width increases. Assuming a coefficient of friction of 0.5, the value for a rubber-type footpad on linoleum, a maximum angle of 0.46 rad (26.5°) is found before the device slips. Except when completely vertical, crutch and cane footpads are constantly being deformed while in use. Too wide a crutch angle may cause the footpad of axillary or forearm crutches to deform, altering the contact pattern, and slip. This problem is overcome through a novel improvement on conventional crutch design [7] (Fig. 3). The device maintains a constant orientation between the footpad and the ground and between the axillary pad and the body by allowing the footpad and axillary support to rotate in the frontal plane about the support tube. A benefit of the axillary pad
maintaining a constant orientation with the body is reduced chafing. Internal springs provide a restoring force which returns the device to the vertical position.
III. GAIT MODEL
The modified Neumann model ignores dynamic effects. In order to truly evaluate the effect of the assistive device, a dynamic three-dimensional model is developed. Saunders et al. [9] identify six determinants of gait, body motions thought to be the core motions that generate a "normal" gait; three of them are motions of the pelvis. The model motions are based on the Saunders et al. determinants of gait, and a torso is also included. The model uses six rigid links with 10 degrees of freedom. The three-dimensional model is used to confirm the hypothesis that a contralateral crutch decreases the hip abductor moment needed to stabilize the pelvis during gait (Fig. 4).
(eversion/inversion and plantar/dorsiflexion). The thigh is connected to the shank by a hinge (flexion/extension) and to the pelvis by a ball-and-socket joint (rotation, tilt and obliquity). The torso rotates about the pelvis in the sagittal plane on a hinge, and the swing thigh is connected to the pelvis by a universal joint (ab/adduction and flexion/extension). Lastly, the swing shank is connected to the thigh by a hinge (flexion/extension).
Fig. 4 Free body diagram of the three-dimensional model used to investigate the novel crutch
Fig. 3 The novel footpad and axillary support allows the support tube (1) to rotate around a longitudinal pin (2) in the frontal plane. A torsional spring (3) provides a restoring force to return the device to the vertical [7]. The stance foot is ignored in this simulation and the shank is free to rotate in the frontal and sagittal planes

The equations of motion are derived using Lagrange's equations,

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = Q,  (2)
where the Lagrangian, L, is the difference between the kinetic and potential energy of the system, q are the generalized coordinates and Q are the external forces. Average normal kinematics from Neumann [5] are curve-fit with a Fourier series up to the fifth term and substituted into the equations to estimate the joint torques. Winter [10] found that more than 99% of the signal power of body kinematics is contained in frequencies below 6 Hz. The black line in Fig. 5 represents experimentally derived hip ab/adductor moments [5] observed during the swing phase of normal gait. The red line is the model-predicted hip ab/adductor moment with no crutch (i.e., CF = 0).
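The curve-fitting step can be sketched in a few lines of Python; this is an illustration under assumptions (a 1 s gait cycle and a synthetic joint-angle trace), not the authors' processing code:

import numpy as np

def fourier_fit(t, y, period, n_harmonics=5):
    # least-squares fit of y(t) by a Fourier series up to the fifth harmonic
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)

    def model(tt):
        out = np.full_like(tt, coef[0], dtype=float)
        for k in range(1, n_harmonics + 1):
            out += coef[2 * k - 1] * np.cos(k * w * tt) + coef[2 * k] * np.sin(k * w * tt)
        return out
    return model

t = np.linspace(0.0, 1.0, 101)                                   # one gait cycle (s)
angle = 10 * np.sin(2 * np.pi * t) + 3 * np.sin(4 * np.pi * t)   # synthetic hip angle
fit = fourier_fit(t, angle, period=1.0)
print(np.max(np.abs(fit(t) - angle)))                            # ~0 for this trace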
The model over-estimates the hip abductor moment until mid-swing, after which it under-estimates it. This is potentially caused by the simplified modeling of foot and ankle kinematics; however, these estimates are within the range of what is considered normal. The green line in Fig. 5 is the hip abductor moment necessary for normal kinematics assuming a load of 10% BW applied to a contralateral crutch in the vertical position. The blue line represents the hip abductor moment necessary to maintain normal gait kinematics while using a crutch loaded with 10% BW at an angle of 10° with respect to the vertical. The green line represents an average decrease in the hip abductor moment of 28% during the swing phase, while the blue line decreases the moment needed from the hip abductors by 42% on average compared to no crutch.
Fig. 5 The experimentally derived hip abductor moment during the swing phase of gait is shown in black [5]. The model-calculated hip abductor moment is shown in red. The model-calculated hip abductor moment necessary to achieve "normal" kinematics during gait with 10% BW supported on a contralateral crutch is shown in green, and with 10% BW supported on a contralateral crutch at a 10° angle with respect to the vertical in blue
IV. DISCUSSION
Neumann [5] determines the hip abductor moment necessary to stabilize the pelvis in the frontal plane. Vankoski et al. [6] show that lateral displacement of the pelvis becomes more like normal for individuals with weak hip abductors when forearm crutches are used. This information suggests that the greater the moment developed by a contralateral assistive device, the more closely the pelvic kinematics in the frontal plane will match those of a non-disabled individual. The three-dimensional model developed here demonstrates that a contralateral crutch decreases the hip abductor moment necessary to maintain "normal" kinematics, and that placing the device at an angle increases the assistive effect. Even supporting a modest load on the assistive device, 10%
BW, reduces the moment that must be generated by the hip abductors by 28%.
V. CONCLUSION
Gait with weak or paralyzed hip abductors, such as is exhibited in spina bifida, is characterized by excessive lateral pelvic and torso displacement, which increases the energy cost of walking and the strain on the knees. A new three-dimensional model demonstrates that assistive device use reduces the hip abductor moment necessary to maintain "normal" kinematics. Contralateral crutch use has been shown to help restore normal pelvic and torso motions, decreasing the energy cost associated with walking. This study suggests that employing a wide crutch stance may further decrease the pelvic and torso displacements associated with a gait with weak hip abductors, improving gait cosmesis and potentially reducing the incidence of knee pain. The model predicts a decrease in the required hip abductor moment as crutch stance width increases; however, care must be taken to avoid exceeding the critical angle, which is a function of the coefficient of friction of the device's footpad. In addition, as the stance width increases, the assistive device length must increase. The maximum crutch width may be limited on a case-by-case basis, based on user tolerance of increasing crutch length.
REFERENCES
1. Eng J, Winter D (1995) Kinetic analysis of the lower limbs during walking: what information can be gained from a three-dimensional model? J Biomechanics 28:753-758
2. Duffy C, Hill A, Cosgrove I et al (1996) Three dimensional gait analysis in spina bifida. J Pediatr Orthoped 16:786-791
3. Duffy C, Hill A, Cosgrove I et al (1996) The influence of abductor weakness on gait in spina bifida. Gait Posture 4:34-38
4. Williams J, Graham G et al (1993) Late knee problems in myelomeningocele. J Pediatr Orthop 13:701-703
5. Neumann D (2002) Kinesiology of the musculoskeletal system. Mosby, St. Louis
6. Vankoski S, Moore C et al (1997) The influence of forearm crutches on pelvic and hip kinematics in children with myelomeningocele: don't throw away the crutches. Dev Med Child Neurol 39:614-619
7. Haslach H, Borrelli J (2009) Crutch-like mobility assist device with rotatable footer assembly. United States patent US 7,581,556 B2
8. http://naturenest.files.wordpress.com/2008/10/skeleton.gif
9. Saunders J, Inman V et al (1953) The major determinants in normal and pathological gait. J Bone Jt Surg 35A:543-558
10. Winter D (1990) The biomechanics and motor control of human movement. Wiley, New York
11. Neumann D (1996) Hip abductor muscle activity in persons with a hip prosthesis while carrying loads in one hand. Phys Ther 76:1320-1330
Measuring in vivo Effects of Chemotherapy Treatment on Cardiac Capillary Permeability A. Fernandez-Fernandez*, D.A. Carvajal, and A.J. McGoron Biomedical Engineering Dept., Florida International University, Miami, FL Abstract— Background: Cardiotoxicity is a life-threatening side effect of chemotherapy that is multi-factorial in nature. Increased capillary permeability due to endothelial damage after anthracycline administration could be an important contributor to cardiac dysfunction. Methods: We investigated the cardiotoxic effects of the anthracycline doxorubicin (DOX) in rats using intraperitoneal injections of saline (n=10) or DOX (n=10) over 12 days for a cumulative dose of 18 mg/kg. We studied cardiotoxicity by serial echocardiography and with an isolated heart setup. We monitored perfusion and left ventricle pressures and measured permeability using a fluorescent indicator dilution method developed by our group. Results: There were significant differences (p=0.029) in permeability surface area product between control (0.047±0.009 cm3/s) and DOX animals (0.068±0.025 cm3/s). This is consistent with our hypothesis that chemotherapy-induced changes in the coronary capillary endothelium lead to increased permeability. We also observed changes in cardiac function consistent with chemotherapy-induced cardiotoxicity. Contractility (+dP/dt) and LVDP were significantly reduced (p = 0.030) in the DOX group. Average +dP/dt was 2465±183 mm Hg/s for controls vs. 1817±177 mm Hg/s for DOX-treated rats, and average LVDP was 92.4 mm Hg for controls vs. 78.7 mm Hg for DOX-treated rats. Fractional shortening (FS) echocardiographic measurements decreased significantly over the course of treatment for the DOX group (p = 0.028, end FS = 32.8%, n=5), but not for the control group (p = 0.209, end FS = 50.7%, n=5). Conclusion: Changes in permeability after chemotherapy treatment can be detected using a fluorescent indicator dilution method. These changes are consistent with other measures of cardiac function observed in the chemotherapy group. Efforts toward the development of chemotherapy drugs with reduced cardiotoxicity should consider the effect on the endothelial layer, and our method to measure permeability in an isolated heart setup could be useful during testing of new drug alternatives. Keywords— Permeability, chemotherapy, isolated heart, fluorescent indicator dilution, cardiotoxicity.
I. BACKGROUND
Cancer is a major public health problem in the United States, and it is the second leading cause of death after heart disease. In 2009 the American Cancer Society predicted that almost 1.5 million people would be diagnosed with cancer [1]. The 5-year relative survival rate for people diagnosed with * Presenting author.
cancer, based on data collected between 1996 and 2004, is 66% [1]. The fact that current diagnostics and therapeutics allow higher survival rates highlights the importance of developing chemotherapy approaches with reduced side effects. Anthracyclines are common agents used in cancer treatment. This family of drugs is very effective in causing tumor regression, but their long-term use is seriously limited by their systemic toxicity [2,3]. Cardiac toxic effects are particularly worrisome because they can have a very significant long-term impact on functional level and quality of life. Research efforts geared towards anthracycline derivatives or modified formulations with reduced cardiotoxic effects could have a major impact on the prognosis and quality of life of patients diagnosed with cancer. Anthracyclines such as doxorubicin (DOX) damage myocardial tissue through the production of reactive radical species [4,5]. The damaging action results in fibrosis, degeneration of myocardial cells, cardiac dilatation, reduced contractile function and interstitial edema [6,7]. Although most studies of the effect of chemotherapy on the heart focus on cardiac function measures, Wolf et al. have determined that DOX also causes endothelial dysfunction in vitro [8], and that capillary permeability changes could be a contributing factor to the development of myocardial edema and dysfunction after chemotherapy. To our knowledge, there are no in vivo studies of the effect of chemotherapy on cardiac capillary permeability. The objective of this project was to study changes in cardiac capillary permeability in a short-term rat model of cardiotoxicity, along with more traditional measurements of cardiac function that are well established in the literature. This would allow us to determine whether cardiac capillary permeability is a variable of interest for in vivo testing of drug cardiotoxicity, and whether significant changes in permeability can be measured using a fluorescent indicator dilution method developed by our group. One of the classical ways to measure permeability of an organ or tissue is the multiple indicator dilution protocol. It typically uses radioactive tracers, but the fluorescent version is based on the same principles and avoids the use of radioactive materials. Two tracers are simultaneously introduced into the circulation. One of the tracers remains intravascular after injection and the second tracer can cross the capillary wall and will diffuse out of the capillary [9]. The output curves versus
time for the two tracers depend on the flow rate, the output concentration of the two tracers, and the injected dose, as given by equation (1), where h is the fraction of injected indicator leaving the system per unit time:

h(t) = \frac{F \cdot C_{out}(t)}{q_0}  (1)

The researcher can then calculate, among other parameters, the permeability-surface area product of the tissue using the Crone-Renkin equation. This expression relates the maximum extraction of the diffusible tracer (E_max) and the perfusate flow rate (F) to the permeability-surface area product (PSP) as shown in (2):

PSP = -F \cdot \ln(1 - E_{max})  (2)
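A minimal Python sketch of Eqs. (1)-(2), assuming sampled outflow concentrations for the two tracers; taking E_max as the peak extraction on the upslope of the reference curve is one common convention, not necessarily the exact procedure of the authors' Matlab algorithms [10], and the curves below are placeholders:

import numpy as np

def psp_crone_renkin(c_ref, c_diff, q_ref, q_diff, flow):
    # Eq. (1): transport function h(t) for each tracer
    h_ref = flow * np.asarray(c_ref, float) / q_ref      # vascular tracer (TR)
    h_diff = flow * np.asarray(c_diff, float) / q_diff   # diffusible tracer (NaFL)
    with np.errstate(divide='ignore', invalid='ignore'):
        extraction = 1.0 - h_diff / h_ref                # instantaneous extraction E(t)
    e_max = np.nanmax(extraction[: np.argmax(h_ref) + 1])  # peak E on the upslope
    return -flow * np.log(1.0 - e_max)                   # Eq. (2)

# placeholder outflow curves sampled over 45 s (arbitrary units)
t = np.arange(0.0, 45.0, 1.0)
c_ref = np.exp(-((t - 8) / 4) ** 2)           # reference dye washout
c_diff = 0.7 * np.exp(-((t - 9) / 5) ** 2)    # diffusible dye, lower early peak
print(psp_crone_renkin(c_ref, c_diff, q_ref=1.0, q_diff=1.0, flow=0.2))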
II. METHODS
In order to study the cardiotoxic effects of anthracyclines in a rat model, we randomly assigned animals to receive intraperitoneal injections of either saline or doxorubicin solution over a period of 12 days. Each rat received a 3 mg/kg dose of their assigned treatment on days 1, 3, 5, 7, 9, and 11, for a cumulative dose of 18 mg/kg. Cardiotoxicity was studied by (1) serial echocardiography at baseline and on day 11, (2) pressure measurements in a Langendorff isolated perfused heart setup on day 12, and (3) cardiac capillary permeability measurements with a fluorescent indicator dilution method in a perfused heart setup on day 12.
A. Langendorff Isolated Rat Heart Setup
The Langendorff isolated perfused rat heart preparation was used to examine cardiac function and capillary permeability after in vivo treatment (n=10 per group). The project was approved by the FIU IACUC. On day 12 of treatment, each rat was anesthetized with a 50 mg/kg i.p. injection of pentobarbital. The heart was surgically excised, placed in ice-cold Krebs-Henseleit (KH) buffer, and quickly cannulated to a Langendorff setup. The heart was attached to a cannula by the aortic root and perfused retrogradely according to the Langendorff technique, with KH equilibrated with 95% O2 and 5% CO2 at 37°C, and artificially paced at approximately 300 bpm using a Harvard Apparatus Stimulator P. The flow was adjusted to a perfusion pressure between 50 and 75 mmHg, measured with a fluid-filled catheter connected to a Statham strain gauge pressure transducer. A water-filled latex balloon was inserted through the mitral valve into the left ventricle, and connected through a fluid-filled catheter to a Statham strain gauge pressure transducer to monitor left ventricular cardiac function (left ventricular pressure waveforms). All waveforms (aortic pressure and
left ventricular pressure) were recorded by computer. The balloon was inflated to a left ventricular end diastolic pressure of approximately 5 mmHg. The heart was allowed to stabilize for about 30 minutes with the perfusion flow rate adjusted to maintain the perfusion pressure between 50 and 75 mm Hg. This stabilization period ensures the acclimation of the heart to the setup, as well as the clearance of any anesthesia residue from the tissue. After the stabilization period, we started recording pressure waveforms, and the capillary membrane permeability was measured using the fluorescent indicator dilution technique. We performed permeability measurements in three replicates separated by 15 minutes. The vascular tracer, which does not diffuse through the capillary wall, was Texas-Red conjugated dextran (TR, 70,000 MW; 1.56 mg/mL). The diffusible tracer was sodium fluorescein (NaFL, 376 MW; 1.56 μg/mL). For each replicate, we injected 25 μL of an equivolume dye mixture above the aortic cannula (12.5 μL each of TR and NaFL), and collected output samples for 45 seconds. The fluorescent intensities of the samples were measured using a Fluorolog-3 spectrofluorometer (Horiba Jobin Yvon) with characteristic excitation/emission wavelengths of 485/515 nm for NaFL, and 590/630 nm for TR. To analyze permeability and pressure data, measurements were imported into Matlab and processed with custom-made algorithms created by our group [10]. Calculated variables included (1) dP/dt curves, dP/dt max, dP/dt min, heart rate, and left ventricular developed pressure (LVDP) for the pressure data; and (2) h(t) curves, extraction curves, and PSP values for the permeability data.
B. Serial Echocardiography
Serial echocardiography images were obtained on day 1 and day 11 of treatment (n=5 per group) using a Hewlett-Packard Sonos 2500® echocardiography machine equipped with a pediatric 5.5/7.5 MHz transducer selecting the 7.5 MHz mode. Although a higher frequency such as 12 MHz would be desirable to improve axial and lateral resolution given the small size of the rodent heart, the literature [11] and our results show that the 7.5 MHz transducer provides reasonably accurate M-mode images that can be used to monitor changes in heart function over time when the measuring technique is kept consistent. The animal was anesthetized using isoflurane through an inhalation cone with concurrent oxygen flow. Once anesthetized, the rat was placed in the left lateral decubitus position, the chest was shaved and cleaned with isopropyl alcohol, and we applied ultrasound gel for transducer/skin coupling. Long-axis B-mode images were obtained from the left parasternal windows. From these B-mode images, we acquired M-mode images by taking a B-mode image section
right above the papillary muscles. We recorded M-mode measurements of interventricular septum thickness in diastole and systole (IVSd and IVSs), left ventricular diameter in diastole and systole (LVDd and LVDs), and left ventricular posterior wall thickness in diastole and systole (LVPWd and LVPWs). The measurements were recorded at least in triplicate to eliminate the effect of beat-to-beat variability, and each variable was given as the average of individual measurements. Images and reports were printed using the echocardiography machine printer, and also recorded as digital images using a camera. Statistical comparisons between treated and untreated animals were made by Student's t-test.
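As a small worked example of the derived echo variable, fractional shortening follows the standard M-mode definition FS = (LVDd - LVDs)/LVDd x 100; this definition and the diameters and group arrays below are illustrative assumptions, not the study data:

from scipy import stats

def fractional_shortening(lvdd, lvds):
    # standard M-mode definition, in percent
    return 100.0 * (lvdd - lvds) / lvdd

# hypothetical per-animal FS values (%), n=5 per group
fs_control = [48.0, 51.5, 49.2, 52.3, 50.1]
fs_dox = [35.0, 31.2, 33.8, 30.5, 34.1]
print(fractional_shortening(7.9, 3.9))       # single measurement (mm): ~50.6%
print(stats.ttest_ind(fs_control, fs_dox))   # two-sample Student's t-test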
III. RESULTS AND DISCUSSION A. Permeability Measurements Figure 1 shows a characteristic output h(t) curve for a permeability experiment. Note how there is a clear separation between the two peaks within the first ten seconds, indicating extraction of the diffusible dye, followed by some backflow of diffusible dye into the system. This is expected due to a reversal in the radial concentration gradient as the dye bolus starts progressing past a given region. After calculating PSP values from the output curves, we found significant differences in cardiac capillary permeability between control animals and animals treated with an unmodified anthracycline (p=0.029), as shown in figure 2. The average permeability surface area product (PSP) was 0.068 ± 0.025 cm3/s for chemotherapy-treated rats (n=10) and 0.047± 0.009 cm3/s for control rats (n=10). This is consistent with our hypothesis that chemotherapy-induced changes in the endothelial layer of the coronary capillaries would lead to increased cardiac capillary permeability in the animals receiving doxorubicin.
Fig. 1 Output curves, h(t), for TR and NaFL dyes in an isolated rat heart experiment using spline interpolation. We can observe, as expected, that the TR output peak is higher than the peak for NaFL, indicating extraction of the diffusible tracer through the capillary wall
B. Cardiac Function Measurements We also observed changes in cardiac function that were indicative of chemotherapy-induced cardiotoxicity, and consistent with the observed changes in permeability. In measurements obtained with the Langendorff isolated heart setup, both contractility (+dP/dt) and LVDP were significantly reduced (p = 0.030) in the doxorubicin group when compared to controls. This is shown in figures 3 and 4. The average +dP/dt value was 2465±183 mmHg/s for controls vs. 1817±177 mmHg/s for doxorubicin-treated rats. The average LVDP was 92.4 mmHg for controls vs. 78.7 mmHg for doxorubicin-treated rats. There were no significant differences between groups in relaxation rate as measured by -dP/dt. Based on our results, we may conclude that +dP/dt changes could precede -dP/dt changes in a short-term model of cardiotoxicity, i.e., contractility is affected earlier than relaxation. M-mode echocardiographic data also demonstrated functional changes due to chemotherapy treatment. A comparison between groups on day 1 and 11 is shown in figure 5. Fractional shortening (FS) measurements at baseline showed no significant differences between the control and doxorubicin groups (FS control = 46.8%; FS doxorubicin = 47.7%; p=0.824). However, by day 11, the values for the groups were significantly different (FS control = 50.7%, n=5; FS doxorubicin = 32.8%, n=5; p=0.030), as shown in figure 5. There was a significant decrease in FS for the doxorubicin group over the course of treatment (p = 0.028 when comparing baseline value with the value for day 11), but no decrease for the control group (p = 0.209).
Fig. 3 Comparative dP/dt values for control and chemotherapy groups. A star denotes a significant difference in +dP/dt values (p= 0.030)
Fig. 4 Comparative LVDP values for the control and chemotherapy groups. The star denotes a significant difference between groups (p= 0.030)
Fig. 5 Comparative fractional shortening (FS) values for the control and chemotherapy groups. The star denotes a significant difference between groups by day 11 of treatment (p= 0.030). The p-value for day 1 was nonsignificant (p = 0.824), indicating that the FS values for the two groups were comparable at the beginning of the experiment

IV. CONCLUSION
We have detected significant changes in cardiac capillary permeability after chemotherapy treatment in a short-term animal model. These changes are consistent with other measurements of cardiac function. Efforts toward the development of chemotherapy drugs with reduced cardiotoxicity should consider the effect on the endothelial layer, and our method to measure permeability in an isolated heart setup could be useful to investigate comparative cardiotoxicity of different chemotherapy drugs. The fluorescent indicator dilution method also has potential applications in permeability measurements of other organs or tissues, and to detect permeability changes in other disease processes.
ACKNOWLEDGMENT A.F.F. was supported by NIH/NIGMS R25 GM061347.
REFERENCES
1. American Cancer Society (2009) Cancer facts & figures 2009. American Cancer Society, Atlanta
2. Ferrans VJ (1978) Overview of cardiac pathology in relation to anthracycline cardiotoxicity. Cancer Treat Rep 62:955-961
3. Platel D, Bonoron-Adele S, Robert J (2001) Role of daunorubicinol in daunorubicin-induced cardiotoxicity as evaluated with the model of isolated perfused rat heart. Pharmacol Toxicol 88:250-254
4. Chen B et al (2007) Molecular and cellular mechanisms of anthracycline cardiotoxicity. Cardiovasc Toxicol 7:114-121
5. Chen Y et al (2007) Collateral damage in cancer chemotherapy: oxidative stress in nontargeted tissues. Mol Interv 7:147-156
6. Platel D et al (2000) Preclinical evaluation of the cardiotoxicity of taxane-anthracycline combinations using the model of isolated perfused rat heart. Toxicol Appl Pharmacol 163:135-140
7. Minotti G et al. Anthracyclines: molecular advances and pharmacologic developments in antitumor activity and cardiotoxicity. Pharmacol Rev 56:185-229
8. Wolf MB, Baynes JW (2006) The anti-cancer drug, doxorubicin, causes oxidant stress-induced endothelial dysfunction. Biochim Biophys Acta 1760:267-271
9. Bassingthwaighte JB, Sparks HV (1986) Indicator dilution estimation of capillary endothelial transport. Annu Rev Physiol 48:321-334
10. Carvajal D, Fernandez-Fernandez A, McGoron A (2009) Development of Matlab algorithm to process pressure waveforms from isolated perfused heart experiments. IFMBE Proceedings 24:315-317
11. Schwarz ER, Pollick C, Dow J, Patterson M, Birnbaum Y, Kloner RA (1998) A small animal model of non-ischemic cardiomyopathy and its evaluation by transthoracic echocardiography. Cardiovasc Res 39:216-223

Corresponding author: Alicia Fernandez-Fernandez
Institute: Biomedical Engineering Dept., Florida International Univ.
Street: 10555 W Flagler St., EC 2680
City and Country: Miami, FL, USA
Email: [email protected]
Nanoscale “DNA Baskets” for the Delivery of siRNA
A.C. Zirzow1, M. Skoblov2, A. Patanarut3, C. Smith4, A. Fisher4, V. Chandhoke1, and A. Baranova1,2
1 Department of Molecular and Microbiology, College of Science, George Mason University, Fairfax, USA
2 Research Center for Medical Genetics, RAMS, Moskvorechie Str., 1, Moscow, Russian Federation
3 Department of Chemistry and Biochemistry, College of Science, George Mason University, Fairfax, USA
4 U.S. Army Engineer Research and Development Center-Geospatial Research and Engineering Division, Alexandria, USA
Abstract— Silencing of gene expression by small interfering RNA (siRNA) is promising for drug target discovery and as a therapy. However, a major impediment to the practical use of this technology is an inherent instability of siRNA in the bloodstream, partly due to susceptibility to nucleases. To address this restriction, we evaluated a novel DNA/albumin-based siRNA delivery vehicle that forms a “basket” surrounding the siRNA and provides both steric separation of siRNA from nucleases and a local excess of substrate for nuclease action, thus slowing the rate of degradation of the siRNA. We found that variation of the albumin concentration in basket construction can significantly decrease the mean size of the basket. Smaller siRNA-containing DNA baskets may increase cellular uptake. We found that the degradation of siRNA is delayed when siRNA is prepared with this delivery vehicle, implying that DNA baskets are a promising technology for further development as the delivery vehicle for siRNA therapeutics.
Keywords— siRNA, delivery vehicle, DNA baskets, nanoparticles.
I. INTRODUCTION
The silencing of gene expression by small interfering RNA (siRNA) is promising both for drug target discovery and as a therapy [1]. Systemic delivery of siRNA requires transport to the target cells by some carrier [2]. Commonly studied non-viral carriers include liposomes, peptides, polymers, and aptamers; however, known methods of siRNA delivery have notable drawbacks and substantial toxicity [8, 9]. Transfection has been demonstrated with naked siRNA in vitro and via electroporation in vivo [9], but use of this method is limited by the fact that siRNA is unstable and is rapidly eliminated from the bloodstream. Liposomes protect the siRNA and, in some cases, provide sustained or controlled release. A major drawback of this method of delivery is that liposomes containing various siRNA molecules uniformly provoke a strong induction of cytokines including interferon-α, indicating that the immune activation by liposomes containing siRNA duplexes is independent of sequence context [10]. Alternatives to liposomes include cationic polymers and aptamer conjugates. Cationic polymers are commonly used in gene delivery because they can easily complex with the anionic DNA molecules; however, there are
a number of toxicological concerns that have arisen from in vitro and in vivo observations of cationic lipid-mediated cellular toxicity [7]. Aptamers are small single- or double-stranded nucleic acid segments that can directly interact with proteins [11]. Aptamer-assisted interaction can be used to interfere with the molecular functions that participate in transcription or translation; it is not yet clear if aptamer-siRNA conjugates will be efficient as a method of siRNA delivery. This investigation proposes to deliver siRNA in association with nanoparticles comprised of albumin and DNA. DNA is assumed to form weak Van der Waals bonds with siRNA in the presence of a net positive charge that will be provided by the presence of albumin under the appropriate pH [3]. In these nanoparticles, DNA will serve as the biodegradable ‘basket’ that could be custom designed in order to enhance its interaction with the particular siRNA duplex. This DNA basket may also act as a scaffold upon which it may be possible to covalently bind targeting peptides, ensuring addressed delivery of the therapeutic nanoparticle. Inherent properties of DNA make it an ideal building material for self-assembling structures. DNA is a chemically and physically stable polymer with unique affinity for specific DNA and protein molecules. In the case of multi-molecular interaction, a combination of Watson-Crick base pairing and Hoogsteen base pairing may result in the formation of stable structures, including triple-helical ones. Additionally, isolated Hoogsteen base pairs have been reported in some protein/DNA complexes and in RNA [4, 5]. This study investigates the use of DNA as the primary material for creation of the DNA baskets serving as biodegradable delivery vehicles (Figure 1) that provide both the steric separation of siRNA from nucleases and a local excess of substrate for nuclease action. In this study we show that the DNA baskets indeed slow down the rate of degradation of siRNA.
II. MATERIALS AND METHODS Bovine serum albumin (BSA), herring sperm DNA and ssDNA were purchased from Sigma-Aldrich (St. Louis, MO). “Derinat” solution, a highly purified sodium salt of natural desoxyribonucleic acid (DNA) extracted from sturgeon or salmon fish soft roe, was purchased from Technomedservis
(Moscow, Russia). To create DNA baskets, DNA and different concentrations (from 11.1 mg/mL to 0.345 mg/mL) of BSA were concentrated overnight in a SpeedVac and rapidly reconstituted with 1X PBS or 1X Sodium Chloride-Tris-EDTA (STE). Concentrated BSA/DNA complexes were reconstituted overnight in molecular grade water (Invitrogen, Carlsbad, CA) at 4°C. The average size distribution of the baskets at biological temperature (37°C) was determined using photon correlation spectroscopy (N5 Submicron Particle Size Analyzer, Beckman Coulter). The measurement of the particles was performed using an equilibration time of 10 min and 200 s integration times, and the test angle for all light scattering experiments was 90°. Each measurement was performed in triplicate, and the average mean diameter was determined by converting the observed values to particle diameters via the Stokes-Einstein relationship [6].
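The conversion step can be sketched in a few lines of Python; the diffusion coefficient below is a placeholder, and the viscosity used is that of water at 37°C, an assumption for the PBS dispersant:

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_diameter(diff_coeff, temp_k=310.15, viscosity=6.9e-4):
    # hydrodynamic diameter d = kT / (3*pi*eta*D), in meters
    return K_B * temp_k / (3.0 * math.pi * viscosity * diff_coeff)

d = stokes_einstein_diameter(3.5e-11)   # placeholder D in m^2/s
print(d * 1e9, 'nm')                    # ~19 nm for this D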
siRNA degradation was quantified in a series of experiments profiling the degradation kinetics of siRNA protected by the DNA basket versus naked siRNA placed in the same RNase-containing solution, fetal bovine serum (Invitrogen, Carlsbad, CA). Fluorescently labeled siRNA (Figure 2) was custom synthesized by Integrated DNA Technologies, Coralville, Iowa. Each double-stranded molecule of siRNA was labeled with FAM and included a DABCYL quencher, thus assuring that the fluorescence of the solution is proportional to the degradation of the siRNA (see Figure 2 for the siRNA design).
Fig. 2 The structure of model siRNA used in this study

The kinetics of siRNA degradation was profiled by measuring the fluorescence of the solutions containing siRNA protected by the DNA basket and naked siRNA every 30 seconds over 2 hours after adding the siRNA to fetal bovine serum. A C100 series Thermal Cycler (BioRad) was used to keep samples at 37°C while measuring. Spectrofluorometric measurements were made with the Fluoromax-3 and Peltier adapter. The excitation illumination source is a xenon arc lamp, and stepper monochromators control both the excitation and emission wavelengths. The spectrofluorometer was computer-controlled and was configured to excite at 290 nm and detect emission from 510 to 530 nm for emission scans. The temperature was held at 37°C. The spectral fluorescence absorption and emission data were collected at 6 nm slits. In addition, the lamp flux measures 7 mW/m2/nm from 250 to 700 nm, and the emission detector is a Hamamatsu H series photomultiplier tube (PMT) that approaches 25% quantum efficiency from 250 to 700 nm (JY-Horiba, USA). All emission scans and EEM data were analyzed using the FluorEssence and Sigma Plot 9.0 software.
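To extract comparable rate constants from two such fluorescence traces, one option is to fit a first-order unquenching model, F(t) = Fmax(1 - exp(-kt)); this model and the synthetic data in the Python sketch below are assumptions for illustration, not the authors' analysis:

import numpy as np
from scipy.optimize import curve_fit

def rise(t, f_max, k):
    # fluorescence grows as the FAM label separates from the DABCYL quencher
    return f_max * (1.0 - np.exp(-k * t))

t = np.arange(0.0, 7200.0, 30.0)                                      # 2 h, every 30 s
rfu_naked = rise(t, 1.0, 7e-3) + np.random.normal(0, 0.01, t.size)    # synthetic data
rfu_basket = rise(t, 1.0, 1e-3) + np.random.normal(0, 0.01, t.size)   # synthetic data

popt_naked, _ = curve_fit(rise, t, rfu_naked, p0=(1.0, 1e-3))
popt_basket, _ = curve_fit(rise, t, rfu_basket, p0=(1.0, 1e-3))
print(popt_naked[1] / popt_basket[1])   # rate ratio; ~7 would match the paper's finding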
Fig. 1 ‘DNA baskets’ for siRNA delivery. (A) Implementation of the biodegradable DNA baskets slowing down the kinetics of the degradation of siRNA (DNA is blue; BSA is tan; siRNA is red). (B) Targeting of the DNA basket to the tissue of interest may be achieved by cross-linking of DNA with various proteins and protein fragments affine to the surface receptors of the target cells. (C) Branching DNA structures stably locked at physiological temperatures are sterically capable of encapsulating siRNA molecules. (D) Parts of the DNA molecules within DNA-siRNA nanoparticles could be engineered to form dsRNA-DNA triplexes
III. RESULTS
A. Size Distribution of DNA Baskets
Analysis of the size distribution of concentrated DNA in a solution of bovine serum albumin (BSA), as measured by the dynamic light scattering instrument, displays a significantly reduced mean diameter. Moreover, the size distribution of the protein/DNA baskets in solution was dependent on the concentration of BSA (Fig. 3). When herring sperm DNA or Derinat were reconstituted in the presence of BSA, two different size distributions were observed, an indication that the BSA is not tightly associated with the DNA. However, when DNA was concentrated with BSA in the SpeedVac, a bell-shaped distribution of particle sizes was observed (Table 1). The size distribution of DNA baskets in solution was not
significantly altered when the temperature of basket reconstitution was raised from 25°C to 37°C. DNA baskets reconstituted in the absence of BSA normally have a mean size of 762.4 ± 7.26 nm. When this DNA is vacuum concentrated in the presence of BSA, the mean size of the basket can be reduced by up to 98%. A stable mean diameter of 12.5 ± 0.63 nm for BSA/DNA baskets was achieved using 25 μL of DNA (5 mg/mL) and 500 μL BSA (11.1 mg/mL). The large standard deviation associated with the size of the DNA baskets formulated with a smaller concentration of BSA is due to a bimodal distribution of sizes in solution. With a smaller concentration of BSA, there appear to be two different-sized particles: one with a 5 nm radius and one with a larger, concentration-dependent radius (figure not shown). The smaller particle with a radius of 5 nm most likely represents unbound BSA that is present in solution in excess.

Table 1 Mean diameter (nm) of 25 μL Derinat DNA baskets concentrated overnight with 500 μL of various concentrations of BSA, reconstituted with 1X PBS (37°C)

  BSA concentration (mg/mL)   Mean diameter (nm)   Molecular weight of the particle (kDa)
  11.100                      12.5                 8.49 x 10^2
  5.560                       28.7                 1.03 x 10^4
  1.390                       154.0                1.59 x 10^6
  0.000                       762.4                1.93 x 10^8
B. Kinetics of siRNA Degradation
The kinetic studies of siRNA degradation showed that the degradation of siRNA within the DNA baskets placed in biologically active media (FBS) was substantially delayed as compared to the non-protected siRNA. In fetal bovine serum the naked siRNA degraded approximately 7 times faster than the siRNA prepared with the DNA/BSA basket (Figure 4).
IV. DISCUSSION
Control of gene expression in a sequence-specific manner by silencing with small interfering RNA (siRNA) in living cells holds great promise both as a novel therapeutic approach and as a new instrument for drug target discovery. A major limitation of siRNA-based therapies is that siRNA in the bloodstream undergoes rapid inactivation [12]. Stabilization of the siRNA by containment within a DNA
basket may open an entirely novel avenue in the field of the delivery of gene therapeutic siRNAs. In this study we demonstrated that siRNA protected by BSA/DNA degrades significantly more slowly than naked siRNA when placed in a biologically active solution (Fig. 4). In our model experiment we used bovine serum albumin (BSA) as its characteristics are similar to those of human serum albumin (HSA). As both HSA and DNA are normal components of human blood proven safe in many biological applications, we anticipate relatively high safety of the BSA/DNA packing approach. Analysis of the size distribution of DNA baskets in solution showed that the mean diameter of the reconstituted DNA/siRNA particle can be controlled by the addition or elimination of bovine serum albumin. In the presence of BSA at 11.1 mg/mL the mean diameter of DNA baskets was reduced by more than 90% as compared to the mean diameter of the DNA polymers not associated with BSA. It was demonstrated previously that the size of the basket may be crucial for its uptake into cells [13]. According to Chithrani and colleagues (2006), both the size and shape of nanoparticles influence the efficiency of the uptake. Optimal cellular uptake occurs for nanoparticles (colloidal Au) that are 50 nm in diameter and spherical, rather than rod shaped [13]. This diameter may be achieved if 25 μL of 5 mg/mL Derinat is SpeedVac concentrated with 3.5 mg/mL BSA (Figure 3). These particles are most likely spherical; however, further analysis of the DNA baskets should be conducted to determine their shape. One potential limitation of this study is that the submicron particle analyzer is highly sensitive to any impurities such as dust. Additionally, the diameter of the particles in solution is calculated according to Brownian motion; therefore, settling or aggregation may cause the diameter values to be larger than they really are. In the future, other techniques such as Small Angle X-ray Scattering and cryo-Transmission Electron Microscopy should be employed [14]. This study provides a proof of the principle that DNA baskets may stabilize the siRNA in serum and prevent its degradation en route to the target cell. DNA/BSA baskets have potential as a delivery means for siRNA-based therapies.

Fig. 3 The mean diameter of Derinat DNA baskets decreases with an increasing concentration of bovine serum albumin (BSA). Mean diameter (nm) of 25 μL Derinat DNA baskets concentrated overnight with 500 μL of BSA (11.1 mg/mL, 5.56 mg/mL, 2.78 mg/mL, and 1.39 mg/mL), and reconstituted with 1X PBS. Size distributions were determined by the submicron particle analyzer at 37°C

Fig. 4 Kinetics of siRNA degradation with and without the DNA basket in fetal bovine serum (FBS) at 37°C. The relative fluorescence intensity (RFU) of each sample was measured every 15 seconds (with a 15 second read time)
V. CONCLUSIONS
This investigation demonstrates that (1) the DNA basket can delay the degradation of siRNA in fetal bovine serum at 37°C, and (2) the size of the DNA baskets can be significantly reduced by using BSA as a co-packing material. This size reduction may help enhance cellular uptake.
ACKNOWLEDGMENTS
The authors are grateful to Dr. Tariq Alsheddi, Dr. Alexandre Vetcher, Dr. Robin Couch and Dr. Michael Estep for initial discussion of this siRNA delivery project, and to Dr. Daniel Cox and Dr. Alan Christensen, Graduate Committee members of A.Z. This research was partially covered by Russian Ministry of Science grant 02.512.12.2060 “Development of efficient approaches for in vivo delivery of the genetic information into target cells for the therapy of socially important diseases.”

REFERENCES
1. Behlke MA (2006) Progress towards in vivo use of siRNAs. Mol Ther 13:644-670
2. Leng Q, Woodle MC, Lu PY, Mixson AJ (2009) Advances in systemic siRNA delivery. Drugs Future 34:721
3. Fogh-Andersen N, Bjerrum P, Siggaard-Andersen O (1993) Ionic binding, net charge, and Donnan effect of human serum albumin as a function of pH. Clin Chem 39:48-52
4. Leontis NB, Westhof E (1998) A common motif organizes the structure of multi-helix loops in 16 S and 23 S ribosomal RNAs. Journal of Molecular Biology 283:571-583
5. Patikoglou GA, Kim JL, Sun L, Yang S, Kodadek T, Burley SK. TATA element recognition by the TATA box-binding protein has been conserved throughout evolution. Genes & Development 13:3217-3230
6. Pecora R (1985) Dynamic light scattering: applications of photon correlation spectroscopy. Plenum, New York
7. Akhtar S, Benter I (2007) Toxicogenomics of non-viral drug delivery systems for RNAi: potential impact on siRNA-mediated gene silencing activity and specificity. Advanced Drug Delivery Reviews 59:164-182
8. Akhtar S, Benter IF (2007) Nonviral delivery of synthetic siRNAs in vivo. J Clin Invest 117:3623-3632
9. Anwer K (2008) Formulations for DNA delivery via electroporation in vivo. In: Electroporation Protocols, pp 77-89. Available from: http://dx.doi.org/10.1007/978-159745-194-9_5
10. Sakurai H, Kawabata K, Sakurai F, Nakagawa S, Mizuguchi H (2008) Innate immune response induced by gene delivery vectors. International Journal of Pharmaceutics 354:9-15
11. McNamara JO, Andrechek ER, Wang Y, Viles KD, Rempel RE, Gilboa E, Sullenger BA, Giangrande PH (2006) Cell type-specific delivery of siRNAs with aptamer-siRNA chimeras. Nat Biotech 24:1005-1015
12. Hickerson RP, Vlassov AV, Wang Q, Leake D, Ilves H, Gonzalez-Gonzalez E, Contag CH, Johnston BH, Kaspar RL (2008) Stability study of unmodified siRNA and relevance to clinical use. Oligonucleotides 18:345-354
13. Chithrani BD, Ghazani AA, Chan WCW (2006) Determining the size and shape dependence of gold nanoparticle uptake into mammalian cells. Nano Letters 6:662-668
14. Andersen FF, Knudsen B, Oliveira CLP, Frohlich RF, Kruger D, Bungert J, Agbandje-McKenna M, McKenna R, Juul S, Veigaard C, Koch J, Rubinstein JL, Guldbrandtsen B, Hede MS, Karlsson G, Andersen AH, Pedersen JS, Knudsen BR (2008) Assembly and structural analysis of a covalently closed nano-scale DNA cage. Nucl Acids Res 36:1113-1119
Nanoscale Glutathione Patches Improve Organ Function
Homer Nazeran1 and Sherry Blake-Greenberg2
1 BioMedEng Consulting, El Paso, Texas, USA
2 Health Integration Therapy, Palos Verdes, California, USA
Abstract–– Glutathione, termed the “ultimate” or “master” antioxidant, is a vital intracellular tripeptide molecule and plays a central role in cellular physiologic functions. Currently the undeniable connection between glutathione and good health is very well established. Bioelectrical impedance data indicative of cellular physiologic organ function (status), using an Electro Interstitial Scanning (EIS) system, were acquired from two cohorts of volunteers. Cohort 1 comprised 10 subjects, 1 male and 9 females, 18-86 (mean 58) years of age, while Cohort 2 comprised 20 subjects, 4 males and 16 females, 19-80 (mean 54) years of age. Cellular physiologic function was evaluated in 8 organs (pancreas, liver, gall bladder, intestines, left and right adrenal glands, hypothalamus and pituitary gland) while subjects wore the glutathione patch for a period of 4 weeks. Physiologic function testing was repeated each week. Cohort 1 wore the glutathione patch for 12 hours/day daily, while Cohort 2 wore the glutathione patch for 12 hours/day on weekdays. Cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period before the glutathione patch was worn. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The hypothesis to be tested was: the glutathione patch worn 12 hours daily for 4 weeks significantly improves cellular physiologic functional status in different organs. The overall data for Cohort 1 in this study demonstrated that glutathione patches worn 12 hours daily over a period of 4 weeks produced a highly significant improvement in the physiologic functional status of the liver, gall bladder, intestines, left and right adrenals, hypothalamus and pituitary gland, and a very significant improvement in the pancreas, with a statistical power of at least 72%. Stated differently, all organs achieved significant cellular physiologic functional status improvement compared to baseline with a statistical power of at least 91%. Keywords— Nanotechnology, Glutathione patch, Cellular physiologic function measurements, Electro interstitial scan (EIS) system, LifeWave.
I. INTRODUCTION

Glutathione is a vital intracellular tripeptide molecule comprised of 3 nonessential amino acids: cysteine, glutamic acid and glycine (γ-glutamyl-cysteinyl-glycine, abbreviated GSH). These 3 building blocks are in turn made from different combinations of essential amino acids. The -SH suffix in GSH (the reduced form of glutathione) indicates that it contains a
sulfhydryl group. This group comes from the sulfur-containing amino acids cysteine and methionine. Glutathione is produced naturally in abundance in the body and circulates constantly in the bloodstream, neutralizing free radicals (dangerous by-products of the normal metabolic processes that convert food to energy) and removing environmental poisons such as heavy metals, harmful waste products and toxins to protect cells against oxidative stress. Free radicals are unstable oxygen-containing molecules that avidly strip electrons from other molecules, damaging cells in the process. Glutathione is a powerful antioxidant (created by the same energy-producing processes that create free radicals) which serves as a built-in defense against the harmful effects of free radicals by rapidly quenching the destructive free electrons in these molecules. The balance between free radicals and antioxidants can easily be disrupted, such as when the body is under stress, fighting an infection or inflammation, or healing from an injury, in which case more free radicals are generated. Free radicals are also created when the body is exposed to cigarette smoke, alcohol, ultraviolet light, heavy metals, air pollution, pesticides, food additives, and other environmental toxins. Free radicals are the underlying cause of a variety of illnesses in the body [1]. Lyons et al. described glutathione as serving diverse physiologic functions, such as detoxification of xenobiotics and protection of cells from oxidative stress, and as acting as a storage and transport form of cysteine. They explained that reduced tissue levels of GSH are thought to compromise cell function, promote tissue damage, and increase morbidity under various disease conditions [2]. Wu et al. studied glutathione metabolism and its implications for health. They described glutathione as playing important roles in antioxidant defense, nutrient metabolism, and the regulation of a variety of cellular events. They also explained that glutathione deficiency contributes to oxidative stress, which plays a key role in aging and in the pathogenesis of many diseases, including seizure, Alzheimer's disease, Parkinson's disease, liver disease, cystic fibrosis, sickle cell anemia, human immunodeficiency virus (HIV) infection, acquired immunodeficiency syndrome (AIDS), cancer, heart attack, stroke, and diabetes. They emphasized the need for a new understanding of the nutritional regulation of GSH metabolism as a critical step in the development of effective health improvement and disease treatment strategies [3].
Townsend et al. provided an overview of the biological importance of GSH at the cellular and organism levels and showed cause-and-effect relationships between GSH metabolism and diseases such as cancer, neurodegenerative diseases, cystic fibrosis (CF), HIV, and aging. They also showed how the enzymes involved in GSH regulation and control influence susceptibility to, and progression of, these conditions. They concluded that there seemed to be no harm in supplementing a diet with GSH, as “perhaps the product will provide a supply of the constituent amino acids, where, in particular, cysteine may be useful in stimulating gastrointestinal synthesis of GSH” [4]. Current methods of oral supplementation with glutathione or its amino acid precursors have not been effective in significantly elevating blood levels of this antioxidant, due to destruction of L-glutathione by stomach acid and the unpredictability of results with precursor amino acids. Direct daily injection of glutathione has been more effective in producing short-term elevation of glutathione; however, this approach is limited by expense and inconvenience. Preliminary clinical data from blood and urine samples collected every 24 hours over a period of 5 days from 15 volunteers wearing the glutathione patch have shown a 3- to 4-fold increase in blood levels of glutathione compared to baseline levels [5]. This is the first study of its kind to investigate the effect of the glutathione patch on organ physiologic function. Bioelectrical impedance data indicative of cellular physiologic function were acquired using an EIS system. The overall data in this study demonstrated that glutathione patches worn 12 hours daily over a period of 4 weeks caused a highly significant improvement in the physiologic functional status of the liver, gall bladder, intestines, left and right adrenals, hypothalamus and pituitary gland, and a very significant improvement in the pancreas, with a statistical power of at least 72%. Stated differently, all organs achieved significant cellular physiologic functional improvement with a statistical power of at least 91%.
II. MATERIAL AND METHODS

Subjects: Two cohorts of volunteer subjects participated in this study. Cohort 1 comprised 10 subjects (1 male and 9 females, 18-86 (mean 58) years old), while Cohort 2 comprised 20 subjects (4 males and 16 females, 19-80 (mean 54) years old). Cohort 1 wore the glutathione patch for 12 hours daily, while Cohort 2 wore the glutathione patch for 12 hours/day on weekdays only. After giving informed consent, cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period, before the glutathione patch was worn, and weekly afterwards for 4
weeks. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The subjects were instructed to place the glutathione patch 2 inches inferior to the navel (below the belly button), on the CV6 acupuncture point, according to the manufacturer's instructions. Figure 1 shows the glutathione patch and the anatomical position for wearing it.
Fig. 1 The LifeWave glutathione patch and the anatomical position for wearing it (CV6)
Glutathione Patch: For this research, the nanoscale glutathione patch (LifeWave, La Jolla, California, USA) was used. The glutathione patch is described as “a new method for increasing glutathione levels by stimulating acupuncture points on the body with a combination of pressure and infrared energy. The LifeWave glutathione patch is a non-transdermal patch that does not put any chemicals or drugs into the body. The LifeWave glutathione patch contains natural nontoxic crystals that absorb body heat to generate infrared signals that cause the body to produce more endogenous glutathione. These crystals are active for 12 hours. Clinical studies utilizing blood analyses indicate an average rise of more than triple the blood glutathione over a period of 24 hours” [5]. For a comprehensive discussion of the LifeWave glutathione patch, see reference [6].

Electro Interstitial Scan (EIS) System and Measurements: An EIS system (LD Technology, Coral Gables, Florida, USA), a programmable electro-medical device, was deployed to acquire bioelectrical impedance measurements indicative of cellular physiologic functional status in 10 organs. The EIS system is a French device, classified as a Biofeedback Class 2 device in the United States (FDA product code: HCC). Recently the FDA has approved a number of alternating current (AC) bioelectric impedance (BIM) devices for use in cardiology and oncology [10-15]. Before EIS measurements were made on subjects, four operational tests were carried out automatically by the device: a power supply test; a channel test; volume, conductivity measurement and correspondence tests; and cable and precision control tests. Electrodes and electrode application sites were prepared following the manufacturer's instructions.
Under software control the hardware delivers three trains of 1.28 V pulses to 6 electrodes: 22 AC pulses, 1 second each, at 50 kHz (at 0.6 mA, energy/pulse = 0.77 mJ); 22 DC pulses, 1 second each (at 0.6 mA, energy/pulse = 0.77 mJ); and another set of 22 DC pulses, 3 seconds in duration each (at 0.6 mA, energy/pulse = 0.77 mJ). These electrodes (2 disposable Ag/AgCl electrodes applied to the forehead, 2 reusable polished stainless steel hand electrodes, and 2 reusable polished stainless steel foot electrodes) form 22 different electrode-pair (sensing) configurations and measure the intensity of interstitial fluid conductivity (by applying Maxwell's equations), from which on-screen 3-D models of the human body organs are generated. The measurements are scaled from -100 to +100. As DC current passes only through the interstitial fluid (16% of the body's total water), the device can measure the composition of the interstitial fluid as well as other biochemical parameters and detect ionic abnormalities.

Inclusion Criteria: Participants were healthy, functional individuals willing to wear the glutathione patch and participate in the study for a period of four weeks. Participants also agreed not to start any other new therapy or method of healing and/or make any major changes in their daily life that could affect the outcome of the study. Subjects must not have worn the glutathione patch prior to the study. Subjects were recruited from the local area of Palos Verdes and may or may not have been previous patients of Health Integration Therapy.
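As a quick arithmetic check of the pulse parameters given earlier in this section (1.28 V, 0.6 mA, 1 s), the energy of one rectangular pulse follows from E = V·I·t, assuming the stated voltage and current are constant over the pulse; a minimal sketch:

```python
# Energy per EIS stimulation pulse, assuming the quoted voltage and current
# are constant over each rectangular pulse: E = V * I * t.
V = 1.28       # pulse amplitude, volts
I = 0.6e-3     # pulse current, amperes

print(f"1-s pulse: {V * I * 1.0 * 1e3:.2f} mJ")   # ~0.77 mJ, matching the text
# If V and I were held constant for the full 3-s DC pulses, the same relation
# would give three times this value (~2.3 mJ) rather than the quoted 0.77 mJ.
print(f"3-s pulse: {V * I * 3.0 * 1e3:.2f} mJ")
```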
Statistical Analysis: The cellular physiologic effects in different organs after 4 weeks of wearing the glutathione patch were compared to baseline data (acquired before wearing the patch) using the paired t-test. A p value < 0.05 was accepted as statistically significant. Sample size (n), level of significance (α), effect size (the difference between the mean EIS reading after wearing the patch and the baseline mean value) and statistical power were related by the following formula:

Φ[Zα + |μ − μ0|·√n / σ] = Statistical Power   (1)

where Zα is the Z score corresponding to the area under the normal distribution curve at the desired level of significance, |μ − μ0| is the effect size, σ is the standard deviation and n is the sample size.

III. RESULTS

Table 1 shows typical EIS System readings (cellular physiologic functional status) for a female subject, while Table 2 shows typical EIS System readings for a male subject. Functional status changes from week to week are designated Δ1, Δ2, Δ3 and Δ4 for the 4-week period, showing the cellular physiologic changes in the organs. Δavg is the average of the weekly changes over the 4-week period, and Δtotal represents the average total physiologic change after 4 weeks with respect to the baseline readings. Table 3 shows the overall mean values and standard deviations for baseline and total change in physiologic function for each of the organs in Cohort 1 (n = 10).

Table 1 Typical Electro Interstitial Scan data for a female subject in Cohort 1

Table 2 Typical Electro Interstitial Scan data for a male subject in Cohort 1

Table 3 Summary of mean and standard deviation values for EIS System readings in 8 organs in Cohort 1, n = 10
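Equation (1) above can be evaluated directly with the standard normal CDF. A minimal sketch; the effect size, standard deviation and sample size below are hypothetical, not values from the study:

```python
# Statistical power per Eq. (1): Power = Phi(Z_alpha + |mu - mu0| * sqrt(n) / sigma),
# with Phi the standard normal CDF and Z_alpha the critical value at level alpha.
from scipy.stats import norm

def statistical_power(effect_size: float, sigma: float, n: int,
                      alpha: float = 0.05) -> float:
    z_alpha = norm.ppf(alpha)    # e.g. -1.645 for a one-sided alpha of 0.05
    return norm.cdf(z_alpha + abs(effect_size) * n**0.5 / sigma)

# Hypothetical numbers: mean EIS improvement 15 units, SD 20, n = 10 subjects.
print(f"power = {statistical_power(15.0, 20.0, 10):.2f}")
```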
IV. CONCLUSIONS

Statistical analyses were carried out in both cohorts, comparing the cumulative averages of the net changes in physiologic functional status of each organ at the end of the study period with the corresponding baseline data. The results in Cohort 1 showed a highly significant (p < 0.001) improvement in physiologic functional status of all organs tested except the pancreas, which showed a very significant improvement (p < 0.01). The average statistical power, considering the effect size (% improvement in physiologic function), sample number, and level of significance, was at least 72% in Cohort 1. The results in Cohort 2 showed a significant (p < 0.05) improvement in physiologic functional status of four organs (the left and right adrenal glands, hypothalamus and pituitary gland). The average statistical power, considering the effect size, sample number, and level of significance, was at least 76% in these tests. No significant improvement in cellular physiologic status was observed in the pancreas, liver, gall bladder or intestines in Cohort 2. This could be attributed to a placebo effect, or to the fact that, by not wearing the glutathione patch for 2 days each week (about 30% less exposure to glutathione), the subjects in Cohort 2 did not receive adequate glutathione-stimulated detoxification in all organs over the study period. More detailed statistical analyses of the EIS data enabled us to make the following observations:

1. In Cohort 1 (n = 10), the average statistical power was more than 72% for all organs showing a highly significant (p < 0.001) improvement in cellular physiologic function. The average statistical power excluding the pituitary gland was more than 82%; excluding both the pituitary gland and the intestines, it was more than 90%.
2. In Cohort 1 (n = 10), the average statistical power was more than 84% for all organs showing a very significant (p < 0.01) improvement in cellular physiologic function. The average statistical power excluding the pituitary gland was more than 91%; excluding both the pituitary gland and the intestines, it was more than 97%.
3. In Cohort 1 (n = 10), the average statistical power was more than 91% for all organs showing a significant (p < 0.05) improvement in cellular physiologic function. The average statistical power excluding the pituitary gland was more than 96%; excluding both the pituitary gland and the intestines, it was more than 99%.
4. In Cohort 2 (n = 20), four organs (the left and right adrenal glands, hypothalamus and pituitary gland) showed a significant (p < 0.05) improvement in cellular physiologic function.
In summary, the overall data in Cohort 1 demonstrated that the glutathione patch worn 12 hours daily over a period of 4 weeks produced a highly significant improvement in the physiologic functional status of the liver, gall bladder, intestines, adrenals, hypothalamus and pituitary gland, and a very significant improvement in the pancreas, with a statistical power of at least 72%. Stated differently, it could be concluded that the glutathione patch caused a significant improvement in the cellular physiologic functional status of the pancreas, liver, gall bladder, intestines, adrenals, hypothalamus and pituitary gland with a statistical power > 91%. Therefore, the hypothesis that the glutathione patch worn 12 hours daily for 4 weeks significantly improves cellular physiologic functional status in different organs was accepted as true.
REFERENCES

1. Pressman AH (1997) Glutathione: The Ultimate Antioxidant. St. Martin's Press, New York, NY
2. Lyons J, Rauh-Pfeiffer A, Yu AY, Lu XM et al (2000) Blood glutathione synthesis rates in healthy adults receiving a sulfur amino acid-free diet. Proc Natl Acad Sci USA 97(10):5071-5076
3. Wu G, Fang Y, Yang S et al (2004) Glutathione metabolism and its implications for health. J Nutr 134:489-492
4. Townsend DM, Tew KD, Tapiero H (2003) The importance of glutathione in human disease. Biomed Pharmacother 57:145-155
5. Haltiwanger S (2009) A new way to increase glutathione levels in the body. Hippocrates Magazine 28(1):48-49
6. Haltiwanger S (2009) LifeWave Skin Care Patch Instructions. http://www.lifewave.com/pdf/Papers/SciencePaper004GlutathioneSkinPatch.pdf
7. Electro Interstitial Scan (EIS) System (2009) http://www.ldtechnologies.com
8. Bard AJ, Faulkner LR (2001) Electrochemical Methods: Fundamentals and Applications, 2nd edn. Wiley, New York
9. We are listening to the body signals! (2009) http://www.ldteck.com
10. Van De Water JM, Miller TW, Vogel RL et al (2003) Impedance cardiography: the next vital sign technology? Chest 123:2028-2033
11. Critchley LAH (1998) Impedance cardiography: the impact of new technology. Anaesthesia 53:677-684
12. Cotter G, Schachner A, Sasson L et al (2006) Impedance cardiography revisited. Physiol Meas 27:817-827
13. http://www.fda.gov/cdrh/pdf/p970033.html
14. Fricke H, Morse S (1926) The electric capacity of tumors of the breast. J Cancer Res 16:310-376
15. Morimoto T, Kinouchi Y, Iritani T, Kimura S et al (1990) Measurement of the electrical bio-impedance of breast tumors. Eur Surg Res 22:86-92
The address of the corresponding author:

Author: Homer Nazeran, PhD, CPEng (Biomed.)
Institute: Department of Electrical and Computer Engineering
Street: 500 West University Ave, University of Texas at El Paso
City: El Paso, Texas 79968
Country: United States of America
Email: [email protected]
Nanoscale Carnosine Patches Improve Organ Function

Homer Nazeran1 and Sherry Blake-Greenberg2

1 BioMedEng Consulting, El Paso, Texas, USA
2 Health Integration Therapy, Palos Verdes, California, USA
Abstract— Carnosine (β-alanyl-L-histidine) is a naturally occurring dipeptide present in brain, cardiac muscle, stomach, kidney, olfactory bulbs and, in large quantities, in skeletal muscle. As free-radical-induced damage to cells is an important factor in aging and senile diseases, carnosine has the potential to prevent and treat diseases such as atherosclerosis, diabetes, Alzheimer's disease and senile cataract. Recent clinical research shows that carnosine has the ability to rejuvenate senescent cells and delay eyesight impairment and cataract, which are manifestations of the aging process. These results provide valuable data in favor of considering carnosine a natural anti-aging substance. Bioelectrical impedance data indicative of cellular physiologic organ function (status) were acquired, using an Electro Interstitial Scanning (EIS) system, from twenty volunteers: 7 males and 13 females, 19-83 (mean 43) years of age, 118-185 (mean 150) lb in weight, and 5'-6' (mean 5'5") in height. Cellular physiologic function was evaluated in 10 organs (pancreas, liver, left/right kidneys, intestines, left/right adrenal glands, hypothalamus, pituitary and thyroid glands) while subjects wore a nanoscale carnosine patch for 2 weeks. EIS testing was repeated each week. Cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period, before application of the nanoscale carnosine patch. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The hypothesis to be tested was that the carnosine patch worn 12 hours/day on alternate days for two weeks significantly improves cellular physiologic functional status in different organs. Statistical analyses revealed that the carnosine patch worn 12 hours daily on alternate days (Tuesdays, Thursdays, and Saturdays) over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left/right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power > 95%.

Keywords— Nanotechnology, Carnosine patch, Aging, Cellular physiologic function measurements, Electro interstitial scan (EIS) system, LifeWave.
I. INTRODUCTION

Carnosine, termed an “amazing anti-aging nutrient”, is a dipeptide molecule comprised of 2 amino acids: beta-alanine and L-histidine. It was first isolated from meat extracts by the Russian scholars Gulewitsch and Amiradzibi in 1900 [1]. It
is a naturally occurring (endogenously synthesized) molecule present in brain, cardiac muscle, stomach, kidney, olfactory bulbs and, in large quantities, in skeletal muscle [2]. Many studies of the biological and biochemical effects of carnosine have suggested that it possesses antioxidant and free-radical scavenging properties [3]. Free radicals are dangerous by-products of the normal metabolic processes that convert food to energy. They are unstable oxygen-containing molecules that avidly strip electrons from other molecules, damaging cells in the process. Carnosine, like its “dancing partner” glutathione, is an antioxidant that serves as an endogenous defense against the harmful effects of free radicals by quenching the destructive free electrons in these molecules. The balance between free radicals and antioxidants can easily be disrupted, such as when the body is under stress, fighting an infection or inflammation, or healing from an injury, in which case more free radicals are generated. Free radicals are also created when the body is exposed to cigarette smoke, alcohol, ultraviolet light, heavy metals, air pollution, pesticides, food additives, and other environmental toxins. Free radicals are the underlying cause of a variety of illnesses in the body [4]. They are also one of the most important possible causes of aging and senile diseases [5]. The literature shows that the emergence and development of aging are closely related to free-radical-induced damage to cells. Free radical damage leads to instability and malfunctioning of cells, which consequently cause senile diseases such as atherosclerosis, diabetes, Alzheimer's disease, and senile cataract. Research on the biological and biochemical effects of antioxidants and free-radical scavenging molecules such as glutathione and carnosine has shown that these compounds can protect cells from the harmful effects of free radicals, could therefore exert a normalizing function on cell metabolism, and could thus serve as endogenous anti-aging compounds. Extensive preliminary research by Russian scholars has shown that carnosine has a variety of beneficial effects, including an increase in muscle strength and endurance, protection against radiation damage, enhancement of immunity and reduction of inflammation, protection against the formation of ulcers and their treatment, treatment of burns, promotion of wound healing after surgery, and improvement of appearance.
In a review, Quinn et al. [6] suggest that carnosine and its related dipeptides could be considered the water-soluble counterparts to lipid-soluble antioxidants such as vitamin E, serving to protect cells from oxidative damage. They refer to numerous studies that have demonstrated strong and specific antioxidant properties of these compounds at both the tissue and organelle levels. They describe carnosine and its related dipeptides as playing a number of roles, such as acting as neurotransmitters, modulating enzymic activities and chelating heavy metals. They also describe these compounds as having antihypertensive, immunomodulating, wound-healing and antineoplastic effects. Hipkiss et al. [7] present evidence to suggest that carnosine, in addition to its antioxidant and oxygen free-radical scavenging activities, also reacts with deleterious aldehydes to protect susceptible macromolecules. They propose that the role of carnosine and its related dipeptides should be explored in pathologies that involve deleterious aldehydes, for example, secondary diabetic complications, inflammatory phenomena, alcoholic liver disease, and possibly Alzheimer's disease. For a more detailed discussion of the beneficial effects of carnosine, refer to the references listed in [8]. Current methods of oral supplementation with carnosine take 1-4 months to show any significant effects. Marios Kyriazis, MD, performed a preliminary experiment using L-carnosine supplements (50 mg daily) on 20 healthy human volunteers, aged 40-75 years, for a period of 1-4 months. He reports: “No side effects were reported. Five users noticed significant improvements in their facial appearance (firmer facial muscles), muscular stamina and general well-being. Five others reported possible benefits, for example better sleep patterns, improved clarity of thought and increased libido. The rest did not report any noticeable effects. This is not surprising because supplementation with carnosine is not expected to show any significant noticeable benefits in a short time, but it should be used as an insurance against deleterious effects of the aging process. If any benefits are noted, these should be considered as an added extra bonus. It is worthwhile persevering with the supplementation long term, even if you do not experience any obvious benefits, as you will still be well protected against aging. Carnosine can be used together with vitamin E and/or Co-enzyme Q10 for full antioxidant protection, but even if it is used on its own it should still confer significant protection both against free radicals and against glycosylation.” [9] Our study is the first pilot investigation of its kind to explore the effect of the carnosine patch on organ physiologic function. Bioelectrical impedance data indicative of cellular physiologic function were acquired using an EIS system.
Cellular physiologic function was evaluated in 10 organs (pancreas, liver, left and right kidneys, intestines, left and right adrenal glands, hypothalamus, pituitary and thyroid glands) while subjects wore the carnosine patch for a period of 2 weeks, 12 hours/day on alternate days of the week (Tuesdays, Thursdays and Saturdays). Physiologic function testing was repeated each week. Each visit was approximately 1 hour in duration. Physiologic function baseline data were acquired from all subjects at the beginning of the study period, before the carnosine patch was worn. Subjects were instructed to keep well hydrated (as 16% of body fluids is extracellular fluid) during the study period. All subjects served as their own control. The overall data in this study demonstrated that the carnosine patch worn 12 hours daily on alternate days over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left and right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power of at least 95%. Therefore, the hypothesis was accepted as true.
II. MATERIAL AND METHODS

Subjects: Twenty volunteer subjects (7 males and 13 females, 19-83 (mean 43) years of age, 118-185 (mean 150) lb in weight, and 5'-6' (mean 5'5") in height) participated in this study. They wore the carnosine patch for 12 hours daily, on alternate days of the week (Tuesdays, Thursdays and Saturdays), for 2 weeks. After giving informed consent, cellular physiologic function baseline data were acquired from all subjects at the beginning of the study period, before the carnosine patch was worn, and then weekly afterwards. Subjects were instructed to keep well hydrated during the study period. All subjects served as their own control. The subjects were instructed to place the carnosine patch 2 inches inferior to the navel (below the belly button), on the CV6 acupuncture point, according to the manufacturer's instructions.

Carnosine Patch: For this research, the nanoscale carnosine patch (LifeWave, La Jolla, California, USA) was used. The carnosine patch is described as a new method for increasing carnosine levels by stimulating acupuncture points on the body with a combination of pressure and infrared energy. “The carnosine patch is a non-transdermal patch that does not put any chemicals or drugs into the body. The carnosine patch contains natural nontoxic crystals that absorb body heat to generate infrared signals that cause the body to produce more endogenous carnosine.” The patch remains active for 12 hours. The carnosine patch is termed the “dancing partner” of the glutathione patch and seems to enhance and complement its physiological effects.
Electro Interstitial Scan (EIS) System and Measurements: An EIS system (LD Technology, Coral Gables, Florida, USA), a programmable electro-medical device, was deployed to acquire bioelectrical impedance measurements indicative of cellular physiologic functional status in 10 organs. The EIS system is a French device, classified as a Biofeedback Class 2 device in the United States (FDA product code: HCC). Recently the FDA has approved a number of alternating current (AC) bioelectric impedance (BIM) devices for use in cardiology and oncology [10-15]. Before EIS measurements were made on subjects, four operational tests were carried out automatically by the device: a power supply test; a channel test; volume, conductivity measurement and correspondence tests; and cable and precision control tests. Electrodes and electrode application sites were prepared following the manufacturer's instructions. Under software control the hardware delivers three trains of 1.28 V pulses to 6 electrodes: 22 AC pulses, 1 second each, at 50 kHz (at 0.6 mA, energy/pulse = 0.77 mJ); 22 DC pulses, 1 second each (at 0.6 mA, energy/pulse = 0.77 mJ); and another set of 22 DC pulses, 3 seconds in duration each (at 0.6 mA, energy/pulse = 0.77 mJ). These electrodes (2 disposable Ag/AgCl electrodes applied to the forehead, 2 reusable polished stainless steel hand electrodes, and 2 reusable polished stainless steel foot electrodes) form 22 different electrode-pair (sensing) configurations and measure the intensity of interstitial fluid conductivity (by applying Maxwell's equations), from which on-screen 3-D models of the human body organs are generated. The measurements are scaled from -100 to +100. As DC current passes only through the interstitial fluid (16% of the body's total water), the device can measure the composition of the interstitial fluid as well as other biochemical parameters and detect ionic abnormalities.

Inclusion Criteria: Participants were healthy, functional individuals willing to wear the carnosine patch and participate in the study for a period of two weeks. Participants also agreed not to start any other new therapy or method of healing and/or make any major changes in their daily life that could affect the outcome of the study. Subjects must not have worn the carnosine patch prior to the study. Subjects were recruited from the local area of Palos Verdes and may or may not have been previous patients of Health Integration Therapy.
Statistical Analysis: The cellular physiologic effects in different organs after 2 weeks of wearing the carnosine patch were compared to baseline data (acquired before wearing the patch) using the paired t-test. A p value < 0.05 was accepted as statistically significant. Sample size (n), level of significance (α), effect size (the difference between the mean EIS reading after wearing the patch and the baseline mean value) and statistical power were related by the following formula:

Φ[Zα + |μ − μ0|·√n / σ] = Statistical Power   (1)

where Zα is the Z score corresponding to the area under the normal distribution curve at the desired level of significance, |μ − μ0| is the effect size, σ is the standard deviation and n is the sample size.

III. RESULTS

Table 1 shows typical EIS readings (cellular physiologic functional status) for a female subject, while Table 2 shows typical EIS readings for a male subject. The functional status change from Week 1 compared to baseline is designated Δ1, and the change from Week 2 to Week 1 is designated Δ2. Δavg is the average of the changes over the 2-week period, ΔT represents the average total physiologic change after 2 weeks, and ΔT-base indicates the total change at the end of the 2-week period with respect to the baseline measurements. Table 3 shows the overall mean values and standard deviations for baseline and total change (ΔT) in physiologic function for each of the organs (n = 20).

Table 1 Typical Electro Interstitial Scan data for a female subject. Age: 30, Weight: 125 lb, Height: 5 ft, 5 in

Table 2 Typical Electro Interstitial Scan data for a male subject. Age: 66, Weight: 168 lb, Height: 5 ft, 11 in
Table 3 Summary of mean and standard deviation values for EIS readings in 10 organs, n = 20
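The paired comparison described under Statistical Analysis can be reproduced with standard tools once per-subject baseline and week-2 readings, like those summarized in Tables 1-3, are available. A minimal sketch; the readings below are made-up placeholders, not the study's data:

```python
# Paired t-test of week-2 EIS readings against baseline, as described under
# Statistical Analysis. The readings below are hypothetical placeholders.
import numpy as np
from scipy.stats import ttest_rel

baseline = np.array([42, 55, 38, 61, 47, 50, 44, 58, 39, 53, 46, 52,
                     41, 57, 49, 54, 43, 60, 48, 51])   # n = 20 subjects
after_2w = np.array([51, 60, 45, 70, 52, 57, 50, 66, 46, 60, 53, 58,
                     47, 65, 55, 61, 49, 69, 54, 57])

t_stat, p_value = ttest_rel(after_2w, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")   # p < 0.05 -> significant change
```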
IV. CONCLUSIONS

Statistical analyses were carried out on the data acquired from these subjects, comparing the cumulative averages of the net changes in physiologic functional status of each organ at the end of the 2-week study period with the corresponding baseline data. The results showed a highly significant (p < 0.001) improvement in physiologic functional status of all organs tested except the pancreas and pituitary gland, which showed a very significant improvement (p < 0.01), and the left kidney and intestines, which did not achieve significance. The average statistical power, considering the effect size (% improvement in physiologic function), sample number, and level of significance, was at least 84% for all organs that achieved a highly significant improvement in cellular physiologic function. The average statistical power for the pancreas and pituitary gland, which showed a very significant improvement, was at least 95%. The left kidney and intestines did not achieve significance after 2 weeks of exposure to the carnosine patch. This could be attributed to a placebo effect, or to the fact that these organs need more exposure time to the carnosine patch to significantly improve their physiologic status as a consequence of biochemical changes in their extracellular environment. Considering that supplementation with carnosine or its building blocks may take 1-4 months to show a steady-state effect, this level of impact is still remarkable. In the future, we plan to perform a double-blind placebo-controlled investigation to explore this topic further.

The overall data in this study demonstrated that the carnosine patch worn 12 hours daily on alternate days over a period of 2 weeks produced a very significant (p < 0.01) improvement in the physiologic functional status of the pancreas, liver, right kidney, left and right adrenals, hypothalamus, pituitary and thyroid glands, with an average statistical power of at least 95%. Therefore, the hypothesis that the carnosine patch worn 12 hours/day on alternate days for two weeks significantly improves cellular physiologic functional status in different organs was accepted as true.

REFERENCES
1. Gulewitsch VS, Amiradzibi S (1900) Über das Carnosine, eine neue organische Base des Fleischextraktes. Ber Dtsch Chem Ges 33:1902-1904
2. Gariballa SE, Sinclair AJ (2000) Carnosine: physiological properties and therapeutic potential. Age and Ageing 29:207-210
3. Boldyrev AA, Formazyuk VE, Sergienko VI (1994) Biological significance of histidine-containing dipeptides with special reference to carnosine: chemistry, distribution, metabolism and medical applications. Sov Sci Rev D Physicochem Biol 13:1-60
4. Pressman AH (1997) Glutathione: The Ultimate Antioxidant. St. Martin's Press, New York, NY
5. Wang AM, Ma C, Xie ZH et al (2000) Use of carnosine as a natural anti-senescence drug for human beings. Biochemistry (Moscow) 65(7):860-871
6. Quinn PJ, Boldyrev AA, Formazyuk VE (1992) Carnosine: its properties, functions and potential therapeutic applications. Mol Aspects Med 13(5):379-444
7. Hipkiss AR, Preston JE, Himsworth DT et al (1998) Pluripotent protective effects of carnosine, a naturally occurring dipeptide. Ann N Y Acad Sci 854:37-53
8. http://www.smart-publications.com/anti-aging/carnosine.php
9. Kyriazis M (2010) Carnosine: the new anti-aging supplement. http://www.smart-drugs.net/ias-carnosine-article.htm
10. Van De Water JM, Miller TW, Vogel RL et al (2003) Impedance cardiography: the next vital sign technology? Chest 123:2028-2033
11. Critchley LAH (1998) Impedance cardiography: the impact of new technology. Anaesthesia 53:677-684
12. Cotter G, Schachner A, Sasson L et al (2006) Impedance cardiography revisited. Physiol Meas 27:817-827
13. http://www.fda.gov/cdrh/pdf/p970033.html
14. Fricke H, Morse S (1926) The electric capacity of tumors of the breast. J Cancer Res 16:310-376
15. Morimoto T, Kinouchi Y, Iritani T, Kimura S et al (1990) Measurement of the electrical bio-impedance of breast tumors. Eur Surg Res 22:86-92

The address of the corresponding author:

Author: Dr. Homer Nazeran, PhD, CPEng (Biomed.)
Institute: Department of Electrical and Computer Engineering
Street: 500 West University Ave, University of Texas at El Paso
City: El Paso, Texas 79968
Country: United States of America
Email:
[email protected]
Multiple Lumiphore-Bound Nanoparticles for in vivo Quantification of Localized Oxygen Levels

J.L. Van Druff, W. Zhou, E. Asman, and J.B. Leach

University of Maryland, Baltimore County, Department of Chemical and Biochemical Engineering, 1000 Hilltop Circle, Baltimore, MD, USA

Abstract— Our group has previously published work wherein microparticles with bound oxygen-sensitive and oxygen-insensitive lumiphores proved an accurate, precise and reliable tool for the quantification of localized oxygen partial pressure in vitro. Calibration between the luminescence of the oxygen-sensitive lumiphore and the local oxygen partial pressure allows for oxygen quantification, while the luminescence of the oxygen-insensitive lumiphore allows for corrections based on particle concentration. An analogous system may prove to be an equally useful tool for in vivo measurements, if certain design features are altered to address concerns such as tissue optical absorptivity and possible toxicity. Current studies focus on the design of a surface-functionalized nanosphere system as a possible approach. This work focuses on the development of this sensing technology as well as methods to allow for precise and flexible synthesis, characterization of key properties (e.g., oxygen sensing in whole blood) and optimization for in vivo conditions.

Keywords— Imaging, Sensors, Nanotechnology, Oxygen, Cancer, Magnetic Nanoparticles.
I. INTRODUCTION

A. Background

Localized hypoxia is known to correlate with tumor radiation resistance, angiogenesis, and metastasis. Consequently, several non-invasive methods of mapping oxygen concentration in vivo can be found in the literature. Two well-studied methods are oxygen-dependent quenching of lumiphores and electron paramagnetic resonance (EPR). Oxygen content has been successfully mapped with EPR, with limited spatial resolution (~3 mm) [1], [2]. Quenching-based techniques can provide higher resolution; however, the efficacy of such systems is contingent upon the depth to which the excitation and emission wavelengths can penetrate tissue. Ruthenium-based lumiphores (λEx.: 448 nm, λEm.: 603 nm) have been utilized to acquire in vivo measurements up to a depth of ~1 mm [3]. This depth limitation is due to the fact that the excitation wavelength is within the range of the visible spectrum, wherein tissue absorbs very strongly.
A class of lumiphores which show great promise for in vivo oxygen measurement are the palladium tetraaryl tetrabenzoporphyrins (Pd-Ar4TBPs; Fig. 1).

Fig. 1 Carboxy-functionalized Pd-Ar4TBP

The Vinogradov group has published numerous papers wherein Pd-Ar4TBPs were successfully used to quantify oxygen at depths in the centimeter range [4], [5], [6]. Such measurements are possible because Pd-Ar4TBPs both absorb and emit at wavelengths (λEx.: 636 nm, λEm.: 795 nm) greater than the absorptive maximum of tissue (~540 nm). Two problems related to the use of Pd-Ar4TBPs in vivo are the innate hydrophobicity of the molecule and the singlet oxygen (1O2) produced during the quenching process. 1O2 is a transient, high-energy toxin. This species is toxic enough that it has been used to selectively kill cells in a process known as photodynamic therapy [7]. Our group has previously published work on a novel multi-lumiphore microparticle system for quantification of oxygen in vitro [8]. This system consists of a ruthenium dye and Nile Blue concomitantly bound to microparticles via ionic interactions. The ruthenium dye is quenchable by oxygen whereas Nile Blue is not. The strength of this system lies in the fact that the ruthenium can quantify oxygen via the Demas-Stern-Volmer relation [9] and Nile Blue allows for ratiometric corrections for variations in particle concentration.
I0/I = 1 / [f1/(1 + KSV1·pO2) + f2]   (1)
Equation (1) is the Demas-Stern-Volmer equation. Here, I0 is the intensity at pO2 = 0, I is the observed intensity and KSV1 is the Stern-Volmer constant. f1 and f2 denote the fractions of the lumiphore population which are accessible and inaccessible to quenching, respectively. It is reasonable to assume that a system analogous to our microparticle system could prove an equally useful tool for in vivo oxygen quantification, provided certain design elements are altered.

B. Design

The design of our new particle-based system was conceived with biological conditions and concerns in mind. To maximize excitation and emission transmission through an organism, the lumiphore utilized to detect oxygen will be the carboxy-functionalized Pd-Ar4TBP (Fig. 1). While commercially unavailable, a synthesis scheme exists in the literature. The oxygen-insensitive dye will be carboxy-functionalized DyLight 747 (Thermo-Pierce) (λEx.: 745 nm, λEm.: 805 nm). The lumiphores are covalently linked to amine-functionalized magnetic nanoparticles (TurboBeads) via an amide bond. The amide bond was chosen due to its low dissociation under nearly all biological conditions (thus minimizing the likelihood of introducing free lumiphore into the body). The use of magnetic nanoparticles provides two additional benefits for in vivo use. First, the magnetic aspect of the particles could be used to non-invasively manipulate local particle concentration in vivo or remove the particles from the body. Second, a hydrophilic, biologically inert polymer, such as polyethylene glycol (PEG), could be concomitantly bound to the particles along with the lumiphores. This would add to the hydrophilicity of the system and could protect biomolecules from 1O2 by forming a steric “shield”.
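Because Eq. (1) is monotonic in pO2, a measured intensity ratio can be inverted in closed form to recover the local oxygen partial pressure. A minimal Python sketch of that inversion; the KSV1, f1 and f2 values are illustrative placeholders, not calibration constants from this work:

```python
# Invert the Demas-Stern-Volmer relation (Eq. 1):
#   I0/I = 1 / (f1 / (1 + K_SV1 * pO2) + f2)
# Solving for pO2 given the measured ratio R = I0/I:
#   pO2 = (f1 / (1/R - f2) - 1) / K_SV1

def po2_from_ratio(R: float, k_sv1: float, f1: float, f2: float) -> float:
    """Oxygen partial pressure from the measured intensity ratio R = I0/I."""
    quenchable = 1.0 / R - f2   # signal fraction attributable to the quenchable dye
    return (f1 / quenchable - 1.0) / k_sv1

# Illustrative (made-up) parameters: f1 + f2 = 1, K_SV1 in 1/torr.
print(f"pO2 = {po2_from_ratio(R=1.8, k_sv1=0.02, f1=0.9, f2=0.1):.1f} torr")
```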
II. METHODS AND RESULTS

A. Ar4TBP Synthesis

A synthesis scheme for the Pd-Ar4TBP in Fig. 1 can be found in the literature [10]. The two primary precursors for this scheme are dimethyl 1,2,3,6-tetrahydrophthalate (“phthalic ester”) and t-butyl isocyanoacetate. Phthalic ester was synthesized from 1,2,3,6-tetrahydrophthalic anhydride in an acid-catalyzed esterification reaction. t-Butyl isocyanoacetate was synthesized from t-butyl chloroacetate and formamide following the published procedure [11]. The phthalic ester was reacted with t-butyl isocyanoacetate to form a pyrrole ester (Fig. 2).
Fig. 2 Pyrrole ester used in this synthesis

The pyrrole ester was then reacted with benzaldehyde under Adler-Longo conditions to form tetraaryl tetracyclohexanylporphyrin (Ar4TCHP; Fig. 3).
Fig. 3 Ar4TCHP

The Ar4TCHP was then metallated with zinc acetate, and the cyclohexanyl groups were aromatized to benzyl groups via oxidation with dichlorodicyanobenzoquinone (DDQ). The zinc chelate was then de-metallated with trifluoroacetic acid (TFA) to yield a free-base capable of forming a Pd chelate. Analysis of the Ar4TCHP and Ar4TBP products was complicated by three phenomena. First, both free-bases can form dications when exposed to oxidative conditions (such as DDQ). Dications cannot be aromatized, but can be re-metallated. Second, Zn-Ar4TCHP and Zn-Ar4TBP are labile and spontaneously decay to the free-base. Third, the direct product of the Adler-Longo reaction alternates between free-base and dication and contains numerous contaminants. Isolation of the direct product is possible but requires numerous rounds of chromatography and recrystallization, which would lead to a loss of product. These separation issues made the prospect of NMR analysis somewhat unfavorable. Fortunately, Ar4TCHP and Ar4TBP, as well as their corresponding dications and metal chelates, have distinct UV-Vis peaks [12].
To determine the identity of the product of the reaction in which the Ar4TBP free-base is formed from Zn-Ar4TBP, two rounds of chromatography (eluent: 12:1:1 methylene chloride:THF:acetic acid) were employed. Fractions from the second round were analyzed via UV-Vis spectroscopy. The fraction containing the product was then twice recrystallized with hexane. These UV-Vis data are presented in Fig. 4.
Fig. 4 a) UV-Vis scan of the first fraction of the second round of chromatography; b) 4th fraction; c) 8th fraction; d) 8th fraction after recrystallization. 507 nm corresponds to the Ar4TBP dication; 450 nm corresponds to the Ar4TCHP dication

From these data, it appears as if both Ar4TCHP and Ar4TBP are present. This indicates incomplete aromatization. There are two likely explanations for the incomplete aromatization. First, the labile Zn-Ar4TCHP may have demetallated during the DDQ reaction, leaving oxidatively inert Ar4TCHP free-bases. Second, the Adler-Longo reaction results in chlorins (porphyrins wherein the pi-bond system is not fully conjugated), and the conversion of chlorin to porphyrin consumed a portion of the DDQ.

B. Surface Reaction Development

In order to produce our final product, a conjugation reaction capable of binding both carboxy Pd-Ar4TBP and DyLight 747 to the amine-functionalized nanoparticles was required. The method we chose was EDC/NHS-mediated coupling [13]. Since this work was performed while Pd-Ar4TBP was still being synthesized, another species, Pd-tetraphenylporphyrin (Pd-TPP; Fig. 5), was used as a substitute.

Fig. 5 Carboxy-functionalized Pd-TPP
It was assumed that, because of the similarities between Pd-TPP and Pd-Ar4TBP, a conjugation reaction that works for Pd-TPP would likely work for Pd-Ar4TBP as well. The reaction initially took place in PBS at pH 7.2. A 10× molar excess of EDC and NHS was used so that the ratio of Pd-TPP to DyLight 747 bound to the particles would depend on the amounts of dye added and not on catalyst concentration. The nanospheres were vortexed for 20 minutes prior to reaction, and during the course of the reaction the reaction vessel was continuously vortexed. To quantify the amount of dye bound to the nanoparticles, we developed a spectrophotometry-based method. Equal concentrations of EDC, NHS, lumiphore and buffer were added to two tubes, and an equal amount of nanoparticles was added to each tube. A 10^6 molar excess of tris was also added to the reaction volumes. One tube received tris before nanoparticle addition and the other received tris 30 minutes after nanoparticle addition. Since tris has a primary amine, adding tris before the nanoparticles should result in the vast majority of lumiphore being bound to tris instead of nanospheres. At the end of the reaction, the nanospheres were pelleted with a rare-earth magnet, leaving the particle-bound lumiphore in the pellet and the tris-bound lumiphore in the supernatant. The absorbance of the lumiphore in the supernatant was then quantified spectrophotometrically. The ratio Abs(t_tris = 30 min) over Abs(t_tris = 0 min) equals the % unreacted.
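The quantification step above reduces to a ratio of two supernatant absorbance readings; a minimal sketch with hypothetical values:

```python
# Percent of dye left unreacted, from the tris-quench control described above:
# the tube receiving tris at t = 0 approximates 100% unbound dye, so
# %unreacted = Abs(t_tris = 30 min) / Abs(t_tris = 0) * 100.
# The absorbance readings below are hypothetical placeholders.
abs_tris_t0 = 0.82    # tris added before nanoparticles (dye blocked from binding)
abs_tris_t30 = 0.31   # tris added 30 min after nanoparticles (unbound dye only)

percent_unreacted = 100.0 * abs_tris_t30 / abs_tris_t0
print(f"unreacted: {percent_unreacted:.0f}%, bound: {100.0 - percent_unreacted:.0f}%")
```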
Fig. 6 Normalized absorbance for DyLight 747 for samples where tris was added before nanoparticles and 30 minutes after nanoparticles (P < 0.01)

Fig. 7 Normalized absorbance for Pd-TPP for samples where tris was added before nanoparticles and 30 minutes after nanoparticles (P < 0.01)

III. CONCLUSIONS

A. Ar4TBP Synthesis

The data indicate that we likely have a mixture of Ar4TCHP and Ar4TBP. While this is not ideal, it is not insurmountable, either. Both Ar4TCHP and Ar4TBP can be metallated with Pd and re-aromatized. Unlike the Zn chelates, Pd chelates are not labile and should not spontaneously demetallate. Once metallated and re-aromatized, separation should become much easier since there will be a single target species that is known to be highly stable. This would also allow the use of analysis methods that require a pure sample, such as NMR and mass spectrometry.

B. Surface Reaction Development

The current data indicate that our reaction scheme is capable of binding both Pd-TPP and DyLight 747 to the nanoparticles individually. The next step would be to confirm that our reaction conditions work with carboxy-functionalized Pd-Ar4TBP, or to adjust the reaction conditions if required.

ACKNOWLEDGMENTS

We acknowledge Miguel Acosta for his microparticle work and technical advice with respect to fluorescence spectroscopy, Dr. Lisa Kelly for her technical advice with respect to photochemistry, Dr. Silviya Zustiak for training on the fluorometer and UV-Vis, and the Henry Luce Foundation, UMBC, and NIH-NINDS (R01NS065205) for financial support.

REFERENCES
1. Bratasz A, Pandian R et al (2007) In vivo imaging of changes in tumor oxygenation during growth and development. Magn Reson Med 57(5):950-959
2. Elas M, Ahn K et al (2006) Electron paramagnetic resonance oxygen images correlate spatially and quantitatively with Oxylite oxygen measurements. Clin Cancer Res 12(14 Pt 1):4209-4207
3. Itoh T, Yaegashi K et al (1994) In vivo visualization of oxygen transport in microvascular network. Am J Physiol 267(5 Pt 2):175182
4. Vinogradov S, Wilson D et al (1995) Metallotetrabenzoporphyrins: new phosphorescent probes for oxygen measurement. J Chem Soc Perkin Trans II 103-111
5. Vinogradov S, Lo W et al (1996) Noninvasive imaging of the distribution of oxygen in tissue in vivo using near-infrared phosphors. Biophys J 70(4):1609-1617
6. Wilson D, Lee W et al (2006) Oxygen pressures in the interstitial space and their relationship to those in the blood plasma in resting skeletal muscle. J Appl Physiol 101(6):1648-1656
7. Wiehe A, Stollberg H et al (2001) PDT-related photophysical properties of conformationally distorted palladium(II) porphyrins. J Porphyrins Phthalocyanines 5(12):853-860
8. Acosta M, Ymele-Leki P et al (2009) Fluorescent microparticles for sensing cell microenvironment oxygen levels within 3D scaffolds. Biomaterials 30(17):3068-3074
9. Carraway E, Demas J et al (1991) Photophysics and photochemistry of oxygen sensors based on luminescent transition-metal complexes. Anal Chem 64(4):337-342
10. Finikova O, Cheprakov A et al (2004) Novel versatile synthesis of substituted tetrabenzoporphyrins. J Org Chem 69(2):522-535
11. Novak B, Lash T (1998) Porphyrins with exocyclic rings. 11. Synthesis and characterization of phenanthroporphyrins, a new class of modified porphyrin chromophores. J Org Chem 63(12):3998-4010
12. Lebedev A, Filatov M et al (2008) Effects of structural deformations on optical properties of tetrabenzoporphyrins: free-bases and Pd complexes. J Phys Chem A 112(33):7723-7733
13. Grabarek Z, Gergely J (1990) Zero-length crosslinking procedure with the use of active esters. Anal Biochem 185(1):131-135
Ion-Mobility Characterization of Functionalized and Aggregated Gold Nanoparticles for Drug Delivery

D.-H. Tsai1,2, L.F. Pease III2,3, R.A. Zangmeister2, S. Guha1,2, M.J. Tarlov2, and M.R. Zachariah1,2

1 University of Maryland, College Park, Maryland, U.S.
2 National Institute of Standards and Technology, Gaithersburg, Maryland, U.S.
3 University of Utah, Salt Lake City, Utah, U.S.
Abstract— The surface properties of gold nanoparticles (Au-NPs) can be easily modified with self-assembled monolayers (SAMs). This is extremely relevant for cancer detection and treatment, but for such applications it is important to characterize these nanoparticles from the perspective of purity, monodispersity, and the strength and stability of SAM binding. First, we demonstrate the use of electrospray-differential mobility analysis (ES-DMA) for characterizing non-functionalized and functionalized 10 nm to 60 nm citrate-stabilized Au colloids, which enables us to determine the packing density. We also carry out temperature-programmed desorption (TPD) of the bound monolayers and find the binding energy to be consistent with studies of SAMs on flat surfaces. Lastly, we demonstrate ES-DMA's utility for investigating nanoparticle flocculation. By tuning the ionic strength of the Au colloid solutions, we are able to track the fraction of each aggregate species as a function of time, which then enables us to determine the extent of aggregation, the aggregation rate and the stability ratio at different ionic strengths. This demonstrates that ES-DMA is a valuable tool for quantitatively probing the early stages of colloidal aggregation and can also be used as a size-selection tool for studying specific aggregates.

Keywords— ES-DMA, gold, colloid, aggregate, coating.
I. INTRODUCTION

Gold nanoparticles (Au-NPs) are being widely considered for applications in health diagnostics such as targeted drug delivery. A significant challenge in using Au-NPs for clinical diagnostics or as therapeutics is their characterization. An important aspect of regulatory approval for use in humans will include rigorous physical and chemical characterization, including measurement of properties such as the physical size of the particle, the size distribution, particle structure, colloidal stability, and the composition of chemical or biological coatings. In this paper we describe the use of electrospray-differential mobility analysis (ES-DMA) for characterizing functionalized Au-NPs. Our objectives include characterizing the size and coating stability of alkanethiol self-assembled
monolayers (SAMs) on Au-NPs. We also use a programmed thermal environment in the gas phase to determine the binding energy of the thiol monolayers. In addition, we present a novel approach to probe colloidal stability by ES-DMA. By monitoring the distribution of aggregates as a function of ionic strength and reaction time, the aggregation mechanism (rate constant and stability ratio) and the surface potential of the colloidal particles are determined.
II. MATERIALS AND METHODS

A. Materials

Commercially available monodisperse Au colloids (nominally 10 nm, 30 nm and 60 nm in diameter, citrate stabilized, Ted Pella Inc., Redding, CA) were used in this work. Two different thiols, 11-mercaptoundecanoic acid (99+%, MUA, negatively charged) and (1-mercapto-11-undecyl)tri(ethylene glycol) (99+%, PEG, neutral), were chosen to form either charged or neutral self-assembled monolayers (SAMs) on the Au nanoparticles. The functionalized Au colloid suspension was first centrifuged to separate the colloids from the supernatant containing excess salts and unbound molecules; the supernatant (typically 0.95 mL of a 1.00 mL sample) was removed and replaced with an equivalent volume of ca. 2 mmol/L to 10 mmol/L aqueous ammonium acetate (Sigma, 99.9 %) solution.

B. ES-DMA

Figure 1 depicts a schematic of our experimental system, consisting of an electrospray aerosol generator (Model 3480, TSI Inc.), a differential mobility analyzer (Model 3080n, TSI Inc.) and a condensation particle counter (CPC, Model 3025, TSI Inc.). Within the electrospray aerosol generator, conductive solutions of Au colloids were sprayed in a stable cone-jet mode. The aerosol stream is then passed through a housing containing a radioactive Po-210 (α) source that reduces the highly charged droplets to droplets that are either neutral, with a single negative charge, or with
a single positive charge. The positively charged dry particles were separated within the differential mobility analyzer (DMA) based on their electrical mobility (or particle size) and then transmitted to a condensation particle counter (CPC) [2-5]. To achieve sufficient resolution and stability from the DMA, the ratio of sheath-to-aerosol flow rates within the DMA was set to 30 for the 10 nm and 30 nm Au-NPs, and to 10 for the 60 nm Au-NPs.
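The DMA step above classifies particles by electrical mobility. The mobility-diameter relation is not quoted in the text, so the sketch below uses the standard aerosol-science expression for a singly charged sphere, Zp = e·Cc(dp)/(3πμdp), with commonly used Cunningham slip-correction coefficients; treat the constants as assumptions:

```python
# Electrical mobility of a singly charged sphere, Zp = e * Cc(dp) / (3*pi*mu*dp),
# the standard aerosol relation (not quoted in the text; constants are assumptions).
import math

E_CHARGE = 1.602e-19   # elementary charge, C
MU_AIR = 1.83e-5       # dynamic viscosity of air, Pa*s (~300 K)
MFP_AIR = 67.3e-9      # mean free path of air, m (ambient conditions)

def slip_correction(dp: float) -> float:
    """Cunningham slip correction with commonly used empirical coefficients."""
    kn = 2.0 * MFP_AIR / dp
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def electrical_mobility(dp: float) -> float:
    return E_CHARGE * slip_correction(dp) / (3.0 * math.pi * MU_AIR * dp)

for dp in (10e-9, 30e-9, 60e-9):   # the nominal colloid sizes used in this work
    print(f"dp = {dp*1e9:2.0f} nm -> Zp = {electrical_mobility(dp):.2e} m^2/(V*s)")
```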
The temperature of the particles is well known (i.e., it is the temperature of the gas) due to their small thermal mass. Figure 2b shows the change in ΔL as a function of reactor temperature. This figure shows a monotonic decrease in the apparent coating thickness with increasing temperature, where T is the temperature of the Au-NP, t is the reaction time (the residence time in the furnace), and ΔL0 represents ΔL at the initial condition (T = T0 = 20 °C, t = 0). For all particles at the same reaction time, we observed that MUA was effectively fully removed from the gold surface at T = 300-350 °C, independent of particle size. The relatively high temperature of desorption confirms that the MUA is chemisorbed to the surface rather than physisorbed (T < 80 °C). From the observed change of ΔL/ΔL0 we can back out the change in the surface packing density of the SAM on an Au-NP, ρ [4]. We first define
ΔL(T ) . For the size range we considered, our model ΔL(T0 )
provides a simple correlation between ΔL, which we measure, and ρ, which we employ to determine the binding energy in normalized terms, 4
ρ * = ( ΔL*) 2 Fig. 1 Schematic of experimental system, including differential mobility analyzer (DMA), and condensation particle counter (CPC)
III. RESULTS AND DISCUSSION A. Adsorption/Desorption of Self-Assembly Mononlayer on Au-NP First we examined the capability to detect the presence of SAM on the surface of Au-NPs using ES-DMA. We define a change in particle size as ΔL=dp-dp0 where dp and dp0 represent the coated and uncoated particle mobility diameter, respectively. As shown in Figure 2a, the size distribution does shift to larger sizes for both MUA- (ΔL ~2 nm) and PEG-coated (ΔL ~1.7 nm) Au-NPs. Thus, we are able to detect the presence of SAM conjugation on the Au-NPs based on the difference in electrical mobility between conjugated and bare Au-NPs. We now employ the change in thickness to assess the binding of SAMs to Au-NPs. With the ability to distinguish changes of 0.2 nm, ES-DMA offers the opportunity to further monitor the change in diameter from the thermal desorption of SAMs to gain insight into thermal stability and binding energy. One major advantage of using a gas-phase approach for thermal desorption studies is that thermal processing can be done rapidly, and in the absence of any complicating substrate effects. In particular, the
(1)
With Eq. 1, we evaluate the extent of thermal desorption of the SAM simply from the change of ΔL measured by the DMA. Nishida et al. describe the desorption rate, Dr, of MUA from a flat gold surface as a second-order reaction, and we adopt their description here for the gold particle and check for consistency. The second-order desorption involves a dimerization process of SAM molecules at higher temperature (T > 200 °C), when the surface coverage is high [6,7].
Dr = −dρ*/dt = k1(ρ*)²    (2)
Using Eq. 2 and the Arrhenius form of k1, we may evaluate the apparent binding energy (E) between MUA and Au from the TPD results. With the initial condition ρ = ρ(T0) at t = 0,
ln(1/ρ* − 1) = ln(At) − E/RT    (3)
where k1 is the rate constant (= Ae^(−E/RT)), A is the Arrhenius factor and E is the activation energy. Fitting our TPD data to Eq. 3 in Figure 2c shows that a second-order Arrhenius model fits the data quite well, and yields an Arrhenius factor of 1.0×10^11 s⁻¹ with an activation energy E ≈ 105 ± 10 kJ/mol for all three sizes of Au-NPs. Thus, for the range of sizes we considered, the curvature effect is negligible, as each MUA molecule only subtends an arc of 4° or less on the Au surface.
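As a rough illustration of this fit, the following sketch converts measured ΔL* values to ρ* via Eq. 1 and regresses Eq. 3 linearly in 1/T; the data array and residence time are invented placeholders, not values from the paper:

```python
import numpy as np

R = 8.314  # gas constant [J mol^-1 K^-1]

# Hypothetical TPD data: furnace temperature [K] and normalized
# coating thickness dL* = dL(T)/dL(T0) at one residence time.
T = np.array([473.0, 523.0, 573.0, 623.0])
dL_star = np.array([0.95, 0.80, 0.55, 0.25])
t = 1.2  # residence time in the furnace [s]

rho_star = dL_star**2                 # Eq. (1): rho* = (dL*)^2
y = np.log(1.0 / rho_star - 1.0)      # left-hand side of Eq. (3)

# Eq. (3) is linear in 1/T: y = ln(A t) - (E/R)(1/T)
slope, intercept = np.polyfit(1.0 / T, y, 1)
E = -slope * R                        # activation energy [J/mol]
A = np.exp(intercept) / t             # Arrhenius pre-factor [1/s]
print(f"E = {E/1000:.0f} kJ/mol, A = {A:.2e} 1/s")
```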
Fig. 2 (a) Particle size distribution of Au-NPs with different SAM-coating conditions. Blue spheres (●): bare Au-NPs; red squares (■): MUA-coated Au particles; green triangles: PEG-coated Au particles. (b) ΔL* (= ΔL/ΔL0) vs. T for three different sized Au nanoparticles, 20 nm, 30 nm and 60 nm; t = 1.2 s. (c) Arrhenius plots for three different sizes of Au-MUA particles; T = 200-350 °C; t = 0.3 s for 20 nm Au-NPs and t = 1.2 s for both 30 nm and 60 nm Au-NPs. Y* = ln(ρ*⁻¹ − 1) − ln(At); the slope = −E/R, where R is the gas constant (8.314 J mol⁻¹ K⁻¹)
B. Particle Aggregation
Our objective is to develop a systematic approach to detect liquid-phase aggregation using ES-DMA. One advantage of our approach is that even a small change in Au flocculation should be detectable. We restrict the scope of the homoaggregation studied here to the earliest stages of flocculation, where single particles form small clusters (<5 primary particles) prior to significant cluster-cluster aggregation, though ES-DMA can examine larger extents of flocculation. We begin by identifying the peaks in the ion-mobility spectrum, and then proceed to the time-dependent kinetic study. From the temporal change in the number concentration of each aggregation state, we estimate the flocculation rate of the Au-NPs and, from this, the surface potential between the colloidal Au nanoparticles.

Figure 3a presents ion-mobility spectra of gold colloids at various concentrations of ammonium acetate, C. Each spectrum presents up to five distinctive peaks representing salt residues and clusters containing one to four individual Au particles, which we term monomer, dimer, trimer, etc. The ion mobility of the monomer peak (n = 1) corresponds to a diameter of 11.6 nm, consistent with the original colloidal sample encrusted with salts [4]. Further assignment of the dimer peak to 14.8 nm, the trimer peak to 17.4 nm, and the tetramer peak to 19.4 nm was confirmed by depositing particles corresponding to each peak exiting the DMA on a TEM grid inside an electrostatic deposition chamber; TEM then confirmed the identity (i.e., the peak labeled as tetramer does contain four particles) [5]. As seen in Fig. 3a, the intensity of the monomer (n = 1) decreased, and the intensities of the dimer (n = 2), trimer (n = 3), and tetramer (n = 4) increased as C increased, reflecting the decreased electrostatic repulsion between Au-NPs due to a decrease in the Debye screening length with added salt. Importantly, this result indicates that one can clearly distinguish the aggregate distribution in solution with ES-DMA, and clearly identify changes due to solution conditions.

The results in Figure 3a obviously depend on when the flocculating solution was sampled, a fact we now exploit to determine the kinetics of flocculation. To quantify the extent of flocculation, we draw an analogy between the flocculation of colloidal nanoparticles and a step-growth polymerization process to define the degree of flocculation (DF):

DF = Σ(n=1 to 4) n·Nc,n / Σ(n=1 to 4) Nc,n    (4)
In this analogy, the average number of "repeat units" in an Au aggregate "polymer chain" corresponds to the average number of particles in an n-mer (where n > 1) aggregate. As in a polymer propagation process (here, flocculation), DF increases from unity as a function of reaction time, t. Figure 3b summarizes the temporal changes in the ion-mobility size distributions in terms of n-mer concentration. In keeping with simple Brownian flocculation, the monomer concentration decreases monotonically, with each successive n-mer appearing later in time. For this set of experimental conditions, dimers and trimers reached a peak concentration at t ≈ 80 min and ≈ 200 min, respectively. Note that the defined reaction time runs from the moment the Au colloids were mixed with the ammonium acetate buffer solution to the time we collect the ion-mobility spectrum. Fig. 3c shows DF versus t for various ionic strengths. Generally, a higher DF was observed for longer reaction times and higher ionic strengths. Because of the depletion of monomer Au-NPs, at C = 9.47 mmol/L, DF approaches a constant when t > 80 min. This result indicates the flocculation rate is dominated by monomer-n-mer interactions such
that when the monomer is depleted the aggregation rate essentially stops. The aggregation rate constants, as well as the stability ratio, can also be calculated from the change in the total number of particles versus reaction time in solution [5].
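A minimal sketch of Eq. 4, computing DF from the number concentrations of the monomer through tetramer peaks; the peak values are hypothetical. The second helper shows one standard route to a flocculation rate constant (Smoluchowski-type second-order decay of the total particle number), which is our own assumption and may differ in detail from the analysis in [5]:

```python
import numpy as np

def degree_of_flocculation(counts):
    """Eq. (4): DF = sum(n * N_c,n) / sum(N_c,n) over n = 1..4,
    where counts[n-1] is the number concentration of n-mers."""
    counts = np.asarray(counts, dtype=float)
    n = np.arange(1, len(counts) + 1)
    return (n * counts).sum() / counts.sum()

def smoluchowski_rate_constant(t, N_total, N0):
    """Second-order Brownian flocculation: 1/N - 1/N0 = k*t,
    so k is the slope of 1/N - 1/N0 versus reaction time t."""
    y = 1.0 / np.asarray(N_total, dtype=float) - 1.0 / N0
    k, _ = np.polyfit(np.asarray(t, dtype=float), y, 1)
    return k

# Hypothetical peak concentrations (monomer..tetramer), arbitrary units:
print(degree_of_flocculation([100.0, 5.0, 1.0, 0.0]))  # early: DF ~ 1.07
print(degree_of_flocculation([40.0, 20.0, 8.0, 3.0]))  # later: DF ~ 1.63
```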
Fig. 3 (a) Ion-mobility spectrum of colloidal Au in ammonium acetate solution at different ionic strength conditions; the mobility size of individual particles is ≈10 nm. (b) Number concentration of different Au aggregates vs. reaction time, C = 7.89 mmol/L. Nc,n is the number concentration of Au-NPs in solution, and n is the number of individual particles in one aggregate. ◆: n = 1; □: n = 2; ▲: n = 3; ×: n = 4. (c) Degree of flocculation, DF, vs. reaction time; C varies from 4.21 mmol/L to 9.47 mmol/L

IV. CONCLUSION
We have demonstrated a systematic approach to characterize functionalized self-assembled monolayers (SAMs) of alkanethiol molecules on Au nanoparticles with ES-DMA. The mobility measurement using a DMA has sufficient resolution to track both the packing density of SAMs and the thermally induced desorption kinetics of SAMs from Au nanoparticles. In addition, we have applied ES-DMA to characterize the aggregation process of nanoparticles in solution. The instrument has sufficient resolution to identify the aggregation state of NPs and to track changes in the number concentration versus ionic strength and reaction time. For the range of reaction times we considered, we find the degree of aggregation to be proportional to the ionic strength and the residence time. This study suggests that ES-DMA is a useful tool for the study of the packing density and stability of coatings on nanoparticles, and also for quantitative analysis of the early stages of colloidal aggregation.

DISCLAIMER
Certain commercial equipment, instruments, or materials are identified in this report in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

REFERENCES
1. Kim, S. H.; Woo, K. S.; Liu, B. Y. H.; Zachariah, M. R. (2005) Journal of Colloid and Interface Science 282(1):46-57
2. Tsai, D. H.; Hawa, T.; Kan, H. C.; Phaneuf, R. J.; Zachariah, M. R. (2007) Nanotechnology 18(36)
3. Tsai, D. H.; Zangmeister, R. A.; Pease, L. F.; Tarlov, M. J.; Zachariah, M. R. (2008) Langmuir 24(16):8483-8490
4. Tsai, D. H.; Pease, L. F.; Zangmeister, R. A.; Tarlov, M. J.; Zachariah, M. R. (2009) Langmuir 25(1):140-146
5. Nishida, N.; Hara, M.; Sasabe, H.; Knoll, W. (1996) Japanese Journal of Applied Physics Part 1 35(11):5866-5872
6. Nishida, N.; Hara, M.; Sasabe, H.; Knoll, W. (1996) Japanese Journal of Applied Physics Part 2 - Letters 35(6B):L799-L802

Information about corresponding author:
Author: Prof. Michael R. Zachariah
Institute: Department of Mechanical Engineering and Chemistry, University of Maryland
Street: 2125 Glenn L Martin Hall
City: College Park, Maryland
Country: United States of America
Email:
[email protected]
Quantitative Mapping of Vascular Geometry for Implant Sites
J.W. Karanian, O. Lopez, D. Rad, B. McDowell, M. Kreitz, J. Esparza, J. Vossoughi, O.A. Chiesa, and W.F. Pritchard
Laboratory of Cardiovascular and Interventional Therapeutics, Division of Biology, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, FDA, 8401 Muirkirk Road, Laurel, MD 20708
Abstract— In vivo characterization of the complex dynamic forces and repetitive deformations experienced by the pelvic and leg vasculature is important to improve the evaluation of safety and effectiveness of implantable interventional devices such as stents [1]. The goal of this study was to use image-based geometric modeling and analytical techniques to characterize the vascular deformations that occur with pelvic-hind limb motion in swine. Geometric changes in the ilio-femoral vessels were evaluated across a full range of motion in anesthetized swine. Computed tomography angiograms were obtained at each position and processed for model construction and geometric analysis of vascular segment length, curvature and twist. Local geometric changes between positions were evaluated and compared for each arterial segment over the full length of the iliac-femoral-popliteal artery. The greatest changes in axial length, twist and curvature were observed in the femoral segments between the positions of hip extension and flexion. The total changes in iliac-femoral-popliteal artery axial length and twist between the positions were −5.4 cm and 112°, respectively. Similarly, the maximum change in curvature along the artery was 0.46 cm⁻¹. Reported device failures in the ilio-femoral-popliteal vessels of humans may be due in part to the wide range of physical forces and deformations imposed on these vessels during normal motion. Improved knowledge of the biomechanical behavior of the sites of vascular implants will make bench and preclinical testing more predictive for these devices, improving early evaluation of safety and effectiveness.
Keywords— Vascular biomechanics, vascular characterization, imaging-based modeling, preclinical evaluation, interventional device safety.
I. INTRODUCTION
Recent reports show that the prevalence of human stent fractures, and with it the need to model these failures, has become increasingly important [1, 2]. These failure modes may be related to the implant material and environment, including normal motion of the implant site [3]. Intravascular implants are subjected to repetitive deformations (axial strain, torsion, compression, bending; Fig. 1) and may alter those vascular deformations in patients and animal models [4]. In vivo characterization of these deformations and the dynamic forces acting on the pelvic and leg vasculature (ilio-femoral-popliteal) in both
humans and pre-clinical animal models is critical to the evaluation of design and safety of implanted devices such as stents. The study objective was to use image-based geometric modeling and analytical techniques to characterize vascular deformations that occur in pelvic-hind limb motion at arterial implant sites in swine.
Fig. 1 Modes of vascular deformation: axial strain, bending, torsion, and compression
II. MATERIALS AND METHODS
The study was performed under an Institutional Animal Care and Use Committee-approved research protocol. Five anesthetized domestic swine, each 100 lbs, underwent computed tomography angiography (CTA) in a series of six static pelvic-hind-limb positions representing the full range of normal leg motion (Fig. 2) and the related motion and deformations of the pelvic blood vessels of the animal. These positions included: standing; walking; hips extended with legs extended; hips flexed with legs flexed; hips flexed with legs extended; and hips flexed with the pelvis rotated (Fig. 2). Computed tomography angiography (CTA) of the vasculature was performed with a Philips Mx8000 Quad/IDT CT scanner (Philips Medical Systems, Cleveland, OH, USA) using Isovue-370 (Bracco Diagnostics, Princeton, NJ, USA) as the contrast agent. The body positions were confirmed by reference angle measurements of limb position using the CTA scout images. Centerline paths of the arteries and branches were identified on CTA images and the in vivo geometries segmented and quantified using a customized software package cvSim®
[1,5] (cvSim Inc., Palo Alto, CA, USA). Longitudinal strain, axial twist and curvature were evaluated and compared across the range of motion to characterize the 3D deformation of pelvic-peripheral arterial vascular segments. Matlab® (Mathworks, Natick, MA) was also used for further data analysis and display of local geometric differences along centerline paths of vessels of interest throughout the range of pelvic-hind limb positions.
Fig. 2 Evaluated positions of pelvic motion: A) standing, B) walking, C) hips extended with legs fully extended, D) hip flexed, leg flexed, E) hips flexed with legs fully extended, and F) hips flexed with pelvis rotated. Positions shown in C and E are reported here

Fig. 3 3D representation of centerline paths of the vascular bed including the abdominal aorta, iliac, femoral and popliteal arteries and branches

A. Image-Based Anatomic Model Construction
Using the CTA data, centerline paths (interpolated cubic Hermite spline curves) were constructed for the abdominal aorta and the iliac, femoral, popliteal, circumflex iliac, deep femoral, lateral circumflex femoral, saphenous and caudal femoral arteries (Fig. 3) with the use of cvSim [1,5]. These centerlines are composed of analytical representations of connected centroids of the vessel lumen across the length of the vessel. Initially, a centerline path was created by selecting the centroid of the vessel lumen on each 2D CTA image slice (Fig. 4). Each lumen centroid was then assigned to a corresponding spatial location in a Cartesian system, whereby a 3D curve representative of the blood vessel was produced by connecting consecutive lumen centroids (Fig. 4). Centerline paths were further used for quantification of vessel characteristics [4].

Fig. 4 Construction of geometric vascular model from CTA image data

Mathematical reshaping and smoothing operations were performed on the initial centerline paths in order to eliminate artificial irregularities of lumen boundaries and high-frequency fluctuations present in sequential CTA images. Such operations were necessary to mitigate image noise, particularly at small-diameter vessels, and irregularities of the vessel skeletonization process. Careful attention was given to retaining the natural features of the vessel to ensure the accuracy of subsequent geometric measures obtained from the models. Centerline paths were smoothed by Fourier smoothing using a kernel of less than half the length of the original centerline path. Centerline paths were evaluated for distortions in the natural curvature of the vessels at each phase of pelvic-hind limb motion. This technique ensures the generation of reliable geometric vascular models in an efficient and productive manner. The generated centerline paths were then used to quantify local geometric vascular deformations attributed to pelvic-hind limb motion in swine in terms of axial strain, axial twist and curvature changes (Fig. 5).
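A sketch of the Fourier smoothing step, assuming a simple low-pass filter applied to each coordinate of the ordered centroid list. The mirror padding and the keep_fraction parameter are our own implementation choices; the authors state only that the kernel was under half the path length:

```python
import numpy as np

def fourier_smooth(points, keep_fraction=0.15):
    """Low-pass Fourier smoothing of an open 3D centerline.

    points: (N, 3) array of ordered lumen centroids.
    keep_fraction: fraction of low-frequency modes retained.
    The curve is mirror-padded so the FFT's implicit periodicity
    does not distort the endpoints.
    """
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts, pts[::-1]])      # even (mirror) extension
    spec = np.fft.rfft(padded, axis=0)
    cutoff = max(1, int(keep_fraction * spec.shape[0]))
    spec[cutoff:] = 0.0                       # drop high-frequency noise
    smooth = np.fft.irfft(spec, n=padded.shape[0], axis=0)
    return smooth[: len(pts)]

# Noisy helical test path standing in for a segmented vessel centerline:
s = np.linspace(0, 4 * np.pi, 200)
path = np.c_[np.cos(s), np.sin(s), 0.1 * s] + 0.02 * np.random.randn(200, 3)
smoothed = fourier_smooth(path)
```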
B. Length Measurements
Vessels were divided into segments defined by the origins of specific branch vessels as reference anatomic markers. Vessel length for a segment was measured as the distance between the ostia of the reference vessels that defined the segment (Fig. 6 and Table). The external iliac, segment 1, was measured from the origin of the contralateral external iliac artery to the origin of the deep circumflex iliac artery. The femoral-popliteal was divided into four segments, 2 to 5. The first femoral segment was measured from the deep circumflex iliac artery to the deep femoral artery. The second femoral segment was measured from the deep femoral artery to the lateral circumflex femoral artery. The third femoral segment was measured from the lateral circumflex artery to the saphenous artery. The final segment, popliteal, was measured from the saphenous artery to the caudal popliteal artery. These branch points were common to all animals and served as reference locations for length measurements across the different pelvic positions. Axial strain was calculated by comparing the segment length changes between reference and final positions (Fig. 5) [5].

C. Curvature Measurement
Local changes in vessel curvature are mainly attributed to orthogonal bending moments acting on the vessels along a longitudinal axis [5]. Local vessel curvature was calculated based on the radius (ri) of a circumscribed circle through three sequential points (Pa, Pi, Pb) included within a sampling window equivalent to 5% of the vessel's average diameter along the centerline path. The curvature at Pi was defined as the inverse of the radius of the circumscribed circle around the three coordinates, Pa, Pi, and Pb, where Pa and Pb have equal distance to Pi (Fig. 5) [5].

D. Axial Twist Measurement
Musculoskeletal motion can induce torsional moments on vessels that may result in axial twist and in turn cause torsion deformations at implant sites. Proximal and distal branch points at each segment were taken as fiducial markers to measure the axial angular difference between branch points. Axial twist was subsequently calculated as the change in angular difference for each segment between a reference and final position (Fig. 5) [5].

Fig. 5 Geometric measures quantified to characterize modes of vascular deformation: A) axial strain, B) axial twist and C) curvature
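The three-point circumscribed-circle construction described above translates directly into code. In this sketch the circumradius is obtained from R = abc/(4·area); the fixed index window is a simplification we adopt for illustration, whereas the paper sets the sampling window from 5% of the vessel's average diameter:

```python
import numpy as np

def three_point_curvature(Pa, Pi, Pb):
    """Curvature at Pi as the inverse radius of the circle through
    three sequential centerline points; uses R = abc / (4 * area)."""
    Pa, Pi, Pb = (np.asarray(p, dtype=float) for p in (Pa, Pi, Pb))
    a = np.linalg.norm(Pb - Pi)
    b = np.linalg.norm(Pa - Pb)
    c = np.linalg.norm(Pa - Pi)
    area = 0.5 * np.linalg.norm(np.cross(Pi - Pa, Pb - Pa))
    if area < 1e-12:           # collinear points: zero curvature
        return 0.0
    return 4.0 * area / (a * b * c)   # 1/R

def centerline_curvature(points, window=1):
    """Curvature along an ordered centerline; `window` is the symmetric
    offset so Pa and Pb are equidistant from Pi."""
    pts = np.asarray(points, dtype=float)
    kappa = np.zeros(len(pts))
    for i in range(window, len(pts) - window):
        kappa[i] = three_point_curvature(pts[i - window], pts[i], pts[i + window])
    return kappa

# Sanity check on a circle of radius 5: curvature should be ~0.2
theta = np.linspace(0, np.pi, 50)
circle = np.c_[5 * np.cos(theta), 5 * np.sin(theta), np.zeros(50)]
print(centerline_curvature(circle)[25])  # ~0.2
```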
III. RESULTS
The 3D geometry of the iliac-femoral-popliteal arteries with the hips fully extended (Fig. 6A) and fully flexed (Fig. 6B) were compared. The changes in axial length and twist were 5.4 cm and 7.2°/cm, respectively, with a maximum change in curvature of 0.46 cm⁻¹ (Table). The Table shows the geometric change in each arterial segment (1, 2, 3, 4 and 5) and the total vessel (1-5) between extended and flexed positions (m ± se). A 3D curvagram of the geometric model shows the regional distribution of curvature along the vessel for the extended and flexed positions (Fig. 6). The greatest curvature (lightest shade) for the femoral segment was observed in the flexed position. The greatest changes in axial length, twist and curvature were in femoral segments 2 to 4 (Fig. 6). Stent placement in the femoral artery (Fig. 7) reduced geometric changes with motion and altered the curvature of the native vessel. Geometric changes were statistically significant across the range of motion evaluated (p < 0.05).
Fig. 6 Geometric model of the iliac-femoral-popliteal arteries (1-5) in extended (A) and flexed (B) positions. Comparison of left-side arterial curvature (indicated by shading) in each position (C)
Maps of local vessel curvature, hereafter referred to as curvagrams, were constructed by superimposing color-coded curvature measurements at finely sampled intervals onto the 3D geometric model of a vessel (Figs. 6 and 8). Curvagrams provide the ability for graphical identification of, and comparisons between, vascular regions susceptible to high or low degrees of curvature during normal pelvic-hind limb motion. Furthermore, cumulative distribution functions (cdf) of local curvature values were calculated for the right femoral artery of three different animals in three different positions: hips extended, standing and hips flexed (Fig. 7). From the cdf it can be easily observed that the distribution of femoral curvature values changes significantly with the degree of pelvic-hind limb motion. Local vascular curvature significantly increased from extended to flexed positions, as shown by the (m ± se) of their distributions: extended (0.1816 ± 0.0035), standing (0.2523 ± 0.0047), and flexed (0.4046 ± 0.0082) positions.

Fig. 7 Cumulative distribution functions (cdf) of local curvature values for the right femoral artery at hips extended, standing and hips flexed positions of pelvic motion

Fig. 8 3D CT angiogram of stented femoral artery (A). Vessels in the flexed position with a right femoral stent showing redistribution of vascular curvature compared to the contralateral vessel (B)

IV. CONCLUSION
The motion of the iliac-femoral-popliteal vessels was characterized by changes in length, twist and curvature during normal motion of the pelvic-hind limbs. Significant differences in geometric deformation of individual vessel segments (e.g., curvature) were described. Characterization of vascular deformations may significantly influence the pre-clinical evaluation of implanted vascular stents. Reported failures in patients, such as stent fracture [1,2], may be due in part to the wide range of physical forces imposed on vascular beds during normal motion. Selection of an appropriate implant site for pre-clinical modeling should consider the specific pre-clinical safety question to be evaluated, the potential motion and deformation characteristics of the animal vessel, and the intended clinical implant site. Improved knowledge of the biomechanical behavior of arteries (human and animal) and the correlation between the two will make preclinical bench and animal testing more predictive, improving the evaluation of device safety and effectiveness.

REFERENCES
1. Umeda H, Gochi T, Iwase M, et al. (2009) Frequency, predictors and outcome of stent fracture after sirolimus-eluting stent implantation. Int J Cardiol 133(3):321-6.
2. Popma JJ, Tiroch K, Almonacid A, et al. (2009) A qualitative and quantitative angiographic analysis of stent fracture late following sirolimus-eluting stent implantation. Am J Cardiol 103(7):923-9.
3. Choi G, Shin LK, Taylor CA, Cheng CP. (2009) In vivo deformation of the human abdominal aorta and common iliac arteries with hip and leg flexion: implications for the design of stent-grafts. J Endovasc Ther 16(5):531-8.
4. Nikanorov A, Smouse HB, Osman K, et al. (2008) Fracture of self-expanding nitinol stents stressed in vitro under simulated intravascular conditions. J Vasc Surg 48(2):435-40.
5. Choi G, Cheng CP, Wilson NM, Taylor CA. (2009) Methods for quantifying three-dimensional deformation of arteries due to pulsatile and nonpulsatile forces: implications for the design of stents and stent-grafts. Ann Biomed Eng 37(1):14-33.
Failure Analysis and Materials Characterization of Hip Implants
A.M. Bastidos and S.W. Stafford
University of Texas at El Paso/Department of Metallurgical and Materials Engineering, El Paso, TX, U.S.
Abstract— This research focused on the microstructural characterization and failure analysis of hip implant components. The main hip components analyzed were the femoral heads and the ultra-high molecular weight polyethylene (UHMWPE) liners. Previous research has shown that hip implants tend to fail at the PE liners due to adhesive and abrasive wear, delamination, and third-body particles (metals, ceramics, and PE). The methods and procedures for analyzing these failures consisted of non-destructive and destructive evaluations. Non-destructive evaluations included techniques such as visual characterization and dye penetrant inspection (DPI), which displayed macroscopic surface details and presented initial clues as to the extent and causes of the failures. The use of attenuated total reflection infrared spectroscopy (ATR-IR) produced transmittance spectra for the PE liners to indicate the bonds and their associated wavelength energies. The destructive evaluations included metallography and scanning electron microscopy (SEM). These techniques revealed the microstructural characteristics of the metallic components and focused on microscopic cracks and abrasions from areas of delamination and adhesion in the samples. After further studies and analyses on the failed implants, the information and data will be given to the collaborating orthopaedic surgery group in hopes of improving new components to ensure increased implant lifetimes and fewer revision surgeries for hip replacement patients.
Keywords— Total hip arthroplasty (THA), femoral head, ultra-high molecular weight polyethylene (UHMWPE), acetabular shell.
I. INTRODUCTION
First performed in 1960, total hip replacement is one of the most important surgical advances of the last century [1]. According to the American Academy of Orthopaedic Surgeons, more than 193,000 total hip replacements and 140,000 partial and revision hip replacements are performed each year in the United States [1,2]. A total hip arthroplasty (THA) involves the removal of diseased cartilage and bone, which are then replaced with implant materials [3]. The need for hip replacement typically stems from various cases of arthritis or injury [4]. In support of this, Table 1 displays the five most frequent diagnoses in patients who underwent hip replacement surgery in 2003 [4].
Table 1 Most frequent five principal diagnoses in patients who underwent hip replacement surgery [4]

Rank | Total Hip Replacement | %
1 | Osteoarthritis | 81
2 | Other bone, musculoskeletal diseases | 9
3 | Fracture of neck of femur (hip) | 4
4 | Complication of device, implant, or graft | 1
5 | Rheumatoid arthritis and related diseases | 1
In terms of injuries, a hard fall, repeated loads and stresses (i.e., sports, walking, running, etc.), and improper care on the part of the patient can result in wear and ultimate failure of the joints [5]. This research focused on failed hip implant components with an emphasis on the interaction between the metallic femoral heads and the ultra-high molecular weight polyethylene (UHMWPE) liners from the acetabular cups. Typical problems found between the femoral head and liner are wear, possible corrosion product, defects, and fractures. The overall goal of this research is to perform failure analysis and materials characterization to determine the causes of premature failure of the aforementioned components.
II. PATIENT AND COMPONENT BACKGROUNDS
The first case, Patient E, involved a 69-year-old male whose duration of implantation was 12 years. The manufacturer, DePuy Orthopaedics, Inc., produced the S-ROM total hip arthroplasty (THA) components, in which the femoral head has an outer diameter (OD) of 28 mm with an offset of 6 mm (i.e., 28+6) and the UHMWPE liner an inner diameter (ID) of 28 mm. The second case, Patient W, involved a 77-year-old female who required revision hip surgery 10 years after the initial THA. The manufacturer, Biomet, produced the 28 mm OD femoral head with an offset of 3 mm and an UHMWPE liner ID of 28 mm. The third case, Patient H, was an 85-year-old male whose implant lifetime exceeded 10 years. The Biomet femoral head has an OD of 28 mm with an increased neck length and an offset of 9 mm, and the UHMWPE liner an ID of 28 mm with a 12/14 taper.
Before the primary THA, Patient U was a 57-year-old woman, 5'2", who weighed 140 pounds. Patient U was diagnosed with diabetes at age 44, and a THA of the left hip was required due to stage II osteoporosis. Though no osteolysis or diabetes-related infections were found in the tissues surrounding the implant during the revision surgery, all components were replaced with newer models. The retrieved components were manufactured by Stemcup Medical Products AG. The Kuoni-model femoral head has a 28 mm OD, the UHMWPE liner an ID of 28 mm (though not provided), and the acetabular shell an ID of approximately 60 mm. Due to research requirements from the manufacturer, the UHMWPE liner was sent back to the company for further research and analyses.
III. RESULTS
A. Visual Examination
Upon visual examination of the UHMWPE, areas of abrasive wear, plastic flow, scratches, and possible cracks were found, Fig. 1. The surfaces of the femoral heads revealed minute scratches, which are usually due to contact with abrasive particles creating wear tracks. The socket of the acetabular shell displayed worn areas with burnishing.
B. Fourier Transform Infrared Spectroscopy
The spectrum provides wavenumbers (cm⁻¹) for the C-C and C-H bonds found within the ethylene (C2H4) homopolymer chain. The experimental attenuated total reflection (ATR) spectra peaks match the reference values, although the reference is for polyethylene in general and not specifically for UHMWPE. Fig. 2 provides the IR spectra for the three UHMWPE components, and Table 2 compares the experimental trough values to those of a reference spectrum. The main characteristic trough of the reference polyethylene spectrum is that of the C-H stretch bond, found in the higher wavenumber range of 2853-2962 cm⁻¹ [6]. The two smaller troughs, located in the fingerprint region, coincide with the C-C stretch bond at approximately 1492.23 cm⁻¹ and the C-H bending bond in the range of approximately 723.30-749.51 cm⁻¹. Each trough is produced because energy is absorbed from a particular frequency of infrared radiation to excite the molecular bonds to a higher state of vibration [6].
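As an illustration of how such transmittance troughs can be located programmatically (this is not a procedure described by the authors), one can peak-find on the inverted spectrum; the synthetic polyethylene-like spectrum below is invented solely to exercise the function:

```python
import numpy as np
from scipy.signal import find_peaks

def find_troughs(wavenumbers, transmittance, prominence=2.0):
    """Locate absorption troughs in a transmittance spectrum by
    running peak detection on the inverted signal."""
    idx, _ = find_peaks(-np.asarray(transmittance), prominence=prominence)
    return np.asarray(wavenumbers)[idx]

# Synthetic spectrum: C-H stretch troughs near 2916 and 2848 cm^-1,
# C-H bend near 1463 cm^-1 and 719 cm^-1, on a flat 100% baseline.
wn = np.linspace(3200, 600, 2600)
tr = (100 - 40 * np.exp(-((wn - 2916) / 12) ** 2)
          - 30 * np.exp(-((wn - 2848) / 12) ** 2)
          - 15 * np.exp(-((wn - 1463) / 10) ** 2)
          - 10 * np.exp(-((wn - 719) / 8) ** 2))
print(find_troughs(wn, tr))  # ~[2916, 2848, 1463, 719]
```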
Fig. 1 (a)-(c) Femoral heads and UHMWPE liners: Patients W, H, E; (d) Patient U femoral head and acetabular shell

Fig. 2 (a)-(c) UHMWPE infrared transmittance spectra: Patient E, W, and H liners
Table 2 Comparison of the reference wavenumber values with experimental wavenumber values (cm⁻¹) [7]

Spectrum | Trough 1 | Trough 2 | Trough 3 | Trough 4 | Trough 5 | Trough 6
Reference | 2916.50 | 2855.34 | 1492.23 | 1466.02 | 749.51 | 723.30
Patient E | 2915.62 | 2848.20 | – | 1462.80 | 730.29 | 718.30
Patient W | 2915.61 | 2848.40 | – | 1462.86 | 718.62 | 718.62
Patient H | 2915.66 | 2848.28 | 1472.44 | 1462.85 | 718.80 | 718.80
C. Dye Penetrant Inspection
Indentations on the Patient E liner were found in the socket region, Fig. 3a, where each indentation had a similar diameter and depth. DPI of the Patient H liner revealed the largest amount of surface scratches, abrasive wear, and plastic flow, Fig. 3b. One of the primary deformations of the Patient W liner sat lower than the neighboring "sprocket" arms, Fig. 3c. Plastic flow of the PE was also evident in the downward-sloping crack and the puncture mark. The largest amount of damage sustained was on the proximal region of the Patient W liner, Fig. 3d. The deformed area was in a location that made no contact with the femoral head.

Fig. 3 (a) Patient E liner: indentations; (b) Patient H liner: surface abrasions and abrasive wear of socket; (c) and (d) Patient W liner: rim deformation and puncture, surface deformation

D. X-Ray Fluorescence
With the use of an Innov-X Systems Alpha-2000A Handheld XRF Analyzer, it was found that the three sets of implants for Patients E, W, and H were all cobalt-based alloys, specifically alloy F75 with cobalt, chromium, and molybdenum as the main components, Table 3. The fourth component, the Patient U femoral head, was a 316L stainless steel.

E. Metallography
As three of the four femoral heads were cobalt-based alloys, one implant was selected to present the microstructure. Based upon the elemental weight percentages in the compositions acquired by XRF, the three femoral heads fell under the ASTM designation of an F75 casting alloy. The microstructure of the Patient E femoral head section displayed dispersed carbides, Fig. 4a. Engravings on the Patient U femoral head identified the component as a 316L stainless steel, which was confirmed by XRF. The microstructure of this surgical alloy consisted of variable grain sizes with annealing twins and limited carbide particles, Fig. 4b.

Fig. 4 (a) Patient E femoral head: cobalt alloy with dispersed carbides; (b) Patient U stainless steel femoral head with variable grain sizes and annealing twins

F. Scanning Electron Microscopy
The Patient W liner contained a large area of deformation and plastic flow, Fig. 5a. The Patient H liner also contained a large area of localized damage. The large area of surface damage indicated that the PE may have delaminated, based on the ductile fracture surface, Fig. 5b; yet, in some areas, remnants of the original PE surface still remained. The Patient U liner contained damage modes in the forms of crevices, bulges, scarring, and areas containing surface deposits, Figs. 6a and 6b. The damage associated with this component implies metal-on-metal contact, because the PE liner alone could not produce these results.
Table 3 Elemental compositions of femoral heads (wt%)

Component | Cr | Fe | Co | Mn | Mo | Ni | Cu
CoCrMo
Patient E | 27.97 | – | 64.68 | 0.70 | 6.02 | 0.53 | –
Patient W | 28.55 | – | 64.57 | 0.94 | 0.56 | – | –
Patient H | 27.06 | – | 65.32 | 0.97 | 6.10 | 0.56 | –
Stainless Steel
Patient U | 17.68 | 67.61 | – | 1.30 | 2.17 | 10.83 | 0.40
Fig. 5 (a) Patient W liner: plastic flow lines; (b) Patient H liner: localized damage of the socket with a ductile fracture surface
Fig. 6 Patient U femoral head: (a) crevice with surface product and wear scars; (b) bulge with wear scars

Within the socket of the Patient E liner, circular impressions with similar diameters, wear tracks, and embedded metallic particles were observed, Fig. 7a. Energy dispersive x-ray spectroscopy (EDS) confirmed that the metallic particles were pure titanium. Fig. 7b displays that the particle faced a lateral load, as indicated by the plastic flow lines. The Patient E femoral head had the most unimpaired surface, save for an area with large amounts of surface deposits and wear tracks. The EDS spectrum of the surface deposit in Fig. 7c revealed the typical alloying elements of the F75 alloy, along with carbon, titanium, and aluminum. The evidence of titanium was most likely the result of hard third-body particles caught between the femoral head and PE liner while in service.

Fig. 7 Patient E liner: (a) indentations with complex morphologies and wear tracks; (b) foreign particle with plastic flow lines; (c) Patient E femoral head: EDS analysis spectrum of surface deposit

IV. CONCLUSIONS
The overall chance of a hip replacement lasting 20 years is approximately 80% [8]. The variables that affect implant duration include the patient's background, activity, and lifestyle. The sex, age, and weight of a THA recipient can greatly affect the outcome. In vivo variables that cause component damage are wear mechanisms and third-body particles such as metal, bone, or polymethyl methacrylate (PMMA) bone cement particles. The PE liners demonstrated many of the damage modes associated with wear, such as abrasion with multiple-orientation wear tracks, adhesion, and fatigue. Unique to one case were indentations with complex morphologies and slight amounts of plastic flow. The femoral heads, consisting of stainless steel and cobalt-based materials, also demonstrated abrasive damage such as scratches and plastic deformation. The forms of wear observed were difficult to assess in terms of distinguishing damage sustained in service from damage induced during installation, revision, and surgical removal. Without complete patient information and history, the forms of damage observed throughout this research are difficult to quantify. It was also concluded that the wear damage observed was not construed to be severe and debilitating; as a result, removal and/or replacement of the components may have been for other reasons.

ACKNOWLEDGMENT
A portion of this research was made possible by the Freeport-McMoran Copper and Gold professorship at the University of Texas at El Paso.

REFERENCES
1. Total Hip Replacement at http://orthoinfo.aaos.org
2. Hip Replacement at http://www.niams.nih.gov
3. Total Hip Replacement Surgery at http://www.medicinenet.com
4. Zhan, Chunlin, et al (2007) Incidence and Short-Term Outcomes of Primary and Revision Hip Replacement in the United States. Journal of Bone and Joint Surgery 89A.3:526-533 DOI 10.2106/JBJS.F.00952
5. Bhat, Sujata V (2005) Biomaterials (2nd ed.) Alpha Science International Ltd., UK
6. Interpreting an Infra-Red Spectrum at http://www.chemguide.co.uk
7. Polyethylene at http://people.csail.mit.edu
8. Hip Implants at http://orthoinfo.aaos.org

Author: Amanda Marie Bastidos
Institute: University of Texas at El Paso
Street: 1441 Chato Villa Dr.
City: El Paso
Country: United States
Email:
[email protected]
Nano-Wear-Particulates Elicit a Size and Dose Dependent Response by RAW 264.7 Cells
Mrinal K. Musib and Subrata Saha
Department of Orthopedic Surgery and Rehabilitation Medicine, SUNY Downstate Medical Center, Brooklyn, New York 11203
Abstract— Ultrahigh molecular weight polyethylene (UHMWPE) and metals, primarily Ti and Co-Cr alloys, are widely used to make components of orthopaedic implants. Cellular response to their wear particulates depends on particle size and dose. Although the biological response to larger particles has been previously studied, there are no specific studies examining the cellular response to simultaneous administration of well-characterized, clinically relevant size ranges of both UHMWPE and Ti particles less than 0.2 µm in size. The overall hypothesis for this work is that RAW 264.7 cells will display a more robust negative response to nanoparticulates as compared to larger particles. Isolation of 'clean' simulated wear particles has been an issue, and in our lab we have developed a standardized and reproducible technique to isolate, characterize and fractionate (following ASTM F-1877 standards) both UHMWPE and Ti particles into 3 distinct size ranges [1.0-10.0 µm (micron), 0.2-1.0 µm (submicron) and 0.01-0.2 µm (nano)] from periprosthetic tissue explants, and furthermore to minimize clumping, thus facilitating disaggregation and isolation of individual particles into micron, submicron and nano fractions. Cells were treated for 24 h, 48 h and 72 h with various doses (10³, 10⁵, 10⁷) and sizes (mentioned above) of both UHMWPE and Ti particles. Proliferation was determined by counting the number of cells at the specified time periods. Preliminary studies reveal an early stimulatory and late inhibitory effect on cell proliferation. At 24 h, the nanofraction particles elicited a proliferative response by the cells, though statistically not significant, but at 72 h there was a significant inhibitory response. The response was dose and size dependent, and the cells exhibited an inhibitory response to the nanofraction particulates only at the highest dose. To mimic in vivo conditions, further studies are being conducted that will involve administration of dual wear particulates and will help understand cellular response to wear particulates.
Keywords— Wear-debris particles, UHMWPE, Ti, size-shape descriptors, nanoparticles.
I. INTRODUCTION
UHMWPE and Ti particulates are the most prevalent wear debris encountered during revision surgeries of total joints. Although these wear particulates are responsible for osteolysis, the isolation and fractionation of 'clean' (devoid of any extraneous organic matter) wear-debris particles of a well-characterized, clinically relevant size range from complex biological media, such as those used in wear
simulator machines and from periprosthetic tissue, has been a longstanding problem. In fact, previous isolation techniques have been associated with the potential loss of nanoparticles during isolation. Understanding how cells respond to particles <0.2 μm is important, as previous studies have implicated smaller particles in wear-mediated osteolysis and have shown that cellular response to particles is dependent on size and dose [1-8]. In this study, we combine the use of an optimized solvent with mechanical treatment to reduce particle aggregation and improve particle recovery. Fractionated particles (micron-, sub-micron-, and nano-size fractions) were then added to cell cultures to examine their effect on the phagocytosis and proliferation of the RAW 264.7 murine monocyte/macrophage cell line. The overall hypothesis of the current work is that RAW 264.7 cells (which have been previously used to study cellular response to wear particles) [9,10] will display a more robust negative response to nanoparticles as compared to larger particles.
II. MATERIALS AND METHODS
A. Recovery and Fractionation of UHMWPE Particles into 3 Different Size Ranges
UHMWPE (GUR 1050) and Ti particles were obtained from Ticona and Alfa Aesar (#681), respectively. Particles were suspended in acidified water (pH 5.5) containing Pluronic (2000 ppm); suspensions were vortexed for 5 minutes, sonicated for 120 minutes and allowed to stand for 7 days. After 7 days the suspension was filtered through a 10 μm pore-size filter to remove large aggregates. Thereafter the particulates were separated into three different size ranges, micron (1-10 μm), submicron (0.2-1 μm) and nano (0.01-0.2 μm), using filters of varying pore size. Representative SEM images were obtained, and particle number and size-shape were determined. Energy dispersive (EDS) spectra were obtained for the particles to confirm that they belonged to the same species. The size-shape descriptors given in ASTM F1877 (Table 1) were used to perform a quantitative and morphometric analysis of the particles. Particulate suspensions were then prepared containing varying concentrations (10³, 10⁵, 10⁷ per mL) of either of the particles in 10% fetal bovine serum (FBS), so
that a unit volume of the suspension contained the same number of particles.

B. Cell Proliferation
For studying proliferation, RAW 264.7 cells were seeded at 10⁴ cells per cm², cultured to confluence, and then treated with varying doses of particles for each of the 3 size ranges. After each time period of 24 h, 48 h and 72 h, media was removed and cell number was counted using a hemocytometer.

C. Statistical Interpretation of Data
ANOVA was used to analyze the data; post-hoc testing was performed using Student's t-test with Bonferroni correction. P values ≤ 0.05 were considered significant.
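A minimal sketch of the stated statistical workflow (one-way ANOVA followed by pairwise t-tests with Bonferroni correction); the cell-count data are hypothetical placeholders, not measurements from this study:

```python
from scipy import stats

# Hypothetical cell counts (x1e4 per well) for control and two doses:
groups = {
    "control":  [52, 49, 55, 51],
    "dose_1e5": [47, 45, 50, 46],
    "dose_1e7": [33, 30, 35, 31],
}

# One-way ANOVA across all treatment groups
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

# Post-hoc pairwise t-tests with a Bonferroni-adjusted threshold
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha = 0.05 / len(pairs)  # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}  {'significant' if p < alpha else 'ns'}")
```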
Fig. 1 SEM micrographs of UHMWPE (top panel) and Ti particles along with respective EDS spectra
III. RESULTS
Recovery and fractionation of UHMWPE particles: Using this novel, accurate and reproducible technique, we were able to fractionate both the UHMWPE and Ti particles into the predetermined size ranges (Fig. 1). Though the particles were obtained from commercial vendors, their size and shape were nonetheless similar to those isolated from periprosthetic tissues by previous investigators [1,5,7]. The EDS spectra confirmed that the particulates belonged to the same species.
Proliferation: The particles exhibited an early stimulatory and late inhibitory effect on cell proliferation. At 24 h, the nanofraction particles elicited a proliferative response by the cells, though statistically not significant, but at 72 h there was a significant inhibitory response as compared to the untreated control (Fig. 2). The response was dose and size dependent, and the cells exhibited an inhibitory response to the nanofraction particulates only at the highest dose.
Fig. 2 Change in the number of RAW 264.7 cells at 24 h in response to varying dose and sizes of both UHMWPE and Ti particles. All data are mean ± SEM. # p≤0.05, significant difference between control and individual treatments; * p≤ 0.05, significant difference between micron and nano- fractions at same dose; € p≤ 0.05, significant difference between 107 dose and other doses of a particular size particle suspension
IV. CONCLUSIONS
1. A novel, accurate and reproducible technique has been developed for isolating, fractionating and characterizing wear-debris particles into well-defined and clinically relevant size ranges (including the nanofraction, <200 nm).
2. RAW 264.7 cells elicited a dose-, size-, and treatment-time-dependent response to wear-debris particles.
3. Cell number was more affected by the nano-sized particles than by the larger particles, suggesting a more toxic effect of the smallest particles.
ACKNOWLEDGEMENT We would like to thank Jim Eckert, Ph.D. at the Yale University Electron Microprobe Laboratory for helping with the ESEM images and EDS data.
REFERENCES
1. Campbell P, Kossovsky N, Schmalzried TP. Particles in loose hips. J Bone Joint Surg Br 1993;75:161-162.
2. Endo M, Tipper JL, Barton DC, Stone MH, Ingham E, Fisher J. Comparison of wear, wear debris and functional biological activity of moderately crosslinked and non-crosslinked polyethylenes in hip prostheses. Proc Inst Mech Eng [H] 2002;216:111-122.
3. Fisher J, Bell J, Barbour PS, Tipper JL, Matthews JB, Besong AA, Stone MH, Ingham E. A novel method for the prediction of functional biological activity of polyethylene wear debris. Proc Inst Mech Eng [H] 2001;215:127-132.
4. Goodman SB, Huie P, Song Y, Schurman D, Maloney W, Woolson S, Sibley R. Cellular profile and cytokine production at prosthetic interfaces. Study of tissues retrieved from revised hip and knee replacements. J Bone Joint Surg Br 1998;80:531-539.
5. Green TR, Fisher J, Stone M, Wroblewski BM, Ingham E. Polyethylene particles of a 'critical size' are necessary for the induction of cytokines by macrophages in vitro. Biomaterials 1998;19:2297-2302.
6. Jacobs JJ, Hallab NJ, Urban RM, Wimmer MA. Wear particles. J Bone Joint Surg Am 2006;88 Suppl 2:99-102.
7. McKellop HA, Campbell P, Park SH, Schmalzried TP, Grigoris P, Amstutz HC, Sarmiento A. The origin of submicron polyethylene wear debris in total hip arthroplasty. Clin Orthop Relat Res 1995;(311):3-20.
8. Shanbhag AS, Jacobs JJ, Black J, Galante JO, Glant TT. Human monocyte response to particulate biomaterials generated in vivo and in vitro. J Orthop Res 1995;13:792-801.
9. Suzuki Y, Nishiyama T, Hasuda K, Fujishiro T, Niikura T, Hayashi S, Hashimoto S, Kurosaka M. Effect of etidronate on COX-2 expression and PGE(2) production in macrophage-like RAW 264.7 cells stimulated by titanium particles. J Orthop Sci 2007;12:568-577.
10. Bi Y, Collier TO, Goldberg VM, Anderson JM, Greenfield EM. Adherent endotoxin mediates biological responses of titanium particles without stimulating their phagocytosis. J Orthop Res 2002;20:696-703.
Author: Mrinal K Musib, Ph.D., Subrata Saha, Ph.D.
Institute: SUNY Downstate Medical Center
Street: 450 Clarkson Avenue
City: Brooklyn
Country: USA
Email:
[email protected]
Viscous Behavior of Different Concentrations of Bovine Calf Serum Used to Lubricate the Micro-textured CoCrMo Alloy Material before and after Wear Testing
Geriel Ettienne-Modeste and Timmie Topoleski
University of Maryland, Baltimore County, Baltimore, MD
Abstract— Bovine calf serum is a common lubricant used to model the rheological properties of synovial fluid; however, the rheological properties of the serum and their effect on testing the wear behavior of artificial joints are not fully understood. Bovine calf serum (BCS) is used as a model lubricant for testing artificial joints because natural human synovial fluid is not readily available. Synovial fluid is the lubricant for joints and the source of nutrition for articular cartilage. The purpose of the present paper is to provide a comprehensive examination of the flow properties of bovine calf serum, with and without antibacterial agents, to determine whether it can be used as an appropriate model for synovial fluid when testing the wear of artificial joint materials. We hypothesized that the viscosity of a mixture of bovine calf serum and water changes as the concentration of bovine calf serum changes, both before and after wear testing of the lubricants. The steady-shear viscosity and the storage and loss moduli were evaluated for fifteen BCS lubricant compositions, with and without antibacterial agents, before and after wear testing of the lubricant. The steady-shear viscosity varied over two orders of magnitude for lubricant samples both with and without antibacterial agent, with a greater variation for the samples with antibacterial agents. Bovine calf serum without antibacterial agents was more likely to exhibit normal viscous properties than bovine calf serum with antibacterial agents (p < 0.001). The non-worn "before wear" BCS lubricants were less viscous than the worn "after wear" lubricants. Other parameters distinguished the two groups and showed statistical significance. Both groups exhibited degenerate flow properties when compared to the synovial fluid of healthy individuals. The connection between the flow properties of BCS and other joint fluids commonly used to test artificial joints and the tribology of joint replacement prostheses should be examined further.
Keywords— bovine calf serum (BCS), artificial joint fluid, viscosity, in vitro implant, wear.
I. INTRODUCTION
Each year, approximately 1.3 million total hip and knee replacement surgeries are performed worldwide. The lifetime requirements of these implants can be up to 30 years in elderly, inactive patients; however, younger and more active patients place higher stresses on the prosthesis and require the device for a longer time. Hence, there is a continuing and urgent need to create new materials, and to develop a suitable testing environment for those materials, to be used in hip and knee implants to increase the wear
resistance. Hip and knee testing machines have been developed to simulate the performance of these materials under mechanical conditions believed to represent the patient's gait and hence the loading of the prosthesis. The clinical environment surrounding the in vitro implant, however, is less established, although the test lubricant has recently been identified as one of the major factors affecting in vitro testing of joint implants [1]. One of the difficulties in evaluating various wear studies is the lack of consistent test parameters. The type of lubricant fluid, the protein concentration in the lubricant, and the lubricant volume and temperature are all important test variables. Different implant material combinations may react differently to the same lubricant. This research examines the rheological properties of bovine calf serum (BCS), a common lubricant used to simulate natural joint fluid for in vitro tribological studies of artificial joints. The study also compares the apparent and dynamic viscosities of BCS at different concentrations, with and without antibacterial agent.
II. MATERIALS AND METHODS
Bovine calf serum (BCS), with and without an antibacterial agent, was used to simulate the properties of synovial fluid relevant to testing artificial joints. In the preliminary stage of this study, the rheological properties of bovine calf serum (BCS) and other lubricants used in testing artificial joints were studied to determine the proper lubricant for wear tests. To conduct the rheological studies on the lubricants used for the wear tests, combinations of BCS and deionized water were created with concentrations of 100% (0% DI water), 75%, 50%, 25% and 0% (100% DI water). For the antibacterial agents, either penicillin/streptomycin (P/S) or sodium azide was added to some samples to investigate their effect on the viscosities. The viscosity was determined using Eq. (1), and the apparent viscosity was calculated by:

ηmin(γ̇) = τmin / γ̇    (2)
where ηmin is the minimum viscosity, τmin is the minimum shear stress and γ̇ is the shear rate. The storage modulus and loss modulus were determined from oscillation flow data.

In this study, a total of six lubricant samples were used for each of the fifteen lubricant compositions. The lubricant compositions tested were: (1) 100% bovine serum (BCS 100%); (2) BCS 100% + 1 ml penicillin/streptomycin (P/S); (3) 75% bovine serum and 25% distilled deionized water (BCS 75%); (4) BCS 75% + 1 ml P/S; (5) 50% bovine serum and 50% distilled deionized water (BCS 50%); (6) BCS 50% + 1 ml P/S; (7) 25% bovine serum and 75% distilled deionized water (BCS 25%); (8) BCS 25% + 1 ml P/S; (9) BCS 0%, or distilled deionized water (DDW); (10) BCS 0% + 1 ml P/S; (11) BCS 100% + 1 ml sodium azide; (12) BCS 75% + 1 ml sodium azide; (13) BCS 50% + 1 ml sodium azide; (14) BCS 25% + 1 ml sodium azide; (15) BCS 0% + 1 ml sodium azide.

Of the fifteen lubricant compositions, the BCS lubricants listed in numbers 1-10 were used in wear testing for different material combinations. The materials included novel carbide surfaces, where the carbide was deposited for 2 or 4 hours ("2hr carbide" and "4hr carbide"). The 2hr and 4hr carbide-on-carbide, the carbide-on-UHMWPE, and the CoCrMo-on-UHMWPE wear couple systems were tested to determine whether the viscosity and shear modulus changed during the wear tests. The viscous shear properties of the 10 lubricants used in the wear tests for these wear couple systems were measured at 250,000, 500,000 and 1,000,000 cycles. The lubricant was changed after each 250,000 cycles; thus, the lubricants were subject to only 250,000 cycles of wear, while the articulating bearing material underwent wear up to 1,000,000 cycles.

An additional study was done on soak-control lubricants without articulation to determine whether the viscosity, modulus, shear stress and strain properties of the fluid would change as a result of time in the wear testing chambers without articulation. The rheological properties of the non-articulating test lubricant samples for the four main lubricant compositions (BCS 100%, 50%, 25%, and 0%) were tested as controls. The specifications for the P/S and the BCS samples are given in Tables 1 and 2.
Table 1 Specifications for Penicillin-Streptomycin from MP Biomedicals, LLC

Reference: Cat. # S1670249
pH: 4.27
Osmolality: 57 mOsm/kg H2O
Penicillin Concentration: 10,000 IU/ml
Streptomycin Concentration: 10,000 µg/ml
Table 2 Composition of Bovine Calf Serum (BCS) from Atlanta Biologicals

Reference: Cat. # S11495
pH: 7.9
Total Protein: 6.9 gm/dl
Albumin: 3.5 gm/dl
Globulin: 3.4 gm/dl
A/G Ratio: 1
Osmolality: 290.8 mOsm
Endotoxin: 4.8 EU/ml
Hemoglobin: 6.8 mg/dl
The rheological properties of the lubricants were measured with a plate-on-plate rheometer (AR 2000EX, TA Instruments) with shear rate capability from 0.01 to 30,000 s⁻¹, at 37.0 ± 0.1°C. The shear rates used in this study were up to 1,000 s⁻¹; beyond this shear rate, the lubricant breaks down. A platinum resistance thermometer (PRT) sensor positioned at the center of the Peltier plate ensured accurate temperature measurement and control. The rheometer was first calibrated with Cannon Certified Viscosity ASTM Standard mineral oil using imposed stresses that varied from 10 to 0.1 Pa. Less than 1 ml of test fluid was needed for the rheological measurement of each lubricant composition. A stainless steel parallel plate fixture of 12.0 mm outer diameter and 11.1 mm inner diameter was used. The viscosity measurements were performed at a steady shear rate. The sample temperature was maintained at 37.0 ± 0.2°C using a temperature-controlled water bath surrounding the sample cell, in addition to the standard circulating water within the cup assembly. Two types of tests were performed with the rheometer using the parallel plate geometry on the Peltier plate configuration: 1) stepped flow and 2) oscillation flow.
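For the oscillation flow tests, the storage and loss moduli follow from the stress amplitude, strain amplitude, and phase lag through the standard relations G' = (τ0/γ0)·cos δ and G'' = (τ0/γ0)·sin δ. A minimal sketch of that reduction, with made-up numbers since the paper does not report its raw oscillation data:

```python
import math

def oscillation_moduli(stress_amp, strain_amp, delta_rad):
    """Storage (G') and loss (G'') moduli from one oscillation-flow point.

    stress_amp: shear stress amplitude tau0 (Pa)
    strain_amp: shear strain amplitude gamma0 (dimensionless)
    delta_rad:  phase lag between stress and strain (radians)
    """
    g_star = stress_amp / strain_amp  # magnitude of the complex modulus
    return g_star * math.cos(delta_rad), g_star * math.sin(delta_rad)

# Hypothetical point: tau0 = 0.5 Pa, gamma0 = 0.01, delta = 30 degrees
g_p, g_pp = oscillation_moduli(0.5, 0.01, math.radians(30.0))
print(f"G' = {g_p:.1f} Pa, G'' = {g_pp:.1f} Pa, tan(delta) = {g_pp / g_p:.2f}")
```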
The following steps were used for rheological testing: (1) The specific lubricant composition was prepared in a 20 ml volume bottle: DI water (DDW), diluted 25% (BCS 25) or 50% (BCS 50), or undiluted BCS. (2) The specimen was preheated to body temperature (37°C) before rheological testing. (3) The stepped flow or oscillation flow tests were performed using the rheometer; Table 3 summarizes the test conditions for both rheological tests, along with the specifications for the Peltier plate and the BCS sample size used in the experiments. (4) The data were analyzed using TA Rheology Data Analysis software, MATLAB and Excel. (5) Each lubricant was subsequently stored in a freezer at -5 to -10°C for further analysis; for example, the frozen lubricants could be used for ion concentration and particle size determination.
Table 3 Specifications for the AR 2000EX Rheometer Peltier Plate and Sample Size of BCS

Temperature range (°C): -20 to 200
Heating rate (°C/min): 20
Temperature accuracy (°C): ±0.1
Sample size of BCS in test (ml): 1 ± 0.05
In addition to examining the effect of serum concentration in the lubricant on the viscosity, we also characterized the properties of the lubricant with and without an added penicillin/streptomycin antibacterial agent at different concentrations and mixtures.

III. RESULTS

The results for the average apparent viscosities for BCS 100, 75, 50, 25 and 0% with and without the P/S and azide antibacterial agents, before the lubricants were used for wear testing, are shown in Figure 1. The general behavior of the viscosity for the pure BCS with and without antibacterial agent before wear testing is that the initial viscosity of BCS increased with the addition of the antibacterial agents P/S and sodium azide. Also, the initial viscosity of BCS-P/S was about 65% greater than that of BCS-azide for all the lubricant concentrations except BCS 100% and BCS 25%. BCS 100% produced a greater viscosity than BCS 100%-P/S and BCS 100%-azide. However, BCS 25%-P/S produced a lower viscosity than BCS 25%-azide and BCS 25%, with BCS 25% producing the highest viscosity for that group.

Fig. 1 The average apparent viscosities for BCS 100, 75, 50, 25 and 0% with and without the P/S and azide antibacterial agents before the lubricants were used for wear testing

The data in Figure 1 show that:
(1) BCS 100% produced the highest viscosity of the lubricants without antibacterial agents; in contrast, BCS 0% produced the lowest.
(2) BCS 75%-azide produced the highest viscosity of the lubricants with azide as the antibacterial agent; in contrast, BCS 50% produced the lowest.
(3) BCS 50%-P/S produced the highest viscosity of the lubricants with P/S as the antibacterial agent; in contrast, BCS 25%-P/S produced the lowest.
The results for the average apparent viscosity differed depending on the wear couple system used (Figures 2 and 3). Figure 2 shows the average apparent viscosities for BCS 100, 50, and 25% with the P/S antibacterial agent at 250,000, 500,000 and 1,000,000 wear cycles for the carbide-on-UHMWPE
wear couple system. The viscosity increased for all three concentrations of BCS as the wear cycles increased from 250K to 1,000K. BCS 100%-P/S produced the greatest viscosity at 1,000K cycles compared to all the other lubricants and wear cycles. In Figure 3, the average viscosity for all the lubricant concentrations (BCS 100%, 50% and 25%) increased more for the carbide-on-UHMWPE wear couple system than for the CoCrMo-on-UHMWPE wear couple system at all three intervals: 250,000, 500,000 and 1,000,000 cycles. For both the 2hr and 4hr carbide-on-carbide wear couple systems, the viscosity of BCS 100, 50 and 25% increased as the cycles increased from 250K to 500K; from 500K to 1,000K, the viscosity continued to increase for BCS 100% but decreased for BCS 50% and 25%.
Fig. 2 The average apparent viscosities for BCS 100, 50, and 25% with the P/S antibacterial agent at 250,000, 500,000 and 1,000,000 wear cycles for the carbide-on-UHMWPE wear couple system

Fig. 3 BCS effect on wear for the carbide-on-UHMWPE and CoCrMo-on-UHMWPE wear couple systems

IV. SUMMARY AND CONCLUSIONS

The apparent viscosity of BCS increased with an increase in concentration, both before and after the serum was used for wear testing. Future research extends the exploration of using bovine calf serum (BCS) as the lubricant for studying the wear properties of the seven different wear couple systems. Additional studies should include measuring the viscoelastic properties of bovine calf serum with added hyaluronic acid (HA), and conducting further studies on the shear rate-controlled rheological properties of BCS at different concentrations, to properly characterize the shear rate-dependent thermomechanical properties of BCS used in wear testing.

ACKNOWLEDGMENT

The authors wish to thank the Arthritis Foundation, a GAANN Fellowship from the US Dept. of Education, and the UMBC Graduate Meyerhoff Fellowship (NIGMS-R25GM55036) for the support of this research.

REFERENCES

1. Ferguson J (1991) Applied Fluid Rheology. Elsevier Science Publishers Ltd, pp 47-133
2. Ettienne-Modeste G, Topoleski LDT (2009) Trans. 33rd Ann. Meeting, Society for Biomaterials, p 367
3. VanDamme NS, Que L, Topoleski LDT (1999) J Materials Science 34:3525-3531
4. Wolfarth DL, Zha Y, Topoleski LDT (2001) The effect of antibacterial agents on the wear of metal-on-metal specimens. Transactions, Society for Biomaterials
5. Yao JQ, Laurent MP, Johnson TS, Blanchard CR, Crowninshield RD (2003) Wear 255(1-6):780-784
6. Mazzucco D (2002) Rheology of joint fluid in total knee arthroplasty patients. J Orthopaedic Research 20:1157-1163

Author: Geriel Ettienne-Modeste, Ph.D.
Institute: University of Maryland, Baltimore County
Street: 1000 Hilltop Circle
City: Baltimore, MD
Country: USA
Email: [email protected]
Progressive Wear Damage Analysis on Retrieved UHMWPE Tibial Implants

N. Camacho, S.W. Stafford, and L. Trueba Jr.

University of Texas at El Paso/Metallurgical and Materials Engineering Department, El Paso, TX, USA

Abstract— In recent years, the incidence of knee joint degeneration has increased considerably in the young and elderly population. Due to the complicated geometry and movements of the knee, the development and improvement of knee joint prostheses have been a slow process that includes not only in vivo and in vitro studies but also computational and numerical analysis. Different studies have demonstrated that far too often ultra-high-molecular-weight polyethylene (UHMWPE) components sustain premature failure and require replacement. In spite of its widespread use, the bearing properties of this polymer continue to limit the wear resistance and the clinical life span of implanted knee prosthetics. UHMWPE is subjected to complex multi-axial stress states in vivo, causing multiaxial shearing at the articulating surfaces, which in turn leads to wear debris formation. A failure analysis was performed to examine the progressive wear damage sustained by UHMWPE tibial components. The surface damage of six retrieved tibial components was assessed with a semi-quantitative wear damage scoring method. Additionally, the surface morphology of the retrievals was examined microscopically using stereo- and low voltage scanning electron microscopy to explore the relationship between in vivo surface damage mechanisms and large-deformation plasticity. The semi-quantitative wear scoring method revealed that the damage experienced by the six studied retrievals ranged from 19 to 136; nonetheless, the damage score could not be correlated to the implantation time due to the lack of complete background information provided on each patient. One inadequacy of this scoring method is that two of the retrieved components had similar scores but demonstrated different contact surface degradation mechanisms. For instance, one of them displayed severe pitting while the other revealed an absence of pitting. Their common features included deformation, delamination and abrasion.

Keywords— Knee Replacement, Failure Analysis, Polyethylene Wear, Progressive Wear Damage Assessment.
I. INTRODUCTION

Knees carry half of the body weight and provide support and mobility to the human body. In recent years, the incidence of joint degeneration has increased considerably in the young and elderly population; the National Center for Health Science reported that there are more than 300,000 knee replacements per year in the United States [1].
The knee is considered a hinge joint for simplicity. However, the knee joint displays very complicated geometry and movement, since the femoral surface not only glides but also rolls on the articular cartilage as the knee bends. Advances in the area of biomaterials have allowed the development of different knee implants, which in turn have restored, to a certain point, mobility of the lower body in the post-surgery stage. There are more than 150 knee replacement designs today that aim to reproduce the complicated motion of the knee [2]. However, there are still some basic issues that need to be addressed in order to increase the life span of knee replacements. Total knee replacements (TKR) incorporate four different components: a metal (usually cobalt-chromium alloy) femoral component, an ultra-high-molecular-weight polyethylene (UHMWPE) cushion that acts as a bearing surface, a tibial plate usually made of a titanium alloy, and a patellar component [3-6]. The design is chosen depending on the patient's specific characteristics, such as weight, age, and activity level, since excessive activity and extra weight can accelerate the wear rate of the UHMWPE component, causing pain to the patient and failure of the joint [4, 7]. These components are produced with bioinert materials which are supposed to last as long as the patient's life [8]. Components are designed so that metal always articulates on plastic, which provides smooth movement and reduces wear rates. Nevertheless, up until now, manmade joint prostheses have not solved the issue of wear of the polyethylene articulating surface. Far too often, UHMWPE components sustain premature failure and require replacement. Moreover, the current designs and materials used to fabricate knee implants only offer a 90% probability of implant survival at 10 years in the elderly group; nevertheless, the surviving implants are not performing optimally [3]. Implant survival in younger and more active groups is significantly lower. UHMWPE is subjected to complex multi-axial stress states in vivo, causing multiaxial shearing at the articulating surfaces, which in turn leads to wear debris formation [9, 10]. There are many variables that affect the wear of the polyethylene bearing surface, including the wear resistance of the material, lubrication, motion pattern, loads, and patient specifics [11]. The objective of this study was to examine the progressive wear damage sustained by UHMWPE tibial
components and to develop an understanding of the deterioration mechanisms experienced by polymeric materials in artificial knees. The surface damage of six retrieved tibial components was assessed with a semi-quantitative wear damage scoring method. Additionally, the surface morphology of the retrievals was examined microscopically using stereo- and low voltage scanning electron microscopy to explore the relationship between in vivo surface damage mechanisms and large-deformation plasticity. The devices used in this study were provided by local orthopaedic surgeons and were manufactured by Biomet Inc.

II. MATERIALS AND METHODS

A. Macroscopic Examination

The surface characterization performed on six retrieved UHMWPE tibial components involved a preliminary investigation of the patients' history, including age, gender, approximate weight, year of knee replacement installation and removal, and knee replacement manufacturer; all the information is summarized in Table 1.

Table 1 Patients' Specifics
Component Number | Age/Gender | Weight (lbs) | Implanted time (years) | Manufacturer
B257440 | 65/F | 195 | 0.5 | Biomet
B360020 | 75/M | 320 | 1 | Biomet
B426170 | 77/M | 150 | 4 | Biomet
427830 | NA/M | NA | 8 | NA
B497500 | 50/F | 180 | 8 | Biomet
X000000 | NA/M | 197 | 8 | NA
Samples were macroscopically observed in the as-received condition. The visual examination was performed before the decontamination and sterilization of each component to preserve and document the exact conditions of the component surfaces. Each component was given a reference number. Deionized water was used for the ultrasonic cleaning process with the Branson 2210 Ultrasonic Agitator; the time and number of cycles depended on the degree of contamination of the elements.

B. Semi-quantitative Wear Damage Scoring Method

A semi-quantitative wear damage scoring method was used to measure the degree of wear damage of each sample. Figure 1 shows how the sample is divided into 10 regions for the visual inspections [9]. The method assigns a score of 0-3 based on the severity of pitting, scratching, burnishing, delamination, third-body debris, abrasion, and cold flow. The total damage score, having a theoretical maximum of 210, is obtained by summing the contributions of the seven damage modes. A score of 0 corresponds to the absence of the damage mode in the specific region of the retrieval, while scores of 1, 2 or 3 correspond to approximately less than 10%, 10 to 50%, and more than 50% of the region affected, respectively [10].

Fig. 1 UHMWPE tibial component divided into 10 regions

C. Scanning Electron Microscopy

The contact surface was also analyzed using stereo microscopy (Leica M205 C) and low and high voltage scanning electron microscopy (Hitachi TM-1000). The regions of focus for the microscopy were the interior hemispherical portion, the lateral and medial compartments, and the contact areas on the femoral implant. Each implant was carbon coated for 20 seconds with the Pelco CC-7A SEM Carbon Coater before the surface analysis in the scanning electron microscope (SEM).

III. RESULTS AND DISCUSSION

A. Macroscopic Examination

All of the UHMWPE components showed different levels of damage and degradation mechanisms on the bearing surface. Figures 2 to 6 display photographs of the different components.
Fig. 2 Components B257440 (left) and B360020 (right) in as-received conditions
Fig. 3 Visual inspection of component B426170
The macroscopic examination revealed that, in spite of having similar implantation times, the samples displayed very different degrees of wear damage. Retrieved component 427830 (Figure 4) displayed all seven wear modes (pitting, scratching, burnishing, delamination, third-body debris, abrasion, and cold flow) on the bearing surface. Moreover, after the microscopic examination, it was concluded that all three mechanisms of wear (adhesion, abrasion, and fatigue) contributed to the failure of this implant. It was also noted that the wear damage on the bearing surface of the three components with implantation times of eight years was asymmetrical and that not all seven damage modes were present on both sides of the polyethylene components.

Fig. 4 Visual examination of retrieved component 427830

B. Semi-quantitative Wear Damage Scoring Method

Table 2 presents the wear score for the damage displayed by each component.

Table 2 Semi-quantitative wear damage scores

Degradation Mechanism | B257440 | B360020 | B426170 | 427830 | B497500 | X000000
Pitting | 0 | 0 | 4 | 24 | 0 | 24
Scratching | 4 | 8 | 8 | 4 | 1 | 0
Burnishing | 8 | 8 | 6 | 16 | 27 | 16
Delamination | 0 | 0 | 2 | 24 | 27 | 24
Third-Body Debris | 0 | 0 | 2 | 24 | 24 | 27
Abrasion | 7 | 8 | 6 | 24 | 27 | 27
Cold Flow | 0 | 0 | 6 | 27 | 27 | 27
Total Damage Score | 19 | 24 | 36 | 119 | 134 | 145
Fig. 5 Visual examination of retrieved component B497500

Component B497500 displays all seven damage modes, as can be observed in Figure 5. The severity of the damage on the contact surface could be due to the daily activity level of the patient, taking into account that she was the youngest person in the group.
Using the semi-quantitative wear damage scores displayed in Table 2, it can be noted that the degradation of the components is not completely time dependent, since three components that were used for the same amount of time displayed different scores. Moreover, the score for the component worn for four years (B426170) is not even 30% of the scores of the ones used for eight years (427830, B497500, and X000000). This means that doubling the implantation time does not translate into merely doubling the wear damage score.

Fig. 6 Visual examination of component X000000
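The scoring scheme of [9, 10] is simple bookkeeping: map each damage mode's coverage of a region to a 0-3 severity score and sum over the seven modes and ten regions. The sketch below is a hypothetical illustration of that computation, not the authors' software; the example region scores are invented.

```python
# Seven damage modes, each scored 0-3 in each of the 10 regions of Figure 1;
# the total score therefore has a theoretical maximum of 3 x 7 x 10 = 210.
MODES = ["pitting", "scratching", "burnishing", "delamination",
         "third_body_debris", "abrasion", "cold_flow"]

def mode_score(coverage_pct):
    """Map the affected fraction of a region to a 0-3 severity score."""
    if coverage_pct <= 0:
        return 0
    if coverage_pct < 10:
        return 1
    if coverage_pct <= 50:
        return 2
    return 3

def total_damage_score(region_scores):
    """region_scores: list of 10 dicts mapping damage mode -> 0-3 score."""
    assert len(region_scores) == 10
    return sum(r.get(m, 0) for r in region_scores for m in MODES)

# Hypothetical retrieval: mild burnishing everywhere, pitting in two regions
regions = [{"burnishing": 1} for _ in range(10)]
regions[3]["pitting"] = mode_score(30)   # 10-50% of the region -> score 2
regions[4]["pitting"] = mode_score(60)   # >50% of the region -> score 3
print(total_damage_score(regions), "out of", 3 * len(MODES) * 10)  # 15 out of 210
```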
C. Scanning Electron Microscopy

The SEM analysis confirmed all the degradation mechanisms found in the macroscopic inspection. Figures 7 through 14 display different SEM photographs at low (1 kV) and high (20 kV) voltages. As can be observed in the photographs, the early stages of wear to the bearing surface
in the UHMWPE components translate into degradation mechanisms such as burnishing and scratching. Later on, pit formation takes place, releasing small particles between the femoral and the polymeric component. These particles can and will act as third-body debris, producing even more damage to the bearing surface, such as more scratching and cold flow. At the same time, delamination and abrasion take place. Complete layers can be released by delamination, as observed in Figures 4 and 10, while fracture accentuates the material loss rate of the polyethylene component. The last stage in the wear damage is cracking. As can be observed in Figures 4 through 6, macroscopic cracking appears to occur preferentially at the external boundaries of the polyethylene, while micro-cracking is defined in the severely damaged zones, often following paths of least resistance from pit to pit.
Fig. 7 SEM photograph displaying scratching and burnishing (Sample B360020)
Fig. 8 SEM photograph displaying severe cold flow near the edge of the sample B497500
Fig. 9 SEM photograph displaying a rough surface caused by abrasive wear (Sample B257440)
Fig. 10 SEM photograph displaying severe delamination on the right condyle of sample 427830
Fig. 11 SEM photograph displaying cracking on the edge of a condyle (Sample 427830)
D. Discussion
Fig. 12 SEM photograph displaying severe pitting (Sample X000000)
Fig. 13 SEM photograph displaying severe pitting, cold flow inside and around the pits, and rib marks (Sample X000000)
The failure analysis performed to examine the progressive wear damage sustained by UHMWPE tibial components revealed different patterns in the degradation mechanisms on the bearing surface of the retrieved components. The macroscopic evidence exposed different degradation mechanisms and the early stages of the wear damage to the bearing surface. In spite of the fact that a complete pattern could not be established for the degradation mechanisms, it can be concluded that the early stages of wear damage are burnishing and scratching of the bearing surface. From the semi-quantitative wear damage scoring method, it can be concluded that the damage to the polymeric bearing surface is not entirely time dependent. The retrieved samples ranged between 19 and 145 points, 210 being the maximum. Even though the components did not reach a score close to the maximum, the implants were removed from the patients due to a loss of functionality. Finally, the microscopic evidence (SEM analysis) revealed the combination of different degradation mechanisms taking place at the same time. For instance, the pits found in two different components (427830 and X000000) revealed the presence of cold flow inside and around the pits. Additionally, these pits aid crack propagation in the sample and may act as stress concentrators. The degradation mechanisms acting on the bearing surface can be categorized from least severe to most damaging in the following order: burnishing, scratching, abrasion, pitting, third-body debris, cold flow, delamination, and finally cracking.
IV. CONCLUSIONS
Fig. 14 SEM photograph displaying abrasion marks and fracture on the bearing surface (Sample X000000)
The surface damage of six retrieved tibial components was assessed with a semi-quantitative wear damage scoring method. This method revealed that the damage experienced by the six retrievals ranged from 19 to 145. Moreover, this study revealed that the wear damage to the bearing surface is not completely time dependent. Nevertheless, it has to be said that one inadequacy of this scoring method is that two of the retrieved components had similar scores but demonstrated very different contact surface degradation mechanisms. For instance, one of them (X000000) displayed severe pitting while the other (B497500) revealed an absence of pitting. Their common features included deformation, delamination and abrasion. The SEM analysis revealed the relative severity of the different degradation mechanisms occurring simultaneously and/or synergistically on the bearing surface of the implant. Although these findings support the notion that the
deformation taking place in the contact surface is largely related to the daily activity level of the patient, the variation in clinical performance suggests that factors other than implantation time and activity level, such as material properties, patient- and surgeon-specific variables, are also likely to play important roles in the longevity of the tibial inserts. It was concluded that the wear degradation of the tibial bearing surfaces is primarily controlled by joint kinematics, polyethylene properties and implant design.
ACKNOWLEDGMENT The authors recognize the generous funding contributions of CONACyT to this research effort.
REFERENCES

1. Janeway PA (2006) Bioceramics: materials that mimic Mother Nature. American Ceramic Society Bulletin, Vol. 8, 7:26-30
2. Knee Implant (2007) American Academy of Orthopaedic Surgeons / American Association of Hip and Knee Surgeons. http://orthoinfo.aaos.org/topic.cfm?topic=A00221
3. Ratner BD et al. (2004) Biomaterials Science: An Introduction to Materials in Medicine, 2nd edn, pp 12-19, 30-150
4. Massin P (2007) Le Polyéthylène en Orthopédie, Chirurgie Orthopédique, CHU Angers
5. Park JB, Lakes RS (1992) Biomaterials: An Introduction, 2nd edn. Plenum Press, New York, pp 325-352
6. Design Rationale. NexGen System, Complete Knee Solution Handbook (2006)
7. Willing R (2006) Reducing wear in total knee replacements using three-dimensional shape optimization. Ontario Biomechanics Conference (provincial conference). http://me/research/RyanWilling/kneereplacement.htm
8. Boland T. Biomaterials: A Textbook for the Biomedical Engineering Senior and Junior. Kindle edn, pp 7-13
9. Willing R, Kim IY (2006) Design optimization of the femoral component and UHMWPE insert of a total knee arthroplasty. CSME Forum 2006 (national conference). http://me.queensu.ca/people/iykim/research/RyanWilling.php
10. Kurtz SM (2004) The UHMWPE Handbook: Ultra-High Molecular Weight Polyethylene in Total Joint Replacements. Elsevier Academic Press, pp 123-184
11. Schmalzried TP, Callaghan JJ (1999) Current concepts review: wear in total hip and knee replacements. J Bone Joint Surg Am 81:115-136
Author: Nayeli Camacho
Institute: University of Texas at El Paso
Street: 500 West University Avenue
City: El Paso, TX
Country: USA
Email: [email protected]
Gum Arabic-Chitosan Composite Biopolymer Scaffolds for Bone Tissue Engineering

R.A. Silva1, P. Mehl2, and O.C. Wilson1

1 Biomedical Engineering Department, BONE/CRAB Lab, The Catholic University of America, Washington, DC, USA
2 Vitreous State Laboratory, Physics Department, The Catholic University of America, Washington, DC, USA
Abstract— Biopolymer composites are a very promising area for developing novel tissue engineering (TE) scaffolds. Chitosan is known to have a variety of properties that make it suitable for TE applications due to its biocompatible, antibacterial, and biodegradable nature. Gum Arabic (GA) is a natural biopolymer that is incorporated into a number of food products. This study reports a biopolymer composite synthesized by mixing GA with chitosan in amounts ranging from 0-90 wt% GA. The addition of GA caused a marked increase in the weight gain due to water uptake: 100% chitosan films gained approximately 50 wt% on aging in water, while 17 wt% chitosan-83 wt% GA films gained over 250 wt% due to water absorption. Thermogravimetric analysis (TGA) and Carbon-Hydrogen-Nitrogen (CHN) elemental analysis were used to characterize the material. MC3T3-E1 mouse pre-osteoblast cells were used in an initial assessment of the suitability of these scaffolds for bone tissue engineering. Scanning electron microscopy (SEM) analysis was used to characterize the films in vitro. The cells initially adhered to the composite films and exhibited minimal toxicity, but gradually started to detach from the films after one week.

Keywords— Chitosan, gum arabic, mass loss, swelling.
I. INTRODUCTION

The search for biocompatible materials that support and enhance the growth of bone cells is one of the major goals and challenges in TE. One very fascinating research area within TE involves the idea of combining biocompatible polymers to make biopolymer composites with unique properties and levels of functionality. In cases where the biopolymers have opposite charge, a complex coacervate is formed due to electrostatic interactions. The GA-chitosan system is unique among biopolymer composite systems because it unites a terrestrial biopolymer, derived from a tree, with a marine-derived biopolymer. Chitosan is a linear polysaccharide and is the product of the partial deacetylation of the naturally occurring polysaccharide chitin, which is found in the exoskeletons of insects and marine invertebrates [1]. Chitosan possesses biological and material properties suitable for clinical applications [2]. Chitosan is known to have a variety of properties that make it suitable for TE applications, such as biocompatibility, antimicrobial properties, and biodegradability. It has already been evaluated as a wound-healing agent, bandage material, skin
grafting template, hemostatic agent, drug-delivery system, and as a promising scaffold in TE [2]. Another of chitosan's features is its ability to be processed into porous structures for use in cell transplantation and tissue regeneration [3]. Gum Arabic (GA) is a complex mixture of polysaccharides and glycoproteins. GA is a naturally occurring biomaterial known for being perfectly edible. It is identified as a dried exudate taken from the stems and branches of two sub-Saharan species of the acacia tree: Acacia senegal and Acacia seyal [4]. It is used primarily in the food industry as a stabilizer, but the combination of its behavior, structure, and composition gives it highly valued emulsifying, thickening and suspending properties [5]. The focus of this experimental work is to prepare GA-Chitosan biocomposite polymer films ranging from 0-90 wt% GA and to investigate the influence of GA content on the swelling behavior of the biopolymer films. Chitosan carries a cationic charge at acidic pH values, whereas GA is a negatively charged molecule. The cationic nature of chitosan is responsible for its electrostatic interactions with anionic glycosaminoglycans, proteoglycans and other negatively charged molecules such as cytokines and/or growth factors found in any developing organ [6]. Our goal is to synthesize a stable and biocompatible scaffold that mimics the structure of bone and to evaluate scaffold-cell interactions.
II. MATERIALS AND METHODS

A. Chemical Reagents and Scaffold Preparation

The materials used to prepare the scaffolds were low molecular weight chitosan, reagent-grade acetic acid, deionized water, and TIC Pre-tested Gum Arabic FT. The crosslinking reagent was ammonium hydroxide, and ethanol was used for the sterilization procedure. The protein BCA test reagents were obtained from Pierce Biotechnologies. In vitro culture cells were plated with α-MEM Medium and fetal bovine serum (FBS). A penicillin-streptomycin mixture was also used in the cell culture media to prevent contamination. A 2 wt% chitosan solution was prepared by dissolving two grams of low molecular weight chitosan in a solution containing two ml of acetic acid and 96 ml of deionized water. The solution was stirred overnight and then centrifuged
for 15 minutes at 20,000 rpm. The 10 wt% GA solution was prepared by dissolving 10 grams of TIC Pre-tested Gum Arabic FT in 90 ml of dH2O. The solution was stirred for two to three hours and centrifuged for 15 minutes at 20,000 rpm. The GA solution was filtered and then refrigerated to retard mold formation. Polymer films were prepared with the GA content varying from 0-90 wt% by mixing the appropriate amounts of the 10 wt% GA and 2 wt% chitosan solutions, according to Table 1. Gum Arabic Chitosan Acetate Gels (GACAG) were allowed to dry for 24-48 hours. The scaffolds were aged in 50% (v/v) ammonium hydroxide solution for 45 minutes to one hour to crosslink the polymer network and subsequently washed repeatedly in deionized water.

Table 1 Volumes needed for 10-ml scaffolds
Targeted wt% GA | Targeted wt% Chitosan | Chitosan Vol. (ml) | GA Vol. (ml) | Experimental mass (mg)
83.3% | 16.7% | 5 | 5 | 671.87
75% | 25% | 6.24 | 3.76 | 566.20
50% | 50% | 8.34 | 1.66 | 395.43
25% | 75% | 9.37 | 0.63 | 299.73
20% | 80% | 9.52 | 0.48 | 288.73
10% | 90% | 9.78 | 0.22 | 253.57
0% | 100% | 10.00 | 0 | 240.60
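The volumes in Table 1 follow from a mass balance on the two stock solutions, assuming the 10 wt% GA and 2 wt% chitosan stocks deliver roughly 0.1 g/ml and 0.02 g/ml of polymer, respectively. The sketch below (our reconstruction, not part of the paper) reproduces the listed volumes to within rounding, e.g. 6.25/3.75 ml versus the listed 6.24/3.76 ml for 75 wt% GA.

```python
def mixing_volumes(target_ga_wtfrac, total_ml=10.0, ga_conc=0.10, chit_conc=0.02):
    """Stock volumes for a 10-ml scaffold batch hitting a target GA mass fraction.

    Mass balance: f = ga_conc*V_ga / (ga_conc*V_ga + chit_conc*V_chit),
    with V_ga + V_chit = total_ml. Solving for V_ga gives the line below.
    """
    f = target_ga_wtfrac
    v_ga = f * chit_conc * total_ml / (ga_conc * (1.0 - f) + f * chit_conc)
    return total_ml - v_ga, v_ga   # (chitosan volume, GA volume) in ml

for ga_pct in (83.3, 75.0, 50.0, 25.0, 20.0, 10.0, 0.0):
    v_chit, v_ga = mixing_volumes(ga_pct / 100.0)
    print(f"{ga_pct:5.1f}% GA -> chitosan {v_chit:5.2f} ml, GA {v_ga:4.2f} ml")
```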
B. Swelling Ratio and Mass Loss

Swelling data were taken by measuring weight after the scaffolds were immersed in ammonium hydroxide for 45 minutes and in deionized water at ambient temperature for 30 minutes, after the crosslinking and washing procedures, respectively. The weight of the swollen films (W1) was measured after removing excess fluid from the surface with blotting paper. The swelling ratio (SR) was calculated from the weights of the swollen and initial dry films (W0), according to Equation 1. All scaffolds were weighed after they had been left to dry in the final washing step. Mass loss (ML) results were calculated from the weights of the initial (M0) and final dry films (M1), according to Equation 2.

SR = (W1 - W0)/W0    (1)

ML = [(M0 - M1)/M0] × 100%    (2)
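Equations 1 and 2 reduce to one-line functions. In the sketch below the weights are invented, chosen only so the outputs land near the values reported for the high-GA films in Section III:

```python
def swelling_ratio(w_dry, w_swollen):
    """Equation 1: SR = (W1 - W0) / W0."""
    return (w_swollen - w_dry) / w_dry

def mass_loss_pct(m_initial, m_final):
    """Equation 2: ML = [(M0 - M1) / M0] x 100%."""
    return (m_initial - m_final) / m_initial * 100.0

# Hypothetical 83 wt% GA film: 0.10 g dry swelling to 0.37 g,
# and a 0.50 g film losing mass down to 0.22 g after crosslinking and washing.
print(f"SR = {swelling_ratio(0.10, 0.37):.1f}")    # 2.7, cf. Figure 2
print(f"ML = {mass_loss_pct(0.50, 0.22):.0f} %")   # 56 %, cf. the ~55 % reported
```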
C. Thermal and CHN Analysis Thermal gravimetric analysis (TGA) was carried out on all scaffold samples using a thermal analysis system (Shimadzu TGA-50, Japan). The runs were performed in the temperature range of 30°C to 800°C and consisted of a ramp at a steady rate of 10°C/min under air atmosphere. The weights of samples varied from 5 to 15 mg. The different GACAG scaffolds (according to Table 1) were sent to Prevalere Life Sciences, LLC (Whitesboro, NY) for CHN quantitative elemental analysis. The weight of the samples required for this analysis ranged from 5 to 10 mg. D. Cell Culture Studies & SEM MC3T3-E1 (ATCC, Manassas, VA) mouse preosteoblast cell cultures were used in an initial assessment of the cyto-compatibility of the reported GACAG scaffolds. Samples were fixed, sputter coated with carbon, and examined with a Hitachi SU-70 Analytical Ultra-High Resolution SEM.
III. RESULTS The thermal analysis of chitosan and GA are seen in Figure 1. The graph shows a three-stage decomposition process for all samples. The first stage takes placed between 30 and 150°C. It is followed by another weight loss just above 230°C and ends around 400°C. A complete weight loss for chitosan is achieved in the third stage of the sample decomposition starting passed the 400°C mark. It is noticed GA doesn’t get close to zero until high temperatures due to the presence of residues in the form of mixed metal salts such as calcium, potassium, and magnesium.
Fig. 1 TGA results for chitosan and GA
Samples ranging from 0 to 100 wt% GA were analyzed by CHN analysis after being crosslinked and washed. Nitrogen is not a main component of the complex mixture of polysaccharides found in GA, but it is present in chitosan. The data trends show that the hydrogen content is almost constant and that there is a small decline in carbon as GA is added to the films. The major result is the almost linear decrease in nitrogen content as less chitosan is present. As seen in Figure 2, the average results obtained on scaffolds whose mass was measured after crosslinking and washing confirmed that water uptake increased as the percentage of GA in the scaffolds was higher. Between 0 and 25 wt% GA, the SR ranges from 0.45 to 0.75. It then increases to about 1.5 for 50 wt% GA and grows steadily to 2.7 at 83 wt% GA. There was a tremendous drop in SR for samples with amounts of GA higher than 83 wt%. For samples ranging from 0 to 25 wt% GA, the total mass loss averages 15%. For the 50, 75, and 83 wt% GA scaffolds, the percentages of mass loss are 24, 37, and 55%, respectively.
Fig. 2 Swelling ratio after washing

MC3T3-E1 mouse pre-osteoblast cells were plated on the scaffolds with α-MEM Medium supplemented with 10% FBS at 37°C. At day 17, some cells still showed adhesion with no signs of contamination but, at the same time, no confirmation of cell replication was seen. In some cases, cells seemed to group together forming clusters, whereas in others they appeared to be spread out over different areas of the polymers. Scaffolds at days 1, 7, and 14 were analyzed by SEM (Figure 3). The results show that cells adhered with little or no rejection at first. Cells looked evenly spread out over the surface and they connected through pseudopods. At days 7 and 14, few cells remained attached to the scaffolds, as most either went through apoptosis or detached from the polymers. The different concentrations of GA did not seem to produce a marked difference in the way the cells behaved.
Fig. 3 10wt% GA scaffold at day 1
IV. DISCUSSION

The initial solutions of 10 wt% GA and 2 wt% chitosan had a relatively low viscosity, slightly greater than that of water. The mixing of the GA and chitosan solutions yielded an interesting complex coacervate. The first attempts to prepare scaffolds were quite simple, using equal volumes of the GA and chitosan solutions, which produced films of 83.3 wt% GA and 16.7 wt% chitosan. After mixing, a very viscous gel formed a phase, which then separated from the aqueous phase. The weight of the viscous polymer coacervate increased as the weight of GA increased. Initial observations determined that complex coacervates with less than 25 wt% GA yielded films that were quite flexible. The higher the amount of chitosan present in the coacervate, the more plasticity it gave to the film. On the other hand, GACAG polymer films with 50 wt% GA or higher resulted in stiffer films, which would break easily and into smaller pieces. GACAG samples exhibited a mass loss during the crosslinking step. The original electrostatic forces acting on both molecules make them bond, leaving enough space between them for water absorption. GA weaves within the structure of this "cage" formed by the chitosan polysaccharides, and as its percentage increases, it is assumed that the space left between molecules increases too, leaving enough room for unbound molecules to escape the skeleton of the films. Elemental analysis by CHN determination was important to confirm the levels of chitosan and GA present in the scaffolds. It indicated that less nitrogen was found as the percentage of GA in the samples was increased. The results from the TGA analysis were evidence that weight loss occurs in three different stages. The first stage (30-150°C) is associated with loss of residual alcohol, acetic acid in the case of chitosan, and absorbed
water. This is followed by another weight loss just above 230°C, ending around 400°C; this stage corresponds to the degradation of the polysaccharides per se. In the case of chitosan alone, TGA shows about a 12% loss in weight during the first stage and a constant degradation (about 40 wt%) of the polymer past the 230°C value. The same is true for GA on its own: there is about a 7% weight loss in the first stage and then a faster weight loss (54 wt%) with respect to temperature. Complete weight loss for chitosan is achieved in the third stage of the sample decomposition, starting past the 400°C mark. This final step can be assigned to the oxidative degradation of the carbon residue from the second stage. GA does not reach zero weight until high temperatures due to the presence of residues in the form of mixed metal salts of calcium, potassium, and magnesium. The nature of how MC3T3-E1 pre-osteoblasts behave on these scaffolds is not yet fully understood. Initially, during the first 24 hours, cells seem to adhere to the surface of the polymers. With time, the remaining cells attached to the scaffold form clusters in some cases, leaving empty areas on the surface, in what could be an attempt by the cells to find the signaling necessary for them to proliferate and eventually differentiate. Even though adhesion has been confirmed, it is still unknown whether a particular concentration of GA in these composites works better than the rest. It has been reported that modified chitosan promotes adhesion of MC3T3 cells in some cases, and in other cases reduces it, diminishing growth and proliferation as well [8]. The three-dimensional nature of a scaffold also needs to be taken into consideration.
V. CONCLUSIONS

While our results show that our proposed bone substitute is non-toxic and that cells are capable of adhering to and surviving on it, we have yet to prove that differentiation is possible. There are important data to be taken into account. From the swelling and mass loss data, we have been able to demonstrate the threshold at which GA and chitosan can stay together in a solid structure. It has been confirmed that mass
is lost from the organization of molecules in our polymer, and GA has indeed been determined to be one component escaping the structure of the films as it is released into solution. Cell signaling is an important occurrence in tissue development, and our scaffolds might be lacking or adding to what should be the normal pathway for pre-osteoblasts. These aspects will be the focus of our future studies.
ACKNOWLEDGMENT We acknowledge the support of the Maryland NanoCenter and its NispLab at the University of Maryland at College Park. The NispLab is supported in part by the NSF as a MRSEC Shared Experimental Facility. We thank TIC Gums for kindly donating the Gum Arabic used in this project. We also acknowledge the support of NSF Grant # 0645675.
REFERENCES

1. Marreco PR, Moreira P, Genari SC, Moraes AM (2004) Effects of different sterilization methods on the morphology, mechanical properties, and cytotoxicity of chitosan membranes used as wound dressing. J Biomed Mater Res 71B:268-277
2. Lahiji A, Sohrabi A, Hungerford D, Frondoza C (2000) Chitosan supports the expression of extracellular matrix proteins in human osteoblasts and chondrocytes. J Biomed Mater Res 51A:586-595
3. Francis Suh JK, Matthew H (2000) Application of chitosan-based polysaccharide biomaterials in cartilage tissue engineering: a review. Biomaterials 21:2589-2598
4. Yebeyen D, Lemenih M, Feleke S (2009) Characteristics and quality of gum arabic from naturally grown Acacia senegal (Linne) Willd. trees in the Central Rift Valley of Ethiopia. Food Hydrocolloids 23:175-180
5. Cozic C, Picton L, Garda M, Marlhoux F, Le Cerf D (article in press) Analysis of arabic gum: study of degradation and water desorption processes. Food Hydrocolloids
6. Di Martino A, Sittinger M, Risbud M (2005) Chitosan: a versatile biopolymer for orthopaedic tissue-engineering. Biomaterials 26:5983-5990
7. Roque ACA, Wilson OC (2008) Adsorption of gum Arabic on bioceramic nanoparticles. Mater Sci Eng C 28:443-447
8. Xi J, Gao Y, Kong L, Gong Y, Zhao N, Zhang X (2005) Behavior of MC3T3-E1 osteoblast cultured on chitosan modified with polyvinylpyrrolidone. Tsinghua Sci Technol 10:439-444
Modification of Hydrogel Scaffolds for the Modulation of Corneal Epithelial Cell Responses

L.G. Reis, P. Pattekari, and P.S. Sit

Biomedical Engineering Program, Louisiana Tech University, Ruston, Louisiana 71272, USA

Abstract–– In this study, we present the results of fabricating novel methacrylate-containing hydrogel scaffolds to be used for corneal replacement and augmentation. The materials properties of the hydrogel scaffolds and the responses of human corneal epithelial cells towards the hydrogel scaffolds were examined. The hydrogel scaffold is based on polyallylamine (PAH), glycidyl methacrylate (GMA) and polyethylene glycol (PEG). The solution mixture of PAH, GMA and PEG was subjected to ultraviolet irradiation to form the hydrogel. Hydrogels with different compositions of PEG were fabricated and examined for their materials properties as well as their responses following the deposition of human corneal epithelial cells. Materials properties of the hydrogels were examined, including viscoelastic properties using rheometry, pore size by mercury intrusion porosimetry, physical stability at different pH, swelling ratio when immersed in phosphate-buffered saline, and surface wettability through contact angle goniometry. In addition, the morphologies of the hydrogels were examined using scanning electron microscopy. Proteins including collagen type I, fibronectin and bovine serum albumin were incorporated, both individually and in a mixture, into some of the hydrogels. Human corneal epithelial cells, in either serum-free medium or serum-rich medium, were allowed to deposit onto the hydrogels, after which their responses were examined in terms of adhesion, spreading and morphology. Tissue culture polystyrene (TCPS) was used for comparison purposes. Only when proteins were present in either the hydrogels or the serum-rich medium did adhesion of cells on the hydrogel occur, although no noticeable spreading was observed. Cells only adhered and spread on TCPS in the presence of proteins. In comparison, cells neither adhered nor spread on hydrogel in the absence of proteins. The results suggest that both the presence of proteins and substrate stiffness are important factors that affect the responses of corneal epithelial cells.

Keywords–– hydrogels, cornea epithelia, methacrylate, poly(ethylene glycol).
I. INTRODUCTION

Patients afflicted with end-stage corneal diseases such as Stevens-Johnson disease suffer from severe visual loss due to the blistering of the corneal epithelia. Because the optic nerve is still intact, it is still possible for these patients to recover vision. Acquired corneal injuries such as corneal burns leave patients blind due to the scarring of the epithelial layer of the cornea. One of the treatment options for these diseases
and/or injuries typically involves corneal replacement in order to restore eyesight and provide a healthy barrier to protect the eye from infections. Moreover, corneal augmentation for refractive purposes could potentially affect contact lens wearers, a population estimated at over eighty-five million individuals worldwide. Although there have been attempts to create materials that can be implemented as scaffolds for corneal replacement and augmentation, success has been quite limited. Synthetic materials have been unable to support and maintain a normal stratified epithelium. Natural materials are biocompatible but may not have the necessary strength to support tissue growth and withstand wound healing. Another major problem is the recognition of the polymer surface by the host cells as a foreign substance, so that a degree of foreign body response is present. Both hard synthetic polymers and soft hydrogels have been used for corneal replacement and augmentation, examples of which include those based on methacrylate and polyurethane [1-4]. A novel acrylate-based, photopolymerizable hydrogel that has the potential for corneal replacement and augmentation has been developed [5,6]. The material properties of the hydrogel, such as pore structure, viscoelastic properties, physical stability, cellular compatibility, and surface functionality, can be altered by changing the processes of fabricating the hydrogels, including the chemical composition and ultraviolet light exposure. These properties can be altered to optimize conditions that allow epithelial cell attachment, proliferation, and morphology within the scaffold. To further foster cell growth and attachment, biological factors such as proteins can be attached to the surface of the hydrogel. Modifications with tethered proteins and growth factors have shown enhanced epithelial cell growth on hydrogels [7,8]. Mechanical loading also influences the morphology and functions of the cells [9]; thus, the mechanical properties of the scaffold can affect the cellular responses on the surface. We propose that by modifying the compositions of the hydrogel, in addition to tethering proteins to the hydrogel scaffolds, the cellular responses of attachment, augmentation, and proliferation can be optimized.
II. MATERIALS AND METHODS

A. Materials

Hydrogel preparation: Poly(allylamine hydrochloride) (PAH; MW ~70,000; Sigma), glycidyl methacrylate (GMA; Sigma), poly(ethylene glycol) diacrylate (PEG-DA; MW ~700; Sigma), Irgacure-184 (Ciba Specialty Chemicals), acryloyl-PEG-NHS (JenKem USA). Cell culturing: Human corneal epithelial cell line (HCE-2; American Type Culture Collection), bovine serum albumin (BSA; Sigma), hydrocortisone (Sigma), Hanks balanced salt solution (HBSS; Sigma), bovine collagen I (BD Biosciences), fibronectin (Calbiochem), keratinocyte serum-free medium (KSF; Invitrogen), recombinant epidermal growth factor (Invitrogen), bovine pituitary extract (Invitrogen) and insulin (Invitrogen).

B. Hydrogel Fabrication

Hydrogels were prepared by first dissolving PAH in deionized water to a concentration of 0.03 g/mL. GMA (1.25 ml) was then added to the solution. The mixture was allowed to stir overnight, for at least 24 hours, to create the PAH-GMA macromer. The ultraviolet (UV) initiator Irgacure-184 was dissolved in ethanol to a concentration of 8 mg/mL. Two different concentrations of PEG-DA solution were used to create hydrogels. PEG-DA was mixed with water to a concentration of 12.5% v/v, which was designated 1X. A 0.5X solution was created using half of that PEG-DA concentration, 6.25% v/v. To complete the PAH-GMA solution, hexane was added to the prepolymer solution and the organic layer was extracted using a pipette. The final hydrogel mixture was composed of PAH-GMA (25% v/v), Irgacure solution (25% v/v), and PEG-DA solution (50% v/v). The mixture was placed under a UV light source until all of the liquid solidified.

C. Hydrogel Characterization

To accommodate procedures requiring dried samples, hydrogel samples were first immersed in 10% (v/v) ethanol solution for 15 min, followed by 20%, 50% and 100% solutions. Samples were finally left in pure ethanol overnight. Ethanol-treated hydrogel samples were then placed in a critical point dryer using pressurized carbon dioxide. Dried samples were then placed in dry containers and were used within two days of drying.

Morphology: The surface morphologies of dried samples were examined using field-emission scanning electron microscopy (SEM; S-4800, Hitachi). The SEM images were used to examine structural features including roughness and the presence or absence of micron-size pores.
Stability: Hydrogel samples were immersed in buffer solutions of varying pH (5, 7, and 9) and the physical appearance of the scaffold was examined daily over the course of 21 days. For the neutral buffer solution (pH 7), phosphate buffered saline (PBS) was used. To make the acidic buffer solution at pH 5, 59 ml of 0.1 M acetic acid was mixed with 141 ml of 0.1 M sodium acetate. For the basic buffer solution (pH 9), 0.76 g of Trizma hydrochloride and 5.47 g of Trizma base (both from Sigma) were mixed in 1 liter of water to obtain a 50 mM solution.

Swelling Ratio: Dried samples prepared from the critical point dryer were weighed and then immersed in PBS solution. After 24 hours, they were patted dry and weighed again. The swelling ratio was calculated by taking the difference in the weights over the dry weight.

Mercury intrusion porosimetry: Dried samples prepared from the critical point dryer were placed in a vacuum chamber, followed by the forced insertion of mercury into the pores (AutoPore IV; Micromeritics). The relationship between the applied pressure and the volume of mercury needed to fill the pores was used to determine the pore size distribution and the specific surface area of the pores.

Wettability: Surface hydrophilicity was determined by the advancing water contact angle using the sessile drop method (OCA15+, DataPhysics). Small water droplets, under 5 μL in volume, were placed on the scaffold surface and digital images were captured using a charge-coupled device video camera. The initial contact angle formed between the droplet and the hydrogel surface was measured.

Rheology: Rheometry (C-VOR; Malvern) was used to determine the relative values and ratio of the elastic and viscous components of the hydrogel samples. Hydrogel samples in pancake shape were prepared and placed under a stainless steel parallel plate (20 mm in diameter). An amplitude sweep with controlled stress was performed first to determine the linear region of the rheological responses of the hydrogel. The maximum strain allowed within the linear response was used to perform an amplitude sweep scan at controlled strain. Afterwards, a frequency sweep with controlled strain was conducted.

D. Cellular Response Testing

Cellular responses to hydrogel scaffolds were examined under two different conditions: the presence of tethered proteins on the scaffolds and the presence of proteins in the medium. Two different protein preparations were conjugated with the scaffolds: collagen I alone and a mixture of collagen I, BSA, and fibronectin. For the conjugation of collagen I with hydrogels, collagen I was added to HBSS to obtain an 8% (v/v) solution. To conjugate the hydrogels with the protein mixture,
collagen I (8% v/v), BSA (0.4% v/v), and fibronectin (24% v/v) were mixed in HBSS. The protein solution was then allowed to react with acryloyl-PEG-NHS to form acryloyl-PEG-protein, which was added to the PAH-GMA macromer and PEG solution mixture before UV irradiation. Hydrogel scaffolds used for cell seeding were prepared by irradiating 300 μL prepolymer solutions in 24-well plates. Human corneal epithelial cells derived from a cell line were cultured in KSF medium supplemented with human recombinant epidermal growth factor (5 ng/ml), bovine pituitary extract (0.05 mg/ml), hydrocortisone (500 ng/ml) and insulin (5 ng/ml). Cultured cells with passage number two or higher were seeded onto hydrogel scaffolds under three conditions: without tethered protein, with tethered collagen I, and with a tethered protein mixture of collagen I, fibronectin, and BSA. Cells were suspended either in protein-rich medium or in protein-free HBSS. Cells suspended in both protein-rich medium and protein-free HBSS were placed on hydrogels without tethered protein; in contrast, only cells suspended in protein-free HBSS were placed on hydrogels with tethered proteins. A hydrogel scaffold-free well with tissue culture polystyrene (TCPS) was used for comparison. Cells were incubated overnight and examined the next day. Digital images were captured for qualitative analysis of the morphologies of the cells using optical microscopy (DM IL LED, Leica) operating in bright-field mode, in which the degrees of cell attachment and cell spreading were determined.
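As a side note before the results: the PEG-DA content of the cured gels follows directly from the Section B recipe, since the final mixture contains 50% v/v of either the 1X (12.5% v/v) or 0.5X (6.25% v/v) PEG-DA stock. A small sketch of that arithmetic, assuming ideal volume mixing:

```python
# Final prepolymer mixture fractions (v/v) from the Section B recipe
MIXTURE = {"PAH-GMA": 0.25, "Irgacure solution": 0.25, "PEG-DA solution": 0.50}

# PEG-DA stock concentrations: 1X = 12.5% v/v, 0.5X = 6.25% v/v
for label, stock_frac in (("1X", 0.125), ("0.5X", 0.0625)):
    final_pegda = MIXTURE["PEG-DA solution"] * stock_frac
    print(f"{label}: {final_pegda * 100:.3f}% v/v PEG-DA in the final mixture")
# 1X -> 6.250%, 0.5X -> 3.125%; the halved crosslinker content is consistent
# with the 0.5X gel failing to stiffen (Section III).
```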
III. RESULTS AND DISCUSSION

A. Hydrogel Characterization

The 0.5X hydrogel did not solidify and develop high enough stiffness to be used for testing, even after a prolonged period of UV irradiation. The PEG concentration of the gel was too low to allow enough crosslinking to form during polymerization, and this lack of crosslinking led to a hydrogel that was too soft. The lack of mechanical stiffness limited the number of tests that could be performed; only the stability test could be conducted for the 0.5X hydrogel. That hydrogel, when immersed in the three different pH solutions, did not undergo any structural changes. In contrast, the 1X hydrogel completely solidified during the polymerization process. Therefore, the results described below pertain to the 1X hydrogel only.

Morphology: From the collected images (Figure 1), the morphology showed the absence of micron-size pores in the structure of the scaffold.

Stability: No change in the physical structure of the hydrogels was observed, as the hydrogels maintained optical clarity regardless of the pH of the surrounding environment.

Swelling Ratio: The percentage of swelling of the hydrogel fell within a range of 800% to 1,200%, which is equivalent to between 9 and 13 times the original size.

Pore Size: Results from mercury intrusion porosimetry revealed that the mean pore size of the hydrogel was 43 ± 18 nm (number of samples = 5).

Wettability: A contact angle of 25° ± 2° on the hydrogel surface was obtained. This relatively low angle suggests that the surface of the hydrogel is hydrophilic, which may be linked to the presence of PEG on the hydrogel surface since PEG is a hydrophilic polymer.

Rheology: Rheology data on the hydrogel scaffolds indicated that the elastic modulus of the scaffold was about 600 Pa. The ratio of the elastic to viscous modulus was about 5:1. These values suggest that the hydrogel is rather soft and primarily elastic in nature.
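Two of the numbers above can be sanity-checked with one-line arithmetic; the sketch below assumes only the values just quoted.

```python
# (a) A weight gain of 800-1200% means the swollen gel weighs (1 + SR) times
#     its dry weight: 9x to 13x, matching the stated range.
for pct in (800, 1200):
    print(f"{pct}% swelling -> {1 + pct / 100:.0f}x the dry weight")

# (b) With an elastic modulus of ~600 Pa and an elastic-to-viscous ratio of
#     ~5:1, the implied loss modulus is ~120 Pa (tan delta ~ 0.2).
g_prime, ratio = 600.0, 5.0
print(f"G'' ~ {g_prime / ratio:.0f} Pa, tan(delta) ~ {1.0 / ratio:.2f}")
```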
Cell culture tests were performed to examine the interactions between the corneal epithelial cells and the surfaces of the hydrogel scaffolds. Proteins were introduced into the environment by suspending them in the surrounding solution or attaching them directly to the hydrogel. TCPS surface was used for comparison. When TCPS was immersed in protein-rich serum, cells adhered to the surface and cell extensions were noticeable (Figure 2). However, when TCPS was immersed in protein-free solution, no cell adhesion occurred.
Fig. 2 Bright-field optical microscopic image showing corneal epithelial cells adhered on tissue culture polystyrene surface under protein-rich medium. Extensions of cell pseudopodia are indicated by the arrows. Scale bar: 50 μm
The hydrogels were modified to form three conditions: hydrogels without any conjugated proteins, hydrogels with conjugated collagen I, and hydrogels conjugated with a protein mixture consisting of collagen I, fibronectin, and albumin. The hydrogel without any tethered proteins was immersed in either the protein-rich serum or the protein-free solution, while hydrogels tethered with proteins were immersed only in the protein-free solution. A qualitative analysis of the interactions between the cells and the surfaces was performed to determine the compatibility of the cells with the hydrogels, as well as their adhesion responses. For the hydrogels without protein immersed in a protein-free solution, cell adhesion was not observed. Protein-free hydrogels immersed in protein-rich serum supported some degree of cell adhesion; however, cell extensions did not develop as they did on TCPS. The result indicates a high level of cellular compatibility of the hydrogel. In addition, hydrogels with either collagen I or the protein mixture bound to the surface, immersed in protein-free solution, were able to support cell adhesion but did not appear to support the development of cell extensions on the surface (Figure 3). As observed from the images, the hydrogels support the adhesion of corneal epithelial cells only in the presence of proteins in the immediate environment. For that to happen, proteins must be either suspended in the medium or tethered to the hydrogel scaffolds. The absence of these proteins on the hydrogel or in the surrounding medium restricts cell adhesion onto the scaffold surfaces. The extension of cell pseudopodia occurs only on the TCPS surface immersed in protein-rich serum. Although cell adhesion is observed on the hydrogels, no extensions are observed on the hydrogel scaffolds. This may indicate a lack of mechanical stiffness to support the spreading of the corneal epithelial cells.
The results from material testing indicate that the hydrogel has high stability and good optical clarity, and the cell culture tests indicate that the hydrogel is biocompatible with the corneal epithelial cells. While the 0.5X hydrogel lacks the mechanical stiffness required for material testing and cell culturing, the 1X hydrogel possesses sufficient mechanical stiffness to be used for both. The response of cells seeded on TCPS compared with those on the hydrogel surfaces indicates the need for a higher degree of mechanical stiffness. The mechanical stiffness could be increased by changing the initial composition of the hydrogels, for example by increasing the PEG concentration or the UV exposure time. Introducing proteins into the immediate environment, either by adding them to the initial hydrogel solution or by suspending them in the medium, allows cell adhesion onto the hydrogel surfaces. Hydrogel scaffolds that have bound collagen or are immersed in protein-rich medium are able to support cell attachment; in comparison, hydrogels with no tethered protein immersed in protein-free medium remain free of corneal epithelial cells. Thus, the presence of protein in the environment of the corneal cells is critical for cell attachment. In addition to the mechanical strength of the scaffold, the presence of protein during the seeding process had a crucial impact on cell morphology, as seen on TCPS.

Fig. 3 Bright-field optical microscopic image showing corneal epithelial cells adhered on hydrogels with conjugated collagen I (A) and protein mixture of collagen I, fibronectin, and albumin (B) immersed in a protein-free solution. Scale bar: 50 μm

IV. CONCLUSIONS

A novel methacrylate-based hydrogel scaffold with the potential to be used for corneal replacement and augmentation treatments has been developed. The hydrogel scaffold is optically transparent, hydrophilic, physically stable, primarily elastic in nature, compatible with corneal epithelial cells, and possesses nanometer-size pores. The combined effect of a protein-rich environment and a surface with higher mechanical strength is likely required to support attachment and spreading of the cells.

ACKNOWLEDGEMENT

Partial funding support was provided by a National Institutes of Health/INBRE Resources Grant (P20 RR16456) and the National Aeronautics and Space Administration/Louisiana Space Consortium under the Louisiana Undergraduate Research Award Program. We express gratitude for the technical assistance provided by I. Magana and Dr. A. Gunasekaran of Louisiana Tech University.

REFERENCES
1. Trinkaus-Randall V, Capecchi J, Newton A, et al. Invest Ophthalmol Vis Sci 29:393-400, 1988.
2. Crawford GJ, Chirila TV, Vijayasekaran S, et al. J Refract Surg 12:525-529, 1996.
3. Dupont D, Gravagna P, Albinet P, et al. Cornea 8:251-258, 1989.
4. Thompson KP, Hanna KD, Gipson IK, et al. Cornea 12:35-45, 1993.
5. Zhu H, McShane MJ. Transactions, Annual Meeting of the Biomedical Engineering Society, Chicago, p84, 2006.
6. Sit PS, Raghavendra D, Pattekari P, et al. Annual Meeting of the Biomedical Engineering Society, Los Angeles, p78, 2007.
7. Jacob JT, Rochefort J, Bi J, et al. J Biomed Mater Res 72:198-205, 2005.
8. Wallace C, Jacob JT, Stoltz A, et al. J Biomed Mater Res 72:19-24, 2004.
9. Bershadsky AD, Balaban NQ, Geiger B. Annu Rev Cell Dev Biol 19:677-695, 2003.
Making of Functional Tissue Engineered Heart Valve
S.S. Patel and Y.S. Morsi
Biomechanics and Tissue Engineering Group, Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, Hawthorn, Australia
Abstract— The concept of tissue engineered heart valves offers an alternative to current heart valve replacements that is capable of addressing shortcomings such as lifelong administration of anticoagulants, limited durability, and the inability to grow. The ideal concept of a tissue engineered heart valve includes the formation of a functional valve on the basis of a rapidly absorbable scaffold, which provides temporary support until the cells produce their own matrix proteins. The structural integrity and biomechanical profile of tissue engineered heart valves ultimately depend on this matrix formation. However, despite numerous attempts, the complete extracellular matrix of aortic heart valves, especially elastin, which is critical for proper functioning, has not been successfully regenerated in vitro. Polyurethane (PU) has been investigated for decades as a scaffold for heart valves, but progress was impeded by calcification and degradation of the material. Improved biocompatibility and mechanical properties of PU have reignited interest in it as a potential valve replacement in recent years. Electrospinning produces nanofibers with large surface areas, thereby encouraging cell growth. Since the fibre mat thickness can be controlled, an electrospun PU scaffold with the 300 μm thickness required for heart valve leaflets was produced in our study. Human umbilical cord mesenchymal stem cells (hMSCs) were seeded onto these scaffolds and cultured in vitro under mechanical stimulation. This resulted in the production of all the extracellular matrix components of aortic heart valve leaflets, including elastin. The cells exhibited a myofibroblast-like morphology and good cellular kinetics suitable for tissue engineering of cardiovascular tissues, including aortic heart valves.

Keywords— Heart valves, Electrospun polyurethane, Stem cells.
I. INTRODUCTION

Polyurethane (PU) scaffolds are widely used as cardiovascular materials owing to their suitable mechanical properties and good haemocompatibility [1]. Moreover, the compliance of PU is similar to that of natural arteries, making it a suitable candidate for such soft tissue applications [2]. Although various techniques are available to create scaffolds, many of them are limited by a lack of interconnected pores. A porous network is essential for cells to migrate
through the scaffold while also allowing the transport of nutrients and waste throughout the scaffold. Electrospinning offers many advantages in the manufacturing of scaffolds for tissue-engineering purposes [3]. Some of the major advantages of this technique are a high surface area to volume ratio, an interconnected porous network, and small-diameter fibers [3]. The PU used in this experiment was produced by electrospinning and consisted of nanofibers. Natural proteins such as gelatin and chitosan were immobilized on the PU surface to increase cell adherence. Human umbilical cord mesenchymal stem cells (hMSCs), known to produce a negligible T-cell response, were used in this study [4]. These cells gradually take over the complete scaffold and result in the formation of a functional tissue engineered heart valve suitable for aortic valve replacement, as observed in this study.
II. METHODS

A. Development of Polyurethane Nanofibres

Biodegradable polyurethane was synthesized from polycaprolactone diol, 2,6-diisocyanate methylcaproate, and an L-phenylalanine-based diester chain extender. The polyurethane was dissolved in a 1:1 solvent mixture of N,N-dimethylformamide and tetrahydrofuran (DMF/THF) to produce a 2.5% polyurethane solution. The polyurethane solution was then spun into nanofibres at a high voltage of 22 kV and collected on aluminum foil until a mesh of the desired thickness of 300 μm was achieved. Gelatin and chitosan dissolved in HFP (1,1,1,3,3,3-hexafluoro-2-propanol) were used to coat the material. The completed material was then placed in a vacuum oven at room temperature for 24 h to ensure complete evaporation of the solvents. All of the PU and its composite materials produced were systematically characterized.
B. Fabrication of Heart Valve Scaffold

A negative mould was made from acrylonitrile butadiene styrene (ABS) using Fused Deposition Modeling. The PU, along with its composite materials, was fabricated into a heart valve using this mould.

C. Seeding of the Scaffolds and Cell Culture

hMSCs were commercially obtained (Sciencell) and cultured in DMEM (Gibco) supplemented with 20% FBS (Gibco). They were seeded at a density of 1 × 10⁶ cells/cm² onto the valve scaffold and allowed to grow and proliferate for 4 days. This cell-seeded valve scaffold was cultured under pulsatile flow conditions provided by our custom-made bioreactor for a period of 9 weeks. During pulsatile flow, the above media was supplemented with 50 μg/ml Na ascorbate (Sigma) and 150 μM glucosamine hydrochloride (Sigma).

D. Inverted and Scanning Electron Microscopy (SEM)

Samples of the cell-seeded valve scaffold were imaged using a Leica DMIL inverted microscope at 10X magnification. SEM images were taken using an FeSEM ZEISS SUPRA 40VP microscope after sputter-coating the samples with gold.

E. Mechanical Durability of PU

Mechanical characterization was performed by applying tensile test loads to specimens prepared from the electrospun ultrafine non-woven fibre mats. Tensile testing was performed on a tabletop Instron tester with a 10 N load cell at an ambient temperature of 20 °C and humidity of 65%. A cross-head speed of 10 mm/min was used for all specimens tested. The machine-recorded data was used to produce the tensile stress–strain curves of the specimens.

III. RESULTS

A. Nanofibrous Scaffold Characterization

The nanofibrous scaffold formed had an average fibre diameter of 860 ± 110 nm. The tensile stress of the scaffold was 7.7 ± 0.7 MPa and the ultimate strain was 365 ± 24.5%.

Fig. 1 Inverted microscopy image of hMSCs: hMSCs showing a typical myofibroblast-like morphology at 10X magnification

B. Cell Culture

Cells showed a typical myofibroblast-like morphology, as observed in Figure 1. The cells completely covered the scaffold and had even migrated throughout the scaffold in 4 days (Figure 2). After 9 weeks of mechanical conditioning provided in the bioreactor, the cell layer seemed to thicken, resembling tissue formation, and the scaffold material showed very faint initial signs of degradation. Moreover, extracellular matrix (ECM) components including elastin were secreted into the media.

Fig. 2 SEM image of hMSCs: Cell migration and coverage on the non-seeded side of the scaffold by hMSCs after 4 days of culture

IV. DISCUSSION

Ideal heart valve replacements should be made using autologous cells. The main purpose of using these cells is to avoid immuno-rejection and other such related problems. This
purpose is served even by the use of MSCs, since they hardly evoke any immune response, as observed previously [4]. Since the cells covered the complete scaffold in a very short time, this electrospun nanofibrous scaffold is highly suitable for tissue engineering purposes. Moreover, the cells gradually grew denser over time, resulting in tissue formation. Since the seeded scaffold was under continuous mechanical conditioning, the cells also secreted ECM into the medium. ECM formation is of utmost importance for proper functioning of the heart valve. As tissue formation progressed, there was only a negligible amount of scaffold degradation. The PU used here is biodegradable, and hence the by-products of its degradation are not harmful to the body. Preferably, the scaffold should gradually degrade as the cells and tissue take over, reaching a point at which the scaffold is completely degraded and the neo-tissue has taken up the function of the valve. However, it is important for the scaffold to support the cells and tissue until they are strong enough to act as a heart valve replacement. Given the high mechanical strength, elasticity, and compliance of the PU, it is a suitable candidate for such applications.
V. CONCLUSIONS

Tissue engineering of heart valve replacements has been extensively researched without much success in terms of synthesizing the complete ECM and a properly functional aortic valve. This study was able to generate heart valve tissue, together with ECM production, over a period of 9 weeks under mechanical conditioning. Gradually, as tissue formation increases, the scaffold will degrade. These results can be further validated in vivo; thus, a complete functional tissue engineered aortic heart valve was produced in this study.
ACKNOWLEDGMENT

This study was supported by the Australian Research Council (ARC).
REFERENCES

1. Zhu Y, Gao C, He T, Shen J (2004) Endothelium regeneration on luminal surface of polyurethane vascular scaffold modified with diamine and covalently grafted with gelatine. Biomaterials 25:423-430
2. Tai N R, Salacinski H J, Edwards A, Hamilton G, Seifalian A M (2000) Compliance properties of conduits used in vascular reconstruction. Br J Surg 87(11):1516-1524
3. Rockwood D N, Woodhouse K A, Fromstein J D, Chase D B, Rabolt J F (2007) Characterization of biodegradable polyurethane microfibers for tissue engineering. J. Biomater. Sci. Polymer Edn 18(6):743-758
4. Weiss M L, Anderson C, Medicetty S, Seshareddy K B, Weiss R J, VanderWerff I, Troyer D, McIntosh K R (2008) Immune properties of human umbilical cord wharton's jelly-derived cells. Stem Cells 26:2865-2874
Ties That Bind: Evaluation of Collagen I and α-Chitin
Tiffany Omokanwaye and Otto Wilson Jr.
Catholic University of America/Biomedical Engineering Department, Washington, D.C., USA

Abstract— Collagen and chitin are two fascinating and unique matrix framework molecules that play a key role in the hierarchical development of hard tissue. Of the 28 identified types, collagen I accounts for more than 90% of the total collagen found in nature and is the main constituent of bone. Of the three identified types, α-chitin is the most abundant form found in nature and is the main component in the exoskeletons of crustaceans like crabs. α-Chitin is a plentiful natural material, similar to collagen I in composition, structure, and function. Both collagen I and α-chitin have extensive use in the biomedical field. In an effort to obtain processing strategies and microstructure design criteria for the development of new bone substitute materials with novel properties, biomimetics was used to establish relationships between the properties of collagen I and α-chitin. The objective of this study was to compare collagen I and α-chitin to illuminate notable similarities and differences in chemical, physical, structural, and functional properties. The more intriguing similarities involve hierarchical structuring, liquid crystal characteristics, and self-assembly. One of the more conspicuous differences is the biochemistry of collagen I, a protein, and α-chitin, a polysaccharide. Collagen I and α-chitin were studied using electron microscopy and elemental and thermal analysis. A Bouligand, or twisted plywood, system was observed in both collagen and chitin micrographs. One major difference is that the SEM micrographs of collagen fibers demonstrated a banding pattern while chitin fibers did not. The elemental analysis revealed that chitin's carbon/nitrogen ratio of about 3 was roughly doubled in collagen. Thermal analysis revealed that chitin had two thermal transitions while collagen had three. Both collagen and chitin have a thermal event associated with the evolution of freely bound water. These are among the similarities and differences that will be addressed in this paper.
Keywords— Biomimetics, Collagen, Chitin.

I. INTRODUCTION

What inquiry by nature led to collagen and chitin? Collagen, a fibrous protein, and chitin, a linear fibrous polysaccharide, provide the backbone for mineralized tissues such as bone and crustacean exoskeleton [1]. A wide variety of medical applications for chitin and collagen have been reported over the last four decades [2]. Replicating these complex hierarchical structures for the purpose of repairing, replacing, or restoring the biological function of damaged or diseased tissue is formidable [3]. Not knowing nature's requirements for the design of collagen and chitin, a careful study and understanding of these fibrous, hierarchical, and three-dimensional structures is warranted [4]. Biomimetics uses the lessons learned from biology to form the basis for novel materials. It involves investigation of both the structure and the physical functions of biological composites of interest, with the goal of designing and synthesizing new and improved materials. With this in mind, this research group is investigating collagen I and α-chitin to establish relationships with properties and to obtain processing strategies and microstructure design criteria for the development of new bone substitute materials with novel properties [5]. Table 1 lists the principal components of two similar biological systems: crab exoskeleton and bone. Studies that feature a comprehensive review of solely collagen I and α-chitin are limited. This article examines the organic constituents of bone and crab exoskeleton, collagen I and α-chitin, highlighting some of their similarities and differences. Henceforth, these fibrous composites will be referred to simply as collagen and chitin.

Table 1 Principal Components of Crustacean (Crab) Exoskeleton and Bones

Biological Composite             Mineral                                               Organic
Bones                            Hydroxyapatite (calcium phosphate) Ca10(PO4)6(OH)2    Collagen I
Crustacean (Crab) Exoskeleton    Calcium Carbonate CaCO3                               α-Chitin
Biochemistry--- Collagen and chitin both consist of relatively few constituent elements: H, C, O, and N [6]. The general empirical formula of the chitin monomer is [C8H13O5N]n, and the empirical chemical formula for collagen is C12H24O4N3. The basic unit of the protein collagen is a polypeptide consisting of the repeating sequence (G-X-Y)n, where G is glycine, the only amino acid small enough to fit in the crowded interior of the triple helix, X is usually proline, and Y is usually hydroxyproline. The basic unit of chitin is a long linear polysaccharide chain of N-acetylglucosamine units joined together by glycosidic bonds [7]. Collagen often exists in vertebrates as a complex of ordered collagen (protein) molecules in a matrix of water and polysaccharide. On the
other hand, chitin consists of polysaccharide (poly-N-acetylglucosamine) chains which aggregate into crystallites, these crystallites being embedded in a matrix of water and protein [8].

Forms--- Three polymorphic forms of chitin (α-, β-, and γ-chitin) have been distinguished by their crystal structures, which differ in the packing and polarities of adjacent chains in successive sheets. α-Chitin, the most abundant form found in nature, is arranged in an anti-parallel configuration. More than 28 different types of collagen have been identified [9] and classified primarily according to their physiological structure. Collagen I accounts for more than 90% of the total collagen found in nature [10] and usually consists of three coiled subunits: two α1(I) and one α2(I) chains forming fibrils of 50 nm in diameter [11].

Liquid Crystal Properties--- Fibrous materials, which may be composed of a single polymer or a mixture of polymers, have been called liquid crystal (LC) analogues, exhibiting properties of both a liquid and a solid. Liquid crystals and fibrous composites are both mobile, have similar architectures or geometries, can self-assemble, and have similar optical properties. These factors suggest that biological fibrous composites might develop by self-assembly via a liquid crystalline phase [12], providing an interesting mechanism in morphogenesis for certain fibrous networks [13]. This idea is supported by the discovery of liquid crystalline phases in concentrated solutions of polypeptides and was later confirmed by several studies on the main polymers extracted and purified from certain twisted materials, such as chitin and collagen, in liquid crystals [14]. In compact bone osteons, the collagen fibrils form arced patterns as a consequence of their three-dimensional organization; this structure is analogous to a geometry found in certain liquid crystals [15]. Similar patterns were observed in sections of crab exoskeleton. Arced figures are a direct consequence of what is described as the twisted plywood model or Bouligand structure [13].

Self Assembly--- Self-assembly reflects information coded as shape, surface properties, charge, etc., in individual components; these characteristics determine the interactions among them. Self-assembly processes are reversible and can be controlled by proper design of the components [16]. When purified type I collagen is highly concentrated in an acid-soluble state, the protein spontaneously assembles into ordered liquid crystalline phases [17]. Neville, considering the sheets of microfibrils to be separate from each other, developed a theory that cuticle or exoskeleton is a self-assembling system, stabilized from a liquid crystalline deposition zone. Cuticular components, immediately after secretion, would pass through a crystalline phase in which there is a preferred orientation of microfibrils [12].

Hierarchy--- Collagen type I, the main constituent of bone, tendon, and skin, is synthesized by fibroblasts, smooth-muscle
cells, and osteoblasts. Structural order in collagen occurs at several discrete levels. The primary structure denotes the complete sequence of amino acids along each of the three polypeptide chains. The secondary structure is the local configuration of a polypeptide chain [18]. The tertiary structure refers to the global configuration of the polypeptide chains into the triple-helical collagen molecule. The fourth-order or quaternary structure denotes the repeating supermolecular unit structure, comprising several molecules packed in a specific lattice, which constitutes the basic element of the five-molecule microfibril. Several microfibrils aggregate end to end and also laterally to form a collagen fiber. Collagen fibers exhibit a characteristic banding pattern with a period of about 65 nm [18]. α-Chitin, the exoskeleton material of crustaceans, is a complex composite that is hierarchically structured and multifunctional [6]; it is synthesized by epidermal cells that control the environment within the exoskeleton [19]. The linear chitin chains align anti-parallel and form α-chitin crystals at the molecular level. Several α-chitin crystals wrapped by proteins form nanofibrils of about 2–5 nm in diameter and 300 nm in length. A bundle of chitin–protein nanofibrils then forms chitin–protein fibers of about 50–100 nm in diameter. These chitin–protein fibers align together, forming planar layers which stack up helicoidally. This structure is called a twisted plywood or Bouligand structure [6]. The basic hierarchical sequence for both collagen and chitin is Molecule : Chain of molecules : Microfibril : Fibril Aggregation : Fibrous Bundle : 'Twisted Plywood'. The major difference in the hierarchical scheme lies within the chain of molecules: collagen has peptide bonds that create a triple helix, while chitin has glycosidic bonds that create a linear crystal structure.
II. MATERIALS AND METHODS

Collagen samples, derived from Bovine Achilles tendon, were obtained from Worthington. Chitin, derived from crab exoskeleton, was obtained from Sigma.

Elemental Analysis--- Elemental analysis to determine the carbon (C), hydrogen (H), and nitrogen (N) composition was performed by Prevalere Life Sciences, LLC, Whitesboro, NY. Theoretical values of CHN were calculated for collagen and chitin from their empirical formulas and molecular weights (mol wt). For example, collagen has the empirical formula C12H24N3O4 (mol wt = 274) and chitin's empirical formula is C8H13NO5 (mol wt = 203). Therefore, the mass ratio of N in the collagen molecule is 42/274 = 0.1533, or 15.33%. The same ratios can be calculated for C and H. Additionally, other ratios such as
C/N and CHN% can be calculated as well. These values are provided for collagen and chitin in Table 2.

Thermal Analysis (TGA)--- Thermogravimetric analysis was carried out on dried collagen and chitin samples using a Shimadzu TGA-50H Thermogravimetric Analyzer (Kyoto, Japan). Heating was performed in an alumina pan in an air flow (20 ml/min) at a rate of 10 °C/min up to 800 °C. The percentage weight loss was calculated from the sample weight Wt at a given temperature and the initial weight W0 as (Wt/W0) × 100, and was plotted as a function of temperature for both collagen and chitin in Fig. 1.

Surface Morphology (SEM)--- Collagen and chitin samples were sputtered with an ultrathin layer of gold and studied with a Hitachi SU-70 Schottky field emission gun scanning electron microscope.
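As a check on the elemental-analysis arithmetic described above, the following short Python sketch (an illustration added here, not part of the original paper) computes the theoretical %C, %H, %N, %CHN, and C/N values directly from the empirical formulas, assuming the integer atomic masses C = 12, H = 1, N = 14, O = 16 implied by the quoted molecular weights. Its output reproduces the theoretical rows of Table 2.

```python
# Theoretical elemental composition from empirical formulas (assumed arithmetic,
# consistent with the mol wt = 274 and 203 quoted in the text).
ATOMIC_MASS = {"C": 12.0, "H": 1.0, "N": 14.0, "O": 16.0}

def elemental_percentages(formula):
    """formula: dict of element -> atom count, e.g. collagen C12H24N3O4."""
    mol_wt = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    pct = {el: 100.0 * ATOMIC_MASS[el] * n / mol_wt for el, n in formula.items()}
    pct["CHN"] = pct["C"] + pct["H"] + pct["N"]
    pct["C/N"] = (ATOMIC_MASS["C"] * formula["C"]) / (ATOMIC_MASS["N"] * formula["N"])
    return mol_wt, pct

collagen = {"C": 12, "H": 24, "N": 3, "O": 4}   # mol wt = 274
chitin   = {"C": 8,  "H": 13, "N": 1, "O": 5}   # mol wt = 203

for name, f in [("collagen", collagen), ("chitin", chitin)]:
    mw, p = elemental_percentages(f)
    print(f"{name}: mol wt {mw:.0f}, %N {p['N']:.2f}, "
          f"%CHN {p['CHN']:.2f}, C/N {p['C/N']:.2f}")
# collagen: mol wt 274, %N 15.33, %CHN 76.64, C/N 3.43
# chitin:   mol wt 203, %N 6.90,  %CHN 60.59, C/N 6.86
```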
III. RESULTS AND DISCUSSION

Elemental Analysis--- Results of the elemental composition measurements and the theoretical values for collagen and chitin are shown in Table 2. The experimental value of %N for collagen was 7.03%, approximately half the theoretical value. The experimental CHN% for collagen was 54.27%, when a value close to the theoretical 76.64% was expected. Changes in the values of collagen's CHN% and C/N can be associated with degradation and changes in the molecular structure [20]. The experimental value of %N for chitin was 14.61%, approximately double the theoretical value. The experimental C/N for chitin was 2.82, when a value close to the theoretical 6.86 was expected. According to Muzzarelli, elemental analyses of chitin isolates typically show a nitrogen content close to 7% and a C/N ratio of 6.85 for fully acetylated chitin [21].
Table 2 Elemental Composition of Collagen and Chitin

Sample        %C      %H     %N      %CHN     C/N
Collagen(a)   52.55   8.76   15.33   76.64    3.43
Collagen(b)   41.48   5.76   7.03    54.27    5.90
Chitin(c)     47.29   6.40   6.90    60.59    6.86
Chitin(d)     41.22   6.13   14.61   61.96    2.82

(a) Collagen, theoretical. (b) Collagen derived from Bovine Achilles tendon, Worthington. (c) Chitin, theoretical. (d) Chitin derived from crab carapace, Sigma.

An explanation for the increased nitrogen content could lie in the source of the chitin, i.e., crab. Studies indicate that nitrogen content can vary based on the species, diet, season, moulting stage, gender, and other environmental conditions [22]. The nitrogen content is also a measure of the protein amount still present in the chitin [23].

Thermal Analysis (TGA)--- In the thermogram of chitin (Fig. 1), two thermal events could be observed. The first occurs in the range of 50–100 °C and is attributed to water loss. The second occurs in the range of 300–400 °C and could be attributed to the degradation of the polysaccharide structure of the molecule. Based on the work of Jayakumar and Tamura, this is the thermal behavior expected of chitin [24]. In the thermogram of collagen (Fig. 1), three thermal events could be observed. The first occurs in the range of 50–100 °C and is attributed to water loss. The second occurs in the range of 250–350 °C and could be attributed to the breakdown of the triple helix of the collagen molecule. The third occurs in the range of 450–600 °C and could be attributed to the further denaturation of collagen to gelatin. Similar behavior was observed in collagen from frog skin, amniotic membrane, and calfskin [25]. Both the collagen and chitin molecules are crystalline, which is indicated by thermal shrinkage. However, chitin has two thermal transitions while collagen has three. Both collagen and chitin have a thermal event associated with the evolution of freely bound water. Chitin is a linear molecule; hence it does not require the extra thermal step that collagen molecules require to break down the triple helix [8].
Fig. 1 Thermogram of Collagen and Chitin

Surface Morphology (SEM)--- The chitin SEM micrograph (Fig. 2) shows chitin fibers. The chitin and collagen SEM micrographs (Figs. 2 and 3) both display lattice-like structures. The fibers in both the collagen and chitin micrographs were oriented parallel, perpendicular, or at intermediate angles to the surface on successive planes. A multidirectional or 'twisted plywood' system [26] was observed. Bouligand and Neville observed similar helicoid systems while examining crustaceans [13]. One major difference is that the SEM micrographs of collagen demonstrate the characteristic banding pattern of collagen fibers [27].
Fig. 2 Chitin from Crab Exoskeleton SEM Micrographs

Fig. 3 Collagen from Bovine Achilles Tendon SEM Micrographs

IV. CONCLUSION

Collagen and chitin follow the same generic storyline. They do not act alone: collagen is a triple helix embedded in an inorganic matrix, and chitin is embedded in a protein matrix. Collagen and chitin both have LC properties that might play a role in self-assembly. The basic hierarchical sequence for both collagen and chitin is Molecule : Chain of molecules : Microfibril : Fibril Aggregation : Fibrous Bundle : 'Twisted Plywood' or Bouligand structure. These similarities demonstrate that collagen and chitin are more alike than different. An argument can be made that collagen is what holds bone together; the same argument can be made for chitin and crab exoskeleton. Since collagen and chitin share so many similarities, could they be used interchangeably as implants? Previous reports indicate that chitin induced fine collagen fibers histologically [28, 29]. If chitin can, does this ability extend to its source, the crab?

ACKNOWLEDGMENT

The authors would like to acknowledge support from the NSF Biomaterials Program (grant number DMR-0645675). The authors also would like to thank the Nanoscale Imaging, Spectroscopy, and Properties (NISP) Laboratory at the Kim Engineering Building, and Dr. Lloyd for the use of the Shimadzu TGA-50H, at the University of Maryland at College Park.

REFERENCES

1. Chen P Y et al. (2008) Structure and mechanical properties of selected biological materials. J Mech Behav Biomed Mater 1:208-226
2. Enderle J, Blanchard S, Bronzino J (2005) Introduction to Biomedical Engineering, 2nd Edition. Elsevier Academic Press, Amsterdam
3. Jones J R, Lee P D, Hench L L (2006) Hierarchical porous materials for tissue engineering. Philos Trans R Soc London 364:263-281
4. Fratzl P (2007) Biomimetic materials research: what can we really learn from nature's structural materials? 4:637-642
5. Sarikaya M (1994) An Introduction to Biomimetics: A Structural Viewpoint. Microsc Res Tech 27:360-375
6. Meyers M A et al. (2008) Biological materials: Structure and mechanical properties. Prog Mater Sci 53:1-206
7. Smith C A, Wood E J (1991) Biological Molecules: Molecular and Cell Biochemistry. Chapman & Hall, New York
8. Wainwright S A (1982) Mechanical design in organisms. Princeton University Press, Princeton
9. Gobeaux F et al. (2008) Fibrillogenesis in Dense Collagen Solutions: A Physicochemical Study. J Mol Biol 376:1509-1522
10. Köster S et al. (2008) An In Situ Study of Collagen Self-Assembly Processes. Biomacromolecules 9:199-207
11. Kolacna L et al. (2007) Biochemical and Biophysical Aspects of Collagen Nanostructure in the Extracellular Matrix. Physiol Res 56:S51-S60
12. Neville A C (1993) Biology of Fibrous Composites: Development Beyond the Cell Membrane. Cambridge University Press, Cambridge
13. Bouligand Y (1972) Twisted Fibrous Arrangements in Biological Materials and Cholesteric Mesophases. Tissue Cell 4:189-217
14. Bouligand Y (2008) Liquid crystals and biological morphogenesis: Ancient and new questions. C R Chimie 11:281-296
15. Giraud-Guille M-M (1992) Liquid Crystallinity in Condensed Type I Collagen Solutions: A Clue to the Packing of Collagen in Extracellular Matrices. J Mol Biol 224:861-873
16. Whitesides G M, Grzybowski B (2002) Self-Assembly at All Scales. Science 295:2418-2421
17. Giraud Guille M-M et al. (2005) Bone matrix like assemblies of collagen: From liquid crystals to gels and biomimetic materials. Micron 36:602-608
18. Ratner B D et al. (1996) Biomaterials Science: An Introduction to Materials in Medicine. Academic Press, San Diego
19. Wright J E, Arthur R (1987) Chitin and benzoylphenyl ureas. W. Junk, Boston
20. Hinman M et al. Degradation Patterns in Archaeological Collagen: New Evidence from Pyrolysis GC-MS and Solid State 13C NMR. [Online] 2008. [Cited: September 21, 2009.] http://www.socarchsci.org/poster/Hinman_SAA_poster_2008.pdf
21. Muzzarelli R A A (1999) Native, Industrial and Fossil Chitins: Chitin and Chitinases. Birkhauser Verlag, Boston
22. Linton S M, Greenaway P (2000) The nitrogen requirements and dietary nitrogen utilization for the gecarcinid land crab, Gecarcoidea natalis. Physiol Biochem Zool 73:209-218
23. Cardenas G et al. (2004) Chitin Characterization by SEM, FTIR, XRD, and 13C Cross Polarization Mass Angle Spinning NMR. J Appl Polym Sci 93:1876-1885
24. Jayakumar R, Tamura H (2008) Synthesis, characterization and thermal properties of chitin-g-poly(ε-caprolactone) copolymers by using chitin gel. Int J Biol Macromol 43:32-36
25. Shanmugasundaram N, Ravikumar T, Babu M (2004) Comparative Physico-chemical and in Vitro Properties of Fibrillated Collagen Scaffolds from Different Sources. J Biomater Appl 18:247-264
26. Neville A C (1993) Biology of fibrous composites: Development beyond the cell membrane. Cambridge University Press, New York
27. Franchi M et al. (2008) Different Crimp Patterns in Collagen Fibrils Relate to the Subfibrillar Arrangement. Connect Tissue Res 49:85-91
28. Kojima K et al. (2004) Effects of Chitin and Chitosan on Collagen Synthesis in Wound Healing. J Vet Med Sci 66:1595-1598
29. Minami S et al. (1996) Chitin induces type IV collagen and elastic fiber in implanted non-woven fabric of polyester. Carbohydr Polym 29:298-299
The corresponding author:
Author: Otto Wilson, Jr., PhD
Institute: Catholic University of America
Street: 620 Michigan Ave., NE
City: Washington, DC
Country: USA
Email: [email protected]
Chitosan/Poly(ε-Caprolactone) Composite Hydrogel for Tissue Engineering Applications
Xia Zhong1, Chengdong Ji1, Sergei G. Kazarian2, Andrew Ruys3, and Fariba Dehghani1
1 School of Chemical and Biomolecular Engineering, The University of Sydney, Sydney, Australia
2 School of Chemical Engineering, Imperial College London, London, UK
3 School of Aerospace, Mechanical & Mechatronic Engineering, The University of Sydney, Sydney, Australia

Abstract— The aim of this study was to fabricate three-dimensional (3D) porous chitosan/poly(ε-caprolactone) (PCL) hydrogels with improved mechanical properties for tissue engineering applications. A modified emulsion lyophilization technique was developed to produce 3D chitosan/PCL scaffolds. The addition of 25 wt% and 50 wt% PCL into chitosan substantially enhanced the compressive strength of the hydrogel by 160% and 290%, respectively, compared to pure chitosan hydrogel. The ATR-FTIR results corroborate that PCL and chitosan physically co-existed in the composite mixture. The composites comprised uniform macro (59.7 ± 14.4 μm) and micro (4.4 ± 1.5 μm) pores, as observed in the SEM images. The composites acquired in this study, with homogeneous porous structure and improved integrity, may have high potential for the production of 3D scaffolds for various tissue engineering applications. The results of confocal fluorescence microscopy confirmed that fibroblast cells proliferated in the 3D structure of these composite scaffolds.

Keywords— composite hydrogel, mechanical strength, emulsion lyophilization, cell proliferation.
I. INTRODUCTION

The fabrication of a three-dimensional (3D) scaffold plays a significant role in tissue engineering. The scaffold provides a template for cell adhesion and proliferation, and subsequently for extracellular matrix (ECM) formation [1,2]. However, few biomaterials possess all the desirable properties, such as mechanical strength and cell affinity, for tissue engineering applications. Composite hydrogels fabricated from natural and synthetic polymers have been considered ideal materials for tissue engineering due to promoting properties such as biocompatibility, biodegradability, and mechanical strength. Chitosan, a partially deacetylated derivative of chitin, can form a hydrogel in aqueous solution and has been used for the design of tissue scaffolds, but its low mechanical properties impede its broad application [3]. Poly(ε-caprolactone) (PCL) is a biodegradable synthetic polyester with superior mechanical strength and biocompatibility. However, its lack of cell recognition signals and its hydrophobic surface properties are not desirable for cell
affinity and attachment [4]. Since the properties of PCL and chitosan are complementary, it is anticipated that their composite will give more desirable material properties for tissue engineering applications. Moreover, the homogeneity of blending is critical to achieving a uniform composite structure and properties. Hydrophilic chitosan and hydrophobic PCL are immiscible [3,5], and thus their blend is difficult to prepare due to the lack of common solvents and the impossibility of melt processing chitosan [6]. Recently, solvents such as hexafluoro-2-propanol (HFIP) and concentrated acetic acid (over 77%) have been used to dissolve both chitosan and PCL in one phase [5,6]. However, only thin films were generated with solvent evaporation [5], and stable 3D scaffold structures could not be obtained using conventional processing techniques such as freeze extraction, freeze gelation, and freeze drying [7]. Emulsion freeze drying has been used to prepare 3D porous composites from PCL and hydrophilic polyvinyl alcohol (PVA) [8]. High-speed mechanical stirring was used to generate the emulsion from a PVA aqueous solution and a PCL dichloromethane solution, but the homogeneity of the component distribution was not assessed. In our study, a sonication technique was applied to achieve a more homogeneous emulsion in the presence of an emulsifier. The distribution of each component in the composite was investigated, as well as other properties including mechanical performance, pore size, swelling ratio, and biocompatibility.
II. EXPERIMENTAL

A. Materials

Chitosan (medium molecular weight), poly(ε-caprolactone) (PCL), fluorescein diacetate (FDA), propidium iodide (PI), Dulbecco's modified eagle medium (DMEM), fetal bovine serum (FBS), pen-strep and sorbitan monooleate (Span-80) were purchased from Sigma. Dichloromethane (DCM) (99.4% purity), ethanol (99.7% purity) and sodium hydroxide (NaOH) were purchased from
Merck. A 0.2 M acetic acid solution was prepared using glacial acetic acid (Ajax Fine Chem) in MilliQ water.

B. Fabrication of Chitosan/PCL Composite Hydrogel

Chitosan solution (1.5 w/v %) was prepared in 0.2 M acetic acid, and PCL was dissolved in DCM to form a 5 w/v % solution. The chitosan and PCL solutions were mixed at different PCL compositions (25, 50 and 75 wt %), followed by sonication (Hielscher UP400S) at 20 mV for 2 minutes to achieve a homogeneous emulsion. Span-80 (5 v/v %) was used as the emulsifier. The emulsion mixture was then immediately poured into custom-made moulds (diameter ranging from 1 cm to 2 cm) and frozen at -20 ºC overnight, followed by lyophilization for at least two days. The lyophilized scaffolds were immersed in 0.2 M NaOH for 2 hours to neutralize acid residues and then washed three times with MilliQ water. The scaffolds were lyophilized again overnight for further characterization. A pure chitosan sample prepared by direct lyophilization of the aqueous solution was used as a control in this study.
C. In-vitro Cell-Culturing Test

An in vitro cell-culturing test was carried out to assess the biocompatibility of the composite hydrogel for tissue engineering applications. The composites were transferred into a 24-well plate, sterilized with UV light for half an hour, and then immersed in culture media (DMEM, 10% FBS, 1% pen-strep) at 37 ºC overnight. The cells (human skin fibroblast cells GM3348) were then seeded onto the scaffolds at 3 × 10⁵ cells/well and cultured in an incubator at 37 ºC in the presence of 5% CO2 and 95% humidity for 7 days. The media was refreshed every 3 days.

D. Characterizations

The microstructure of the composite hydrogels was examined by Scanning Electron Microscopy (SEM) (Philips XL30). Prior to SEM analysis (15 kV), dried samples were sputter-coated with 10 nm of gold. The equivalent circle diameter (ECD) of the pores was calculated using Image J software. At least 5 images (300 pores) at the same magnification, representing different areas of the scaffold, were analyzed for each condition. The existence of each component in the composites and potential molecular interactions were determined with Attenuated Total Reflectance - Fourier Transform Infrared spectroscopy (ATR-FTIR) (Varian 660-IR) at 4 cm⁻¹ resolution over the 600–4000 cm⁻¹ range. Uniaxial compression tests of hydrated specimens were performed using a Bose ELF 3400 mechanical tester. Prior to mechanical testing, the composite scaffolds were immersed in PBS for 2 hrs. The thickness (~3 mm) and diameter (~8 mm) of each sample were measured using a digital caliper (J.B.S). A load cell of 50 N, a cross-head speed of 30 µm/s, and a 50% strain level were applied. The compressive modulus was obtained as the tangent slope of the stress-strain curve. The equilibrium swelling ratio (Sw) of the composite hydrogels was determined in a physiological environment. In brief, the dried scaffold was cut into similar sizes (~1×1×1 cm³) and weighed (W0). At each condition, three samples were immersed in pre-warmed PBS at 37 ºC overnight (at least 12 hrs). The swollen samples were weighed (Wt) after excess buffer was removed. The equilibrium swelling ratio was calculated using the equation Sw = (Wt - W0)/W0. The viability of cells cultured on the composites was assessed using a confocal laser scanning microscope (CLSM) after fluorescein diacetate (FDA) and propidium iodide (PI) staining. Live cells fluoresced green due to intracellular esterase activity that converted FDA to a green fluorescent product, and dead cells fluoresced red as their compromised membranes were permeable to the nucleic acid stain PI.

III. RESULTS AND DISCUSSION

A. Morphologies of the Chitosan/PCL Composites

As can be seen from Fig. 1, 3D scaffolds were produced using sonication emulsion followed by a lyophilization process. The chitosan/PCL composites with 25 wt % and 50 wt % PCL were stiff and had firm structures; however, the one with 75 wt % PCL did not. The 25 wt % and 50 wt % PCL composites were used for further characterizations.

Fig. 1 Image of chitosan/PCL composites with different PCL content (25, 50 and 75 wt %)

As shown in Fig. 2, all the composites exhibit a homogeneous pore distribution with pore sizes ranging between 50 and 70 µm (Table 1). It was also found that a large number of micro pores (e.g., 4-7 µm) were uniformly distributed on the macro pore walls. The formation of micro pores was most likely caused by the removal of DCM droplets existing in
the emulsion system during the lyophilization process. Compared to the pore structure of pure chitosan, the composite possessed smaller pores but with large quantities of micro pores, which may contribute to nutrient and oxygen transport, critical for cell growth and infiltration.
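For clarity, the equivalent circle diameter (ECD) used above for pore sizing is the diameter of a circle having the same area A as the traced pore, ECD = sqrt(4A/π). The sketch below illustrates the calculation under that assumption (the authors used Image J; the pore areas here are hypothetical, not measured values):

```python
# Equivalent circle diameter from pore areas (illustrative values only).
import math
import statistics

def ecd(area_um2):
    """Diameter (um) of a circle with the same area as the pore (um^2)."""
    return math.sqrt(4.0 * area_um2 / math.pi)

pore_areas = [2400.0, 3100.0, 2800.0, 1900.0, 3500.0]  # hypothetical, from one SEM image
diameters = [ecd(a) for a in pore_areas]
print(f"mean ECD = {statistics.mean(diameters):.1f} "
      f"+/- {statistics.stdev(diameters):.1f} um")
```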
Fig. 2 SEM images of (a) chitosan/PCL composites and (b) pure chitosan

Chitosan can form hydrogels when immersed in aqueous solutions, and its superior capability for water capture makes it widely used in tissue engineering. The swelling ratio of the chitosan/PCL composite was slightly decreased compared to pure chitosan due to the hydrophobicity of PCL (Table 1).

Table 1 Pore sizes and swelling ratio of the composite hydrogels

Chitosan/PCL (wt %)   Macropore diameter (µm)   Micropore diameter (µm)   Equilibrium swelling ratio
100/0                 136.9 ± 39.4              N/A                       17.2 ± 1.4
75/25                 57.9 ± 12.5               4.4 ± 1.5                 15.2 ± 0.3
50/50                 65.5 ± 13.2               7.1 ± 3.5                 11.6 ± 1.2
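The equilibrium swelling ratios reported in Table 1 follow the definition given in the Characterizations section, Sw = (Wt - W0)/W0. Below is a minimal sketch of that calculation; the weights are hypothetical, chosen only to illustrate the magnitude of the tabulated values.

```python
# Equilibrium swelling ratio Sw = (Wt - W0)/W0 (weights are hypothetical).
def swelling_ratio(w_dry, w_swollen):
    """Sw from dry weight W0 and equilibrium swollen weight Wt (same units)."""
    return (w_swollen - w_dry) / w_dry

w0 = 0.020   # g, dry scaffold before immersion
wt = 0.360   # g, after overnight immersion in PBS at 37 C
print(f"Sw = {swelling_ratio(w0, wt):.1f}")   # -> 17.0, close to pure chitosan in Table 1
```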
B. Mechanical Properties

A comparison of the compressive properties of the two chitosan/PCL compositions with the pure chitosan scaffold shows that the compressive modulus was substantially enhanced as the concentration of PCL increased. The addition of 25 wt % and 50 wt % PCL resulted in 160% and 290% increases of the compressive modulus compared to chitosan alone. A similar trend has been found in the PVA/PCL blend system: the more PCL, the better the mechanical performance. However, when the PCL content was increased to 75 wt %, the compressive modulus dropped dramatically, which was caused by phase inversion and the formation of PCL as a continuous phase [8]. The result of our study is in agreement with the PCL/PVA system: as shown in Fig. 1, with 75 wt % PCL the specimen was soft and powdery. It can be expected that the chitosan/PCL composite with improved mechanical properties can be used for broad tissue engineering applications such as skin and cartilage repair.
Fig. 3 Mechanical property of chitosan/PCL composites compared to the pure chitosan scaffold

C. ATR-FTIR Analysis

The presence of each component in the composite was determined with ATR-FTIR spectroscopy (Varian 660-IR). The FTIR spectra of the typical absorption peaks of the functional groups present in chitosan and PCL are shown in Fig. 4. Pure chitosan has a peak absorbance at 1560 cm⁻¹, which corresponds to the N-H band of either primary amines or amide II; another peak was detected at 1659 cm⁻¹, corresponding to the C=O stretch of amide I, which indicates that the chitosan was not completely deacetylated [9]. The prominent characteristic peak of PCL is located at about 1724 cm⁻¹, attributable to the carbonyl group stretching absorption [10]. As expected, all the characteristic peaks of chitosan and PCL were observed in the FTIR spectra of the
chitosan/PCL composite, which corroborates the existence of both components in the composite structures.
Fig. 4 ATR-FTIR spectrum of chitosan/PCL composite

D. Cell Culture Result

An in vitro cell culture test was performed to assess the biocompatibility of the chitosan/PCL composites processed by the method developed in this study. The CLSM image in Fig. 5 demonstrates that fibroblast cells stretched and formed confluent monolayers on the surfaces of the composite scaffold. A large number of cells were viable after one week of incubation; therefore, the composites fabricated by sonication emulsion followed by lyophilization were biocompatible. A small quantity of cells penetrated into the pores and grew along the pore walls, indicating that either the pore architecture or the material surface properties need further optimization to promote cell infiltration.

Fig. 5 CLSM image of fibroblast cells grown on chitosan/PCL (75/25 wt %) composite for 7 days

IV. CONCLUSIONS

The modified emulsion followed by lyophilization process was an efficient technique to fabricate chitosan/PCL composite scaffolds. The mechanical properties increased as the PCL concentration was raised to 50 wt % and then decreased. A homogeneous composite mixture with uniform micro- and macro-sized pores was created in the samples. The high cell viability and proliferation confirmed the superior biocompatibility of the composite scaffolds. In conclusion, the chitosan/PCL composite hydrogels with a maximum of 50 wt % PCL exhibited great potential for tissue engineering applications.

ACKNOWLEDGMENT

The authors acknowledge the financial support from the Australian Research Council (ARC DP0988545). Great help with the experiments from Ms Tsun Ting Lo and Ms Elizabeth Boughton is deeply appreciated.
REFERENCES

1. Khademhosseini A, Langer R (2007) Microengineered hydrogels for tissue engineering. Biomaterials 28:5087-5092
2. Ma P X (2008) Biomimetic materials for tissue engineering. Adv. Drug Delivery Rev. 60:184-198
3. Madihally S V, Matthew H W T (1999) Porous chitosan scaffolds for tissue engineering. Biomaterials 20:1133-1142
4. Wan Y, et al. (2008) Compressive mechanical properties and biodegradability of porous poly(caprolactone)/chitosan scaffolds. Polym. Degrad. Stab. 93:1736-1741
5. Sarasam A, Madihally S V (2005) Characterization of chitosan-polycaprolactone blends for tissue engineering applications. Biomaterials 26:5500-5508
6. Cruz D M G, et al. (2009) Physical interactions in macroporous scaffolds based on poly(ε-caprolactone)/chitosan semi-interpenetrating polymer networks. Polym. 50:2058-2064
7. Sarasam A R, et al. (2007) Blending Chitosan with Polycaprolactone: Porous Scaffolds and Toxicity. Macromol. Biosci. 7:1160-1167
8. Mohan N, Nair P D (2007) Polyvinyl Alcohol-Poly(caprolactone) Semi IPN Scaffold With Implication for Cartilage Tissue Engineering. J. Biomed. Mater. Res. Part B Appl. Biomater. 84B:584-594
9. Osman Z, Arof A K (2003) FTIR studies of chitosan acetate based polymer electrolytes. Electrochim. Acta 48:993-999
10. Senda T, He Y, Inoue Y (2001) Biodegradable blends of poly(ε-caprolactone) with α-chitin and chitosan: specific interactions, thermal properties and crystallization behavior. Polym Int. 51:33-39

Author: Fariba Dehghani
Institute: The University of Sydney
Street: School of Chemical and Biomolecular Engineering
City: Sydney
Country: Australia
Email: [email protected]
Modeling and Control of HIV by Computational Intelligence Techniques
N. Bazyar Shourabi
Sharif University of Technology/School of Science and Engineering, International Campus, Kish Island, Iran
Abstract— This paper proposes a controller based on two-layer neural networks (NN) to control the CD4 count in HIV+ patients. To this end, a new mathematical model using two-layer neural networks was first designed. Medical information from 300 HIV+ patients being treated at the Iranian Research Center for HIV/AIDS (IRCHA), Imam Khomeini Hospital, Tehran, was used to design this model. Highly active antiretroviral therapy (HAART) is the treatment method considered in this study. Notably, a declining CD4 count leads to immune system deficiency and is considered one of the main symptoms of AIDS. The output of this mathematical model is fed back to one of the controller's inputs. In this paper, seven pharmaceutical groups were considered for the HAART treatment method; the controller output is one of these pharmaceutical groups.

Keywords— HIV/AIDS, Neural Networks, Modeling, Control.
I. INTRODUCTION

AIDS, or Acquired Immune Deficiency Syndrome, is caused by the Human Immunodeficiency Virus (HIV). This disease was first reported in the U.S.A. in 1981. According to a December 2008 report by the World Health Organization, around 33.4 million people were living with this disease worldwide, while 2 million people lost their lives to it in the same year [1]. After entering the body, the HIV virus reduces the number of CD4 cells, which disrupts the immune system. As a result, the body loses the ability to confront different kinds of diseases and the person is exposed to a variety of infectious diseases. Over the past few years, a number of mathematical models have been proposed using differential equations to provide a better understanding of different diseases, including AIDS. Among the benefits of mathematical models is that they help us better understand the parameters and effective factors involved in the expansion and transmission of the disease as well as its cure. As a result, mathematical models serve the development of different treatment strategies for different diseases.
Over the past years, several mathematical models have been developed for HIV. The Sanathanan and Peck (SP) model describes CD4 cell counts empirically over time during drug therapy. In 1993, Yuen GJ studied the effects of 3TC on CD4 cells using the SP mathematical model [2]. García et al. proposed a model that considers three compartments: blood plasma (BP), lymphoid tissue-interstitial spaces (LT-IS), and follicular dendritic cells (FDC), for HIV-1 dynamics under highly active antiretroviral therapy (HAART), which allowed them to unravel distinct viral dynamics occurring on short- (2 days), middle- (21 days), and long-term (183 days) time scales [3]. In 2006, Eric S. Rosenberg, Marie Davidian, and H. Thomas Banks suggested that mathematical models describing biological processes taking place within a patient over time can be used to design adaptive treatment strategies [4]. In 2007, the traditional method of dosing (where the dose is modeled implicitly as a proportional inhibition of viral infection and production) was compared by Robert J. Smith to a model that accounts for drug dynamics via explicit compartments. He examined four limiting cases: frequent dosing of both major classes of drugs, absence of either drug, frequent dosing of one drug alone, or frequent dosing of the other drug alone [5]. Veronica Shi et al. formulated a novel cellular automata (CA) model for HIV dynamics and drug treatment. The model is built upon realistic biological processes, including the virus replication cycle and mechanisms of drug therapy; CA was first used by Zorzenon dos Santos and Coutinho (2001) to model the evolution of HIV infection [6]. In 2009, Hadjiandreou, Raúl Conejero, and Ian Wilson presented a mathematical modeling and optimal control approach to formulate patient-specific, drug-specific treatment strategies for HIV-infected patients [7]. Most of the models suggested in recent years were developed from previous mathematical models, and the resulting models were independent of the input. The model proposed here is based on the input and was derived from 300 real patient records from the Iranian Research Center for HIV/AIDS (IRCHA).
K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 192–195, 2010. www.springerlink.com
Modeling and Control of HIV by Computational Intelligence Techniques
In this paper, a mathematical model of HIV and a controller for suggestion of one pharmaceutical group to a HIV+ patient have been proposed. The remaining sections of this paper are organized as followed. Section II describes the structure of the proposed HIV neural networks model. Sections III and IV present Haily Active Antiretroviral Therapy (HAART) method and result of proposed HIV neural networks model .Section V presents control of HIV by neural networks. Section VI describes the structure of the proposed HIV neural networks controller. Section VII present results of proposed HIV neural networks controller.
II. PROPOSED HIV NEURAL NETWORKS (NN) MODEL In an NN model, the first layer is a buffer, where sensory data are stored [8]. There are three buffers in this model. The elements of the first layer are connected either fully or arbitrarily to the elements of a second layer, called the feature layer or hidden layer. In this model,6 neurons are in the hidden layer. Each neuron from this layer combines information from different sensory elements and represents a possible feature. The neurons from this layer are connected fully to output neurons from an output layer called the perceptron layer. Figure 1 shows the proposed model.
Fig. 1 Structure of proposed HIV NN model This model has three inputs, one of them is yt-1 other one is yt-2 and finally one input comes from output of controller. In this model: yt= CD4 cell counts that were obtained from the model yt-1= yt in previous time yt-2= yt in two times ago xt= Input drug doses in HAART method that has been explained in table 1 and next chapter.
193
1053 CD4 tests were obtained from 300 patients of HIV positive from IRCHA for this paper. 853 of these data have been used for training and others have been used for testing of proposed HIV neural networks (NN) model. These data have been shown by yt*. Table 1 Drug doses Total dose(xt)
Unit
Drugs
Dosage
3TC+NFV+d4T
300+2250+80
2630
mg/day
AZT+3TC+NFV
600+300+2250
3150
mg/day
AZT+3TC+EFV
600+300+600
1500
mg/day
AZT+3TC+NVP
600+300+400
1300
mg/day
3TC+EFV+d4T
300+600+80
980
mg/day
AZT+3TC+kaletra
600+300+1000
1900
mg/day
3TC+d4T+NVP
300+80+400
780
mg/day
III. HAILY ACTIVE ANTIRETROVIRAL THERAPY (HAART) METHOD
Highly active antiretroviral therapy (HAART), which was introduced in 1996, is extremely effective compared to earlier therapy. HAART works by reducing the HIV viral load in the plasma (usually measured in the blood) to an undetectable level. Since the introduction of Highly Active Antiretroviral Therapy (HAART), a combination of at least three drugs from at least two classes of antiretroviral agents, treatment of HIV-infection has steadily improved. Antiretroviral combination therapy has decreased mortality and morbidity in HIV-disease during the last years [9]. A commonly adopted treatment policy suggests a firstline treatment consisting of a combination of two NRTIs and either a NNRTI or a PI [10]. In Iranian Research Center of HIV/AIDS (IRCHA), mostly doctors prescribe one of these seven pharmaceutical groups (Table 1) to a HIV + patient.
IV. RESULT OF PROPOSED HIV NEURAL NETWORKS MODEL Below figure shows a plot of Proposed HIV neural networks (NN) model. This figure shows that yt and yt* have good overlay. According to Figure 2 it appears that reaction of proposed model is nearly as same as system’s reaction to the same input.
IFMBE Proceedings Vol. 32
194
N.B. Shourabi
The block diagram has been suggested in this paper. As it seems, yt (cell/ml) is output of the mathematical model which in fact is the number of CD4 (Number of CD4 cells show range of immune of a body) of HIV+ patient. This CD4 is compared with CD4 Ref (1500 cell /ml).The difference between yt and CD4 Ref (cell/ml) is showed by ek. ek is appeared as ek -1 after passing z-1 . Δek is the result of ek and ek-1 comparison. Eventually, both Δek and ek are applied as two inputs of the controller. Dose rate is the controller output in this block diagram. Please keep in mind that drugs concentration and total dose rates have been applied in saturation level and its output, which is in fact input of HIV model, is one of seven pharmaceutical groups applied in Highly Active Antiretroviral Therapy (HAART). Fig. 2 Proposed HIV neural networks (NN) model
V. CONTROL OF HIV BY NEURAL NETWORKS Neural networks have been applied successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron have made it a popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear controllers. To control HIV, Model Reference Control (MRC) architecture for designing neural networks controller as seen in Figure 3 has been used. In this application, Model Reference Control (MRC) is kind of tracking. This controller has a fix goal that is CD4 count 1500 (cell/ml).
VI. PROPOSED NEURAL NETWORKS (NN) CONTROLLER Artificial intelligence has been used successfully in medical informatics for decision making, clinical diagnosis, prognosis, and prediction of outcomes [11, 12]. Intelligent modeling can be classified in three categories: • • •
Neural Networks Fuzzy Logic Genetic algorithm
Neural Networks (NN) was used for presenting controller in this paper. In an NN model, the first layer is a buffer, where sensory data are stored. There are two input buffers in this model. In this model, seven neurons are in the hidden layer. The model has one neuron in output layer. Figure 4 shows structure of the proposed HIV neural networks controller.
Fig. 3 Block diagram of control process
Fig. 4 Structure of proposed HIV NN controller IFMBE Proceedings Vol. 32
Modeling and Control of HIV by Computational Intelligence Techniques
195
VII. RESULT OF PROPOSED HIV NEURAL NETWORKS MODEL
Fig. 5 Suggestion one drug group by proposed controller Figure. 5 shows that proposed controller suggested one pharmaceutical group (group No.3) to a HIV+ patient. Controller suggested AZT-3TC-EFV to a HIV+ patient.
Fig. 6 Suggestion one drug group by proposed controller Figure. 6 shows that proposed controller suggested one pharmaceutical group (group No.7) to a HIV+ patient. Controller suggested 3TC-d4T-NVP to a HIV+ patient.
VIII. CONCLUSIONS In fact the proposed controller indicates one of engineering science applications in medical field and medical treatment and this controller can be used as a medical consultant to propose a group of medicine in HAART method.
REFERENCES 1. Aids epidemic 2008 at http://www.WHO.org 2. Yuen GJ, Ashforth EI (1993) Mathematical modeling of CD4 cell counts to assess drug effect during dose ranging trials for GR109714X (3TC) using NONMEM , International Conference on AIDS., Jun 6-11, 1993 3. Garcı J, Soto-Ramı´rez L, Cocho G et al (2006) HIV–1 dynamics at different time scales under antiretroviral therapy. Journal of Theoretical Biology238: 220–229 4. Rosenberg E, Davidian M, Thomas Banks H(2007)Using mathematical modeling and control to develop structured treatment interruption strategies for HIV infection. Drug and Alcohol Dependence 88s: S41–S51 5. Smith R (2008) Explicitly accounting for antiretroviral drug uptake in theoretical HIV models predicts long-term failure of protease-only therapy. Journal of Theoretical Biology 251:227–237 6. Shi V, Tridane A, Kuang Y (2008) A viral load-based cellular automata approach to modeling HIV dynamics and drug treatment. Journal of Theoretical Biology 253: 24–35 7. Hadjiandreoua M, Conejerosb R, Wilson D (2009) Planning of patient-specific drug-specific optimal HIV treatment strategies. Chemical Engineering Science 64:4024—4039 8. Haykin S (2005) Neural networks a comprehensive foundation. Pearson 9. Hogg R , Lima V ,Sterne JA et al (2008) Antiretroviral therapy cohort collaboration, life expectancy of individuals on combination antiretroviral therapy in high-income countries: a collaborative analysis of 14 cohort studies. Lancet, 26(9635): 293-9 10. Josephson F, Albert J, Flamholc L et al (2007)Antiretroviral treatment of HIV infection: Swedish recommendations 2 Scan din avian. Journal of Infectious Diseases 39: 86-507 11. Sawa T, Ohno-Machado L (2003) A neural network-based similarity index for clustering DNA microarray data. Computers in Biology and Medicine 33(1):1-15 12. Ljung L (1999) System Identification: Theory for the user. Prenticehall
Author: Neda Bazyar Shourabi Institute: Sharif University of Technology International Campus Street: Amirkabir City: Kish Island Country: Iran Email:
[email protected]
IFMBE Proceedings Vol. 32
Mathematical Modeling of Ebola Virus Dynamics as a Step towards Rational Vaccine Design Sophia Banton, Zvi Roth, and Mirjana Pavlovic Florida Atlantic University/Electrical Engineering (Bioengineering), Boca Raton, USA Abstract— The greatest roadblock in vaccine design is the lack of a complete understanding of how the immune system works. Greater understanding can be achieved via mathematical models that formalize biological ideas and their ability to extract non-intuitive information from biological experiments. Rational vaccine design (RVD) aims to maximize the production of pathogen-specific memory cells following vaccination. First, the Herz model for viral dynamics was simulated using MATLAB to analyze the naïve system’s response to Ebola – a deadly hemorrhagic virus. The model was initialized for the unvaccinated system using biologically based data on Ebola virus cultivation in Vero cell-cultures. Simulations revealed generally non-quantified specifics of Ebola infection such as the virus’ birth, natural death, and cellular infection rates. The second system, initialized with the rates above, modeled Ebola infection in a vaccinated individual using a modified Herz model with equations for memory T-cell formation and proliferation. T-cell populations were expanded under biologically mimicked rates and conditions. These results provide a quantified value for the number of memory T-cells necessary for vaccine efficacy in an individual; the specifications of what the vaccine must accomplish. Reversing the roles, these results may serve as RVD guidelines for biologically effective vaccines against the Ebola virus. Keywords— Rational Vaccine Design, Mathematical Modeling, CD8+ T-cells.
Ebola
Virus,
I. INTRODUCTION The Ebola virus (EBOV) is one of the deadliest human pathogens known. Ebola emerges sporadically in central Africa causing severe hemorrhagic fever in its victims. The virus is highly infectious and EBOV mortality rates range from 23 to 90%; death can occur within a few days [1, 2]. The potential use of the EBOV as an agent of bioterrorism has created global concern. Consequently, there is a great need to develop protective methods including vaccines that can quickly control the virus’ spread. Vaccines are preparations used as protective inoculations to provide immunity against specific diseases [2]. There are currently no licensed Ebola vaccines or therapeutics [3]. The greatest limitation on vaccine development is the fragmentary knowledge available on immune system function. Furthermore, traditional vaccine design methods do not comply with Ebola.
One method that can overcome this hurdle is rational vaccine design (RVD). RVD has two aims. The first is to minimize the trial-and-error approach behind current vaccine design strategies, and the second is to maximize the production of immune cells in the body post vaccination. A key component of the immune system that the successful vaccine must stimulate is the CD8+ cytotoxic T-cell response. The cytotoxic T-cell response is of primary importance for host survival and recovery during a viral infection. T-cells that are positive for the cluster-differentiaton-8 (CD8) marker are equipped with the cellular machinery necessary to become cytotoxic T-lymphocytes (CTLs), white blood cells that kill compromised host cells on contact. Mathematical modeling can overcome vaccine design limitations by providing insight into immune system functions such as the production of CTLs, and their associated memory cells. These models formalize biological ideas and permit the extraction of experimental results that may be inaccessible to a more intuitive biological approach [4]. Thus, the aims of this study were to use mathematical models to explore the cytotoxic T-cell response to the EBOV and to quantify vaccine efficacy at the cellular level as a first step towards RVD. These aims were completed via: (1) analysis of EBOV dynamics using the Herz model, (2) analysis of the cytotoxic T-cell response to Ebola using the Tuckwell model, and (3) extension of the Tuckwell model to include the production of memory cytotoxic T-cells.
II. EBOLA DYNAMICS IN THE UNVACCINATED SYSTEM A. The Herz Model The Herz model is a deterministic model for viral reproduction [5]. Viruses are acellular, obligate intracellular parasites. Subsequently, they are dependent on host cells for reproduction in a sequential five step process: attachment, penetration, replication, assembly, and release. The Herz model describes all events using a system of three equations: x(t) = number of uninfected cells; y(t) = number of infected cells; and v(t) = number of free virus particles. Equation (3)’ includes a modification made by Tuckwell and others since contact between a virus and a cell reduces the number of available free virus [5].
K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 196–200, 2010. www.springerlink.com
Mathematical Modeling of Ebola Virus Dynamicss as a Step towards Rational Vaccine Design
197
(1)
III. THE NAÏVE CYTOTOXIC T-CELLL RESPONSE TO THE EBOV
(2)
A. The Tuckwell Model
(3) (3)’ d and virus In equation (1) λ, μ, and β are the supply, death, infection (transmission) rates of uniinfected cells respectively. In equation (2) α is the naturaal death rate of infected cells. In equation (3), c is the rate att which infected cells produce virus (viral burst) and γ is the natural attrition of free virus particles.
The Herz model with the identiffied parameters serves as the basis for exploring an immunee response to the EBOV. In the basic immune response, innfected cells (y) can be killed by cytotoxic T-cells. A systeem of equations supplied by Tuckwell and others modifies thee Herz system to account for a cytotoxic T-cell response [5].. In this revised model a new term dz/dt represents the numbber of cytotoxic T-cells in the system and dy/dt is modified too account for the number of infected cells killed by cytotoxic T-cells. –
(2)’ (4)
B. Parameter Estimation The EBOV causes 80-90% cell monolayeer destruction in seven days at a low multiplicity of infection n (MOI) of 0.01 in Vero cells [6]. Vero cell lines are common nly used in virus and vaccine studies. The MOI, the proportio on of free virus particles to uninfected cells, is used to initiaalize parameters v and x: 1 virus per 100 uninfected cells. In the absence of viral contact, uninfected cells (x) are subject to homeostasis pply and natural between cell division and cell death. The sup death rates are proportional. Parameters β, β c, and γ are determined by the model. C. Results i devastated in At a low MOI of only 0.1, the system is seven days. A transmission coefficient (β) of 50%, virus attrition rate of 1%, and burst rate (c) of 80% are adequate to cause the entire population of uninfecteed cells in the system to become ‘infected’ in seven days (F Fig. 1). Multiple simulations (not shown) reveal that the parameter p most critical for viral infectivity is the initial am mount v0 of free virus in the system.
Fig. 1 Ebola dynamics in the unvaccinatedd system x(t) = uninfected cells; y(t) = infected cells; v(t) = free f virus particles and v’(t) = total virus particles
In equation (2)’ term yz is the num mber of infected cells that are killed by the cytotoxic T-cells; is the rate constant for removal. In equation (4), term kyz is the number of T-cells that make contact with infected host cells via k – the contact and expansion rate. Delta (δ) is tthe natural death rate of cytotoxic T-cells. B. Parameter Estimation mulation were preserved. The parameters from the Herz sim The Tuckwell model contains four uunknowns: z, ρ, k, and δ. The number of cytotoxic T-cells ((z) at time zero must be determined. Cytotoxic T-cells are cconsidered naïve prior to contact with infected cells. The currrent system of equations does not allocate a separate variable for naïve cells. Rather, z0 represents the number of Ebola sppecific naïve cells. The initial number of EBOV ccytotoxic T-cells can be estimated with inference. There are approximately 100 trillion cells in the human body, oof which 10 trillion are lymphocytes [7]. Among lymphoocytes, cytotoxic T-cells total 19% [8, 9]. With the cellular population in the model set to 100, the number of cytotoxic T-cells is 1.9. Of the 1.9 cells a small number are potentially Ebola specific, since the EBOV is regiospecific and sporadicc. HIV studies in healthy individuals have revealed that naïvee T-cell counts towards a virus can be as many as 1 in 100 [[10]. Thus for this study, the initial number of Ebola specificc naïve cytotoxic T-cells (z0) is set to 0.019. The natural death rate of cells iss generally referred to as the spontaneous apoptosis constantt in biological literature. For cytotoxic T-cells this value hhas been observed to be around 14.5% [11] and 13.3% [12]] at baseline levels. Thus (δ) is taken to be the mean of the tw wo values: 13.9. Parameters ρ and k must bee revealed by multiple simulations of the model. These paarameters cannot be very
IFMBE Proceedings Vol. 32
198
S. Bantonn, Z. Roth, and M. Pavlovic
high. The immune system does not deliver high clearance rates for viruses with mortality rates of 23 to 90%. C. Results The naïve cytotoxic T-cell response is minutely successful at viral containment (Fig 2.) with contact (k) and killing ( ) rates of 0.1. Multiple simulations (not shown)) revealed these rates were the most biologically sound. Notaably, the largely augmented cytotoxic T-cell population could d not rescue the host cell population and indicates a need for vaccination. v The output is still reflective of what has been obsserved clinically with Ebola infections, which can kill in sev ven to fourteen days. On the seventh day the number of cyto otoxic T-cells is greatly increased, but the response is too slow w as the system has already crashed by day five. On day fourrteen the system is rebounding, but given the hemorrhagic sy ymptoms of the virus, cell and tissue repair are virtually im mpossible at this stage of the disease. p to In Fig. 2 elevated T-cell levels are proportional systemic viral load. As the virus dim minishes, T-cell populations decrease, allowing fewer opporttunities for cell infection. This suggests that a large number of cytotoxic Tcells must be available for immune-protectiion early in the disease course. This can be provided by thee administration of a vaccine that can promote cytotoxic T-cell T activation and memory T-cell production.
the active infection. This is followedd by a contraction phase, in which cell numbers are reduceed by apoptosis leaving behind memory cells for protection against future infections [4]. The mathematical model musst contain equations that account for these events. Modelingg this process cannot be oversimplified as each step mayy be under independent control [4]. For vaccine design, thhe aim is to mimic this process by stimulating the imm mune system with the rationally designed vaccine. This ssimulation represents Tcell mechanisms following vaccinne administration, giving the biologist an insight into futurre studies in regards to issues such as vaccine dosage. T The model is simulated based on the following biologically established facts: • • • • •
Contact with antigen (virall/vaccine) causes a 5000fold proliferation and diffferentiation of CD8+ Tcells into effector cells (CT TLs) [13]. The CTL response peaks aat seven days, regardless of the pathogen[13]. Following proliferation, ccontraction –an equally rapid and massive attriition of primed cells– (~90%) occurs in 2–3 weeeks [13]. At the end of the cycle, homeostatic control by apoptosis reduces the memory cell population to about 5% of the peak numbber of CTLs [4]. Roughly (95%) of the T-ceells activated at the onset die; 5% survives for extendded periods [13].
The following equations represent the linear differentiation model created by De Boer et al. 2003 [14]. The original model is a system of three equationss.
Fig. 2 General Cytotoxic T-cell Response to Ebola Viruus (Unimmunized) x(t) = uninfected cells; y(t) = infected cells; v(t) = free f virus particles and z(t) = cytotoxic T-cells.
IV. GENERATING MEMORY CYTOTOXIC T-CCELLS AGAINST EBOLA
A. The De Boer Model 8+ cytotoxic TUpon contact with viral antigen(s), CD8 cells differentiate into two populations: effector and memory T-cells. As effector cells, activateed cytotoxic Tlymphocyte (CTLs) kill infected host cells. The T early phase of CTL proliferation and expansion is essential to subdue
t
(5)
t>T
(6)
t>T
(7)
The model is time dependent, with a critical time denoted as (T) – the time at which proliferaation ends. Equation (5) represents the number of CTLs forrmed by cytotoxic T-cell expansion (t < T). T is set to seven since the CTL population peaks at day seven [14]]. Equation (6) describes the contraction event during whichh CTL numbers decrease (t > T). Equation (7) represents the number of memory cells in the system after the expansionn phase; r is the rate at which activated cytotoxic T-cells bbecome memory cells (t ≥T). In previous publications memoory cell apoptosis ( ) is set to zero [14]. In this study the deeath rate equals the death rates of all cytotoxic T-cells annd their derivatives to facilitate a greater biological unnderstanding. De Boer's model does not account for nonzerro steady-state value M∞ of the memory cells. This can be acccomplished by adding a constant term λA in equation (6).
IFMBE Proceedings Vol. 32
Mathematical Modeling of Ebola Virus Dynamics as a Step towards Rational Vaccine Design
B. Parameter Estimation The values for the initial number of naïve T-cells were preserved as were the natural death rates. Hence A0 = 0.019 and δ = 0.0139. Parameters ρ and r must be revealed by the repeated simulations of the model. There are no memory cells at time zero. C. Results Cytotoxic T-cell expansion peaks at day seven with memory cell formation when ρ = 1.2 and r = 1 (Fig. 3). At these values the system shows biological profiles for seven day T-cell expansion (5000 fold) and 95% contraction. CTL derived memory cells stabilize at around 5% of the peak value of CTLs. The model reflects a twenty-one day period and this is concurrent with biological surveillance of the CTL immune response. This is the type of response the vaccine must trigger in order to generate a viable population of CTLs towards the Ebola virus.
199
infected cells, hence the term (pM) [14]. During proliferation some memory cells become CTL effectors at a rate of r, and some die naturally at rate δM.. The other modification involves equation (4)’ for the number of effector CTLs in the system (z). As before, (z) increases when T-cells are stimulated by infected cells at a rate k and die naturally at rate δ. Yet, the presence of memory cells at the beginning of the infection impacts the value of (z). Some memory cells proliferate into CTL effectors (z) and the rate of formation of these cells is represented as the term (rM). Once activated the CTLs also go through proliferation (pz) at rate p, which is presumably equal to the rate of proliferation of memory cells above. A. Parameter Estimation Parameters values from the previous simulations were preserved. Values p and r are unknown and must be revealed by multiple simulations of the model. z0 is no longer initialized at zero, but by the output of the De Boer system for memory cell production (section IV). The magnitude of initial memory cells, M, dictates the strength of the immune response. B. Results
Fig. 3 Cytotoxic T-cell Expansion and Memory Cell Production A(t) = Expanding CTLs; A’(t) = Contracting CTLs; M(t) = Memory T-cells
The rates of proliferation (p) and conversion of memory cells into CTLS (r) were kept low to maintain the integrity of the model; 0.1 and 0.05 respectively. In this system the EBOV is contained (Fig. 4) as indicated by the low concentration of free virus as compared to Fig. 1. The model also shows the established biological profile for the CD8+ T-cell response throughout the course of infection: expansion, contraction, and memory cell stabilization.
V. EBOLA DYNAMICS IN THE VACCINATED SYSTEM In the absence of a vaccine, the naïve T-cell response was unable to rescue the system (section III). The vaccinated system is thus expressed as a revision of the Tuckwell model with the consideration that the initial number of cytotoxic T-cells (zо) is the critical factor that determines the magnitude of the immune response. During a challenge with the EBOV, both circulating vaccine-induced memory and naïve T-cells will respond to the virus. The revised model is as follows: (4)’
Fig. 4 Ebola Challenge in the Vaccinated System x(t) = uninfected cells; y(t) = infected cells; v(t) = free virus particles, z(t) = cytotoxic T-cells, and M(t) = memory T-cells
(8) The primary modification is equation (8) for the memory cell population. Memory cells proliferate after contact with
VI. DISCUSSION The series of mathematical models presented in this study assessed the virus centered CD8+ T-cell response
IFMBE Proceedings Vol. 32
200
S. Banton, Z. Roth, and M. Pavlovic
towards the EBOV. Each model provided insight into how the immune system works, while revealing thresholds for critical parameters. The vaccinated CTL response contained the EBOV’s growth, despite its extreme virulence. The involvement of memory T-cells was successful, but not without limitations. There was still cellular damage to the system, which could not be eliminated with restrictions on biological rates and conditions. The limitations are of two types: memory T-cell ability, and the isolation of the CD8+ response. In principle, memory T-cells circulate through the entire body and are not immediately available to combat the virus. These cells travel and sometimes reside in various tissues [13]. T-cells are not activated by free viruses and require contact with infected cells instead. This contact can only occur after the EBOV has established an infection. Additionally, this study focused solely on the CD8+ T-cell response. There are other branches of the immune system to explore, such as the humoral response which is more suitable for removing freely circulating viruses, a need established by the models presented in this study. Nonetheless, the methodology outlined solidifies the use of mathematical models for establishing the specifications of a rationally designed Ebola vaccine.
2. Bente D, Gren J, Strong JE, et al. (2009) Disease modeling for Ebola and Marburg viruses. Disease models & mechanisms 2(1-2):12-17 3. Terando A, Faries M, Morton D (2007) Vaccine therapy for melanoma: Current status and future directions. Vaccine 25:B4-B16 4. Callard R, Hodgkin P. (2007) Modeling T- and B-cell growth and differentiation. Immunological Reviews 216(1):119-129 5. Tuckwell H (2003) Viral population growth models. University of Paris, Paris 6. Titenko A, Andaev EI, Borisova TI (1992) Ebola virus reproduction in cell cultures. Vopr Virusol. 37(2):110-113 7. Nowak, MA, May, RM (2000). Virus Dynamics. Cambridge University Press, Cambridge UK. 8. Tortura, GJ, Funke BR, Case, C. Microbiology: an Introduction 9th edition. Benjamin Cummings 9. J. E. Berrington, D. B. A. C. F. A. J. C. G. P. S. (2005). "Lymphocyte subsets in term and significantly preterm UK infants in the first year of life analysed by single platform flow cytometry." Clinical & Experimental Immunology 140(2): 289-292. 10. Komanduri, Krishna V.; McCune, Joseph M. Komanduri, Krishna V.; McCune, Joseph M. Volume 344(3), 18 January 2001, pp 231-232 11. Roger, P.-M., J. Durant, et al. (2003). "Apoptosis and proliferation kinetics of T cells in patients having experienced antiretroviral treatment interruptions." J. Antimicrob. Chemother. 52(2): 269-275. 12. Kessel A, Rosner I, Rozenbaum M et al. Increased CD8+ T Cell Apoptosis in Scleroderma Is Associated with Low Levels of NF-κB. Journal of Clinical Immunology. 2004 January;24(1):30-36 13. Luu RA, Gurnani K, Dudani R et al. (2006) Delayed Expansion and Contraction of CD8+ T Cell Response during Infection with Virulent Salmonella typhimurium. J Immunol. 177(3):1516-1525 14. Kohler B. (2007) Mathematically modeling dynamics of T-cell responses: Predictions concerning the generation of memory cells. Journal of Theoretical Biology. 245(4):669-676.
REFERENCES 1. Oswald WB, Geisbert TW, Davis KJ et al. (2007) Neutralizing antibody fails to impact the course of Ebola virus infection in monkeys. PLoS Pathogens 3(1)
IFMBE Proceedings Vol. 32
Respiratory Impedance Values in Adults Are Relatively Insensitive to Mead Model Lung Compliance and Chest Wall Compliance Parameters Bill Diong1, Michael D. Goldman2, and Homer Nazeran2 2
1 Engineering, Texas Christian University, Fort Worth, TX, TX, U.S.A. Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, U.S.A.
Abstract— Impulse Oscillometry (IOS) measures respiratory resistance and reactance from 5 to 35 Hz. These data were obtained from 2 groups of adults enrolled in a study of IOS compared to other lung function testing methods: 1 group of 10 adults with no identifiable respiratory disease and 1 group of 10 adults with varying degrees of COPD. We used Mead’s model of the respiratory system to derive parameter estimates of central inertance (I), central and peripheral resistances (Rc, Rp), and lung, chest wall, bronchial, and extrathoracic compliances (Cl, Cw, Cb, Ce) by least-squares-optimal fitting to the IOS data. This procedure typically produced multiple optimal solutions, with estimates of Cl and of Cw that varied by 2 to 3 orders of magnitude and were several orders of magnitude larger than expected physiological values, up to 8.6x105 L/kPa for Cl and 2.6x105 L/kPa for Cw. We then performed constrained optimization of normal adult data with both Cl and Cw parameters fixed at 2 L/kPa, which produced a groupaveraged LS error that was 19.3% larger than for unconstrained optimization: Rc, I, Rp, Cb and Ce parameters changed by 0.99%, 1.76%, 22.0%, 11.9% and 10.6%, respectively. Constrained optimization of the COPD adults data with the Cw fixed at 2 L/kPa and Cl fixed first at 1.5 L/kPa and then at 1.1 L/kPa produced group-averaged LS errors that were 23.8% larger and 23.6% larger, respectively, than for unconstrained optimization: Rc, I, Rp, Cb and Ce parameters changed by 2.12%, 4.88%, 18.5%, 6.46% and 25.5%, respectively, for Cl = 1.5 L/kPa; they changed by 1.64%, 4.30%, 18.4%, 6.64% and 18.5%, respectively, for Cl = 1.1 L/kPa, all relative to the unconstrained case. We conclude that the Mead model’s impedance and its parameter estimates for normal and COPD adults are relatively insensitive to the Cl and Cw parameters. Keywords— Respiratory impedance, respiratory system model, parameter estimation, impulse oscillometry, COPD.
I. INTRODUCTION Studies on a better method to assess human lung function have been continuing, since the existing standard lung function test of spirometry requires subjects to inhale and exhale with maximum effort, which may be troublesome especially to the elderly and children, leading to unreliable results. One alternative to spirometry is the method of forced oscillation [1], and the Impulse Oscillometry System (IOS) [2] in particular, which requires only the subject’s passive
cooperation. This method allows them to breathe normally, with a nose clip to close the nares. Brief 40ms electrical pulses, producing 60-70 ms mechanical displacements of the speaker cone, result in pressure waves from the mouth inwards being superimposed on normal respiratory airflow into the lungs. Both the pressure stimulus and the resulting airflow response are recorded to provide information about the respiratory system’s forced oscillatory impedance that can be used to detect and diagnose respiratory diseases. The resistive and reactive (ZR and ZX) impedance values that are calculated depend on the respiratory system’s ‘mechanical’ resistances, compliances and inertances, so they can also be correlated with models consisting of electrical components that are analogous to those ‘mechanical’ components. Then parameter estimates for such models may provide an improved means of detecting and diagnosing respiratory diseases. Recently, studies have been conducted to compare the relative merits of several models of varying complexity and the 7-element Mead model (see Fig. 1) was typically found to yield the lowest error [3, 4]. However, other issues besides minimizing the error in curve fitting must be considered. Specifically, the Mead model usually yielded unphysiologically large values as the optimal estimations of the lung and chest wall capacitances (Cl and Cw), the majority of those values being several orders of magnitude larger than the expected range of values. Moreover, the estimation results for the Mead model typically produced least-squaresoptimized estimates of Cl and also of Cw that varied by 2 to 3 orders of magnitude, i.e., multiple optimal solutions were produced.
Fig. 1 Seven-element Mead model
K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 201–203, 2010. www.springerlink.com
202
B. Diong, M.D. Goldman, and H. Nazeran
II. MATERIALS AND METHODS IOS measurements were obtained from 2 groups of randomly selected adults enrolled in a study of IOS compared to other lung function testing methods: 1 group of 10 adults with no identifiable respiratory disease and 1 group of 10 adults with varying degrees of COPD assessed by history and conventional spirometry. We used Mead’s model of the human respiratory system to derive parameter estimates of central inertance (I), central and peripheral resistances (Rc, Rp), and lung, chest wall, bronchial, and extrathoracic compliances (Cl, Cw, Cb, Ce) by least-squares-optimal unconstrained fitting to the IOS resistive and reactive impedance values at 5, 10, 15, 20, 25 and 35 Hz. The procedure used to derive these estimates was as described in [3, 4] and is not repeated here. We then performed constrained optimization of the normal adults data with both Cl and Cw parameters fixed at 2 L/kPa; these values being generally accepted to be the reference values for lung and chest wall compliances in normal adults [5]. Then constrained optimization of the COPD adults data with the Cw fixed at 2 L/kPa and Cl fixed at 1.5 L/kPa was performed; these values approximating the dynamic compliances at the breathing rate of 15 breaths per minute in mild-to-moderate COPD [5]. Finally, constrained optimization of the COPD adults data with the Cw fixed at 2 L/kPa and Cl fixed at 1.1 L/kPa was performed; these values representing moderate-to-severe COPD [5].
III. RESULTS The least-squares-optimal unconstrained fitting of the Mead model to the normal adults’ IOS data typically produced multiple optimal estimates of Cl and also of Cw that varied by 2 to 3 orders of magnitude, while the estimates of the remaining parameters manifested very small or no differences. Arbitrarily choosing one of these optimal solutions to represent “the” Mead model corresponding to the IOS test data being fitted resulted in values of Cl from 0.22974 to 855000 (mean 97353, SD 183666) L/kPa, and values of Cw between 0.57277 and 262000 (mean 47322, SD 81592) L/kPa. Furthermore, the minimal values of optimal Cl were mostly not paired up with the minimal values of optimal Cw. The least-squares-optimal unconstrained fitting of the Mead model to the COPD adults’ IOS data also typically produced multiple optimal estimates of Cl and of Cw that varied by 2 to 3 orders of magnitude, while the estimates of the remaining parameters again manifested little or no differences. Arbitrarily selecting one of these optimal solutions to represent “the” Mead model corresponding to the test data being fitted resulted in Cl between 1.2 and 172000
(mean 36271, SD 40501) L/kPa, and in Cw from 0.24 to 23000 (mean 5882, SD 8589) L/kPa. The minimal values of optimal Cl were again mostly not paired up with the minimal values of optimal Cw. Constrained optimization of the normal adults data with both Cl and Cw parameters fixed at 2 L/kPa produced a group-averaged LS error that was 19.3% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by a group-average of 0.99%, 1.76%, 22.0%, 11.9% and 10.6%, respectively, relative to their unconstrained values. Constrained optimization of the COPD adults data with Cw fixed at 2 L/kPa and Cl fixed at 1.5 L/kPa produced group-averaged LS errors 23.8% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by a group-average of 2.12%, 4.88%, 18.5%, 6.46% and 25.5%, respectively; these changes being relative to the unconstrained case. Finally, constrained optimization of the COPD adults data with Cw fixed at 2 L/kPa and Cl fixed at 1.1 L/kPa produced group-averaged LS errors 23.6% larger than for unconstrained optimization. Rc, I, Rp, Cb and Ce parameters changed by 1.64%, 4.30%, 18.4%, 6.64% and 18.5%, respectively, relative to their unconstrained values.
IV. CONCLUSIONS The 7-element Mead model typically fits IOS respiratory impedance data from normal and COPD adults with lower least-squares error than other commonly used models. But estimation of its parameters also usually yields multiple optimal solutions with estimates of the lung and chest wall compliances (Cl and Cw) that varied by 2 to 3 orders of magnitude. Moreover, these least-squares-optimized estimates of Cl and also of Cw were several orders of magnitude larger than the expected physiological values. This study has enabled us to conclude that the respiratory impedance values produced by the Mead model are relatively insensitive to its lung compliance and chest wall compliance parameters. In addition, the other Mead model’s parameter estimates derived from IOS data in normal and COPD adults are relatively insensitive to that model’s lung compliance and chest wall compliance parameters. In particular, the large airway parameters of resistance and inertance change in value by an average of less than 5% even when these compliance parameter values are changed by a few orders of magnitude. The small airway parameters change a little more; the peripheral airway resistance changes in value (on average) by less than 22%, while the bronchial compliance changes in value (on average) by less than 12%. We suggest that the small-pressure-small-volume
IFMBE Proceedings Vol. 32
Respiratory Impedance Values in Adults Are Relatively Insensitive to Mead Model Lung Compliance
perturbations produced by IOS are not likely transmitted beyond airways smaller than 2 mm in diameter, consistent with direct measurements of pressures reported by Macklem and Mead [6]. This leads to trivial pressures applied to the lung and chest wall, and accordingly, their compliances are not significant factors in model-derived calculations of the remaining Mead model parameters.
REFERENCES 1. DuBois AB, Brody AW, Lewis DH, Burgess BF (1956) Oscillation mechanics of lungs and chest in man. J. Appl. Physiol. 8:587-594 2. VIASYS MasterScreen IOS. VIASYS/Jaeger, Yorba Linda CA, USA 3. Diong B, Rajagiri A, Goldman M, Nazeran H (2009) The augmented RIC model of the human respiratory system. Med Biol Eng Comput 47:395–404
203
4. Diong B, Nazeran H, Nava P, Goldman M (2007) Modeling human respiratory impedance. IEEE Engineering in Medicine and Biology Society Magazine: Special Issue on Respiratory Sound Analysis 26:48–55 5. Mead J (1969) Contribution of compliance of airways to frequencydependent behavior of lungs. J. Appl. Physiol. 26(5):670-673 6. Macklem P, Mead J (1967) Resistance of central and peripheral airways measured by a retrograde catheter. J. Appl. Physiol. 22(3): 395401 Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 32
Bill Diong Texas Christian University 2840 W. Bowie St. Fort Worth U.S.A.
[email protected]
A Systems Biology Model of Alzheimer’s Disease Incorporating Spatial-temporal Distribution of Beta Amyloid C.R. Kyrtsos1,2 and J.S. Baras1,2,3 1
Fischell Department of Bioengineering, University of Maryland, College Park, MD, US 2 Institute for Systems Research, University of Maryland, College Park, MD, US 3 Department of Electrical Engineering, University of Maryland, College Park, MD, US
Abstract–– Alzheimer’s disease (AD) is one of the most devastating neurological disorders that affects the elderly. Pathological characteristics at the tissue and cell level include loss of synapses and neurons in the hippocampal and cortical regions, a significant inflammatory response, and deposition of the beta amyloid (Aβ) protein in brain parenchyma and within the basement membrane of cerebral blood vessels. These physical changes are believed to lead to gradual memory loss, changes in personality, depression and loss of control of voluntary muscle movement. Currently, 1 in 8 individuals over age 65 are affected by AD; this translates to over 5 million afflicted individuals in the US alone. Aβ has long been implicated as the main culprit in AD pathogenesis, though cholesterol, apolipoprotein E (apoE) and the low density lipoprotein-related receptor protein (LRP-1) are now also believed to play a role. In this paper, we describe a spatialtemporal mathematical model that has been developed to study the interactions between cholesterol, Aβ and LRP-1. Models for neuron survival, synapse formation and maintenance, and microglial motion have also been discussed. The paper concludes with a description of the proposed algorithm that we will use to simulate this complex system. Keywords–– Alzheimer’s disease, cholesterol, LRP-1, apoE, math modeling.
I. INTRODUCTION Lipid metabolism, particularly the processing of cholesterol, and the expression level of LRP-1, have come to the forefront of AD research in the past decade. Recent studies have demonstrated that cholesterol levels help to regulate the generation and clearance of Aβ [9, 13, 19]. The LRP-1 receptor, located at the blood-brain barrier (BBB) and on the neuronal plasma membrane, is responsible for clearance of Aβ from the brain and transport of cholesterol into the neuron, respectively. Previous research has shown that the expression of LRP-1 at the BBB decreases with age [6]. This decrease in LRP-1 has been implicated in the buildup of Aβ in the brain and breakdown of the neurovascular unit during aging, possibly leading to AD [23]. There is, however, a disagreement in the literature about whether high or low cholesterol contributes to AD pathogenesis.
Decreased brain cholesterol has been noted by several studies. A recent study by Liu et al demonstrated that increasing the level of APP (Amyloid Precursor Protein), particularly the γ-secretase cleavage product AICD, led to a decrease in LRP-1 expression levels, an increase in apoE levels and a decrease in cholesterol [15]. Further studies have shown that decreased brain levels of cholesterol are found in both apoE4 knock-in mice and in AD brains [10, 14]. The Framingham study which tracked 1894 individuals over the course of 16-18 years found that low or normal levels of cholesterol were correlated with lower cognitive performance levels [7]. Conversely, high cholesterol has also been shown to play a possible role in AD pathogenesis [19]. Since high cholesterol is believed to lead to an increased plaque load and subsequent neurodegeneration, several studies have looked at the effects of statins on AD pathogenesis. Fassbender et al studied the effect of simvastatin and lovastatin on primary neurons and found that this led to decreased levels of Aβ40 and Aβ42 [8]. Refolo et al expanded on this by studying the effects of the cholesterollowering drug BM15.766 on transgenic mice expressing an AD phenotype and saw that plasma cholesterol levels, brain Aβ peptides and Aβ load were all decreased [20]. One of the most interesting recent studies clearly demonstrates that reducing the levels of brain cholesterol may not prevent AD pathogenesis, as has been suggested by previous studies. In this study, Halford and Russell crossed transgenic AD mice with cholesterol 24-hydroxylase knockout mice, and found that Aβ plaque deposition did not vary statistically between the mutant and AD control strains [9]. The exact relationship between APP, Aβ and cholesterol processing, and the effects that low brain cholesterol and LRP-1 expression levels have on neurodegeneration in AD is currently not well understood. The goal of this paper is to develop basic mathematical framework to study how the molecules may interact with each other during the initiating phase of AD pathogenesis. By using a systems biology approach to studying these various interactions and having the ability to precisely alter the expression of individual proteins of interest, we will be able to study the effect of
K.E. Herold, W.E. Bentley, and J. Vossoughi (Eds.): SBEC 2010, IFMBE Proceedings 32, pp. 204–208, 2010. www.springerlink.com
A Systems Biology Model of Alzheimer’s Diseasee Incorporating Spatial-temporal Distribution of Beta Amyloidd
minor perturbations on the system as a wholee. This provides direction for future experimental research.
II. LOCAL NETWORK FOR Aβ & CHOLESTEROOL PROCESSING Beta amyloid (Aβ) is considered to be the t key protein and causative factor in AD due to its increased concentration in the brain and presence as the t core protein in amyloid plaque deposits [21]. Previou us studies have shown that cleavage of the amyloid precursor protein (APP) by β-secretase and followed by γ-secretase leads l to the 3942 amino acid length amyloidogenic form ms of Aβ. The majority of Aβ is believed to be produced by y neurons in the brain; very little has been shown to cross fro om the blood to the brain via the BBB. Alternative splicing g of APP by αsecretase leads to generation of sAPPα, a no on-toxic product believed to play a role in neuronal excitabiliity and enhance synaptic plasticity, learning and memory [17 7]. The role of cholesterol in Aβ processing is just starting g to be studied, though a clear understanding has not yet been n reached.
205
decreases in hippocampal volume. Significant deposition in the leptomeningeal vessels in a proocess known as cerebral amyloid angiopathy (CAA) is also observed. For this reason, our model focuses on the ddistribution of Aβ within the hippocampus. The decision to use a statistical orr deterministic model was determined by calculating the numbeer of Aβ molecules. The volume of the hippocampus in a heaalthy adolescent is ~5.8-6 cm3 [16]. The endpoint concentratioon levels of total brain Aβ of both healthy and AD individuals haas previously been studied and found to be 0.6 µM and 8.8 µM M, respectively [5]. This corresponds respectively to a total of 2.2x1015 and 3.2x1016 Aβ molecules in a healthy and an AD bbrain. Thus, looking at a single neuron or a small number of neeurons would indicate that a stochastic process should be used to model Aβ distribution. However, since we are interested in studying much larger portions of the hippocampus containiing hundreds to thousands of neurons in our simulations, the num mber of molecules that we would be dealing with would be on the order of >1014, which cannot easily or efficiently be modeleed stochastically. We can overcome this difficulty by using a reeaction-diffusion equation (RDE) to model our system:
where c is the concentration of beeta amyloid, DAB is the diffusion coefficient of Aβ and Ri represents the reactions that occur within the control volumee. The reaction term can be expanded to account for prodduction, degradation by proteases (specifically, degradation by the insulin-degrading protease, IDE), fibril formation annd uptake by microglia, defined respectively as: ,
, ,
,
Ψ Fig. 1 Cholesterol and Aβ processing in thee brain In the brain, cholesterol is generated fro om one of two pathways: de novo synthesis, or by uptaake from brain lipoproteins [1, 4, 18]. As we age, de novvo synthesis by neurons decreases to a trivial level, and the t majority of cholesterol is synthesized by astrocytes an nd delivered to neurons via apoE [1]. Figure 1 shows Aβ and cholesterol processing, as well as their interrelationship ps between the two pathways.
III. DERIVATION OF THE RDE FOR AΒ DISSTRIBUTION The hippocampus is the initial and most affected region of the brain, with severe neuronal loss leadin ng to significant
Beta amyloid production occurs onlly in the cell body of the neuron, whose location is represennted by a delta function. The Aβ production rate is given by β, a Poisson distribution whose mean depends on several facctors: ,
,
,
1
The production rate is also dependent on the general stress level, the extent of inflammation, and whether or not the neurons in the local environmeent are re-modeling or recovering from a recent insult. Protease degradation is modeled at the macroscopic leveel and depends on the reaction rate (α), as well as the cooncentrations of Aβ and IDE. A simplified model of fibril fformation is given where γ1,2 represent the respective fibril attachment rates for Aβ onto a fibril and for two monoomers coming together.
IFMBE Proceedings Vol. 32
206
C.R. Kyrtsos and J.S. Baras
Finally, the rate at which microglia uptake Aβ is given by ε, and is dependent both on the Aβ concentration as well as the location of the microglia, which, like neurons, has been modeled using a delta function. The flux of Aβ across the blood-brain barrier is very important to the overall dynamics of the model. The majority of clearance of Aβ from the brain occurs by transport of apolipoproteinE-bound Aβ via LRP1 receptors. Mathematically, the net flux can be modeled as the sum of the passive and active transport. Passive transport can be modeled using simplified Kedem-Katchalsky equations that account for leakage across a semi-permeable BBB. This state only occurs in later stages of AD when cerebrovascular dysfunction leads to local breakdown of the BBB due to either plaque deposition (CAA) or inflammation of the neurovascular unit. For our modeling purposes, only the active transport has a non-trivial contribution to the early pathogenic stages and the rate of reaction is modeled using Michaelis-Menten kinetics: ,
R
,
where Rmax is the maximum rate of reaction, KM is the Michealis constant and c represents the concentration of Aβ within a narrow boundary region around the BBB. The net reaction rate is also dependent upon the rate at which Aβ binds to apoE to be transported and on the density of LRP1 receptors along the BBB: 1,
,
Aβ
For our model, the role of LRP1 to the reaction rate will be written as a ratiometric coefficient, L, where L ranges from 0 to 1 as follows:
appropriate to assume that this will be the form localized near LRP1 receptors.
IV. DERIVATION OF AΒ DIFFUSION COEFFICIENT The diffusion coefficient for Aβ (DAB) moving through brain tissue is not a readily available parameter due to the difficulty in obtaining accurate measurements. To overcome this, DAB was calculated using a combination of the Stokes-Einstein equation and a previously described method [12]. The effective diffusion coefficient through brain can be given by:
where D is the theoretical value for the diffusion coefficient given by the Stokes-Einstein relationship in a fluid medium free of any obstacles and λ is the tortuosity, or the average hindrance of a complex medium relative to an obstacle-free medium. In the brain, λ is typically ~1.6, though this value can increase during insult or stress to the brain, decreasing the effective diffusion. The Stokes-Einstein relationship is: 6 where kB is the Boltzmann constant (1.38e-23 J/K), T is the temperature in Kelvin (310.15 K), η is the effective viscosity (0.7-1 mPa·s, [2]), and r is the effective radius of Aβ (estimated as 2 nm, [11]). Substituting these values into the given equations gives an effective diffusion coefficient of DAB ~1.14x10-6 cm2/s.
1 1
V. MICROGLIA MODEL
The rate at which apoE and Aβ bind is defined macroscopically as: Aβ
σ Aβ apoE
The total flux of Aβ across the BBB is described by the reaction rate (Rs) and the total cross-sectional area of BBB that we are studying: Ψ
R A
Experimental values for the Michaelis constant and the Rmax for Aβ40 were derived by Shibata et al to be 15.3 nM and 70-100 nM, respectively [22]. Values for Aβ42 are somewhat unnecessary since LRP-1 predominantly transports the 40 amino acid length protein as opposed to the 42 amino acid form. Additionally, Aβ40 is the form found in cerebrovascular plaques, so it is much more
The immune response of the central nervous system (CNS) is separate from that of the rest of the body, in part due to the presence of the blood-brain barrier (BBB). CNS macrophages, known as microglia, are distributed relatively uniformly throughout the brain tissue during normal, resting states. The movement of microglia has been modeled using two separate equations dependent on whether the microglia is in a ramified or in an activated state actively traveling up a concentration gradient of chemoattractant. While in the ramified state, microglia are modeled using a simple random walk along a continuous plane: ,,
,,
where xc is a matrix that tracks the position of the center of mass of each of the ith microglia at time t, and ξ(t) is Gaussian white noise with the constraint that the center of mass of two microglia cannot be less than 2R (R=microglial
IFMBE Proceedings Vol. 32
A Systems Biology Model of Alzheimer’s Disease Incorporating Spatial-temporal Distribution of Beta Amyloid
radius) at the same time point (microglial aggregation only occurs in the activated state). Microglia are assigned initial positions prior to running the simulation. Ramified microglia have several other constraints: microglia do not traverse the BBB and are confined to brain tissue; under extreme circumstances, macrophages in the blood may cross into the brain and differentiate into microglia; and microglia switch to the activated state once a specified threshold difference has been exceeded. When the local concentration of Aβ in the brain interstitial fluid reaches 200 nM or greater, microglia become activated and migrate up the concentration gradient towards the main source of Aβ (experimentally, this has been shown to be near neurons or near the basement membrane of blood vessels). The directed movement of microglia towards a chemoattractant (chemotaxis) can be modeled using the Langevin equation of motion: ,
,
,
th
where xc represents the position of the i microglia at time t,
∇ϕ represents the Aβ concentration gradient, ξ represents
Gaussian white noise that the microglia would experience during chemotaxis, α is a positive constant that describes the strength of chemotaxis (α = 1 for our simulations), and κ describes whether it is positive chemotaxis (κ = +1; value used for our simulations) or negative chemotaxis (κ = -1).
VI. NEURAL NETWORK MODEL FOR NEURONS & SYNAPSES

Neurons have been modeled using a modified McCulloch-Pitts network previously developed by Butz et al. [3]. The network is defined by several variables: N, NE, C, θ, Φ and β. N represents the number of logical neurons, NE is the number of excitatory neurons (neurons 1 to NE are excitatory, while NE+1 to N are inhibitory), C is the N×N matrix of connections between neurons, θ is the common threshold of all neurons, Φ is the relative weight of inputs from inhibitory neurons, and β is the noise level in the threshold function. The state of the network at any given time, t, is defined by the vector

zt = (z1,t, …, zN,t),  zi,t ∈ {0,1},

where zi,t = 1 indicates that neuron i is active at time t. The probability of neuron i being active in the next time instant is governed by the threshold potential (θ), the actual membrane potential (MP), the noise level (β), and the percentage afference (α), and takes the sigmoidal form

P(zi,t+1 = 1) = 1 / (1 + exp[−(α·MPi(t) − θ)/β]).

The percentage afference models the shift in the probability of firing, and is related to the relative levels of Aβ. The network connectivity matrix is defined as C = (ci,j), i, j = 1, …, N, where ci,j is the strength of the connection from neuron j onto neuron i. From a biological standpoint, C represents the probability for synaptogenesis between two neurons (Butz 2006). The membrane potential for neuron i sums its excitatory inputs and subtracts the inhibitory inputs weighted by Φ:

MPi(t) = Σj=1..NE ci,j zj,t − Φ·Σj=NE+1..N ci,j zj,t.

Changes in network connections are modeled by updating the connectivity matrix at each time step with respect to changes in pre- and post-synaptic elements. Decay of pre- and post-synaptic elements is proportional to the strength of existing connections and the relative level of Aβ. Pre-synaptic elements that are connected to a post-synaptic element that is lost are able to recombine and form a new synapse in future time steps, whereas the 'lost' post-synaptic element is removed from the post-synaptic pool. Free pre- and post-synaptic elements are updated to account for synaptic losses, recombinations, and strengthening of existing contacts. The number of possible contact offers depends on the number of free contacts that a neuron contributes to the network. The number of neurons in the network can also be varied throughout the simulation. Neuron loss is modeled by the calcium set-point hypothesis in combination with apoptosis occurring above a maximal beta amyloid concentration. Calcium concentration is directly correlated with membrane potential and activity level, and it has been previously determined that a neuron's activity level si should remain between 0.25 < si < 0.85. The number of neurons currently in the system is defined by a vector of N elements, with

ni = 1 if 0.25 ≤ si ≤ 0.85, and ni = 0 otherwise.
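As an illustration of the update rule, the following MATLAB sketch (hypothetical code, not the authors' implementation) performs one stochastic update step of the network; the parameter values, the random connectivity, and the exact sigmoid form are illustrative assumptions.

% Hypothetical sketch: one update step of the McCulloch-Pitts network.
N = 100; NE = 80;                       % total and excitatory neuron counts
theta = 0.5; Phi = 4; beta = 0.1; alpha = 1;    % illustrative parameters
C = rand(N);                            % C(i,j): strength from neuron j to i
z = double(rand(N,1) < 0.5);            % current binary state z_t
w = [ones(NE,1); -Phi*ones(N-NE,1)];    % inhibitory inputs weighted by -Phi
MP = C * (w .* z);                      % membrane potentials MP_i(t)
p = 1 ./ (1 + exp(-(alpha*MP - theta)/beta));   % firing probabilities
z = double(rand(N,1) < p);              % sampled next state z_{t+1}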
VII. SIMULATION ALGORITHM
The combination of fast and slow time scales for the reactions involved in our system implies the need for a hybrid, multiscale method. An adaptive approach is being developed to model the diffusion of a relatively large number of chemical species throughout the control volume (CV). In this approach, the CV is discretized into rectangular subvolumes, each containing an initial number of molecules of each chemical species.
Species can jump between adjacent subvolumes, depending on the diffusive flux of that subvolume. Chemical reactions are confined to individual subvolumes and are only allowed to occur between species in the same subvolume. The cell bodies of neurons are placed into parallel subvolumes with a uniform separation distance. Microglia are randomly distributed throughout the CV with the constraint that the initial positions of two microglia cannot overlap. An algorithm that adaptively chooses between the macroscopic diffusion equations and the tau-leaping method to solve for the Aβ distribution will be the preferred method for solving this system.
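As an illustration of the subvolume scheme, the following MATLAB sketch (hypothetical code, not the authors' implementation) tau-leaps the diffusive jumps of a single species along a 1-D row of subvolumes; the diffusion coefficient, subvolume size, time step, and the crude clamping of over-drawn counts are illustrative assumptions, and reactions inside each subvolume would be sampled analogously.

% Hypothetical sketch: tau-leaped diffusive jumps between subvolumes (1-D).
D = 1e-6; h = 1e-3; tau = 0.01;     % diffusivity, subvolume edge, time step (assumed)
d = D/h^2;                          % per-molecule jump propensity (1/s)
n = [1000 0 0 0 0];                 % molecule counts in five subvolumes
for step = 1:1000
    % Poisson-sampled jump counts (Statistics Toolbox), clamped to availability
    right = min(poissrnd(d*tau*n(1:end-1)), n(1:end-1));  % jumps i -> i+1
    left  = min(poissrnd(d*tau*n(2:end)),   n(2:end));    % jumps i+1 -> i
    n = n + [0 right] - [right 0] + [left 0] - [0 left];  % conservative update
end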
VIII. CONCLUSION

Discrepancies between different studies on the effect of cholesterol on Aβ processing may be quite difficult to discern using a strictly experimental approach. A mathematical model such as the one developed here, used in combination with experiments, offers vast opportunities to study facets of AD and the brain that would otherwise be quite difficult to access. The mathematical model developed here studies the spatiotemporal distribution of Aβ within a control volume in the hippocampus. The model has been designed to study the plausible roles of brain cholesterol levels and LRP-1 expression levels in the processing of beta amyloid. Our future work will focus on further development of an efficient algorithm to model this system, as well as on using our model results to design in vivo biological experiments.
ACKNOWLEDGEMENTS This research was funded by support from the CSHCN Foundation at the University of Maryland.
REFERENCES 1. Bjorkhem, I. and S. Meaney (2004). "Brain cholesterol: long secret life behind a barrier." Arterioscler Thromb Vasc Biol 24(5): 806-15. 2. Bloomfield, I. G., I. H. Johnston, et al. (1998). "Effects of proteins, blood cells and glucose on the viscosity of cerebrospinal fluid." Pediatr Neurosurg 28(5): 246-51. 3. Butz, M., K. Lehmann, et al. (2006). "A theoretical network model to analyse neurogenesis and synaptogenesis in the dentate gyrus." Neural Netw 19(10): 1490-505. 4. Chauhan, N. B. (2003). "Membrane dynamics, cholesterol homeostasis, and Alzheimer's disease." J Lipid Res 44(11): 2019-29.
5. Cherny, R. A., J. T. Legg, et al. (1999). "Aqueous dissolution of Alzheimer's disease Abeta amyloid deposits by biometal depletion." J Biol Chem 274(33): 23223-8. 6. Donahue, J. E., S. L. Flaherty, et al. (2006). "RAGE, LRP-1, and amyloid-beta protein in Alzheimer's disease." Acta Neuropathol 112(4): 405-15. 7. Elias, P. K., M. F. Elias, et al. (2005). "Serum cholesterol and cognitive performance in the Framingham Heart Study." Psychosom Med 67(1): 24-30. 8. Fassbender, K., M. Simons, et al. (2001). "Simvastatin strongly reduces levels of Alzheimer's disease beta -amyloid peptides Abeta 42 and Abeta 40 in vitro and in vivo." Proc Natl Acad Sci U S A 98(10): 5856-61. 9. Halford, R. W. and D. W. Russell (2009). "Reduction of cholesterol synthesis in the mouse brain does not affect amyloid formation in Alzheimer's disease, but does extend lifespan." Proc Natl Acad Sci U S A 106(9): 3502-6. 10. Hamanaka, H., Y. Katoh-Fukui, et al. (2000). "Altered cholesterol metabolism in human apolipoprotein E4 knock-in mice." Hum Mol Genet 9(3): 353-61. 11. Hartley, D. M., D. M. Walsh, et al. (1999). "Protofibrillar intermediates of amyloid beta-protein induce acute electrophysiological changes and progressive neurotoxicity in cortical neurons." J Neurosci 19(20): 8876-84. 12. Hrabe, J., S. Hrabetova, et al. (2004). "A model of effective diffusion and tortuosity in the extracellular space of the brain." Biophys J 87(3): 1606-17. 13. Hudry, E., D. Van Dam, et al. (2009). "Adeno-associated Virus Gene Therapy With Cholesterol 24-Hydroxylase Reduces the Amyloid Pathology Before or After the Onset of Amyloid Plaques in Mouse Models of Alzheimer's Disease." Mol Ther. 14. Ledesma, M. D. and C. G. Dotti (2006). "Amyloid excess in Alzheimer's disease: what is cholesterol to be blamed for?" FEBS Lett 580(23): 5525-32. 15. Liu, Q., C. V. Zerbinatti, et al. (2007). "Amyloid precursor protein regulates brain apolipoprotein E and cholesterol metabolism through lipoprotein receptor LRP1." Neuron 56(1): 66-78. 16. MacMaster, F. P. and V. Kusumakar (2004). "Hippocampal volume in early onset depression." BMC Med 2: 2. 17. Mattson, M. P. (2004). "Pathways towards and away from Alzheimer's disease." Nature 430(7000): 631-9. 18. Poirier, J. (1999). "Apolipoprotein E4, cholinergic integrity and the pharmacogenetics of Alzheimer's disease." J Psychiatry Neurosci 24(2): 147-53. 19. Puglielli, L., R. E. Tanzi, et al. (2003). "Alzheimer's disease: the cholesterol connection." Nat Neurosci 6(4): 345-51. 20. Refolo, L. M., M. A. Pappolla, et al. (2001). "A cholesterol-lowering drug reduces beta-amyloid pathology in a transgenic mouse model of Alzheimer's disease." Neurobiol Dis 8(5): 890-9. 21. Selkoe, D. J. (2008). "Soluble oligomers of the amyloid beta-protein impair synaptic plasticity and behavior." Behav Brain Res 192(1): 106-13. 22. Shibata, M., S. Yamada, et al. (2000). "Clearance of Alzheimer's amyloid-ss(1-40) peptide from brain by LDL receptor-related protein1 at the blood-brain barrier." J Clin Invest 106(12): 1489-99. 23. Zlokovic, B. V. (2008). "The blood-brain barrier in health and chronic neurodegenerative disorders." Neuron 57(2): 178-201.
A Mathematical Model of the Primary T Cell Response with Contraction Governed by Adaptive Regulatory T Cells S.N. Wilson1, P. Lee2, and D. Levy1 1
Mathematics Department and Center for Scientific Computing and Mathematical Modeling (CSCAMM), University of Maryland, College Park, MD, USA 2 Division of Hematology, Department of Medicine, Stanford University, Stanford, CA, USA
Abstract— The currently accepted paradigm for the primary T cell response is that effector T cells commit to minimal developmental programs in which they expand according to a predetermined expansion program. Current mathematical models based on these developmental programs do not show the robustness to precursor frequencies that is exhibited in experimental results. Recently we proposed a shift in paradigm wherein the expansion and contraction of effector T cells are the result of negative feedback from adaptive regulatory T cells. These regulatory T cells develop in the course of an immune response and suppress effector cells. In this work, we extend our mathematical model to include regulation of helper T cells. Simulations show that this feedback mechanism generates robust immune responses over a range of five orders of magnitude of precursor frequencies. Keywords— Delay Differential Equations, Adaptive Regulatory T cells, Primary Immune Response.
I. INTRODUCTION

The primary cell-mediated immune response is the process by which the human immune system responds to a foreign antigen. Upon pathogen invasion, immature antigen presenting cells (APCs) residing at the site of infection migrate to the lymph nodes and present the pathogen. A naïve T cell with the corresponding specificity will then proliferate and differentiate into cells with a range of functionality. Two classes of these T cells are helper T cells and cytotoxic effector T cells. The main functionality of helper T cells is to aid in activating and directing other immune cells, while cytotoxic T cells primarily induce apoptosis in infected cells [1]. The focus of this paper is to model the dynamics of the primary cell-mediated immune response. More specifically, we focus on the contraction of this response as a result of adaptive regulatory cells. A number of experimental studies have shown that the dynamics of a cell-mediated immune response are determined shortly after antigen presentation [2,3,4]. A consequence of this fact is that the immune response is insensitive to the characteristics of the antigen exposure. This leads to the hypothesis that the immune response is determined shortly after the response initiates (i.e., that cells will undergo apoptosis after a certain number of divisions or after a specified time period). As an alternative to these approaches, Kim et al. propose the hypothesis that the contraction of the immune response is controlled by adaptive regulatory mechanisms [5]. In [5], a model is proposed based on this concept. It is shown that contraction does not have to be pre-programmed by cells, but can come about as the result of negative feedback loops through cell interactions. When compared with both the Cell-Division and Time-Based Programs, this model shows robustness with respect to precursor frequencies that is more consistent with experimental studies. The work in this paper extends the work in [5] by considering the regulation of helper and effector cells separately.

II. METHODS

We present a model of T cell expansion/contraction where contraction is controlled by adaptive regulatory T cells. During expansion, proliferating immune cells occasionally differentiate into adaptive regulatory T cells (iTregs). Over time, the negative feedback provided by these regulatory cells shuts down the immune response. In this model, we consider four populations of T cells: naïve T cells, helper T cells, effector T cells and regulatory T cells. In addition, we consider concentrations of both immature and mature antigen presenting cells. We seek to discover the dynamics of how these populations interact when presented with a pathogen. This process is modeled in the following way (Figure 1):

1. Upon encountering an antigen, immature APCs become mature APCs and migrate to the lymph node.
2. Naïve T cells, which reside in the lymph nodes, encounter mature APCs and enter a minimal developmental program in which they divide m times.
3. Upon completion of the minimal developmental program, naïve T cells differentiate into helper T cells. A proportion, r, of helper T cells further differentiates into regulatory T cells.
4. Existing cytotoxic effector T cells that are activated by helper cells enter into a minimal developmental program in proportion to the number of interactions between helper T cells and mature APCs.
5. After differentiating from helper T cells, regulatory T cells enter into a minimal developmental program of their own. They then suppress both the helper and effector T cells.
Fig. 1 Graphical representation of the model. Note that each compartment has an associated death rate, which is not represented in the diagram. The compartments are as follows: A0: immature APCs; A1: mature APCs; TN: naïve T cells; TH: helper T cells; TE: effector T cells; TR: adaptive regulatory T cells.

Representing the antigen presenting cells, A0 is the concentration of immature APCs at the infection site and A1 is the concentration of mature APCs in the lymph nodes. The T cell compartments are modeled as follows: TN is the concentration of naïve cells, TH is the concentration of helper cells, TE is the concentration of effector cells, and TR is the concentration of regulatory cells. The model is described by the following system of delay differential equations:

dA0/dt = SA − d0 A0(t) − a(t) A0(t)   (1)

dA1/dt = a(t) A0(t) − d1 A1(t)   (2)

dTN/dt = ST − δ0 TN(t) − k1 A1(t) TN(t)   (3)

dTH/dt = 2^m k1 A1(t−σ) TN(t−σ) − δ1 TH(t) − k1 TR(t) TH(t) − r TH(t)   (4)

dTE/dt = 2^m k1 TH(t−σ) A1(t−σ) − k2 TR(t) TE(t) + 2 k1 A1(t−ρ) TE(t−ρ) − k1 A1(t) TE(t) − δ1 TE(t)   (5)

dTR/dt = r 2^n TH(t−γ) − δ1 TR(t)   (6)

Equation (1) describes the immature APCs maintained throughout the body. There is a constant supply rate, SA, and a proportional death rate, d0, for these cells. The invasion by an immunogen is represented by the rate, a(t), with which the immunogen stimulates immature APCs to become mature APCs. Mature APCs are given in equation (2). The source of these cells is provided by immature APCs that have been stimulated by the antigen. These cells have a natural death rate of d1. The third equation characterizes the dynamics of the naïve T cell population. These cells have a constant supply rate, ST, and a proportional death rate, δ0. The last term describes the rate with which naïve T cells enter into a proliferative state. This rate is proportional to the mass-action interactions of naïve T cells and mature APCs. Helper T cells are given by equation (4). At a given point in time, t, the source of these cells is proportional to the product of the concentrations of A1 and TN at time t−σ, with σ denoting the time needed for a naïve T cell to undergo m cell divisions. The next term denotes the natural death rate of these cells. The third term captures the effect of regulatory cell suppression on helper T cells. The final term of this equation signifies the rate with which helper T cells further differentiate into regulatory T cells. The effector cell compartment is governed by equation (5). Here, the first term describes the source from cells that have undergone a minimal developmental program. This source is proportional to the number of interactions of helper T cells and mature APCs. It includes a time delay of magnitude σ, during which the effector cell undergoes m cell divisions. Next, we have the effect of negative feedback from regulatory cells. When effector cells are further stimulated by mature APCs, they exit the system, undergo a single cell division, and return to the system at a time ρ units in the future. This phenomenon is captured by the expression 2 k1 A1(t−ρ) TE(t−ρ) − k1 A1(t) TE(t) in equation (5). The final equation describes the regulatory T cell compartment. These cells differentiate from helper T cells, and divide n times with a delay of γ days. They have a natural death rate of δ1.
Most of the parameter estimates used in this model were obtained from [5] and are listed in Table 1. The additional parameters k1, k2, n, and γ were estimated based on their related parameters in [5].

Table 1 Parameter Values

Parameter | Description | Estimate
SA | Supply rate of immature APCs | 0.3
ST | Supply rate of naïve T cells | 0.0012
d0 | Death rate of immature APCs | 0.03
d1 | Death rate of mature APCs | 0.8
δ0 | Death rate of naïve T cells | 0.03
δ1 | Death rate of helper and effector T cells | 0.4
k1 | Kinetic coefficient of helper/regulatory cell interaction | 20
k2 | Kinetic coefficient of effector/regulatory cell interaction | 30
r | Proportion of helper T cells that become regulatory T cells | 0.01
m | Number of divisions in minimal developmental program for helper and effector T cells | 7
n | Number of divisions in minimal developmental program for regulatory T cells | 3
σ | Duration of minimal developmental program for helper and effector T cells | 3
ρ | Duration of one T cell division | ⅓
γ | Duration of minimal developmental program for regulatory T cells | 2.5
a(t) | Rate of APC stimulation | c·1[0,b]
b | Duration of antigen availability | 10
c | Level of APC stimulation | 1
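To make the integration scheme concrete, the following MATLAB sketch (hypothetical code, not the authors' implementation) integrates equations (1)-(6) with dde23, using the Table 1 estimates and the initial conditions stated in the Results section; the plotting call and the 20-day horizon are illustrative assumptions.

% Hypothetical sketch: integrating equations (1)-(6) with MATLAB's dde23.
SA=0.3; ST=0.0012; d0=0.03; d1=0.8; del0=0.03; del1=0.4;
k1=20; k2=30; r=0.01; m=7; n=3; sig=3; rho=1/3; gam=2.5; b=10; c=1;
a = @(t) c*(t>=0 & t<=b);              % APC stimulation: a(t) = c*1_[0,b](t)
lags = [sig, rho, gam];                % delays sigma, rho, gamma (days)
hist = [10; 0; 0.1; 0; 0; 0];          % [A0 A1 TN TH TE TR], TN(0) = 0.1 k/uL
rhs = @(t,y,Z) [ ...
    SA - d0*y(1) - a(t)*y(1); ...                                 % eq (1): A0
    a(t)*y(1) - d1*y(2); ...                                      % eq (2): A1
    ST - del0*y(3) - k1*y(2)*y(3); ...                            % eq (3): TN
    2^m*k1*Z(2,1)*Z(3,1) - del1*y(4) - k1*y(6)*y(4) - r*y(4); ... % eq (4): TH
    2^m*k1*Z(4,1)*Z(2,1) - k2*y(6)*y(5) + 2*k1*Z(2,2)*Z(5,2) ...
        - k1*y(2)*y(5) - del1*y(5); ...                           % eq (5): TE
    r*2^n*Z(4,3) - del1*y(6)];                                    % eq (6): TR
sol = dde23(rhs, lags, hist, [0 20]);
plot(sol.x, sol.y(4:6,:))              % helper, effector, regulatory cells

Sweeping the TN(0) entry of hist over 0.0001-1 k/μL reproduces the robustness experiment described in the Results section.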
III. RESULTS

Using Matlab's DDE function, the system of delay differential equations (1)-(6) was numerically simulated for varying initial concentrations of naïve T cells. Across all simulations, this was the only varying initial condition. The initial condition of immature APCs was 10 k/μL, and all initial conditions not specifically mentioned were set to zero. We concentrate on the relationship between the initial concentration of naïve T cells and the corresponding expansion and contraction of the helper and effector T cell populations. In Figure 2, we show the dynamics of the T cell populations. This begins with the expansion of the helper T cells, which peak approximately 4 days after initial antigen presentation. This peak is followed by an increase in the population of effector cells, which peaks approximately 3 days later. This coincides with estimates that the immune response takes approximately 7 days to peak. Also, we see that contraction of the effector cells begins when the regulatory cells reach their maximal concentration.

Fig. 2 Time course of helper, effector, and regulatory T cells for TN(0) = 0.1 k/μL

Figure 3 shows the phase portrait of effector versus regulatory cells over varying initial concentrations of naïve T cells. Here, we see how the regulatory cell response scales in order to control the effector cell response. The maximum regulatory T cell concentration scales on the order of 10 times the initial concentration, while the effector response remains on the same order of magnitude for each of the initial conditions shown.

Fig. 3 Phase portrait of iTregs versus effectors over 20 days, for TN(0) = 0.004, 0.008, 0.016, 0.032, and 0.04 k/μL
A further examination of the relationship between the effector response and initial conditions is shown in Figure 4. Over a span of 5 orders of magnitude difference in initial naïve T cell concentration, we see a difference of approximately 2 orders of magnitude in peak effector responses. This insensitivity to initial conditions is similar to what was seen in [6].

Fig. 4 Effector T cell population over time, for TN(0) = 0.0001, 0.001, 0.01, 0.1, and 1 k/μL
IV. DISCUSSION

As helper T cells and their regulation are considered, this model is an improvement over the model of [5]. When compared to the model of Kim et al., the present model shows similar results in terms of stability to precursor frequencies. The model presented here differentiates iTregs from helper T cells. A result of iTregs differentiating from such a small population is that the kinetic coefficient, k2, must be increased. We also follow the dynamics of these adaptive regulatory cells by allowing them to proliferate in order to generate a sufficient number to handle the regulation of the effector population. Here, we were able to show that the model presented in [5] can be extended to consider the aforementioned modifications while maintaining the biologically consistent stability results.

ACKNOWLEDGMENT

The work of SW and DL was supported in part by the joint NSF/NIGMS program under Grant Number DMS-0758374. This work was supported in part by Grant Number R01CA130817 from the National Cancer Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.

REFERENCES
1. Murphy K, Travers P, Walport M et al. (2008) Immunobiology. Garland Science, New York. 2. Kaech S M, Ahmed R (2001) CD8+ T cell differentiation: initial antigen encounter triggers a developmental program in naïve cells. Nat Immunol., 2(5):415-422. 3. Mercado R, et al. (2000) Early Programming of T cell populations responding to bacterial infection. J Immunol., 165(12):231-254. 4. Yang Y, Kim D, and Fathman C G. (1998) Regulation of programmed cell death following T cell activation in vivo. Int Immunol., 10(2):175183. 5. Kim P, Lee P, Levy D Emergent group dynamics governed by regulatory cells produce a robust primary T cell response. Bulletin of Mathematical Biology. In press. 6. Badovinac V P, Haring J S, Harty J T (2007) Initial T cell receptor transgenic cell precursor frequency dictates critical aspects of the CD8(+) T cell response to infection. Immunity, 26(6):826-841.
Author: Shelby Wilson
Institute: University of Maryland, College Park
Street: 4116 CSIC Building #406, Paint Branch Ave
City: College Park, MD 20742
Country: USA
Email: [email protected]
A Mathematical Model for Microenvironmental Control of Tumor Growth A.R. Galante, D. Levy, and C. Tomasetti Department of Mathematics and Center for Scientific Computation and Mathematical Modeling (CSCAMM), University of Maryland, College Park, MD, USA
Abstract— One of the most intriguing questions related to cancer is why approximately two out of three people never develop cancer. It has been proposed that this variability in cancer resistance may be due to differences in the efficiency of certain protective mechanisms, which are known to inhibit the growth of neoplastic cells. Here we focus on the mechanism known as microenvironmental control. There is a large amount of evidence that the majority of disseminated tumor cells present in the human body never develop into clinical tumors, and the direct physical contact between normal and tumoral cells appears to be a necessary condition for such resistance to cancer. By using a system of ordinary differential equations, we model contact-controlled tumor growth and are able to simulate the three expected modes of growth: expansive, contractive and stable, thus mathematically supporting the feasibility of microenvironmental control. Keywords— Cancer, Cancer Resistance, Microenvironmental Control, Ordinary Differential Equations.
I. INTRODUCTION

It is known that the majority of tumors are monoclonal [1]; that is, cancer generally develops starting from a single cell that has undergone some specific kind of mutation. However, it is calculated that 3 mutations occur on average every time the cell's DNA base pairs are duplicated, an event that occurs approximately 10^16 times in the life of a person [2]. Furthermore, there are many genetic and epigenetic changes that can promote cancer [3]. This raises one of the most challenging questions in cancer studies: why do approximately two out of three people never develop cancer? Furthermore, why does it appear that, at least for some types of tumors, the disease progresses in some patients while remaining latent in others? It is a striking fact, supported by a large amount of evidence, that the majority of disseminated tumor cells present in the human body never develop into clinical tumors. Some examples are the cancerous cells found in the prostate, the mammary gland, and the epithelium [4-6]. Based on recent data, it has been proposed that one possible explanation may be the variability in the efficiency of certain protective mechanisms, which are known to inhibit the growth of neoplastic cells.
In a timely PNAS article [4], Klein lists some of the mechanisms that cause resistance to cancer: immunological (T cells targeting non-self proteins), genetic (genes that control the fidelity of DNA replication), epigenetic (lack of impairment of normal parental epigenetic imprinting), intracellular (triggering of apoptosis in cells with DNA damage or illegitimate activation of oncogenes) and intercellular (healthy neighboring cells inhibiting neoplastic growth). It is on this last mechanism, also known as "microenvironmental control", that we would like to focus our attention. The first evidence of such an inhibition mechanism activated by normal cells was given in a paper by Stoker et al. in 1966 [7]. There it was shown that the inhibition worked by direct contact between the healthy and neoplastic cells. Further evidence in this direction was given, for example, by [8-9] for the case of the epithelium and stroma. Other types of contact interactions may play a relevant role. One known example is that of the adherens junctions: E-cadherin, which is a major component of such junctions, is downregulated in most epithelial tumors [4]. It is important to note that direct physical contact between normal and tumoral cells appears to be a necessary condition for the microenvironmental control to be functional (see [10]). This type of mechanism is what we would like to model. While there are a number of mathematical models focusing on the immunological mechanism of resistance to cancer (see for example [11-12] and the references therein), microenvironmental control has rarely been the focus of mathematical studies. We would like to mention a few exceptions. For example, in 1980 Cox et al. [13] presented a model where the size of the reproducing tumor population is proportional to the number of unoccupied receptors for inhibitors. Chaplain et al. [14] instead included in their model a variable for the diffusion of inhibitors. Finally, Bajzer et al. [15] considered the contribution of paracrine interactions as a regulatory feedback in cell-cell interaction. We would like to model resistance to cancer due to microenvironmental control. We will take the simplest point of view; that is, we will not assume (and thus include) any specific role of nutrients, growth factors, and so forth. We will only consider direct physical contact, and show that this is a sufficient mechanism for explaining the observed dynamics.
II. A MATHEMATICAL MODEL

A. Derivation

To derive our model for microenvironmental control of a solid tumor, we begin by assuming that the tumor is spherical with radius R and is composed of approximately spherical cells of radius r (consult Figure 1). Hence, the volume of the tumor is given by

VT = (4/3)·π·R³,   (1)

and the volume of a cancer cell is

VC = (4/3)·π·r³.   (2)

Assuming a packing factor of ¾, we can determine that the total number of cancer cells, T, is given by the volume of the tumor, equation (1), divided by the volume of a single cell:

T = (3/4)·(R/r)³.   (3)

In tumor spheroids, the majority of the proliferating cancer cells are located in the outer three to five cell layers [16]. For the sake of simplicity, we assume that only the outer layer of tumor cells is capable of proliferating; we denote this population of proliferating cancer cells on the rim of the tumor by S. To allow for microenvironmental control by physical contact, we allow normal cells, denoted by N, to interact with the cancer cells on the surface of the tumor. Upon interaction of one normal cell and one cancer cell, we allow for the formation of a two-cell complex which can either dissociate or result in the death of the cancer cell. We denote the population of complexes by X. When a cancer cell is in a complex with a normal cell, we do not allow the cancer cell to proliferate. These cell populations in and around our spherical tumor model are illustrated in Figure 1.

Fig. 1 A 2-D depiction of a 3-D spherical tumor of radius R composed of spherical cells of radius r. The two-cell complexes, X, are shown shaded in blue. The cancer cells and necrotic core, T, are shown with a wave pattern, while the normal cells have no pattern.

The volume of the cancer cells in the non-proliferating core of the tumor is quantified by

VT−(S+X) = (4/3)·π·(R − 2r)³.   (4)

Again assuming a packing factor of ¾, equation (4) can be used to determine the number of cells in the non-proliferating region:

T − (S + X) = (3/4)·((R − 2r)/r)³.   (5)

Using equation (3) to solve for R in terms of T, a formula for the number of "uncomplexed" cancer cells on the surface of the spherical tumor, S, can be derived from equation (5):

S = 6·(3/4)^(1/3)·T^(2/3) − 12·(3/4)^(2/3)·T^(1/3) + 6 − X.   (6)

Assuming the number of normal cells on the surface of the tumor, N, is greater than S, and following a similar logic as above, we can derive an analogous expression for N:

N = (3/4)·(((4/3)·T)^(1/3) + 2)³ − T − X.   (7)

This allows us to develop the dynamic equations for spherical tumor growth within a microenvironment. Let C be the cancer cells not in complexes, i.e., C = T − X. We assume that the cancer cells in population S are the only cancer cells undergoing mitosis, at rate k. Let h and k0 be the rate constants for the formation and dissociation of complexes, respectively. Hence, the dynamic equation for C(t) reads

dC/dt = kS − hSN + k0X.   (8)
The dynamic equation for complexes X allows for the association and dissociation of complexes as well as the death of a cancer cell, resulting in the elimination of a complex (with rate k2):
dX/dt = hSN − (k2 + k0)X.   (9)
Note that since C + X = T, a dynamic expression for T can be derived from equations (8) and (9):

dT/dt = kS − k2X.   (10)

Considering that S and N are functions of T and X, we have derived a nonlinear system of ODEs with two equations and two variables.
B. Analysis of the Model
We can define the functions f(T) = S + X and g(T) = N + X to be the number of cancer cells and normal cells on the surface of the tumor, respectively. It is important to note that if a steady state (X*, T*) exists, it must be such that X* ≤ f(T*). Otherwise, the model would allow the number of "uncomplexed" cancer cells on the surface of the tumor to be negative. Noting that both f and g are always positive, it is easy to show that any solution to both nullclines dX/dt = 0 and dT/dt = 0 satisfies this condition. The two nullclines must cross, yielding a steady state; this must occur on the line given by

N* = k(k0 + k2)/(h·k2),   (11)

which is a translation of g(T) to the left by exactly N*. According to the value of N*, the steady state may be found numerically for large values of T. In these cases, however, the steady state is outside of the feasible region for the model assumptions; tumors tend to undergo angiogenesis when they are approximately 1 mm in diameter, or 10^5 cells [16]. Thus, depending on the values of the parameters and, hence, the value that N* takes, either the tumor will reach a steady state or it will grow uncontrolled (because the steady state is outside of the feasible region). Of course, parameter values can also be selected to force the steady state to be approximately zero.
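As a concrete check of equation (11), using the parameter values quoted in the figure captions below: the Fig. 2 values k = 2, h = 0.01, k0 = 0.1, k2 = 1 give N* = 2·(0.1 + 1)/(0.01·1) = 220, while the Fig. 3 values (h = 0.1, other parameters unchanged) give N* = 22, which is consistent with the suppressed growth seen in the second simulation.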
III. RESULTS AND DISCUSSION
To assess the feasibility of our model, we must capture the three tumor growth modes: controlled, suppressed, and uncontrolled. Indeed, we were able to show that all three modes of interest can be obtained by varying the model parameters. Each simulation was run in MATLAB using ode45 with the arbitrary initial condition of 700 cancer cells and 10 cells in complexes. In Figure 2, tumor stability is simulated. With the chosen parameters, it is noteworthy that the tumor continues to grow until a sufficient number of complexes are formed, and then it contracts in size until reaching the steady state.

Fig. 2 Model simulation demonstrating tumor stability. Parameter values used were k = 2, h = 0.01, k0 = 0.1, and k2 = 1
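The stable-growth case of Figure 2 can be reproduced in outline with the following MATLAB sketch (hypothetical code, not the authors' script), which integrates equations (9) and (10) with S and N given by equations (6) and (7); only the 50-unit simulation horizon is an added assumption.

% Hypothetical sketch of the Fig. 2 simulation: eqs (6), (7), (9), (10).
k=2; h=0.01; k0=0.1; k2=1;                       % Fig. 2 parameter values
S = @(T,X) 6*(3/4)^(1/3)*T.^(2/3) - 12*(3/4)^(2/3)*T.^(1/3) + 6 - X;
N = @(T,X) (3/4)*((4/3*T).^(1/3) + 2).^3 - T - X;
rhs = @(t,y) [k*S(y(1),y(2)) - k2*y(2); ...                % dT/dt, eq (10)
              h*S(y(1),y(2))*N(y(1),y(2)) - (k2+k0)*y(2)]; % dX/dt, eq (9)
[t,y] = ode45(rhs, [0 50], [700; 10]);  % T(0)=700 cancer cells, X(0)=10
plot(y(:,2), y(:,1)), xlabel('X'), ylabel('T')   % phase plane as in Fig. 2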
In Figure 3, tumor suppression is simulated. In this case, the parameters are such that the tumor barely grows before reaching the nullcline dX/dt = 0, at which point the complexes slowly suppress the tumor to a steady state of approximately zero.

Fig. 3 Model simulation demonstrating tumor suppression. Parameter values used were k = 2, h = 0.1, k0 = 0.1, and k2 = 1
In Figure 4, the phenomenon of uncontrolled tumor growth can be seen. In this case, it is noteworthy that the nullcline dX/dt = 0 lies to the left of the nullcline dT/dt = 0. We know that these nullclines must intersect because of the dynamics of the system. Hence, while not visible in the figure, a very large steady state does in fact exist.

Fig. 4 Model simulation demonstrating uncontrolled tumor growth. Parameter values used were k = 1, h = 0.01, k0 = 8, and k2 = 0.1
Note that it is not necessary to assume that the complexes are capable of killing cancer cells; this assumption can be relaxed to a situation where cancer cells in complexes are put into a quiescent state. Since the focus of our paper has been on microenvironmental control, we have considered small tumors which have not yet undergone angiogenesis. As we have been able to capture each expected trend, we can conclude that our model supports the feasibility of contact-based microenvironmental control.
ACKNOWLEDGMENT

This work was supported in part by the joint NSF/NIGMS program under Grant Number DMS-0758374 and in part by Grant Number R01CA130817 from the National Cancer Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health.

REFERENCES
1. Klein G (2009) Reply to Bredberg: the voice of the whale. Proc Natl Acad Sci USA 106:E52
2. Bredberg A (2009) Cancer resistance and Peto's paradox. Proc Natl Acad Sci USA 106:E51
3. Weinberg RA (2007) The biology of cancer. Garland Science Publishers, New York
4. Klein G (2009) Toward genetics of cancer resistance. Proc Natl Acad Sci USA 106:859-863
5. Naumov GN et al. (2002) Persistence of solitary mammary carcinoma cells in a secondary site: a possible contributor to dormancy. Cancer Res 62:2162-2168
6. Zhang W (2001) Escaping the stem cell compartment: sustained UVB exposure allows p-53 mutant keratinocytes to colonize adjacent epidermal proliferating units without incurring additional mutations. Proc Natl Acad Sci USA 98:13948-13953
7. Stoker MG, Shearer M, O'Neill C (1966) Growth inhibition of polyoma-transformed cells by contact with static normal fibroblasts. J Cell Sci 1:297-310
8. Glick AB, Yuspa SH (2005) Tissue homeostasis and the control of the neoplastic phenotype in epithelial cancer. Semin Cancer Biol 15:75-83
9. Alt-Holland A, Zhang W, Marguilis A, Garlick JA (2005) Microenvironmental control of premalignant disease: the role of intercellular adhesion in the progression of squamous cell carcinoma. Semin Cancer Biol 15:84-96
10. Alexander DB et al. (2004) Normal cells control the growth of neighboring transformed cells independent of gap junctional communication and Src activity. Cancer Res 64:1347-1358
11. Adam JA, Bellomo N (1996) A survey of models for tumor-immune system dynamics. Birkhauser, Boston
12. Arciero JC, Jackson TL, Kirschner DE (2004) A mathematical model of tumor-immune evasion and siRNA treatment. Discrete and Continuous Dynamical Systems 4:39-58
13. Cox EB, Woodbury MA, Myers LE (1980) A new model for tumor growth analysis based on a postulated inhibitory substance. Comput Biomed Res 13:437-445
14. Chaplain MAJ, Benson DL, Maini PK (1994) Nonlinear diffusion of a growth inhibitory factor in multicell spheroids. Math Biosci 121:1-13
15. Bajzer Z, Vuk-Pavlovic S (2005) Modeling positive regulatory feedbacks in cell-cell interactions. BioSystems 80:1-10
16. Sutherland RM (1988) Cell and environment interactions in tumor microregions: the multicell spheroid model. Science 240:177-184

Author: Cristian Tomasetti
Institute: Department of Mathematics, University of Maryland
Street: Paint Branch Drive
City: College Park
Country: USA
Email: [email protected]
Assessing the Usability of Web-Based Personal Health Records Pedro Gonzales1 and Binh Q. Tran2 1
Department of Electrical Engineering & Computer Science 2 Department of Biomedical Engineering The Catholic University of America, Washington DC, USA
Abstract— Personal Health Records (PHRs) are becoming an important tool to manage individual health information, yet their usability among the general population remains one of the most unexplored areas. One subset is Web-based PHRs, which are online repositories for the storage of individuals' health information. There are different ways in which data get into a PHR; one of the most common is for people to type in their information manually. This study assesses how people entered common PHR health data into two recently released PHRs, Googlehealth and Microsoft Healthvault. The results indicate that there is a significant difference between them, especially on specific tasks such as allergies and medical history. In addition, the study highlights the way user interfaces affect usability. Keywords— Web usability, PHRs, Web design.
I. INTRODUCTION

Due to the recent occurrence of natural disasters, pandemics, and emergency situations, there has been a great demand to improve the way individuals manage their health records. "Lessons learned from recent history (e.g., SARS, Hurricane Katrina) highlight the importance of portable personal health information in response and recovery efforts and the value of computer-based health records in the health care system" [1]. Today, a comprehensive picture of an individual's health information is very difficult to obtain due to the fragmented nature of the U.S. healthcare system, where different data sources cannot exchange or share data with one another, creating silos of information that affect not just individuals but the health care system in general. One method to tackle this data fragmentation is the promotion of Personal Health Records (PHRs). PHRs are briefly defined as electronic tools which help individuals manage their health records by making them portable and accessible to others. Ralston et al. describe PHRs as a "set of computer-based tools that allow people to access and coordinate their lifelong health information and make appropriate parts of it available to those who need it" [4]. "Personal health records (PHRs) are web-based applications that provide patients with secure access to self-generated profiles of medical information. Currently
available versions are being promoted as resources to help patients organize and track medical information collected over time from different sources." Research on PHRs has been varied, comprising a variety of topics from architecture to security, but "little work has been done on the usability of PHRs, on patient preferences for entering, maintaining, and disclosing portions of their record" [10]. Usability studies related to PHRs are important, as they will provide clues to people's behavior which will improve PHR design and enhance adoption. "Such tests are important because by improving usability we can increase its usage among the general population, and provide vendors with improvement on the way they design future PHRs" [6]. A review of the literature reveals the need for additional investigation regarding the usability of PHRs. "Future development of PHRs should be guided by patient-oriented research targeted to evaluate the performance and usability of evolving applications" [2] [3]. In addition, health information is very complex; individuals must be able not only to collect their health information by themselves but also to make sense of the different information now available to them. The purpose of this pilot investigation is to assess the usability of two web-based PHRs recently made available to the public, Google Health™ (Mountain View, California) and Microsoft Healthvault™ (Redmond, WA).
II. METHODS A. Setting This study was carried out in a computer laboratory at The Catholic University of America in Washington D.C. B. PHR Description Participants interacted with two web-based PHRs, Google’s Google Health and Microsoft’s Healthvault. Individuals enter their health records manually. Healthvault uses tab organization (see Fig. 2), while Googlehealth has all its main navigation on the left side panel (see Fig. 1).
Fig. 1 Googlehealth PHR screenshot

Table 1 PHR tasks studied

Task 1: Sign in to PHR (Personal Health Record).
Task 2: You are allergic to peanuts. Please add peanuts to your allergies in your PHR. Then add lactose intolerant and delete peanuts from your allergies.
Task 3: Your current health condition is that you suffer from asthma; add asthma to your PHR.
Task 4: You just got an x-ray on your right hand today; add this event to your PHR.
Task 5: You have high blood pressure and have to take the following medicines, indapamicome and metolazone; add both medications to your PHR.
Task 6: You suffered an accident while playing basketball which requires you to get an x-ray on your right ankle; add this event to your PHR.
Task 7: You received immunizations for a tuberculosis skin test and dengue fever; add both to your PHR.
Task 8: A relative (uncle) suffered from lung cancer; please add this to your PHR.
Task 9: Sign out of the website.
Fig. 2 Microsoft Healthvault PHR screenshot

C. Study Instruments

Online monitoring software and questionnaires were used to collect data for this study. Software-based tools included two Mozilla Firefox add-ons and a video screen recorder by Camstudio (San Diego, California). Mozilla Firefox (vers. 1.5) was used as the main browser because it contained the required software for this research to track browser page history, keystroke activity, and timing of events. The questionnaires were used to collect demographic information and Internet usage (pre-test) and a subject usability evaluation (post-test). Finally, the participants had to complete on both PHRs a set of common PHR tasks (see Table 1).
Evaluation questionnaires are the dominant form of data-gathering tool to collect attitudes and opinions from people and have been used in other PHR evaluations [9] [4].

D. Participants

Fifteen (15) volunteers participated in this study. Their average age was 19.4 (STD = 0.74), the range was 18-21 years, and 74% were male and 26% female. All subjects were full-time undergraduate students at CUA. Study subjects were high consumers of technology as assessed by pre-study questionnaires. 100% of the participants indicated that they used the Internet on a daily basis, and had been
using the Internet for more than 3 years. Subjects took part in one of three 1-hour study sessions.

E. Measures and Data Analysis

The usability measurements were analyzed using descriptive statistics: mean and standard deviation. A paired t-test was used to determine significance (p<0.05) between the PHRs and tasks. Three key variables were studied: number of clicks, keystrokes, and time to complete tasks. User attitudes were measured based on post-test surveys. Survey answers were collected on a 5-point Likert scale; two formats were used: strongly agree to strongly disagree, and very difficult to very easy. Each participant was given 9 tasks to complete for each PHR usability study. These tasks were intended to represent typical tasks users perform on a PHR.
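To illustrate the analysis, the following MATLAB sketch (hypothetical code, not the study's analysis script) runs the paired t-test on per-subject click counts for one task; the click counts below are invented placeholders, not study data.

% Hypothetical sketch: paired t-test on click counts for one task.
clicksGH = [6 4 9 5 7 3 8 6 5 10 4 7 6 5 8];            % Googlehealth (placeholder)
clicksHV = [12 9 15 10 14 8 13 11 9 18 10 12 11 9 14];  % Healthvault (placeholder)
[hyp, p] = ttest(clicksGH, clicksHV);   % paired t-test, default alpha = 0.05
fprintf('significant difference: %d (p = %.4f)\n', hyp, p);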
III. RESULTS

A. Click Behaviour

The paired t-test demonstrated a significant difference (p<0.05) between PHRs in two tasks (Task 1: set up PHR, and Task 4: add surgery). Setting up the PHR is a one-time task which includes multiple activities (creating an ID, and setting up a password and a basic profile). For Task 1 (set up PHR), Healthvault (AVE = 12.13, SD = 5.18) required roughly twice the number of clicks compared to Googlehealth (AVE = 6.20, SD = 3.32). For Task 4 (add surgery), Healthvault (AVE = 9.07, SD = 3.28) required twice the number of clicks compared to Googlehealth (AVE = 6.20, SD = 2.36). There was no significant difference between Healthvault and Googlehealth among the other PHR tasks.

B. Keystroke Behavior

A paired t-test demonstrated a significant difference (p<0.05) in 2 tasks (Task 8: adding family history, and Task 9: sign out). Most of the keystrokes tended to concentrate in setting up the PHR and adding surgery. PHR set-up has overwhelmingly more keystrokes than the rest because demographic information must be entered by the individual. There were interface challenges for certain tasks. For adding immunizations, Healthvault doesn't let users type; rather, they are forced to choose from the items indicated in the drop-down list. This is different from Googlehealth, where individuals could type their own answer or select an item from a list.

IV. DISCUSSIONS

During usability sessions, extensive observation and analysis was done to determine the reasons for different data fluctuations and inconsistencies. Usability issues centered around three common themes: inconsistent terminology (confusion), missing PHR content areas, and navigation errors.

A. Inconsistent Terminology: User Confusion

For some medical terms, the two PHRs had different word conventions, which resulted in a greater number of clicks and more time for a participant to complete a task. For tuberculosis, Googlehealth had "BCG tuberculosis vaccine". Healthvault had the following options: "Tuberculin skin test; NOS", "Tuberculin skin test; old tuberculin multipuncture device", "Tuberculin skin test; protein derivative solution", and "Tuberculin skin test; protein derivative solution, multipuncture device". This study supports the literature, which indicates the lack of standards on medical terminology [7]. Healthvault forced people to choose among only those items, while Googlehealth allowed people to type their own terminology.

B. Missing PHR Content Areas

Adding family history took longer using Healthvault than Googlehealth. The primary reason was that Googlehealth didn't have a family history section in the PHR, which resulted in subjects giving up on this task. One of the challenges in comparing different PHRs is that there is no standard on the contents of a PHR; their categories greatly differ from one another. Nine (9) participants had zero as their number of clicks. Even though this section was not available, some participants attempted to enter this information elsewhere in the online record.

C. Navigation Errors

Due to navigation issues, eight (8) participants got lost while trying to add their allergies using Googlehealth. The reasons for this include participants going to different areas of the PHR to enter data (demographics, searching items on the drop-down list several times, etc.). Previous studies indicate that the types of navigation errors vary depending on the group of study. One study on older adults indicates the following: "many older adults repeatedly clicked on items that were not links, including table headings, bullets, icons, and just plain text" [8].

D. Task Duration and Success Rates

All users completed the entire set of required tasks within the duration of the session. The total average time (in minutes) was calculated task-by-task via video and keystroke analyses. To complete all 9 tasks, averages for each PHR were 13.99 and 14.74 minutes for Googlehealth and Healthvault, respectively. All participants completed
the tasks with a 100% success rate. The paired t-test demonstrated a significant difference (p<0.05) in the time to complete tasks for adding allergies, adding conditions, adding immunizations, and signing out. However, there was no significant difference in the other tasks. A comparison of the total averages revealed that most of the time was allocated to the first two tasks (PHR set-up and adding allergies).
V. CONCLUSIONS

Any generalizations from this pilot study must be made cautiously due to the small sample size. A lack of greater significance might have resulted from the fact that the subjects have the same educational level and Internet usage. The use of a larger population sample might also have helped to determine greater significance differences in usage between the two PHRs. One issue this study didn't explore was whether participants would continue to use PHRs after the study was complete. The study provides some insight into the way individuals use web-based PHRs. After assessing the indicated web-based PHRs, we conclude that there exist considerable significant differences on a task-by-task basis due to the design and navigation implementation strategies of PHR vendors. This supports research by others regarding the importance of data entry methods, which directly affect speed and accuracy for individual end-users, and it supports previous research on data entry in PHRs: "Different data entry methods employed by PHRs appear to have an impact on the accuracy of patient-entered medical information" [5]. Preliminary data evaluation suggests that Microsoft's Healthvault is less user-friendly than Googlehealth in setting up the PHR account, as evaluated by keystrokes, mouse clicks, and task time. Further research in this area is needed with a larger sample size and with additional PHRs to compare a more representative set of available systems.
REFERENCES [1] Detmer Don, Bloomrosel Meryl, Raymond Bryan, Tang Paul. (2008) Integrated personal health records: transformative tools forConsumer centric care. BMC medical informatics and Decision making. Oct 6 1-14. [2] Kim Matthew I, B Johnson Kevin. (2002) Personal Health Records: Evaluation of Functionality and Utility. JAMIA 9: 171-180 at DOI: 10.1197/jamia.M0978 [3] Frost Jeana, Massagli P Michael (2008). Social Uses of Personal Health Information within PatientsLikeMe, an Online Patient Community: What Can Happen When Patients Have Access to One Another’s Data J Med Internet Res. 2008 Jul–Sep 10; DOI: 10.2196/jmir.1053. [4] Ralston D James, Carrel David, Reid Robert, Anderson Melissa, Hereford James, Moran Maureena. (2007) Patient web services integrated with a shared medical record: Patient use and satisfaction. Jamia: 798-806 [5] Kim I Matthew, Johnson B Kevin. (2004) Patient Entry of Information: Evaluation of User Interfaces Journal Medical Internet Res. 2004 Apr–Jun; 6(2): e13. [6] Mandl D Kenneth, Simons W William, Crawford CR William, Abbett M Jonathan. (2007) Indivo: a personally controlled health record for health information exchange and communication BMC Medical Informatics and Decision Making: 7:25 [7] Rosenbloom S. Trent, Miller A. Randolph, Johnson B. Kevin, Elkin L. Peter, Brown H. Steven . (2006) Interface Terminologies: Facilitating Direct Entry of Clinical Data into Electronic Health Record Systems. J Am Med Inform Assoc. 2006 May–Jun; 13(3): 277–288. DOI: 10.1197/jamia.M1957. [8] Dias Chadwick-Ann, McNulty Michelle, Tullis Tom. (2003) Web usability and Age: How Design Changes Can Improve Performance. ACM 30-37 [9] Wang Maisie, Lau Christopher, Matsen A. Frederick, Kim Yongmin (2003) Personal health information management system and its application in referral management. IEEE 287-297. [10] Lober WB, B Zierler A Herbaugh, SE Shinstrom1, Stolyar A, Kim EH, Kim Y. Barriers to the use of a Personal Health Record by an Elderly Population. (2006) AMIA 514-518
Real Time Monitoring of Extracellular Glutamate Release in Rat Ischemia Model Treated by Nimodipine E.K. Park1,2, G.J. Lee1,2, S.K. Choi3, S. Choi1,2, S.W. Kang2, S.J. Chae1,2, and H.K. Park1,2,* 1
Dept. of Biomedical Engineering, Program of Medical Engineering, Kyung Hee University 2 Healthcare Industry Research Institute, Kyung Hee University 3 Dept. of Neurosurgery, Kyung Hee University Medical Center, Seoul, Korea
[email protected]
Abstract— It is well known that cerebral ischemia is associated with changes in the extracellular concentrations of the excitatory amino acids. Real-time quantitative measurement of glutamate, the most abundant excitatory neurotransmitter, would be a very helpful parameter for evaluating brain injury during or after surgery, as well as for validating the instantaneous effect of a drug. In order to define the effect of nimodipine on glutamate release and neuronal cell damage, we monitored real-time extracellular glutamate release and cerebral blood flow (CBF) in an eleven-vessel occlusion global ischemia rat model. Changes in CBF and glutamate release were monitored by laser-Doppler flowmetry and an amperometric biosensor, respectively. A ten-minute 11VO cerebral ischemia was initiated by pulling the snares on the common carotid arteries (CCAs) and the external carotid arteries (ECAs). Nimodipine was infused during the ischemic period (0.025 μg/100 gm/min; diluted 20 times in normal saline). The infusion site for nimodipine was located lateral to the probe of the cerebral blood flowmeter. Three days after occlusion, histological analysis was performed by Nissl staining in order to assess neuronal cell damage. Comparing the ischemia and nimodipine groups, the maximum change in glutamate concentration showed a statistically significant difference between the two groups, reflected in neuronal cell death. It is considered that nimodipine may reduce glutamate release and brain damage during a global ischemic episode in the eleven-vessel occlusion rat model. Keywords— Glutamate, nimodipine, eleven vessel occlusion ischemia model, real-time monitoring.
I. INTRODUCTION

Ischemic conditions facilitate the release of activated excitatory amino acids. The most well-known excitatory neurotransmitter, glutamate, attacks neuronal cells by post-synaptic binding to several receptors [1]. Various studies have demonstrated a pharmacological effect resulting from blocking the cascading excitotoxic cellular injury process [2]. Nimodipine, a dihydropyridine derivative, is a well-known neuroprotective drug that has been applied to some ischemic vascular diseases and may have a beneficial effect on cerebral ischemia after subarachnoid hemorrhage
[3]. Many authors have suggested that calcium channel blockers can facilitate vasodilatation of small arterioles, allowing cerebral blood flow to increase and inhibiting platelet aggregation. A study proposed that nimodipine could directly protect neurons in the rabbit brain by intracellular calcium antagonism [4]. Despite many reports that calcium channel blockade induces a cascade reaction, the action of nimodipine on presynaptic glutamate release is unclear. This study hypothesized that nimodipine would have a beneficial effect on surgical ischemic conditions if it decreased neurotoxic glutamate release as an L-type calcium channel blocker at both sides of the pre- and post-synaptic membrane, resulting in reduced brain cell damage. The aim of this study is to define the neuro-protective effects of nimodipine on neuronal cell death during ischemic periods.
II. MATERIAL AND METHODS

An eleven-vessel occlusion (11vo) was established as the global ischemia model, as described previously and partially modified. Briefly, after the division of the omohyoid muscle, the pair of occipital arteries, the superior hypophyseal artery, the ascending pharyngeal artery and the pterygopalatine artery were coagulated with bipolar electrocautery. A craniotomy of 3 mm diameter was drilled through the ventral clivus, centered just caudal to the basioccipital suture. The pterygopalatine arteries were coagulated prior to their entrance into the tympanic bullae. Both occipital arteries and the superior thyroid arteries were identified, coagulated and transected. Snares were placed around the external carotid arteries, between the occipital arteries proximally and the superior thyroid arteries distally, and snares were placed on the common carotid arteries. Animals were placed in a stereotaxic head holder for the determination of real-time glutamate levels. Changes in CBF were monitored by laser-Doppler flowmetry, and cortical glutamate levels by a microdialysis electrode. The microdialysis electrode was inserted into the motor cortex at coordinates A 1; L 4; V 2 mm (from the bregma and the dura)
through a small incision in the dura. The 10-min 11VO cerebral ischemia was initiated by pulling the snares on the CCAs and ECAs. The snares were released and withdrawn after 10 min. At 72 h after brain ischemia, rat brains were fixed with cold 4% paraformaldehyde in 0.1 M phosphate buffer at pH 7.4. The brains were postfixed overnight at 4℃ in the same solution and soaked in 0.5 M PBS containing 30% sucrose for cryoprotection. Serial 40 µm-thick coronal sections were cut on a freezing microtome (Leica, Nussloch, Germany). The sections were stored in a cryoprotectant (25% ethylene glycol, 25% glycerol, 0.05 M PB, pH 7.4) at -20℃. The sections were mounted on gelatin-coated slides and stained with cresyl violet for histological assessment of neuronal cell death, dependent upon viable and nonviable stained cells.
III. RESULTS AND DISCUSSION

To evaluate the effects of nimodipine on brain damage during the ischemic period, glutamate release and CBF were monitored in real time. In both groups, a rapid decrease in CBF was seen immediately after the 11VO (Fig. 1), coincident with an increase in glutamate release. The ischemic plateau was achieved within 10.8 ± 4.2 sec and was successfully maintained during the 10-minute ischemic period. After reperfusion, CBF increased significantly toward pre-ischemic levels, and the glutamate level rapidly declined to pre-ischemic levels. A significantly greater increase in glutamate release was seen in the ischemia group than in the nimodipine-treated group; the elevation of glutamate release was suppressed by nimodipine treatment. This result suggests that nimodipine may decrease glutamate release during the ischemic period.

Fig. 1 Comparison of representative glutamate dynamics between the ischemia group and nimodipine-treatment group

To further confirm the neuroprotective effects of nimodipine on ischemic cell death in the 11VO model, histological analysis was performed in the hippocampal region three days after the ischemia. The neuroprotective effect of nimodipine on neurons in the hippocampus was evaluated by measuring neuronal cell viability in the CA1 hippocampal region at three days after ischemia. Significantly higher neuronal cell damage was seen in the CA1 region of the ischemia group, whereas only minor damage was found in the nimodipine-treated group (Fig. 2). These results indicate that nimodipine treatment may reduce brain damage.
Fig. 2 Representative morphology of cresyl violet staining in the hippocampal region at 72 h of reperfusion
IV. CONCLUSION

Rat forebrain ischemia models can provide various benefits for humans, such as the testing of new drugs and the evaluation of new surgical techniques. In the present study, we demonstrated that glutamate release during the ischemic period may be reduced by nimodipine, resulting in a reduction of apoptotic neuronal cell damage at three days after ischemia induction. Based on these results, we propose that nimodipine infusion during ischemia could be beneficial in neurosurgical conditions.
ACKNOWLEDGMENT
This research was supported by a research fund from Seoul R&BD (grant # CR070054).
REFERENCES
1. Fleck MW (2006) Glutamate receptors and endoplasmic reticulum quality control: looking beneath the surface. Neuroscientist 12:232-244
2. Taoufik E, Probert L (2008) Ischemic neuronal damage. Current Pharmaceutical Design 14:3565-3573
3. Tomassoni D, Lanari A, Silvestrelli G, Traini E, Amenta F (2008) Nimodipine and its use in cerebrovascular disease: evidence from recent preclinical and controlled clinical studies. Clin Exp Hypertens 30:744-766
4. Bullock R, Zauner A, Woodward J, Young HF (1995) Massive persistent release of excitatory amino acids following human occlusive stroke. Stroke 26:2187-2189
Corresponding author:
Author: Hun Kun Park
Institute: Kyung Hee University
Street: #1 Hoeki-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [email protected]
Targeted Delivery of Doxorubicin by PLGA Nanoparticles Increases Drug Uptake in Cancer Cell Lines

Tingjun Lei, Supriya Srinivasan, Yuan Tang, Romila Manchanda, Alicia Fernandez-Fernandez, and Anthony J. McGoron

Biomedical Engineering Department, Florida International University, Miami, FL, United States

Abstract— Doxorubicin (DOX) is an anthracycline drug widely used in the treatment of a large spectrum of solid tumors. Its application is limited by its efflux through the multidrug resistance (MDR) protein. In this study, we explored the potential of DOX-PLGA nanoparticles (DNPs) and antibody-decorated nanoparticles (ADNPs) to overcome MDR in cancer cells. DNPs were prepared by the O/W emulsion solvent evaporation method. The surface decoration of nanoparticles with antibody was performed via carbodiimide reaction. The particles were characterized for their size and zeta potential, and their in vitro uptake was compared with their unconjugated counterparts. Entrapment efficiency of DNPs and ADNPs was measured by fluorescence using the DMSO burst release procedure. Cytotoxicity was measured using the SRB assay. The DNP and ADNP nanoparticles had average diameters of 162.6 ± 2.0 nm and 213.0 ± 3.5 nm respectively. Their corresponding surface charges were -13.2 ± 2.3 mV and -1.3 ± 3.8 mV respectively. Our results showed that cellular uptake of DOX from DNPs in DOX-resistant MES-SA/Dx5 cancer cells was higher compared to free DOX. However, the uptake of DOX from DNPs in MES-SA and SKOV-3 cancer cell lines was comparable to free DOX treatment. Next, we conjugated the DNPs with HER-2 antibody to specifically target the SKOV-3 cancer cell line; MES-SA and MES-SA/Dx5 were used as negative controls. Results showed higher uptake of DOX from ADNPs compared to free DOX and DNPs in SKOV-3. However, the cellular uptake of DOX from ADNPs was comparable to DNPs in MES-SA and MES-SA/Dx5. Cytotoxicity results were consistent with the cellular uptake data. Our study concludes that targeted DNPs may enhance cellular uptake and cytotoxicity in SKOV-3.

Keywords— Doxorubicin, PLGA, Cellular uptake, Cytotoxicity, HER-2 antibody.
I. INTRODUCTION

Doxorubicin (DOX) is an anthracycline antibiotic that is widely used against several solid tumors, such as ovarian cancer [1]. One important disadvantage of DOX is the development of multi-drug resistance (MDR) due to cancer cells overexpressing P-glycoprotein (P-gp), which can significantly affect the therapeutic effect of DOX [2].
Nanoscale carriers may help mitigate the undesirable effects of traditional chemotherapeutic agents, since they can overcome drug resistance by protecting the drug from recognition by the P-gp drug efflux pump [3]. The therapeutic potential of nanocarriers can be further magnified by tagging them with appropriate ligands that selectively interact with tumor cell membrane receptors [4]. PLGA is an FDA-approved biodegradable and biocompatible polymer widely used for nanoparticle preparation due to its ability to carry both hydrophobic and hydrophilic drugs. Several studies on DOX-loaded and targeted DOX-loaded PLGA nanoparticles have been published [5, 6]. The main goal of this study was to prepare non-targeted and targeted DOX-loaded nanoparticles, and to compare the cellular uptake and cytotoxicity of DOX released from these nanoparticles in uterine and ovarian cancer cells (SKOV-3, MES-SA and MES-SA/Dx5).
II. METHODS

PLGA nanoparticles with encapsulated DOX and antibody-conjugated DOX-loaded nanoparticles were formulated by the oil-in-water emulsification solvent evaporation method [7, 8] with slight modification. Nanoparticle size was measured by dynamic light scattering (DLS) using a Malvern Zetasizer. Size measurements were taken in triplicate at 25 ºC using a 1:100 (v/v) dilution of the nanoparticle suspension in distilled water. The polydispersity index, scaled from 0 to 1, was used as a measure of the particle size distribution. The surface charge of the nanoparticles was measured as the zeta potential, also using a Malvern Zetasizer; the samples for surface charge measurement were prepared by dilution in distilled water, and the data reported are averages of measurements on three samples (n=3). The concentration of DOX in the nanoparticles was determined using a standard calibration curve of DOX in DMSO that was prepared with a Cary spectrophotometer (Varian). Nanoparticle yield and drug entrapment were calculated using the following formulas:
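The two formulas referred to here, cited later as equations (1) and (2) in the Results, did not survive extraction from the original layout. A reconstruction using the conventional definitions, consistent with the "Yield (%)" and "Drug loading (w/w %)" columns of Table 1 (an assumption, not the authors' exact expressions), would be:

$$\text{Yield}\;(\%) = \frac{\text{mass of nanoparticles recovered}}{\text{total mass of polymer and drug fed}} \times 100 \qquad (1)$$

$$\text{Drug loading}\;(\text{w/w}\;\%) = \frac{\text{mass of drug entrapped in the nanoparticles}}{\text{mass of nanoparticles}} \times 100 \qquad (2)$$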
In order to study the cellular uptake of unencapsulated DOX (designated as free DOX), DOX-loaded nanoparticles (DNPs) and antibody-conjugated DOX-loaded nanoparticles (ADNPs), we used three different cell lines: human ovarian carcinoma SKOV-3 cells, human uterine sarcoma MES-SA cells, and their MDR (P-gp overexpressing) derivative MES-SA/Dx5 (Dx5) cells. All cells were seeded in 24-well plates at a density of 100,000 to 200,000 cells/well. After allowing for overnight attachment and attainment of confluence, the cell medium was removed, and free DOX, DNPs or ADNPs in growth medium was added to the plates at a normalized DOX concentration of 10 µM (5.8 µg/mL). Cancer cells were incubated with DOX, DNPs, or ADNPs at 37 °C in a cell incubator for 24 hours. Cells in wells where no drug was added were used as controls. After 24 hours, the supernatant was removed and the cells were washed four times with ice-cold DPBS (pH 7.0) and lysed with 1 mL of DMSO. The supernatants were collected and centrifuged for 10 min at 14,000 rpm to remove cell debris and obtain cell lysates. We measured the fluorescence intensity of the cell lysates with a Fluorolog-3 (Jobin Yvon Horiba) spectrofluorometer at λex = 496 nm, λem = 592 nm for DOX, in order to determine the DOX concentration in cell lysates exposed to free DOX, DNPs or ADNPs. The protein content in the cell lysates was measured using a micro BCA protein assay kit, acquiring absorption data at 562 nm with a spectrophotometer. Cellular uptake of DOX was expressed by normalizing the amount of DOX to the amount of protein left inside the cells after the different treatments. An average value from three wells for each treatment was obtained for each experiment, and the average (±SD) intracellular uptake of DOX from 3 experiments was plotted. Cell proliferation was measured with an SRB assay (Invitrogen), which colorimetrically measures cellular protein [9]. In this study, the cytotoxicity of three different treatments (free DOX, DNPs and ADNPs) was investigated. The detailed procedure was the same as we have described before [10]. Briefly, cells were first seeded in a 96-well plate; after 24 hrs, they were subjected to the different treatments; the SRB assay was performed 24 hrs post treatment to assess cytotoxicity. Tested DOX concentrations ranged from 0-10 µM. A drug concentration equal to zero means that only DPBS and no drug was added to the wells. An average (±SD) 'Net Growth' from 3 experiments was plotted against increasing DOX concentrations. An average value from four wells for each treatment was obtained for each experiment. The following formulas were used to calculate net growth: (Tx - To)/(C - To) × 100 if Tx > To, and (Tx - To)/To × 100 if Tx < To. To is defined as the initial amount of cells, Tx corresponds to the absorbance of wells with the different treatments, and C is the absorbance of the control wells. Net growth was plotted against DOX/NP concentration to show toxicity effects as described by Monks et al. [9]. If Tx > To, the treatment effect is considered growth inhibition; if Tx < To, there is no net growth after the treatment, and so its effect is considered cell killing. Note that net growth values were generated by normalizing the data from each treatment to the control values, which did not receive DOX or NP. Statistical significance was identified by one-way ANOVA for the difference among treatment groups at the same DOX concentration. A p-value < 0.05 was considered statistically significant.
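Restated compactly, the two-case net-growth computation above can be sketched as follows; the function and variable names are ours, not the authors':

```python
def net_growth(tx: float, t0: float, c: float) -> float:
    """Percent net growth per the two-case formula attributed to
    Monks et al. [9]: tx is the absorbance of a treated well, t0 the
    absorbance corresponding to the initial amount of cells, and c
    the absorbance of the untreated control well."""
    if tx > t0:
        # Growth occurred but may be inhibited relative to control.
        return (tx - t0) / (c - t0) * 100.0
    # Fewer cells than at time zero: net cell killing.
    return (tx - t0) / t0 * 100.0
```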
III. RESULTS

The mean diameters of ADNPs and DNPs were 213.0 ± 3.5 nm and 162.6 ± 2.0 nm respectively. Data represent mean ± S.D. obtained from size measurements of three samples prepared on three different days (n=3). The polydispersity indices of ADNPs and DNPs were 0.193 ± 0.006 and 0.068 ± 0.014 respectively. There was a 50 nm increase in the diameter of the DOX-loaded nanoparticles after conjugation with the monoclonal antibody, due to the presence of IgG on the nanoparticle surface. Conjugation of antibody onto the surface of DNPs caused an increase in their zeta potential: the zeta potentials of the DOX-loaded nanoparticles with and without antibody (IgG) conjugation were -1.3 ± 3.8 mV and -13.2 ± 2.3 mV, respectively. The yield of the nanoparticles (i.e., ADNPs and DNPs) and their respective entrapment efficiencies were determined using equations (1) and (2) described above. Yield and drug loading (w/w %) values are tabulated as mean ± S.D. (n=3) in Table 1.

Table 1 Mean entrapment efficiencies and yield percent for ADNPs and DNPs (n=3). Data represent mean ± S.D. The entrapment efficiency and yield percent for both formulations were comparable

Formulation   Drug loading (w/w %)   Yield (%)
DNPs          2.7 ± 0.1              41.7 ± 6.8
ADNPs         2.3 ± 0.1              40.0 ± 3.8
We incubated cells with free DOX, DNPs, or ADNPs for 24 hours and normalized the amount of DOX to the amount of cellular protein. Fig. 1 shows that loading DOX into PLGA nanoparticles can significantly increase intracellular DOX uptake by Dx5 cells by approximately 6-fold compared to free DOX. However, DNPs (without antibody) did not improve the cellular uptake of DOX in MES-SA and
SKOV-3 cells. The experimental results show improved cellular uptake of DOX in Dx5 when DOX was delivered through PLGA nanoparticles compared to free DOX. In SKOV-3, DOX uptake was higher for antibody-conjugated DOX-loaded PLGA nanoparticles than for free DOX and nonconjugated nanoparticles.
Fig. 1 24-hr intracellular DOX uptake data in SKOV-3, MES-SA, and Dx5 cells, n = 3 experiments, 3 wells per treatment. * P < 0.05 (by ANOVA) for DNPs and ADNPs compared to free DOX, indicating significant uptake due to loading of DOX into PLGA nanoparticles and targeting of HER-2 antibody on the PLGA nanoparticle surface

Cell proliferation following the four different treatments is shown in Figs. 2-4. Figs. 2 & 3 show the cytotoxicity of the different DOX formulations in MES-SA and Dx5 cells respectively.

Fig. 2 MES-SA cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment. No differences between the cytotoxicity of the drug formulations were observed

Fig. 3 Dx5 cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment. * P < 0.05 (by ANOVA) for DNPs and ADNPs compared to free DOX, indicating significant uptake due to nanoparticle encapsulation
The expectation was that because neither MES-SA nor Dx5 cells overexpress HER-2 receptors, DNP and ADNP treatments should have similar cytotoxicity in both cell lines. On the other hand, Dx5 cells overexpress the drug efflux pump P-gp, so the two nanoparticle formulations should have higher cytotoxicity than free DOX in Dx5 cells. The results confirmed our expectations. In our uptake study (10 µM DOX concentration), ADNPs showed the highest uptake in SKOV-3 cells compared to free DOX and DNPs, while accumulation of DNPs and free DOX was similar. This result is not surprising because SKOV-3 cells overexpress HER-2 receptors and do not overexpress P-gp. Therefore, we expected the same pattern in cytotoxicity. As shown in Fig. 4, ADNP was the most toxic among the three DOX formulations at 10 µM DOX concentration, though the difference did not reach statistical significance, whereas free DOX and DNPs showed comparable cell growth inhibition capability.
Fig. 4 SKOV-3 cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment
IV. DISCUSSION

We did not observe improved intracellular uptake of DOX for DOX-loaded PLGA nanoparticles compared to free DOX in MES-SA and SKOV-3, whereas significantly increased DOX uptake was observed in Dx5. This is consistent with the existing literature, because free DOX is delivered into cells mainly through diffusion [11], whereas the major delivery route of PLGA nanoparticles is endocytosis [12]. Endocytosis can help DOX-loaded PLGA nanoparticles escape P-gp efflux mechanisms and lead to delivery of a significantly greater amount of DOX compared to free DOX in P-gp-positive cancer cell lines. However, in P-gp-negative cancer cell lines, DOX-loaded nanoparticles (without antibody attachment) have no delivery advantage over free DOX, probably because diffusion is not inhibited. The uptake of DOX in SKOV-3 was also increased when encapsulated into NPs conjugated with the HER-2 antibody, since SKOV-3 cells overexpress HER-2 receptors on the cell membrane. This phenomenon is a result of both electrostatic interactions of the ADNPs with the cell membrane and receptor-mediated endocytosis [13]. We observed cytotoxicity results consistent with the cellular uptake results at 10 µM extracellular DOX concentration: with more intracellular uptake of DOX, the cytotoxicity is higher. However, no obvious 24-hour cytotoxic effect of either DNPs or ADNPs compared to free DOX was observed in Dx5 and SKOV-3 at 0.1 µM and 1 µM extracellular DOX concentrations, which could be attributed to the slow release of DOX from DNPs and ADNPs. According to our in vitro drug release study (data not shown), about 16% of DOX was released from the DNPs in DPBS (pH 7.0) within 24 hours. The surface modification of DNPs with antibody affects DOX release from the nanoparticles: with an extra layer of antibody coating, the release of DOX is slower than from DNPs. Only 9% of DOX was released from ADNPs in DPBS (pH 7.0) within 24 hours (data not shown).
V. CONCLUSIONS

This study has shown improved cellular uptake and cytotoxicity of DOX nanoparticle formulations compared to free DOX in drug-resistant human uterine cancer cells. By encapsulating DOX into PLGA nanoparticles, we were able to successfully overcome the MDR effect in P-gp-expressing Dx5 human uterine cancer cells. Additionally, targeted DOX-loaded nanoparticles can produce a significant increase in cellular uptake and improved cytotoxicity of DOX in SKOV-3 cells, which express the HER-2 receptor. We predict that DOX antibody-conjugated PLGA nanoparticles will result in higher delivery efficacy and have promising potential for in vivo clinical applications.
ACKNOWLEDGMENT A.F.F. was supported by NIH/NIGMS R25 GM061347.
REFERENCES

1. Devita V, Hellman S, Rosenberg S (1989) Cancer, Principles and Practice of Oncology, 3rd ed. Lippincott, Philadelphia
2. Gottesman MM (2002) Mechanisms of cancer drug resistance. Annu Rev Med 53:615-627
3. Panyam J, Labhasetwar V (2003) Biodegradable nanoparticles for drug and gene delivery to cells and tissue. Adv Drug Del Rev 55(3):329-347
4. Sapra P, Allen TM (2003) Ligand-targeted liposomal anticancer drugs. Prog Lipid Res 42(5):439-462
5. Yoo HS, Oh JE, Lee KH et al. (1999) Biodegradable nanoparticles containing doxorubicin-PLGA conjugate for sustained release. Pharm Res 16(7):1114-1118
6. Yoo HS, Park TG (2004) Folate receptor targeted biodegradable polymeric doxorubicin micelles. J Control Release 96(2):273-283
7. Manchanda R, Fernandez-Fernandez A, Nagesetti A, McGoron AJ (2010) Preparation and characterization of a polymeric (PLGA) nanoparticulate drug delivery system with simultaneous incorporation of chemotherapeutic and thermo-optical agents. Colloids Surf B Biointerfaces 75:260-267
8. Yang J, Lee CH, Park J et al. (2007) Antibody conjugated magnetic PLGA nanoparticles for diagnosis and treatment of breast cancer. J Mater Chem 17(26):2695-2699
9. Monks A et al. (1991) Feasibility of a high-flux anticancer drug screen using a diverse panel of cultured human tumor cell lines. Journal of the National Cancer Institute 83(11):757-766
10. Tang Y, McGoron AJ (2009) Combined effects of laser-ICG photothermotherapy and doxorubicin chemotherapy on ovarian cancer cells. J Photochem Photobiol B 97(3):138-144
11. Sahoo SK, Labhasetwar V (2005) Enhanced antiproliferative activity of transferrin-conjugated paclitaxel-loaded nanoparticles is mediated via sustained intracellular drug retention. Mol Pharm 2:373-383
12. Qaddoumi MG, Gukasyan HJ, Davda J et al. (2003) Clathrin and caveolin-1 expression in primary pigmented rabbit conjunctival epithelial cells: role in PLGA nanoparticle endocytosis. Mol Vision 9:559-568
13. Sun BF, Ranganathan B, Feng SS (2008) Multifunctional poly(D,L-lactide-co-glycolide)/montmorillonite (PLGA/MMT) nanoparticles decorated by Trastuzumab for targeted chemotherapy of breast cancer. Biomaterials 29:475-486
Author: Tingjun Lei
Institute: Florida International University
Street: 10555 West Flagler Street, EAS 2600
City: Miami
Country: United States
Email: [email protected]
Cellular Uptake and Cytotoxicity of a Novel ICG-DOX-PLGA Dual Agent Polymer Nanoparticle Delivery System

Romila Manchanda, Tingjun Lei, Yuan Tang, Alicia Fernandez-Fernandez, and Anthony J. McGoron

Department of Biomedical Engineering, Florida International University, 10555 West Flagler Street, Miami FL 33174 USA

Abstract–– We recently reported the fabrication of a novel polymer nanoparticle delivery system with simultaneously entrapped indocyanine green (ICG) and doxorubicin (DOX). This system has potential applications for combined chemotherapy and hyperthermia. Research in our group showed that simultaneous use of ICG and DOX with localized hyperthermia can produce the same effect as that achieved by larger doses of chemotherapy alone. In this study, we explored the potential of dual-agent PLGA nanoparticles (ICG-DOX-PLGANPs) to overcome multidrug resistance (MDR) mechanisms in cancer cells by increasing intracellular drug concentrations via nanoparticle uptake. ICG-DOX-PLGANPs were prepared by the O/W emulsion solvent evaporation method. The dominant processing parameters that control particle size and the drug entrapment efficiencies of ICG and DOX were PLGA concentration, PVA concentration and initial drug content. We optimized our previous formulation based on those parameters. Entrapment efficiency of the optimized ICG-DOX-PLGANPs was measured by fluorescence using the DMSO burst release procedure. The internalization of ICG–DOX–PLGANPs by three cancer cell lines was visualized by confocal laser microscopy and fluorescence microscopy. Cytotoxicity was assessed using the SRB assay. The nanoparticles produced by the optimal formulation had sizes of 135 ± 2 nm (n=3) with a low polydispersity index (0.149 ± 0.014, n=3) and a zeta potential of -11.67 ± 1.8 mV. Drug loading was approximately 3% w/w for ICG and 4% w/w for DOX (n=3). Cellular uptake of ICG and DOX from ICG–DOX–PLGANPs in DOX-resistant MES-SA/Dx5 cancer cells was higher compared to free ICG and free DOX treatment. However, the same phenomenon was not observed in the MES-SA and SKOV-3 cancer cell lines. The SRB cytotoxicity results show that ICG–DOX–PLGANPs are more toxic than free DOX in DOX-resistant cell lines.

Keywords— PLGA nanoparticles, Indocyanine Green, Doxorubicin, cell lines.
I. INTRODUCTION

Chemotherapy is a major therapeutic approach for the treatment of localized and metastasized cancers. Important limitations of chemotherapeutic agents include the development of multidrug resistance (MDR), as well as systemic toxic side effects resulting from nonspecific localization to tumors. Drug resistance is a major obstacle in tumor therapeutics and has been an important focus of research [1]. Doxorubicin (DOX) is an example of a first-line anticancer drug that is limited by MDR development in
some cancer cell lines, as well as by toxicity effects such as irreversible cardiotoxicity. During the past few decades, PLGA nanoparticles have emerged as drug carriers in novel drug delivery systems and have also been used to overcome the MDR effect. In our previous studies we reported the fabrication of a novel polymeric PLGA nanoparticle delivery system with simultaneously entrapped ICG and DOX that has potential applications for combined chemotherapy and localized hyperthermia [3]. In the ICG-DOX-PLGA delivery system, the optical agent serves as an imaging tracer to monitor drug delivery, and as a source of hyperthermia once the drug system reaches its target, which enhances the effect of chemotherapy. Previous research in our group showed that simultaneous use of ICG and DOX in combination with localized hyperthermia can produce the same effect as that achieved by greater doses of chemotherapy alone [2]. The main aim of this study was to explore the potential of dual-agent-incorporated PLGA nanoparticles to overcome P-gp-mediated MDR. The rationale behind this strategy is to increase the intracellular concentration of the targeted drug by the enhanced tumor accumulation effect of nanoparticles.
II. METHODS

PLGA nanoparticles loaded with ICG and DOX were prepared using a modified version of the oil-in-water (O/W) single emulsion solvent evaporation process [3]. According to our previous experimental data, the dominant processing parameters in controlling particle size and the drug entrapment efficiencies of ICG and DOX included the following: PLGA concentration, poly(vinyl alcohol) (PVA) concentration, and initial drug content. Based on these factors, we decided to further optimize the formulation of ICG–DOX–PLGANPs by introducing a slight modification in the protocol. Briefly, the optimized ICG–DOX–PLGANPs were prepared as follows: 40 mg of PLGA, 1 mg of ICG, 1 mg of DOX and 20 µL of triethylamine were dissolved in 4.0 mL of a methanol-dichloromethane (1:3, v/v) mixture. This organic phase was emulsified with 8 mL of PVA solution (3%, w/v) by probe sonication at 50 W for one minute in an ice bath. The organic solvent was then rapidly evaporated under reduced pressure at 39 °C. The resulting PLGA nanoparticle
suspension was then ultracentrifuged at 14,000 rpm for 30 minutes. After centrifugation, the nanoparticle precipitate was washed using the same volume of distilled water as the supernatant, and again centrifuged at 14,000 rpm for 15 minutes. The washing process was repeated three times in order to remove the adsorbed drugs. The washed nanoparticles were then freeze-dried using the Freezone system (Labconco, Free Zone Plus 6) for 24 hours.

A. Characterization of ICG-DOX-PLGA NPs

Nanoparticle size and zeta potential were characterized by dynamic light scattering (DLS) using a Zetasizer Nano ZS (Malvern Instruments, UK). Uniformity of particle shape and size was verified with SEM. Drug loading of ICG and DOX was determined using the DMSO burst release method by spectrofluorometer measurements in steady-state mode. ICG and DOX calibration curves were prepared prior to the measurements to verify spectral characteristics, linearity range, and absence of cross-talk. Excitation wavelengths were 785 nm for ICG and 496 nm for DOX, and spectral emissions were recorded in 1-nm intervals up to 870 nm. Nanoparticle yield and entrapment efficiency were calculated as percentages.

B. In Vitro Cell Experiments

The human ovarian cancer cell line SKOV-3, the human uterine cancer cell line MES-SA and the P-gp overexpressing human uterine cancer cell line MES-SA/Dx5 (Dx5) were used to study the cellular uptake of unencapsulated DOX and ICG (designated as Free DOX + free ICG) and of ICG–DOX–PLGANPs. All cells were seeded in 24-well plates at a density of 100,000 to 200,000 cells/well. After allowing for overnight attachment and attainment of confluence, the cell medium was removed, and Free DOX + free ICG or ICG–DOX–PLGANPs in growth medium was added to the plates at normalized DOX and ICG concentrations of 10 µM (5.80 µg/mL) and 5.65 µM (4.38 µg/mL), respectively. The cells were then incubated with Free DOX + free ICG or ICG–DOX–PLGANPs at 37 °C in a cell incubator for 24 hours. Cells in wells where no drug was added were used as controls. After 24 hours, the supernatant was removed and the cells were washed four times with ice-cold DPBS (pH 7.0) and lysed with 1 mL of DMSO. The supernatants were collected and centrifuged for 10 min at 14,000 rpm to remove cell debris and obtain cell lysates. We measured the fluorescence intensity of the cell lysates with a Fluorolog-3 (Jobin Yvon Horiba) spectrofluorometer at λex = 496 nm, λem = 592 nm for DOX, in order to determine
the DOX concentration in cell lysates exposed to Free DOX + free ICG or to ICG–DOX–PLGANPs. The ICG concentration in cell lysates was not measured, since ICG fluorescence is unstable in aqueous media by 24 hours. In order to adjust for the effect of fluorescence from cellular components, a DOX calibration curve in the ICG-DOX mixture was created by dissolving the two compounds in DMSO and adding the solution to untreated cells. The supernatants were collected and measured to prepare a calibration curve for DOX in the presence of untreated cell lysates. The protein content in the cell lysates was measured using a micro BCA protein assay kit, acquiring absorption data at 562 nm with a spectrophotometer. Cellular uptake of DOX was expressed by normalizing the amount of DOX to the amount of protein left inside the cells after the different treatments, in units of nanomoles per milligram of protein. An average intracellular uptake value of DOX from three wells for each treatment was obtained for each experiment, and the average (±SD) from 3 experiments was plotted. Statistical significance was identified by a paired t-test of mean 24-hour DOX cellular uptake between treatment groups at the same DOX treatment concentration. Cell proliferation under ICG–DOX–PLGANPs was measured by the SRB assay (Invitrogen, USA), which colorimetrically measures cellular protein. The detailed cytotoxicity assay procedure and data analysis can be found in our previous publication [2]. Free DOX + free ICG treatment groups were also included in this study for comparison.
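The quantification pipeline just described (linear DOX calibration in DMSO, then normalization to cellular protein) can be sketched as below; the function and variable names are hypothetical, and the calibration slope and intercept are assumed to have been fitted beforehand:

```python
import numpy as np

def dox_uptake_nmol_per_mg(fluorescence: np.ndarray,
                           cal_slope: float,
                           cal_intercept: float,
                           lysate_volume_ml: float,
                           protein_mg: np.ndarray) -> np.ndarray:
    """Convert lysate fluorescence (arbitrary units at 496/592 nm) to
    DOX concentration in the DMSO lysate via a linear calibration
    curve, then to nanomoles, normalized to mg of cellular protein."""
    dox_um = (fluorescence - cal_intercept) / cal_slope  # micromolar
    dox_nmol = dox_um * lysate_volume_ml                 # µM × mL = nmol
    return dox_nmol / protein_mg
```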
III. RESULTS

A. Characterization of ICG-DOX-PLGA NPs

Nanoparticle size was 135 ± 2 nm in diameter as determined by DLS (n=3). Particle diameters were about 90-100 nm in SEM measurements. The mean zeta potential (n=3) was -11.6 ± 1.8 mV.

B. In Vitro Cellular Uptake Studies in Cancer Cell Lines

We incubated cells with Free DOX + free ICG and ICG–DOX–PLGANPs for 24 hours and normalized the amount of DOX to the amount of cellular protein. Figure 1 shows that encapsulating DOX and ICG into PLGA nanoparticles can significantly increase intracellular DOX uptake by approximately 5-fold compared to free DOX + free ICG in Dx5 cells. However, ICG–DOX–PLGANPs did not improve cellular uptake in MES-SA and SKOV-3 cells.
Fig. 1 24-h intracellular uptake data in SKOV-3, MES-SA and Dx5 cells, n = 3 experiments, 3 wells per treatment. * Paired t-test between different drug treatments, p < 0.05, indicating significant differences due to loading of DOX into PLGA nanoparticles

C. In Vitro Cell Viability Studies in Cancer Cell Lines

Figures 2-4 show the cytotoxicity of ICG–DOX–PLGANPs in MES-SA, Dx5 and SKOV-3 cells respectively, as compared to the Free DOX + free ICG treatment. Since MES-SA and SKOV-3 cells do not express P-gp drug efflux pumps, nanoparticle encapsulation did not increase DOX cytotoxicity. On the other hand, the cytotoxicity of ICG–DOX–PLGANPs in Dx5 cells was higher than that of free DOX at the 10 µM DOX concentration. This is because Dx5 cells overexpress P-gp, so free DOX is pumped out of the cell. However, the P-gp system is apparently unable to recognize the DOX when it is encapsulated in nanoparticles. These results are generally in accordance with the cellular uptake results, where free DOX and ICG-DOX-PLGANPs showed comparable uptake in MES-SA and SKOV-3 cells, whereas ICG–DOX–PLGANPs showed much higher uptake in Dx5 cells than free drug.

Fig. 2 MES-SA cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment

Fig. 3 Dx5 cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment. * P < 0.05 (by paired t-test) for ICG-DOX-PLGANPs compared to free DOX + free ICG, indicating significant cell growth inhibition due to nanoparticle encapsulation

Fig. 4 SKOV-3 cell growth for different drug formulations, n = 3 experiments, 4 wells per treatment

IV. DISCUSSION
A. Standardized Formulation

ICG-DOX-PLGANPs were prepared using the O/W single emulsion method. The size of nanoparticles has been found to play a pivotal role in drug delivery. It is expected that small nanoparticles, usually between 100-200 nm, will be less susceptible to reticuloendothelial system clearance and will have better penetration into tissues and cells when used in in vivo therapy. The nanoparticle size was 135 ± 2 nm in diameter (polydispersity index = 0.149). By SEM, the particles have diameters of approximately 90-100 nm. Drug loading was 3.0 ± 1.6% for ICG and 4.0 ± 1.9% for DOX. DLS analysis of ICG-DOX-PLGANPs showed a uniform size distribution in the nanometer range. The DLS
measures the hydrodynamic diameter of particles dispersed in the aqueous phase or in solvents, whereas SEM measures the size of dried samples loaded onto the substrate, coated with a Pt/Pd mixture, and then vacuum-dried. It is speculated that the hydration and swelling of the particles in aqueous buffer may be the reason for observing a larger size by DLS measurements as compared to SEM.
V. CONCLUSION

Our results show promise for the use of PLGA drug delivery systems containing chemotherapy and localized hyperthermia agents (DOX and ICG) as a cancer treatment strategy. The potential of this drug delivery system lies not only in the ability to deliver therapeutic agents to tumor sites, but also in the ability to circumvent MDR mechanisms, which are an important concern in chemotherapeutic dosing and efficacy. The coupling of increased cell uptake in MDR cells with the combined effect of localized hyperthermia and chemotherapy
could result in more effective dosage regimens for cancer patients, and potentially fewer side effects and increased quality of life. Future studies will focus on characterizing the chemotherapy, imaging, and hyperthermia effects to assess the potential for clinical applications.
ACKNOWLEDGMENT A.F.F. was supported by NIH/NIGMS R25 GM061347.
REFERENCES

1. Ramachandra M, Ambudkar SV, Chen D, Hrycyna CA, Dey S, Gottesman MM (1998) Human P-glycoprotein exhibits reduced affinity for substrates during a catalytic transition state. Biochemistry 37:5010-5019
2. Tang Y, McGoron AJ (2009) Interaction of dye-enhanced photothermotherapy and chemotherapy in the treatment of cancer: an in vitro study. Proceedings of SPIE Photonics West, vol. 7164, in press. San Jose, CA, January 24-29, 2009
3. Manchanda R, Fernandez-Fernandez A, Nagesetti A, McGoron AJ (2010) Preparation and characterization of a polymeric (PLGA) nanoparticulate drug delivery system with simultaneous incorporation of chemotherapeutic and thermo-optical agents. Colloids Surf B Biointerfaces 75:260-267
4. Sahoo SK, Labhasetwar V (2005) Enhanced antiproliferative activity of transferrin-conjugated paclitaxel-loaded nanoparticles is mediated via sustained intracellular drug retention. Mol Pharm 2:373-383
5. Qaddoumi MG, Gukasyan HJ, Davda J, Labhasetwar V, Kim KJ, Lee VHL (2003) Clathrin and caveolin-1 expression in primary pigmented rabbit conjunctival epithelial cells: role in PLGA nanoparticle endocytosis. Mol Vision 9:559-568
6. Panyam J, Labhasetwar V (2003a) Biodegradable nanoparticles for drug and gene delivery to cells and tissue. Adv Drug Deliv Rev 55:329-347
7. Panyam J, Labhasetwar V (2003b) Sustained cytoplasmic delivery of drugs with intracellular receptors using biodegradable nanoparticles. Mol Pharm 1:77-84
8. Wang Z, ChuiWai K, Paul C (2009) Design of a multifunctional PLGA nanoparticulate drug delivery system: evaluation of its physicochemical properties and anticancer activity to malignant cancer cells. Pharmaceutical Research 26:1162-1171
9. Park J, Fong PM, Lu J, Russell KS, Booth CJ, Saltzman WM, Fahmy TM (2009) PEGylated PLGA nanoparticles for the improved delivery of doxorubicin. Nanomedicine: Nanotechnology, Biology, and Medicine 5:410-418
Corresponding author: Romila Manchanda
Institute: Biomedical Engineering Dept., Florida International Univ.
Street: 10555 W Flagler St., EC 2680
City and Country: Miami, FL, USA
Email: [email protected]
Electrospray – Differential Mobility Analysis (ES-DMA) for Characterization of Heat Induced Antibody Aggregates

Suvajyoti Guha1,2, Joshua Wayment2, Michael J. Tarlov2, and Michael R. Zachariah1,2

1 Mechanical Engineering Department, University of Maryland, College Park, Maryland, U.S.A.
2 National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A.
Abstract— Aggregation of therapeutic proteins is common and can occur during bioprocessing, shipping, storage or even during delivery to the patient. Aggregation is a major concern because it is generally believed that aggregates are immunogenic and can cause adverse responses in patients. In this study we employ electrospray-differential mobility analysis (ES-DMA) to characterize small aggregates, and micro-flow imaging to characterize sub-visible and visible macroscopic particles. Results are presented for four antibodies of the IgG class: a polyclonal antibody (IgG-A), Rituxan (Rmab), and a monoclonal antibody that is glycosylated (IgG-B) and deglycosylated (IgG–C) in the Fc region. These four antibodies were systematically stressed at 70 °C for up to 180 minutes, and the aggregate formation was tracked. We find IgG–A to be the least stable, with the monomer concentration decreasing exponentially with time, accompanied by the appearance of visible insoluble aggregates a few millimeters in size. For Rmab we find a similar, but less rapid, decrease in monomer concentration and increase in aggregates. These aggregates are of the order of tens of nanometers (intermediate aggregates), while the particulates are of the order of tens of micrometers. IgG–B is the most stable; however, unlike Rmab, IgG–B and IgG–C do not show any evidence of intermediate aggregates. In these two cases the aggregates are in the sub-visible domain and are of the order of a few to tens of micrometers.

Keywords— antibody, aggregates, heat treated, electrospray-differential mobility analyzer, micro-flow imaging.
I. INTRODUCTION

Protein aggregation is one of the most dominant problems in current bio-pharmaceutical research [1]. Protein aggregation can be caused by simply shaking or stirring a sample. Concern about aggregates stems from their potential to cause an adverse immune response in patients. Hence it is important to study the formation of these aggregates, which may also shed light on the thermodynamics of the process of protein aggregation as well as provide information about the morphology of these aggregates. It has already been demonstrated elsewhere [2] that ES-DMA can be a versatile technique for characterizing antibody aggregates, especially during the formation of intermediate aggregates (15-100 nm in size). The details of this technique have been discussed in greater detail elsewhere [3]. We combined ES-DMA and micro-flow imaging (MFI) for characterizing the protein aggregates. MFI helps characterize particulates larger than 1 µm and has been discussed in greater detail elsewhere [4].
II. MATERIAL AND METHODS*

A. Antibody Sample Preparation

Our study encompasses four different antibodies: a polyclonal immunoglobulin (IgG–A) obtained from Sigma Aldrich, the Rituxan monoclonal antibody (Rmab) obtained from Biogen, and a glycosylated antibody (IgG–B) and deglycosylated antibody (IgG–C) obtained from Genentech. All samples were diluted to concentrations of about 100 µg/mL in 20 mmol/L ammonium acetate buffer at pH 7 for use in the electrospray differential mobility analyzer. Aggregate formation was accelerated by subjecting samples to 70 °C for 30, 60, 90, 120, and 180 minutes to monitor the time evolution of aggregate formation.

B. Description of Experimental Set Up
The electrospray differential mobility analysis (ES-DMA) system works on the principle of gas-phase electrophoresis, in which a charged particle is separated on the basis of its electrical mobility through a balance of drag and electrical forces. When scanned through a range of electrical fields, different mobility sizes can be extracted from the DMA and counted by a condensation particle counter to give a mobility size distribution. The mobility size is directly related to the projected area of the particle. To generate the protein in the gas phase, an electrospray source is used. The ES-DMA is operated with a sheath flow rate of 30 L/min and an aerosol flow rate of 1.2 L/min. The capillary size used for the electrospray was 25 µm. Size distributions were obtained from 2-20 nm in steps of 0.2 nm.
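The drag-electric force balance mentioned above is only described qualitatively in the text; for a particle carrying n elementary charges, the standard mobility relation of DMA theory (cf. Knutson and Whitby [5]) — our gloss, not an equation from the paper — is

$$Z_p = \frac{n\,e\,C_c(d_p)}{3\pi\mu\,d_p},$$

where $e$ is the elementary charge, $C_c$ the slip correction factor, $\mu$ the gas viscosity, and $d_p$ the mobility diameter.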
Further details about the operation and principles of the instrument can be obtained elsewhere [3,5]. The MFI instrument, a model DPA4200, on loan to NIST from Brightwell Technologies Inc., was used to analyze particulates from 1-70 µm at a flow rate of 150 µL/min.

III. RESULTS

A. Characterization of Oligomers Using ES-DMA

A typical ES-DMA size distribution of antibodies, depending on the concentration and the ionic strength, can consist of monomers, dimers and trimers. Higher protein concentrations (more than 100 µg/mL) or lower ionic strength solutions may lead to more droplet-induced aggregates, producing more oligomers. For any particle weighing 145-150 kilodaltons, the expected mobility diameter has already been established to be close to 9 nm [2,7]. We find the monomers to appear at 8.8 nm, which is consistent with the above findings. The dimers and trimers appear at 10.6 nm and 12 nm respectively, again in accordance with previous findings [2].

B. Decrement of Antibody Monomers Studied by ES-DMA

The IgG–A size distribution consists of monomers, dimers and trimers and is shown in figure 1. It is seen that, with increasing time at 70 °C, the monomer-dimer-trimer concentration decreases significantly in comparison to the other antibodies, as will become evident later.

Fig. 1 Size distribution of IgG–A using ES-DMA

Rmab appears to be slightly more stable than IgG–A, and its ES-DMA size distribution, shown in figure 2, differs from those of the other antibodies: in addition to the monomers and the droplet-induced dimers and trimers, larger aggregates from 13 nm to 20 nm (labeled intermediate aggregates in the figure) are observed. These data also agree with SEC experiments performed on the samples (data not shown).

Fig. 2 Size distribution of Rmab using ES-DMA

Measured size distributions of IgG–B and IgG–C show similar trends for the monomers, dimers and trimers, but do not show the presence of any larger aggregates. A typical size distribution obtained with heat-treated IgG–B is shown in figure 3.

Fig. 3 Size distribution of IgG-B using ES-DMA
C. Formation of Large Particulates Studied by MFI

The decrease in monomer signal is followed by the formation of sub-visible or visible aggregates of the order of a few micrometers. A typical size distribution obtained using MFI is shown in figure 4. The MFI characterizes particles using optical techniques and cannot image particles smaller than 0.75 µm. Figure 4 shows that the deglycosylated sample IgG–C forms more aggregates than the glycosylated IgG–B sample. Both samples were heated at 70 °C for 120 minutes, and concentrations were approximately 100 µg/mL.
Fig. 4 Size distributions obtained using MFI for IgG–B and IgG–C samples heat-incubated for 120 minutes at 70 °C
IV. DISCUSSION

A. Decay of Monomers

Comparing the monomer counts as a function of incubation time for all four antibodies shows a clear trend of exponential decay of the monomeric units. Figure 5 shows the decrease of the monomer counts as a function of time. IgG–A appears to be the least stable, followed by Rmab, IgG–C and then IgG–B.

One question arising from these results is why the loss of monomer is not accompanied by the appearance of larger aggregates. There are two possibilities. One is that the larger aggregates are present at concentrations below the detection limit of ES-DMA. The other possibility is that the kinetics of aggregation is so fast that intermediate-sized aggregates grow to the larger particles observed by the MFI, beyond the size limit of the ES-DMA (1 µm). One of the most interesting findings is the apparent agreement between the ES-DMA and MFI results for IgG–B and IgG–C. In figure 5, exponential fits through the experimental points obtained using ES-DMA establish that IgG–C has a steeper gradient than IgG–B, i.e., IgG–B is more stable than IgG–C, while the MFI data shown in figure 4 establish the same, based on the fact that the amount of large particulates formed is smaller for the IgG–B sample than for IgG–C.
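For concreteness, the single-exponential fits of figure 5 can be reproduced as in the sketch below; the time points follow the incubation schedule, but the normalized counts are placeholders rather than values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Incubation times (minutes at 70 °C) and placeholder normalized counts.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 180.0])
counts = np.array([1.00, 0.55, 0.33, 0.20, 0.12, 0.05])  # hypothetical

def decay(t, k):
    """Single-exponential monomer decay, N(t)/N(0) = exp(-k t)."""
    return np.exp(-k * t)

(k_fit,), _ = curve_fit(decay, t, counts, p0=[0.01])
print(f"fitted decay constant: {k_fit:.4f} per minute")
```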
Fig. 5 Decay of the monomer counts as a function of incubation time for the four antibody samples using ES-DMA. Lines drawn are exponential fits

Fig. 6 Comparison of normalized particle counts for Rmab with IgG–C for samples heat-incubated for 120 minutes at 70 °C
B. Characterizing the Intermediate Aggregates for Rmab Obtained Using the ES-DMA System

It has already been discussed in the previous section that, other than the monomers, dimers and trimers, some intermediate aggregates appear in the ES-DMA size distribution of Rmab. Upon further examination it was found that these large aggregates vanish from the size distribution, with both ES-DMA and SEC, if 0.20-µm filters are used to filter the samples prior to analysis. Despite this, we did observe effects of clogging in the capillary, indicating that some type of aggregation is likely still taking place even though the heat treatment had been stopped and the particles large enough to clog the capillary had been removed.

C. Mass Balance Closure

Another interesting feature is that the size distribution of the aggregates depends on the antibody type. Rituxan is less stable than IgG–C, as evident from figure 5, and goes on to form larger sub-visible aggregates (> 5 µm), while IgG–C forms smaller aggregates (< 5 µm) in larger numbers, as seen in figure 6. In fact, if we assume that all these large particulates are spherical, we can attempt a mass balance.
Assuming each MFI-counted particulate to be a sphere consisting of monomers, and also assuming the density of the monomers to be equal to that of the particulates, we can calculate the number of monomers, $C_{\mathrm{MFI}}$, that constitute each particulate using equation (1), where $D_{\mathrm{MFI}}$ and $d_{\mathrm{DMA}}$ are the particulate and monomer diameters determined using MFI and ES-DMA, respectively.
$$\pi D_{\mathrm{MFI}}^{3} = C_{\mathrm{MFI}} \times \pi d_{\mathrm{DMA}}^{3} \qquad (1)$$
Then, integrating the area under the size distribution obtained from MFI, we can determine the number of monomers that contribute to the particulates. Since we have the ES-DMA data at time t = 0 minutes and t = 120 minutes, we know the concentration of monomers in the solution. Applying equation (1) results in more $C_{\mathrm{MFI}}$ than the number of antibodies existing in solution at time t = 0 minutes, implying that our geometrical assumption in equation (1) is incorrect. In other words, the particulates cannot be spherical. Since we do not know their shape, we cannot directly close the mass balance. Rather, we define the geometry of the particulates in terms of equation (2),
$$C_{\mathrm{total}} = C_{\mathrm{ES\text{-}DMA}} + S \times C_{\mathrm{MFI}} \qquad (2)$$
where $C_{\mathrm{ES\text{-}DMA}}$ and $C_{\mathrm{MFI}}$ denote the concentrations measured using ES-DMA and MFI, respectively; a sphericity S = 0 would indicate that the particulates are fibrillar, and S → 1 would mean that the particulates are almost spherical. The above equation holds irrespective of the incubation time. Neglecting $C_{\mathrm{MFI}}$ at time t = 0 minutes, we can write
$$C_{\mathrm{ES\text{-}DMA}}\big|_{t=0} = C_{\mathrm{ES\text{-}DMA}}\big|_{t=120} + S \times C_{\mathrm{MFI}}\big|_{t=120} \qquad (3)$$
Using equation (3), we determine the sphericity of the Rmab particulates to be 0.36 and that of IgG–C to be 0.58.
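The arithmetic of equations (1)-(3) can be summarized as follows; this is a minimal sketch with our own names, and the example call uses placeholder numbers, not data from the paper:

```python
def monomers_per_particulate(d_mfi_nm: float, d_dma_nm: float) -> float:
    """Equation (1): monomers per spherical particulate,
    C_MFI = (D_MFI / d_DMA)**3, both diameters in nanometers."""
    return (d_mfi_nm / d_dma_nm) ** 3

def sphericity(c_esdma_t0: float, c_esdma_t120: float,
               c_mfi_t120: float) -> float:
    """Equation (3) rearranged for S, with C_MFI expressed in
    monomer-equivalent concentration units."""
    return (c_esdma_t0 - c_esdma_t120) / c_mfi_t120

# Placeholder example: a 5 µm particulate built from 8.8 nm monomers.
print(monomers_per_particulate(5000.0, 8.8))
```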
V. CONCLUSION

In this study we examined the use of ES-DMA and MFI to study heat-induced degradation of four antibodies and found IgG-A to be the least stable and IgG-B the most stable. We also found a concomitant increase in very large particulates using the MFI technique and no detectable intermediate species. We found IgG-A to form visible insoluble aggregates and Rmab to form sub-visible soluble aggregates larger in size compared to IgG-B and IgG-C. The ES-DMA results correlate with the MFI results, and using the two techniques together can probe sizes from 2 nm to 100 µm and also provide insight into the morphological details of the particulates.
ACKNOWLEDGMENT The authors would like to acknowledge Brightwell Technologies Inc for the use of the MFI instrument and demonstrating its utility for the antibody samples. The authors would also like to acknowledge Kai Zheng from Genentech Inc for providing the glycosylated and deglycosylated antibodies (IgG–B and IgG–C respectively).
* Reference to commercial equipment or supplies does not imply its endorsement by the National Institute of Standards and Technology (NIST), nor does it imply that it is necessarily the best suited for the purpose.

REFERENCES

1. Mahler HC, Friess W, Grauschopf U, Kiese S (2009) Protein aggregation: pathways, induction factors and analysis. Journal of Pharmaceutical Science 98(9):3058-3071
2. Pease LF, Elliott J, Tsai D-H, Zachariah MR, Tarlov MJ (2008) Determination of protein aggregation with differential mobility analysis: application to IgG antibody. Biotechnology and Bioengineering 101:1214-1222
3. Tsai D-H, Pease III LF, Zangmeister RA, Tarlov MJ, Zachariah MR (2009) Aggregation kinetics of colloidal particles measured by gas-phase differential mobility analysis. Langmuir 25(1):140-146
4. Huang C, Sharma D, Oma P, Krishnamurthy R (2009) Quantitation of protein particles in parenteral solutions using micro-flow imaging. Journal of Pharmaceutical Science 98(9):3058-3071
5. Knutson EO, Whitby KT (1975) Aerosol classification by electric mobility: apparatus, theory, and applications. J Aerosol Sci 6:443-451
6. Bacher G, Szymanski WW, Kaufman SL, Zollner P, Blaas D, Allmaier G (2001) Charge-reduced nano electrospray ionization combined with differential mobility analysis of peptides, proteins, glycoproteins, noncovalent protein complexes and viruses. J Mass Spectrom 36(9):1038-1052

Corresponding author:
Author: Prof. Michael R. Zachariah
Institute: Department of Mechanical Engineering and Chemistry, University of Maryland
Street: 2125 Glenn L Martin Hall
City: College Park, Maryland
Country: United States of America
Email: [email protected]
Mechanisms of Poly(amido amine) Dendrimer Transepithelial Transport and Tight Junction Modulation in Caco-2 Cells

D.S. Goldberg1, P.W. Swaan1,2, and H. Ghandehari1,3

1 Fischell Department of Bioengineering, University of Maryland, College Park, MD, USA
2 Center for Nanomedicine and Cellular Delivery, Department of Pharmaceutical Sciences, University of Maryland, Baltimore, MD, USA
3 Departments of Pharmaceutics and Pharmaceutical Chemistry & Bioengineering, Center for Nanomedicine, Salt Lake City, UT, USA
Abstract— The purpose of this study was to investigate the mechanisms of cellular uptake, intracellular trafficking, transepithelial transport and tight junction modulation of G3.5 poly(amido amine) dendrimers in Caco-2 cells. G3.5 dendrimers have shown promise as oral drug delivery carriers due to their ability to translocate across epithelial cell monolayers by both transcellular and paracellular mechanisms. Chemical inhibitors blocking clathrin-, caveolin- and dynamin-dependent endocytosis pathways were used to investigate the mechanisms of dendrimer cellular uptake and transport across Caco-2 cells. Cellular uptake was found to be dynamin-dependent, and was reduced by both clathrin and caveolin endocytosis inhibitors, suggesting that dendrimers take advantage of several receptor-mediated endocytosis pathways for cellular entry. In contrast, dendrimer transepithelial transport was found to be governed by dynamin- and clathrin-dependent pathways only. Intracellular trafficking studies showed that dendrimers are found in early endosomes and trafficked to lysosomes within 15 minutes, but that the pathway becomes saturated, leading to an increased presence in the endosomes at later time points. The dendrimers were unable to open tight junctions in cell monolayers treated with dynasore, a selective inhibitor of dynamin, a small GTPase required for vesicle scission in endocytosis. This suggests that dendrimer internalization is required prior to its subsequent modulation of tight junctional integrity and that dendrimers act on intracellular cytoskeletal proteins to open tight junctions.

Keywords— PAMAM dendrimers, oral drug delivery, tight junctions, transport, endocytosis.
I. INTRODUCTION

PAMAM dendrimers are promising vehicles for oral delivery of poorly bioavailable therapeutics due to their versatile nanoscale structure and their ability to traverse the intestinal barrier [1]. These carriers are known to cross the epithelial barrier by a combination of transcellular and paracellular routes and can transiently open tight junctions, enhancing their own transport via the paracellular pathway and acting as penetration enhancers [2,3]. The mechanisms by which PAMAM dendrimers enhance transepithelial transport are largely unknown. In this study we investigate the mechanisms of cellular uptake, transepithelial transport and tight junction modulation of anionic G3.5 PAMAM dendrimers by examining the impact of endocytosis inhibitors on dendrimer interaction with Caco-2 cells and differentiated Caco-2 monolayers. In addition, we monitor the intracellular trafficking of dendrimers from endosomes to lysosomes over time. Knowledge of the specific pathways of endocytosis, intracellular trafficking and transport will aid in the rational design of dendrimers for oral delivery.
II. MATERIALS AND METHODS A. Synthesis and Characterization of G3.5-Oregon Green G3.5 dendrimers were first modified with pendant primary amine groups using 1-Ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC), and ethylene diamine at a 5:1 molar ratio to obtain 2.5 amines per dendrimer. Oregon green (OG) was stirred with 10 mg of the aminemodified dendrimer at a ratio of 1:1 in deionized water for 30 minutes, after which water was removed by rotoevaporation. Dendrimer-OG was redissolved in methanol, precipitated in ether and dried under vacuum. Unreacted OG was removed by size exclusion chromatography (SEC). OG content was determined by fluorescence (0.75 labels / dendrimer). B. Optimization of Endocytosis Inhibitors
I. INTRODUCTION PAMAM dendrimers are promising vehicles for oral delivery of poorly bioavailable therapeutics due to their versatile nanoscale structure and their ability to transverse the intestinal barrier [1]. These carriers are known to cross the epithelial barrier by a combination of transcellular and paracellular routes and can transiently open tight junctions, enhancing their own transport via the paracellular pathway and acting as penetration enhancers [2,3]. The mechanisms by which PAMAM dendrimers enhance transepithelial transport are largely unknown. In this study we investigate
Five endocytosis inhibitors were selected to examine the pathways of cellular uptake and transepithelial transport of G3.5 dendrimers. As chemical inhibitors have varied effectiveness in different cell lines and can be somewhat nonspecific, we took a multi-pronged approach, choosing two inhibitors for clathrin- and caveolin-mediated endocytosis and one for dynamin-dependent endocytosis. Before completing uptake and transport studies, we confirmed Caco-2 cell viability in the presence of inhibitors using the WST-1 cell viability assay. Table 1 shows the final inhibitor concentrations used and corresponding cell viabilities for each inhibitor.
Table 1 Endocytosis inhibitor concentrations and cell viability

Inhibitor                      Type       Concentration (μM)   % Cell Viability
Phenylarsine Oxide (PAO)       Clathrin   1                    90.0 ± 5.7
Monodansyl Cadaverine (MDC)    Clathrin   300                  92.7 ± 3.4
Filipin (FIL)                  Caveolin   4                    95.9 ± 2.7
Genistein (GEN)                Caveolin   100                  86.7 ± 2.9
Dynasore (DYN)                 Dynamin    50                   106.1 ± 5.0
C. Caco-2 Cellular Uptake
Cellular uptake of G3.5-OG dendrimers in the absence and presence of endocytosis inhibitors was determined in Caco-2 cells seeded at 300,000 cells/well in 12-well plates. Cells were pretreated with inhibitors or Hank's Balanced Salt Solution (HBSS) for one hour and then treated with G3.5-OG (25 μM), Transferrin-AF488 (5 μg/ml) or cholera toxin B-AF488 (5 μg/ml), in the presence of the inhibitors, for one hour. Flow cytometry was used to quantify the intracellular fluorescence, collecting 25,000 events per sample. Percent uptake is reported as the mean intracellular fluorescence in cells treated with endocytosis inhibitors relative to that in cells treated with buffer alone.

D. Intracellular Trafficking
Caco-2 cells were seeded at 40,000 cells/cm2 on collagen-coated 8-chamber slides and grown to 90% confluence. Cells were pulsed with G3.5-OG (1 μM) or Transferrin-AF488 (250 μg/ml) for 30 minutes at 4ºC, followed by a 5, 15, or 30 minute PBS chase at 37ºC to allow for internalization. Cells were fixed, permeabilized and stained for early endosomes (rabbit polyclonal early endosomal antigen-1 (EEA-1)) and lysosomes (rabbit polyclonal lysosome associated membrane protein 1 (LAMP-1)), followed by Alexa Fluor 568 goat anti-rabbit IgG. The slides were mounted, sealed and stored at 4ºC. Images were acquired using a Nikon Eclipse TE2000 inverted confocal laser scanning microscope (Nikon Instruments, Melville, NY). Excitation/emission wavelengths for Oregon Green/AF488 and AF568 were 488/515 nm and 543/605 nm, respectively. Four z-stacks were obtained for each treatment using a 60x oil objective and a 60 μm pinhole, with a 0.5 μm z step size. The colocalization coefficient (Mx) between G3.5-OG and early endosomes or lysosomes was quantified using Volocity 3D Imaging software (Improvision, Lexington, MA):

$$ M_x = \frac{\sum_i x_{i,\mathrm{coloc}}}{\sum_i x_i} \quad (1) $$

where x_{i,coloc} is the value of voxel i of the overlapped red and green components and x_i is the value of the green component. Mx is reported for each treatment as an average of the four regions. Transferrin-AF488 was used as an endocytosis control ligand to establish the validity of the assay methods for monitoring intracellular trafficking over time.

E. Transepithelial Transport
Caco-2 cells were seeded at 80,000 cells/cm2 in 12-well polycarbonate transwell filters and maintained for 21-25 days to form differentiated monolayers. Prior to the experiment, the transepithelial electrical resistance (TEER) of each monolayer was measured with an epithelial voltohmmeter (EVOM) (World Precision Instruments, Sarasota, FL). Monolayers with TEER greater than 600 ohm-cm2 were used for the assay. Cells were pretreated with HBSS or inhibitors for one hour and then with 10 μM G3.5-OG in the presence or absence of inhibitors. Dendrimer transport was quantified by fluorescence in the apical to basolateral direction and compared to transport in the presence of buffer alone. Lucifer yellow (200 μM) apparent permeability was less than 1 x 10-6 cm/s in untreated cells and in cells treated with inhibitors, confirming monolayer integrity.

F. Occludin Accessibility/Tight Junction Modulation
Tight junctional opening in Caco-2 cell monolayers treated with G3.5-OG or HBSS, in buffer or in the presence of dynasore, was analyzed by occludin accessibility. Monolayers were fixed, permeabilized and stained for occludin using mouse anti-occludin, followed by Alexa Fluor 568 goat anti-mouse IgG. The membranes were excised, mounted and examined using a Nikon Eclipse TE2000 inverted confocal laser scanning microscope. 3D z-stacks of four regions per membrane were obtained using a 60x oil objective and a 60 μm pinhole, with a 0.5 μm z step size. Occludin staining for each z-stack was quantified using Volocity 3D Imaging software by thresholding the red voxels at 20% intensity.
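As a concrete illustration of the colocalization coefficient of Eq. (1) used in the trafficking analysis (Sec. D), the computation over the voxel arrays amounts to the following minimal Python sketch; the function name and the simple red-channel threshold are our own illustration, not the Volocity implementation:

```python
import numpy as np

def colocalization_mx(green, red, red_thresh=0.0):
    """Colocalization coefficient Mx of Eq. (1): the summed intensity of
    green-channel voxels that overlap red signal, divided by the summed
    intensity of all green-channel voxels."""
    green = np.asarray(green, dtype=float)
    red = np.asarray(red, dtype=float)
    coloc_sum = green[red > red_thresh].sum()  # sum over x_i,coloc
    total_sum = green.sum()                    # sum over x_i
    return coloc_sum / total_sum if total_sum > 0 else 0.0
```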
III. RESULTS

A. Caco-2 Cellular Uptake
G3.5 dendrimers show reduced cellular uptake in the presence of all endocytosis inhibitors tested (Table 2), suggesting the involvement of both clathrin- and caveolin-mediated endocytosis pathways.
Table 2 Uptake in the presence of endocytosis inhibitors (percent uptake)

Inhibitor        G3.5   Transferrin   Cholera Toxin B
PAO (Clathrin)   84%    111%          98%
MDC (Clathrin)   56%    64%           58%
FIL (Caveolin)   81%    110%          112%
GEN (Caveolin)   22%    68%           20%
DYN (Dynamin)    18%    13%           43%
Dendrimers showed the greatest reduction in uptake in the presence of dynasore, a selective chemical inhibitor of dynamin, a small GTPase required for vesicle scission. The significant decrease in uptake of G3.5 dendrimers and control ligands in the presence of dynasore confirms the endocytosis of G3.5 dendrimers by dynamin-dependent pathways. While G3.5 dendrimers showed decreased uptake in the presence of both clathrin inhibitors, transferrin uptake was not reduced in the presence of PAO, indicating that PAO is not an effective inhibitor at the concentration used. In contrast, transferrin shows reduced uptake in the presence of MDC, indicating that MDC blocks clathrin-mediated endocytosis. Cholera toxin B also shows reduced uptake in the presence of MDC, but cholera toxin B can be endocytosed by both clathrin- and caveolin-mediated pathways in Caco-2 cells, so this decrease in uptake is not surprising [4]. G3.5 dendrimers also showed reduced uptake in the presence of both caveolin inhibitors; however, filipin did not reduce uptake of cholera toxin B. While genistein reduces the uptake of transferrin, it reduces the uptake of cholera toxin B much more, making it an effective and relatively specific inhibitor of caveolin-mediated endocytosis. The most effective and specific inhibitors (monodansyl cadaverine, genistein and dynasore) were chosen for further investigation in the transport studies.
B. Intracellular Trafficking
Figure 1 shows colocalization between G3.5 dendrimers and early endosomes (EEA-1) or lysosomes (LAMP-1) over time. At five minutes, dendrimers show initial localization in the early endosomes and lysosomes. By 15 minutes, they show significantly more localization in the lysosomes, indicating quick trafficking to these cellular compartments. Interestingly, at 30 minutes the presence in the lysosomes remains the same, while the presence in the endosomes increases. This suggests that the trafficking pathway is saturated, causing the dendrimer to stay in the early endosomes once the lysosomes are occupied.

Fig. 1 Intracellular trafficking of G3.5 PAMAM dendrimers over time in Caco-2 cells. Colocalization (Mx) with early endosomes (EEA-1) and lysosomes (LAMP-1) is shown. Mean ± standard deviation (n=4)

C. Transepithelial Transport
Transepithelial transport of PAMAM dendrimers in the presence of inhibitors is shown in Figure 2. Transport of PAMAM G3.5 was significantly reduced at 4ºC, illustrating a strong energy dependence. Similar to the Caco-2 uptake studies, transport was also reduced in the presence of dynasore and monodansyl cadaverine, indicating the importance of dynamin-dependent and clathrin-mediated endocytosis mechanisms. However, unlike cellular uptake, genistein did not have a significant impact on dendrimer transport across Caco-2 monolayers, suggesting that caveolin-mediated endocytosis does not play a significant role in this process. It has been suggested that differentiated Caco-2 cells lack caveolae [4]. Therefore, it is possible that while caveolae play an important role in dendrimer endocytosis in undifferentiated Caco-2 cells, they are less important in dendrimer transepithelial transport because of their lower expression in differentiated enterocytes.

Fig. 2 Transport of G3.5 dendrimers in the presence of endocytosis inhibitors. Mean ± standard deviation (n=4). (**) indicates a significant difference (p<0.01) from 100% transport (buffer alone)

D. Occludin Accessibility/Tight Junction Modulation
Figures 3 and 4 illustrate the ability of dendrimers to open cellular tight junctions under different conditions. When dendrimers interact with Caco-2 monolayers in buffer alone, tight junctions are opened, as indicated by increased occludin staining relative to untreated monolayers, depicted
in Figure 3 and quantified in Figure 4. A similar increase in tight junction opening in the presence of dendrimers is not seen when monolayers are treated with dynasore. This suggests that cellular internalization of dendrimers is a requisite step for dendrimer modulation of tight junction integrity. Once internalized, dendrimers act on tight junctional cytoskeletal proteins, opening the junctions from the inside of the cell. This allows dendrimers to act both as catalysts for their own transport and as penetration enhancers.

Fig. 3 Occludin staining in the presence and absence of G3.5 dendrimers in cells treated with HBSS or dynasore. A) G3.5/HBSS, B) HBSS only, C) G3.5/dynasore and D) dynasore only. Scale = 21 μm

Fig. 4 Quantification of occludin staining. Cells treated with dendrimer in the presence of HBSS show a significant increase in occludin staining relative to untreated cells. (***) indicates p<0.001. Cells treated with dendrimer in the presence of dynasore do not show a significant change in occludin staining relative to the control. Mean ± standard deviation, n=4

IV. DISCUSSION
Mechanisms of dendrimer cellular entry and transepithelial transport have significant implications for drug delivery. We find that G3.5 dendrimers enter cells by clathrin-, caveolin-, and dynamin-dependent endocytosis, but their transport is governed by clathrin- and dynamin-dependent pathways only. Upon cellular internalization, dendrimers are trafficked to the endosomes and lysosomes, and show significant accumulation in these compartments. Dendrimer-drug delivery systems can be designed to target the endosomes or lysosomes with pH-sensitive or enzymatically cleavable drug linkers, respectively. Delivery of degradation-prone drugs such as DNA and peptides to undifferentiated cells can be achieved through the caveolar pathway, which lacks pH-lowering and lysosomal degradation steps. In addition, it has been reported that dendrimers can transiently open tight junctions, enhancing their own transport via the paracellular route. Here, we find that dendrimer internalization is requisite for tight junction modulation, suggesting that dendrimers act on intracellular cytoskeleton components, rather than damaging the tight junctional structures on the apical side. This suggests that dendrimers can be used safely as oral drug carriers or penetration enhancers, since their effects on tight junctions can be transient, not permanent.

V. CONCLUSIONS
These data demonstrate that G3.5 dendrimer cellular uptake and transepithelial transport occur by vesicular endocytosis mechanisms, specifically clathrin- and caveolin-mediated pathways. In addition, endocytosis of dendrimers is required for tight junction modulation, which allows for paracellular transport. This knowledge will aid in the rational design of dendrimers for oral drug delivery.

ACKNOWLEDGMENT
Financial support was provided by fellowships to D. Goldberg (NSF GRFP and Fischell Fellowship in Bioengineering) and NIH R01EB07470. Flow cytometry analysis was performed at the Flow Cytometry Core Laboratory, Center for Vaccine Development, School of Medicine, University of Maryland, Baltimore.

REFERENCES
1. Kolhatkar R, Sweet D, Ghandehari H (2008) Functionalized dendrimers as nanoscale drug carriers, in Multifunctional Pharmaceutical Nanocarriers, Springer, 201-232.
2. Kitchens K, Foraker A, Kolhatkar R et al (2007) Endocytosis and interaction of poly(amidoamine) dendrimers with Caco-2 cells. Pharm Res 24:2138-2145.
3. Kitchens K, Kolhatkar R, Swaan P et al (2008) Endocytosis inhibitors prevent poly(amidoamine) dendrimer internalization and permeability across Caco-2 cells. Mol Pharm 5:364-369.
4. Torgersen M, Skretting G, van Deurs B et al (2001) Internalization of cholera toxin by different endocytic mechanisms. J Cell Sci 114:3737-3747.

Corresponding Author:
Hamid Ghandehari
University of Utah
383 Colorow Road, Room 343
Salt Lake City, Utah, 84108
[email protected]
Absorbable Coatings: Structure and Drug Elution
S. Sarkar Das, M.K. McDermott, A.D. Lucas, T.E. Cargal, L. Patel, D.M. Saylor, and D.V. Patwardhan
Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Silver Spring, Maryland, USA

Abstract— Drug delivery from biodegradable polymer coatings is becoming an important area of research for applications including medical devices. In this work, we probe the impact of polymer chemistry and solvent evaporation rate on drug morphology and the subsequent elution from biodegradable polymer systems. Two different formulations of poly(lactic-co-glycolic acid) (PLGA) polymers are used in combination with tetracycline (TC) to fabricate coatings using two different solvent evaporation rates. We have conducted elution studies to quantify the release behavior of TC over the first 2-3 days of dissolution. Significantly, we have correlated the drug release kinetics with the drug microstructure of the TC/PLGA coatings using atomic force microscopy (AFM) and laser scanning confocal microscopy (LSCM). These results suggest that polymer chemistry affects the rate of water absorption and drug dissolution, which in turn alters the rate of TC release during the early stages of drug elution.

Keywords— PLGA, tetracycline, drug elution, release rate, matrix structure, matrix morphology.
I. INTRODUCTION
Medical devices may be coated with a drug-polymer composite for site-specific controlled release of therapeutic drugs to manage post-surgical complications. When a surface is coated by solvent casting, drug aggregates typically form within the polymer matrix in situ as the solvent evaporates from the surface of a substrate. Hence, for any given drug, polymer, and solvent combination, the composite microstructure of the drug-polymer coating, and therefore the release kinetics of the drug, can be controlled by the processing variables. Most notably, it was reported that at a higher evaporation rate there was a greater fraction of drug at the surfaces of solution-cast drug/polymer films of poly(DL-lactide) (PDLLA) and poly(L-lactic acid) (PLLA) [1]. In the present study, the composite microstructure of the coatings and the early release kinetics of the drug from absorbable coating materials are characterized.
II. METHODS
TC and PLGA were dissolved separately in tetrahydrofuran (THF) to make drug and polymer stock
solutions. Filtered drug solutions were mixed with the polymer-THF stock solution to create the final THF/polymer/drug solution. The solution was transferred onto polished stainless steel coupons at room temperature in air under conditions that gave rise to two different solvent evaporation rates, 20 and 75 mg/h [2, 3].

A. Dissolution
Sample coupons were individually placed in the elution chambers of a 400-DS Dissolution Apparatus VII (Varian, Palo Alto, CA). Eluted drug samples were collected at the end of various pre-selected time points within the first two days. The concentrations of drug in the eluted samples were measured by high performance liquid chromatography (HPLC) with a 474 scanning fluorescence detector (Waters, Milford, MA), based upon tetracycline standards developed in our laboratories. Analysis of TC was performed using the method of Vienneau et al. [4].

B. Microscopy
Surfaces of the coatings were imaged using AFM (Asylum Research). Images were acquired in tapping mode using a silicon OMCL-AC240TS probe with a spring constant of 1-3 N/m. In addition, 3D images of the drug distribution throughout the coating volume were collected with LSCM (model DMIRBE, Leica Microsystems GmbH, Wetzlar, Germany), using a 100x oil immersion objective.
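For reference, reducing such sampled eluate concentrations to a cumulative release profile typically involves a correction for the drug already removed at earlier sampling points. The minimal Python sketch below illustrates this bookkeeping under our own assumptions about chamber and sample volumes; the paper does not detail its exact data reduction:

```python
def cumulative_release(concentrations, v_chamber, v_sample):
    """Cumulative drug release from sampled concentrations.

    concentrations: drug concentration (e.g., mg/ml) measured by HPLC at
    successive time points; v_chamber: elution chamber volume (ml);
    v_sample: volume withdrawn per sample and replaced with fresh medium.
    """
    released, withdrawn = [], 0.0
    for c in concentrations:
        # drug currently in the chamber plus drug already carried out
        released.append(c * v_chamber + withdrawn)
        withdrawn += c * v_sample
    return released
```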
III. RESULTS
Experimental observations indicate that polymer chemistry and evaporation rate both significantly influence the characteristics of structure formation as well as drug elution. In Figure 1, a typical AFM image is shown. The drug and polymer phase separated, resulting in spherical drug particles with a bimodal size distribution in a polymer matrix at the surface. This structure was also present within the bulk, based on LSCM data. Also included in the figure
are drug release profiles for coatings containing 15% drug in different PLGA co-polymer ratios and under different evaporation rates.

Fig. 1 Representative data for PLGA coatings with 15% tetracycline. AFM image for 85:15 PLGA (top) and resulting elution curves (bottom) for various copolymer ratios and solvent evaporation rates: 50:50 PLGA (75 mg/h, open diamonds; 20 mg/h, open triangles); 85:15 PLGA (75 mg/h, open squares; 20 mg/h, open circles). The standard deviation in the elution data was ± 7%

IV. DISCUSSION
The increase in drug release from 50:50 PLGA relative to 85:15 PLGA is partly due to the chemical characteristics of the polymer. Over a much longer time span of dissolution, an increased tendency for drug diffusion out of 50:50 PLGA has been suggested, due to the faster biodegradability of the excess G-component in 50:50 PLGA compared to that in 85:15 PLGA [5]. However, this factor is not likely to be significant over the 2-day period examined in this study. The increased rate of drug release from 50:50 PLGA at 6 to 12 hours is likely due to its more amorphous nature: it absorbs water more rapidly and to a greater extent than 85:15 PLGA [6]. LSCM demonstrated that the drug within the coating is dissolved by this absorbed water over this time period. The more crystalline 85:15 PLGA had a greater initial release of drug yet released less drug at longer times compared to the more amorphous PLGA. This is a result of more drug at the surface of the former, based on AFM data.

V. CONCLUSIONS
The initial elution rates have been characterized for both fast and slow casting processes. In addition, the drug and polymer microstructures in solvent cast coatings for use in controlled drug release have been characterized by AFM and LSCM. Using these data, we have elucidated the impact of manufacturing variables, such as polymer chemistry and evaporation rate, on drug and polymer phase structure development, as well as the impact of phase structure and polymer chemistry on the subsequent drug elution kinetics.

ACKNOWLEDGMENTS
The authors are indebted to Ms. Deidra Johns, Department of Bioengineering, Rice University, and Ms. Rachel Casas, Biomedical Engineering, Johns Hopkins University, for their initial trial experiments.

REFERENCES
1. Zilberman M, Schwade ND, Meidell RS, Eberhart RC (2001) Structured drug-loaded bioresorbable films for support structures. J Biomater Sci Polymer Edn 12(8):875-892.
2. McDermott MK, Saylor DM, Casas R et al. Microstructure and elution of tetracycline from block copolymer coatings. Accepted by J Pharm Sci, December 2009.
3. Kim CS, Saylor DM, McDermott MK et al (2009) Modeling solvent evaporation during the manufacture of controlled drug-release coatings. J Biomed Mater Res Part B 90B(2):688-699.
4. Vienneau DS, Kindberg CG (1997) Development and validation of a sensitive method for tetracycline in gingival crevicular fluid by HPLC using fluorescence detection. J Pharm Biomed Anal 16(1):111-117.
5. Mehta AK, Yadav KS, Sawant KK (2007) Nimodipine loaded PLGA nanoparticles: formulation optimization using factorial design, characterization and in vitro evaluation. Current Drug Delivery 4:185-193.
6. Collins AEM, Deasy PB, MacCarthy DJ, Shanley DB (1989) Evaluation of a controlled-release compact containing tetracycline hydrochloride bonded to tooth for the treatment of periodontal disease. Int J Pharm 51:103-114.
DISCLAIMER The mention of commercial products, their source, or their use in connection with the material reported herein is not to be construed as either an actual or implied endorsement of the US Food and Drug Administration.
Corresponding Author:
Dr. Srilekha Sarkar Das
Division of Chemistry and Materials Sciences, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, Food and Drug Administration
10903 New Hampshire Avenue, Building 64
Silver Spring, MD 20993, USA
[email protected]
A Brief Comparison of Adaptive Noise Cancellation, Wavelet and Cycle-by-Cycle Fourier Series Analysis for Reduction of Motional Artifacts from PPG Signals
M. Malekmohammadi1 and A. Moein2
1 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
2 Department of Biomedical Engineering, Azad University, Science and Research Branch, Tehran, Iran
Abstract— The accuracy of photoplethysmographic (PPG) signals is often inadequate due to motion artifacts induced at the recording site. Over recent decades there has been a widespread effort to reduce these artifacts, and different methods have been used for this aim. Nevertheless, there are still contradictory reports about the effectiveness of the various methods in artifact reduction. In this paper, we compare three established methods for PPG noise reduction on a single dataset: Adaptive Noise Cancellation (ANC), Discrete Wavelet Transform (DWT) denoising, and a newly developed method, Cycle-by-cycle Fourier Series Analysis (CFSA). To evaluate the effectiveness of these methods, the Heart Rate (HR) estimated from the signals denoised by each method was calculated and compared to the parameters extracted from a reference PPG signal that is artifact free. The results indicate that the Cycle-by-cycle Fourier Series Analysis (CFSA) method gives the closest results to those obtained from the reference signal, and that the Adaptive Noise Cancellation (ANC) method results in a more accurate estimation of HR than the DWT denoising method.

Keywords— Photoplethysmography (PPG), Adaptive Noise Cancellation (ANC), Discrete Wavelet Transform (DWT), Cycle-by-cycle Fourier Series Analysis (CFSA), Biomedical Signal Processing.
I. INTRODUCTION
Photoplethysmography (PPG) is a low-cost optical measurement technique that can be used to detect blood volume changes in the microvascular bed of tissue. It has widespread clinical application, with the technology utilized in commercially available medical devices [1,2]. Over recent decades, the use of the PPG signal for extracting HR has become more popular. An increased HR is associated with increased mortality, and HR monitoring can therefore be considered very important [3]. However, the practical accuracy of the PPG signal is limited when its signal-to-noise ratio (SNR) becomes inadequate. The principal causes are poor peripheral perfusion and motion artifacts; motion artifact can arise from voluntary or involuntary movement [4].
There have been several attempts to use different methods for artifact reduction of the PPG signal, among them Adaptive Noise Cancellation (ANC) [5,6,7], wavelet-based denoising [8], Kalman filters [9], Independent Component Analysis [10,11], correlation-based methods [12], averaging [13] and, most recently, Fourier Series Analysis [14]. Although there are many publications explaining the main ideas of these methods, there are still contradictory reports about their efficacy [10,11]. It therefore seems necessary to compare the different methods, and this comparison should be done on a single dataset in order to obtain a reliable evaluation of their performance. In this paper, we evaluate the effectiveness of three established techniques in restoring PPG signals corrupted by motion. The first technique incorporates the concept of a digital adaptive filter with an accelerometer as a motion detector [5]. The second adopts a wavelet-transform denoising technique that has shown its capability in image processing for various medical applications. The third is a newly developed method that uses Fourier Series Analysis for removing artifacts [14]. For comparison, the HR was estimated from the denoised signals and used as a measure of their efficacy.
II. MATERIALS AND METHODS

A. Experimental Materials
PPG signals were recorded using ring-type finger sensors from the index fingers of both the right and left hands of seven male subjects in the sitting position. A MEMS accelerometer was attached to one of the sensors in order to measure the motion. The other hand, from which the reference PPG signal was recorded, was assumed to be without motion, so that the signal recorded from it can be regarded as artifact free. Both signals were sampled at 1 kHz and have a duration of about 20 seconds.
B. Adaptive Noise Cancellation
Adaptive Noise Cancellation is used to remove background noise from useful signals. The basic idea of an adaptive noise cancellation algorithm is to pass the corrupted signal through a filter that tends to suppress the noise while leaving the signal unchanged. The filter coefficients adapt over time until they converge.

Fig. 1 The block diagram of the Adaptive Noise Canceller designed for artifact reduction of the PPG signal

Figure 1 shows the block diagram of an Adaptive Noise Canceller designed for artifact reduction of the PPG signal. In this system, the PPG sensor records the mixture of signal and artifact, and its attached accelerometer records the acceleration of movement of the corresponding finger. As shown in [5], this recorded acceleration is a reliable measure of the motion artifact that corrupts the PPG signal. The adaptive filter estimates the dynamics of the distortion process and calculates an estimate y of the distorted signal component in response to the measured acceleration n; the estimated distortion is then subtracted from the PPG sensor output d. The adaptive filter comprises a dynamic model predicting how the distorted signal component is generated in response to the body acceleration. On the assumption that the corrupting component of the PPG signal is correlated with the acceleration, the adaptive filter is of the Finite Impulse Response (FIR) type. The actual PPG sensor output is the corrupted signal d = s + n, where s is the original PPG signal. The goal of the adaptive filter is to minimize the error d − y, which can be achieved by evaluating the power of the recovered signal ŝ = d − y = s + (n − y). The original PPG signal s is uncorrelated with body motion and thus is uncorrelated with the distorted signal n and its estimate y:

$$ E[\hat{s}^2] = E[s^2] + E[(n-y)^2] + 2E[s(n-y)] \quad (1) $$

The third term in the above equation is eliminated because s is uncorrelated with n and y. In consequence, minimizing E[ŝ²] is equivalent to minimizing E[(n − y)²], since the first term is irrelevant to minimization with respect to the filter coefficients. Although the algorithm does not assume prior knowledge of the values of the process parameters, it continuously adjusts the model parameters to minimize the error between d and s by minimizing the power in ŝ.

C. Wavelet-Based Denoising
The Discrete Wavelet Transform (DWT) is a time and frequency representation of a signal that uses wavelets, rather than sines and cosines, as the basic analyzing functions [15]. It can be defined through the convolution between a signal x(t) and the wavelet function ψ_{a,b}(t):

$$ W(a,b) = \int_{-\infty}^{\infty} x(t)\,\psi^{*}_{a,b}(t)\,dt \quad (2) $$

where ψ_{a,b} is the dilated and shifted version of a unique wavelet function ψ(t):

$$ \psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right) \quad (3) $$

where a is a scaling variable and b is a translation variable. For the DWT, dyadic scales and translations, a = 2^j and b = k·2^j, are usually used:

$$ \psi_{j,k}(t) = 2^{-j/2}\,\psi\!\left(2^{-j}t - k\right) \quad (4) $$

The wavelet reconstruction is a multiplication of a vector w, containing the coefficients for all scales, with a matrix Ψ consisting of all time-shifted wavelet functions:

$$ \hat{x} = \Psi^{T} w \quad (5) $$

where Ψ^T is the transposed matrix of wavelet functions for all scales. The structure of the vector w is:

$$ w = [\,w_1\;\; w_2\;\; \ldots\;\; w_J\,]^{T} \quad (6) $$

The procedure of wavelet denoising is first to apply a wavelet transform to the data and obtain the vector of wavelet coefficients w. Elements of w are then suppressed by a predefined threshold. Lastly, the denoised signal is reconstructed with the corresponding inverse wavelet transform. It is known that selecting an appropriate wavelet type, determining a suitable threshold and adopting an appropriate thresholding method are the key steps for achieving the best denoising result. The Haar wavelet was selected for this study, and the threshold is set based on the length of the data record (N), in the form of the universal threshold:

$$ \lambda = \sqrt{2 \ln N} \quad (7) $$

The wavelet-based denoising was done using the Wavelet Toolbox in MATLAB 7.6.0 (The MathWorks Inc., Natick, USA).
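To make the cancellation loop of Section B concrete, the following is a minimal Python sketch of an FIR canceller with a normalized LMS update; the filter order, step size, and the NLMS variant are our assumptions for illustration, not the exact design of [5]:

```python
import numpy as np

def anc_nlms(d, n, order=32, mu=0.5, eps=1e-8):
    """Adaptive noise cancellation sketch.

    d: corrupted PPG samples (d = s + distortion).
    n: accelerometer reference samples, correlated with the distortion.
    Returns the recovered signal s_hat = d - y, where y is the output of
    an FIR filter adapted to track the motion-induced distortion."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    w = np.zeros(order)                  # FIR coefficients
    s_hat = np.zeros(len(d))
    for k in range(order, len(d)):
        x = n[k - order:k][::-1]         # most recent reference samples
        y = w @ x                        # distortion estimate
        e = d[k] - y                     # recovered sample
        w += mu * e * x / (x @ x + eps)  # NLMS coefficient update
        s_hat[k] = e
    return s_hat
```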
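Likewise, the thresholding procedure of Section C can be sketched with the PyWavelets package; the soft-thresholding mode and the noise-level scaling of the universal threshold of Eq. (7) are our assumptions (the study itself used the MATLAB Wavelet Toolbox):

```python
import numpy as np
import pywt

def haar_denoise(x, level=4):
    """Wavelet denoising sketch: decompose, threshold, reconstruct."""
    x = np.asarray(x, dtype=float)
    coeffs = pywt.wavedec(x, "haar", level=level)
    # Universal threshold of Eq. (7), scaled by a robust estimate of the
    # noise level from the finest detail coefficients (an assumption).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, "haar")
```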
D. Cycle-by-Cycle Fourier Series Analysis for PPG Denoising
Every periodic signal can be decomposed into a set of sinusoids made of a fundamental frequency and its harmonics, as described by the Fourier series [16]. If f(t) represents a periodic signal with period T, its Fourier series expansion is:

$$ f(t) = a_0 + \sum_{k=1}^{\infty}\left[a_k \cos(k\omega_0 t) + b_k \sin(k\omega_0 t)\right] \quad (8) $$

where ω₀ = 2π/T is the fundamental frequency of the principal harmonic of the signal, and the coefficients can be calculated with these equations:

$$ a_0 = \frac{1}{T}\int_{0}^{T} f(t)\,dt \quad (9) $$

$$ a_k = \frac{2}{T}\int_{0}^{T} f(t)\cos(k\omega_0 t)\,dt \quad (10) $$

$$ b_k = \frac{2}{T}\int_{0}^{T} f(t)\sin(k\omega_0 t)\,dt \quad (11) $$

In theory, a Fourier series expansion is applicable only to periodic signals, and the PPG signal is only quasi-periodic, so it would seem that the Fourier series expansion cannot be used for it. In the method proposed in [14], this problem is solved by applying the Fourier series expansion to every cycle of the PPG signal. First, the raw PPG signal is split into a sequence of cycles by a specific algorithm. Each cycle is treated as if it were endlessly repeated; after extraction of its period T_i and principal frequency ω_i, its Fourier series expansion is computed with the above equations. The number of Fourier coefficients is infinite for every extracted cycle, and finding all of these terms is not possible. Truncating the series introduces some error in the reconstruction of a periodic signal, but for the PPG signal this error is insignificant. The main measure of accuracy considered is the accurate reconstruction of the dicrotic notch in the PPG signal, and for this purpose only seven coefficients are quite enough [14]. The main attraction of this analysis is its ability to reduce noise. Denote the corrupted signal by f_c(t). If the motion artifact n(t) is assumed additive to the pure signal f(t):

$$ f_c(t) = f(t) + n(t) \quad (12) $$

Now, if we compute the Fourier series coefficients of the mth cycle of f_c(t), we get:

$$ a_0^{(m)} = a_0 + \frac{1}{T}\int_{0}^{T} n(t)\,dt \quad (13) $$

$$ a_k^{(m)} = a_k + \frac{2}{T}\int_{0}^{T} n(t)\cos(k\omega_0 t)\,dt \quad (14) $$

$$ b_k^{(m)} = b_k + \frac{2}{T}\int_{0}^{T} n(t)\sin(k\omega_0 t)\,dt \quad (15) $$

Here, the terms (2/T)∫ n(t)cos(kω₀t)dt and (2/T)∫ n(t)sin(kω₀t)dt would individually evaluate to zero, provided the motion artifact is not correlated with the PPG signal, an assumption that had been practically verified earlier [14]. Therefore, under this assumption, a_k^(m) = a_k and b_k^(m) = b_k. It is easily seen that if we reconstruct the PPG from the Fourier series coefficients of the corrupted PPG, we obtain the original (artifact-free) PPG.
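A minimal Python sketch of the per-cycle reconstruction follows, with discrete sums approximating the integrals of Eqs. (9)-(11); cycle segmentation is assumed to have been done already, since the paper splits cycles with a dedicated algorithm:

```python
import numpy as np

def cfsa_denoise_cycle(cycle, n_harmonics=7):
    """Reconstruct one PPG cycle from its truncated Fourier series.

    cycle: samples of a single cardiac cycle (one period T_i).
    Keeping seven harmonics preserves the dicrotic notch while the
    uncorrelated motion artifact averages out (Eqs. 12-15)."""
    cycle = np.asarray(cycle, dtype=float)
    N = len(cycle)
    m = np.arange(N)
    a0 = cycle.mean()                        # Eq. (9), discrete form
    rec = np.full(N, a0)
    for k in range(1, n_harmonics + 1):
        c = np.cos(2.0 * np.pi * k * m / N)  # cos(k*w0*t) on the grid
        s = np.sin(2.0 * np.pi * k * m / N)
        ak = 2.0 / N * np.sum(cycle * c)     # Eq. (10)
        bk = 2.0 / N * np.sum(cycle * s)     # Eq. (11)
        rec += ak * c + bk * s
    return rec
```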
III. RESULTS
In order to make the comparison, the performance of each method was measured as the accuracy of the HR extracted from its denoised PPG output, relative to the reference HR extracted from the artifact-free PPG signal recorded from the steady hand.

Fig. 2 A sample of using Adaptive Noise Cancellation on the PPG signal (denoised vs. reference signal)

Fig. 3 A sample of using DWT for denoising the PPG signal (denoised vs. reference signal)

Figure 2 shows an example of using ANC for artifact reduction of one of the PPG signals, and Figure 3 shows the corresponding result of thresholding with the Haar wavelet transform.
For finding the Fourier series coefficients of each cycle, a function consisting of seven successive sine and cosine terms was fitted to each cycle using the Curve Fitting Toolbox of MATLAB. The functions obtained in this way were used for reconstruction of their corresponding cycles. Figure 5 shows a sample cycle and its corresponding fitted function. As is evident in this figure, the method captures the actual trend of the PPG signal fairly well. Table 1 shows the results of the three algorithms for motion artifact reduction of the PPG signal in the estimation of HR. The results indicate that, among the methods considered, the Cycle-by-cycle Fourier Series Analysis method gives the closest result to that obtained from the reference signal, while the Adaptive Noise Cancellation (ANC) method results in a more accurate estimation of Heart Rate (HR) and Pulse Transit Time (PTT) than the Haar DWT denoising method. From the computational complexity point of view, the Cycle-by-cycle Fourier Series Analysis method ranks highest, because it needs more mathematical operations than the other two methods.

Fig. 4 Extraction of cardiac cycles from the PPG signal
Fig. 5 Denoising a cycle of the PPG signal with Cycle-by-cycle Fourier Series Analysis

Table 1 Results of the different algorithms in artifact reduction of PPG: estimated HR (beats/min)

             Reference HR   ANC            DWT            CFSA
Mean ± std   99.37 ± 2.16   99.39 ± 1.24   99.45 ± 1.56   99.38 ± 1.46
ACKNOWLEDGMENT
The authors want to thank the Biomedical Lab of Sharif University of Technology for its cooperation in recording the signals.
REFERENCES
1. Allen J (2007) Photoplethysmography and its application in clinical physiological measurement. Physiological Measurement J 28:1-39
2. Chan G, Middleton P M, Lovell N H, Celler B G (2005) Extraction of photoplethysmographic waveform variability by lowpass filtering. Proceedings of the 27th Annual International Conference of the IEEE EMBS
3. Foo J, Wilson S (2006) A computational system to optimise noise rejection in photoplethysmography signals during motion or poor perfusion states. Med. Biol. Eng. Comput. 44:140-145
4. Foo J Y (2006) Comparison of wavelet transformation and adaptive filtering in restoring artefact-induced time-related measurement. Elsevier Trans. on Biomedical Signal Processing and Control, 93-98
5. Wood L B, Asada H H (2006) Noise cancellation model validation for reduced motion artifact wearable PPG sensors using MEMS accelerometers. Proceedings of the 28th IEEE EMBS Annual International Conference, 2006, New York City, USA
6. Asada H H, Jiang J, Gibbs P (2004) Active noise cancellation using MEMS accelerometers for motion-tolerant wearable bio-sensors. Proceedings of the 26th Annual International Conference of the IEEE EMBS, 2004, San Francisco, CA, USA
7. Comtois G, Mendelson Y (2007) A comparative evaluation of adaptive noise cancellation algorithms for minimizing motion artifacts in a forehead-mounted wearable pulse oximeter. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, 2007, Lyon, France
8. Lee C M, Zhang Y T (2003) Reduction of motion artifacts from photoplethysmographic recordings using a wavelet denoising approach. IEEE Trans. on Biomedical Engineering
9. Seyedtabaii S, Seyedtabaii L (2008) Kalman filter based adaptive reduction of motion artifact from photoplethysmographic signal. International J of Electronics, Circuits and Systems
10. Kim B K, Yoo S K (2006) Motion artifact reduction in photoplethysmography using independent component analysis. IEEE Trans. on Biomedical Engineering
11. Yao J, Warren S (2005) A short study to assess the potential of independent component analysis for motion artifact separation in wearable pulse oximeter signals. Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China
12. Weng J, Ye Z, Weng J (2005) An improved preprocessing approach for photoplethysmographic signal. Proceedings of the 2005 IEEE Engineering in Medicine and Biology Annual Conference, Shanghai, China
13. Lee H W, Lee J W, Jung W G, Lee G K (2007) The periodic moving average filter for removing motion artifacts from PPG signals. International J. of Control, Automation, and Systems 5:701-706
14. Reddy K, George B, Kumar V J (2009) Use of Fourier series analysis for motion artifact reduction and data compression of photoplethysmographic signals. IEEE Trans. on Instrumentation and Measurement
15. Wood J C, Johnson K M (1999) Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR. Magn. Reson. Med 41:631-635
16. Bracewell R N (2000) The Fourier Transform and Its Applications. McGraw-Hill, New York
Author: Mahsa Malekmohammadi
Institute: Sharif University of Technology
City: Tehran
Country: Iran
Email:
[email protected]
Respiratory Resistance Measurements during Exercise Using the Airflow Perturbation Device
P. Chapain1,2, A. Johnson1, J. Vossoughi1,2, and S. Majd2
1 Fischell Department of Bioengineering, University of Maryland, College Park, MD 20742
2 ESRA, 3616 Martins Dairy Circle, Olney, MD 20832
Abstract— Respiratory resistance is not constant. The work of breathing is influenced by the respiratory mechanics, which in turn are affected by the activity of a subject. Previous work has indicated that the total respiratory resistance (the sum of resistances from the airways, lung tissues, and chest wall) is significantly lower for about 30 seconds immediately following exercise. The change in resistance can take place in the inhalation direction, the exhalation direction, or both. The Airflow Perturbation Device (APD) non-invasively measures respiratory resistance during rest or exercise in both the inhalation and exhalation directions. The objective of this study was to compare pre-exercise resistance to resistance during exercise and to post-exercise resistance, especially during the transitional states, and to document how fast respiratory resistance changes with a change in physical activity. Using the APD, resistance was continuously measured during pre-exercise rest for 2.5 minutes, followed by exercise for 6 minutes and finally rest again for 3.5 minutes, on a total of 10 healthy non-asthmatic adult subjects. Exercise was performed on a bicycle ergometer, pedaling at a rate of 50-70 rpm to elicit a heart rate of 70% of the predicted maximum heart rate. From the preliminary data, it is noticed that there is a change in total resistance during the transitional states of rest to exercise and exercise to rest. However, the speed and the range of the changes were not consistent between subjects. A thorough statistical analysis of the resistance data of all the subjects will provide knowledge of the total changes in respiratory resistance during transitions, the speed with which the changes occur, as well as the reproducibility and sensitivity of the APD.

Keywords— Airflow Perturbation Device, Respiratory resistance, Respiratory mechanics.
I. INTRODUCTION
Evaluation of respiratory resistance (RR) gives insight into the respiratory health of any subject. The total respiratory resistance, which is the sum of the resistances of the airways, lung tissues, and chest wall, does not remain constant at all times. Control of respiration is apparently changed during exercise compared to rest [1]. Exercise affects the respiratory mechanics in such a way that the work of breathing is reduced to meet higher demands for gas exchange [1]. Among the various physical adjustments due to exercise, the effects on respiratory resistance have not been studied and documented extensively. Total respiratory resistance following exercise has been found to be 20% lower than pre-exercise values, and it takes under a minute for the RR values to return to normal [2]. However, the change in respiratory resistance, if any, during exercise has never been studied before. This may be because of the lack of a proper measurement system for monitoring respiratory resistance continuously during a particular physical activity. The Airflow Perturbation Device (APD) is a noninvasive diagnostic device which can be used to measure respiratory resistance. The handheld version of the APD is small, light and portable, and thus can be used to monitor respiratory resistance continuously, especially during exercise, where other systems fail. When the subject breathes normally through the device, small perturbations in pressure and flow are introduced by the open and screened segments on the rotating wheel of the APD. The ratio of the pressure perturbations to the flow perturbations yields a continuous measurement of the inhalation, exhalation, and total respiratory resistance [3]. This ability of the APD to measure resistance in both the inhalation and exhalation directions provides a way to monitor exercise-induced respiratory changes in either direction or both. The objective of this study was to observe the effects of exercise on respiratory resistance not only after a certain time, but also during the transitional states of rest to exercise and exercise to rest. With the APD, real-time changes in the respiratory mechanics can be monitored, and the results can be used to evaluate the speed of the changes in respiratory resistance induced by exercise and the subsequent changes after exercise ceases.
Fig. 1 Diagram of APD and its effects on mouth pressure and flow. Ratio of pressure perturbation magnitude (A) to flow perturbation magnitude (B) equals RR
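To make the measurement principle of Fig. 1 concrete, the resistance computation reduces to the ratio sketched below in Python; the array names and the averaging step are our own illustration of the idea, not the APD's internal processing [3]:

```python
import numpy as np

def apd_resistance(dp, dq):
    """Respiratory resistance from airflow perturbations.

    dp: pressure perturbation magnitudes (A in Fig. 1), in cmH2O.
    dq: flow perturbation magnitudes (B in Fig. 1), in L/s.
    Each perturbation gives RR = A/B; averaging over a window yields a
    continuous RR trace (the APD reports 5-second averages)."""
    dp = np.asarray(dp, dtype=float)
    dq = np.asarray(dq, dtype=float)
    return float(np.mean(dp / dq))  # cmH2O/(L/s), i.e., cmH2O/Lps
```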
II. METHODS
Healthy non-asthmatic subjects between the ages of 18 and 40 years were selected on a volunteer basis for this study, which was done following a protocol approved by the University of Maryland Institutional Review Board. For the test, volunteers completed a medical history questionnaire and a Physical Activity Readiness Questionnaire (PAR-Q). The purpose of these questionnaires was to screen the subjects for their appropriateness to undergo the physical activities planned for this study. Once cleared for participation, the subjects were asked to complete a written informed consent form. Before each test, the subjects were given a brief orientation to the bicycle ergometer, which was used as the means of exercise. The subjects were then asked to pedal the bicycle ergometer at a work rate that would elicit 70% of their predicted maximum heart rate. The heart rate was monitored by Polar heart rate monitors worn by the subjects. The subjects were allowed to rest long enough for the heart rate to return to normal before starting the data collection phase. The test was performed in the seated position on the ergometer. Without pedaling, the subjects were asked to breathe through the APD, and the data were monitored in real time using APD v2.0 software, a universal serial bus cable connected to the APD, and a computer. Rest breathing data were collected for 2.5 minutes; the subjects were then asked to begin pedaling at a predetermined rate to elicit a heart rate of 70% of the predicted maximum. The exercise data collection continued for 6 minutes, followed by another 3.5 minutes of rest. Respiratory resistance was monitored continuously throughout the process. A nose clip was used by each subject to allow breathing only through the mouth. The disposable mouthpiece and medical filter on the APD were replaced after each subject. Fig. 2 shows a subject on the bicycle ergometer using the APD.

III. RESULTS
To fully scope the effects of exercise on the respiratory resistance of human subjects, the target number of subjects is estimated to be 10 to 20. For this paper, two typical subjects were selected and their results are presented here. The average RR value is subject dependent, and physical characteristics play an important role in the outcomes. Table 1 shows the physical characteristics of the two subjects.

Table 1 Physical characteristics of the subjects

Subject #   Sex    Age   Weight    Height
1           Male   21    148 lbs   5'11"
2           Male   27    155 lbs   5'9"
Fig. 3 APD-measured continuous resistance values for rest, exercise, and post-exercise rest for subject 1
Fig. 4 Exhalation to inhalation resistance ratios for rest, exercise, and post-exercise rest for subject 1

Fig. 2 A subject on the bicycle ergometer

Fig. 5 Comparison of RR values during the transitions for subject 1
Fig. 6 APD-measured continuous resistance values for rest, exercise, and post-exercise rest for subject 2
Figures 3-8 illustrate the respiratory resistance results obtained from the two subjects. Each data point is a 5-second average automatically calculated by the APD. Figures 3 and 6 follow the RR changes for the two subjects in the inhalation and exhalation directions. Overall averages for each activity indicated that, for subject 1, there was a 2% decrease in average resistance from rest to exercise and an 8% increase from exercise to post-exercise rest in the inhalation direction. For exhalation, for the same subject, there was a 2% decrease from rest to exercise and an 18% increase from exercise to post-exercise rest. Exhalation to inhalation ratios were found to be 1.37 for rest, 1.37 for exercise, and 1.49 for post-exercise rest. Similarly, for subject 2, there was a 27% increase in inhalation resistance from rest to exercise and an 11% decrease from exercise to post-exercise rest. In the exhalation direction, there was a 10% decrease from rest to exercise and a 3% increase from exercise to post-exercise rest. Exhalation to inhalation ratios were found to be 1.29 for rest, 0.93 for exercise, and 1.06 for post-exercise rest. RR values over 30 seconds for each of five states, (i) end of rest, (ii) beginning of exercise, (iii) end of exercise, (iv) beginning of post-exercise rest, and (v) end of post-exercise rest, were also compared to evaluate the exercise-induced resistance changes in the transitional and end phases of the rest and exercise activities. Figures 5 and 8 show the resistances during those periods, and Table 2 below shows the data obtained using the APD during those periods.
Fig. 7 Exhalation to inhalation resistance ratios for rest, exercise, and post-exercise rest for subject 2

Fig. 8 Comparison of RR values during the transitions for subject 2

Table 2 RR changes during 30 s of transition and end phases

Subject 1                    Inh Ave (cmH2O/Lps)   Exh Ave (cmH2O/Lps)   Exh/Inh
End of rest                  3.3                   4.57                  1.38
Beginning of exercise        3.49                  4.61                  1.32
End of exercise              3.19                  4.33                  1.37
Beg. of post-exercise rest   3.14                  4.18                  1.33
End of post-exercise rest    3.72                  5.58                  1.5

Subject 2                    Inh Ave (cmH2O/Lps)   Exh Ave (cmH2O/Lps)   Exh/Inh
End of rest                  2.42                  3.11                  1.29
Beginning of exercise        2.7                   3.19                  1.18
End of exercise              3.24                  2.85                  0.88
Beg. of post-exercise rest   3.16                  2.9                   0.92
End of post-exercise rest    2.82                  3.13                  1.11
IV. DISCUSSION
A comparison of RR values obtained during pre-exercise rest, exercise and post-exercise rest indicates that exercise has a significant effect on the respiratory mechanics and thus on the total respiratory resistance. However, with the results obtained, it is still not possible to state that the changes follow a specific pattern across all subjects. For the first subject, the changes in inhalation and exhalation resistances were similar: there was a decrease in values switching from rest to exercise, and an increase when switching back to rest. For the second subject, however, this was not the case. Inhalation resistance increased when the activity was switched
from rest to exercise and then decreased when switched back to rest; for exhalation, the opposite was observed. The values observed during the transitions and the end phases of the exercise and post-exercise activities followed a broadly similar pattern for both subjects. Even if the patterns of exercise-induced change cannot yet be generalized, it is clearly seen from the results that the APD can rapidly detect such changes in the RR values of the respiratory system. It is generally supposed that respiratory resistance measurements depend on air flow rates [1]. The individual results obtained were not compared with the flow rates of the individual subjects. Because of differences in body mass, airway volume and interior anatomy, flow rates may differ between subjects, and the inconsistency between the results from the two subjects could be attributed to these factors. Because of the portability of the APD and its ability to make rapid RR measurements, it was hoped that its usability could be extended to taking measurements even during exercise, and in that we have succeeded.
V. CONCLUSIONS
This study suggests that respiratory resistance does in fact change between the activities of exercise and rest. These changes can take place in the inhalation direction, the exhalation direction, or both. With an increase to at least ten subjects, we hope to better establish the effects of exercise on respiratory resistance, and how fast and how accurately the APD is able to produce the results. The flow-rate dependency of the resistance values also needs to be analyzed and studied, so that it can be used to correct the resistance values obtained during and after exercise.
REFERENCES 1. Johnson AT (1999) Biomechanics and Exercise Physiology. Wiley, New York 2. Silverman NK, Johnson AT, Scott WH, Koh FC (2005) Exerciseinduced respiratory resistance changes as measured with the airflow perturbation device. Physiol. Meas. 26:29-38 3. Lausted CG, Johnson AT (1999) Respiratory resistance measured by an airflow perturbation device. Physiol. Meas. 20: 21-36
Comparison of IOS Parameters to aRIC Respiratory System Model Parameters in Normal and COPD Adults
Michael Mangum1, Bill Diong1, Michael D. Goldman2, and Homer Nazeran2
1 Engineering, Texas Christian University, Fort Worth, TX, U.S.A.
2 Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, U.S.A.
Abstract— Chronic Obstructive Pulmonary Disease (COPD) is an important cause of illness and death. Sensitive assessments of COPD may be obtained by Impulse Oscillometry (IOS) during resting breathing. The respiratory impedance measured by IOS is modeled well by the augmented RIC (aRIC) circuit with parameters R, I, Rp, Cp, Ce, representing central airway resistance and inertance, peripheral airway resistance and compliance, and extrathoracic compliance, respectively. IOS measurements (from 5 to 35 Hz) were obtained from ten normal adults and ten adults with varying degrees of COPD. The aRIC parameters were then derived by least-squares-optimal fitting to the IOS data. Using Mann-Whitney tests, the main IOS resistance and reactance parameters (R5, R5-R15, X5) and aRIC model parameters (R, Rp, I, Cp, Ce, Rp/Cp) were analyzed to determine whether the values for normal adults came from the same population as the COPD adults' values. For IOS R5, R5-R15, X5, the approximating z statistics were -6.47, -6.48, 6.46, respectively. For aRIC R, Rp, I, Cp, Ce, Rp/Cp, the z statistics were -3.04, -4.31, 2.02, 6.34, 1.64, -6.46, respectively. At a 99% confidence level, all of these parameters, except aRIC I and Ce, were found to come from different distributions. Furthermore, we calculated the Pearson product-moment correlation coefficients between aRIC R, Rp, I, Cp, Ce, 1/Cp, Rp/Cp and IOS R5, R5-R15, X5, R5-Rmin, Rmin. In COPD adults, we found very strong correlations between aRIC R and IOS Rmin (0.987), Rp and X5 (-0.992), 1/Cp and R5-Rmin (0.935), and Rp/Cp and X5 (-0.962). We conclude that the aRIC model of respiratory impedance is as sensitive as IOS to lung function changes in COPD, and that certain aRIC parameters correlate well to IOS parameters used for clinical assessment of COPD.

Keywords— Respiratory impedance, respiratory system model, parameter estimation, impulse oscillometry, COPD.
I. INTRODUCTION
Chronic Obstructive Pulmonary Disease (COPD) is a progressive respiratory disease that is the fourth leading cause of death in the United States. COPD is characterized by two conditions, emphysema and chronic obstructive bronchitis. Emphysema chiefly decreases the elasticity of the alveoli in the lungs, while chronic obstructive bronchitis increases resistance to breathing due to inflammation and mucus buildup in the airways. The changes in respiratory function brought about by COPD can be determined using spirometry, which requires forceful exhalations by the subject under test. Alternatively, lung function can be assessed during resting breathing using the Impulse Oscillometry System (IOS). The IOS utilizes a loudspeaker to apply pressure impulses to the patient through a mouthpiece, and measures the airflow response. The system then performs an analysis to derive the pressure-flow relationship, known as the respiratory input impedance (Z), that characterizes the patient's respiratory system over a frequency range (generally 5-35 Hz). Various resistive (R) and reactive (X) impedance values at particular frequencies, e.g., R5-R15, which is the difference between the resistance at 5 Hz and at 15 Hz, and X5, which is the reactance at 5 Hz, are then used to assess lung function. Over the years, researchers have proposed several electric-circuit models of the respiratory input impedance, i.e., circuits with electrical (voltage-current) impedance that is analogous to, and approximates, the mechanical (pressure-flow) impedance of the respiratory system. An important aim of these efforts is to determine whether the component values of such models may be helpful to doctors in diagnosing and treating respiratory diseases. Recently, it has been found that the respiratory impedance measured by IOS testing is modeled relatively well by the augmented RIC (aRIC) circuit (shown in Fig. 1) and that the model possesses certain advantages over other models [1]. The study described in this paper aimed to (1) compare the sensitivity of various aRIC parameters and those IOS parameters used for clinical assessment of lung function in differentiating between normal adults and adults with COPD, and (2) examine the correlations between these aRIC parameters and IOS parameters.
II. MATERIALS AND METHODS
For this study, we used IOS test data obtained from ten normal adults 24-67 years of age (mean 43.6, SD 14), 1.73-1.83 m in height (mean 1.77 m, SD 3.6 cm), weighing 50.0-100.9 kg (mean 77.7 kg, SD 14.9 kg); and from ten adults 54-79 years of age (mean 66, SD 7.4), 1.60-1.80 m in height (mean 1.74 m, SD 5.6 cm), 54.5-95.9 kg in weight (mean 79.0 kg, SD 11.5 kg) previously diagnosed with
varying degrees of COPD. The aRIC model component values were then derived from the IOS measurements by the technique of parameter estimation, as previously described in [1, 2]. The main IOS resistive and reactive impedance parameters (R5, R5-R15, X5) and aRIC model parameters (R, Rp, I, Cp, Ce, Rp/Cp) were evaluated by the non-parametric Mann-Whitney test (given the relatively small sample size and the non-normal distribution of the values) to determine whether they differed significantly between normal adults and adults with COPD. In addition, the Pearson product-moment correlation coefficients were calculated for the five IOS parameters R5, R5-R15, X5, R5-Rmin, Rmin, where Rmin is the minimum R between 5 and 35 Hz, and the aRIC model parameters R, Rp, I, Cp, Ce, 1/Cp, Rp/Cp, the latter two being motivated by observations that peripheral airway dysfunction results in smaller values of Cp and larger values of Rp [1, 2].
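As a rough illustration of this parameter-estimation step, the MATLAB sketch below fits the five aRIC component values to impedance data by complex least squares. It assumes, per Fig. 1 and [1], that Ce shunts the series R-I branch together with the Rp||Cp block; the frequency grid, parameter values and noise are illustrative stand-ins for the study's data, not the authors' code.

% Hedged sketch: least-squares estimation of aRIC parameters from IOS data.
% Assumed topology (per Fig. 1): Z = (R + jwI + Rp/(1 + jw*Rp*Cp)) shunted by Ce.
f = 5:5:35;  w = 2*pi*f;                       % IOS frequency grid, Hz (assumed)
zbr   = @(p,w) p(1) + 1i*w*p(2) + p(3)./(1 + 1i*w*p(3)*p(4));  % R, I, Rp||Cp
zaric = @(p,w) 1./(1./zbr(p,w) + 1i*w*p(5));   % extrathoracic shunt Ce
pTrue = [0.2 0.002 0.3 0.01 0.001];            % [R I Rp Cp Ce], illustrative
Zmeas = zaric(pTrue,w) + 0.005*(randn(size(w)) + 1i*randn(size(w)));
cost  = @(p) sum(abs(zaric(p,w) - Zmeas).^2);  % complex least-squares criterion
pHat  = fminsearch(cost, [0.1 0.001 0.2 0.02 0.002]);  % estimated parameters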
Fig. 1 Augmented RIC (aRIC) circuit model, with large airway resistance R, large airway inertance I, peripheral airway resistance Rp, peripheral airway compliance Cp, and extrathoracic compliance Ce representing mainly upper airways shunt effects

III. RESULTS

To obtain a preliminary, visual sense of how the IOS parameters and the aRIC model parameters differed between the normal adults and the COPD adults, we graphed the range of values for each parameter as shown in Fig. 2 and Fig. 3. Then, using the non-parametric Mann-Whitney test, we determined that for R5, R5-R15, and X5 of the main IOS parameters, the approximating z statistics were -6.47, -6.48, and 6.46, respectively. For R, Rp, I, Cp, Ce, Rp/Cp of the aRIC model parameters, the z statistics were -3.04, -4.31, 2.02, 6.34, 1.64, -6.46, respectively. Hence the z statistics for all of the main IOS parameters and aRIC model parameters, except for the I and Ce parameters of the aRIC model, indicate at the 99% confidence level that the values came from different distributions. Furthermore, Pearson product-moment correlations were calculated to see how well the aRIC model parameters correlate with the IOS parameters. Five IOS parameters (R5, R5-R15, X5, R5-Rmin, Rmin) were correlated with each of the aRIC model parameters R, Rp, I, Cp, Ce, 1/Cp, and Rp/Cp. The results of these correlations are shown in Table 1 for the COPD adults and in Table 2 for the normal adults.

Fig. 2 Comparison of IOS parameter values

Fig. 3 Comparison of aRIC component values (component ranges for COPD and normal adults, log scale)

In the COPD adults, for aRIC R, the correlation to each IOS parameter ranged in magnitude from 0.581-0.987; for Rp, from 0.654-0.992; for I, from 0.186-0.424; for Cp, from 0.641-0.846; for Ce, from 0.022-0.472; for 1/Cp, from 0.663-0.935; and for Rp/Cp, from 0.623-0.962. In particular, observe that there are very strong correlations between R of the aRIC model and the IOS parameter Rmin (0.987), as well as between Rp and X5 (-0.992), between 1/Cp and R5-Rmin (0.935), and between Rp/Cp and X5 (-0.962). The relationship between R and Rmin is understandable given that a change in large central airway resistance results in a uniform change over all frequencies of the respiratory system's resistive impedance [3]. Unsurprisingly, changes in the peripheral airways are reflected in the relationships between
Rp, 1/Cp, Rp/Cp, and X5, R5-Rmin. On the other hand, note that I and Ce are the aRIC parameters with the poorest correlations to the IOS parameters, which is to be expected given their relative lack of involvement with the physiological changes brought on by COPD. For the normal adults, it was observed that there is again a very strong correlation between R and Rmin (0.933); while there are slightly weaker correlations between Rp and X5 (-0.750), between Cp and R5-Rmin (-0.807), between 1/Cp and R5-Rmin (0.760), and between Rp/Cp and X5 (-0.756).

Table 1 Correlation between IOS and aRIC parameters for COPD adults

         R5       R5-R15   R5-Rmin   X5       Rmin
R        0.581    -0.763   0.871     0.919    0.987
Rp       0.904    0.866    0.910     -0.992   0.654
I        0.334    -0.268   0.424     -0.329   0.186
Cp       -0.805   0.837    -0.846    0.710    -0.641
Ce       0.418    0.022    -0.044    0.225    -0.472
1/Cp     0.880    0.874    0.935     -0.836   0.663
Rp/Cp    0.825    0.706    -0.695    -0.962   0.623
Table 2 Correlation between IOS and aRIC parameters for normal adults

         R5       R5-R15   R5-Rmin   X5       Rmin
R        0.696    -0.353   -0.241    -0.153   0.933
Rp       -0.345   -0.585   -0.443    -0.750   -0.096
I        -0.252   -0.375   -0.278    -0.211   -0.099
Cp       -0.235   -0.585   -0.807    0.024    0.263
Ce       -0.065   -0.585   -0.087    -0.563   -0.016
1/Cp     0.261    0.482    0.760     0.166    -0.203
Rp/Cp    -0.237   -0.270   -0.020    -0.756   -0.251
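For reference, a minimal MATLAB sketch of how such statistics can be produced (Statistics Toolbox); the vectors below are synthetic stand-ins for the per-subject values, not the study's data:

% Hedged sketch of the Mann-Whitney z and Pearson correlations tabulated above.
R5n = 3.0 + 0.4*randn(10,1);                    % normal adults (illustrative)
R5c = 5.5 + 0.9*randn(10,1);                    % COPD adults (illustrative)
[~,~,st] = ranksum(R5n, R5c, 'method', 'approximate');
z = st.zval;                                    % approximating z statistic
RpHat = 0.3 + 0.1*randn(10,1);                  % fitted Rp per subject (example)
X5    = -2*RpHat + 0.05*randn(10,1);            % measured X5 (example)
rho   = corr(RpHat, X5);                        % Pearson product-moment correlation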
IV. CONCLUSIONS
We conclude that the aRIC circuit model's central airway resistance (R), peripheral airway resistance (Rp), peripheral
airway compliance (Cp), and Rp/Cp ratio are all sensitive enough to detect differences in respiratory function between normal subjects and adults with COPD. However, the extrathoracic compliance (Ce) component of the aRIC model is not useful, as expected, for the detection of respiratory function changes. Moreover, several of the calculated aRIC model parameters correlate well with measured IOS parameters used for lung function assessment, especially R with Rmin, Rp with X5, 1/Cp with R5-Rmin, and Rp/Cp with X5. In addition, the correlation analysis has illustrated the reciprocal relationship between deterioration in peripheral airway resistance and deterioration in peripheral airway compliance. These correlations provide further encouragement that circuit modeling of respiratory input impedance is a promising alternative technique for assessing mechanical abnormalities in the respiratory function of adults with COPD.
REFERENCES
1. Diong B, Rajagiri A, Goldman M, Nazeran H (2009) The augmented RIC model of the human respiratory system. Med Biol Eng Comput 47:395–404
2. Diong B, Nazeran H, Nava P, Goldman M (2007) Modeling human respiratory impedance. IEEE Engineering in Medicine and Biology Society Magazine: Special Issue on Respiratory Sound Analysis 26:48–55
3. Goldman M, Saadeh C, Ross D (2005) Clinical applications of forced oscillation to assess peripheral airway function. Respiratory Physiology & Neurobiology 148:179–194

Author: Bill Diong
Institute: Texas Christian University
Street: 2840 W. Bowie St.
City: Fort Worth
Country: U.S.A.
Email: [email protected]
Effect of Waveform Shape and Duration on Defibrillation Threshold in Rabbit Hearts
J. Stohlman1, F. Aguel1, G. Calcagnini2, E. Mattei2, M. Triventi2, F. Censi2, P. Bartolini2, and V. Krauthamer1
1 Food and Drug Administration, Center for Devices and Radiological Health, Silver Spring, MD, USA
2 Italian Institute of Health, Rome, Italy
Abstract— We compared the energy defibrillation threshold (EDFT) for termination of ventricular fibrillation (VF) with shocks of different duration and waveform shape using an arbitrary waveform amplifier. Isolated rabbit hearts were Langendorff-perfused with Tyrode's solution and submerged in a 37°C bath of Tyrode's solution. VF was induced by rapidly pacing the epicardium with a bipolar electrode. Once arrhythmic activity was identified via ECG, the rapid stimulus was terminated, and in most cases VF persisted. Two conventional waveform shapes were chosen for comparison: Monophasic Damped Sinusoid (MDS) and Biphasic Truncated Exponential (BTE). These waveforms, with varying duration and amplitude, were delivered to the fibrillating heart through 2x2 cm titanium mesh electrodes positioned ~1 cm on either side of the ventricles. A short-duration BTE waveform (10 ms) was used as a control, to compare with a long-duration BTE waveform (45 ms) and a long-duration MDS waveform (40 ms). To determine the EDFT for each waveform, a step-up protocol was used. The delivered energy of the initial shock was set to ~0.2 J; if this shock failed to defibrillate, the energy of the subsequent shock was increased 0.5-1 J. The process of incrementally increasing the shock amplitude was continued until VF was terminated or the limit of our arbitrary waveform amplifier was reached. The short-duration BTE waveform consistently terminated VF with an EDFT of 0.5-3 J for all hearts. The long-duration MDS waveform failed to terminate VF in a number of heart preparations, but successfully terminated VF in some hearts at an average EDFT 2-3 times higher than that of the short-duration BTE for the same heart. The long-duration BTE waveform failed to defibrillate arrhythmias for all hearts. Preliminary results demonstrate a method for comparing defibrillation shocks with waveforms that have customizable shapes, as well as selectable duration and delivered energy characteristics.
Keywords— defibrillation, ventricular fibrillation, cardiac electrophysiology.
I. INTRODUCTION
Cardiovascular disease claims the lives of nearly 950,000 people in the United States every year. A significant number of these deaths are due to sudden cardiac death, usually as a result of ventricular fibrillation (VF). Of the roughly 225,000 that occur outside of a hospital, only about 2-5% of victims are resuscitated, owing to greater than 5-minute delays in defibrillation. Rapid access to Automatic External Defibrillators (AEDs) has been shown to improve the chance of survival considerably [1]. Despite the documented efficacy of external defibrillation, the mechanisms by which AEDs terminate ventricular fibrillation remain unclear. The shape of defibrillation waveforms has been extensively studied. It has been shown that biphasic waveforms are more efficient than monophasic waveforms in defibrillating the heart for both ventricular and atrial fibrillation [2-6]. It has been reported that untruncated monophasic waveforms (i.e., Monophasic Damped Sinusoidal, or MDS) can be ineffective because they can re-induce fibrillation [7]. The main objective of this study was to determine the effect of shock duration on defibrillation efficacy.
II. METHODS
A. Heart Preparation
The study conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996) and was approved in advance by the Food and Drug Administration Institutional Animal Care and Use Committee. Six New Zealand white rabbits of either sex, with an average weight of 3.96±0.34 kg, were anesthetized with an intramuscular ketamine-xylazine injection (ketamine: 35-50 mg/kg at 100 mg/mL; xylazine: 5-10 mg/kg at 20 mg/mL). After loss of the limb withdrawal reflex was verified, the rabbit was euthanized using an intravenous injection of sodium pentobarbital (50 mg/kg) and heparin (2000 units) via the auricular vein. The excised heart was then cannulated through the aorta and transferred to the experimentation chamber.
Fig. 1 Sample recording from a single ECG lead. The sequence of events presented is: 1) induction of VF by rapid pacing, 2) persistent VF identified, 3) defibrillating shock delivered, and 4) immediate return to normal sinus rhythm

The hearts were perfused with warm Tyrode's solution composed of (in mM): 126 NaCl, 23.8 NaHCO3, 4.4 KCl, 1.08 CaCl2, 1 MgCl2, 1 NaH2PO4, 22 dextrose, 20 taurine, 5 creatine, 5 pyruvic acid, equilibrated with 95% O2-5% CO2 to a pH between 7.3 and 7.5. Bovine albumin was added to the perfusate to reduce edema. Aortic pressure was monitored and recorded with a blood pressure monitor (World Precision Instruments) and kept at 60±10 mmHg. A three-lead differential ECG was acquired and recorded from the Tyrode's-filled experimental chamber throughout the experiment. The ECG signals were used to determine fibrillation induction and defibrillation.

B. Arbitrary Waveform Amplifier
We developed an arbitrary waveform amplifier (AWA) for use in in-vitro animal experiments [8,9]. The device is based on two linear amplifiers in a bridge configuration that can produce a maximum of ±130 V and 10 A over an impedance load ranging from 10-25 Ohms. The isolated power supply used by the system is a series of 2 rechargeable battery packs (8 lead-acid batteries each), which allows a high surge of current over hundreds of msec. The AWA is computer controlled using LabVIEW and is capable of outputting waveforms of customizable shape and duration. The effective voltage and current of the output are recorded in LabVIEW.

C. Energy Defibrillation Threshold Protocol
Defibrillation shocks were delivered from the AWA to terminate each VF episode. Four specific waveforms, each defined by its shape and duration, were tested for defibrillation efficacy (a synthesis sketch follows the list):
1. BTE10: Biphasic Truncated Exponential waveform with 1st phase duration of 6 ms and 2nd phase duration of 4 ms.
2. BTE30: Biphasic Truncated Exponential waveform with 1st phase duration of 20 ms and 2nd phase duration of 10 ms.
3. BTE45: Biphasic Truncated Exponential waveform with 1st phase duration of 30 ms and 2nd phase duration of 15 ms.
4. MDS40: Monophasic Damped Sinusoid waveform with total duration of 40 ms.
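The paper specifies only the phase durations of these waveforms; as a rough MATLAB illustration, the sketch below synthesizes them, with the sample rate, amplitude, exponential time constant and damped-sine frequency all assumed for illustration:

% Hedged synthesis of the four test waveforms; only the durations come from
% the text. Amplitude, tau and the sine frequency are illustrative guesses.
fs  = 100e3;                       % sample rate, Hz (assumed)
V0  = 100;  tau = 8e-3;            % peak voltage and decay constant (assumed)
bte = @(t1,t2) [ V0*exp(-(0:1/fs:t1-1/fs)/tau), ...
                -V0*exp(-(0:1/fs:t2-1/fs)/tau) ];
bte10 = bte(6e-3, 4e-3);           % BTE10: 6 ms / 4 ms phases
bte30 = bte(20e-3, 10e-3);         % BTE30: 20 ms / 10 ms phases
bte45 = bte(30e-3, 15e-3);         % BTE45: 30 ms / 15 ms phases
t     = 0:1/fs:40e-3 - 1/fs;       % MDS40: 40 ms total duration
mds40 = V0*exp(-t/10e-3).*sin(2*pi*25*t);   % damped sinusoid (25 Hz assumed)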
The BTE10 waveform was considered the reference waveform due to its known clinical and experimental performance. The potential refibrillatory nature of long-duration monophasic waveforms is well documented [7], and therefore the MDS40 waveform served as a positive control for a waveform that could exhibit a low efficacy of defibrillation. The shape of the MDS waveform used in these experiments was designed to conform to ANSI/AAMI DF80:2003 parameters for MDS waveforms used in automated external defibrillators [10]. BTE30 and BTE45 were chosen as examples of long-duration biphasic waveforms, whose efficacy in defibrillation is not clearly defined. The order in which these defibrillation waveforms were applied was chosen at random. When an episode of persistent VF was verified by ECG, the first chosen waveform would be delivered across the ventricles of the rabbit heart via 2 titanium mesh electrodes. Figure 1 shows a recording from a single ECG lead at two different points in the experiment: induction of VF by rapid pacing and defibrillation resulting in return to normal sinus rhythm. The first applied electric shock was set to a low expected energy output (~0.2-1 J). If the shock was a success, as determined by the ECG recording, then the heart was allowed to recover approximately 3 minutes before attempting the next fibrillation induction and defibrillation iteration with another waveform. If the first shock failed to defibrillate the heart, the expected energy of the next shock for the
same waveform would be increased by ~0.5-1 J and delivered immediately. The process of incrementally increasing the expected energy of each subsequent shock continued until VF was successfully terminated or the limit of the AWA was reached. The maximum possible output energy from the AWA for each experimental waveform was, approximately: 4 J for BTE10, 9.5 J for BTE30, 11.5 J for BTE45 and 12 J for MDS40. If a long-duration waveform failed to defibrillate at the maximum amount of allowable energy, then the defibrillation protocol would be reinitiated with the BTE10 waveform, starting at a low energy output and increasing it incrementally until defibrillation or the limit of the amplifier was reached. If the BTE10 waveform failed to defibrillate at the maximum output energy of our AWA, a Ventak external defibrillator (Cardiac Pacemaker, Inc., St. Paul, MN) was used to deliver a rescue shock. The Ventak defibrillator uses an MDS waveform and is capable of defibrillating with energies of up to 50 J, but in most cases would successfully terminate an arrhythmia with a 2.5 ms duration shock of approximately 8 J. Shock outcome, ECG, perfusate temperature and pressure, and shock voltage and current were recorded for analysis with software created in LabVIEW (National Instruments Corporation, Austin, TX).

Fig. 2 Scatter plot of all shocks delivered for each experiment. The data are organized along the x-axis by experiment number; point shape and shading indicate waveform type and whether the shock was a success or failure
III. RESULTS
Figure 2 shows a scatter plot of all shocks delivered for each experiment. In this figure, the point shape represents the waveform type. Filled-in points represent successful defibrillation shocks and hollow points represent failed shocks. In rabbit heart 1, MDS40 failed to terminate any arrhythmias but BTE10 consistently defibrillated with an average delivered energy of approximately 1.25 J. The BTE10 waveform was the only waveform that successfully defibrillated heart 4, even though repeated attempts to defibrillate were made with high-energy BTE30, MDS40 and BTE45 waveforms. Only one MDS40 shock of 9.1 J defibrillated heart 5, and only one BTE30 shock of 4.8 J defibrillated heart 6. Six MDS40 shocks defibrillated heart 3, but the mean energy for the MDS40 waveform was 4.62±1.59 J, whereas the mean energy to defibrillate heart 3 with the BTE10 waveform was 1.29±0.4 J. In no instance did a BTE45 waveform defibrillate episodes of VF in any of the 6 rabbit hearts, as can be seen in the last column of Figure 2. Figure 3 shows the comparison of defibrillation probability for each waveform based on the number of successful defibrillating shocks over the total number of shocks delivered for all experiments. For each waveform, all delivered shocks were divided into 1 J bins and then, for each bin, the number of successful shocks was divided by the
total number of shocks delivered. The binning of the data by energy made it possible to estimate a probability of successful defibrillation as a function of delivered energy for each waveform. It is clear from Figure 3 that the probability of successful defibrillation is higher for the BTE10 waveform than for the other three long-duration waveforms at all energies tested. It should be noted that the maximum energy that could be delivered with our AWA was limited to ~4 J for the BTE10 waveform, so we were not able to deliver BTE10 shocks exceeding this energy level. The only effective waveform for delivered energies of 0-2 J was the BTE10 waveform. The MDS40 and BTE30 waveforms were intermittently successful in defibrillating rabbit hearts, but at a higher energy amplitude than the BTE10 waveform for the same rabbit heart.

Fig. 3 Graph depicting probability of defibrillation success for each waveform type. Delivered energy is organized into 1 J bins for calculation of probability
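A MATLAB sketch of the 1 J binning behind Fig. 3 follows; the shock energies and outcomes below are placeholders, not the recorded data:

% Hedged sketch of the binned success-probability estimate used for Fig. 3.
energy  = [0.7 1.2 1.9 2.4 3.1 3.6 0.9 2.2];   % delivered energies, J (example)
success = [0   0   1   0   1   1   0   1  ];   % 1 = VF terminated (example)
edges   = 0:1:10;                              % 1 J bins
[~,~,bin] = histcounts(energy, edges);         % assign each shock to a bin
prob = accumarray(bin(:), success(:), [numel(edges)-1, 1], @mean, NaN);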
IV. CONCLUSIONS
We show that in our experimental model short-duration BTE defibrillation shocks are more efficacious in terminating a pacing-induced episode of ventricular fibrillation than long-duration BTE or MDS shocks. It should be noted that our results are limited to our experimental model: an ex-vivo Langendorff-perfused rabbit heart. It may be that this apparent difference in the efficacy of defibrillation shocks does not apply in larger hearts or in-vivo. Additional experiments are needed in in-vivo large animal models. The data presented here are preliminary and have not been tested for statistical significance. In order to maximize the use of animals and to prevent the deterioration of our preparation, we did not discontinue perfusion during a VF episode. It has been documented that defibrillation mechanisms may be different for fine versus coarse VF, such that these results may not be applicable to the more realistic scenario where the defibrillation shock is delivered after the heart tissue is ischemic. Again, additional experiments are needed to determine whether these results are applicable to fine as well as coarse VF.

ACKNOWLEDGMENT
Special thanks to the members of the Electrophysiology and Electrical Stimulation Laboratory: Farhan Munshi, Thais Moreira, Dulciana Chan and Dr. Richard A. Gray, for support and assistance in experimentation and theory.

REFERENCES
1. Robertson RM (2000) Sudden Death from Cardiac Arrest -- Improving the Odds. N Engl J Med 343:1259-60
2. Bardy GH, Ivey TD, Allen MD, Johnson G, Mehra R, Greene HL (1989) A prospective randomized evaluation of biphasic versus monophasic waveform pulses on defibrillation efficacy in humans. J Am Coll Cardiol 14:728-33
3. Bardy GH, Marchlinski FE, Sharma AD, Worley SJ, Luceri RM et al. (1996) Multicenter Comparison of Truncated Biphasic Shocks and Standard Damped Sine Wave Monophasic Shocks for Transthoracic Ventricular Defibrillation. Circulation 94:2507-14
4. Behrens S, Li C, Kirchhof P, Fabritz FL, Franz MR (1996) Reduced Arrhythmogenicity of Biphasic Versus Monophasic T-Wave Shocks: Implications for Defibrillation Efficacy. Circulation 94:1974-80
5. Gliner BE, Lyster TE, Dillion SM, Bardy GH (1995) Transthoracic Defibrillation of Swine With Monophasic and Biphasic Waveforms. Circulation 92:1634-43
6. Koster RW, Dorian P, Chapman FW, Schmitt PW, O'Grady SG, Walker RG (2004) A randomized trial comparing monophasic and biphasic waveform shocks for external cardioversion of atrial fibrillation. American Heart Journal 147:e1-e7
7. Schuder JC, Stoeckle H, West JA, Keskar PY (1971) Transthoracic ventricular defibrillation in the dog with truncated and untruncated exponential stimuli. IEEE Trans Biomed Eng 18:410-5
8. Triventi M, Mattei E, Delogu A, Censi F, Calcagnini G, Bartolini P, Aguel F, Stohlman J, Krauthamer V (2008) In-Vitro Investigation of Very Long Defibrillation Shocks: Design and Testing of a Capacitor-Free Defibrillator. Computers in Cardiology 35:493-496
9. Triventi M, Mattei E, Delogu A, Censi F, Calcagnini G, Bartolini P, Aguel F, Stohlman J, Krauthamer V (2008) Innovative Arbitrary Waveform Defibrillator for Cardiac Electrophysiology Research. BioMED Proc 601:288-291
10. ANSI/AAMI DF80:2003 (2004) Medical electrical equipment -- Part 2-4: Particular requirements for safety of cardiac defibrillators (including automated external defibrillators). Association for the Advancement of Medical Instrumentation, Arlington, VA
Address for correspondence:
Jayna Stohlman
Food and Drug Administration
Center for Devices and Radiological Health
10903 New Hampshire Avenue, WO62-1129
Silver Spring, MD 20903-0002
USA
[email protected]
The Measurement and Processing of EEG Signals to Evaluate Fatigue
M.R. Yousefi Zoshk and M. Azarnoosh
Department of Biomedical Engineering, Islamic Azad University of Mashhad, Mashhad, Iran
Abstract— Fatigue and drowsiness strongly affect the human mind's ability and capability to function correctly, and they are a major cause of the brain's inefficiency and ineffectiveness. The phenomenon is of great psycho-sociological concern among scholars. Monitoring EEG signals makes it possible to detect extreme fatigue conditions and to warn of them in critical circumstances such as long-distance driving and monotonous exercises. The aim of this paper is to describe an EEG-based fatigue measurement system and to report its trustworthiness. Methodology: Changes in all major EEG signals during alert and fatigue conditions were used to develop an algorithm for detecting different levels of fatigue under laboratory conditions in 17 subjects. Results: The MATLAB software was shown to be able to identify fatigue with 83% accuracy in the 17 subjects who underwent the tests. The percentage of time the subjects were detected to be in the different stages of fatigue was moderately different from the alert phase. Discussion: To our knowledge, this is the first measurement software described that has been shown to detect fatigue based on EEG signal changes in different frequency bands. More field research is required to evaluate the fatigue software in order to produce a robust and reliable fatigue measurement system.
Keywords— EEG signals, fatigue, monotonous exercises, road accidents, Iran.
I. INTRODUCTION
One of the major causes of car collisions and traffic accidents is driver fatigue and drowsiness, which is believed to account for approximately 20-35% of all road vehicle accidents in Iran (Annual Report of the Islamic Republic of Iran's Ministry of Road and Transportation, 2009). Many academics in the field of transportation believe that this is a very conservative estimate and that the actual contribution of driver fatigue to road accidents is much higher than what appears in the statistics presented by governmental authorities. Statistical analysis of accident data in Iran suggests that fatigue is the major concern in road accidents, particularly at night and especially in situations in which
driving hours are very long and monotonous (Ibid). Doing any activity for a long time will render a person unable to maintain skilled performance; this is as true for long-distance driving as for any other skill, and it is a precursor of road accidents (Dinges, 1995; Horne & Reyner, 1995). In researching the issue of monotonous practices, we identified the importance of developing fatigue countermeasure devices in order to help prevent driving accidents and human performance errors. Moreover, road accident evidence gives reason for serious consideration of technological countermeasures for driver fatigue, since drowsiness is a constant danger for long-distance drivers. Drowsiness can harmfully affect drivers' capability to assess the level of alertness needed to continue driving safely (Ibid). In response to these serious issues, on-line monitoring of fatigue and drowsiness in monotonous tasks has the potential to detect brain activity changes during deteriorations in alertness. Almost all fatigue countermeasure devices measure physiological responses in the subjects, such as EOG signals, or changes in the subject's alertness through steering behavior (Yabuta et al., 1985). While a variety of potential countermeasures to fatigue have been developed, the EEG signal has been shown to be one of the most predictive and reliable (Lal et al., 2002b). However, very little evidence exists on the efficacy of incorporating EEG signal detection and analysis into a technological countermeasure device for fatigue (Ibid). Researchers have suggested the possibility of using EEG grouped alpha waves and the electrocardiogram in sleep detection systems (Fukuda et al., 1994; Ninomija et al., 1993). Given the lack of an EEG-based fatigue detector, we assessed 17 subjects during a monotonous task with the aim of isolating EEG changes during the early, medium, and extreme phases of fatigue. We found significant changes in frequency activity, such as in delta and theta during the early phase, including a decrease in delta from the alert phase to early fatigue, and no substantial changes in theta, beta and alpha across the fatigue phases.
II. METHODOLOGY
A. EEG Algorithm
From the data collected in 17 subjects, an EEG fatigue algorithm was created. The EEG of fatigue was classified into early fatigue, medium fatigue and extreme fatigue phases. The EEG changes were monitored during the alert, early fatigue, medium fatigue and extreme fatigue phases and were used to develop the algorithm, which detects a set of programmed changes that occur during the different phases of fatigue. The average change in EEG for each of the fatigue phases was computed as the difference from the alert baseline. The algorithm was developed using MATLAB. This software was applied to detect the four different functional states of the brain: alert, early fatigue, medium fatigue and extreme fatigue. EEG data in these four phases were registered into 24 channels, represented in the software by color panels (green, yellow, orange, and red, respectively). A color scale indicated red as the highest level of energy and the blue spectrum as the lowest energy level of the delta, theta, alpha and beta bands. The fatigue software was developed so that it is capable of analyzing EEG data in real time as well as performing off-line analysis of previously acquired data. It is capable of acquiring two channels of EEG data. The software uses an FFT to transform raw EEG data into the frequency domain. The program then calculates the magnitude, for each second of data, in each of the delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), and beta (13-40 Hz) frequency bands (see Fig. 2). The magnitude is calculated as the sum of the values within a particular band of the EEG spectrum. A section of the data is taken over a period of time that is representative of the individual's alert state. These data are taken from the beginning of the trial, before the subject develops symptoms of fatigue. From this baseline data, the mean and standard deviation of the magnitudes in each frequency band are calculated for all three fatigue phases.
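A compact MATLAB sketch of this per-second band-magnitude computation follows; the synthetic signal and the 60 s baseline length are assumptions for illustration, not the study's code:

% Hedged sketch of the FFT band magnitudes described above.
fs  = 200;                              % stated sampling rate, Hz
eeg = randn(120*fs,1);                  % placeholder for one EEG channel
bands = [0 4; 4 8; 8 13; 13 40];        % delta, theta, alpha, beta (Hz)
nSec  = floor(numel(eeg)/fs);
mag   = zeros(nSec, size(bands,1));
f     = 0:fs-1;                         % 1 Hz bins for a 1 s window
for s = 1:nSec
    X = abs(fft(eeg((s-1)*fs+1 : s*fs)));
    for b = 1:size(bands,1)
        mag(s,b) = sum(X(f >= bands(b,1) & f < bands(b,2)));  % band magnitude
    end
end
mu = mean(mag(1:60,:));  sd = std(mag(1:60,:));  % assumed 60 s alert baseline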
B. Subjects and Research Procedure
Seventeen male volunteers were randomly engaged for the research. Subjects were aged between 20 and 35 years. The subjects had no medical contraindications or drug abuse that would limit compliance. The research was performed at a temperature of 18 degrees centigrade as the subjects performed a standardized fatigue test in the biomedical engineering laboratory at the Islamic Azad University of Mashhad. The task consisted of asking subjects to
take a seat in front of the monitor and distinguish three different images differentiated by the number of white dots (Fig. 1). The subjects were asked to select the 4-dot image by pressing the control key, and the 3- and 5-dot images by pressing the shift key, over 4 consecutive quarters. The locations of the dots were presented in a random manner. The time limit for each image was 3 seconds. The subjects were presented with 350 images in each quarter, and simultaneous EEG and EOG 1 signals were obtained during the task. Twenty-four channels of EEG were recorded according to the International 10-20 System, which spans the entire brain. A monopole montage was used; that is, EEG activity was recorded in relation to a linked-ear reference.
Fig. 1 The dot images among which the subjects were asked to choose: the SHIFT key for the 3- and 5-dot images and the CONTROL key for the 4-dot image, in four consecutive quarters
1 The one-channel EOG measurement was applied to filter out noise. The EOG is a potential produced by movement of the eye or eyelid. The original EEG is recovered by subtracting the separately recorded EOG from the measured EEG, using appropriate weights, thereby rejecting the influence of the EOG on particular EEG channels.
Table 1 A sample of the grades the subjects gave themselves at the end of each quarter

Data No    1st quarter    2nd quarter    3rd quarter    4th quarter
DATA03     1              3              5              7
DATA04     2              4              6              9
DATA05     2              5              8              10
DATA06     3              6              8              10
DATA07     4              6              7              8
DATA08     5              6              8              10
C. Statistical Analysis of the Data
In off-line analysis mode, the data could be viewed graphically, with a line indicating to which panel a particular epoch had been allocated. The extracted parameters consist of 5 indicators for the EEG signals and 2 others for the EOG signals. Because some epoch frequency content exists at any stage of fatigue, the following equation is defined to calculate the power ratio of the frequencies from the spectrogram of the combined signals:

Power_ratio = ∫_{13}^{40} SPEC(t, f) df / ∫_{0}^{13} SPEC(t, f) df
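In MATLAB this power ratio can be sketched from a short-time spectrogram as follows (Signal Processing Toolbox); the window lengths and the synthetic signal are assumptions for illustration:

% Hedged sketch of the power-ratio computation; x is a random placeholder.
fs = 200;                                   % stated sampling rate, Hz
x  = randn(60*fs,1);                        % placeholder EEG segment
[S,F,~] = spectrogram(x, fs, fs/2, fs, fs); % 1 s windows, 50% overlap (assumed)
P  = abs(S).^2;                             % power spectrogram SPEC(t,f)
hi = F >= 13 & F <= 40;  lo = F < 13;       % 13-40 Hz vs 0-13 Hz bands
power_ratio = trapz(F(hi), P(hi,:)) ./ trapz(F(lo), P(lo,:));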
III. CONCLUSIONS
The software categorized the simultaneous delta, theta, alpha, and beta data according to the algorithm into the alert, early fatigue, medium fatigue and extreme fatigue phases; Fig. 2 shows an example of EEG changes according to a spectrogram display in an individual subject. The spectrogram summarizes the EEG data in a color-coded map. The values are color coded and plotted to produce a continuous color map. The bottom of each column shows the bandwidths, which are delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), and beta (13-40 Hz). The color scale displayed as a bar represents low (blue) to higher energy bands (red). This row maps the full color scale spectrum across the entire amplitude range of the four bands. The spectrogram is color coded to indicate how much activity it contains. The red-to-orange spectrum at the bottom of each column of the spectrogram shows the EEG activity of alertness, that is, the presence of alpha and beta activity. The yellow, green and blue spectra shown in the spectrogram during fatigue indicate an increase in slow wave activity, that is, theta, in both relative and absolute cases. Note the simultaneous decrease in alpha and beta activity (indicated by a decrease in the red color in the alpha and beta bands). The total epochs were distributed among each of the four phases. These epochs were validated according to the EOG analysis, which acted as the control against which the software allocation of the epochs was compared. Table 2 demonstrates the allocation by the software of the total number of epochs to the alert and the three fatigue phases for each subject.
Fig. 2 The spectrogram of the EEG: delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), and beta (13-40 Hz). Note: the darkest shade (specified by a red color) indicates more activity in the delta band, and the gray shade (specified by yellow and blue colors) indicates a reduction or lack of activity in the theta, alpha and beta bands

The analysis showed that there was an overall difference in the comparison of the means of the four states. It also found that the percentage of time the subjects were in the medium fatigue and extreme fatigue phases was significantly different from the alert phase. The software detected a larger proportion of epochs in the first fatigue state, that is, the early fatigue phase, compared to the other two fatigue phases. The number of epochs detected in the medium fatigue and extreme phases was not significantly different. The EOG analysis identified subjects as being in the alert phase for an average of 35% of the time, in the early fatigue phase for 30% of the time, in the medium fatigue phase for 19%, and in the extreme fatigue phase for 16% of the total study of the 4 quarters. This research has produced a fatigue-detecting algorithm based on EEG changes and demonstrates the ability of the algorithm to detect different phases of fatigue in monotonous practices. The results of testing the software found that the 17 subjects
were in a fatigue state for at least 65% of the total time they spent on the task. This confirms the researchers' findings that drivers doing monotonous activity are at risk of driving in a fatigued state for a majority of the time when the driving task is monotonous (Lal & Craig, 2002a; Lal et al., 2003). Long-distance drivers, especially on the transit roads from Bandar Abbas to Mashhad and from Tehran to Mashhad, on which they drive more than 15 hours, should know that they endanger themselves and others when they ignore feelings of fatigue, whose natural end result is falling asleep. During monotonous practices, we found increases in slow wave brain activity and specific changes in the EOG. In the current study, the software was shown to be capable of reliably detecting the three stages of fatigue from changes in the EEG, especially the slow wave variations. It was also shown that the three different phases of fatigue can be computed from simultaneous EEG changes in the delta, theta, alpha, and beta bands. In addition, the software has the capability to detect fatigue on an individual basis, where an algorithm can be computed based on the individual's specific EEG changes during fatigue.

Table 2 Five sample subjects (out of 17) with the percentage of power ratio detected in the 4 quarters of the task (functional states of the brain)
ACKNOWLEDGMENT
I would like to thank Dr. Mohammad Ali Khalilzadeh, Head of the Department of Biomedical Engineering of the Islamic Azad University of Mashhad, who has been kind enough to give his time and thoughts to the problems presented here. His comments have been very important in the completion of the paper. My sincere gratitude goes to Dr. Rouhollah Yousefi, for his extremely kind support and useful advice, offering crucial assistance during the preparation of the script, reading the manuscript, and providing structural editing. Without his assistance, the research would have been next to impossible. We alone assume responsibility for the article's nature and conclusions and for not always following our colleagues' advice.
REFERENCES
Annual Report of the Islamic Republic of Iran's Ministry of Road and Transportation, Vol. 1. IRI's Ministry of Road and Transportation (in Persian), Tehran, Iran, pp 345-390.
Dinges, D. F. (1995). An overview of sleepiness and accidents. Journal of Sleep Research, 4, 4-14.
Horne, J. A., & Reyner, L. A. (1995). Sleep related vehicle accidents. British Medical Journal, 310, 565-567.
Yabuta, K., Iizuka, H., Yanagishima, T., Kataoka, Y., & Seno, T. (1985). The development of drowsiness warning devices (Rep. No. Section 4). Washington, DC: U.S. Department of Transportation.
Lal, S. K. L., & Craig, A. (2002b). Driver fatigue in professional versus non-professional drivers: Investigation of brain activity. Proceedings of the Road Safety, Research, Policing and Education Conference, Adelaide, Australia.
Fukuda, C., Funada, M. F., Ninomija, S. P., Yazy, Y., Daimon, N., Suzuki, S., & Ide, H. (1994). Evaluating dynamic changes of driver's awakening level by grouped alpha waves. IEEE, 1318-1319.
Ninomija, S. P., Funada, M. F., Yazu, Y., Ide, H., & Daimon, N. (1993). Possibility of ECGs to improve reliability of detection system of inclining sleep stages by grouped alpha waves. IEEE, 1410-1411.
Lal, S. K. L., & Craig, A. (2002a). Driver fatigue: Electroencephalography and psychological assessment. Psychophysiology, 39, 1-9.
Lal, S. K. L., Craig, A., Boord, P., Kirkup, L., & Nguyen, H. (2003). Development of an algorithm for an EEG-based driver fatigue countermeasure. Journal of Safety Research, 34, 321-328.
Mohammad Reza Yousefi Zoshk
Department of Biomedical Engineering, Islamic Azad University of Mashhad, Ghasem Abad, Mashhad, Iran
Email: [email protected]

Mahdi Azarnoosh
Department of Biomedical Engineering, Islamic Azad University of Mashhad, Ghasem Abad, Mashhad, Iran
Email: [email protected]
Modeling for the Impact of Anesthesia on Neural Activity in the Auditory System
Z.B. Tan1, L.Y. Wang1, H. Wang2, X.G. Zhang3, and J.S. Zhang3,4
1 Dept. of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan 48202, USA
2 Dept. of Anesthesiology, Wayne State University, Detroit, Michigan 48202, USA
3 Dept. of Otolaryngology, School of Medicine, Wayne State University, Detroit, Michigan 48201, USA
4 Dept. of Communication Sci. and Dis., College of Liberal Arts and Sciences, Wayne State University, Detroit, Michigan 48202, USA

Abstract— In this paper, mathematical models of the auditory system were derived to characterize the impact of anesthesia on auditory systems and the mechanism of hearing damage. The auditory system was represented as a black-box input-output system, with external sound stimuli as the input and the neuron firing rates as the output. Two parallel subsystem models were developed: an ARX model for the auditory system under external stimuli and an ARMA model for the spontaneous activities of the neurons in the primary auditory cortex. The models provide a quantitative characterization of anesthesia's impacts and describe the mechanism of hearing loss on auditory transmission channels.
Keywords— Modeling, ARMA, ARX, Anesthesia, Auditory system.
I. INTRODUCTION
In this paper, mathematical models for the auditory system were derived to characterize the impact of anesthesia on the auditory system and the mechanism of hearing damage. The auditory system was modeled as a black-box input-output system instead of the physiological and neural structures that represent different physiological functions of the ear and transmission channels in neural systems [1]. This simplified model can be obtained without compromising the accuracy that is necessary for prediction of neural responses to different types, strengths, and durations of stimulus without exhaustive experiments. Also, model parameter changes under different conditions can be used to analyze the impact of drugs, stimulations, and physiological conditions on the auditory system. In this study, different dosages of anesthesia showed substantial impact on the auditory system. The derived models can provide a quantitative characterization of such impacts. In addition, rats with hearing damage demonstrate different neural responses compared to normal controls. Models can be used to describe the mechanism of hearing damage on auditory transmission channels. Finally, model analysis may provide some insights on remedies to correct adverse effects of anesthesia drugs or hearing loss. Two parallel subsystems are introduced in the proposed auditory system models: one is an ARX (Auto-Regression with eXternal input) model structure that represents the auditory system under external stimuli as the input and the neuron firing rate as the output. The output of this subsystem represents the firing rate in response to acoustic stimuli. The other subsystem is an ARMA (Auto-Regression and Moving Average) model structure which represents the spontaneous activities (no inputs) of the neurons in the primary auditory cortex. By combining these two models, the total measured firing rate is a weighted summation of the two models. The weighting constant α is a function of anesthesia level. By comparing the models of healthy and damaged auditory systems, it is possible to reveal how hearing damage affects the neural activity recorded from different channels in the auditory cortex. Figure 1 is a diagram of the auditory system model.
II. MATERIALS AND METHODS
A. Materials
Five Long-Evans rats were used in the experiments. The subjects were anesthetized with isoflurane (gas). To understand the impact of anesthesia on the auditory system, two dosages were used: a low dose of concentration 1.5-1.75%, which is about 1.5-1.75 times the MAC for rats (the minimum alveolar anesthesia concentration preventing purposeful movement to supramaximal noxious stimulation in 50% of animals, which is about 0.98 for the rat); and a high dose of concentration 2.5-3.0%. The anesthesia was maintained over the entire period of the experiment.
Fig. 1 Diagram of auditory system model
B. Methods
The measured neural spike waveforms were first processed to derive the neural firing rate trajectories over time. The neuronal firing rate is a commonly used variable to characterize neural activities. It contains essential information on neuron activities and is at the core of computational neuroscience [2]. The firing rate was used as the output of the proposed auditory system model. To estimate the model parameters, a typical data set was chosen for system identification, which is the field of deriving model parameters using experimental data. In the following sections, the procedure of our method is detailed.

ARX Model: When rats were under deep anesthesia with a high dose of isoflurane, the activities of primary auditory cortex neurons were significantly activated by acoustic stimulation with broadband noise. On the other hand, the spontaneous activities were largely suppressed. These features are visually apparent from the data shown in Figure 2, where the firing rate (middle plot) increased dramatically in response to the input broadband noise, and then dissipated after the noise stopped. The signals recorded in deep anesthesia were used for ARX model estimation. The structure of an ARX model of order na is A(q)y(t) = B(q)u(t) + e(t), where A(q) = 1 + a_1 q^{-1} + a_2 q^{-2} + ... + a_{na} q^{-na}, B(q) = b_1 q^{-1-nk} + b_2 q^{-2-nk} + ... + b_{nb} q^{-nb-nk}, nb - 1 is the number of zeros of the system, nk is the delay, and e(t) is white noise with zero mean and a given but unknown variance. To obtain the averaged firing rate y(t), the stimulus was presented periodically every 10 seconds. The data were then divided into segments of 10 seconds each. The firing rates over consecutive data segments were averaged. In this data processing, five segments of the data (a total of 50 seconds of data) were averaged to generate a 10-second segment for model estimation. The model parameters were estimated using the function ARX in the Matlab System Identification Toolbox [3]. The orders of the model were chosen as na = 3 and nb = 2. The delay time was estimated with the function DELAYEST in the System Identification Toolbox. The inherent sampling interval of the original firing rates was 0.001 second. For system identification, this step size is too small to get an accurate model. As a result, the data set was resampled with a larger sampling interval of 0.018 second. Given a white noise with power of 85 dB as the input signal, the identified transfer function of the ARX model structure with a delay of nk = 31 was:
H_ARX(z) = z^{-31} * 10^{-3} (-0.36 z - 0.52) / (z^3 - 1.805 z^2 + 1.084 z - 0.2605)    (1)
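A sketch of this identification step with the System Identification Toolbox follows; the data vectors are synthetic placeholders for the resampled stimulus and averaged firing rate, not the recorded data:

% Hedged sketch of the ARX estimation described above.
Ts  = 0.018;                            % resampled interval, s (as stated)
u   = randn(556,1);                     % placeholder input (10 s / 0.018 s)
y   = filter(0.5, [1 -0.8], u) + 0.05*randn(556,1);  % placeholder output
dat = iddata(y, u, Ts);
nk  = delayest(dat);                    % delay estimate (paper: nk = 31)
m   = arx(dat, [3 2 nk]);               % na = 3, nb = 2, as in the text
Hz  = tf(m.B, m.A, Ts);                 % discrete-time transfer function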
Fig. 2 The top plot is the spike train of a single neuron recorded over a period of 45 seconds. The middle plot is the firing rate in response to external (acoustic) stimulation; the red vertical line indicates the starting time of the noise stimulus. The stimulus was given at an interval of 10 s with a duration of 50 ms, 85 dB broadband noise. The bottom plot is the power spectrum of the firing rate; the primary frequency component is 0.1 Hz

The input to this subsystem is the noise stimulation and the output is the firing rate. In Figure 3, the real data are compared with the model output. In Figure 4, the model is used to predict the future firing rate over one segment period of 10 seconds. The model output captures the real data faithfully. This discrete-time system can be represented by a reduced-order continuous-time system:
H_ARX(s) = e^{-0.558 s} * (-0.0016 s + 0.02) / (s + 2.427)    (2)
The step responses of the discrete-time system and the simplified system are compared in Figure 5.

ARMA model: The spontaneous activity of neurons in the primary auditory cortex became more dominant when animals were under low-dose anesthesia or in a nearly awake state. This can be seen from Figure 6, where the data were recorded under a low dose of anesthetic (1.5-1.75%). When animals were not stimulated, the firing rate may be viewed as a time series produced by noise stimulation with zero mean and a given (but unknown) variance. It can be expressed as an ARMA model structure: A(q)y(t) = C(q)e(t), where A(q) = 1 + a_1 q^{-1} + a_2 q^{-2} + ... + a_{na} q^{-na}, C(q) = 1 + c_1 q^{-1} + c_2 q^{-2} + ... + c_{nc} q^{-nc}, na is the order of the model, nc is the number of zeros of the system, and e(t) is the white noise disturbance.
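Analogously, the spontaneous-activity model can be estimated as an output-only (time-series) ARMA fit; in the sketch below, the data vector is a synthetic placeholder for the spontaneous firing rate:

% Hedged sketch of the ARMA estimation (System Identification Toolbox).
Ts  = 0.018;                            % sampling interval, s (as above)
y   = filter([1 0.3], [1 -0.8 0.5], randn(2778,1));  % placeholder (50 s)
dat = iddata(y, [], Ts);                % output-only (no input) data
m   = armax(dat, [3 2]);                % na = 3, nc = 2, as in the text
Hz  = tf(m.C, m.A, Ts);                 % noise-to-output transfer function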
Fig. 3 Model output and real firing rate

Fig. 4 Predicted firing rate and measured data in a 10-second interval

Fig. 5 The step responses of the discrete-time system and the simplified continuous-time system

From a 50-second segment of the firing rate data, with arbitrarily chosen orders of na = 3 and nc = 2, the discrete-time transfer function of the ARMA model was identified as:

H_ARMA(z) = (z^2 + 0.03146 z - 0.8969) / (z^3 - 1.102 z^2 - 0.6824 z + 0.7996)    (3)

The power spectra of the model output and the real data are compared in Figure 7.

Fig. 6 The firing rate and the power spectrum in light anesthesia with external (acoustic) stimulation. The spontaneous activity becomes more dominant than in deep anesthesia

III. RESULTS

To investigate the impact of different levels of anesthesia on the auditory system and to characterize hearing-damage-induced effects on the auditory system from a system point of view, the basic system parameters, namely the delay and response speed for the ARX model and, for the ARMA model, the anesthesia depth indicator α as well as the parameter β (the distance measure between the model output and the real firing rate), were extracted and compared among different anesthesia levels and healthy and unhealthy auditory systems. The parameters α and β are defined as:

α = P_light / P_deep,    β = P_model / P_realdata

where P represents the power of the firing rates. The model identification in Section II was performed in one simulation. If the simulation is run several times, the model parameters are random variables. To obtain averaged system parameters, the simulation was run 50 times, and the mean value and variance of the system parameters of interest under various conditions were calculated. In Table 1, the ARMA model parameters α and β are compared for distinct anesthesia levels and conditions of the auditory system. From Table 1, we can see that the output of the ARMA model represents the real firing rate well in a statistical sense (β is approximately equal to 1 for all cases, and the α parameters derived from model outputs and from the real firing rate are similar). On the other hand, an obvious difference in α can be found between the auditory systems with and without hearing damage: the α of the damaged auditory system is much larger than that of the undamaged system. For the ARX model, the delay, gain and pole of the simplified transfer functions of the normal/damaged auditory system in deep/light anesthesia were extracted and tabulated in Table 2.
Fig. 7 Comparison of the power spectra (averaged over 50 simulations) of the ARMA model output and the spontaneous firing rate
Fig. 8 Step responses of the ARX system in light/deep anesthesia and with/without hearing damage
Table 1 α and β of the undamaged and damaged auditory systems

                          Normal AS        Damaged AS
β (deep anesthesia)       0.9265±0.152     none
β (light anesthesia)      1.028±0.303      0.8586±0.13
α of model output         6.4336±2.16      11.9435±2.16
α of real data            5.6592           12.5148
Obviously, the gain of the transfer function is not meaningful for analysis because of its relatively large variance. From Table 2, we can see that the levels of anesthesia and hearing damage do not have a significant impact on the system delay, while the opposite holds for the system poles. For a stable LTI system, the position of the system poles determines the settling time and the response speed to an outside stimulus. From Table 2, we found that the response speed is slower and the settling time longer for the system in light anesthesia without hearing damage, while the hearing-damaged system performs similarly to the normal system in deep anesthesia.

Table 2 The delay, gain and poles of the ARX model transfer functions in deep/light anesthesia and with/without hearing damage
            Normal, light    Normal, deep     Damaged, light
delay (s)   0.65±0.079       0.624±0.074      0.63±0.087
gain        0.0164±0.013     0.0263±0.03      0.022±0.03
pole        -1.94±0.34       -2.58±0.4        -2.8±0.46
IV. CONCLUSIONS
In this paper, the impact of two levels of anesthesia on the auditory system and the mechanism of hearing damage were investigated from the system point of view, and several interesting findings were highlighted.
• First, a new and simple model structure for the auditory system was presented, which combines an ARX model representing the auditory system with external (acoustic) stimulation and an ARMA model representing the spontaneous activity (no inputs) of neurons in the primary auditory cortex.
• Second, our results showed that the impact of anesthesia and other conditions can be modeled by the weighting α.

The current research results are based on the analysis of limited anesthesia levels (two stages) and some basic parameters of a first-order system (ARX model). Future work will focus on the analysis of the system under more sophisticated anesthesia and on a higher-order system that is the cascade of 2 or more first-order subsystems; that is, the whole transmission channel of the auditory system will be separated into several parts, making it possible to figure out which part of the transmission channel is actually influenced by the levels of anesthesia.

REFERENCES
1. Borst M, Knoblauch A, Palm G (2004) Modeling the auditory system: preprocessing and associative memories using spiking neurons. Neurocomputing 58-60:1013-1018
2. Dayan P, Abbott LF (2001) Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge, MA, USA
3. Ljung L, Söderström T (1983) Theory and Practice of Recursive Identification. MIT Press, Cambridge, MA, USA

Corresponding author:
Author: Le Yi Wang
Institute: Department of Electrical and Computer Engineering, Wayne State University
Street: 5050 Anthony Wayne Dr.
City: Detroit
Country: USA
Email: [email protected]
Cortical Excitability Changes after Repetitive Self-regulated vs. Tracking Movements of the Hand
S.B. Godfrey1, P.S. Lum1,2, C.N. Schabowsky1, and M.L. Harris-Love2
1 The Catholic University of America, Washington, D.C., USA
2 National Rehabilitation Hospital, Washington, D.C., USA
Abstract— In stroke rehabilitation, the optimal training parameters for re-gaining impaired hand function are unknown. Examining cortical reorganization in normals during different training paradigms could aid in developing these optimal parameters. Previous work using imaging techniques has shown differences in brain activation between self-regulated movements (i.e., movement timing and amplitude determined by the subject) and tracking movements (i.e., movement timing and amplitude goals specified) of the index finger in both normals and stroke patients. The goal of this study is to use TMS to compare single-session modulation of corticospinal excitability and short-interval intracortical inhibition (ICI) induced by two different training methods: repetitive self-regulated and repetitive tracking movements in a finger flexion/extension task that mimics a functional grasp and release. No significant changes in resting motor threshold (RMT) or recruitment curve were found. In the extensor digitorum communis (EDC) muscle, there tended to be a decrease in ICI after self-regulated practice and an increase after tracking practice when ICI was measured using a conditioning stimulus (CS) intensity of 60% RMT. At a CS intensity of 80% RMT, there tended to be a decrease in EDC ICI after both types of tasks. In the flexor digitorum superficialis (FDS) muscle, there was a significant interaction effect, as FDS ICI remained stable or slightly increased after self-regulated practice and decreased after tracking practice at CS intensities of 60 and 80% RMT. The training seems to have an opposite effect on the FDS and EDC, perhaps because the EDC is a prime mover. This implies that precision training could produce an increase in inhibition in the primary muscle and a decrease in the antagonist.
Keywords— Transcranial Magnetic Stimulation, Rehabilitation Robotics, Stroke.
I. INTRODUCTION

Approximately 795,000 people in the United States experience strokes annually; of these, over 600,000 are first attacks [1]. Between 50% and 70% regain functional independence, while 15-30% are permanently disabled. Highly repetitive, task-specific movements have proven an effective method of regaining functional use of the upper extremities, but are extremely labor- and cost-intensive [2]. To ease this burden, robot-aided therapy was developed. Current robotic
modalities include passive movements, robot-assisted movements, bimanual movements, or a combination of approaches. The most effective and efficient means of retraining functional use of the hand is still unclear. Some recent research has focused on the effects of practicing gross or simple motor tasks versus complex or precision motor tasks [3-7]. Possible differences in the neural effects of these two approaches have not been fully elucidated. Transcranial magnetic stimulation (TMS) can be used to examine the size of muscle representations in the motor cortex, corticospinal excitability, and intracortical or interhemispheric inhibition. Previous work in animals has shown that the training of specific muscles by learning a skilled movement results in an increased representational area of these muscles in the motor cortex [8-9]. This work has been extended to humans in several ways: in a comparison of pianists and non-musicians, pianists showed more prominent activation of the primary motor cortex during training of a tapping task than non-musicians [10]. In a study of the representation of the first dorsal interosseous (FDI) of Braille proofreaders, the muscle representation was significantly larger immediately after a six-hour reading shift than after two days off work, showing rapid modulation of the motor representation [11]. We are interested in the cortical changes related to skilled versus non-skilled use of hand muscles. A single session of non-directed piano practice produced similar but less prominent cortical changes compared to a single session practicing a specific piano sequence, implying that skilled movements result in greater cortical changes than non-skilled movements [12]. Hand tracking tasks have been studied primarily using the index finger with the FDI as the principal muscle [4-7]. Using fMRI, greater activation of M1 is seen with tracking when compared to producing a similar movement without tracking [4-5]. In an isometric force tracking task using TMS, the more difficult tracking task was found to result in higher MEP amplitudes without affecting the silent period [6]. In dynamic force and position tracking tasks, dynamic tracking resulted in greater MEP amplitudes and shorter silent periods than static controls [7]. We have expanded on these studies by comparing kinematically similar tasks: a subject first produces a roughly
sinusoidal waveform through opening and closing the fingers of the hand at a self-selected speed and amplitude. In a second trial, the subject is asked to track the movement of a visual target that is programmed to perform the same waveform produced by the subject in the first trial. To examine the neural effects of motion and tracking trials, we used TMS to determine cortical excitability (via MEP amplitude) as well as intracortical inhibition.
II. MATERIALS AND METHODS

A. Overall Study Design

Ten normal subjects (ages 19-45, mean of 26.1, standard deviation of 7.8), 4 females and 6 males, participated in this study. The subjects were tested on two days separated by at least a one-week washout period. On each day, subjects underwent TMS evaluation for roughly one hour before and after a robotic training session.

B. Transcranial Magnetic Stimulation

Before and after each training session, measurements of corticospinal excitability and intracortical inhibition were made using transcranial magnetic stimulation (TMS). Recording surface electromyography (EMG) electrodes were placed over the extensor digitorum communis (EDC) and flexor digitorum superficialis (FDS) and tested using the methods found in [13]. The hot spot of each subject was then located by delivering test pulses at multiple scalp locations and determining the optimal location for eliciting a response in both muscles. The location of the hot spot was marked on the scalp or on a close-fitting cap to ensure consistency. The resting motor threshold (RMT) was then determined by searching for the lowest intensity that produced a motor evoked potential (MEP) of at least 50 µV in at least five out of ten trials. A recruitment curve (RC) was then recorded, consisting of ten pulses at 90-150% of RMT in increments of 10%. The ICI procedure consisted of ten single pulses to determine the average amplitude of the uninhibited test MEP. We chose a test stimulus intensity that would reliably elicit MEPs of 300-500 μV peak-to-peak amplitude in the target muscles. Three sets of ten paired pulses were then delivered, with the conditioning pulse at 60, 70, or 80% of RMT. The interstimulus interval was 3 ms. Following the training procedure, subjects underwent a second TMS session where RC, ICI, and RMT were re-measured, in that order. The same hot spot was used as before the procedure, and electrodes remained in place throughout testing and training.
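For readers unfamiliar with how the paired-pulse data reduce to the ICI values plotted later, the short sketch below computes the conventional short-interval intracortical inhibition measure: the mean conditioned MEP amplitude expressed as a percentage of the mean unconditioned test MEP. The paper does not spell out its exact formula, so the percentage convention, the function name, and the sample numbers are our assumptions.

import numpy as np

def ici_percent(test_meps, conditioned_meps):
    # Conditioned MEP as a percentage of the unconditioned test MEP:
    # smaller values indicate stronger intracortical inhibition.
    return 100.0 * np.mean(conditioned_meps) / np.mean(test_meps)

# Example: ten test pulses and one set of ten paired pulses (CS at 60% RMT),
# peak-to-peak amplitudes in microvolts (illustrative values only).
test = np.array([410, 385, 455, 390, 420, 405, 440, 395, 430, 415])
cond = np.array([160, 145, 190, 150, 170, 155, 180, 140, 175, 165])
print(f"ICI at 60% RMT CS: {ici_percent(test, cond):.0f}% of test MEP")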
Fig. 1 HEXORR interface with the human hand in the closed (top) position and open (bottom) position

C. The Device

A robot, HEXORR (Hand EXOskeleton Rehabilitation Robot), was used in both training sessions. Each subject's right hand was strapped into the device using a stabilizing strap across the palm, as well as straps across the proximal and intermediate phalanges of the fingers and at the distal phalanx of the thumb. The four fingers are controlled as one unit in HEXORR and the thumb is controlled separately. The subject's forearm rested on a platform of appropriate height to hold the wrist in neutral. HEXORR was also sized for each subject, accommodating different finger lengths and hand sizes. Through proper sizing, all subjects achieved full flexion and extension range of motion (ROM); Figure 1 shows the position of the fingers and thumb at each extreme. During operation, the device compensates for its own weight and for the effect of both static and viscous friction, so the subject is only required to produce the forces necessary to move the weight of their own hand. Also, the thumb component was driven by a slave mode in which its movement mirrored that of the fingers.

D. Training Protocol

For the first session on HEXORR, subjects were instructed to open and close the hand slightly less than the full range of motion (ROM) at a steady, comfortable pace. This
session was dubbed self-regulated practice (SRP). The subject was presented with a visual interface that displayed a blue ball on a yellow background. The ball rose on the screen as the subject opened the hand and fell as the subject closed it. During the training, the thumb was moved in slave mode to follow the movement of the fingers, and the fingers were provided with gravity compensation to offset the weight of the robot, as well as static and dynamic friction compensation. Once the subject had completed 50 full repetitions, the subject was stopped and feedback was given, both to limit differences between SRP and TP and to provide motivation for the subject. The subject received a score comparing their average frequency to an unknown target frequency of 1/2 hertz, with 100% being 1/2 hertz and higher or lower scores meaning a faster or slower frequency, respectively. The subject was then instructed to repeat the procedure, altering their speed if necessary to attempt a better score. A total of 6 trials of 50 movements each were completed. In the second training session, tracking practice (TP), subjects were asked to track a ball on screen. A similar visual interface was presented: a white ball appeared on the screen moving up and down while the subject tracked it with his or her own blue ball (Fig. 2). The subject was presented with his/her own trajectory from the previous session but was only informed that the tracking ball could have varying speed and displacement. Subjects tracked all 6 trials of 50 movements and received a score related to their error. To calculate the score, the root mean square (RMS) tracking error (the RMS difference between the actual trajectory and the target trajectory) was divided by a normalization value (the RMS difference between the closed-hand starting position and the target trajectory), expressed as a percentage, and subtracted from 100, so that a score of 100% indicates no tracking error.
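Written out, the tracking score is a one-line computation. The sketch below is one reading of the prose above; the function and variable names are ours, and the two trajectories are assumed to be sampled at matching time points.

import numpy as np

def tracking_score(actual, target, start_position):
    # RMS tracking error, normalized by the RMS distance between the
    # closed-hand start position and the target, subtracted from 100
    # so that 100% means perfect tracking.
    actual, target = np.asarray(actual), np.asarray(target)
    rms_err = np.sqrt(np.mean((actual - target) ** 2))
    norm = np.sqrt(np.mean((start_position - target) ** 2))
    return 100.0 * (1.0 - rms_err / norm)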
III. RESULTS AND DISCUSSION

All subjects tolerated the TMS and robot sessions well and completed both days of training. The average resting motor thresholds (SRP pre and post training, then TP pre and post training) were 44.2, 44.0, 43.5, and 43.4, respectively. For both muscles, the recruitment curves were similar both between pre/post timepoints and between training tasks. Some subjects showed a difference in baseline inhibition between SRP and TP. To reduce this bias, only those subjects with a change in baseline inhibition of 20% or less were included in the analysis. In the EDC, with a conditioning stimulus of 60% RMT, there tended to be an interaction effect between task and test session (Fig. 3A,B; p=0.11) such that EDC ICI tended to decrease after SRP and increase after TP. At a CS intensity of 80% RMT, there tended to be a main effect of test session (p=0.12) such that EDC ICI tended to decrease after both training tasks. In the FDS, there was a significant interaction effect between training task and test session (Fig. 3C,D; p<0.05) such that FDS ICI remained unchanged or slightly increased after SRP and decreased after TP.

Fig. 2 Graphical user interface (GUI) for the TP sessions. The blue ball is controlled by the subject's finger movements

The EDC ICI at 60% RMT tended to increase after TP. The robot's weight compensation algorithms do not account for the weight of the hand in the device, slightly biasing the training toward EDC usage. The difference seen in EDC and FDS behavior could be a result of the EDC being the prime mover for these tasks. By having subjects track their own movements, which had been recorded during the previous session, kinematics between the two days were controlled to be as similar as possible. However, a sampling rate problem resulted in the tracking task occurring at a speed roughly 1.7 times slower than the original motion; for example, a 0.5 Hz SRP movement would be tracked at 0.3 Hz. While this discrepancy produces an unknown effect, the TP task still requires more precision than, and maintains a similar trajectory to, the SRP task. These results imply that precision training could produce an increase in inhibition in the primary muscle and a decrease in the antagonist. In the future, we plan to reproduce this experiment with the following modifications: first, by adjusting the sampling rate of the tracking program, we can ensure that the kinematics between the self-selected and tracking tasks remain as similar as possible; second, by using the robot to produce a flexion force, we can create an extensor-loaded task where concentric contractions are required to open the hand and eccentric contractions are required to close the hand.
Fig. 3 Comparison of pre and post testing sessions in EDC SRP (A) and TP (B) and FDS SRP (C) and TP (D). Each panel plots ICI (Pre vs. Post) at conditioning stimulus intensities of 60, 70, and 80% RMT; panel titles: EDC ICI Self-Regulated Practice (A), EDC ICI Tracking Practice (B), FDS ICI Self-Regulated Practice (C), FDS ICI Tracking Practice (D)
ACKNOWLEDGMENT

Funding was provided by the U.S. Army Medical Research and Materiel Command, Project Number 09354004, PI: Healton.
REFERENCES
1. Lloyd-Jones D, Adams R J, Brown T M et al. (2010) Heart Disease and Stroke Statistics—2010 Update: A Report From the American Heart Association. Circulation 121:e1-e170.
2. Prange G B, Jannink M J A, Groothuis-Oudshoorn C G M et al. (2006) Systematic review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke. J Rehabil Res Dev 43:171-184. DOI: 10.1682/JRRD.2005.04.0076
3. Carey J R, Bhatt E, and Nagpal A (2005) Neuroplasticity Promoted by Task Complexity. Exerc Sport Sci Rev 33:24-31.
4. Carey J R, Greer K R, Grunewald T K et al. (2006) Primary Motor Area Activation during Precision-Demanding versus Simple Finger Movement. Neurorehabil Neural Repair 20:361-370. DOI: 10.1177/1545968306289289
5. Carey J R, Durfee W K, Bhatt E et al. (2007) Comparison of finger tracking versus simple movement training via telerehabilitation to alter hand function and cortical reorganization after stroke. Neurorehabil Neural Repair 21:216-232. DOI: 10.1177/1545968306292381
6. Pearce A J and Kidgell D J (2009) Corticomotor excitability during precision motor tasks. J Sci Med Sport 12:280-283. DOI: 10.1016/j.jsams.2007.12.005
7. Pearce A J and Kidgell D J (2010) Comparison of corticomotor excitability during dynamic and static tasks. J Sci Med Sport 13:167-171. DOI: 10.1016/j.jsams.2008.12.632
8. Plautz E J, Milliken G W, and Nudo R J (2000) Effects of repetitive motor training on movement representations in adult squirrel monkeys: role of use versus learning. Neurobiol Learn Mem 74:27-55.
9. Remple M S, Bruneau R M, VandenBerg P M et al. (2001) Sensitivity of cortical movement representations to motor experience: evidence that skill learning but not strength training induces cortical reorganization. Behav Brain Res 123:133-141.
10. Hund-Georgiadis M and von Cramon D Y (1999) Motor-learning-related changes in piano players and non-musicians revealed by functional magnetic-resonance signals. Exp Brain Res 125:417-425.
11. Pascual-Leone A, Wassermann E M, Sadato N, and Hallett M (1995) The role of reading activity on the modulation of motor cortical outputs to the reading hand in Braille readers. Ann Neurol 38:910-915.
12. Pascual-Leone A, Nguyet D, Cohen L G et al. (1995) Modulation of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine motor skills. J Neurophysiol 74:1037-1045.
13. Perotto A O, Delagi E F, Iazzetti J, and Morrison D (2005) Anatomical Guide for the Electromyographer: The Limbs and Trunk. Charles C. Thomas, Springfield.

Author: Michelle Harris-Love
Institute: National Rehabilitation Hospital
Street: 102 Irving St NW
City: Washington DC
Country: USA
Email: [email protected]
What the ENT Wants in the OR: Bioengineering Prospects
D.A. Depireux1 and D.J. Eisenman2
1 Institute for Systems Research, U of Maryland, College Park, MD 21042
2 Department of Otorhinolaryngology-Head & Neck Surgery, U of Maryland, Baltimore, MD 21201
Abstract— Dr Robert Fischell has repeatedly suggested that the medical doctor in the operating room might be the best inspiration for the development of new tools and techniques by biomedical engineers. There has been a lot of progress recently in the field of otorhinolaryngology, particularly with respect to cochlear implants (CI). Yet there is much room for innovation to help access and treat diseases of the cochlea, a very small organ inside the hardest bone of the human body. For some common conditions such as tinnitus, there are particularly few treatments available. In this talk, the authors (an MD/ENT and a PhD) will go over some ideas that have arisen from their collaborations over the last few years, such as: the use of optical coherence tomography as a teaching tool for the implantation of cochlear prostheses, by allowing the surgeon to observe the organ of Corti before a cochleostomy (drilling of the cochlea for implantation of the CI); biodegradable polymers in the middle ear as a method of continuous and controllable delivery of drugs, such as antibiotics or steroids, into the cochlea, or to promote healing of an eardrum graft with little scarring; and a practical implantable cochlear promontory stimulator that might provide the kind of cochlear stimulation needed to reduce tinnitus in certain cases. The talk will demonstrate that the organ of hearing still provides a very fertile ground for innovative research of high relevance to human diseases and syndromes that often result in deafness and tinnitus. More generally, much progress is expected in the near future in all of the subspecialties within the field of otorhinolaryngology: balance and vestibular function, rhinology, plastics and reconstructive surgery, head & neck oncology, and laryngology. Keywords— Hearing, Implants, BioEngineering, Electrical stimulation, Drug delivery. Abbreviation— ENT: Ear-Nose-Throat doctor, i.e. otorhinolaryngologist.
I. INTRODUCTION

Biomedical engineering has made tremendous progress recently, and this progress shows no sign of abating. The most tangible success in the last ten years has quite possibly been made in the fields of hearing and balance, for which microsurgery, low-power signal processing, drug delivery, imaging, electrical stimulation and other advances have allowed the prevention of hearing loss, the restoration of hearing through cochlear implants and the attenuation of the
detrimental effects of dizziness from a diseased vestibular organ, to name but a few. In spite of all these advances, there is still a lot of room for progress in the field of otorhinolaryngology. Up to ninety percent of hearing loss occurs due to destruction of hair cells (where the movements of the fluids that bathe the cochlea are translated into a release of neurotransmitters) or auditory nerve cells, the first neurons that relay the information from the hair cells of the cochlea to the brain. Yet scientific research on the inner workings of the cochlea and the vestibular system has been hampered by the fact that the inner ear is situated deep in the temporal bone, the hardest and densest bone of the human body. The location of the organ that transduces the movements of the fluids that fill the inner ear into electrical impulses to be interpreted by the brain is therefore hard to access. For instance, at this point it is impossible to directly observe movements of the basilar membrane of the cochlea in response to typical environmental sounds without majorly disturbing the entire organ or removing it from the body. Some recent progress in medicine has made it all the more pressing to further develop ways to prevent hair cell loss. Hearing loss, tinnitus and vertigo or dizziness are listed as side-effects of hundreds of medications. Drugs that are commonly used with great success to treat cancer (carbo- and cis-platin) and gram-negative bacterial infections (aminoglycosides) are known to destroy, at clinical levels, most of the active amplifiers (the outer hair cells) of the cochlea. This may cause up to a 60 dB hearing loss. The chronic use of painkillers can lead to an auto-immune reaction that results in the outer hair cells being wiped out. As our population's life expectancy increases towards 80, age-related presbycusis and Ménière's disease often result in severe hearing loss. Soldiers are better equipped than ever to survive combat injuries, yet little can be done about how fragile the inner ear is; over 50% of soldiers who have served in Iraq have been within 150 ft of an Improvised Explosive Device, and the resulting blast wave injury often results in hearing damage. This is compounded by the noise exposure received by tank and armored vehicle drivers and passengers, exposure to gunshots, etc. The Department of Veterans Affairs has
stated that hearing damage is the No. 1 disability in the war on terror, and experts say the true toll could take decades to become clear. The number of servicemen and servicewomen on disability because of hearing damage is expected to grow about 18 percent a year (as it has since 2000), with payments totaling over $1.1 billion annually by 2011, according to an analysis of VA data by the American Tinnitus Association. Published studies found that, of the blast-exposed personnel, at least half reported tinnitus as a long-term effect [1,2]. Indeed, the ear is the organ that is most vulnerable to damage by blast overpressure [3]. Finally, vestibular schwannomas are benign slow-growing tumors that develop where the eighth cranial nerve (comprising the balance and auditory nerves) leaves the inner ear. They account for 8% of tumors in the skull. Unfortunately, they are often detected too late, and surgical intervention requires the resection of the eighth nerve, resulting in total deafness and lack of vestibular information on the operated side. This resection also often results in tinnitus from that same side (even though the cochlea has been disconnected). Restoration of some hearing in these patients is possible through an auditory midbrain implant [4,5], though it is not known how to permanently reduce their tinnitus.
Fig. 1 A schematic of the inner ear, showing the organ of hearing (cochlea) and the vestibular organ. Picture adapted from http://medicaldictionary.thefreedictionary.com

II. NEW TECHNOLOGIES: RECENT DEVELOPMENTS

In the following, we mention some of the new technologies and their possible use in the context of treating patients with hearing loss and/or tinnitus. This is by no means an exhaustive list, but rather a list that arose from discussions over the years between the authors.

A. Optical Coherence Tomography

Optical coherence tomography (OCT) is a form of medical imaging in which three-dimensional images are reconstructed from the transmission or scattering of light emitted from a probe [6]. While its potential utility in imaging cochlear structures has been considered, as far as we know it has yet to see clinical use in an ENT setting. Fig. 2 shows OCT imaging of the cochlea of a human cadaver using a commercial OCT probe from LightLab Imaging [7]. In this case, the probe was inserted in the scala vestibuli of the cochlea. For an actual cochlear implant, the electrode array would preferably be inserted in the scala tympani, which is to the lower right of the picture, below the "b". Several avenues of research seem worthy of consideration. From a purely scientific point of view, a lot has yet to be learned about the movements of the basilar and tectorial membranes during sound stimulation. While the cochleostomy required to insert the probe would likely interfere somewhat with the normal functioning of the cochlea, it would be worth observing the relative movements of the cochlear structures during sound stimulation in the live cochlea, particularly in response to traumatic sounds known to induce hearing loss and tinnitus.
Fig. 2 OCT image from a cadaver (courtesy James Lin): the probe (centered in the middle of the rings above the t) was placed via a cochleostomy into the scala vestibuli. Clearly visible are the tectorial membrane (t) and the basilar membrane (b)

From the clinical side, OCT could be used as a teaching tool to visualize, as much as possible, the cochlear structures
prior to cochleostomy. Also, meningitis accounts for a large percentage of the children who become candidates for cochlear implantation. Unfortunately, bony growth in the cochlea following meningitis often creates problems for or prevents implantation. High field MRI is usually able to detect ahead of time the presence of tissue growth in the cochlear duct. It would be interesting to see whether OCT is capable of imaging the presence of the spongy tissue that precedes bone growth earlier than MRI, thereby shortening the waiting time for a pediatric cochlear implant.

B. Biodegradable Polymers for Drug Delivery

About 4000 cases of "Sudden Sensorineural Hearing Loss" or "Sudden Deafness" occur every year in the United States. The current treatment, which works in a minority of cases, involves getting cortico-steroids (e.g. dexamethasone) into the cochlea. The clinical procedure consists of having the patient lie supine, filling the middle ear with the steroid, and waiting for 30 minutes. As soon as the patient stands up and swallows, the drug leaves the middle ear through the eustachian tube (see Fig. 1). Research points to the need for a longer residence time of the drug in the middle ear, from where it diffuses into the inner ear via the round window membrane, for the treatment to be more effective and, in particular, for the drug to have time to diffuse through the entire length of the cochlea. A similar problem of drug delivery arises in the cases of Ménière's disease and other balance disorders, for which a similar method is used to deliver drugs that will affect the hair cells in the vestibular system, thereby preventing erroneous information from reaching the brain and causing vertigo. This situation has very recently (this week, as we write this chapter) been addressed by Otonomy, which has been approved for a phase 1b clinical trial to use a slow-release polymer to deliver dexamethasone to the inner ear of patients with unilateral Ménière's disease. However, we believe there is still much room for progress. For instance, many of the gels used in bioengineering research are polyesters that hydrolytically degrade into acidic products, which would damage the sensitive epithelium that lines the middle ear. Recently developed biomaterials, however, do not suffer from this [8]. Furthermore, since the curative effects of steroids and aminoglycosides in the inner ear depend on peak concentration, it would be better to have a polymer that can be degraded in pulses. Biodegradable polymers developed in the Fisher lab at the U of Maryland do just that, and one can easily imagine modulating the drug delivery by flashing a light through the eardrum. Finally, cochlear implantation requires a middle ear free of bacterial infection for a period of a few weeks prior to surgery. While evidence points to the need for infants to be implanted earlier rather than later for the implant to help in
language acquisition and development, infants, for reasons sometimes associated with their hearing loss (e.g. middle ear malformations), have chronic otitis media, which can delay surgery for months. A biodegradable gel loaded with the appropriate antibiotics might prove successful in controlling bacterial growth long enough that an implantation can be considered without the risk of an infection invading the cochlear space and possibly causing postoperative meningitis. In a related vein, there is a need to develop a good middle ear packing material that could support an eardrum graft and promote healing of the graft while avoiding scarring of the middle ear. As mentioned in the introduction, many of our soldiers come back from Iraq and Afghanistan having been exposed to an IED. This often results in a destroyed eardrum, which can be replaced, the issue being to keep the graft in place long enough. Ideally, the packing material that would provide a scaffolding of sorts for the graft would be easy to instill into the middle ear, preferably able to conform to the contours of the space; be absorbed after 2-3 weeks (or possibly be removable by using a light source, as mentioned above); not lead to middle ear fibrosis, though promote fibrosis of the graft to the eardrum; and maybe have some antibiotic properties.

C. Cochlear Promontory Stimulator for the Treatment of Tinnitus

The basal turn of the cochlea forms a "promontory" protruding into the middle ear of humans and most mammals. Electrical stimulation of this promontory was used at some point prior to cochlear implant surgery: if cochlear promontory stimulation elicited some sound perception, it was deemed that the candidate was more likely to benefit from the cochlear implant. Tinnitus is the perception of sounds when no external source is present. These sounds can be constant, high-pitched and loud enough to be completely debilitating. In most cases, tinnitus is associated with hearing loss, and it is thought that the first event that leads to most cases of tinnitus is damage to some hair cells in the cochlea. Indeed, the majority of cochlear implant candidates suffer from tinnitus, in addition to their severe hearing loss. Some studies for the evaluation of patients for cochlear implantation by promontory stimulation noticed that the stimulation often resulted in a reduction and sometimes a suppression of tinnitus [9,10]. More recent research, independent of cochlear implant surgery considerations, has reached some of the same conclusions, though using only a single stimulation paradigm with an electrode placed over the round window membrane. In all promontory and round window stimulation studies we are aware of, the electrode placement was temporary
(through the eardrum) and the stimulation parameter space was only briefly explored. On the other hand, it is well known that following implantation, most cochlear implant users experience an important reduction in their tinnitus, and some complete suppression [11]. Interestingly, the tinnitus is reduced when the cochlear implant is turned on, even at levels not eliciting a conscious perception of a sound. Recently, a study involving patients with unilateral deafness and severe unilateral tinnitus showed that a cochlear implant greatly reduced the severity of their tinnitus [12]: based on a standard questionnaire, the severity of their tinnitus dropped from 8.5/10 to 2.5/10, a reduction that persisted even after 24 months. While this study is very promising, the fact remains that a cochlear implant is a major and irreversible surgery. Many factors point to the fact that tinnitus suppression arises from re-establishing a background level of constant, spontaneous activity in the auditory nerve. It would therefore seem that the development of a low-power, chronic electrical promontory stimulator, possibly driven externally through an electromagnetic transduction system, might bring the advantage of promontory stimulation without the disadvantages of a full cochlear implant. A chronic implant could also allow for a more systematic, possibly patient-driven, exploration of the best set of parameters that would allow the optimal reduction in tinnitus.
III. CONCLUSIONS

In this paper, we have attempted to present some ideas that arose from discussions between the authors over the years. Because of space limitations, we have restricted our discussion to three main ideas, but modern technologies point to many more possibilities. For instance, many of the issues associated with current cochlear implants might be avoided by having a more focal (localized) stimulation in the cochlea. This should be achievable by the use of light stimulation instead of the current electrical stimulation. Some progress has been made in this direction [13], though the development of a multi-optical-fiber array still seems to be some ways off. This paper is by no means an exhaustive list of what recent progress in BioEngineering can contribute to the ENT; rather, it is meant as an illustration of the kind of ideas that can arise when a scientist working in basic research (DAD) and a medical doctor working with patients (DJE) talk to each other and freely exchange ideas.
REFERENCES
[1] Cave KM, Cornish EM, and Chandler DW. Blast injury of the ear: clinical update from the global war on terror. Military Medicine 172:726-730, 2007.
[2] Helfer TM, Jordan NN, and Lee RB. Postdeployment hearing loss in U.S. Army soldiers seen at audiology clinics from April 1, 2003, through March 31, 2004. American Journal of Audiology 14:161-168, 2005.
[3] Xydakis MS, Bebarta VS, Harrison CD, Conner JC, Grant GA, and Robbins AS. Tympanic-membrane perforation as a marker of concussive brain injury in Iraq. The New England Journal of Medicine 357:830-831, 2007.
[4] Lim HH, Lenarz M, Lenarz T. Auditory midbrain implant: a review. Trends Amplif. 2009 Sep;13(3):149-80.
[5] Colletti V, Shannon RV, Carner M, Veronese S, Colletti L. Progress in restoration of hearing with the auditory brainstem implant. Prog Brain Res. 2009;175:333-45.
[6] Chen Y, Bousie E, Pitris C, Fujimoto JG. "Optical Coherence Tomography: Introduction and Theory," in Handbook of Biomedical Optics, D.A. Boas, C. Pitris, N. Ramanujam, Eds., Taylor & Francis Books.
[7] Lin J, Staecker H, Jafri MS. Optical coherence tomography imaging of the inner ear: a feasibility study with implications for cochlear implantation. Ann Otol Rhinol Laryngol. 2008 May;117(5):341-6.
[8] Moreau JL, Kesselman D, Fisher JP. "Synthesis and properties of cyclic acetal biomaterials." J Biomed Mater Res A. 2007 Jun 1;81(3):594-602.
[9] Cazals Y, Negrevergne M, Aran JM. "Electrical stimulation of the cochlea in man: hearing induction and tinnitus suppression." J Am Audiol Soc. 1978 Mar-Apr;3(5):209-13.
[10] Rothera M, Conway M, Brightwell A, Graham J. "Evaluation of patients for cochlear implant by promontory stimulation. Psychophysical responses and electrically evoked brainstem potentials." Br J Audiol. 1986 Feb;20(1):25-8.
[11] Ruckenstein MJ, Hedgepeth C, Rafter KO, Montes ML, Bigelow DC. Tinnitus suppression in patients with cochlear implants. Otol Neurotol. 2001 Mar;22(2):200-4.
[12] Van de Heyning P, Vermeire K, Diebl M, Nopp P, Anderson I, De Ridder D. Incapacitating unilateral tinnitus in single-sided deafness treated by cochlear implantation. Ann Otol Rhinol Laryngol. 2008 Sep;117(9):645-52.
[13] Izzo AD, Walsh JT Jr, Ralph H, Webb J, Bendett M, Wells J, Richter CP. Laser stimulation of auditory neurons: effect of shorter pulse duration and penetration depth. Biophys J. 2008 Apr 15;94(8):3159-66. Epub 2008 Jan 11.
Corresp Author: Didier A Depireux
Institute: Institute for Systems Research
Street: University of Maryland
City: College Park, MD 20742
Country: USA
Email: [email protected]
An in vitro Biomechanical Comparison of Human Dermis to a Silicone Biosimulant Material
I.D. Wing1,2, H.A. Conner1, P.J. Biermann1, and S.M. Belkoff2,3
1 Johns Hopkins University / Applied Physics Laboratory, Laurel, MD, USA
2 Johns Hopkins University / Dept. of Mechanical Engineering, Baltimore, MD, USA
3 Johns Hopkins Medical Institutions / Dept. of Orthopaedic Surgery, Baltimore, MD, USA
Abstract— A major quality of life concern for designers of prosthetics is the look and feel of the limb. Even up close, the limb should look and feel natural. This means covering the underlying mechanical components with a “cosmesis”. This cosmesis should match the skin tone of the patient and the material from which it is constructed should have similar mechanical properties to human skin. The objective of this study was to compare a candidate biosimulant material to cadaveric skin. Skin specimens were harvested from three donors furnished by the Maryland State Anatomy Board. In daily use, the cosmesis material would often be compressed against the hard backing of the case. Thus, to compare the two materials, indentors with eight different tip geometries were driven into samples supported by a flat metal plate. 1.5” square samples were compressed to 40% of their initial thickness at a rate of 0.1 mm/sec using a servohydraulic test machine. Force versus deflection curves were automatically reduced to stiffness and elastic modulus values for each trial using a custom MATLAB algorithm. Results of this study showed that the surrogate material was significantly more compliant than human skin (ANOVA, p<0.05). A secondary, perhaps more interesting result is an estimate of the compressive elastic modulus of human skin. While the properties of human skin in tension and penetration are well documented in the literature, there is a paucity of data on compression of skin against a hard surface. In the broader biomechanics context, this loading modality is observed frequently where skin abuts directly to bone, such as the hands, chest, face, and scalp. The wide range of tests performed in this study estimate the compressive elastic modulus of human skin at 6.8 +/- 2.0 MPa. Keywords— skin, compressive modulus, prosthetics, biomechanical testing, biosimulant.
I. INTRODUCTION

Advances in body armor, combat casualty care and battlefield trauma medicine have greatly improved the survivability of warfare in recent years. Injuries that were once almost certainly lethal are now survivable, but often at the cost of a limb. The result is that more soldiers are returning home, but bearing what has become the characteristic battle scar of the Global War on Terror – amputation [1]. These shifting injury patterns have led to a great deal of research into advanced prosthetics to help returning veterans live normal lives.
Generally, mechanical actuators that produce motion are concealed beneath a cosmesis designed to mimic the appearance of the user’s skin. One critical factor is the feel of the limb. In order to provide a convincing surrogate for skin, the cosmesis should approximate the mechanical properties of skin.
Fig. 1 A prototype limb system developed by JHU/APL and a silicone cosmesis developed by ARTech Laboratory

Skin is a structure composed of three layers: the keratinized epidermis; the fibrous, collagen-rich papillary layer of the dermis; and finally the deep reticular layer of the dermis, which contains a great deal of amorphous ground substance. Oomens et al. postulate that the response of skin in tension is dominated by fibrous structures of collagen and elastin in the dermis, while the compression response is largely dependent on the ground substance [2]. Measuring the mechanical properties of skin is a difficult task due to the wide variance in these quantities. Age, gender, and lifestyle choices can all play a role in the biomechanics of tissues, and the skin is no exception [3,4,5,6]. Skin is highly anisotropic – collagen bundles reinforce the skin along the direction of maximal tension in vivo [6]. Also, the skin can vary in thickness and properties depending on the location on the body where the sample is taken [3,4,5,6]. For example, skin from the sole of the foot will be very different than skin harvested from the underarm.
Much of the prior work on the penetration of skin is concerned with deep penetration through the skin and the tissues beneath, such as muscles, connective tissue, and internal organs [7,8,9,10,11]. Perhaps the most complete evaluation of the puncture of soft solids comes from Shergold and Fleck. Their experimental results agree with their proposed model for failure by puncture and suggest that silicone rubbers behave like mammalian skin [9,10]. However, there is insufficient literature available on the loading of sheets of biological materials in compression against a flat plate. This loading modality is relevant to prosthetics because it is important to understand the response of skin and the surrogate material with a hard backing, that is, the hard outer casing of the mechanical limb. In the broader biomechanics context, this area is interesting because there are many locations on the body where skin abuts directly against bone; the hands, chest, facial features and scalp are all examples. Data describing the behavior of skin in this situation are useful for researchers studying injuries to the skin. The present study investigates the mechanical response of a candidate cosmesis material (MED-4915, NuSil Technology LLC, Carpinteria, CA) and compares it directly to human skin with a series of indentor tests of varying size and shape. This phenomenological comparison gives a clear, concise set of metrics by which to compare the response of the two materials, while also providing an estimate of the compressive elastic modulus of cadaveric skin, a quantity not well documented in the literature.
II. MATERIALS AND METHODS

A pre-molded sample of NuSil MED 4915 was used as a source of experimental simulant material. Additionally, three cadaveric skin samples were obtained from the Maryland State Anatomy Board. Donor information is summarized in Table 1.

Table 1 Donor Information

Specimen #   Age at Death   Race/Gender   Sample Location    Cause of Death       Time since death at test
08-0390      65 yrs         B/M           Foot – Dorsal      Pancreatic Cancer    22 mos
09-1475      72 yrs         W/F           Forearm            Lung Cancer, COPD    2 mos
10-0025      47 yrs         W/M           Thigh – Anterior   Cerebral Ischemia    1 mo

A set of indentors was custom made for this project. A wide gamut of sizes was selected to represent a wide variety of objects that might be encountered in daily life, from knife edges to countertop edges to pencils. Two styles of indentors were used: wedges (right angles of radius 0.25 mm, 1.57 mm, 3.18 mm, 6.35 mm) and cylinders (diameters of 1.57 mm, 2.36 mm, 3.18 mm, 6.35 mm).

Fig. 2 Indentors, arranged by size

The surrogate material was cut into eight squares, 38 mm on a side, and then tested. Biological samples were cut away from larger samples, divided into eight 38 mm squares, and kept carefully hydrated until testing. Sample thickness was measured using a caliper at four points and averaged, then used as a baseline height for compression tests. Samples were tested on an MTS Bionix 858 servohydraulic test system (MTS, Eden Prairie, MN) with a 445 N single-axis load cell (Sensotec, Columbus, OH). Specimens were placed on a flat plate and the indentor was positioned such that it just touched the surface of the skin. During the test, samples were compressed at a rate of 0.1 mm/sec until they were 40% of their average initial thickness. Each sample was indented at three different locations for a total of 96 tests (four specimens, eight samples, three trials each). Time histories of axial force and axial displacement were recorded for later analysis.

Fig. 3 Schematic and photograph of indentor setup
III. RESULTS

A MATLAB (The Mathworks, Natick, MA) algorithm was developed to estimate stiffness and elastic modulus. The threshold values mentioned below were determined empirically based on visual observation of the slopes the computer chose. These values were selected by iteratively running the algorithm with slightly different parameters until suitable agreement was achieved between the computer-estimated slope and the actual slope as determined by a trained operator. First, the algorithm identifies the point on the force-deflection curve where the load time history passes 10% of the maximum load. Data is then re-zeroed to that point (assumed to be in the toe region of the curve). Next, the stiffness is defined as the slope of the line connecting 20% of the maximum load to 80% of the maximum load on the force-deflection curve. True stress and true strain were then computed using the following formulae:
$$\varepsilon_{\mathrm{eng}} = \frac{t_0 - d}{t_0}, \qquad \varepsilon_{\mathrm{true}} = \ln(1 + \varepsilon_{\mathrm{eng}})$$

$$\sigma_{\mathrm{eng}} = \frac{f}{\pi \cdot r^2}, \qquad \sigma_{\mathrm{true}} = (1 + \varepsilon_{\mathrm{eng}}) \cdot \sigma_{\mathrm{eng}}$$
where t0 = initial thickness, d = indentation depth, f = force applied, and r = indentor radius. A similar 20% - 80% procedure was then applied to estimate elastic modulus. Figure 4 shows a typical result.
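The reduction procedure is straightforward to reproduce. The sketch below (in Python rather than the authors' MATLAB) applies the 10% re-zeroing rule, the formulae above, and the 20-80% slope rule for a cylindrical indentor; the implementation details are our reconstruction, not the original code.

import numpy as np

def estimate_modulus(force, depth, t0, r):
    # Re-zero at the point where load first passes 10% of maximum
    # (assumed to lie in the toe region of the curve).
    i0 = np.argmax(force >= 0.1 * force.max())
    f = force[i0:] - force[i0]
    d = depth[i0:] - depth[i0]
    # True stress and strain from the formulae above.
    eps_eng = (t0 - d) / t0
    sig_eng = f / (np.pi * r ** 2)
    eps_true = np.log(1 + eps_eng)
    sig_true = (1 + eps_eng) * sig_eng
    # Elastic modulus: slope of the stress-strain curve between the 20%
    # and 80% load points. The printed strain convention decreases as
    # indentation grows, so the magnitude of the fitted slope is reported.
    lo = np.argmax(f >= 0.2 * f.max())
    hi = np.argmax(f >= 0.8 * f.max())
    fit = np.polyfit(eps_true[lo:hi + 1], sig_true[lo:hi + 1], 1)
    return abs(fit[0])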
Because the exact contact area of the edge indentor was constantly changing during the indentation, the recorded stiffnesses were normalized based on the indentor edge radius. This produced a quantity (“normalized stiffness”) dimensionally equivalent to modulus which served as a basis for comparing edge indentors of different geometry. The 0.25 mm edge indentor proved problematic. Analysis after the fact showed that the wedge was mounted somewhat crookedly, resulting in premature contact with the hard plate on one end and artificially high loads which were not uniformly distributed. For this reason, the 0.25 mm edge indentors were removed from the final analysis. In the cylindrical indentor experiments, the contact area of the indentor was known and constant. This value was used to convert force into stress and estimate the material’s modulus. After outliers (defined as points more than two standard deviations from the mean) were removed, data was analyzed using an analysis of variance (ANOVA) performed in JMP 8 (SAS Institute, Cary, NC). For the edged indentor experiment, statistical analysis showed that the surrogate material was more compliant than skin. However, this result was not significant (p=0.184). For the cylindrical indentor tests, clearer trends emerged in the data. In this experiment, the surrogate material was observed to be significantly more compliant than skin (p=0.017) when loaded in compression.
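For completeness, the two small analysis steps described above can be written out as follows; this is a minimal sketch of our reading, not the authors' code.

import numpy as np

def normalized_stiffness(stiffness, edge_radius):
    # Edge-indentor stiffness divided by edge radius: a modulus-like
    # quantity for comparing wedges of different geometry.
    return stiffness / edge_radius

def drop_outliers(values):
    # Discard points more than two standard deviations from the mean,
    # matching the outlier definition used before the ANOVA.
    v = np.asarray(values, dtype=float)
    return v[np.abs(v - v.mean()) <= 2 * v.std()]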
Fig. 5 Bar charts showing mean and 95% confidence interval for all tests. Cylindrical indentations differentiated surrogate material from skin (p<0.05), but edge tests did not
Fig. 4 Stress-strain curve showing both the raw data and the estimated elastic modulus from one test

IV. DISCUSSION
The results of the present study suggest that the skin surrogate material is more compliant than human skin. In both loading schemes, human skin was determined to be less compliant in compression than the surrogate material, as measured by the normalized stiffness or elastic modulus. For the edge tests, this result was not statistically
significant, but significance was achieved for the cylindrical indentation tests. This finding does not necessarily mean that MED-4915 is a poor choice of material for a cosmesis. There are many other factors that contribute to the material selection for this application. For instance, in vivo, skin on the human forearm is supported by soft tissues, which means that at the system level, with all tissues intact, indentation testing would exhibit a great deal more compliance than when testing skin against a flat plate. In the prosthetic limb, however, the skin is backed by a hard plastic case or metal components. In this respect, having a cosmesis material that undershoots the mechanical stiffness of skin is a positive, because it means that the assembled limb behaves more like a human limb at the system level. An ideal test would measure the resistance of skin/muscle/bone structure of healthy volunteers in vivo, and then compare those results to the cosmesis material in the assembled limb. Such a test would give a much more direct and applicable metric for comparison. There are many other properties to consider besides mechanical likeness to human skin when choosing a cosmesis material. A more compliant material is desirable to reduce drag loads on the mechanical system, minimizing the power required for operation and extending battery life. Also, silicones cannot repair themselves the way human skin can. If a silicone were created to perfectly match human skin, it would likely be prone to damage from the impacts of daily life. Thus, a more compliant material which will elastically deform more easily when impacted is preferable. Similarly, if the limb is damaged and the cosmesis must be removed for service, a material compliant enough to stretch over the hand, then fit snugly on the wrist once in place is essential. Taking into account these competing design factors, designers would likely select a material less stiff than skin. This study estimates the compressive elastic modulus of human skin to be 6.8 +/- 2.0 MPa. This value is somewhat higher than expected based on literature values [12]. This is, however, explained by the composite structure and anisotropic properties of skin and the fact that most values for the elastic modulus of skin are derived from tensile experiments. The ground substance likely plays a major role in the behavior of skin in compression, whereas it has virtually no role in tension, when most of the load is transmitted through the collagen and elastic fibers of the dermis [2]. The wide scattering of results in this study highlights the dramatic variation in the mechanics of human tissue from person to person and from place to place on the body. A larger sample of data from a more uniform population with standardized sample sites would have improved results and most likely resulted in greater statistical significance in more categories.
ACKNOWLEDGEMENTS

The authors would like to acknowledge the Defense Advanced Research Projects Agency and the Revolutionizing Prosthetics 2009 Program for providing guidance and motivation for this study as well as furnishing samples of the candidate cosmesis material. Special thanks are due to the support staff who made this work possible through their continuing hard work: Demetries Boston of JHMI, and Charles Schuman of JHU/APL.
REFERENCES
1. Cary B and Belfiore M. "Bionic Made Better." Popular Science, 22 Aug. 2007.
2. Oomens C, Van Campen D, et al. "A mixture approach to the mechanics of skin." J Biomechanics 20.9 (1987): 877-885.
3. Bader D and Bowker P. "Mechanical characteristics of skin and underlying tissues in vivo." Biomaterials 4 (Oct 1983): 305-308.
4. Cua A, Wilhelm K, et al. "Frictional properties of human skin: relation to age, sex and anatomical region, stratum corneum hydration and transepidermal water loss." Brit. J Derm. 123.4 (1990): 473-479.
5. Daly C and Odland G. "Age-Related Changes in the Mechanical Properties of Human Skin." J. Inv. Derm. 73.1 (1979): 84-87.
6. Ridge M and Wright V. "Mechanical properties of skin: a bioengineering study of skin structure." J. Appl. Physiol. 21.5 (1966): 1602-1606.
7. Azar T and Hayward V. "Estimation of the fracture toughness of soft tissue from needle insertion." Lecture Notes in Comp. Sci.: Biomed Sim. Vol. 5104. Berlin: Springer, 2008. 166-175.
8. O'Callaghan P, Jones M, et al. "Dynamics of stab wounds: force required for penetration of various cadaveric human tissues." For. Sci. Int. 104 (1999): 173-178.
9. Shergold O and Fleck N. "Experimental investigation into the deep penetration of soft solids by sharp and blunt punches, with application to the piercing of skin." J Biomech. Eng. 127 (2005): 838-848.
10. Shergold O and Fleck N. "Mechanisms of deep penetration of soft solids with application to the injection and wounding of skin." Proc. Royal Soc. Lndn 460 (2004): 3037-3058.
11. Whittle K, Kieser J, et al. "The biomechanical modelling of non-ballistic skin blunt force injury." For. Sci. and Med. Path. 4 (2008): 33-39.
12. Shergold O, Fleck N, et al. "The uniaxial stress versus strain response of pig skin and silicone rubber at low and high strain rates." Int. J Impact Eng. (2004).
Corresponding Author: Ian Wing
Institute: Johns Hopkins University Applied Physics Laboratory
Street: 11100 Johns Hopkins Rd.
City: Laurel, MD
Country: United States of America
Email: [email protected]
Telemetric Epilepsy Monitoring and Seizures Aid
K. Hameed, F. Azhar, I. Shahrukh, M. Muzammil, M. Aamair, and D. Mujeeb
Sir Syed University of Engineering and Technology, Dept of Biomedical Engineering, Karachi, Pakistan
Abstract–– The following paper outlines the design of an electromyogram (EMG) controlled console for real-time monitoring of epileptic patients and for the treatment of seizures if they occur. The final design consists of an electrode attachment device, an analog circuit to amplify the signals, and a software-based oscilloscope for real-time monitoring of the patient, which also provides a quick-response return signal to the treatment side. A prototype circuit was built and tested, and the design functions as intended. We used an EMG acquisition and amplifier circuit, data processing, a real-time display, and control devices for monitoring and treatment of the patient. Treatment is triggered by a return signal that is produced when the threshold is crossed and the onset of epilepsy is the cause. This return signal can further be used for the stimulation of the treatment circuit; in our plans, we will use it for Vagus Nerve Stimulation (VNS). We conclude with the effectiveness of our proposal and its application in the medical sciences. Keywords— Electromyogram, Epileptic, Vagus nerve, Stimulation, real-time.
I. INTRODUCTION

Our work is based on the theme that real-time monitoring and on-time treatment can be made possible for a patient suffering from epileptic attacks, even when not accompanied by another person; the system is designed to work as a standalone station. Since epileptic seizures are uncontrolled disruptions and distortions of the potentials in any area of the brain, and a patient suffering from this type of seizure, accompanied by fits, is unable to treat him/herself, it is necessary to detect the onset of the attack and treat the patient as soon as possible, even without anyone's assistance. The project is low cost and easily affordable by health care institutions and doctors, because it consists of a cheap circuit for analog signal handling, a very low cost wireless circuit, and the design of the monitoring control console. All previous work on monitoring epilepsy was based on high-cost systems due to costly EEG electrodes. Our approach is completely applicable to and compatible with the EMG biopotential and with the observation of patients suffering from myoclonic epilepsy; the analysis we made was based on electromyography, that is, purely muscular activity is retrieved. Previously this was done only for electroencephalograph (EEG) signals: observations and monitoring of epileptic patients were made only by retrieving the EEG biopotential, which requires more than three scalp electrodes [1]. All observations were made non-invasively, using only three surface electrodes; the system thus offers painless, comfortable monitoring for the patient. A return signal is generated by the monitoring console as well, in order to provide quick treatment to the patient. Since the project is a prototype designed at the graduate level, no invasive technique is allowed for monitoring or treatment; thus the return signal is intended for use as the project advances. As a foremost consideration, we decided to operate a Vagus Nerve Stimulator (VNS) using this return signal for the treatment of juvenile myoclonic epileptic attacks [2]. The wireless transmission and reception of the signal is possible up to a distance of 1 km; all previous analyses were made with the patient present within a range of 500 meters [3].
II. METHODOLOGY

A surface electrode placement technique was adopted to analyze the muscle response, for which a wearable system band was designed that can easily be worn by the patient under observation.

A. Procedure

We recorded the electromyogram of the biceps muscle using two surface electrodes at the biceps head and one common (ground) electrode at the proximal end of the triceps.

Preamplification of signal: The raw EMG signal is too small for the computer to read directly. The amplitude of the EMG biopotential ranges between 50 μV and 200 mV, or 0 to 1.5 mV (rms) [4], and the usable frequency of an EMG signal ranges between 50-150 Hz [5]. At this stage, we need a large gain (about a thousand times) to boost the EMG signal without changing the phase or frequency of the signal. For this purpose we used a single instrumentation amplifier with a gain of 1000. The output signal was the difference of the two potentials of the biceps (from the two
electrodes, with the triceps electrode serving as ground), amplified and in phase.

Filtration of noises: The acquired signal contained both low-frequency noise and (after amplification) high-frequency noise, so we used a band-pass filter to obtain the EMG in its defined range of 200 Hz to 2 kHz (after amplification). The band-pass filter was the combination of two filters: a high-pass filter with a cutoff of 200 Hz and a low-pass filter with a cutoff of 2 kHz. The final signal obtained was an amplified, in-phase signal in the range of 200 Hz to 2 kHz.

Analog to digital conversion: The signal obtained after amplification and filtration was analog, and had to be converted into digital form in order to transmit it via wireless means. The A-D conversion required an A-D converter generating an 8-bit output.

Transmission of signal: Transmitting the EMG signal, previously quantized using the A-D converter, over a range required a transmitter with adequate coverage, paired with a receiver and able to transmit with few artifacts; a suitable transmitter-receiver pair was selected for this purpose. The transmitter was able to encode the 8-bit parallel data into 8-bit serial data; this conversion was
required in order to send the signal bit by bit to the receiver side. Signal reception: Signal reception required a receiver able to decode the 8-bit serial data back into parallel form for further analysis. Digital-to-analog conversion: We used a DAC to convert the digital data from the receiver back to analog. Artificial signal: Since the project is a prototype, we built an additional circuit to provide an artificial signal to be summed with the EMG signal and amplify it, as happens when seizures are likely to occur. B. Sound Card Interfacing and Programming LabVIEW oscilloscope: Sound-card interfacing involved a sound-card chip interfaced with the computer. We made the signal viewable on the computer using LabVIEW software: the analog data coming from the DAC was first fed into the sound card through its input ports, and we then designed an oscilloscope in LabVIEW to view the signal. LabVIEW function generator: We also designed a function generator, so that we could generate the artificial signal.
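The analog chain above (amplify, band-pass between 200 Hz and 2 kHz, digitize) can also be prototyped offline. The following is a minimal sketch of ours, not the authors' circuit: it applies a Butterworth band-pass to a synthetic stand-in for the amplified EMG, and the sampling rate, filter order, and test signal are our assumptions.

```python
# Offline sketch of the band-pass stage described above. The 4th-order
# Butterworth topology, the 20 kHz sampling rate and the synthetic input
# are assumptions; the paper implemented this stage in analog hardware.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 20_000  # sampling rate in Hz (assumed; comfortably above the 2 kHz cutoff)

def emg_bandpass(x, lo=200.0, hi=2000.0, fs=FS, order=4):
    """Band-pass an EMG record between lo and hi Hz (zero-phase)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic stand-in for the amplified biceps EMG: broadband activity plus
# 50 Hz mains interference that the 200 Hz high-pass side should suppress.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / FS)
raw = rng.normal(scale=0.5, size=t.size) + np.sin(2 * np.pi * 50 * t)
filtered = emg_bandpass(raw)
print(f"RMS before: {np.std(raw):.3f} V, after: {np.std(filtered):.3f} V")
```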
Fig. 1 Layout showing LabVIEW programming to design the oscilloscope for the sound card
III. RESULTS
Figure 2 shows the layout behind the design of the telemetric epilepsy monitoring system. The complete graphical user interface of the system is shown in figure 3. Figure 4 shows the real-time oscilloscopic view of the normal EMG signal. The measured statistics were:
Mean normal EMG: 1.19
Mode normal EMG: 1.7
Mean distributed signal after trigger: 1.69
Mode distributed signal after trigger: 2.2
Fig. 2 Layout showing LabVIEW programming to analyze and monitor multiple analog signals
Fig. 3 Diagram showing the final oscilloscope view
Fig. 4 Oscilloscopic view of normal EMG signal
IV. CONCLUSION This system design leaves the user free to move, and treatment of such patients becomes possible at the right time. The real-time monitoring system developed helps in observing a patient in critical condition from anywhere, at any time, around the world. The proposed software-based approach is an efficient and effective way to analyze epilepsy patients, and it aids doctors and researchers of various organizations in collaborating and analyzing patients in real time. In the future, more biopotentials from the body, such as EEG, ECG and EOG, can be graphically visualized and analyzed using the proposed system. The results show a complete, well-organized system capable of analyzing and displaying the EMG of epilepsy patients.
ACKNOWLEDGMENT We thank the authorities of Sir Syed University of Engineering and Technology and the laboratory staff members for providing an adequate environment and facilities for this research.
Author: Kamran Hameed
Institute: Sir Syed University of Engineering & Technology
Street: University Road, Postal Code: 75300
City: Karachi
Country: Pakistan
Email: [email protected]

Author: Faisal Azhar
Institute: Sir Syed University of Engineering & Technology
Street: University Road, Postal Code: 75300
City: Karachi
Country: Pakistan
Email: [email protected]

Author: Ijlal Shahrukh
Institute: Sir Syed University of Engineering & Technology
Street: University Road, Postal Code: 75300
City: Karachi
Country: Pakistan
Email: [email protected]

Author: Muhammad Muzammil
Institute: Sir Syed University of Engineering & Technology
Street: University Road, Postal Code: 75300
City: Karachi
Country: Pakistan
Email: [email protected]
REFERENCES
1. Aschenbrenner-Scheibe R, Maiwald T, Winterhalder M, Voss HU, Timmer J, Schulze-Bonhage A (2003) How well can epileptic seizures be predicted? An evaluation of a nonlinear method. Brain 126:2616-2626
2. Fisher RS, Handforth A (1999) Vagus nerve stimulation for epilepsy: a report of the Technology Assessment Subcommittee of the American Academy of Neurology. Neurology 53:666
3. Modarreszadeh M, Schmidt RN (1997) Wireless, 32-channel, EEG and epilepsy monitoring system. Proc 19th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 30 Oct - 2 Nov 1997, vol 3, pp 1157-1160
4. De Luca G, Fundamental Concepts in EMG Signal Acquisition, filters, p 23
5. De Luca G, Fundamental Concepts in EMG Signal Acquisition, filters, p 23
Spike Detection for Integrated Circuits: Comparative Study
A. Sarje and P. Abshire
ISR/Dept. of E & C.E., Univ. of Maryland, College Park, USA
Abstract— Transmission of the large amount of data generated by sequential scanning of high-density microelectrode arrays is a challenge. Detecting and transmitting only the relevant data, i.e. spikes, and rejecting the noise reduces the data bandwidth. In this paper, we discuss and evaluate CMOS-implementable spike detection methods for low-power, high-density electrode arrays used in neurophysiology. Neural data are simulated, spikes are pre-emphasized, and detection is carried out using a comparator. Our evaluation of a method is based on the robustness of the algorithm and the compactness of the circuit.
Keywords— Spike detection, bandwidth reduction, CMOS, MATLAB, noise.
I. INTRODUCTION Single-unit recording of neurons from a tissue or dense cell culture has been made possible by the integration of high-density microelectrode arrays (HD-MEA) with electronic circuits for signal processing [1][2]. There is an ongoing active effort to integrate electrodes with low-power, low-noise electronic circuitry to reduce the number of external connections required from the electrode. One of the bottlenecks of a large HD-MEA is transmission of the huge amount of data acquired by sequential scanning of the electrode array. A large portion of the data acquired from electrodes is noise and can be discarded before transmission of data off site. Hence, reduction of the data bandwidth is achieved by using real-time on-chip spike detection, transmitting only useful spike data and discarding data containing noise. Data with a small bandwidth can be transmitted easily, using fewer channels and circuit components, to a distant computer for further processing. Active HD-MEAs are constrained by limited available chip space and minimal power available for on-chip consumption; hence, compactness of the spike detector is essential. For obtaining real-time single-unit recordings from each cell of a tissue or dense culture, implementation of computationally intensive, traditional spike sorting methods like principal component analysis, support vector machines [3] and the wavelet transform [4] is not feasible. A number of integrated methods, such as adaptive thresholding [5], the non-linear energy operator (NEO) [6] and double thresholding [13], requiring fewer circuit components and consuming less power and space, have been implemented in hardware. Earlier works by other authors have compared computationally intensive methods
like the matched filter, wavelet transform method and energy operator [7][8]. Keeping in mind the eventual implementation of spike detection in an integrated HD-MEA, we evaluate the methods of thresholding, NEO and derivatives for their accuracy and operational limits using simulations in MATLAB 7.9. We find optimum threshold values for various noise levels. For a high-density system, a compact circuit occupying a smaller chip area is more desirable than a large circuit; hence, preference is given to compact circuits. The organization of this paper is as follows: Section II discusses the various algorithms, their usage and the performance indices used for evaluation. Sections III and IV describe the simulation of the algorithms in MATLAB and the results, respectively. Section V gives some details of the circuits for spike sorting. Section VI summarizes the paper with conclusions.
II. SPIKE DETECTION Neural recordings: Electrode recordings from cells and tissue suffer from background noise caused by the activity of other cells in the vicinity. Most of this noise is low frequency [9]. The frequency spectrum of recorded signals has shown that noise has more low-frequency components, whereas a neural spike has a band-pass characteristic [9]. Neural spikes are of the order of a few microvolts (20 µV – 100 µV) and their duration is approximately 1 ms. They can be characterized as signals with localized high frequency and increased instantaneous energy [8]. The main challenge in spike detection is to have a good algorithm that can identify the spikes from background noise. Some of the commonly used spike detection methods in integrated circuits are: the simple comparator, the adaptive comparator and non-linear energy operators. Inspired by frequency-based filters [10], we also evaluate derivatives of the waveform as a method for pre-detection spike emphasis. Simple Comparator: Using a comparator is the simplest way to detect a spike. Input data from an electrode is compared to a fixed threshold voltage defined by the user. A positive output is generated if the input voltage exceeds the threshold. A more efficient method reported is to use a dual-threshold comparator; although this method requires more computational blocks, it is more robust [13].
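As a software model of the fixed-threshold comparator just described (a sketch of ours, not a published implementation), detections can be taken as the rising edges of the comparator output, so that a run of supra-threshold samples counts as a single event:

```python
# Minimal software model of a fixed-threshold comparator: rising edges of
# the thresholded signal mark detection onsets, so one spike that stays
# above threshold for several samples yields exactly one detection.
import numpy as np

def comparator_detect(x, threshold):
    """Return sample indices where x first crosses above the threshold."""
    above = (x > threshold).astype(int)
    return np.flatnonzero(np.diff(above) == 1) + 1

x = np.array([0.0, 0.1, 0.6, 0.8, 0.2, 0.05, 0.7, 0.1])
print(comparator_detect(x, threshold=0.5))  # -> [2 6]
```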
Adaptive Threshold Comparator: The adaptive threshold comparator uses a threshold that changes with the noise level. The standard deviation of the signal has been used to determine the threshold value [5]. Using the standard deviation can lead to very high threshold values if the firing rate is high [11]. An improved method, where the threshold is determined by the median of the signal, has also been suggested [11]. Another way of determining the threshold is by filtering out the high-frequency noise and using another filter to form a local average (i.e. the threshold voltage) [11]. Non-linear Energy Operator: NEO has been widely used to pre-emphasize spikes before detection. NEO accentuates high-frequency signal content. However, it has been reported that it is harder to discriminate a wide-base signal using NEO, as a wide spike contains more energy at lower frequencies [8]. Equation 1 gives the NEO function (Φ) for a discrete signal x(n).
Φ[x(n)] = x²(n) − x(n+1) x(n−1)    (1)

Fig. 1 Simulated spike and noise (-30 dB before filtering)
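Eq. (1) translates directly into a few lines of array code; the sketch below is illustrative and is not the authors' MATLAB implementation:

```python
# Non-linear energy operator of Eq. (1): phi[n] = x[n]^2 - x[n+1] * x[n-1].
# The two endpoint samples are set to zero, since the operator is
# undefined there.
import numpy as np

def neo(x):
    phi = np.zeros_like(x, dtype=float)
    phi[1:-1] = x[1:-1] ** 2 - x[2:] * x[:-2]
    return phi
```

A detector then thresholds the pre-emphasized waveform, e.g. with the comparator sketch shown earlier.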
Derivatives: Derivatives act as high-pass filters, since they attenuate low-frequency signal and accentuate high-frequency signal. Frequency-shaping filters can be derived by taking derivatives to create informative samples for feature extraction. This can pre-emphasize spikes just as NEO does, but using fewer computational units. However, if any high-frequency noise is present, it will be amplified and produce false positive detections [10]. Performance indices: The robustness of a spike detection method is evaluated by counting the number of true positives (TP) (actual spikes that are detected), false positives (FP) (noise peaks detected as spikes) and false negatives (actual spikes missed). Plotting the probability of TP against the probability of FP gives the receiver operating characteristic (ROC). The probability of TP is defined as the ratio of TP to the actual number of spikes. The probability of FP is defined as the ratio of FP to the total number of detections made. ROC plots are used in decision making and help to discard a model for a binary classifier system like a spike detector, or to choose an appropriate range of operation.
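These definitions translate directly into code. The sketch below is our own; in particular, the matching tolerance tol is an assumption the paper does not state. It scores detection times against ground-truth spike times and returns the two ROC coordinates:

```python
# Score detections against ground truth. A detection within +/- tol samples
# of an unmatched true spike is a TP; anything else is an FP. Returns the
# ROC coordinates defined in the text: P(TP) = TP / (number of true spikes)
# and P(FP) = FP / (total detections made).
import numpy as np

def score_detections(detected, true_times, tol):
    matched = np.zeros(len(true_times), dtype=bool)
    tp = fp = 0
    for d in detected:
        hits = np.flatnonzero(~matched & (np.abs(true_times - d) <= tol))
        if hits.size:
            matched[hits[0]] = True  # one detection consumes one true spike
            tp += 1
        else:
            fp += 1
    p_tp = tp / max(len(true_times), 1)
    p_fp = fp / max(tp + fp, 1)
    return p_tp, p_fp

true_times = np.array([120, 480, 900])
detected = np.array([118, 300, 903])
print(score_detections(detected, true_times, tol=5))  # -> (0.667, 0.333)
```

Sweeping the comparator threshold and plotting the resulting (P(FP), P(TP)) pairs yields the ROC curves discussed below.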
III. SIMULATION We simulated the spike detection algorithms mentioned in the section above using MATLAB 7.9. Neural spikes were simulated using a train of triangular pulses generated at random time intervals. Their frequency spectrum was trimmed to represent neural spike data (band-pass) [9]. Noise was simulated using white Gaussian noise of known power from which the higher frequencies were filtered out. Fig. 1 shows the signal and spikes in the time domain. The frequency-shaped noise and neural spikes were added to form the simulated data. Fig. 2 shows the frequency spectrum of the simulated input data.
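A rough recreation of this setup (a sketch under our own assumptions: the pulse width, firing rate, sampling rate and filter order are not values taken from the paper) could look like:

```python
# Simulated data: a train of triangular pulses at random times plus
# low-pass-filtered white Gaussian noise of known power.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 20_000                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

def triangular_spike(width_s=0.001, amp=0.08, fs=FS):
    """Symmetric triangular pulse roughly 1 ms wide."""
    n = int(width_s * fs)
    rise = np.linspace(0.0, amp, n // 2, endpoint=False)
    return np.concatenate([rise, rise[::-1]])

def simulate(duration_s=1.0, rate_hz=10, noise_db=-30, fs=FS):
    n = int(duration_s * fs)
    clean = np.zeros(n)
    spike = triangular_spike(fs=fs)
    n_spikes = rng.poisson(rate_hz * duration_s)
    times = np.sort(rng.integers(0, n - spike.size, size=n_spikes))
    for t0 in times:
        clean[t0:t0 + spike.size] += spike
    # White Gaussian noise of known power with higher frequencies removed.
    noise = rng.normal(scale=10 ** (noise_db / 20.0), size=n)
    sos = butter(4, 1000, btype="lowpass", fs=fs, output="sos")
    return clean + sosfilt(sos, noise), times

data, true_times = simulate()
print(f"{true_times.size} spikes embedded in {data.size} samples")
```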
Fig. 2 Frequency spectrum of the simulated input signal, representing the frequency spectrum of actual recordings (magnitude in dB versus frequency in Hz, with the noise and spike spectra indicated)
Fig. 3 Simulated spike (red) and outputs from NEO (black) and the derivative function (blue) (amplitude in volts versus time in seconds)
Actual spikes (without noise) were counted and their times of occurrence noted. Spikes in the simulated data were pre-emphasized using NEO and by taking the waveform derivative (deri). Fig. 3 shows a simulated spike from the simulated data, the waveform after passing it through NEO, and the waveform after taking its derivative. A simple comparator was used to detect spikes in the simulated data and in the spike-emphasized data (i.e. NEO and derivative). The time of occurrence of spikes in each method was compared with the true spike time data to determine whether a detection was a TP or an FP. The simulations made no special allowance for noise peaks overlapping neural spikes. The threshold value of the comparators was varied to find the optimum threshold for a given SNR. Noise power was varied to test the robustness of a particular algorithm. A high TP rate and a low FP rate indicate a robust detection system. ROC plots were obtained for each method and comparisons made.
IV. SIMULATION RESULTS Spike detection capability of a circuit degrades as noise power increases. NEO has better performance at higher noise levels than a simple comparator. A simple comparator performs poorly, with the probability of false positives approaching unity for simulated data having a noise power of -20 dB. The NEO and derivative methods are able to detect spikes with 100% efficiency and 0% false detections for low noise levels (below -20 dB).
From the simulation results we see that false detections increase with increasing noise (or decreasing SNR). As the detection threshold voltage drops close to the baseline value, the number of FP also increases. Fig. 4 shows various ROC curves for a detector using NEO; threshold voltage decreases from left to right in a curve. We see that as noise increases, the performance of the NEO deteriorates, because the probability of TP decreases. For a high noise power, the detection does not achieve a probability of TP equal to unity. Further, the probability of FP increases with lowering of the threshold voltage. The optimum value of the threshold voltage is defined as the value for which the probability of true positives is close to one and the probability of false positives is close to zero. For a high SNR (noise < -25 dB), all detectors have a probability of TP equal to unity and a probability of FP equal to zero. Both NEO and the derivative method can make spike detections at high noise levels (power of -15 dB): NEO was able to detect 100% of the spikes while the derivative method (abbreviated Deri) detected only 95% of the real spikes, with 40% and 80% of the detections being noise peaks for NEO and the derivative method, respectively. At the same noise level the simple comparator (SC) had 100% false positive detections. For a lower noise level (-20 dB), SC made correct detections of 70% of the spikes and 90% of the detections were false positives, while NEO and the derivative method correctly detected 100% of the spikes with false positives of 20% and 35%, respectively. Comparing NEO and the derivative method, we see that at low SNR (-15 dB noise), NEO can detect neural spikes with 100% efficiency and with only 50% of detections false, whereas the derivative method can detect 100% of the spikes with 90% of the detections being noise peaks. Fig. 5 shows ROC plots for the three different methods (simple comparator, NEO and derivative) at two different SNRs. We conclude that NEO is the most robust and the simple comparator is the least robust.
Fig. 4 ROC curves using the NEO method (probability of true positives versus probability of false positives for noise powers of -25, -20, -15 and -10 dB), showing a decrease in the ability to make good discrimination as noise increases
Fig. 5 Comparison of ROCs for the simple comparator (SC), NEO and derivative (Deri) methods at two different SNRs: SNR1 noise = -25 dB and SNR2 noise = -15 dB
V. CMOS CIRCUITS For real-time data bandwidth reduction using spike detection in an HD-MEA, depending on the implementation, the spike detector can be implemented in each pixel or at each column. Each pixel may be no larger than about 40 µm × 40 µm to 50 µm × 50 µm (depending on the application). Having a spike detector at each column is easier, as this method does not require the circuit to be very compact, but the implementation consumes more power. In this section, we review various spike detection methods implemented in CMOS. a) A comparator can be implemented easily using very few transistors. A dual-threshold circuit implemented in a 0.35 µm process is not very compact: it requires 15 transistors, but it consumes only 1.8 nW, with all transistors operating in the subthreshold range [13]. This implementation can be useful for column spike detection. b) Adaptive thresholding using the standard deviation of the signal has been achieved in a 1.5 µm CMOS process and occupies 0.094 mm2, making it unsuitable for implementation at each pixel [5]. c) A number of implementations of NEO have been published. Some of them, implemented in CMOS, occupy a few millimeters of space but consume power of the order of µW [6], [14]. NEO involves a few computationally intensive blocks and hence is inappropriate for in-pixel implementation. d) Derivatives have been implemented as filters for pre-spike enhancement [10]. Compactness and power consumption can be optimized further.
VI. CONCLUSIONS Our simulations confirm that NEO is a robust algorithm for spike detection and can provide reliable detection at low SNR. A compact implementation of NEO is not available; hence, implementation of NEO at each pixel may not be feasible, but it does form a good circuit to be implemented at the column level. Using the waveform derivative seems to be an attractive method, as it amounts to using frequency filters. Other implementations using derivatives for spike detection may be worth exploring and can form a good method for systems
with high to moderate SNR. The comparator has the simplest implementation, is very compact, and will be a good choice for systems with very high SNR.
REFERENCES
1. Litke AM et al. (2003) Large-scale imaging of retinal output activity. Nucl Instrum Meth A 501(1):293-307
2. Berdondini L et al. (2009) Active pixel sensor array for high spatio-temporal resolution electrophysiological recordings from single cell to large scale neuronal networks. Lab Chip 9:2644-2651
3. Vogelstein RJ et al. (2004) Spike sorting with support vector machines. Eng Med Biol Soc Ann, San Francisco, CA, 2004, pp 546-549
4. Letelier JC et al. (2000) Spike sorting based on discrete wavelet transform coefficients. J Neurosci Meth 101:93-106
5. Watkins PT et al. (2004) Validation of adaptive threshold spike detector for neural recording. Eng Med Biol Soc Ann, San Francisco, CA, 2004, pp 4079-4082
6. Chae MS et al. (2009) A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter. IEEE T Neur Sys Reh 17(4):312-321
7. Gibson S et al. (2008) Comparison of spike sorting algorithms for future hardware implementation. IEEE Eng Med Bio, Vancouver, Canada, 2008, pp 5015-5020
8. Mukhopadhyay S, Ray GC (1998) A new interpretation of nonlinear energy operator and its efficacy in spike detection. IEEE T Bio-Med Eng 45(2):180-187
9. Kim KH, Min SJ (2000) Neural spike sorting under nearly 0-dB signal-to-noise ratio using non-linear energy operator and artificial neural network classifier. IEEE T Bio-Med Eng 47(10):1406-1411
10. Yang Z et al. (2009) Spike feature extraction using informative samples. NIPS 21:1865-1872
11. Quiroga RQ et al. (2004) Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput 16(8):1661-1687
12. Rogers CL, Harris JG (2004) A low power analog spike detector for extracellular neural recordings. IEEE I C Elect Circ, Tel-Aviv, Israel, 2004, pp 290-293
13. Hiseni S et al. (2009) A compact nano-power CMOS action potential detector. IEEE BioCAS, Beijing, China, 2009, pp 97-100
14. Hoang L, Yang Z et al. (2009) VLSI architecture of NEO spike detection with noise shaping filter and feature extraction using informative samples. Eng Med Biol Soc Ann, Minneapolis, USA, 2009, pp 978-981
Author: Anshu Sarje, Pamela Abshire
Institute: University of Maryland
Street: 2160 AVW
City: College Park
Country: USA
Email: {asarje, pabshire}@umd.edu
Effect of Ambient Humidity on the Electrical Conductance of a Titanium Oxide Coating Being Investigated for Potential Use in Biosensors
Jorge Torres 1, James Sweeney 1, and Jose Barreto 2
1 Department of Bioengineering, Florida Gulf Coast University, Fort Myers, Florida, USA
2 Department of Math and Chemistry, Florida Gulf Coast University, Fort Myers, Florida, USA
Abstract— Titanium oxide (TiO2) coatings have photocatalytic properties. They eject electrons when activated by ultraviolet light and can vary their electrical conductance as a result. In a gas environment, coating conductance can potentially be altered by the presence of volatile analytes that attach to or interact with the coatings. Our group is investigating the possible use of these coatings for the detection of a variety of molecules present in the gas. This sensing process, however, can be greatly influenced by the simultaneous presence of water molecules in the gas. Here we have studied the effect of ambient humidity on the coatings' conductance; the results showed a very dramatic effect, with an over 100-fold increase in conductance when going from low to high gas relative humidity.
Keywords— titanium oxide, electrical conductance, water, sensor, humidity.
I. INTRODUCTION Titanium oxide coatings have photocatalytic properties as a result of their ability to eject electrons into a conduction band when activated by ultraviolet light. The ejected electrons can increase the overall electrical conductance of the coatings. In a gas environment, a volatile analyte can potentially modify the electrical conductance of the titanium oxide coating, depending on the scavenging power of the analyte for the activated electrons. The coating conductance can also be affected by the presence of other molecules in the same gas environment, particularly water molecules, and their effect must be taken into consideration when analyzing current signals from the coating. We report here our findings regarding the effect of water present in the surrounding gas on the coating's electrical conductance.
II. METHOD We carried out experiments to measure the electrical current flowing through titanium oxide coatings prepared with Degussa P-25 powder and deposited on a glass substrate. Two titanium electrodes were placed on the coating surface with a 1 cm separation between them. Coating and electrodes were placed between two glass slides. An ~8 ounce weight was used to press down on the electrodes in order to ensure good electrical contact between the electrodes and the coating. A voltage of 1 volt was suddenly applied to the electrodes, maintained for 45 seconds, and then suddenly removed while simultaneously grounding the voltage-biasing electrode. The resulting electrical current across the coating was measured by means of an electrometer (model 6514, Keithley Instruments Inc., Cleveland, OH). A scale of 200 nA was used for the measurements. Current measurements above 200 nA were taken as "off scale", and current measurements below 1 nA were considered unresolved. The entire experimental apparatus with coating and electrodes was placed in a sealed chamber that was filled with either air or nitrogen gas at varying relative humidity levels. Relative humidity was measured by a digital hygrometer probe inside the chamber. See Figures 1-3. The voltage utilized in all cases was provided by a DC power supply (model E3631A, Agilent Technologies Inc., Santa Clara, CA) and was terminated at 45 seconds. At that moment, the active electrode was grounded in order to discharge any residual capacitance and allow the coating to return to base conditions, ready for the next experiment.
Fig. 1 Titanium oxide coating and titanium electrodes sandwiched between two glass slides
Fig. 2 Sealed chamber to control gas humidity. Coating/electrodes are inside
Fig. 3 Electrometer, pulse/switching device, and power supply used in the experiments
III. RESULTS We found that the electrical conductance of the titanium oxide coatings deposited on glass was indeed greatly affected by the relative humidity of the ambient gas in the chamber. The water vapor effect was independent of whether the gas was room air or pure nitrogen. The figures below illustrate these effects. In all three cases, a temperature of 23 ºC was maintained. Since the same titanium oxide coating was used and the same voltage step of 1 volt was applied, the only varied parameter was the relative humidity, whose three testing values were 21, 50, and 84%. The results are summarized in the graphs shown in Figure 4 below.
Fig. 4 Graphs showing the electrical current produced in the titanium oxide coating and its progression in time as a result of a 1 volt step-voltage application for three different gas humidities
As can be seen in the graphs, at low relative humidity (21%) the measured current was below 2 nA and could not be resolved at this scale; at medium humidity (50%) the current peak value increased to approximately 120 nA; and at high relative humidity (84%) the current was off-scale (above 200 nA). The electrical behavior seen in the graphs was consistently observed regardless of the gas used: no difference was initially found between the electrical conductance of the coating in the presence of room air and its response in the presence of nitrogen gas at the same relative humidity. In the graphs it is also possible to observe a 'capacitive-like behavior' of the titanium metal electrode/titanium oxide coating interface, showing a pseudo-exponential decay of the current after the initial peak, corresponding in part to the coating electrical resistance.
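The pseudo-exponential decay noted here can be quantified by fitting a simple relaxation model to the post-peak current. The sketch below is our own illustration on hypothetical stand-in samples, not the authors' analysis:

```python
# Fit I(t) = I_ss + (I_0 - I_ss) * exp(-t / tau) to the post-peak current.
# The sample values below are hypothetical stand-ins for electrometer data.
import numpy as np
from scipy.optimize import curve_fit

def transient(t, i_ss, i0, tau):
    """Pseudo-exponential decay from peak current i0 toward steady state i_ss."""
    return i_ss + (i0 - i_ss) * np.exp(-t / tau)

t = np.linspace(0.0, 45.0, 90)  # seconds after the voltage step
i = transient(t, 40.0, 120.0, 8.0) \
    + np.random.default_rng(2).normal(0.0, 2.0, t.size)  # nA, synthetic

(i_ss, i0, tau), _ = curve_fit(transient, t, i, p0=(30.0, 100.0, 5.0))
print(f"steady state ~ {i_ss:.1f} nA, peak ~ {i0:.1f} nA, tau ~ {tau:.1f} s")
```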
IV. CONCLUSION The relative humidity of a gas interacting with a titanium oxide coating greatly affects both the electrical conductance and the response to a voltage application on the material. The effect of relative humidity poses a new design challenge for the development of a gas sensor using this type of coating, since it seems to imply a requirement for a humidity-controlled environment. The ability to detect water vapor is an important demonstration of the possible sensor constraints and applications of our coatings, and it creates new possibilities for analyte detection, assuming there are analytes that might modify the water-vapor-dependent signal.
ACKNOWLEDGMENT We want to thank Patricia Barreto for preparing the titanium oxide coatings used in this study. This research was supported by Office of Naval Research (ONR) grant N00178-09C-3009.
Brain Computer Interface in Cerebellar Ataxia
G.I. Newman 1, S.H. Ying 2,3, Y.-S. Choi 1, H.-N. Kim 5, A. Presacco 1, M.V. Kothare 4, and N.V. Thakor 1,2
1 Department of Biomedical Engineering, 2 Department of Neurology, 3 Ophthalmology, The Johns Hopkins University School of Medicine, USA; 4 Chemical Engineering, Lehigh University, USA; 5 Electronics and Electrical Engineering, Pusan National University, Korea
Abstract— In healthy humans, the cortical brain rhythm (or electroencephalogram, EEG) shows specific mu (~8-12 Hz) and beta (~16-24 Hz) band patterns in the cases of both real and imaginary motor movements. As cerebellar ataxia is associated with impairment of precise motor movement control as well as motor imagery, ataxia is an ideal model system in which to study the role of the cerebellocortical circuit in rhythm control. We hypothesize that the EEG characteristics of ataxic patients differ from those of controls during the performance of a Brain-Computer Interface (BCI) task. EEGs were recorded from four patients with cerebellar ataxia and six healthy controls. Subjects were cued to imagine relaxation or motor movement, while an EEG-based BCI translated motor intention into a visual feedback signal through real-time detection of motor imagery states. Ataxia and control subjects showed a similar distribution of mu power during cued relaxation. During cued motor imagery, however, the ataxia group showed significant spatial distribution of the response, while the control group showed the expected decrease in mu-band power (localized to the motor cortex). This pilot study suggests that impairment of the cerebellocortical control network is associated with spatial spreading of the normal event-related synchronization of motor cortical areas during relaxation. The mechanism of this association, whether degenerative or compensatory, bears further investigation. Use of BCI has important implications for our basic understanding of motor imagery and control, as well as clinical development of the brain-computer interface as an assistive or rehabilitative technology.
Keywords— BCI, Ataxia, EEG.
I. INTRODUCTION A. Ataxia Cerebellar ataxia is a rare movement disorder characterized by cerebellar degeneration. As ataxia progresses, motor movement control gradually worsens, often to the point where patients can no longer walk, talk, or perform activities of daily living [1]. Rare diseases such as cerebellar ataxia are often overlooked because relatively few individuals are affected; however, ataxia could provide us with useful insight into a human lesion model in the control center of the motor system. This will help to improve our basic understanding of motor imagery and control. Because
degeneration appears to be the most severe in the cerebellum [2], it is possible that the output of the presumably intact motor cortex is capable of being correctly interpreted to form the intended movements of the patient. In exploring therapeutic interventions, we hope to bypass the corrupted movement commands of the defective cerebellum. Doing so would allow us to reconstruct the original intention of motor cortex commands, and thus regain motor control. Furthermore, recent studies [3] show that unilateral cerebellar stroke affects not only motor performance, but also motor imagery. This suggests that the cerebellum is also involved in nonexecutive motor functions, such as the planning and internal simulation of movements. Our preliminary results show that EEG abnormalities in patients with cerebellar ataxia can partially normalize with learning, and that short-term EEG-BCI feedback facilitates these changes. Encouraged by these promising findings, we predict that long-term practice of a motor imagery task using EEG-BCI will be associated with long-term changes in the typical EEG pattern. B. Brain-Computer Interface Electroencephalography (EEG) is a measure of neural activity, based on the voltages generated by the firing of large populations of neurons. These easily-recognizable electrical rhythms are recorded from specific areas on the scalp, and may be indicative of the state of the brain. One of the most important characteristics of the EEG recorded over the sensorimotor cortex is linked to possible modulation of EEG rhythms through simple motor imagery (e.g., imagining a flexion of the right or left elbow). The most widely used rhythm for motor control is the “mu” rhythm (~8-12 Hz). Mu rhythm shows an increase in power during relaxation (event-related synchronization) and a decrease during real and imaginary motor movement performance (event-related desynchronization) [4]. This characteristic can be utilized with a BCI interface to allow mental control of a computer cursor in at least one dimension [5]. The two electrodes shown to have the largest modulation of mu rhythm are located at C3 and C4 or adjacent positions, but recruitment of more electrodes could be necessary for more sophisticated control. This BCI method has been successfully tested
with normal subjects, as well as patients suffering from various forms of locked-in syndrome [6]. Prior to our study, it had not been tested with ataxia patients, in part due to the sparseness of the subject population, which is estimated at about 0.05% of Americans [7]. By having ataxia patients control a simple cursor task through EEG modulation, we tested the hypothesis that BCI is an effective means of restoring limited motor function. Since a byproduct of cerebellar feedback pathway degeneration is impaired motor modulation, a relative decrease or increase in power in the mu band not only might manifest itself over the hand area, but might also appear as a more globally diffuse change in mu power. Based on the clinically evident impairment of motor movements and reported impairment of motor imagery, we predicted that ataxia patients would not initially perform as well as controls at the BCI task, but would eventually regain some level of motor control.
II. METHODS A. Experimental Setup Four cerebellar ataxia patients and six control subjects were selected to perform the task. Each subject sat in front of a computer screen, and was asked to imagine movement or relaxation without actually moving. EEG signals were collected with a 64-electrode scalp cap connected to a SynAmps amplifier system (Neuroscan, Charlotte, NC). The output of the amplifier was connected to a PC, which spatially filtered the signals using common average referencing. The C3 and C4 electrodes, which coincide with the hand area of the primary motor cortex, were then used in an autoregressive model in order to determine the power spectrum. During relaxation, all of the subjects tested exhibited a peak generally centered between 8-12 Hz. By imagining bilateral movements, subjects could temporarily suppress the power in this frequency band with variable effectiveness. Based on the combined amplitude of the peaks in the C3 and C4 electrodes, a movement classification was made, which was then output to a cursor control program. In this program, a target was displayed either toward the top (relaxation trial) or the bottom (movement trial) of the computer screen, with the imagery-controlled cursor in the middle. A movement decision was made every 500 ms based on the power in the mu band. This decision determines whether the cursor moves up, moves down, or remains still. There were seven steps in the correct direction from the origin to the target. A trial ended either successfully, when the cursor reached the target, or unsuccessfully, when 15 seconds had passed. A variable number of blocks were run, generally between 6 and 9; the exact number depended on whether the subject achieved a consistent level of success that did not improve after subsequent blocks. Each block consisted of 16 trials (8 relaxation and 8 movement trials), which were presented in random order throughout each block. B. Data Analysis Success rates and trial completion time were examined in order to investigate any obvious differences between ataxic patients of varying disease severity and controls. A power density spectrum was calculated using each subject's 64-channel raw EEG data. This was used to determine the spatial distribution of mu-band modulation in ataxic subjects and controls.
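The decision loop just described can be sketched compactly. The illustration below substitutes a Welch periodogram for the paper's autoregressive spectral estimate; the sampling rate, the way C3 and C4 are combined, and the threshold margins are our assumptions:

```python
# One 500 ms decision of the mu-rhythm cursor controller (illustrative only).
import numpy as np
from scipy.signal import welch

FS = 250              # EEG sampling rate in Hz (assumed)
MU_BAND = (8.0, 12.0)

def mu_power(segment, fs=FS, band=MU_BAND):
    """Mean spectral power in the mu band for one EEG segment."""
    f, pxx = welch(segment, fs=fs, nperseg=len(segment))
    return pxx[(f >= band[0]) & (f <= band[1])].mean()

def cursor_step(c3_seg, c4_seg, threshold):
    """+1 = move up (relaxation), -1 = move down (imagined movement), 0 = hold."""
    p = mu_power(c3_seg) + mu_power(c4_seg)  # combined C3 + C4 mu power
    if p > 1.2 * threshold:
        return +1
    if p < 0.8 * threshold:
        return -1
    return 0

seg = np.random.default_rng(0).normal(size=int(0.5 * FS))  # one 500 ms window
print(cursor_step(seg, seg, threshold=mu_power(seg)))      # -> 1 (cursor up)
```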
III. RESULTS Our comparisons showed no obvious trends in the data to suggest that performance alone is enough to classify subjects into control or ataxic categories (Figure 1).
Fig. 1 Trial success rates show the variability in subject success rates grouped by condition. Squares indicate the mean success rate of all subjects for each condition, with diamonds representing the average (across blocks) success rates (trials won out of 16) for each subject in that category

Figure 2 shows the power spectral trends (based on control or ataxic status) that were recorded from electrodes C3 and C4. The results of this analysis suggest that, in contrast to controls, ataxic subjects have a lower-amplitude mu peak during relaxation. Furthermore, we observed a difference in the beta band (~16-24 Hz) between relaxation and motor imagery that appeared in ataxic patients but was absent in the control data.
Fig. 2 The power spectra averaged over all successful trials are displayed according to subject type (controls, 6 subjects: a, c; ataxia patients, 4 subjects: b, d), along with the channel of interest (C3: a, b; C4: c, d). Blue lines represent the power spectrum from trials in which the goal was to move the cursor upward by relaxing, while red lines represent trials in which the goal was to move the cursor downward by imagining movements. The control subjects have a more prominent peak during relaxation in the mu band (highlighted grey) than ataxic patients, as well as a larger separation between trial conditions

Figure 3 shows the averaged mu-band scalp maps. These maps demonstrate that while control subjects greatly increased mu power in the area covering the motor strip during relaxation, ataxic patients appeared to have a relatively smaller increase in mu power during relaxation. In ataxia patients, this more modest increase in power is a less localized phenomenon; for example, there appears to be a greater change in power over the frontal lobes in ataxic subjects only, in addition to changes over the motor cortex.
Fig. 3 Topographic maps of mu average power during relaxation (a, b) or movement imagination (c, d) in control subjects (a, c) and ataxia patients (b, d). The section of the mu frequency band that showed the largest change in power between movement types was selected for each subject, and the powers in these bands were averaged over subject types. These maps demonstrate that while control subjects increased mu power during relaxation with a focus in the area covering the motor strip, ataxic patients appeared to have a relatively smaller, but more global increase in power during relaxation
IV. DISCUSSION Our results show clear differences between the EEG signals of ataxic patients and controls during a motor imagery task. Success and learning rates do not appear to be directly related to the clinical diagnosis of the subject. This implies that to use BCI for diagnostic purposes, some degree of analysis must be performed on the EEG data. On average, ataxic patients have a smaller increase in mu band power during imagined relaxation, compared to controls. This is consistent with the possibility that ataxic patients are unable to properly modulate the synchronous firing of large groups of neurons in the motor cortex due to a deterioration of feedback pathways from the cerebellum. While overall activity in this area may be increased, this lack of increased synchronization results in the non-existence of a prominent mu peak. Contrary to our initial expectations, the ataxic subjects were able to perform the task with no obvious decrease in performance compared to controls. This suggests that despite not having a significant mu peak in the same frequency range, individual subjects are capable of creating a mu power separation between relaxation and movement imagination, but each with different peak frequencies. This
causes these peaks to disappear after spectrum averaging. One of our hypotheses was that due to decreased cerebellar feedback modulation, motor imagination would not be localized to just the area of cortex corresponding to the part of the body where the movement was visualized. Instead, we expected to observe more global increases in neuronal synchronization in ataxia patients, which was demonstrated in our results. This indicates that ataxia patients are capable of some amount of compensation for a loss of feedback by increasing global activity. Those who cannot generate large, focused increases in power are able to create more diminished, but well-distributed increases in power to achieve the same impact -- in this case, control of a BCI.
V. CONCLUSION While using our current setup allows ataxic subjects to control a BCI with similar efficacy to control participants, it is clear that the neural method of control is different between the populations. It may be possible to take advantage of these differences to create a BCI that ataxia patients would be able to control more easily, perhaps by determining the average power over several central electrodes, instead of just C3 and C4. By creating a BCI specific to ataxia patients, we may be able to increase their ability to naturally control an end-effector, and with continuous training, improve their motor control skills. For severe ataxia patients, who are wheelchair bound, and unable to even maintain a steady grip on a cup, this could mean a significant improvement in quality of life. Our preliminary data show robust and consistent differences between the EEG patterns of ataxia patients and control subjects during a motor imagery task. Exploring these differences may be vital to gaining a better understanding of cerebellar ataxia, which currently has no cure, and its impact on the human brain. The EEG differences we have uncovered could be used as a diagnostic tool, and may find a role in rehabilitation therapy.
REFERENCES
1. Teive HA (2009) Spinocerebellar ataxias. Arq Neuropsiquiatr 67(4):1133-1142
2. Wang X, Wang H. Spinocerebellar ataxia type 6: systematic pathoanatomical study reveals different phylogenetically defined regions of the cerebellum and neural pathways undergo different evolutions of the degenerative process. Neuropathology, in press
3. González B et al. (2005) Disturbance of motor imagery after cerebellar stroke. Behavioral Neuroscience 119:622-626
4. McFarland DJ, McCane ML, David SV, Wolpaw JR (1997) Spatial filter selection for EEG-based communication. Electroencephalogr Clin Neurophysiol 103:386-394
5. Trejo LJ, Rosipal R et al. (2006) Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials. IEEE Trans Neural Syst Rehabil Eng 14(2):225-229
6. Chatterjee A, Aggarwal V et al. (2007) A brain-computer interface with vibrotactile biofeedback for haptic information. J Neuroeng Rehabil 4:40
7. National Ataxia Foundation, http://www.ataxia.org
Effects of Stray Field Distribution Generated by Magnetic Beads on Giant Magnetoresistance Sensor for Biochip Applications
Kyung Sook Kim 1,2, Samjinn Choi 1,2, Gi Ja Lee 1,2, Dong Hyun Park 1,2, Jeong Hoon Park 1,2, Il Sung Jo 1,2, and Hun-Kuk Park 1,2,3
1 Department of Biomedical Engineering, College of Medicine, Kyung Hee University, Seoul, Korea
2 Healthcare Industry Research Institute, Kyung Hee University, Seoul, Korea
3 Program of Medical Engineering, Kyung Hee University, Seoul, Korea
Abstract–– This study examined the effects of the stray field distribution, which depends on the relative orientation of the magnetic bead and sensor, on the sensing performance of a giant magnetoresistance (GMR) sensor for biochip applications. The beads were magnetized in two different ways: parallel and perpendicular to the sensor surface. In the case of the parallel magnetized bead, it was saturated either in the same direction as the free layer magnetization (-x direction) or opposite to it (+x direction). A significant difference in the magnetization configuration of the free layer was observed with the stray field distribution. The MR values of the sensor were dependent on the stray field distribution. The largest MR value was obtained at Bpara (parallel magnetized bead, +x direction), whereas the smallest MR was observed at –Bpara (parallel magnetized bead, -x direction); a moderate MR value was observed at Bperp (perpendicularly magnetized bead). The dependence of MR on the distance (h) between the bead and sensor also varied with the stray field distribution: the MR values at Bpara and Bperp decreased with increasing h, while the MR at –Bpara increased.
Keywords— Magnetic bead, giant magnetoresistance sensor, stray field, biochip application.
I. INTRODUCTION In recent years, magnetoresistance (MR) sensors based on multilayered giant MR (GMR) or exchange-biased spin valves have attracted considerable attention for biomedical sensor applications on account of their excellent properties, such as high sensitivity, fast and low-volume assay, low cost, and stability [1-4]. GMR sensors are used mainly for detecting biomolecular recognition, such as hybridization of two complementary strands of deoxyribonucleic acid (DNA) or antigen-antibody interactions between proteins [5-7]. The principle of biomolecule detection using a GMR sensor is as follows. First, samples of probe molecules are immobilized on a sensor surface. Second, biotin-labeled analyte molecules are added and conjugated to the probe molecules. Third, streptavidin-coated magnetic beads are introduced and bind specifically to the biotin of the conjugated molecules. In the final step, the magnetic stray field
generated by the magnetic bead is detected as a change in resistance in the GMR sensor. The sensing performance of the GMR sensor is strongly dependent on the conditions of not only the sensor itself but also the magnetic bead. The commonly used magnetic bead consists of iron oxide (γ-Fe2O3) nanoparticles dispersed in a polymer matrix. The bead is superparamagnetic, carrying no net moment in the absence of an external magnetic field, and is easy to manipulate without serious agglomeration problems [3,4]. When the bead is magnetized by an external field, it generates a stray field that is proportional to the magnetic volume and inversely proportional to the distance cubed. Therefore, a large magnetic bead and a small distance between the bead and the GMR sensor are required to achieve high sensitivity. In addition, the sensor performance is also sensitive to the relative orientation of the magnetic bead and sensor [4,8]. The bead can be magnetized in two directions, parallel and perpendicular to the sensor surface. Magnetizing the bead parallel to the sensor surface limits the external field that can be applied, because the sensor should not be saturated; the magnetic bead may therefore not be fully saturated in the external field direction, which results in a low stray field. In order to magnetize the bead perpendicular to the sensor, however, a much larger external field can be applied without saturating the sensor, because a thin layered magnetic film is very difficult to magnetize in the normal direction due to shape anisotropy. Magnetizing the bead in the perpendicular direction is therefore more favorable. According to one report [8], however, a perpendicularly magnetized bead exhibits undesirable sensing behavior, such as sensing instability and a low signal-to-noise ratio. This study examined the dependence of the sensitivity of the GMR sensor on the distribution of the stray field generated by the magnetic bead. The MR changes in the sensor were calculated using a micromagnetic simulation at various stray fields that depend on the orientation of the magnetization of the bead. The bead was magnetized in two different directions, parallel and perpendicular to the sensor surface. For the case of the parallel magnetized bead, two different magnetization directions (+x and –x directions) in
the plane were also considered. The dependence of the GMR value on the stray field strength was also examined by varying the distance between the GMR sensor and magnetic bead.
II. MODEL AND CALCULATION The GMR sensor used in this work is composed mainly of two ferromagnetic layers (CoFe) separated by a conducting layer (Cu). The two ferromagnetic layers have the same lateral dimensions, which were varied from 300 × 150 nm2 to 1800 × 900 nm2 with a fixed aspect ratio (AR), defined as the length-to-width ratio, of 2.0. The thicknesses of the top layer (free layer) and the bottom layer (pinned layer) were fixed at 1.5 nm and 4.0 nm, respectively. The thickness of the Cu was 2.6 nm. The distance between the magnetic bead and the GMR sensor surface (h) was varied from 40 to 300 nm. The magnetic bead was composed of iron oxide (γ-Fe2O3) nanoparticles dispersed in a polymer matrix, and was treated as a dipole under the external field to calculate the stray field. The magnetic bead was 800 nm in diameter and was magnetized in the x- and z-directions. The stray field generated by the magnetic bead was calculated using a commercial program package (the multiphysics finite element method code COMSOL) based on the finite element method in three dimensions (3D). The GMR value was calculated by a micromagnetic simulation using the commercial program code MicroMagus, based on the Landau-Lifshitz equation. Both ferromagnetic layers were divided in the plane using a mesh size of 7.5 × 7.5 nm2. The mesh sizes along the thickness direction were fixed at 1.5 and 4.0 nm, identical to the thicknesses of the magnetic layers. The other magnetic parameters used were: a saturation magnetization (Ms) of 1540 emu/cc and an exchange constant (Aex) of 1.53 × 10-6 erg/cm. The uniaxial anisotropy constants (K) of the free and pinned layers were 26,400 and 664,000 erg/cm3 (easy axis along the long axis), respectively.
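Treating the bead as a point dipole, the stray field follows the standard formula B(r) = (μ0/4π)[3(m·r̂)r̂ − m]/r³. The sketch below evaluates the in-plane component Bx along the film length; it is an illustration only, not the COMSOL model, and the bead moment value and the measurement of h from the bead center are our assumptions:

```python
# In-plane stray field Bx of a bead treated as a point dipole, sampled
# along the length of the free layer. Moment value and geometry details
# are illustrative assumptions, not parameters from the paper.
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    """Field of a point dipole with moment m (A*m^2) at displacement r (m)."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn ** 3

m = np.array([1e-16, 0.0, 0.0])         # bead magnetized along +x (assumed moment)
h = 300e-9 + 400e-9                     # sensor-bead gap plus bead radius (m)
xs = np.linspace(-600e-9, 600e-9, 161)  # film length, as in the model
bx = np.array([dipole_field(m, np.array([x, 0.0, -h]))[0] for x in xs])
print(f"|Bx| is largest at x = {xs[np.argmax(np.abs(bx))] * 1e9:.0f} nm")
```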
III. RESULTS AND DISCUSSION Figure 1 shows the calculation results of a stray field generated by the magnetic bead. Fig. 1(a) shows the three field components of the stray field, Bx, By, and Bz, as a function of x. The center of the film was set to x = 0. The result was calculated from the magnetic bead aligned parallel to the sensor surface under an external field applied in the +x direction. The distance (h) between the magnetic bead and sensor surface was 300 nm. The component of Bx is parallel to the length direction of the ferromagnetic thin film.
Fig. 1 (a) shows the three field components of the stray field generated by the magnetic bead that is magnetized in x-direction, Bx, By, and Bz, as a function of x. (b) shows the averaged value of stray field (Bave) at Bpara (stray field from parallel magnetized bead, +x direction) and Bperp (stray field from perpendicularly magnetized bead) as a function of h
In addition, it is the field component that plays the main role in switching the free layer, because the free layer magnetization points in the +x or –x direction. The component of By, which is almost zero, is parallel to the width direction. The component Bz is perpendicular to the free layer. As shown in Fig. 1(a), the magnitude of Bx reaches a maximum at the center of the film, Bx = 351 Oe at x = 0. The component Bx decreases quickly as x deviates from the center, and reaches zero at x = ±406 nm. The direction of Bx changes from the +x to the –x direction in the regions x > 406 nm and x < -406 nm. This undershoot field region produces a free layer with a complicated domain structure. According to a numerical analysis of a tunneling magnetoresistance (TMR) biosensor using a ferrimagnetic nanoparticle, this undershoot field region can be reduced by a magnetic shield layer [1]. The magnitude of Bx for parallel and perpendicularly magnetized beads was calculated to obtain the h dependence of the stray field. The components of By and Bz were not considered, because they are almost zero or not important due to the geometry of the ferromagnetic film, as shown in Fig. 1(a). The magnitude of Bx was obtained by averaging Bx from x = -600 nm to x = 600 nm, which corresponds to the length of the ferromagnetic film. Fig. 1(b) shows the results for Bave as a function of h from 40 to 300 nm. For both the parallel and perpendicularly magnetized beads, Bave showed a similar dependence on h: Bave decreases with increasing h. For the case of the parallel magnetized bead, Bave is 323 Oe at h = 40 nm and decreases rapidly with increasing h; Bave was 85 Oe at h = 300 nm. In the case of the perpendicularly magnetized bead, Bave was 267 Oe at h = 40 nm, which decreased to 76 Oe at h = 300 nm. From the definition of Bave, the value of Bave for a parallel magnetized bead is larger than that for a perpendicularly magnetized bead. The difference in Bave is approximately 50 Oe. There are several changes in the relative orientation of the magnetic bead, GMR sensor, and external field that
magnetizes the bead. As mentioned previously, the bead has two orientations: parallel and perpendicular to the GMR sensor surface. Considering the magnetization of the free layer of the GMR sensor, the bead can also be magnetized in two different directions in the plane: the same as the magnetization of the free layer, or opposite to it. These orientations of the bead with respect to the sensor generate stray fields with quite different distributions. The effects of the stray field distribution on the sensitivity of the GMR sensor were examined by simulating the magnetization configuration of the free layer under various stray fields.

Fig. 2 The magnetization configuration of the free and the pinned layers at various stray field distributions. (a) shows the initial state of the magnetizations without a magnetic bead. The results of (b)-(d) show the magnetization configurations at the fields of Bpara, –Bpara, and Bperp, respectively. The arrows denote the magnetization direction

Fig. 3 The MR values as a function of h at the stray fields of Bpara, –Bpara, and Bperp. For comparison, the MR value with no magnetic bead was also calculated. The solid lines are only a guide to the eye

Fig. 2(a) shows the initial state of the free and the pinned layers without a magnetic bead. The magnetization configurations shown in Figs. 2(b)-(d) were obtained under the stray fields of Bpara, –Bpara, and Bperp, respectively. The fields Bpara and Bperp denote the stray fields generated by the bead magnetized parallel and perpendicular to the sensor surface, respectively. The –Bpara and +Bpara fields are in the same and opposite directions to the magnetization of the free layer, respectively, because the free layer is initially saturated in the –x direction, as shown in Fig. 2(a). All stray fields were obtained at h = 200 nm. The arrows in the figure denote the magnetization direction. With no bead, the magnetizations of the free and pinned layers were initially coupled antiferromagnetically, as shown in Fig. 2(a), and almost saturated in the -x and +x directions, respectively. In both layers, end domains forming the S-state are seen in the narrow regions. As shown in Figs. 2(b)-(d), the magnetization configurations differ significantly depending on the distribution of the stray fields. For the case of –Bpara in Fig. 2(b), the magnetization direction of the end domains rotates toward either the +y (left side) or –y (right side) direction, but the magnetizations at the center of the film remain in the –x direction,
which is similar to the direction of –Bpara. As a result, the magnetization configuration formed a ∪-shaped domain state. Obviously, this is due to the non-uniformity of –Bpara; namely, the –Bpara acting on the interior of the film is much greater than at the edges where the end domains are located. The magnetization configuration under a Bpara field is quite complicated, as shown in Fig. 2(c). As in the case of –Bpara, the end domains rotate toward the +y (left side) or –y (right side) directions, but the end-domain regions are smaller than in the –Bpara case. On the other hand, the magnetization in the interior region is aligned to the +y (left side) or –y (right side) directions, and the magnetization at the center of the film is rotated to the Bpara direction (+x direction) due to the large magnitude of Bx in this region. In the case of Bperp in Fig. 2(d), the magnetizations were aligned toward the +y direction, showing axial symmetry. The competition between the antiferromagnetic coupling interaction field, the shape anisotropy field, and the stray field is responsible for the complicated magnetization configurations of the free layer. The difference in the magnetization configuration according to the stray field profiles shown in Fig. 2 affects the sensitivity of the GMR sensor, because the MR value is determined by the angle between the magnetizations of the free and pinned layers. Figure 3 shows the calculated results for the MR values as a function of h at stray fields of Bpara, –Bpara, and Bperp. The largest MR value of 1.740 was obtained without the magnetic bead, because the free and the pinned layers are coupled antiferromagnetically and almost saturated, as
shown in Fig. 2(a). Upon applying a stray field, the MR value decreased due to magnetization rotation, as shown in Figs. 2(b)-(d). However, the change differs significantly according to the stray field distribution. As expected, the MR value showed a dependence on h in all cases, but the dependence differed according to the stray field distribution. Two main points can be drawn from the results. First, the largest and smallest MR values were obtained under the Bpara and –Bpara fields, respectively, with a moderate MR value observed in the case of Bperp. The largest MR values for the Bpara and Bperp fields were 1.713 and 1.2542 at h = 40 nm, respectively. In the case of –Bpara, the largest MR of 1.0248 was obtained at h = 300 nm. Second, the MR value showed a similar dependence on h for the Bpara and Bperp fields: the MR value decreased as h increased. However, the h dependence was stronger for Bpara than for Bperp. In the case of Bpara, the MR value at h = 40 nm was 1.713, which decreased rapidly to 1.356 at h = 300 nm; in the case of Bperp, the MR value was 1.2542 at h = 40 nm, which decreased gradually to 1.131 at h = 300 nm. The h dependence of the MR value for –Bpara showed the opposite trend to those of Bpara and Bperp: the MR value increased with increasing h, from 0.8381 at h = 40 nm to 1.025 at h = 300 nm. One interesting feature of the results shown in Fig. 3 is that the MR value was affected significantly by the relative orientation between the stray field and the magnetization of the free layer, as well as by the orientation between the magnetization of the bead and the sensor. The MR value differs significantly with the in-plane direction of the stray field: the difference in MR value between the cases of Bpara and –Bpara is 0.875 and 0.3315 at Bave = 85 and 330 Oe, respectively. Between Bpara (–Bpara) and Bperp, a smaller difference in MR value was observed: the largest difference (Bpara = 85 Oe and Bperp = 76 Oe) is 0.4161, and the lowest (Bpara = 262 Oe and Bperp = 267 Oe) is 0.1488.
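The MR values above follow from the angle θ between the free- and pinned-layer magnetizations. As a rough illustration of that mapping, a minimal sketch using the standard GMR angular dependence R(θ) = R_P + ΔR(1 − cos θ)/2; the resistance values are illustrative placeholders, not parameters from this work:

```python
import numpy as np

# Illustrative device parameters (placeholders, not values from this paper).
R_PARALLEL = 100.0  # resistance with the layers parallel (ohms)
DELTA_R = 15.0      # parallel-to-antiparallel resistance change (ohms)

def gmr_resistance(theta_rad: float) -> float:
    """Standard GMR angular dependence: R = R_P + dR * (1 - cos(theta)) / 2."""
    return R_PARALLEL + DELTA_R * (1.0 - np.cos(theta_rad)) / 2.0

# The initial antiferromagnetically coupled state (theta ~ 180 deg) gives the
# largest MR; stray-field-induced rotation of the free layer lowers it.
for deg in (180, 135, 90):
    print(f"theta = {deg:3d} deg -> R = {gmr_resistance(np.radians(deg)):.1f} ohm")
```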
IV. CONCLUSIONS

A micromagnetic computer simulation was carried out to examine the effects of the stray field distribution generated by a magnetic bead on the sensing performance of a giant magnetoresistance (GMR) sensor for biochip applications. Two different bead orientations with respect to the sensor surface were considered: parallel and perpendicular. The parallel-magnetized beads were saturated in the same or the opposite direction to the free-layer magnetization. The
magnetization configuration differed according to the distribution of the stray field, and the change in MR depended on the stray field distribution. The largest and smallest MR values were obtained at Bpara and –Bpara, respectively, with a moderate MR value observed at Bperp. The MR values at Bpara and Bperp decreased with increasing h, while the MR at –Bpara increased.
ACKNOWLEDGMENT

This study was supported by a research fund from the Seoul R&BD Program (grant #CR070054).
The corresponding author:
Author: Hun-Kuk Park
Institute: Kyung Hee University
Street: #1 Hoeki-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [email protected]
Electrostatic Purification of Nucleic Acids for Micro Total Analysis Systems

E. Hoppmann and I.M. White

Fischell Department of Bioengineering, University of Maryland, College Park, MD

Abstract— Nucleic acids (such as DNA and RNA) play a central role in biological systems, carrying the instructions that govern the function of all living organisms. Many of the most promising applications of lab-on-a-chip technology, such as infectious disease detection, gene expression profiling, and DNA sequencing and amplification, involve processing these carriers of genetic information. While decades of work have been dedicated to moving processes like these on-chip, the important precursor step of isolating nucleic acids from samples has been largely overlooked. Fully integrating sample preparation on-chip will increase processing speed while reducing operator error and sample contamination as well as labor- and reagent-related costs. To date, the typical approach for implementing nucleic acid purification in a microchannel format has been to translate the commonly used solid-phase extraction method, which relies on reagent changes to achieve immobilization and release of DNA. By using electrostatic control of DNA, we are able to avoid the reagents (including PCR inhibitors) and complicated valving that this approach requires. We microfabricate chips using photolithography to create gold microelectrodes on glass substrates, upon which we bond a PDMS microchannel fabricated using soft lithography. Through COMSOL simulations we optimize our electrode geometry to maximize DNA capture efficiency. Designs are qualitatively evaluated using fluorescence microscopy. For quantification, we measure both the DNA removed from a sample (capture efficiency) and the DNA that can be released back into solution (elution efficiency) using UV spectrophotometry. By dramatically improving the ease and flexibility of nucleic acid purification, this enhanced microfluidic module can be integrated into existing microfluidic designs, improving how researchers approach on-chip nucleic acid processing and diagnostics.
Keywords— Microfluidics, micro total analysis systems, DNA extraction, separation systems.

I. INTRODUCTION

Nucleic acids (such as DNA and RNA) play a central role in biological systems, carrying the instructions that govern the function of all living organisms. A key factor driving advances in medical and biological research is our ability to analyze and understand these nucleic acids, and thus understand biological systems at the molecular level. By using lab-on-a-chip technology to enable low-cost, high-throughput quantitative molecular analysis tools (gene expression profiling, DNA sequencing and amplification), it becomes possible to effectively study complex molecular systems in biology. In the clinical environment, as well as in forensic science, these same tools enable affordable molecular analysis with sample-in, answer-out capabilities, holding unparalleled diagnostic potential. While molecular diagnostics can currently be performed on the bench top, doing so has many disadvantages. Performing molecular diagnostics by hand is labor intensive (and therefore expensive), involves many manual steps in which error can be introduced, and requires large volumes of costly reagents. By moving these processes on-chip, the potential for operator error is reduced, parallelization is enabled, greater analysis speeds become possible, and costs are dramatically reduced. While many molecular analysis techniques are being successfully moved on-chip, nearly all of the designs that have been demonstrated require the sample preparation to be performed off-chip, which greatly reduces or eliminates many advantages offered by the chip-based format. If an effective method can be developed to extract nucleic acids directly from a sample, the dream of a fully integrated on-chip diagnostic system will be considerably closer to reality. For this reason, we are examining the electrostatic purification of nucleic acids, which would enable simple, reagent-free extraction of nucleic acids from a sample and can directly feed these purified nucleic acids to downstream processes.

II. BACKGROUND

Micro total analysis systems (µTAS), or lab-on-a-chip systems, have experienced rapid development over the past 20 years. These systems offer many advantages over their large-scale counterparts: reduced reagent consumption, minimal sample volumes, speed, multiplexing capabilities, amenability to mass production, portability, and reduction of operator error, among others. One area of research in µTAS that holds particular promise is on-chip genetic analysis [1]. For many applications (forensics laboratories, clinical laboratories, DNA sequencing, gene expression profiling, etc.), a critical on-chip step is the capture, purification, and pre-concentration of nucleic acids. Genetic analysis is used for a wide variety of applications, and any technique that uses genetic information must purify nucleic acids (DNA/RNA) from the raw sample at some
point in the process. Current bench-top techniques for performing nucleic acid purification involve many manual steps (centrifugation, pipetting), are slow, and rely on the operator to perform tedious steps in a repeatable and reliable manner. Because of its many potential advantages, effective on-chip nucleic acid purification has been a goal for many researchers. Almost all on-chip nucleic acid purification devices rely on two key facts. First, nucleic acids readily adsorb to silica in the presence of a chaotropic agent; this phenomenon is hypothesized to rely on electrostatic forces, dehydration, and intermolecular hydrogen bonding [2]. Second, nucleic acids are negatively charged, which enables the possibility of electrostatic control, either through an applied electric field or through attraction to positively charged molecules in the system. In bench-top nucleic acid preparation, solid-phase extraction (SPE) protocols are prevalent. These SPE protocols use a silica binding matrix in combination with buffers containing chaotropic agents; the bound nucleic acids can then be eluted into a different buffer. The abundant use of SPE protocols on the bench top made developing on-chip SPE an obvious first step, and in the past 10 years researchers have worked on different methods of integrating a solid phase on-chip. Most groups have investigated nucleic acid purification using silica binding phenomena [1, 3]. Initial studies attempted to directly replicate bench-top SPE by packing a silica extraction phase in a microchannel; this was done by creating a weir against which the beads would be trapped and then filling the channel [4]. This approach immediately achieved extraction efficiencies comparable to bench-top techniques. However, it has many problems, most notably the inability to pack the channel with beads in a repeatable manner and the requirement for specific loading and elution buffers. Follow-up studies began to make improvements: first, by using a branching network of silica channels, one group eliminated the need to pack beads onto a microchip. Then, by coating the microchannel with chitosan, they took advantage of chitosan's moderate pKa to control its charge with buffer solutions, electrostatically capturing and releasing DNA [5]. Despite improving the reliability of these chips, this approach still suffers from the requirement of buffers with specific pH for loading and elution. The desire to eliminate this requirement, which demands complicated valving and introduces agents that are potentially damaging to downstream processes (such as PCR inhibitors), led groups to investigate how to avoid the need for specific buffers entirely. A few groups have investigated using electrostatic control for
nucleic acid purification, which largely eliminates the requirements for specific buffers and valving, and in fact enables buffer changes between capture and elution if desired [6, 7]. However, success thus far remains limited (efficiency remains low). By directly using electrostatic control both to drive DNA to the surface of the capture electrode and to protonate the capture matrix (chitosan), we combine the best of both approaches while avoiding the issues that come with buffer changes. This greatly reduces the complexity of the system and allows for easier and more practical integration into larger lab-on-a-chip systems.
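As a back-of-envelope illustration of why an applied potential can compete with the channel flow, consider the electrophoretic drift of DNA toward the capture electrode. The mobility below is a free-solution literature ballpark and the gap is an assumed geometry, not a measurement from this work; electrode polarization and double-layer screening will reduce the effective field, so this only indicates the scale:

```python
# Back-of-envelope electrophoretic drift of DNA toward the capture electrode.
# All numbers are illustrative assumptions, not measurements from this paper.
MU_DNA = 3.8e-8   # free-solution DNA mobility, m^2 V^-1 s^-1 (literature ballpark)
APPLIED_V = 2.0   # capture potential used in this work (volts)
GAP_M = 50e-6     # assumed electrode gap (m)

field = APPLIED_V / GAP_M   # crude uniform-field estimate, V/m
drift = MU_DNA * field      # drift speed toward the electrode, m/s
print(f"E ~ {field:.1e} V/m -> drift ~ {drift * 1e3:.1f} mm/s")
```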
III. EXPERIMENTAL

A. Device Fabrication

The electrodes for DNA capture (see Fig. 1a) are fabricated by depositing gold atop a chromium adhesion layer on a soda-lime glass substrate. To create the electrodes, the desired configuration is first patterned using 1813 photoresist and photolithography; the unwanted metal is then selectively etched away using a wet etch.
Fig. 1 (A) Electrodes for nucleic acid capture and release, placed in a microchannel in the path of flow, viewed from above through the PDMS. (B) Electrostatic capture of DNA with electrodes in the channel, viewed from the side
To package the system, a polydimethylsiloxane (PDMS) channel is fabricated using a standard molding procedure. First, the desired channel geometry is patterned on a silicon wafer using 1813 photoresist and photolithography. After development and a hard bake, the wafer is anisotropically etched to the desired depth (60 µm in our case) using deep reactive ion etching (DRIE). After the remaining photoresist is stripped, this provides a mold for the PDMS channels. After coating the wafer with
silicone spray to prevent permanent adhesion, PDMS is poured onto the wafer to the desired thickness and baked for one hour at 50°C. Finally, the PDMS channel is covalently bonded to the glass/electrode substrate through plasma bonding (150 mTorr, 60 sccm O2, 25 W, 45 s), and fluidic access is achieved by attaching glass capillaries to the channels.

B. Electrode Surface Modification

We experiment with both bare gold electrodes and electrodes modified with chitosan. Chitosan surface modification is intended to aid retention in two ways. First, we expect the chitosan to provide a matrix in which the DNA becomes entangled and is thus removed from the flow stream. Second, the many amine groups along the chitosan chain have a pKa of ~6.3 [8], which means the chains can switch, around physiological pH, between being strongly positively charged (attracting DNA) and neutral [9]. While taking advantage of the pKa of chitosan to attract DNA would normally require buffer changes (indeed, chitosan-coated silica has been used in this manner before [5]), we can generate a strong pH gradient in the vicinity of the electrode by applying a potential. Applying a positive potential then not only attracts the DNA through electrostatic interactions but also protonates the chitosan by generating a locally acidic region [10] (see Fig. 1b). Applying a negative potential both drives the DNA away from the surface and de-protonates the chitosan. The surface modification is performed as follows. A syringe pump is connected to the capillary attached to the PDMS channel. A solution of 0.5% (w/v) DHLA in 200-proof ethanol is withdrawn through the packaged microfluidic chip at a rate of 0.1 µL/min (to prevent the ethanol from evaporating through the PDMS) for 5 hours. The chip is then rinsed with ethanol for 20 min. For crosslinking, 200 mM EDC and 200 mM NHS in acetic acid buffer (5% by volume acetic acid in 200-proof ethanol) are withdrawn through the chip at a rate of 0.1 µL/min for 10 min. Following a 2-min acetic acid buffer rinse, a 1% solution of chitosan is flowed through the chip for 20 min. Lastly, the chip is rinsed for 60 min with 18.2 MΩ DI water.
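The pH-switching behavior described above follows from simple acid-base equilibrium. A minimal sketch, assuming ideal Henderson-Hasselbalch behavior for the amine groups (using the pKa cited above):

```python
PKA_CHITOSAN = 6.3  # amine pKa reported for chitosan [8]

def protonated_fraction(ph: float, pka: float = PKA_CHITOSAN) -> float:
    """Fraction of chitosan amines in the charged -NH3+ state at a given pH
    (Henderson-Hasselbalch relation for a weak base)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# A locally acidic region near the anode charges the chains; near-neutral
# bulk pH leaves them mostly uncharged.
for ph in (4.0, 6.3, 7.4):
    print(f"pH {ph}: {protonated_fraction(ph):.1%} of amines protonated")
```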
C. Experimental Procedures

As seen in Fig. 1, DNA solution flows from left to right, with a capture potential of +2 V applied to the center electrode using a National Instruments DAQ. As the solution flows over the electrode, DNA is both focused toward the capture electrode and drawn down to the surface. We test our system both qualitatively, using fluorescence microscopy, and quantitatively, by using a spectrophotometer to examine eluted DNA.

To provide a consistent DNA target for these initial experiments, a DNA ladder (DNA ranging from 100 bp to 1 kb, in 100 bp increments), typically used as a ruler for gel electrophoresis, is used. This provides a sufficiently high concentration of DNA to allow easy visualization. For visualization, we combine 1 µL of 1000x SYBR Green I in DMSO with 10 µL of DNA ladder solution and allow it to incubate on ice for 1 hr. Then, as we flow the solution through the chip, the DNA can be visualized through the PDMS using a fluorescence microscope with a green fluorescent protein (GFP) filter set. For quantitative analysis, we run unlabeled DNA through the chip while applying a +2 V potential. Then, reversing the potential while flowing DI water, we elute any captured DNA into the water. The eluted DNA can be compared to the stock DNA solution using spectrophotometry, by comparing the magnitudes of their 260 nm absorption peaks.
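A minimal sketch of that comparison, using the standard conversion of 1 A260 unit ≈ 50 µg/mL for double-stranded DNA; the absorbance readings and the equal-volume assumption are hypothetical placeholders, not measured values from this work:

```python
EXT_DSDNA = 50.0  # ug/mL of dsDNA per A260 unit (standard conversion, 1 cm path)

def dsdna_ug_per_ml(a260: float) -> float:
    """Approximate dsDNA concentration from a 260 nm absorbance reading."""
    return EXT_DSDNA * a260

# Hypothetical readings for illustration (equal volumes assumed throughout):
stock = dsdna_ug_per_ml(0.80)     # stock ladder solution
effluent = dsdna_ug_per_ml(0.20)  # sample after flowing over the energized electrode
eluate = dsdna_ug_per_ml(0.45)    # DNA released into DI water at reversed potential

capture_eff = 1.0 - effluent / stock        # fraction removed from the sample
elution_eff = eluate / (stock - effluent)   # fraction of captured DNA recovered
print(f"capture {capture_eff:.0%}, elution {elution_eff:.0%}")
```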
IV. RESULTS

Fig. 2 presents a fluorescence image of DNA (fluorescently labeled with SYBR Green I and viewed with a GFP filter set) that has been focused on the center electrode from a solution flowing from left to right. This focusing of DNA occurs on a relatively short time scale and results in the distribution of DNA shown below. After the applied potential is released, the DNA is no longer focused and is able to flow downstream. Reversing the potential drives the DNA toward the counter-electrode.
Fig. 2 Fluorescence microscopy image of focused DNA (labeled with SYBR Green I) in a microchannel. Channel width is 500 µm; inter-electrode spacing is 50 µm
Potentials ranging from 0.5 V to 2 V have been used for DNA focusing/capture. Increasing the potential improved the electrode's ability to retain the DNA, as indicated by the
observed fluorescence. However, increasing the potential beyond 2 V results in bubbles due to the electrolysis of water, which can be both damaging to the chip and disruptive to its intended function. An applied potential of 2 V is used in the experiment shown in Fig. 2.
V. SUMMARY

Since silica-based solid-phase extraction is commonly used for bench-top nucleic acid purification, it was logical to translate this technique directly onto a chip for initial studies. However, doing so brings significant disadvantages. While buffer exchanges are not an issue on the bench top, on-chip buffer exchanges require complicated valving. Additionally, these buffers can inhibit other important on-chip processes, such as PCR. By using electrostatic control to purify nucleic acids, we can bring significant advantages to on-chip purification systems, greatly simplifying their construction and operation while enabling universal applicability of the technique, since no inhibitors are introduced to the sample. Using fluorescence microscopy, we have found that electrostatic control of nucleic acids can readily be achieved using simple gold electrodes in a microfluidic channel. However, we have also found that even at very low flow rates it is difficult to prevent the DNA from being pulled downstream. To mitigate this problem, we are experimenting with surface chemistry to deposit a chitosan layer, as well as looking at ways to alter the microstructure of the device to encourage DNA retention. In the future, once high-efficiency extraction of DNA has been established, it will be important to determine the condition of the extracted DNA, since past studies have shown that bare electrodes without a surface coating can damage DNA [7]. First, we would like to examine extracted DNA by selecting appropriate primers and amplifying a segment of interest; this can be compared to a control using gel electrophoresis. For further verification, a qPCR (quantitative real-time polymerase chain reaction) assay can be used to determine the fraction of undamaged molecules [11]. Already, many processes of high value to clinical, forensic, and molecular biology labs have been demonstrated on-chip. However, on-chip sample preparation has not yet been refined to the point where it can be practically implemented. By developing an effective, mass-producible on-chip purification system, many promising applications of lab-on-a-chip technologies can be realized.
ACKNOWLEDGMENTS

The authors are grateful for financial support from the Fischell Department of Bioengineering, the Maryland Department of Business and Economic Development, and the National Institute of Biomedical Imaging and Bioengineering.
REFERENCES

1. Wen J, Legendre LA, et al. (2008) Purification of nucleic acids in microfluidic devices. Analytical Chemistry 80:6472–6479
2. Melzak K, Sherwood C, et al. (1996) Driving forces for DNA adsorption to silica in perchlorate solutions. Journal of Colloid and Interface Science 181:635–644
3. Wen J, Guillo C, et al. (2006) DNA extraction using a tetramethyl orthosilicate-grafted photopolymerized monolithic solid phase. Anal Chem 78:1673–1681
4. Breadmore MC, Wolfe KA, et al. (2003) Microchip-based purification of DNA from biological samples. Anal Chem 75:1880–1886
5. Cao W, Easley CJ, et al. (2006) Chitosan as a polymer for pH-induced DNA capture in a totally aqueous system. Analytical Chemistry 78:7222–7228
6. Shaikh FA, Ugaz VM (2006) Collection, focusing, and metering of DNA in microchannels using addressable electrode arrays for portable low-power bioanalysis. Proceedings of the National Academy of Sciences 103:4825
7. Lee M, Lee J, et al. (2008) Reversible capture of genomic DNA by a Nafion-coated electrode. Analytical Biochemistry 380:335–337
8. Park JW, Choi KH, Park KK (1983) Acid-base equilibria and related properties of chitosan. Bull Korean Chem Soc 4:68–72
9. Wu LQ, Gadre AP, et al. (2002) Voltage-dependent assembly of the polysaccharide chitosan onto an electrode surface. Langmuir 18:8620–8625
10. Macounova K, Cabrera CR, et al. (2000) Generation of natural pH gradients in microfluidic channels for use in isoelectric focusing. Anal Chem 72:3745–3751
11. Ayala-Torres S, Chen Y, et al. (2000) Analysis of gene-specific DNA damage and repair using quantitative polymerase chain reaction. Methods 22:135–147
Author: Ian White
Institute: University of Maryland
Street: 2216 Kim Engineering Building
City: College Park, MD 20742
Country: USA
Email: [email protected]
Applicability of Surface Enhanced Raman Spectroscopy for Determining the Concentration of Adenine and S-Adenosyl Homocysteine in a Microfluidic System

Omar Bekdash1, Jordan Betz1, Yi Cheng2, and Gary W. Rubloff2,3

1 Fischell Department of Bioengineering, University of Maryland, College Park, MD, USA
2 Institute for Systems Research, University of Maryland, College Park, MD, USA
3 Department of Materials Science and Engineering, University of Maryland, College Park, MD, USA

Abstract— Surface-enhanced Raman spectroscopy (SERS) has shown great potential as a highly sensitive detection technique for biologically relevant compounds in small-volume scenarios. The applicability of using SERS to differentiate between enzymatic reaction products and their precursors, and thereby determine their relative concentrations, is investigated here. The enzyme Pfs, important for bacterial quorum sensing, converts S-adenosyl-L-homocysteine (SAH) to adenine by means of a hydrolysis reaction, so the substrate and product have very similar structures. Solutions containing SAH and adenine in ratios ranging from 0:1 to 1:0 at a total concentration of 1 mM were investigated by Raman spectroscopy on a silver SERS substrate. The results show that peaks at 530, 560, 733, and 860 cm-1 all increased in relative intensity as the ratio of adenine to SAH grew larger. While the relative increases differed among these peaks, the differences are not yet sufficient to consistently distinguish and quantify the relative amounts of adenine and SAH in a system. Since the SERS phenomenon is highly dependent on molecular proximity to silver nanostructures, these results suggest that the predominant binding modes of adenine and SAH are similar; for example, the adenine moiety present in both molecules is responsible for their binding to the silver surface.

Keywords— Surface-enhanced Raman spectroscopy (SERS), microfluidics, adenine, SAH.
I. INTRODUCTION

Raman spectroscopy has proven itself a very useful tool for identifying individual molecules in a solution [1]. However, quantitative analysis of the concentrations of substances has not been fully explored, even with the advent of surface-enhanced Raman spectroscopy (SERS). SERS has shown itself to be the most likely candidate for such investigations, as detection of even a single molecule of analyte has been reported [2]. Small-molecule detection in biological systems is of great interest: small molecules are often involved in intercellular signaling cascades and represent metabolites indicative of cellular processes, but are often present in very low concentrations. A non-invasive, non-destructive method for interrogating cellular metabolism and signaling would be a huge boon to many fields of research, and SERS is an attractive candidate to serve as the detection method of choice.
The ability to obtain spectra within the aqueous environment of a microfluidic channel is a great advantage of Raman spectroscopy and SERS over other detection methods that require more extensive sample preparation. The small volume of reagent within a microfluidic system allows for a more efficient means of performing assays, especially with difficult-to-isolate enzymes and metabolites. The ability of SERS to detect low levels of compounds of interest within small volumes created a natural coupling with microfluidics. Development of on-chip detection methods has gained much ground in recent years; previously, off-chip methods such as chromatography [3] and mass spectrometry [4] were the predominant detection methods [5]. By obtaining on-chip in situ spectra, continuous monitoring can be performed in a nondestructive fashion. Quorum sensing is a bacterial communication method initiated by signaling molecules called 'auto-inducers', which can result in large-scale, sometimes undesirable, bacterial growth [6, 7]. A rapid and highly sensitive detection method for these molecules would allow monitoring of biofilm formation and progression, and is the first step toward methods for preventing unwanted biofilm growth and developing a means to combat it. We hypothesized that SERS could be used to differentiate between adenine and a related molecule, S-adenosyl-L-homocysteine (SAH). These two molecules are of interest as components of the AI-2 bacterial quorum sensing signaling pathway, with adenine an enzymatic byproduct of auto-inducer synthesis. The experimental results presented were obtained by using SERS to examine different concentrations of adenine and SAH mixed together. By successfully detecting a given amount of adenine or SAH, we can determine the metabolic flux through the auto-inducer synthesis pathway, and subsequently the degree of biofilm growth, without directly observing or disturbing biofilm formation.
II. MATERIALS AND METHODS

Fabrication of SERS substrates in a microfluidic device— A microfluidic device was fabricated by oxygen plasma
bonding of a cured polydimethylsiloxane (PDMS) layer with microfluidic trenches onto a photolithographically patterned glass slide. Before bonding, a 100 nm thick aluminum layer was deposited onto the glass slide by thermal evaporation, and rectangular aluminum electrodes (1 mm × 0.5 mm) were patterned by photolithography and subsequent wet etching. The PDMS layer with the microfluidic channel was fabricated by mixing PDMS gel with curing agent (10:1 ratio), pouring the mixture onto a silicon wafer with patterned SU-8 stripes, and baking at 65 ºC for 2 hrs. The PDMS layer was then peeled off the silicon mold and treated with oxygen plasma together with the patterned glass slide for permanent bonding. SERS substrates were fabricated by flowing 50 mM Tollens' reagent (ammoniacal silver nitrate solution) through the channel. The reaction between the Tollens' reagent and the aluminum electrodes was allowed to proceed for 10 minutes before the channel was flushed with water.

Preparation of adenine/SAH mixtures— Solutions of adenine and SAH were first prepared individually at a concentration of 1 mM by dissolving each in DI water. The mixtures were created by mixing the pure adenine and SAH solutions in ratios ranging from 0:1 to 1:0 at a total volume of 10 mL. For example, a 40% adenine solution contains 4 mL of 1 mM adenine and 6 mL of 1 mM SAH.

SERS measurement of solutions— Raman spectra of adenine/SAH mixtures were acquired using a Horiba Jobin Yvon LabRAM HR Raman microscope. Solutions were loaded into the microfluidic channel and spectrum acquisition was initiated immediately after introduction to the channel. Acquisition spots on the aluminum electrode were selected randomly, with seven spots selected at each concentration. The spectra were acquired using the internal HeNe laser (633 nm) with a 500 µm hole diameter, a 100 µm slit width, and a 10× objective lens. Raman spectra were obtained over the range of 400 cm-1 to 1800 cm-1 with a signal integration time of 30 seconds at an individual acquisition time of 3 seconds. Instrument control, spectrum acquisition, background subtraction, and analysis were done with the LabSpec software (v. 5.25.15, Horiba).

Fig. 1 Raman spectra for each solution mixture were taken at seven random points on the SERS substrate and then averaged. Data are presented from the perspective of adenine, with 0% adenine and 100% adenine indicating a purely SAH and a purely adenine solution, respectively. (Plot: intensity versus Raman shift, 710–760 cm-1, for 0–100% adenine)

III. RESULTS AND DISCUSSION

As seen in past research, adenine is highly Raman active and produces clearly identifiable peaks [8]. The spectra
selected for display span the peaks that demonstrated dramatic changes in intensity with increasing adenine concentration. Because of the variability inherent in the SERS substrate, several spectra were acquired at points across the substrate and averaged to form the overall spectra shown in Figure 1. The difficulty lies in reproducing a SERS substrate such that the enhancement experienced by the molecule is consistent. The proximity and interactions of a molecule with the SERS surface greatly affect the Raman signal [9]: the slightest variation in particle size, shape, or location can result in enhancement-factor changes of several orders of magnitude [9], owing to the changes in interaction and adsorption that the molecule under study undergoes when in contact with the SERS surface. Vibrational spectroscopy is highly dependent on the orientation of the molecule on the SERS surface, and the intensities of the Raman spectra reflect that. The intensity of each individual peak was determined by finding the maximum intensity over the wavenumber region where the peak is present. Slight changes in the Raman shift are well documented in the literature [10], and the maximum value over the peak's range was used to account for this variability. The maximum intensity of each peak at a given wavenumber was plotted for each solution in Figure 2.
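A minimal sketch of that peak-picking step; the window bounds are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical +/- 10 cm^-1 windows around the bands tracked in Fig. 2.
PEAK_WINDOWS = {530: (520, 540), 560: (550, 570), 733: (723, 743), 860: (850, 870)}

def peak_intensities(shift_cm1: np.ndarray, intensity: np.ndarray,
                     windows=PEAK_WINDOWS) -> dict:
    """Maximum intensity within each window, tolerating the small
    peak-position drift documented in the literature [10]."""
    result = {}
    for center, (lo, hi) in windows.items():
        mask = (shift_cm1 >= lo) & (shift_cm1 <= hi)
        result[center] = float(intensity[mask].max())
    return result
```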
14000 12000 10000 8000
800
6000
600 400
4000
200
2000
0
0 0
20
40
60
80
100
Peak Intensity for 733 cm
530 cm-1 560 cm-1 860 cm-1 733 cm-1
1600
Peak Intensity
16000
-1
Raman Peak Intensities 1800
120
Pecentage of Adenine Present in Solution
Fig. 2 Intensities of the selected peaks that most clearly showed dramatic increases in intensity as the adenine percentage increased. The whole-molecule ring-breathing mode at 733 cm-1 is increasingly present as adenine concentration increases. However, none of the peaks exhibits a monotonic increase in intensity with increased adenine concentration
Fig. 3 Structural representations of adenine (a) and S-adenosyl-L-homocysteine, SAH (b). SAH is composed of an adenine moiety and an SRH moiety, which comprises ribose and homocysteine
However, the close structural similarity between adenine and SAH (Figure 3) results in very similar Raman spectra. Differentiating between the two molecules based solely upon peak intensity has been demonstrated using the peaks presented in Figure 2; however, this is merely proof of the predominant presence of one species
over the other. It is still under investigation whether the intensity trends for the selected Raman shifts can be used to quantify the amount of SAH relative to the concentration of adenine. Figure 2 clearly shows a correlation between adenine presence and peak intensity, and yet the peak intensities do not increase monotonically as adenine concentration increases. Regardless of the marked increase in intensity, this is not enough to guarantee an accurate assessment of the relative concentrations of the two molecules. Analysis of the spectra in light of the peak assignments from Giese and McNaughton [11] indicates that the peaks present are attributable to the adenine moiety present in both molecules, and thus the peak intensities are a combination of both the SAH and adenine contributions. The difference in size between adenine and SAH can explain much of the concentration-dependent peak intensity behavior demonstrated in Figure 2. Higher concentrations of adenine allow more adenine molecules to adsorb to the SERS substrate per unit area as compared with SAH. Steric clashing due to the S-ribosyl homocysteine (SRH) moiety of SAH prevents other molecules from adsorbing to the surface and contributing to the signal intensity. Conspicuously absent from the spectra containing SAH were peaks contributed by the SRH moiety. We expected to see C-S stretching from the homocysteine moiety, C-O-C ether stretching from the ribose moiety, and C-O stretching from both homocysteine and ribose. Yet none of these peaks were present in the spectra. Furthermore, the SERS enhancement effect is highly dependent upon the distance between the SERS substrate and the analyte. Since these vibrational modes were not present in the SERS spectra, we conclude that the SRH moiety of SAH is not in close proximity to the SERS substrate. This is consistent with previous reports that compounds containing an adenine moiety adsorb to the SERS substrate through that moiety [8], and it leads us to conclude that SAH binding to the substrate is primarily governed by the adenine moiety as well. Hence any SERS enhancement resulting from substrate-molecule interactions will predominantly favor vibrations present in adenine. Accordingly, we also conclude that SERS is not the proper analytical technique for rapid, non-destructive, in situ determination of the relative concentrations of adenine and SAH.
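One way to see why the quantification fails: if adenine and SAH contributed independently to the signal, mixture spectra would follow a simple linear combination and band intensities would vary monotonically with composition. A sketch of that hypothetical additive model, for comparison against data like Fig. 2:

```python
import numpy as np

def additive_mixture(frac_adenine: float,
                     s_adenine: np.ndarray,
                     s_sah: np.ndarray) -> np.ndarray:
    """Spectrum predicted if each species contributed in proportion to its
    solution fraction (i.e., no competition for adsorption sites)."""
    return frac_adenine * s_adenine + (1.0 - frac_adenine) * s_sah
```

The non-monotonic intensities in Fig. 2 depart from this additive prediction, consistent with the two species competing, via the same adenine moiety, for the same silver surface sites.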
IV. CONCLUSIONS

Real-time, in situ determination of the relative concentrations of adenine and SAH would provide a great deal of insight into quorum-sensing-mediated bacterial communication and the extent of subsequent biofilm formation. There is still a need for a rapid, sensitive, non-destructive, and
accurate means of differentiating between SAH and adenine such that a simple calculation can provide the degree of conversion without the need for lengthy, off-line analysis. Additionally, this method would ideally be compatible with a microfluidic setup such as the one described, to create an efficient, real-time sensor. The speed and sensitivity of SERS are very appealing in this respect. Four peaks were identified that correlate with increasing adenine concentration, yet these correlations are not sufficient to consistently distinguish relative concentrations of adenine and SAH. The spectra we obtained were remarkably similar regardless of the relative concentrations of adenine and SAH. Furthermore, the lack of any characteristic peaks corresponding to vibrations from the non-adenine portion of SAH indicates that the adenine moiety of SAH is responsible for adsorption to the silver SERS substrate, consistent with literature reports on other adenine-containing molecules. Steric hindrance due to the large size of SAH explains the concentration-dependent intensity behavior of the adenine peaks in the SAH-adenine mixture solutions. Thus we conclude that SERS cannot be used to reliably distinguish two molecules whose mode of binding to the SERS substrate is identical.
ACKNOWLEDGEMENTS

The authors would like to thank Dr. Xiaolong Luo and Dr. Susan Buckhout-White for valuable discussions. All microfabrication was done in the Maryland Nanocenter FabLab at the University of Maryland in College Park. This work is supported by the Robert W. Deutsch Foundation and the NSF-EFRI grant NSF-SC0352441.

REFERENCES

1. Baena JR, Lendl B (2004) Raman spectroscopy in chemical bioanalysis. Curr Opin Chem Biol 8:534–539
2. Kneipp K, Wang Y, Kneipp H, et al. (1997) Single molecule detection using surface enhanced Raman scattering (SERS). Phys Rev Lett 78:1667–1670
3. Ohno K, Tachikawa K, Manz A (2008) Microfluidics: Applications for analytical purposes in chemistry and biochemistry. Electrophoresis 29:4443–4453
4. Astorga-Wells J, Vollmer S, Bergman T, Jörnvall H (2007) Formation of stable stacking zones in a flow stream for sample immobilization in microfluidic systems. Anal Chem 79:1057–1063
5. Chen L, Choo J (2008) Recent advances in surface-enhanced Raman scattering detection technology for microfluidic chips. Electrophoresis 29:1815–1828
6. Jayaraman A, Wood TK (2008) Bacterial quorum sensing: signals, circuits, and implications for biofilms and disease. Annu Rev Biomed Eng 10:145–167
7. Camilli A, Bassler BL (2006) Bacterial small-molecule signaling pathways. Science 311:1113–1116
8. Kundu J, Neumann O, et al. (2009) Adenine- and adenosine monophosphate (AMP)-gold binding interactions studied by surface-enhanced Raman and infrared spectroscopies. J Phys Chem C 113:14390–14397
9. Kneipp K, Kneipp H, Itzkan I, Dasari R, Feld M (1999) Ultrasensitive chemical analysis by Raman spectroscopy. Chem Rev 99:2957–2976
10. Zheng J, Zhao Y, Li X, et al. (2003) Surface enhanced Raman scattering of 4-aminothiophenol in assemblies of nanosized particles and the macroscopic surface of silver. Langmuir 19:632–636
11. Giese B, McNaughton D (2002) Surface enhanced Raman spectroscopic and density functional theory study of adenine adsorption to silver surfaces. J Phys Chem B 106:101–112
Author: Gary Rubloff
Institute: Institute for Systems Research
Street: University of Maryland
City: College Park, MD
Country: USA
Email: [email protected]
Integration of Capillary Ring Resonator Biosensor with PDMS Microfluidics for Label-Free Biosensing

Farnoosh Farahi and Ian White

University of Maryland, Fischell Department of Bioengineering

Abstract— In this work, we integrate a capillary ring resonator (CRR) biosensor with polydimethylsiloxane (PDMS) microfluidics using soft lithography. The CRR performs refractive index detection for chemical or biomolecule sensing and has previously been used for protein and DNA sequence detection. Initial implementations of the CRR posed practical difficulties in fabrication and in integration with other microfluidic functions. Microfluidic systems have been demonstrated for many on-chip capabilities, such as sample/reagent mixing, concentration gradient preparation, cell culture and lysis, and pre-concentration of samples. Combining the CRR biosensor with microfluidic systems allows for a single-chip microfluidic system, eliminating complex steps and complicated techniques.

Keywords— Microfluidics, ring resonator, biosensor, label-free, refractive index.
I. INTRODUCTION

Recently there has been a great amount of interest in the development of biosensors that do not require fluorescent or chemiluminescent labels. These "label-free" biosensors include surface plasmon resonance biosensors [1, 2], interferometric biosensors [3], and optical ring resonator biosensors [4]. The capillary ring resonator (CRR) biosensor is a class of optical ring resonator in which microfluidic sample delivery is inherently integrated with the optical label-free biosensor. The CRR has been demonstrated for the detection of refractive index [5], protein [6], DNA sequence [7], and pathogens [8]. The CRR is conceptually diagrammed in Fig. 1. It consists of a short glass capillary with a thin wall. When light is coupled into the glass wall in the transverse direction, it is guided around the circumference via total internal reflection. At certain frequencies this causes a standing wave to form, and these resonant frequencies result in a high-intensity field around the circumference. If the capillary wall is sufficiently thin, the resonant optical modes have an evanescent field that extends beyond the inner surface of the capillary into the sample passing through it. It is this evanescent field that enables the refractive-index-based biosensing mechanism. As biomolecules (e.g., proteins, nucleic acids) are captured at the surface, the local refractive index changes. Because the guided mode interacts
with these molecules, the effective refractive index experienced by the mode changes. As a result, the condition for optical resonance changes. These changes can be measured in real time using a low-cost diode laser (such as a telecommunications laser) and a photodetector. The CRR biosensor can be developed for a number of clinical and research applications. Because of its ability to detect specific proteins or DNA sequences, it can be used to detect biomarkers in samples such as serum, saliva, or urine. Because of its potential low cost, the device may be appropriate for point-of-care disease diagnosis based on known biomarkers. In addition, it can be used as a generic platform for measuring the reaction between any two molecules (one immobilized, one in solution), and thus can be a useful tool in biochemistry and molecular biology laboratories for the quantified study of events at the molecular level. Although there have been several academic publications using this biosensing technique, there is a large gap to overcome to make the method practical for commercialization. The production of the devices must be improved so that it is a repeatable process. In addition, the devices should be packaged using soft lithography to adapt the biosensor to microfluidic sample preparation. Here we report the use of simple techniques to produce the capillary and to integrate the biosensor into a microfluidic platform based on polydimethylsiloxane (PDMS).
Fig. 1 Conceptual diagram of the capillary ring resonator (CRR)
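A minimal numerical sketch of the resonance condition just described, m·λ = 2πr·n_eff, and of the first-order shift Δλ/λ ≈ Δn_eff/n_eff; the radius and effective index are illustrative assumptions, not measured parameters of the device built here:

```python
import numpy as np

RADIUS_M = 50e-6   # assumed capillary radius (~100 um OD / 2)
N_EFF = 1.45       # assumed effective index of the wall-guided mode

def resonant_wavelengths(lo=1549e-9, hi=1551e-9):
    """Wavelengths near 1550 nm satisfying m * wavelength = 2*pi*r*n_eff."""
    path = 2.0 * np.pi * RADIUS_M * N_EFF
    m = np.arange(np.ceil(path / hi), np.floor(path / lo) + 1)
    return path / m

def resonance_shift(wavelength_m: float, delta_n_eff: float) -> float:
    """First-order resonance shift when binding changes the effective index."""
    return wavelength_m * delta_n_eff / N_EFF

print(resonant_wavelengths())          # comb of modes within the scan window
print(resonance_shift(1550e-9, 1e-5))  # ~10.7 pm shift for dn_eff = 1e-5
```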
II. OPTOFLUIDIC CAPILLARY RING RESONATOR BIOSENSOR ASSEMBLY
A. Capillary Drawing

A quartz tube (O.D. 1.2 mm, I.D. 0.90 mm, 10 cm length) is used to create the capillary in the CRR biosensor. The
tube is mounted at each end with clamps to motorized stages. Two carbon dioxide laser beams are directed at a spot on the capillary from opposing sides to heat the glass tube while the motorized stages draw out a thin capillary. During the drawing process, one stage pulls the capillary quickly while the other slowly feeds the capillary into the laser spot (heating zone) at a ratio of 50:1. The resulting product is a thin glass tube (O.D. ~100 µm, I.D. ~88 µm, 70 mm length) that is used as the capillary ring resonator. A LabView program designed in our laboratory controls the drawing process, including the laser power and drawing speed. Fig. 2 shows the laser and drawing apparatus setup.
Fig. 2 Diagram of the capillary drawing apparatus

The LabView program that controls the laser pulling apparatus sets five parameters: warm-up time, pulling distance, laser power, pulling speed, and pulling ratio. To optimize the program, each parameter was varied and the resulting capillary dimensions were analyzed by inspecting the capillary cross section. In each category, the value that produced the most circular capillary with thin walls was chosen. The warm-up time is set to 8 seconds, which was found to be sufficient for the glass tube to initially heat up. The pulling distance (the length of the thin tube) is set to 70 mm. The laser power must be high enough that the glass tube can be drawn out without collapsing on itself; the optimal laser power was found to be 6.2 W. A faster pulling speed is desired for efficiency; however, a pulling speed that is too fast will not allow the glass tube to heat up enough to be drawn out fully. Fig. 3 shows cross sections of two capillaries drawn at different speeds: the capillary on the left was drawn at 15 mm/s, while the capillary on the right was drawn at 100 mm/s. Different ranges of each parameter result in significantly different capillaries. The pulling ratio is the relationship between the pulling of one end of the glass tube and the feeding (pushing in) of the other end. The optimal pulling ratio is 50.
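Collecting the optimized values quoted above in one place, a sketch of the drawing recipe; the dictionary structure is illustrative (the authors' LabView program is not shown), and the pulling-speed entry is an assumption, since the text only states that 15 mm/s and 100 mm/s were compared:

```python
# Optimized capillary-drawing parameters quoted in the text; the structure
# itself is illustrative, not the authors' LabView implementation.
CAPILLARY_DRAW_RECIPE = {
    "warm_up_time_s": 8,           # pre-heat before pulling begins
    "pulling_distance_mm": 70,     # length of the drawn thin section
    "laser_power_W": 6.2,          # CO2 laser power during the draw
    "pulling_speed_mm_per_s": 15,  # assumption: slower of the two speeds in Fig. 3
    "pulling_ratio": 50,           # fast-pull stage relative to slow-feed stage
}

for name, value in CAPILLARY_DRAW_RECIPE.items():
    print(f"{name}: {value}")
```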
Fig. 3 Cross-sectional view of capillaries under the microscope. The capillary on the left was drawn at a speed of 15 mm/s, while the capillary on the right was drawn at 100 mm/s. The two cross sections are significantly different in shape and dimensions

B. Integration with PDMS Microfluidics

After the capillary is drawn, we integrate it into a PDMS channel using soft-lithography techniques. Each end of the capillary is fixed into a PDMS channel while the center of the capillary is suspended like a bridge; the suspended region is the biosensing region. To create the mold for the channels in the PDMS, electrical tape and a Petri dish are used. Three layers of electrical tape are stacked and cut into four separate square pieces, which are then taped onto a Petri dish. The pieces of tape are then made much narrower using a razor with a ruler as a guide. The tape serves as the mold that forms the channels in the PDMS (Fig. 4). PDMS is then mixed at a 10:1 ratio, poured into the mold, and placed in the oven to cure. Once the PDMS is cured, a razor is used to cut a rectangular shape around the channels and the PDMS is slowly peeled out. The resulting PDMS piece has a channel formed by the electrical tape.
Fig. 4 The replication of the PDMS microchannel

The PDMS piece with the channel is cut transversely at the center of the channel, and each half is placed onto a microscope glass slide with the channel facing upward. The glass capillary is then placed into each channel, with about
1 inch of the middle portion of the capillary exposed. A second piece of PDMS is then cut out and used to cover the channels. A low-cost corona generating system, which generates oxygen plasma near the surface, is used to bond the two pieces of PDMS together permanently [9]. The capillary is then sealed into each microchannel. We first attempted to place a drop of PDMS where the capillary exits the newly formed microchannel, but the low surface tension of PDMS allowed it to travel down the microchannel and block the capillary inlet/outlet. Because silicone rubber has a higher surface tension than uncured PDMS, temperature-curing silicone rubber is used instead to fully seal the channel. The next step in creating the CRR biosensor is assembling the connectors through which samples flow in and out of the channel. The connectors are glass chromatography capillaries (Polymicro, Inc.) that have been bent at a 90º angle for easier access. The carbon dioxide laser apparatus is used to heat and bend the capillary connectors: one end of the capillary is weighed down using folded heavy double-sided tape, while the other end is clamped down. The lasers are turned on for a few seconds, and the torque from the weight at the end of the capillary bends the connector to a ~90º angle (Fig. 5).
Fig. 5 The image on the left shows the chromatography capillary clamped down at one end, with heavy double-sided tape weighing down the other end. The image on the right shows the same chromatography capillary after the carbon dioxide laser was used to heat a point on the capillary; the heavy double-sided tape creates the torque that bends the chromatography capillary

At each end of the channel, a 0.35 mm Uni-Core punch is used to create a hole in the top piece of PDMS. The bent chromatography capillaries are then placed in the holes; the capillary size matches the hole size, making a press-fit. The connectors are then sealed into place using temperature-curing silicone rubber. A fully assembled device is shown in Fig. 6.
Fig. 6 CRR biosensor integrated with PDMS

C. Biosensor Setup

The carbon dioxide laser setup uses separate clamps for fabricating the glass capillary and the fiber-optic cable. Both control programs are created in LabView, but each system has its own control software. Unlike the capillary puller, the fiber-pulling program slowly pulls on both ends of the fiber-optic cable and uses a focusing lens on one laser. Both lasers move back and forth over a small portion of the fiber-optic cable. Initially, the laser without the focusing lens is on and moves back and forth for 10 seconds to warm up the fiber-optic cable. The clamps then slowly begin to move apart. As the fiber-optic cable gets thinner, the power of each laser increases exponentially to its maximum set power. Using a microscope glass slide, a stand is created for the fiber taper. Fig. 7 shows the final biosensor setup, in which the CRR biosensor and the fiber taper are placed perpendicular to one another. The fiber taper is mounted on a 3-dimensional micro-positioner for precise contact between the CRR and the fiber taper.
Fig. 7 Final CRR biosensor setup. The fiber taper is placed perpendicularly above the CRR biosensor using a 3-dimensional micro-positioner and is in direct contact with the CRR biosensor
D. Measurements
To perform a biosensing experiment, the tapered fiber cable is connected to a diode laser (distributed Bragg reflector, 1550 nm) on one end and to a photodetector (PIN) on the other. A LabView program created in our lab applies a current to the laser in a triangle waveform at approximately 1 Hz. The increasing current causes the output power of the laser to increase; additionally, it changes the output wavelength by changing the refractive index of the laser cavity. In our experiments, we scan the laser by approximately 300 pm, which is significantly broader than the resonance mode (typical Q-factors are approximately one million). The LabView program also records the optical intensity at the output of the tapered fiber cable. When the laser wavelength is on resonance with the capillary, an intensity dip appears at the photodetector. Each time the laser scans, we can observe the relative position of the resonance wavelength. The LabView program records the traces, which enables us to track the resonance wavelength over time after the experiment.

III. CAPILLARY RESONANCE RESULTS

Using the setup described above, we can visualize the optical resonant modes in the capillary biosensor and observe the shift in resonance over time. Fig. 8 shows a recorded spectrum of a capillary ring resonator in our laboratory. This particular capillary was etched on the inside using hydrofluoric acid down to a wall thickness of less than 5 microns, which enables high sensitivity. Each dip in the spectrum indicates a resonant mode. Some of the resonant modes in this capillary have full-width-half-max linewidths of less than 5 pm. Thus, the quality factor of the resonant mode (with the sample inside the capillary) is (1550 nm) / (5 pm) = 310,000.

Fig. 8 A recorded spectrum of a capillary ring resonator
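A one-line check of that quality-factor arithmetic:

```python
def q_factor(resonance_nm: float, fwhm_pm: float) -> float:
    """Loaded quality factor Q = wavelength / linewidth (units matched to pm)."""
    return resonance_nm * 1e3 / fwhm_pm

print(q_factor(1550, 5))  # 310000.0, matching the estimate above
```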
IV. SUMMARY
In this work, we have demonstrated a simple and practical method for producing quartz capillary resonators and tapered fiber-optic cables, and we have developed a new technique for integrating the capillary ring resonator into a PDMS microfluidics platform using common soft-lithography techniques. As a result, the biosensor can now be integrated into a highly functional lab-on-a-chip system with other automated sample handling and processing functions.
ACKNOWLEDGMENT

The authors are grateful for financial support from the Fischell Department of Bioengineering, the University of Maryland General Board, and the National Institute of Biomedical Imaging and Bioengineering.
REFERENCES

1. Homola J (2006) Surface Plasmon Resonance Based Sensors. Springer, Berlin
2. Piliarik M, Párová L, Homola J (2009) High-throughput SPR sensor for food safety. Biosensors and Bioelectronics 24:1399–1404
3. Ymeti A, Greve J, Lambeck PV, Wink T, Hovell SWFM van, Beumer TAM, Wijn RR, Heideman RG, Subramaniam V, Kanger JS (2007) Fast, ultrasensitive virus detection using a Young interferometer sensor. Nano Letters 7:394–397
4. Vollmer F, Arnold S, Keng D (2008) Single virus detection from the reactive shift of a whispering-gallery mode. Proceedings of the National Academy of Sciences 105:20701–20704
5. White IM, Oveys H, Fan X (2006) Liquid-core optical ring-resonator sensors. Optics Letters 31:1319–1321
6. Zhu H, White IM, Suter JD, Dale PS, Fan X (2007) Analysis of biomolecule detection with optofluidic ring resonator sensors. Optics Express 15:9139–9146
7. Suter JD, White IM, Zhu H, Shi H, Caldwell CW, Fan X (2008) Label-free quantitative DNA detection using the liquid core optical ring resonator. Biosensors and Bioelectronics 23:1003–1009
8. Zhu H, White IM, Suter JD, Zourob M, Fan X (2008) Opto-fluidic micro-ring resonator for sensitive label-free viral detection. The Analyst 133:356–360
9. Haubert K, Drier T, Beebe D (2006) PDMS bonding by means of a portable, low-cost corona system. Lab on a Chip 6:1548–1549
Surface Plasmon-Coupled Emission from Rhodamine-6G Aggregates for Ratiometric Detection of Ethanol Vapors

R. Sai Sathish, Y. Kostov, and G. Rao

University of Maryland Baltimore County / Center for Advanced Sensor Technology and Department of Chemical and Biochemical Engineering, Technology Research Center, Baltimore, USA

Abstract— Surface plasmon-coupled emission (SPCE), a phenomenon associated with nanometer-thick metal films, has been used extensively on account of its enhanced spectral resolution and its directional and polarized fluorescence. Here we report the detection of ethanol vapors based on the ability of SPCE to spectrally resolve individual emissions from Rhodamine-6G (R-6G) aggregates encapsulated in chitosan. This study opens doors to a broad spectrum of next-generation ratiometric SPCE sensors based on high-resolution spectral determination of nano-environments in a multi-species system.

Keywords— Surface plasmon-coupled emission, Rhodamine-6G aggregates, ethanol vapor detection, ratiometric sensor, multi-species system.
I. INTRODUCTION

Fluorescence spectroscopy is an increasingly important analytical tool for highly sensitive analysis and detection in many fields of chemistry, biology, biochemistry and medicine. Further improvement of detection limits can be achieved using surface plasmon-coupled fluorescence emission (SPCE) [6], which facilitates the study of single molecules [1,2] and helps in handling complex sample matrices like whole blood [3,4] and muscle [5]. SPCE is a powerful technique that results from interactions of fluorophores with thin metal films [7,8]. SPCE has several advantages compared to other fluorescence-based methods [9-13]: (i) over 50% of the total fluorescence couples to the surface plasmons, attaining a 10-14 fold fluorescence intensity enhancement; (ii) highly polarized and directional emission simplifies the light collection; (iii) SPCE is a near-field phenomenon (within 200 nm of the metal film), offering a significant reduction in sample volume and resulting in strong background suppression. These properties make the technique very attractive for the development of low-cost sensing instrumentation. However, the approach requires both low-cost plasmonic substrates and low-cost optoelectronics for surface plasmon-coupled fluorescence (SPCF) detection. Recently, excitation of SPCE from Rhodamine-B using a light-emitting diode (LED) [14] was demonstrated. Plasmon-supporting solution-deposited thin silver films on glass and plastic surfaces [15] have also been created. Furthermore, the inherent spectral
dispersive property of SPCE [8,16-18] was utilized to resolve fluorescence emission from molecular multiplexes, using SPCE as a high-resolution plasmonic resonant filter [19]. The angular dependence of fluorescence emission on coupling with surface plasmons results in enhanced separation between the emissions from the monomer and higher-order aggregates of Rhodamine-6G (R-6G) in polyvinyl alcohol (PVA). This was achieved in the native state, without requiring specialized cryogenic and/or high-pressure platforms. The dimerization and disaggregation of R-6G have been used in fluorescence-based sensor systems for temperature [20], humidity [21] and organic vapors [22]. For these applications, highly sensitive technology that is compact, robust and amenable to mass production is highly desirable. The use of SPCE offers such a technique [23]. It offers many advantages over conventional electrochemical probes, including noninvasive, reagentless operation with high sensitivity and ease of miniaturization [24]. Furthermore, fluorescence techniques that use ratiometry are particularly robust and impervious to probe photobleaching, sample positioning and fluctuations in source intensity [25]. The ratiometric method measures the ratio of fluorescence intensities at two different wavelengths, thus overcoming the drawbacks of intensity-based measurements and increasing the sensitivity, selectivity and dynamic range of the method. In the current study we have exploited the dimer-to-monomer transformation of R-6G in chitosan films as a ratiometric SPCE sensor for the detection of ethanol vapor. The widespread use of ethanol as a fuel, fuel additive or fuel-cell feedstock [26], as an organic solvent in a number of industries, as well as its routine quantification in breath analysis [27], justifies further development of optical techniques for its measurement.
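Since the ratiometric readout described above reduces to tracking the intensity ratio at the monomer and dimer emission wavelengths, the bookkeeping is simple; a minimal sketch (the array names and the sampling are illustrative, not from the paper):

```python
import numpy as np

def monomer_dimer_ratio(wavelengths_nm, intensities,
                        lam_monomer=581.0, lam_dimer=611.0):
    """Ratio of fluorescence intensities at the R-6G monomer (581 nm)
    and dimer (611 nm) emission wavelengths."""
    i_mono = intensities[np.argmin(np.abs(wavelengths_nm - lam_monomer))]
    i_dim = intensities[np.argmin(np.abs(wavelengths_nm - lam_dimer))]
    return i_mono / i_dim

# For spectra recorded at 0.5 h intervals during ethanol exposure:
# ratios = [monomer_dimer_ratio(wl, s) for s in spectra]
# A rising ratio indicates dimer-to-monomer conversion by ethanol vapor.
```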
II. EXPERIMENTAL

A. Sample Preparation

Thin silver films were deposited onto BK7 glass substrates using a commercially available silvering kit (Peacock Laboratories Inc.). A detailed procedure for the fabrication of these SPCE substrates by a low-cost
wet-chemistry approach has been presented in our previous work [15]. Silver films of 47 ± 3 nm uniform thickness were solution-deposited onto flat BK7 microscope slides and subsequently spin-coated with a 10 ± 3 nm thick film of 10 mM R-6G in chitosan/acetic acid = 2.5/2.5 v/v at 5000 rpm for 30 s [28]. The thickness and surface roughness of the solution-deposited silver films and the fluorophore/polymer layer were determined by AFM measurements.

B. Fluorescence Measurements

The silver-coated substrates were attached to a BK7 hemi-cylindrical lens with glycerol (n = 1.47) as the index-matching fluid and placed on a 360° rotary platform. The capability to monitor changes in these nano-environments at any angle relative to the incident beam was achieved by mounting the PMT and monochromator on the rotation stage. This arrangement enhanced the sensitivity of our apparatus, eliminating the need for an optical fiber. A hand-made, three-sided demountable cuvette with a trough for holding a microliter volume of ethanol was attached to the sample side. The cuvette was plugged (a cork wrapped in polythene film forms a convenient closure) to saturate the chamber with ethanol vapors. A schematic representation of the sample configuration is shown in Fig. 1. Excitation of the sample was achieved in the reverse Kretschmann (RK) configuration [29] with a 405 nm, 3 mW laser diode [TE-cooled module, Photonics Products]. Uniform excitation of the fluorophores across the glass substrate from the air side was achieved in the RK geometry, and the SPCE emission was observed on the BK7 prism side of the substrate.
Fig. 1 Schematic representation of the sample configuration and the angle-dependent SPCE emission
The SPCE emission was collected from 0–90° and 270–360°, and the free-space signal from 90–180°, with respect to the front of the prism (Fig. 1). The polarized emission spectra were obtained through a 450 nm long-pass filter and a polarizer placed ahead of the detector of an ISS K2 (Champaign, IL) fluorometer. The angular distribution of the light radiated by SPCE was calculated with the commercially available TFCalc 3.5 software [Software Spectra, Inc., Portland, OR], based on equations that are well described in the literature [30].
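The TFCalc computation referenced above is, in essence, a three-layer Fresnel (attenuated-total-reflection) calculation. The sketch below reproduces that kind of reflectivity-versus-angle curve for a BK7/silver/air stack; the silver permittivity is an assumed placeholder and the thin dye/chitosan layer is omitted for brevity, so the numbers are only indicative:

```python
import numpy as np

def spr_reflectivity(theta_deg, wavelength_nm, eps, d_metal_nm):
    """p-polarized reflectivity of a prism/metal/superstrate stack
    (Kretschmann geometry) from the three-layer Fresnel formula.
    eps = (eps_prism, eps_metal, eps_superstrate)."""
    k0 = 2 * np.pi / wavelength_nm
    kx = k0 * np.sqrt(eps[0]) * np.sin(np.radians(theta_deg))
    kz = [np.sqrt(k0**2 * e - kx**2 + 0j) for e in eps]

    def r_p(i, j):  # p-polarization interface reflection coefficient
        return (eps[j] * kz[i] - eps[i] * kz[j]) / (eps[j] * kz[i] + eps[i] * kz[j])

    phase = np.exp(2j * kz[1] * d_metal_nm)
    r = (r_p(0, 1) + r_p(1, 2) * phase) / (1 + r_p(0, 1) * r_p(1, 2) * phase)
    return np.abs(r)**2

angles = np.linspace(40.0, 50.0, 1000)
# BK7 prism, silver film (assumed eps ~ -13 + 0.9j near 600 nm), air above
R = spr_reflectivity(angles, 581.0, (1.515**2, -13.0 + 0.9j, 1.0), 47.0)
print(f"reflectivity dip near {angles[np.argmin(R)]:.2f} deg")
```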
III. RESULTS AND DISCUSSION

At high concentrations in film, R-6G tends to form aggregates, typically dimers and trimers, or even higher-order aggregates. These aggregates are easily disrupted and converted into monomers by the presence of ethanol vapors. Each aggregate exhibits different fluorescence properties, with the intensity of the emission strongly related to the concentration of the aggregates. Hence, we chose to follow the ratio of the monomer to dimer intensity as a way to detect the presence of ethanol vapors. However, as the emission peaks are closely spaced, they overlap significantly, making them difficult to resolve spectrally. This can affect the performance of organic vapor detection systems based on the dimerization and disaggregation of R-6G. Cryogenic temperatures and higher-pressure conditions can suppress the spectral broadening of the peaks, but they favor aggregate formation, which shifts the equilibrium and perturbs the system's native state. In this work, another approach was selected [31].
Fig. 2 Reflectivity curves calculated for the three-layer system shown in Fig. 1 for BK7 glass at emission wavelengths of 581 nm and 611 nm
In order to deconvolute the spectra and explicitly obtain the spectral signatures of the individual components of a multi-species system, real-time SPCE was employed. The use of ratiometric SPCE exploits the platform's sensitivity and resolution for the noninvasive detection of ethanol vapor by monitoring individual fluorescence intensities at 581 nm and 611 nm, overcoming the limitations of intensity-based measurements. The measurements in this study were made using instruments that are readily available in most laboratories.
Fig. 4 Free-space (FS) spectra of 10 mM R-6G in chitosan in the presence of 50% ethanol vapors at 0.5 hr intervals
Fig. 3 Normalized free-space (FS) and SPCE spectra of 10 mM R-6G in chitosan. The top portion shows photographs taken at a 44° SPCE angle with a 550 nm long-pass filter
Thin films of 10 mM R-6G in chitosan spin-coated on SPCE substrates were used as the ethanol gas sensor. The reflectivity curve shown in Fig. 2 is calculated for the three-layer system represented in Fig. 1 with a sample-layer thickness of 10 nm. The calculated reflectivity minima, corresponding to the SPR angles for the system at the emission wavelengths of 581 nm (monomer) and 611 nm (dimer), are 44.90° and 44.35°, respectively. This is in excellent agreement with the experimentally observed plasmon resonance dip for the system at 44° (Fig. 3). The normalized spectra of 10 mM R-6G obtained in the free-space (FS) and SPCE regions are presented in Fig. 3. The FS spectrum resembles the spectra obtained using conventional fluorescence techniques. It is a congested spectrum that results from the closely spaced and difficult-to-separate fluorescence emissions from the monomer and dimer aggregates of R-6G in chitosan. The FS spectrum has strong emissions at 580 nm and 610 nm; however, several scattering artifacts show up as shoulders in the spectrum. In comparison, a deconvoluted spectrum is obtained in real time by viewing the system at 44° in the SPCE region, which presents R-6G dimers at time t = 0. The disaggregation of the dimers to yield monomers is complete at 5 hrs under constant exposure to 50% ethanol vapors
and at 3 hrs for 100% ethanol vapors. This was accompanied by an increase in fluorescence intensity, as R-6G in ethanol solution has a high quantum yield of 0.95 [32]. No change in the spectrum is observed, however, when the thin film of R-6G in chitosan is exposed to 100% water vapor for 5 hrs in the absence of ethanol vapors. Chitosan forms supramolecular structures, or netpoints, through intermolecular hydrogen bonding after evaporation of the solvent [33]. The chitosan film is, however, suggested to be more hydrophobic in the dry state when coated on a glass plate surface [34]. This alters the microenvironment of the chitosan thin film in comparison to the bulk solution and, in turn, its permeability to ethanol vapors.
Fig. 5 SPCE spectra of 10 mM R-6G in chitosan with observation at a 44° SPCE angle in the presence of 50% ethanol vapors at 0.5 hr intervals
Because the sensor response is diffusion-limited by the film's finite permeability, sufficient exposure to ethanol vapors is required before the dimer-to-monomer conversion is complete. As a mere 0.55° difference in the observation angle shifts the color of the SPCE emission from red (611 nm for the dimer at 44.35°) to yellow (581 nm for the monomer at 44.90°), we maintained a single SPCE observation angle of 44° to continuously monitor the dimer-to-monomer conversion in the presence of 50% ethanol vapors. Figs. 4 and 5 were obtained by wavelength scans of the FS and SPCE regions at 0.5 hr intervals. In contrast to the FS spectra, the deconvoluted SPCE spectra facilitate visual monitoring of ethanol vapor concentrations without employing any additional dispersive optics.
IV. CONCLUSIONS

Selective wavelength dispersion of light using the SPCE phenomenon was exploited for ethanol vapor detection, based on the ability of ethanol to alter R-6G aggregate formation. This SPCE-based ratiometric sensing system can be extended to monitor several organic vapors and physical parameters that affect the nano-environments of R-6G dye-doped polymers. The phenomenon is a vital tool for lab-on-a-chip applications in the screening of chemical species in biotechnology, medicine and the environment, with extraordinary spectral resolution capabilities.
ACKNOWLEDGMENT

This work was supported by funding from the National Science Foundation, Division of Bioengineering and Environmental Systems, grant award: NSF-BES 0517785.
REFERENCES

1. Stefani FD, Vasilev K, Bocchio N, Stoyanova N, Kreiter M (2005) Phys Rev Lett 94:023005-4
2. Gryczynski Z, Gryczynski I, Matveeva EG, Calander N, Grygorczyk R, Akopova I, Bharill S, Muthu P, Klidgar S, Borejdo J (2007) Proceedings of SPIE 8:64440G.1–64440G.11
3. Matveeva EG, Gryczynski Z, Malicka J, Lukomska J, Makowiec S, Berndt KW, Lakowicz JR, Gryczynski I (2005) Anal Biochem 344:161–167
4. Aslan K, Zhang Y, Geddes CD (2009) Anal Chem 81:3801–3808
5. Borejdo J, Gryczynski Z, Calander N, Muthu P, Gryczynski I (2006) Biophys J 91:2626–2635
6. Lakowicz JR, Malicka J, Gryczynski I, Gryczynski Z (2003) Biochem Biophys Res Commun 307:435–439
7. Lakowicz JR (2004) Anal Biochem 324:153–169
8. Gryczynski I, Malicka J, Gryczynski Z, Lakowicz JR (2004) Anal Biochem 324(2):170–182
9. Ray K, Chowdhury MH, Lakowicz JR (2008) Chem Phys Lett 465(1-3):92–95
10. Aslan K, McDonald K, Previte MJR, Zhang Y, Geddes CD (2008) Chem Phys Lett 464(4-6):216–219
11. Ray K, Szmacinski H, Enderlein J, Lakowicz JR (2007) Appl Phys Lett 90:251116-3
12. Kostov Y, Smith DS, Tolosa L, Rao G, Gryczynski I, Gryczynski Z, Malicka J, Lakowicz JR (2005) Biotechnol Prog 21:1731–1735
13. Smith DS, Kostov Y, Rao G (2007) Sens Actuator B 127:432–440
14. Smith DS, Kostov Y, Rao G, Gryczynski I, Malicka J, Gryczynski Z, Lakowicz JR (2005) J Fluoresc 15(6):895–900
15. Sai Sathish R, Kostov Y, Smith D, Rao G (2009) Plasmonics 4:127–133
16. Benner RE, Dornhaus R, Chang RK (1979) Opt Commun 30:145–149
17. Kaneko F, Nakano T, Terakado M, Shinbo K, Kato K, Kawakami T, Wakamatsu T (2002) Mater Sci Eng C 22:409–412
18. Nakano T, Kobayashi H, Shinbo K, Kato K, Kaneko F, Kawakami T, Wakamatsu T (2001) Mater Res Soc Symp Proc 660:8351–8356
19. Sai Sathish R, Kostov Y, Rao G (2009) Appl Phys Lett 94:223113-3
20. Ghasemi J, Niazi A, Kubista M (2005) Spectrochim Acta Part A 62:649–656
21. Chen H, Farahat MS, Law KY, Whitten DG (1996) J Am Chem Soc 118:2584–2594
22. Malashkevich GE, Poddeneznyi EN, Mel'nichenko IM, Prokopenko VB, Dem'yanenko DV (1998) Phys of the Solid State 40:427–431
23. Kermis HR, Kostov Y, Rao G (2003) Analyst 128:1181–1186
24. Chan C, Lo W, Wong K (2000) Biosens Bioelectron 15:7–11
25. Ge X, Tolosa L, Rao G (2004) Anal Chem 76:1403–1410
26. Arico A, Creti P, Antonucci P (1998) Electrochem Solid-State Lett 1:66–68
27. Chou SM, Teoh LG, Lai WH, Su YH, Hon MH (2006) Sensors 6:1420–1427
28. McIlwee HA, Schauer CL, Praig VG, Boukherroub R, Szunerits S (2008) Analyst 133:673–677
29. Kretschmann E, Raether H (1968) Z Naturforsch Teil A 23:2135–2136
30. Raether H (1997) Physics of Thin Films, Advances in Research and Development, Academic Press, New York
31. Sai Sathish R, Kostov Y, Rao G (2009) Appl Opt 48(28):5348–5353
32. Kubin RF, Fletcher AN (1982) J Luminescence 27:455–462
33. Clasen C, Wilhelms T, Kulicke WM (2006) Biomacromolecules 7:3210–3222
34. Ding L, Fang Y, Jiang L, Gao L, Yin X (2005) Thin Solid Films 478:318–325
Author: Dr. Yordan Kostov
Institute: University of Maryland Baltimore County
Street: 1000 Hilltop Circle
City: Baltimore, MD 21250
Country: USA
Email: [email protected]
Formation of Dendritic Silver Substrates by Galvanic Displacement for Surface Enhanced Raman Spectroscopy

Jordan Betz1, Yi Cheng2, Omar Bekdash1, Susan Buckhout-White3, and Gary W. Rubloff2,3
1 Fischell Department of Bioengineering, University of Maryland, College Park, MD, USA
2 Institute for Systems Research, University of Maryland, College Park, MD, USA
3 Department of Materials Science and Engineering, University of Maryland, College Park, MD, USA

Abstract— Surface enhanced Raman spectroscopy (SERS) has become increasingly valuable in recent years as a biosensing technique because it allows nondestructive, in situ identification of molecules present at concentrations of physiological importance. Many methods of fabricating SERS substrates exist, primarily focusing on the formation and distribution of nanoparticles or on producing ordered nanoscale structures. This study reports a method for fabricating dendritic silver substrates on aluminum electrodes by galvanic displacement. Silver nitrate and ammoniacal silver nitrate (Tollens' reagent) solutions were used on thermally evaporated, patterned aluminum surfaces in microchannels. Silver ions displace aluminum from the surface, forming dendritic structures with submicron features resembling snowflakes or lace. The features are highly heterogeneous, with an abundance of submicron voids showing a pronounced linear alignment. These dendritic silver substrates are simple to form and serve as an excellent SERS substrate. Maximum in situ enhancement factors on the order of 10⁶ were obtained for the SERS-active molecule 4-aminobenzenethiol. Thus this method represents a fast, inexpensive, and easy way of forming silver SERS substrates that show excellent enhancement characteristics. This technique opens the door for in situ formation of SERS biochemical sensing sites in packaged microfluidic devices.

Keywords— surface enhanced Raman spectroscopy (SERS), dendritic silver, galvanic displacement, electroless deposition, 4-aminobenzenethiol.
I. INTRODUCTION

Since the first report of surface enhanced Raman spectroscopy (SERS) in 1974 by Fleischmann et al. [1], interest in using Raman spectroscopy for biosensing and biodetection applications has increased greatly. Raman spectroscopy can be performed in aqueous media, is non-destructive, and does not require extensive sample preparation [2]. Additionally, SERS enhancement factors allow for detection down to the single-molecule level [3], allowing even the rarest of biomolecules to be detected. Biological and analytical techniques, including enzyme kinetics [4], micro RNA detection and classification [5], single nucleotide
polymorphism (SNP) genotyping [6], and detection of arsenic contamination in water [7] have been reported using Raman spectroscopy and SERS. These factors have contributed to its adoption for in situ biosensing by many research groups worldwide. Albrecht and Creighton proposed the electromagnetic contribution of surface plasmons to explain the SERS phenomenon [8], and a significant amount of research has been focused on developing new SERS-active substrates which exploit surface plasmon resonance and surface plasmon enhancement. Microfabrication techniques such as electron beam lithography and reactive ion etching have been used to create arrays of highly ordered, reproducible SERS substrates [9]. This also allows for the precise spacing of nanoscale features, allowing researchers to improve interparticle electromagnetic coupling effects [10]. While these techniques produce highly regular structures of a desired pattern, the prohibitively high cost of purchasing, maintaining, and operating such equipment has limited the development and use of such micro- and nano-patterned substrates to the small but growing number of labs that have access to such microfabrication facilities. An alternative approach to fabrication of nanoscale structures with high plasmon enhancement has been the use of nanoparticles. Nanoparticles provide the advantages of a large surface area for surface plasmon enhancement and interparticle electromagnetic field coupling [11]. With particle shapes including nanocrescents [12], nanowires [13], and nanocrystals [14], researchers have devised ways to create large electromagnetic field enhancements. These systems often require special treatment and equipment, and particle systems used under flowing conditions would require significant quantities of nanoparticles for continuous, in situ measurement. More recently, interest has arisen in galvanic displacement reactions for depositing nanostructured metals on a surface without the use of external electric fields or intricate fabrication methods. Dendritic structures have been shown to form on both semiconductor [15] and metal [16, 17] surfaces by means of galvanic displacement, and both substrates have reported significant Raman enhancement.
Galvanic displacement results in localized formation of nanostructures, and the nanostructured surfaces can be used under flowing conditions. These advantages have attracted considerable attention to the development of galvanic displacement processes for the formation of new SERS substrates. This study describes the formation of silver dendrites on a patterned aluminum surface for use as a SERS substrate. The method exploits the galvanic reaction between silver ion reduction and aluminum metal oxidation. The silver dendritic structures form rapidly and can be created using fairly common laboratory equipment and reagents. In fact, the original experiments that led to the present research were conducted on commercial aluminum foil wrapped around glass microscope slides. The Raman signal enhancement obtained using these structures is significant and can allow for the detection of biomolecules present at physiological concentrations. Thus, the method described here represents a simple and universally applicable method for fabricating SERS substrates with excellent Raman enhancement characteristics that can be readily integrated into prepackaged microfluidic devices.
II. MATERIALS AND METHODS

A. Silver Dendrite Formation

All reagents used were reagent grade or better. A 50 mM solution of silver nitrate was prepared using silver nitrate (Sigma-Aldrich) and deionized water. The ammoniacal silver nitrate (Tollens' reagent) was prepared by adding ammonium hydroxide (ThermoFisher Scientific) drop-wise to the 50 mM solution of silver nitrate and swirling until the brown precipitate disappeared. The solutions were prepared immediately before use. A 100 nm thick film of aluminum was deposited on a glass slide using aluminum pellets (Kurt J. Lesker Company) in a Metra TEBC 22-26 thermal deposition chamber. Shipley 1813 positive photoresist was used to pattern rectangles 500 µm in length by 100 µm in width. The silver dendrites were formed by placing 100 µL of the silver nitrate or Tollens' reagent solution on separate aluminum surfaces for one hour before being rinsed with deionized water and allowed to dry.

B. Scanning Electron Microscopy Characterization

Images of the silver dendrite structures were taken using a Hitachi SU-70 scanning electron microscope with an accelerating voltage of 10 kV. Elemental composition of the dendrite structures was determined by energy dispersive spectroscopy (EDS) using a Bruker silicon drift detector.
C. Raman Spectroscopy

A 1 mM solution of 4-aminobenzenethiol (Sigma-Aldrich) was prepared using deionized water; dissolution was aided by sonication. 100 µL of the 4-aminobenzenethiol solution was placed on the silver dendrites and allowed to react for 10 minutes before being analyzed by Raman spectroscopy. Raman spectra were obtained using a Horiba Jobin-Yvon LabRam HR-VIS system with the internal HeNe laser at 632.8 nm, a hole diameter of 500 µm, and a slit width of 100 µm. Spectra were acquired over the range of 600 cm⁻¹ to 1700 cm⁻¹. Total signal integration time was 30 seconds, with an individual acquisition time of 3 seconds. LabSpec software (v. 5.25.15, Horiba) was used to control the instrument, acquire spectra, perform background subtraction, and analyze the spectra.
III. RESULTS AND DISCUSSION

A. Characterization of Silver Dendrite Structures

Dendritic silver structures formed on aluminum surfaces patterned on glass slides. Galvanic displacement, taking advantage of the difference in reduction potentials between silver and aluminum, displaced aluminum from the surface and deposited silver in its place. In this process, silver ions were reduced and aluminum was oxidized. Silver nitrate was successful in producing dendritic silver structures, and it was believed that the Tollens' reagent, commonly used to oxidize aldehydes to carboxylic acids in organic chemistry, might produce similar dendritic structures with superior Raman enhancement characteristics due to its decreased reduction potential compared with silver ions. Scanning electron microscopy revealed that the dendritic structures span several orders of magnitude, from the sub-millimeter to the sub-micrometer range. Feature sizes varied greatly as well, and were highly heterogeneous. From the scanning electron micrographs, it appeared as though several nucleation points were involved, and dendritic growth continued until multiple growth fronts overlapped, creating a metal substrate with a pattern of voids that resembles snowflakes or lace. The sharp corners of these voids and the branching points in the dendrites are believed to create areas where the electromagnetic field effects of the surface plasmons are greatly enhanced by the fringing effect. Figure 1a shows a representative scanning electron micrograph of the dendritic structures formed on the aluminum surfaces. The structures were examined by EDS, which provides elemental analysis based on characteristic X-ray energies emitted during electron impact from the electron microscope beam. Figure 1b shows an EDS spectrum of the dendritic structure. The spectrum was truncated at 4 keV for
clarity, but all prominent peaks are shown in the 0 keV to 4 keV range. Silver comprised 88% of the normalized mass, while silicon and oxygen, representing the glass slide, accounted for a combined 9%. The remaining 3% was due to trace metals, with aluminum comprising less than 0.3% of the mass. Discounting the signals from the glass slide (silicon and oxygen peaks), the dendritic structures were highly pure, with over 91% of the material on a per atom basis being silver. As a comparison, one of the voids was also subjected to EDS. Figure 1c shows the EDS spectrum of the void, also truncated at 4 keV for the above reasons. Silicon and oxygen were the predominant species in the void, with trace metals accounting for 20% of the weight and 7% of the atomic composition of the void. Interestingly enough, silver was the second most abundant trace metal after calcium, and outnumbered aluminum by a factor of four. EDS line scans (data not shown) also indicated a sharp delineation between the glass slide and the silver dendrites, and very little aluminum remaining on the patterned surface.

Fig. 1 Scanning electron microscope characterization of silver dendritic structures formed on patterned aluminum. A representative scanning electron micrograph of the dendritic structures and their submicron voids is shown in a). An energy dispersive X-ray spectrum, truncated after 4 keV for clarity, of a point on the silver dendritic structure is shown in b), indicating a high amount of silver metal present. An energy dispersive X-ray spectrum, truncated after 4 keV for clarity, of one of the larger voids is shown in c), indicating that the primary material present is silicon and oxygen (glass). Both spectra show that aluminum and other metals are present only in trace amounts

B. SERS on Dendritic Silver Substrates
Fig. 2 Abbreviated Raman spectrum of 4-aminobenzenethiol, spanning the range of 1000 cm⁻¹ to 1200 cm⁻¹. The red line indicates the maximum enhancement achieved at a hot spot on the silver dendritic structures, and the black line indicates an average of multiple scans at different locations across the substrate. The characteristic peak at 1076 cm⁻¹ corresponds to the C-S vibration, and the signal is enhanced by more than five orders of magnitude on average, and six orders of magnitude at a hot spot

The SERS-active molecule 4-aminobenzenethiol, also known as p-aminothiophenol, was used to evaluate the in situ Raman enhancement of the silver dendritic substrates. 4-aminobenzenethiol was selected for its thiol group, which reacts with silver surfaces. The physisorbed molecules are excellent candidates to experience any SERS effect, as both proposed SERS mechanisms (charge transfer and electromagnetic effects from surface plasmons) require close proximity for signal enhancement. The molecules were allowed to physisorb to the silver dendritic surfaces under static conditions for 10 minutes to ensure saturation of the dendrites. A time series analysis (data not shown) indicated that the Raman signal intensity increased up to 10 minutes of physisorption but reached saturation after that point. Figure 2 shows an abbreviated Raman spectrum of 4-aminobenzenethiol. Peaks consistent with Zheng et al. [11] were observed, although the Raman shifts differed by a few wavenumbers in several cases. Acquisition occurred through the 4-aminobenzenethiol solution, and enhancement factors were calculated in a different fashion than commonly reported in the literature. For this reason, the phrase
“in situ enhancement factor” is used; it represents the ratio of the intensity of the SERS-enhanced signal to the normal Raman signal of the solution. Due to the uncontrolled nature of dendrite formation and the high degree of heterogeneity in the structure of the substrate, the enhancement varies depending on where the laser is focused. This results in “hot spots” of high enhancement occurring randomly throughout the substrate. Spectra were also acquired through a liquid (the 4-aminobenzenethiol solution), which undoubtedly contributes to the inherent variability. However, with a maximum in situ enhancement factor on the order of 10⁶ and an average in situ enhancement factor of 5×10⁵, these silver dendritic structures represent a powerful class of SERS substrates that can be easily fabricated in most laboratories, as the original work on these substrates was performed on commercial aluminum foil wrapped around a glass slide.
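Computed this way, the in situ enhancement factor is just a ratio of band intensities from two measurements; a minimal sketch of that bookkeeping (the band window and array names are illustrative):

```python
import numpy as np

def in_situ_enhancement(raman_shift_cm, sers_counts, solution_counts,
                        band_cm=1076.0, half_window_cm=5.0):
    """Ratio of the SERS-enhanced intensity to the normal Raman
    intensity of the bulk solution at the same band (e.g. the C-S
    stretch of 4-aminobenzenethiol near 1076 cm^-1)."""
    band = np.abs(raman_shift_cm - band_cm) < half_window_cm
    return sers_counts[band].max() / solution_counts[band].max()

# A hot spot gives ~1e6 and the substrate average ~5e5 (values from the text).
```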
IV. CONCLUSIONS

Dendritic silver structures were fabricated on aluminum surfaces by galvanic displacement using silver nitrate or ammoniacal silver nitrate (Tollens' reagent). The method is fast, simple, inexpensive, and does not require specialized equipment or treatment. The silver dendrites were characterized by scanning electron microscopy, revealing a heterogeneous, highly linear arrangement of submicron voids believed to be ideal for SERS. Energy dispersive X-ray spectroscopy revealed highly pure silver dendrites. These dendritic silver structures were used as a SERS substrate for the detection of the SERS-active probe 4-aminobenzenethiol. In situ Raman spectra were recorded through an aqueous medium, yielding a 10⁶ signal enhancement when compared with the 1 mM solution without a SERS substrate. This high degree of enhancement opens the door for on-demand functionalization of SERS substrates for biochemical sensing at physiological concentrations in packaged microfluidic devices.
REFERENCES

1. Fleischmann M, Hendra PJ, McQuillan AJ (1974) Raman spectra of pyridine adsorbed at a silver electrode. Chem Phys Lett 26:163–166
2. McCreery RL (2000) Raman Spectroscopy for Chemical Analysis. Wiley Interscience, New York
3. Kneipp K, Wang Y, Kneipp H, et al. (1997) Single molecule detection using surface enhanced Raman scattering (SERS). Phys Rev Lett 78:1667–1670
4. López-Pastor M, Domínguez-Vidal A, Ayora-Cañada MJ, et al. (2007) Enzyme kinetics assay in ionic liquid-based reaction media by means of Raman spectroscopy and multivariate curve fitting. Microchem J 87:93–98
5. Driskell JD, Seto AG, Jones LP, et al. (2008) Rapid micro RNA (miRNA) detection and classification via surface enhanced Raman spectroscopy (SERS). Biosens Bioelectron 24:917–922
6. Huh YS, Lowe AJ, Strickland AD, et al. (2009) Surface enhanced Raman scattering based ligase detection reaction. J Am Chem Soc 131:2208–2213
7. Mulvihill M, Tao AR, Benjauthrit K, et al. (2008) Surface enhanced Raman spectroscopy for trace arsenic detection in contaminated water. Angew Chem Int Ed 47:6456–6550
8. Albrecht MG, Creighton JA (1977) Anomalously intense Raman spectra of pyridine at a silver electrode. J Am Chem Soc 99:5215–5217
9. Kahl M, Voges E, Kostrewa S, Viets C, Hill W (1998) Periodically structured metallic substrates for SERS. Sens Actuat B 51:285–291
10. Gunnarsson L, Bjerneld EJ, Xu H, et al. (2001) Interparticle coupling effects in nanofabricated substrates for surface-enhanced Raman scattering. Appl Phys Lett 78:802–804
11. Zheng J, Zhao Y, Li X, et al. (2003) Surface enhanced Raman scattering of 4-aminothiophenol in assemblies of nanosized particles and the macroscopic surface of silver. Langmuir 19:632–636
12. Lu Y, Liu GL, Kim J, et al. (2005) Nanophotonic crescent moon structures with sharp edge for ultrasensitive biomolecular detection by local electromagnetic field enhancement effect. Nano Lett 5:119–124
13. Tao A, Kim F, Hess C, et al. (2003) Langmuir-Blodgett silver nanowire monolayers for molecular sensing using surface-enhanced Raman spectroscopy. Nano Lett 3:1229–1233
14. Tao AR, Habas S, Yang PD (2008) Shape control of colloidal metal nanocrystals. Small 4:310–325
15. Song YY, Gao ZD, Kelly JJ, et al. (2005) Galvanic deposition of nanostructured noble-metal films on silicon. Electrochem Solid State Lett 8:C148–C150
16. Wang Z, Zhao Z, Qiu J (2008) A general strategy for the synthesis of silver dendrites by galvanic displacement under hydrothermal conditions. J Phys Chem Solids 69:1296–1300
17. Gutés A, Carraro C, Maboudian R (2010) Silver dendrites from galvanic displacement on commercial aluminum foil as an effective SERS substrate. J Am Chem Soc 132:1476–1477
ACKNOWLEDGMENTS

We would like to thank Dr. Xiaolong Luo for valuable discussions. This research was supported by the Robert W. Deutsch Foundation and the NSF-EFRI grant NSFSC03524414. The authors acknowledge the support of the Maryland NanoCenter, its FabLab, and its NispLab. The NispLab is supported in part by the NSF as a MRSEC Shared Experimental Facility.
Author: Gary W. Rubloff
Institute: Institute for Systems Research, University of Maryland
City: College Park, MD
Country: USA
Email: [email protected]
High Specificity Binding of Lectins to Carbohydrate Functionalized Etched Fiber Bragg Grating Optical Sensors

Geunmin Ryu1, Mario Dagenais1, Matthew T. Hurley2, and Philip DeShong2
1 Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
2 Department of Chemistry and Biochemistry, University of Maryland, College Park, MD 20742
Abstract— We present results demonstrating the high specificity binding of the lectins concanavalin A (ConA) and peanut agglutinin (PNA) to carbohydrate-functionalized fiber Bragg gratings. ConA shows high specificity to the glucose-functionalized biosensor, but not to the lactose-functionalized biosensor. In contrast, PNA shows high specificity to the lactose-functionalized biosensor, but not to the glucose-functionalized biosensor. Quasi-monolayer selective binding of the lectins to the fiber sensor was inferred from a theoretical analysis of the observed changes in the effective refractive index. It was also found that the selective binding shows a strong dependence on the binding temperature.

Keywords— Concanavalin A (ConA), Peanut agglutinin (PNA), Optical biosensor, Fiber Bragg Grating (FBG), Carbohydrates, Surface functionalization, Lectins.
I. INTRODUCTION

Eukaryotic cells are decorated with a dense array of carbohydrate derivatives. Several major diseases, including most cancers, are associated with a change in the glycosylation pattern of a central protein structure. Lectins, carbohydrate-binding proteins, have long been valuable probes for studying and identifying the structure of complex carbohydrates [1]. We have functionalized etched fiber Bragg gratings (FBGs) with two carbohydrates to investigate the specificity and temperature dependence of the binding of two lectins, concanavalin A (ConA) and peanut agglutinin (PNA). It is well known that ConA and PNA bind with high specificity to glucose and lactose, respectively, in solution. Using an in situ monitoring set-up, we monitored the Bragg wavelength changes and the solution temperature in real time. By analyzing the Bragg wavelength shift, we can calculate the change of the surrounding refractive index, and from the index of refraction we can infer the film thickness. We used a fiber Bragg grating with sensitivity to changes in the refractive index as small as 2–3×10⁻⁵ when the index of refraction of the surrounding analyte is about 1.35. This sensor has sufficient sensitivity to detect the selective binding of a quasi-monolayer of proteins to a certain carbohy-
drate. Our biosensor does not require any labeling with a fluorescent label.
II. PREVIOUS WORK

Previously, we demonstrated a simple theory to describe the shift of the FBG resonance as a function of the fiber diameter and as a function of the surrounding index of refraction [2-3]. A surrounding-index sensitivity as small as 7.2×10⁻⁷ was demonstrated [4], limited by the 0.001 nm wavelength resolution of our instrumentation. This index sensitivity drops to 2–3×10⁻⁵ as the probed analyte has an index of refraction close to 1.35 rather than closer to the index of maximum sensitivity (n = 1.45). The wavelength change of an etched-core FBG sensor can be written as

Δλ = S Δn    (1)

where S is defined as the sensitivity of the sensor, measured as the change in wavelength per unit change of the surrounding medium index, and Δn is the change of the surrounding medium index. Using this fiber sensor, we have previously demonstrated that DNA hybridization can be measured [2]. The surface of the etched fiber was functionalized by attaching a 20-nucleotide single-strand probe DNA to its surface and then surrounding the fiber with the matching 20-oligomer single-strand target DNA. Recently, we have demonstrated the attachment of a glucose derivative on the fiber [5]. These experiments were a first step toward the development of carbohydrate-functionalized fibers as biosensors for the detection of lectins. A variety of biomolecules have previously been attached to silica and related surfaces to allow recognition of biochemical substances, including single-strand DNA, antibodies, enzymes, proteins, and cells. Until now, however, the functionalization of silica with carbohydrates has received scant attention, even though many cell-to-cell recognition events are mediated by carbohydrate-protein multidentate interactions [6].
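Equation (1) also makes the inverse problem trivial: a measured Bragg shift divided by the sensitivity S gives the index change. A minimal sketch follows; the S value below is not quoted by the authors but is inferred from the 24 pm / 5.6×10⁻⁴ pair reported in Section IV, so treat it as an assumption:

```python
def index_change(delta_lambda_pm, sensitivity_nm_per_riu=43.0):
    """Invert Eq. (1): delta_n = delta_lambda / S.
    S ~ 43 nm per refractive index unit is inferred, not quoted."""
    return (delta_lambda_pm * 1e-3) / sensitivity_nm_per_riu

print(f"{index_change(24.0):.1e}")  # 24 pm shift -> ~5.6e-04 index change
```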
III. EXPERIMENT

Etched FBG optical sensors were prepared in two steps. First, we purchased a single-mode photosensitive fiber in which two FBGs were inscribed in the core. Then two etching processes were performed using 7:1 buffered oxide etch with surfactant. The primary etch thins the fiber down to 40 µm and the secondary etch thins it down to 5 µm. Figure 1 shows a diagram of the etched FBG optical sensor.
Fig. 1 (a) Diagram of the fiber housing and positions of the etchant during the primary and secondary etches, with FBGs at 1533 nm and 1563 nm; (b) fiber diameter profile after the secondary etch
For the broadband source, we used the broadband amplified spontaneous emission spectrum of an erbium-doped fiber amplifier (EDFA). The sharpest reflectivity minimum on the long-wavelength side was continuously monitored by an optical spectrum analyzer (OSA) to determine the wavelength shift. The spectrum of the etched FBG sensor is reproducible and shifts by the same amount as the peak wavelength. Figure 2 shows the experimental set-up and the typical spectrum of the etched FBG sensor. The monitored feature is indicated in Figure 2.

Fig. 2 (a) Experimental set-up: the EDFA source feeds the FBG sensor through a 2×1 coupler and the reflected spectrum is recorded on the OSA, with the Bragg condition λp = 2nΛ annotated; (b) typical spectrum of the etched FBG sensor (intensity in dBm vs. wavelength, 1.550–1.556 µm), with the monitored feature indicated
The experiment began with preparing the chemicals: HBS with blocker buffers, sodium ethoxide, the synthesized glucose- and lactose-siloxane conjugates, ConA, and PNA. HBS with blocker buffers is a mixture of HEPES-buffered saline (HBS) with Tween-20 and bovine serum albumin (BSA). BSA works as a blocker, since it is a protein that interacts non-specifically and competes with other non-specific interactions. Tween-20 is a detergent meant to prevent or break up non-specific binding. Sodium ethoxide is used for removing the acetate groups after functionalization of the fiber with glucose- or lactose-siloxanes. Sodium ethoxide is prepared by adding ~1 g of freshly cut sodium metal to 25 mL of anhydrous ethanol under a nitrogen atmosphere at room temperature, followed by stirring at room temperature for 1 h, or until the solid metal has reacted completely to provide a clear, colorless solution. ConA and PNA were purchased from Sigma-Aldrich. The proteins were mixed with the HBS buffer. Immediately before binding, the protein solutions were diluted to 1 µM based on molar mass (104 kDa for ConA and 110 kDa for PNA). Measurement began with a DI water reference, before we immersed our sensor into a 10 µM solution of glucose- or lactose-siloxane. After the carbohydrate attachment, we took another DI water reference to measure how much glucose- or lactose-siloxane had attached. The acetate groups were removed by immersing the sensor in a sodium ethoxide (NaOEt) solution at 50 °C for 1.5 h. Then, we immersed the sensor into HBS-Tween20 with BSA solution. This step was necessary in order to prevent nonselective binding of the lectins. Finally, ConA or PNA in HBS-Tween20 with BSA was introduced and bound to the carbohydrate-functionalized fiber.
IV. RESULTS AND DISCUSSION

Figure 3 shows an in situ measurement of the Bragg wavelength shift for glucose-ConA and lactose-PNA at 33 °C. An initial shift of 770 pm was observed due to the difference in index between DI water and ethanol. A shift of 24 pm was measured for the case of glucose-siloxane attachment, which corresponds to a change in the surrounding index of 5.6×10⁻⁴. If we assume that a solid glucose layer is formed on the fiber, which has an index of 1.543 [7], a beam propagation simulation shows that a layer thickness of 1.3 nm formed, which corresponds to a monolayer of the glucose conjugate. A shift of 23 pm was measured for the case of lactose-siloxane attachment, approximately the same as for the glucose-siloxane attachment. Theoretically, the wavelength shift of lactose should be twice that of glucose. This indicates that a lower density of lactose molecules attached to the fiber as compared to the
bulk density of lactose. In both cases, we observed a negligible wavelength shift after the removal of the acetate groups. We observed that the wavelength shift dropped from 800 pm to 240 pm when we immersed the sensor into the HBS with blocker buffers. The wavelength drop occurred due to the difference in index between ethanol and HBS with blocker buffers. Binding of ConA to the glucose-functionalized fiber resulted in a wavelength shift of 60 pm, while PNA did not bind to the glucose-functionalized fiber. Binding of PNA to the lactose-functionalized fiber gave a wavelength shift of 40 pm. ConA did not bind to the lactose-functionalized fiber. These results clearly indicate that the lectins bind selectively to the appropriate carbohydrate-functionalized fiber.

Fig. 3 In situ measurement of the Bragg wavelength shift: (a) glucose-ConA; (b) lactose-PNA

In addition to high specificity, we have investigated the optimal temperature for lectin binding and have discovered that the binding efficiency strongly depends on temperature. Figure 4 shows the normalized binding rate with respect to the binding temperature of ConA and PNA. The ConA binding rate increases with the binding temperature and reaches a maximum value at 33 °C, then quickly drops even below its room-temperature value. Similarly, the PNA binding rate increases with the binding temperature and reaches a maximum value at 31 °C.

Fig. 4 Binding temperature vs. normalized binding rate of (a) ConA to the glucose-functionalized fiber; (b) PNA to the lactose-functionalized fiber

V. CONCLUSIONS

A fluorescent-label-free etched fiber Bragg grating optical sensor that detects small changes of the surrounding medium index has been developed. Functionalization of the fiber's surface using carbohydrate-siloxane conjugates yields a functionalized fiber that can be exposed to physiologically relevant concentrations of lectins. High-specificity binding of the lectins ConA and PNA to the cognate ligand was observed: ConA binds to the glucose-functionalized fiber but not to the lactose-functionalized fiber, and PNA binds to the lactose-functionalized fiber but not to the glucose-functionalized fiber. It was also found that the selective binding strongly depends on the binding temperature. The high sensitivity observed with this biosensor indicates that this general approach can be utilized to measure a variety of biologically relevant processes, including DNA-DNA or DNA-RNA hybridization, protein-protein interactions, and carbohydrate-protein interactions under physiological conditions.

REFERENCES
1. Irwin J. Goldstein, Lee A. Murphy, Shigeyuki Ebisu (1977) Pure & Appl. Chem. vol. 49, pp 1095–1103
2. A. N. Chryssis, S. M. Lee, S. B. Lee, S. S. Saini, M. Dagenais (2005) IEEE Photon. Technol. Lett. vol. 17, pp 1253–1255
3. S. S. Saini, C. Stanford, S. M. Lee, J. Park, P. DeShong, W. E. Bentley, M. Dagenais (2007) IEEE Photon. Technol. Lett. vol. 19, pp 1341–1343
4. M. Dagenais, C. J. Stanford (2009) Taylor & Francis, New York
5. C. J. Stanford, G. Ryu, M. Dagenais et al (2009) Journal of Sensors, vol. 2009, Article ID 982658
6. A. Varki, R. D. Cummings, J. D. Esko et al (2009) Cold Spring Harbor Laboratory Press, p 784
7. X. Cao, B. B. Hancock, N. Leyva et al (2009) Int. Journal of Pharm. vol. 368, pp 16–23
Author: Geunmin Ryu
Institute: Department of Electrical and Computer Engineering, University of Maryland
Street: 2410 A.V. Williams Building
City: College Park, MD
Country: USA
Email: [email protected]
Oximetry and Blood Flow in the Retina

P. Lemaillet1, A. Lompado2, D. Duncan3, Q. D. Nguyen4, and J.C. Ramella-Roman1
1 The Catholic University of America, 620 Michigan Ave., N.E., Washington DC, USA
2 Polaris Sensor Technologies, 200 Westside Square Suite 320, Huntsville, AL, USA
3 Portland State University, 1900 SW 4th Ave, Portland, OR, USA
4 Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA
Abstract— We present the development of a setup composed of a retinal oximeter and a blood velocimeter implemented on a regular fundus ophthalmoscope. The oximeter can acquire nine wavelength-dependent sub-images of the patient fundus in a single snapshot. The fundus image is projected on a large-format CCD by an array of nine lenses, and wavelength selection is provided by a filter array. The higher-wavelength images help establish the melanin absorption, which is then extrapolated to shorter wavelengths and subtracted from the absorption images. The remaining shorter-wavelength sub-images are used to measure the oxygen saturation in the retina by fitting oxy- and deoxy-hemoglobin absorption curves. The setup was calibrated with optical phantoms of known optical properties. The velocimeter part of the setup relies on following clusters of erythrocytes to assess the velocity of blood.

Keywords— Oxygen saturation, spectroscopy, blood flow.
I. INTRODUCTION

Diabetic retinopathy (DR) has been linked to both the oxygen saturation values (SO2) in retinal vessels and the retinal blood flow [1]. These metrics are essential for monitoring DR progression and reducing the risk of visual loss threatening diabetic patients. Non-invasive measurement of SO2 relies on the differences between the absorption spectra of oxy- and deoxy-hemoglobin. However, assessment of oxygen saturation from spectroscopic images of the retinal vessels is difficult, not only because of the layered structure of the retina but also because of the saccadic movements of the eye. Numerous wavelengths are helpful for fitting the specific oxy- and deoxy-hemoglobin absorption curves and enable discrimination of the retinal background contribution to the reflected signal. Snapshot acquisition of multi-wavelength images can be used to eliminate eye-movement artifacts. Measuring the velocity of blood is actually measuring the velocity of the red blood cells (RBC). The most popular techniques are Laser Doppler Velocimetry (LDV) and Laser Speckle Velocimetry (LSV). LDV [2, 3, 4] uses the Doppler effect, that is, the frequency shift between an incident wave and the wave re-emitted from a moving object. Laser illumination allows detection of the Doppler shift with improved precision because of its monochromaticity. Laser
Speckle Velocimetry [5, 6, 7] is another blood velocity measurement technique that relies on the coherence of the laser light. In LSV, the light scattered from the RBC forms a speckle pattern at the detector, i.e. an interference pattern due to constructive and destructive scattered wavelets from the RBC. Since the RBC are moving objects, the speckle pattern is blurry and becomes ever more so as the velocity of the RBC increases. Analysis of this signal leads to the velocity of the RBC. Blood flow can also be measured by following clusters of erythrocytes [8]. This latter technique is used in our retinal oximeter / blood flow velocimeter. The oximeter part of the setup relies on the spatial division of the fundus image obtained by a lenslet array that projects nine sub-images onto a digital camera. A filter array lying between the lenslet array and the CCD provides wavelength selection. The effects of eye movements are minimized by the simultaneous acquisition of the spectrally different sub-images.
II. MATERIAL AND METHODS

The experimental setup is based on a commercially available fundus ophthalmoscope (Carl Zeiss, Jena, Germany). The oximeter part of the setup relies on a custom-built multi-aperture camera comprising an optical train and a large CCD (Lumenera 12-bit monochromatic digital camera, 35 mm × 23 mm, 4008 pixels × 2672 pixels, North Andover, MA, USA). A beam splitter separates the oximeter part from the velocimeter part of the setup, which is composed of a zooming lens and a fast acquisition camera also used for focusing purposes (Prosilica, 1900 pixels × 1080 pixels, 60 Hz, Allied Vision Technologies, BC, Canada). Figure 1 illustrates the technical realization of the setup and its optimized optical train.

A. Retinal oximeter

The image plane of the fundus ophthalmoscope is re-imaged by an achromatic doublet (L1, converging lens, f = 140 mm, and L2, diverging lens, f = −150 mm). The resulting beam is then spread through the filter array for spectral
division of the sub-images. The sub-images are formed by an array of nine achromats (f = 300 mm). One should note that the chosen focal length derives from the required image resolution, and that the CCD size imposes small-diameter lenses. Large focal length and small diameter are two conditions that off-the-shelf lenses do not meet; hence, commercially available lenses were edged down to achieve the required diameter. Fold mirrors were also added to make the optical train more compact.
Fig. 1 Experimental setup: L1, converging lens (f = 140 mm); L2, diverging lens (f = −150 mm). The array lenses have a 300 mm focal length and a 9 mm diameter (edged-down off-the-shelf lenses). The focusing part of the setup is composed of a fast acquisition camera and a zoom lens L3.

Position adjustment of the lens array and the CCD was greatly eased by using an eye phantom recently built by our group [9], which reproduces the eye structure as well as the fundus optical properties. This helped locate the eye fundus to be imaged by the optical train. The filters were 6 mm square to 8 mm square narrow-band filters (10 nm FWHM, Newport, Irvine, CA, USA). The choice of filters, i.e. 540, 550, 560, 580, 610, 630, 640, 650 and 670 nm, was guided by previous publications [10, 11], Monte Carlo simulations of light transport in retinal tissues, as well as commercial availability. Longer-wavelength filters, i.e. in the 600–700 nm range, were included in the filter array to help establish the melanin concentration [12]. The effects of melanin can then be extrapolated to lower wavelengths and subtracted from the overall absorbance. Shorter-wavelength sub-images are then used to fit oxy- and deoxy-hemoglobin absorption curves and assess the oxygen saturation in the retina. A USAF 1951 resolution target (Edmund Optics Inc., Barrington, NJ) located at the fundus of the eye phantom was used to register the nine sub-images. The illumination source was the regular flash source of the fundus ophthalmoscope. Figure 2 presents the nine sub-images of the USAF target with no filter array, i.e. no wavelength selection. The registration of the images is done manually by choosing a common area of interest. This procedure gives the coordinates of equivalent areas in each of the 9 sub-images, and these coordinates are further used in registering the in-vivo images of the eye fundus.
Fig. 2 Nine sub-images of the USAF 1951 registration target with no wavelength selection (i.e. no filter array). The registration of the images is done manually by choosing a common area of interest.
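Because the lenslet geometry is fixed, the registration found once with the USAF target can simply be reused to cut every snapshot into its nine wavelength sub-images; a minimal sketch of that step (the pixel origins and sub-image size are hypothetical placeholders for the manually determined coordinates):

```python
import numpy as np

# Top-left corners of the nine sub-images, determined once from the
# USAF target (hypothetical pixel coordinates), and one sub-image size.
SUB_ORIGINS = [(r, c) for r in (0, 890, 1780) for c in (0, 1336, 2672)]
SUB_SHAPE = (890, 1336)  # rows, cols (hypothetical)

def split_subimages(frame):
    """Cut one 2672 x 4008 CCD snapshot into nine wavelength sub-images."""
    h, w = SUB_SHAPE
    return [frame[r:r + h, c:c + w] for r, c in SUB_ORIGINS]

# subs = split_subimages(raw_frame)  # one sub-image per filter wavelength
```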
A set of NIST color reflectance standards (green, yellow and red standards from Labsphere, North Sutton, NH) was used to assess the spectral sensitivity of the setup, following a procedure explained in a previous publication [13]. Calibration of the setup was achieved using a set of cylindrical absorbing-scattering calibration standards built by another group [14]. These calibration phantoms were designed so that their absorption coefficient (μa) and reduced scattering coefficient (μs′) match a specific μa vs. μs′ grid. The cylindrical phantoms were set in a mechanical housing made of polycarbonate walls (width = 78 mm, height = 72 mm, depth = 140 mm) simulating the eyeball. Similarly to our eye phantom [9], a small hole (6 mm diameter) in front of the mechanical housing simulates the pupil of the eye. A plano-convex lens (f = 17 mm, Rolyn Optics, Covina, CA) was positioned in front of the pupil to mimic the crystalline lens. The housing was filled with purified water to simulate the vitreous humor. Images of the cylindrical phantoms were taken, using the flash as the illuminating source, and the diffuse reflectance values were calculated using Eq. (1):

R = A · (IPhantom − IDark) / (IStandard − IDark)    (1)
where A is a scaling factor accounting for the setup apparatus. The intensity reflected from the eye phantom, IPhantom, is normalized by IStandard, the intensity reflected from a 60% reflectance standard (Labsphere, North Sutton, NH). A dark image, IDark, was also captured with the light source off and was subtracted from every image. We compared the measured reflectance to Monte Carlo simulations. All simulations were conducted with MCML [15] and 1 million photons. An incident pencil beam was set at eight different wavelengths distributed between 450 nm and 700 nm. We used μa and μs′ values obtained from inverse adding-doubling (IAD) computations; a scattering anisotropy of g = 0.5 was used for these simulations. The results presented in Fig. 4 show that the measured reflectance is close to the reflectance obtained from the Monte Carlo simulations.
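A minimal sketch of the Eq. (1) pipeline; the image arrays and the scaling factor A are placeholders:

```python
import numpy as np

def diffuse_reflectance(i_phantom, i_standard, i_dark, a=1.0):
    """Eq. (1): dark-subtracted phantom image normalized by the
    dark-subtracted image of the 60% reflectance standard."""
    num = np.asarray(i_phantom, dtype=float) - np.asarray(i_dark, dtype=float)
    den = np.asarray(i_standard, dtype=float) - np.asarray(i_dark, dtype=float)
    return a * num / den

# r = diffuse_reflectance(phantom_img, standard_img, dark_img)
# Averaging r over a region of interest gives one point of Fig. 4.
```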
Fig. 3 Picture of the phantom grid. The absorption coefficient μa increases from the bottom row to the top row, whereas the reduced scattering coefficient μs′ increases from the left column to the right column.

Fig. 4 Reflectance measurements obtained from the nine wavelength-dependent snapshot images (filled symbols) compared to Monte Carlo simulations (lines) for four of the calibration phantoms.
B. Blood Flow Velocimeter

In the retinal blood flow part of the setup, a fast-acquisition camera coupled to a zoom lens is used to record movies of the erythrocyte clusters. Hemoglobin absorbs more strongly in the green region of the visible spectrum, so contrast is highest there, and green illumination is therefore chosen to track the hemoglobin-filled erythrocytes. Hence, a high-power green LED (Enfis Ltd., UK), triggered in synchrony with the camera, is used for the illumination. This light source replaces the former bulb illumination source of the Zeiss ophthalmoscope. Calibration of the blood flow measurement was performed by pumping blood through a calibrated needle (internal diameter = 100 μm, World Precision Instruments, Inc., Sarasota, FL) located at the back of an eye phantom. A sequence of images was acquired at a rate of 60 Hz, and a Radon transform algorithm was used to quantify absolute centerline velocities. For two blood velocities of 2 mm/s and 4 mm/s, the data analysis gave 1.98 mm/s and 3.96 mm/s, respectively.
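The Radon-transform velocimetry step can be sketched as follows, assuming a space-time image assembled from the 60 Hz sequence. The authors' exact pre-processing and angle conventions are not specified, so this is an illustrative reduction only.

```python
import numpy as np
from skimage.transform import radon

def centerline_velocity(xt_image, px_per_mm, frame_rate_hz):
    """Estimate absolute centerline velocity from a space-time (x-t) image
    built by stacking the intensity profile along the vessel axis frame by
    frame (rows = frames, columns = position along the vessel).

    Moving erythrocyte clusters trace oblique streaks; the Radon transform
    is sharpest for projections parallel to the streaks, and the streak
    slope converts to velocity. Sign/axis conventions may need adjusting
    for a given acquisition.
    """
    img = xt_image - xt_image.mean()
    angles = np.linspace(0.0, 180.0, 361, endpoint=False)
    sinogram = radon(img, theta=angles, circle=False)
    theta = angles[np.argmax(sinogram.std(axis=0))]   # dominant streak angle
    slope = abs(np.tan(np.deg2rad(theta)))            # pixels of space per frame
    return slope * frame_rate_hz / px_per_mm          # mm/s
```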
III. RESULTS

We report here the oxygen saturation results that were obtained on healthy patients. Figure 5 shows the nine sub-images of a healthy patient's retina, whereas Fig. 6 illustrates the resulting retinal SO2 map obtained after tracing the vessels by hand, using the longer wavelengths to assess the melanin concentration and fitting the resulting reflectance to the oxy- and deoxy-hemoglobin absorption spectra.

Fig. 5 Nine wavelength-dependent sub-images of a healthy patient's fundus.
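The per-vessel fitting described above can be illustrated with a simple linear least-squares sketch. The authors' full procedure also removes the melanin contribution estimated from the longer wavelengths, which is omitted here for brevity; all names are placeholders.

```python
import numpy as np

def fit_so2(absorbance, eps_hbo2, eps_hb):
    """Least-squares fit of a measured absorbance spectrum to a linear mix
    of oxy- and deoxy-hemoglobin extinction spectra (all sampled at the
    same wavelengths). Returns SO2 = c_HbO2 / (c_HbO2 + c_Hb).
    """
    A = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
    c = np.clip(c, 0, None)          # keep concentrations physical (non-negative)
    total = c.sum()
    return c[0] / total if total > 0 else np.nan
```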
Fig. 6 Oxygen saturation in retinal vessels

Oxygen saturation was 95% in the arteries and 85% in the veins.

IV. CONCLUSIONS

We presented a retinal oximeter / retinal blood flow velocimeter based on a commercial fundus ophthalmoscope. The oximeter relies on division of aperture, and nine wavelength-dependent sub-images of the patient's fundus are taken in a snapshot. The velocimeter relies on tracking red blood cell clusters. Both parts of the setup were calibrated, and retinal oxygen saturation for a healthy patient was presented.

ACKNOWLEDGMENT

The authors thank D. Smolley for technical assistance in fabricating the mechanical items. We are grateful for the support from the Coulter Foundation and NIH grant # EY017577-01A11.

REFERENCES

1. N. D. Wangsa-Wirawan and R. A. Linsenmeier, "Retinal oxygen: Fundamental and clinical aspects," Arch Ophthalmol 121, 547-557 (2003).
2. C. E. Riva, "Basic principles of laser Doppler flowmetry and application to the ocular circulation," International Ophthalmology 28, 183-189 (2001).
3. C. E. Riva, B. L. Petrig, R. D. Shonat, and C. J. Pournaras, "Scattering process in LDV from retinal vessels," Appl. Opt. 28, 1078-1083 (1989).
4. M. J. Mendel, V. V. Toi, C. E. Riva, and B. L. Petrig, "Eye-tracking laser Doppler velocimeter stabilized in two dimensions: principle, design, and construction," J. Opt. Soc. Am. A 10, 1663-1669 (1993).
5. N. Konishi and H. Fujii, "Real-time visualization of retinal microcirculation by laser flowgraphy," Optical Engineering 34, 753-757 (1995).
6. H. Cheng, Y. Yan, and T. Q. Duong, "Temporal statistical analysis of laser speckle images and its application to retinal blood-flow imaging," Opt. Express 16, 10214-10219 (2008).
7. D. A. Boas and A. K. Dunn, "Laser speckle contrast imaging in biomedical optics," Journal of Biomedical Optics 15, 011109 (2010).
8. E. Aloni, A. Pollack, A. Grinvald, I. Vanzetta, and D. Nelson, "Noninvasive imaging of retinal blood flow and oximetry by a new Retinal Function Imager," Invest. Ophthalmol. Vis. Sci. 43, 2552 (2002).
9. P. Lemaillet and J. C. Ramella-Roman, "Dynamic eye phantom for retinal oximetry measurements," Journal of Biomedical Optics 14, 064008 (2009).
10. M. Hammer, A. Roggan, D. Schweitzer, and G. Muller, "Optical properties of ocular fundus tissues - an in vitro study using the double-integrating-spheres technique and inverse Monte Carlo simulation," Physics in Medicine and Biology 40, 963-978 (1995).
11. S. J. Preece and E. Claridge, "Monte Carlo modelling of the spectral reflectance of the human eye," Physics in Medicine and Biology 47, 2863-2877 (2002).
12. H. M. Sarna, "The physical properties of melanins," in The Pigmentary System, R. E. Nordlund, V. J. Hearing, R. A. King, and J. P. Ortonne, eds. (Oxford University Press, 1998), pp. 439-450.
13. J. Ramella-Roman and S. Mathews, "Spectroscopic measurements of oxygen saturation in the retina," IEEE Journal of Selected Topics in Quantum Electronics 13, 1697-1703 (2007).
14. S. A. Prahl, "The adding-doubling method," in Optical-Thermal Response of Laser-Irradiated Tissue, A. J. Welch and M. J. C. van Gemert, eds. (Plenum Press, 1995), chap. 5, pp. 101-129.
15. L. Wang, S. L. Jacques, and L. Zheng, "MCML - Monte Carlo modeling of light transport in multi-layered tissues," Computer Methods and Programs in Biomedicine 47, 131-146 (1995).
Monitoring and Controlling Oxygen Levels in Microfluidic Devices

Peter C. Thomas1,2, Srinivasa R. Raghavan3, and Samuel P. Forry1

1 National Institute of Standards and Technology/Biochemical Science Division, Gaithersburg, MD
2 University of Maryland/Fischell Department of Bioengineering, College Park, MD
3 University of Maryland/Department of Chemical and Biomolecular Engineering, College Park, MD
Abstract— Mammalian cell culture has traditionally been performed at a static oxygen concentration of 21 mol %. However, oxygen levels in vivo are significantly more hypoxic, with an average oxygen concentration of 3 mol % to 5 mol %. In addition, many cells within the body experience dynamic oxygen levels. Such differences in oxygen tension have been shown to affect cell behavior, and controlling and monitoring oxygen levels is crucial in creating biomimetic cell culture conditions. Previously, we developed a luminescence-based oxygen sensor capable of monitoring cellular oxygen consumption rates in a multi-well plate format that is compatible with conventional cell microscopy techniques (e.g. phase contrast and fluorescence imaging). In the current study, we demonstrate successful integration of this oxygen sensor into a multi-layer microfluidic cell culture device. The oxygen sensor provides a facile method for continuous monitoring of on-chip oxygen levels. Polydimethylsiloxane (PDMS) based microfluidic cell culture devices are permeable to oxygen, allowing physiologically relevant oxygen environments to be generated. Control channels are incorporated to enable on-chip control of dissolved oxygen tension. Finite element simulations and experimental measurements are in excellent agreement in monitoring oxygen diffusion through the PDMS to generate stable oxygen gradients and rapidly changing conditions on-chip. Further, the on-chip calibration matches sensitivities measured outside of the microfluidic environment. Cells will be monitored during culture in this microfluidic system under physiologically relevant oxygen environments.

Keywords— Microfluidics, oxygen control, PDMS, oxygen sensor.
I. INTRODUCTION

Oxygen is a critical parameter for the behavior and function of cells. Cells in vivo experience oxygen environments that are tightly regulated. In general, the average oxygen concentration in tissue is around 3 mol % to 5 mol % (PO2 = 0.03 atm to 0.05 atm) [1, 2]. In comparison, in vitro cell cultures are generally performed at approximately 21 mol % (PO2 = 0.21 atm, the ambient oxygen level). Such differences in oxygen levels have consistently been shown to alter cell responses, leading to unreliable experimental results [1-3].
These findings reinforce the importance of oxygen in cell culture and the critical need to control oxygen levels. Microfluidic devices made from polydimethylsiloxane (PDMS) are highly permeable to oxygen, allowing control of the oxygen level on-chip. The generation of oxygen gradients has been demonstrated by flowing gas through a control channel and allowing diffusion to modulate the oxygen level in adjacent fluid-filled channels. In previous research, similar gas gradients were observed using dissolved fluorescence indicators in solution before cell experiments were begun [4]. However, real-time oxygen measurements could not be made during cell culture, and any fluctuations in oxygen level remained undetected. The current work describes the successful integration of a new thin-film oxygen sensor into a multi-layer microfluidic cell culture device. The sensor was demonstrated previously, in multi-well culture plates, to be compatible with phase contrast and fluorescence imaging [5]. Integrated into a microfluidic device, this sensor allows continuous monitoring of oxygen partial pressure. Control channels incorporated into the device design allow control of the dissolved oxygen tension in the fluidic channel. Finite element simulations and experimental results showed excellent agreement in monitoring the changing oxygen level on-chip. Based on the simulation results, various device geometries were developed that allow a wide range of oxygen gradients and levels to be generated and examined.
II. EXPERIMENTAL SECTION

A. Sensor Preparation and Integration

Oxygen sensors were prepared as previously described, with slight modification [5]. All components were spin-coated onto a 3 in × 2 in microscope slide (Fisher Scientific) and allowed to cure before integration with the device. The microfluidic device is a two-layer system consisting of a fluidic layer with a single channel and a control layer with multiple pneumatic valves and gas control lines (Figure 1). The thin-film oxygen sensor became the floor of the device through plasma bonding (Figure 1).
Fig. 2 Actuation of pneumatic valves controls fluid flow within the device. When the valves are open, fluid can pass through the device continuously (left). When pressure is applied, the valves close and flow is stopped (right)
Fig. 1 Microfluidic device with integrated oxygen sensor. The schematic illustrates the top-down view of the device with a fluid channel, control lines and pneumatic valves (top). The cross-sectional view of the dotted box shows the two control lines above the fluidic channel (bottom). All oxygen measurements were made within this section (box) of the device

B. On-Chip Sensor Calibration

Calibration of the on-chip oxygen sensor was accomplished by placing the entire device inside a continuous-flow chamber with different gas concentrations (10 mol %, 5 mol %, 2 mol %, 1 mol % or 0 mol % O2 in N2) in the head space. The emission intensity from the sensor was captured using a 10× (0.3 NA) objective on an inverted microscope (Zeiss Axiovert 200, Thornwood, NJ), with the focus on the floor of the fluidic channel/surface of the oxygen sensor (Figure 1 top, enclosed box). The device was illuminated using an X-Cite metal halide light source, and images were captured using a color IEEE-1394 camera (Scion Corporation, Frederick, MD) as previously described. All images were analyzed using MATLAB software. A pixel-by-pixel analysis was performed on each image to determine the oxygen level captured in that pixel.

C. Finite Element Simulation

A finite element model utilizing FlexPDE software (PDE Solutions Inc., Antioch, CA) was generated for comparison to the measured oxygen values. A 2D steady-state model based on the cross section of the device geometry was developed (Figure 1 bottom). To simulate gas control in the device, the boundary conditions for the control lines were set at 0 mol % or 21 mol % O2 for N2 or air, respectively.
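The FlexPDE model itself is not listed. As a minimal finite-difference analogue of such a 2D steady-state diffusion problem, one can relax Laplace's equation with fixed-concentration boundaries at the control lines; grid dimensions, control-line placement, and the probed row below are illustrative assumptions only.

```python
import numpy as np

def steady_state_oxygen(nx=120, ny=60, n_iter=20_000):
    """Jacobi relaxation of Laplace's equation (steady-state diffusion,
    uniform diffusivity) on an illustrative PDMS cross-section. Two
    control lines on the top edge hold fixed O2 levels; the bottom edge
    is ambient air; the sides are treated as no-flux.
    """
    c = np.full((ny, nx), 21.0)                  # start at ambient, mol %
    for _ in range(n_iter):
        c[1:-1, 1:-1] = 0.25 * (c[:-2, 1:-1] + c[2:, 1:-1] +
                                c[1:-1, :-2] + c[1:-1, 2:])
        c[0, 10:50] = 0.0                        # control line 1: N2 (0 mol %)
        c[0, 70:110] = 21.0                      # control line 2: air
        c[-1, :] = 21.0                          # ambient boundary
        c[:, 0], c[:, -1] = c[:, 1], c[:, -2]    # no-flux side walls
    return c

profile = steady_state_oxygen()
channel_o2 = profile[40, :]   # O2 across the assumed fluid-channel depth
```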
Fig. 3 Calibration of the on-chip oxygen sensor. Quenching of the oxygen sensor followed the Stern-Volmer equation and, from the slope of the plot, K_SV = 535 ± 60 atm⁻¹
III. RESULTS AND DISCUSSION

A. Sensor Integration and Calibration

Through plasma bonding, the oxygen sensor was successfully integrated into the floor of the microfluidic device. The bond formed between the sensor and the channels was very robust, and the device did not delaminate under moderate hydrostatic pressure. Pneumatic valves incorporated within the device remained functional, as demonstrated by the control of fluid flow during actuation (Figure 2). The oxygen sensor incorporated into the microfluidic device is quenched in the presence of oxygen, and the response can be described using the Stern-Volmer equation:
$\dfrac{I_0}{I} = 1 + K_{SV} \cdot P_{O_2}$
Fig. 4 Oxygen control and measurements on-chip. When N2 was pumped through the control lines, the overall oxygen tension across the fluidic channel decreased to a new equilibrium that was measured by the oxygen sensor (solid circles). Finite element simulation showed excellent agreement with the experimental results (white circles). Data show the mean and standard deviation across the channel
where I_0 is the phosphorescence intensity in the absence of oxygen, I is the quenched intensity at a higher oxygen level, K_SV is the Stern-Volmer constant measured in calibration, and P_O2 is the partial pressure of oxygen. The K_SV value was found to be 535 ± 60 atm⁻¹ and is in close agreement with calibrations made off-chip (Figure 3).

B. Oxygen Monitor and Control

The control lines designed into the chip allow control of the dissolved oxygen tension in the adjacent fluid-filled channels. Due to the high gas permeability of PDMS, oxygen in the fluid-filled channels equilibrates to the gas partial pressure in the control line. Any changes in the overall oxygen level were then observed in real time using the incorporated oxygen sensor. When air was pumped through the control lines, the oxygen level in the fluid channel remained constant at PO2 = 0.21 atm (data not shown). In contrast, the oxygen tension in the fluid channel equilibrated to a lower level, PO2 = 0.068 atm to 0.043 atm, when N2 was pumped through the control line (Figure 4). As previous studies have shown, the high permeability of PDMS makes it difficult to reduce the overall oxygen tension in the device below 4 mol % [6]. Simulation using the actual device geometry indicated strong agreement with the measured oxygen level (Figure 4).
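Pixel-wise conversion of measured intensity to oxygen partial pressure then amounts to inverting the Stern-Volmer relation above. A minimal sketch, assuming a 0 mol % reference image I0 and the reported K_SV value:

```python
import numpy as np

def intensity_to_po2(i, i0, ksv=535.0):
    """Invert the Stern-Volmer relation I0/I = 1 + Ksv*PO2, pixel by pixel.

    i   : quenched intensity image
    i0  : intensity image in the absence of oxygen (0 mol % calibration)
    ksv : Stern-Volmer constant in 1/atm (535 atm^-1 reported in the paper)
    Returns PO2 in atm.
    """
    return (i0 / np.clip(i, 1e-9, None) - 1.0) / ksv
```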
Fig. 5 Design of different device geometries based on model simulation. By increasing the width of the control lines, the simulation shows that the overall oxygen level across the channel can be reduced significantly when N2 is present in the control lines (a). In addition, the convex oxygen profile across the channel was reduced when the distance between the control lines and the fluidic channel was increased (black circle 20 µm, white circle 60 µm, grey circle 200 µm). Oxygen gradients were simulated in different fluidic channel geometries (500 µm, 1000 µm, 1500 µm) (b). The control line to the left was at air, while N2 was in the opposite line

C. Simulation and New Device Geometries

While the mean tissue oxygen level is 3 mol % to 5 mol %, many interesting cell types (e.g. cancer cells, adult stem cells, and embryonic stem cells) are found in niche environments where the oxygen levels are much lower. To study these cell lines, hypoxic environments need to be generated within the device. Alternative device geometries to the one depicted here were therefore evaluated by simulation for producing such oxygen environments. By increasing the width of the control line, the oxygen tension in the fluid channel was reduced below 4 mol % when N2 was present (Figure 5a). Furthermore, increasing the distance between the control line and the fluid channel produced more uniform oxygen environments and lowered the overall oxygen level across the channel (Figure 5a). Oxygen gradients were also simulated using two different control-line compositions in various geometries (Figure 5b). Such
environments can assist in the design of devices to produce optimal oxygen profiles for cell studies.
IV. CONCLUSIONS

Controlling oxygen is critical to creating more biomimetic environments for cell-based assays. Gas-permeable PDMS-based microfluidic devices allow control of oxygen environments in a way that is not easily accomplished in conventional cell culture formats. In the current study, an oxygen sensor was incorporated into a PDMS microfluidic device, allowing the oxygen environment to be monitored in real time. The measured oxygen levels showed excellent agreement with finite element simulations, which in turn allowed several alternative geometries capable of creating low-oxygen environments and oxygen gradients to be evaluated. The current work provides the foundation for the development of an accurate, controlled-oxygen microfluidic cell culture device.
REFERENCES

(1) M. Csete (2005) Oxygen in the cultivation of stem cells. In: Stem Cell Biology: Development and Plasticity, vol. 1049, Annals of the New York Academy of Sciences, pp. 1-8
(2) K. R. Atkuri, L. A. Herzenberg, A. K. Niemi, et al. (2007) Importance of culturing primary lymphocytes at physiological oxygen levels. Proc. Natl. Acad. Sci. U. S. A. 104:4547-4552
(3) B. Sahaf, K. Atkuri, K. Heydari, et al. (2008) Culturing of human peripheral blood cells reveals unsuspected lymphocyte responses relevant to HIV disease. Proc. Natl. Acad. Sci. U. S. A. 105:5111-5116
(4) M. Polinkovsky, E. Gutierrez, A. Levchenko, et al. (2009) Fine temporal control of the medium gas content and acidity and on-chip generation of series of oxygen concentrations for cell cultures. Lab Chip 9:1073-1084
(5) P. C. Thomas, M. Halter, A. Tona, et al. (2009) A noninvasive thin film sensor for monitoring oxygen tension during in vitro cell culture. Anal. Chem. 81:9239-9246
(6) G. Mehta, J. Lee, W. Cha, et al. (2009) Hard top soft bottom microfluidic devices for cell culture and chemical analysis. Anal. Chem. 81:3714-3722

Author: Samuel P. Forry
Institute: National Institute of Standards and Technology
Street: 100 Bureau Dr
City: Gaithersburg, MD
Country: U.S.A.
Email: [email protected]
An Imaging Pulse Oximeter Based on a Multi-Aperture Camera

Ali Basiri and Jessica C. Ramella-Roman

The Catholic University of America, 620 Michigan Ave., N.E., Washington DC, USA

Abstract— This paper presents an imaging arterial pulse oximeter based on the acquisition of two images, taken respectively at the peak and trough of a local arterial pulse. Spectroscopically sensitive images are obtained using a multi-aperture system synchronized to a point pulse oximeter. Each acquired image consists of 16 spectroscopic images of the same field of view. By subtracting the images obtained at the peak and trough of the arterial pulse, one is able to eliminate common absorbers and scatterers and ultimately focus only on the metric of interest.

Keywords— Oxygen saturation, Pulse Oximetry, Imaging, Arterial, Multi-Aperture Camera.
I. INTRODUCTION

Transmission- and reflectance-based pulse oximetry are commonly used in medicine [1]. Pulse oximeters rely on the light absorption of oxygenated hemoglobin (HbO2) and deoxyhemoglobin (Hb) for the assessment of oxygen saturation (SO2), the fraction of oxygenated hemoglobin present in the blood. The detection of oxygen saturation is now going a step further, utilizing images to facilitate insights that would otherwise be difficult to obtain [2, 3]. Some applications of imaging pulse oximetry are the monitoring of neoadjuvant chemotherapy, the characterization of vascular skin lesions, and the detection of tumors [4, 5]. Imaging pulse oximetry could enable a physician to gain a better understanding of the area of interest while requiring less invasive testing. Our imaging technique is based on synchronizing an imaging system to a photoplethysmographer. Photoplethysmography is the monitoring of time-varying changes in the volume of blood in a tissue. Light is directed onto an area of the skin, and a photodetector is used to detect the light that is either reflected or transmitted through the skin, blood, and other tissue; the change in intensity is correlated to the change in blood volume. Within the plethysmogram signal, there is an AC component and a DC component. The AC component is a cardiac-synchronous signal caused by the arterial pulse, and the DC component is a slowly varying signal that is primarily caused by the total blood volume in the skin [6]. In a study by Verkruysse et al. [3], a consumer-grade video camera was used to visualize the photoplethysmographic
signal and determine a patient's heart rate. With some additional filtering, they were able to distinguish arterial vasculature. Filtering was necessary because the DC component of the plethysmograph results in an offset that makes it difficult to know where the peak and trough are. By removing the DC offset, arterial flow could be visualized. Their system used broadband RGB filtering dictated by their specific imaging apparatus; consequently, quantitative data could not be ascertained. A study by Lee et al. [7] used noninvasive diffuse optical spectroscopy to monitor oxygen saturation during hypovolemic shock and fluid replacement. They too used a broadband source and were consequently unable to take into consideration the individual effects of oxyhemoglobin, deoxyhemoglobin, and melanin. Recently we introduced a multi-aperture camera capable of taking 16 different images at known wavelengths in a single snapshot; we have used the system to determine oxygen saturation in the retina as well as in skin wounds [8, 9]. The system uses narrowband filters, which enable quantitative determination of the concentrations of oxyhemoglobin, deoxyhemoglobin, and melanin. In this paper, we propose a technique that uses a plethysmographer to trigger the multi-aperture camera and take a snapshot at the peak and at the trough of the pulse waveform; the acquired images are then used to calculate a value proportional to the arterial oxygen saturation of the superficial skin vasculature.
II. MATERIALS AND METHODS

The volume of blood in the superficial layer of skin changes during an arterial pulse. This change is a periodic function, with the largest difference in blood volume occurring between the peak and the trough of the pulsatile signal. Our system included a commercially available pulse oximeter and plethysmographer (Nonin, Plymouth, MN), a multi-aperture camera [9], and a controlling computer running a custom Matlab® interface. The plethysmographer signal was acquired in real time using serial communication. The signal was acquired for 20 seconds before starting image acquisition, for stabilization purposes. A typical pulsatile signal is shown in Fig. 1 below. The red and green dots correspond to the triggering points.
Fig. 3 Subtraction of a peak and a trough image
Fig. 1 Plethysmographer signal and camera synchronization

Camera triggering was achieved with a thresholding mechanism based on the plethysmographer signal amplitude, and the imager exposure time was kept below 60 ms to avoid averaging over the arterial pulse. Two sets of data were obtained, one at the peak and one at the trough of the arterial pulse; a typical peak image obtained with our system is shown in Fig. 2. The difference between these two sets of images is the arterial effect shown in Fig. 3. Subtraction of the two images is used to eliminate common factors such as melanin absorption, capillary bed absorption, and generalized scattering. In order to confirm that the synchronization with the plethysmographer was accurate, several peak and trough images were acquired. Subtraction of two trough images is shown in Fig. 4; the lack of contrast clearly differentiates this image from the peak-trough image of Fig. 3.
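The triggering mechanism can be sketched as simple thresholding of the DC-removed plethysmographic trace. This illustrates the kind of logic described; it is not the authors' Matlab code, and the threshold fractions are assumptions.

```python
import numpy as np

def find_triggers(signal, fs_hz):
    """Locate peaks and troughs of a plethysmographic trace by thresholding
    the DC-removed signal (moving-average detrend over ~1 s)."""
    dc = np.convolve(signal, np.ones(int(fs_hz)) / int(fs_hz), mode="same")
    ac = signal - dc
    hi, lo = 0.5 * ac.max(), 0.5 * ac.min()   # assumed threshold fractions
    peaks = [i for i in range(1, len(ac) - 1)
             if ac[i] > hi and ac[i] >= ac[i - 1] and ac[i] >= ac[i + 1]]
    troughs = [i for i in range(1, len(ac) - 1)
               if ac[i] < lo and ac[i] <= ac[i - 1] and ac[i] <= ac[i + 1]]
    return peaks, troughs
```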
Fig. 4 Subtraction of two trough images The calculation of tissue oxygenation was based on two measured wavelengths, λ1 = 590 nm and λ2 = 700 nm. A region of interest was selected on the peak and trough images at both wavelengths and their relative value was calculated using Eq. 1.
$D = \dfrac{\left[\log_{10}\!\left(I_{peak+trough} / I_{trough}\right)\right]_{\lambda_1}}{\left[\log_{10}\!\left(I_{peak+trough} / I_{trough}\right)\right]_{\lambda_2}}$   (1)
Finally, a value relative to oxygen saturation was calculated using Eq. (2).
$M_{SO_2} = \dfrac{\varepsilon_{Hb,\lambda_2}\, D - \varepsilon_{Hb,\lambda_1}}{\left(\varepsilon_{Hb,\lambda_2} - \varepsilon_{HbO_2,\lambda_2}\right) D - \left(\varepsilon_{Hb,\lambda_1} - \varepsilon_{HbO_2,\lambda_1}\right)}$   (2)

Fig. 2 Typical peak image obtained with our system
Figure 5 shows some results obtained during a hypoxic test. A pressure cuff was positioned on the forearm of 4 healthy volunteers. The plethysmographer was located on the middle finger of the volunteer, at less than 5 cm from the imaged area. A commercially available transmission
pulse oximeter was located on the index finger of the volunteer. Peak and trough images were acquired at three different stages of pressure cuff inflation and compared to the pulse oximeter readings. Some typical results are shown in the figure below.
III. RESULTS

Fig. 5 Comparison between measurement results and a commercial pulse oximeter (Nonin)

Results show a significant separation between each oxygenation group, although the values did not correspond numerically to physical values of SO2.

IV. CONCLUSIONS

We have introduced a simple imaging system, based on a plethysmographer and a multi-aperture camera, that can be used to take snapshots at the peak and trough of a pulse waveform. The results confirm that the arterial effect is detectable by subtraction of the peak and trough images. Also, by using a two-wavelength-based algorithm, semi-quantitative values of arterial SO2 can be obtained. More sophisticated models that use the full 16-wavelength capability of our imaging system will be devised in future work.

REFERENCES

1. B. Hertzman and C. R. Spealman, "Observations on the finger volume pulse recorded photoelectrically," Am. J. Physiol. 119, 334-335 (1937).
2. S. Hu, J. Zheng, V. Chouliaras, and R. Summers, "Feasibility of imaging photoplethysmography," in Proceedings of the International Conference on BioMedical Engineering and Informatics, New York, pp. 72-75 (2008).
3. W. Verkruysse, L. O. Svaasand, and J. S. Nelson, "Remote plethysmographic imaging using ambient light," Optics Express 16 (26) (2008).
4. S. Wendelken, S. McGrath, G. Blike, and M. Akay, "The feasibility of using a forehead reflectance pulse oximeter for automated remote triage," in Bioengineering Conference, 2004. Proceedings of the IEEE 30th Annual Northeast (2004), pp. 180-181.
5. Zhou, R. Choe, N. Shah, T. Durduran, G. Yu, A. Durkin, D. Hsiang, R. Mehta, J. Butler, A. Cerussi, B. Tromberg, and A. Yodh, "Diffuse optical monitoring of blood flow and oxygenation in human breast cancer during early stages of neoadjuvant chemotherapy," Journal of Biomedical Optics 12(5), September/October 2007.
6. J. A. Crowe and D. Damianou, "The wavelength dependence of the photoplethysmogram and its implication to pulse oximetry," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Institute of Electrical and Electronics Engineers, New York, 1992), pp. 2423-2424.
7. J. Lee, A. Cerussi, D. Saltzman, T. Waddington, B. Tromberg, and M. Brenner, "Hemoglobin measurement patterns during noninvasive diffuse optical spectroscopy monitoring of hypovolemic shock and fluid replacement," Journal of Biomedical Optics 12(2), March/April 2007.
8. J. C. Ramella-Roman, S. A. Mathews, and Q. Nguyen, "Measurement of oxygen saturation in the retina with a spectroscopic sensitive multi aperture camera," Optics Express 16, 6170-6182 (2008).
9. A. Basiri and J. C. Ramella-Roman, "Use of a multi-aperture camera in the characterization of skin wounds," Optics Express 18 (4), 3244-3257 (2010).
Fluorescent Microparticles for Sensing Cell Microenvironment Oxygen Levels within 3D Scaffolds

Miguel A. Acosta and Jennie B. Leach

University of Maryland Baltimore County, Department of Chemical and Biochemical Engineering; 1000 Hilltop Circle, ECS #314, Baltimore, MD 21250

Abstract–– Oxygen diffusion through tissues is a critical factor in maintaining physiologic cell processes and growing engineered tissues in vitro. We have recently reported a novel fluorescent microparticle system for sensing cell microenvironment oxygen levels within 3D scaffolds. Cell response to suboptimal oxygen concentrations is controlled by the hypoxia-inducible factor-1 (HIF-1). Thus, HIF-1 is an ideal marker for investigating cellular response to oxygen levels in engineered tissues. We have created a cell line consisting of human C8161 subcutaneous melanoma cells transfected to express an EGFP-HIF-1α fusion. Expression of this fusion protein may be quantified with analysis tools such as microscopy or flow cytometry. In this study, we present the preliminary characterization of these cells and demonstrate that they are capable of expressing measurable quantities of the EGFP-HIF-1α fusion protein in response to prolonged exposure to hypoxia.

Keywords–– Hypoxia, HIF-1α, oxygen sensing, biomaterials.
I. INTRODUCTION

Oxygen diffusion through biomaterial scaffolds plays an important role in maintaining healthy tissues in vitro. Oxygen supply is a limiting factor during the growth of highly metabolic tissues and large tissue masses, mainly as a result of the lack of vascularization in tissues cultured in vitro and the low solubility of oxygen in the culture medium. Therefore, gaining an understanding of the cellular response to changes in soluble cues, such as oxygen concentration, within their biomaterial microenvironment may potentially lead to improved methods for controlling cell behavior in tissue engineering [1]. Unfortunately, few methods exist that allow for direct correlation between spatial changes in oxygen concentration through a biomaterial scaffold and their impact on cellular function. To address this need, we have developed fluorescent oxygen-sensing microparticles that can be suspended in any transparent biomaterial scaffold used in cell culture and tissue engineering, and with which ratiometric measurements of spatial and temporal changes in oxygen concentration can be performed in a non-invasive manner [2]. Furthermore, since operation of the microparticles is based on dynamic fluorescence quenching of tris(4,7-diphenyl-1,10-phenanthroline)ruthenium(II) dichloride, oxygen consumption during use is negligible. These sensing microparticles have
been demonstrated to have a fully reversible response in the presence of cyclical changes in oxygen concentration, to have a size distribution suitable for tissue culture experiments, and to be non-cytotoxic to cells. Additionally, a calibration was created using the two-site Stern-Volmer model that allows direct calculation of oxygen partial pressures from measured fluorescence intensity data. Application of the sensing microparticles for investigating cellular response to changes in oxygen concentration through a biomaterial scaffold needs to incorporate simultaneous monitoring of particle response and cellular function. Cellular response may be characterized by several methods; for example, measurements of a protein product specific to a cell type, or of metabolic activity, may be performed. However, in order to specifically quantify the cellular response to gradients in oxygen concentration, measurement of a response that is triggered only by fluctuations in oxygen, and that is directly related to how cells mediate their function under such conditions, is needed. Measurements of protein products or metabolic activity may not be direct indicators of such a response, because they may change with fluctuations in other cues from the microenvironment such as nutrient content, temperature, pH, and the presence of toxic species. The helix-loop-helix transcription factor HIF-1, which is composed of two subunits, HIF-1α and HIF-1β, is directly controlled by hypoxia in the cellular environment. During normoxic conditions, HIF-1α is continuously degraded by the ubiquitin-proteasome system. This degradation process is oxygen dependent: under hypoxic conditions HIF-1α is no longer degraded and is translocated to the cell nucleus, where it binds with the hypoxia-inducible factor-1β (HIF-1β) to form HIF-1 [3, 4]. Thus, we have chosen to transfect cells to produce a fusion of HIF-1α and green fluorescent protein (GFP). In conjunction with the oxygen-sensing microparticles, we propose a methodology to correlate oxygen concentration in the biomaterial microenvironment with HIF-1α expression using confocal microscopy. Herein, we present the preliminary characterization of C8161 subcutaneous melanoma cells transfected to produce a GFP-HIF-1α fusion protein. We demonstrate that the cells were successfully transfected and that expression and regulation of the GFP-HIF-1α fusion is comparable to that of conventional HIF-1α in non-transfected cells.
II. MATERIALS AND METHODS

All cell culture materials were purchased from Invitrogen (Carlsbad, CA) unless otherwise stated.

A. Cell Culture

Human C8161 subcutaneous melanoma cells were cultured in RPMI supplemented with 10 vol. % defined fetal bovine serum (Hyclone Laboratories, Logan, UT), 1 vol. % 5,000 i.u./mL / 5,000 µg/mL penicillin/streptomycin, 1 vol. % 0.05 mM β-mercaptoethanol (Mallinckrodt Baker, Phillipsburg, NJ), 1 vol. % 1 M HEPES, 1 vol. % 0.2 M Glutamax, 0.1 vol. % 50 mg/mL gentamycin, and 0.02 vol. % 25 mg/mL Plasmocin. For selection of the transfected cells, the same culture medium was used with the addition of 0.016 vol. % 1 mg/mL G418 (Geneticin; EMD Biosciences, San Diego, CA). Cells were cultured in 0.8 cm² wells of a plastic chambered slide (Lab-Tek Chamber Slide system; Fisher, Pittsburgh, PA) at a density of 20,000 cells/well. Hypoxic incubation was carried out inside a custom-made closed chamber where the cells were subjected to an atmosphere of 5% CO2 and 95% N2 (0% O2) at 37°C. Cells were incubated for 12, 24, and 48 hours.

B. Plasmids and Stable Transfections

The full-length HIF-1α transcript DNA coding sequence was graciously donated by Dr. Joachim Fandrey and his research group from the University of Duisburg-Essen (Essen, Germany) [5]. The sequence was ligated into pEGFP-C1 (Clontech, Palo Alto, CA). The C8161 cells were transfected for 48 hours before being transferred to a culture flask with the culture medium described in section A for selection.

C. Microscopy and Image Analysis

Prior to microscopy, the cells were fixed with 4 vol. % formaldehyde and the cell nuclei were labeled with 3 nM 4′,6-diamino-2-phenylindole (DAPI). Imaging was carried out with an IX-81 fluorescence microscope (Olympus, Center Valley, PA). Image analysis for cell density and HIF-1α expression was conducted with NIH ImageJ freeware. Line scans across the width of a particular image were used to obtain an average value of the grayscale intensity for that image due to GFP fluorescence from the cells. Each image was segmented with a series of line profiles from which an average value of intensity was determined. Although the image analysis performed for these experiments was a global measurement of fluorescence intensity across the image area, the procedure may also be adapted to measure the intensity of individual cells.
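The line-scan averaging can be reproduced with a few lines of array code; the number of profiles per image is an assumption, as it is not stated in the paper.

```python
import numpy as np

def mean_linescan_intensity(image, n_lines=20):
    """Average grayscale intensity from horizontal line profiles spanning
    the image width, mimicking the ImageJ line-scan procedure described.
    n_lines (profiles per image) is an illustrative assumption.
    """
    rows = np.linspace(0, image.shape[0] - 1, n_lines).astype(int)
    return float(image[rows, :].mean())
```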
D. Statistics

Statistical analysis for the hypoxia experiments was performed using ANOVA with a cutoff of p < 0.05. Student's t-test with a cutoff of p < 0.05 was also used for paired comparison of normalized grayscale intensity between the 24 and 48 hour time points.

III. RESULTS

To study the expression of the EGFP-HIF-1α fusion protein, C8161 cells were cultured under hypoxia for 12, 24, and 48 hours. Two parameters were measured: cell density and global grayscale intensity. Figure 2a shows the cell densities at 12, 24, and 48 hours of culture under hypoxia. Statistical analysis demonstrated that there is no statistical difference between the measured cell densities at the three time points. Grayscale intensities corresponding to EGFP-HIF-1α expression at 12, 24, and 48 hours are shown in Figure 2b. The measured intensity at 24 hours showed an increase of 34.4 ± 13.7 % over that measured at 12 hours, whereas the measured intensity at 48 hours showed an 18.3 ± 6.4 % decrease from that measured at 24 hours. Statistical analysis demonstrated that the measured grayscale intensities at the three time points are distinct.
IV. DISCUSSION

Transfection of the C8161 cells with pEGFP-C1-HIF-1α has yielded a cell line capable of expressing an EGFP-HIF-1α fusion that may be characterized with fluorescence microscopy. Figure 1 (a-c) displays the changes in EGFP-HIF-1α fluorescence with increased culture time under hypoxic conditions. The observed EGFP-HIF-1α fluorescence increased with culture time, reaching its peak at 24 hours and decreasing by 48 hours. Fluorescence is greater around the cell nucleus, which suggests that HIF-1α trafficking occurs as expected: the protein is moved to the nucleus during hypoxia to bind to HIF-1β. This result is also consistent with observations made by Wotzlaw et al., who transfected human U2OS osteosarcoma cells to express cyan fluorescent protein-HIF-1α (ECFP-HIF-1α) and yellow fluorescent protein-HIF-1β (EYFP-HIF-1β) fusions, with which they were able to estimate the separation between the two molecules once combined within the cell nuclei [5]. Figure 2b shows the changes in EGFP-HIF-1α expression with culture time under hypoxic conditions. Similar to what is observed in Figure 1, there is an increase in measured intensity from 12 to 24 hours and a decrease from 24 to 48 hours. Measured intensity may depend on cell density; however, the statistical analyses performed on the results shown in Figure 2a demonstrate that there are no statistical differences in cell density between the three time points. Thus, the decrease in measured intensity between 24 and 48 hours cannot be attributed to cell death. A normalized representation of the measured intensity is more
appropriate. Figure 2c shows the measured grayscale intensity normalized against cell density. Statistical analysis performed using a Student’s T-test for comparing the normalized intensity at 24 and 48 hours (p < 0.05) demonstrated that there was no statistical difference between the measured intensity on a per cell basis for the two time points.
Fig. 2 (a) Measured cell density (cells/cm2) for C8161 cells cultured under hypoxia for 12, 24, and 48 hours. (b) Measured grayscale intensity for C8161 cells corresponding to expression of EGFP-HIF-1α fusion protein at 12, 24, and 48 hours of hypoxic culture. (c) Calculated normalized intensity (a.u.-cm2/cells) for C8161 cells cultured under hypoxia for 12, 24, and 48 hours. Data shown as mean ± standard deviation for three trials
Fig. 1 Fluorescence images of C8161 cells showing expression of EGFP-HIF-1α at (a) 12 hours, (b) 24 hours, and (c) 48 hours of culture under hypoxia (0% O2, 5% CO2, and 95% N2). Blue fluorescence corresponds to cell nuclei labeled with DAPI. Images shown at 10× magnification
Hence, the measured values at 12 hours, which are lower, may be attributed to the fact that less HIF-1α had accumulated inside the cells at that time. This suggests that the fusion protein is maintained in a manner similar to that in which HIF-1α is maintained in non-transfected cells; its degradation
process is down-regulated under hypoxic conditions, and the protein is later translocated to the nucleus. Future studies will combine the oxygen-sensing microparticles together with the transfected C8161 cells in 3D culture to correlate EGFP-HIF-1α expression with the gradients in oxygen concentration present in the biomaterial microenvironment. These studies will investigate the possibility of differential EGFP-HIF-1α expression due to gradients in oxygen concentration through the volume of a biomaterial scaffold greater than 100 μm thick.
V. CONCLUSIONS

We have presented the preliminary characterization of C8161 subcutaneous melanoma cells transfected to produce a GFP-HIF-1α fusion protein. The cells were cultured for 12, 24, and 48 hours under an atmosphere of 5% CO2 and 95% N2 (0% O2) at 37°C. Fluorescence microscopy was employed to image and analyze expression of the fusion protein. The cells were demonstrated not to differ statistically in cell density across all three time points. Despite initially observed differences in expression of the EGFP-HIF-1α fusion, the calculated values of normalized intensity revealed that differences existed only at 12 hours, when the protein was still accumulating inside the cell body and around the nucleus. Measured values of intensity at 24 and 48 hours were not statistically different. These results discount the possibility of a decrease in intensity caused by cell death due to prolonged exposure to hypoxia, and also suggest that the fusion protein is regulated in a fashion similar to its natural counterpart.
ACKNOWLEDGEMENTS

We extend our thanks to Dr. Joachim Fandrey and his research group at the University of Duisburg-Essen (Essen, Germany) for kindly providing the HIF-1α DNA and for valuable technical discussions, to Dr. Charles Bieberich and his research group for their aid in the ligation of the HIF-1α DNA into the pEGFP-C1 plasmid, and to Dr. Suzanne Ostrand-Rosenberg and her research group for their aid with the transfection of the cells. Financial support for this project has been provided by the Henry Luce Foundation, UMBC, and NIH-NINDS (R01NS065205).
REFERENCES

1. Wong JY, Leach JB, Brown XQ. Balance of chemistry, topography, and mechanics at the cell-biomaterial interface: Issues and challenges for assessing the role of substrate mechanics on cell response. Surface Science. 2004;570:119-33.
2. Acosta M, Ymele-Leki P, Kostov Y, Leach J. Fluorescent microparticles for sensing cell microenvironment oxygen levels within 3D scaffolds. Biomaterials. 2009;30:3068-74.
3. Wang GL, Jiang BH, Rue EA, Semenza GL. Hypoxia-inducible factor 1 is a basic-helix-loop-helix-PAS heterodimer regulated by cellular O2 tension. Proc Natl Acad Sci U S A. 1995;92:5510-4.
4. Vaupel P. The role of hypoxia-induced factors in tumor progression. Oncologist. 2004;9 Suppl 5:10-7.
5. Wotzlaw C, Otto T, Berchner-Pfannschmidt U, Metzen E, Acker H, Fandrey J. Optical analysis of the HIF-1 complex in living cells by FRET and FRAP. FASEB J. 2007;21:700-7.
Determination of in vivo Blood Oxygen Saturation and Blood Volume Fraction Using Diffuse Reflectance Spectroscopy

P. Chen and W. Lin

Department of Biomedical Engineering, Florida International University, Miami, USA
Abstract— Variations in blood oxygen saturation (SatO2) and blood volume fraction (BVF) in tissue have been associated with various pathophysiological conditions such as cancer development. These variations can be assessed directly through the absorption properties of hemoglobin in the visible wavelength region, which is detectable using optical techniques such as diffuse reflectance spectroscopy. In this pilot study, we derived a new methodology that quantitatively extracted the spectral profile characteristics of hemoglobin absorption from a diffuse reflectance spectrum using wavelet transformation. The variations in the extracted profile characteristics were then related to the alterations in SatO2 and BVF levels. The applicability of the methodology was evaluated using a set of diffuse reflectance spectra produced theoretically using a Monte Carlo simulation of photon migration.

Keywords— Blood oxygen saturation, blood volume fraction, in vivo, diffuse reflectance spectroscopy, wavelet transformation.
I. INTRODUCTION

Local blood oxygen saturation (SatO2) and blood volume fraction (BVF) are important physiological indicators for healthy tissue. Insufficient blood flow and tissue hypoxia can be related to several different pathophysiological conditions such as stroke, heart failure, peripheral vascular diseases and the development of cancers [1-4]. It has been shown that diffuse reflectance spectroscopy is an effective non-invasive technique to measure SatO2 and BVF levels in tissue in vivo [4-6]. In general, such a technique requires a mathematical model to generate diffuse reflectance spectra for a specific detection geometry. A fitting routine is then employed to match the simulated diffuse reflectance spectrum to the measured one by systematically altering the input parameters of the model (i.e., the scattering and absorption coefficients). In the end, the optical properties that produced the best fit are used to calculate the SatO2 and BVF levels in the tissue investigated. In this pilot study, we derived a new methodology which does not require a sophisticated model or a computationally
intensive fitting routine. The method extracts the profile characteristics associated with hemoglobin (Hb) absorption from a diffuse reflectance spectrum quantitatively using wavelet transformation. The variations of the extracted profile characteristics are closely related to the variations of SatO2 and BVF levels.
II. METHOD

The distinct and dominant absorption properties of Hb make it stand out in a diffuse reflectance spectrum between 460 nm and 600 nm. Moreover, oxy-Hb and deoxy-Hb possess distinctly different spectral profiles in this wavelength region, as shown in Fig. 1; the double peak and the single peak are the distinguishing signatures of oxy-Hb and deoxy-Hb, respectively. Therefore, a spectral processing algorithm can be developed to detect and quantify these conspicuous spectral profile characteristics in a diffuse reflectance spectrum. These features, in turn, can be used to determine the BVF and SatO2 in the investigated tissue.
Fig. 1 Absorption spectra of oxy-Hb and deoxy-Hb (molar extinction coefficient, in 1/(M·cm), versus wavelength from 500 nm to 650 nm)
Studies have shown that wavelet transformation (WT) is a robust technique to detect and quantify the profile characteristics of a signal [7, 8]. It is able to look into the different structures of a signal that reside at different scales [9]. To date, wavelet transformation has been applied to optical spectral analysis and medical image processing [10, 11]. Hence, WT is employed here to detect and quantify the characteristics of the spectral profiles generated by Hb absorption. WT is a linear operation that decomposes a signal into small wavelet components at progressively changing scales and localizes important singularities in the signal during the zooming procedure. By definition, a wavelet is a finite-energy function with a zero average. In addition, it also satisfies the admissibility condition

$C_\psi = \int_0^{+\infty} \dfrac{|\hat{\psi}(w)|^2}{w}\, dw < +\infty$

Let

$\psi_{u,s}(\lambda) = \dfrac{1}{s}\, \psi\!\left(\dfrac{\lambda - u}{s}\right)$

where u and s are the translation and the dilation (scale) parameters, respectively. The WT of a signal f(λ), therefore, is defined as

$W_s f(\lambda) = \int_{-\infty}^{+\infty} f(u) \cdot \dfrac{1}{s}\, \psi\!\left(\dfrac{\lambda - u}{s}\right) du = (f \otimes \psi_s)(\lambda)$   (1)

where ⊗ denotes convolution and

$\psi_s(\lambda) = \dfrac{1}{s}\, \psi\!\left(\dfrac{\lambda}{s}\right)$

From the above derivations, WT can also be treated as a bandpass filter whose frequency response can be adjusted by s. Prior to WT, a logarithmic transformation is applied to the measured diffuse reflectance spectrum, which is denoted by Lrd(λ), to enhance the Hb absorption properties in the spectrum.

A. Characteristic Index for Blood Volume Fraction

The isosbestic wavelengths are those wavelengths where oxy-Hb and deoxy-Hb possess identical absorption properties. Therefore, diffuse reflectance signals at these wavelengths can be used for relative blood volume estimation. Similar isosbestic wavelengths can be found in the wavelet-transformed absorption spectra of oxy-Hb and deoxy-Hb; they are named WT_isosbestic wavelengths in this paper. At an appropriate WT scale, it is found that some WT_isosbestic wavelengths are not only independent of SatO2 but also insensitive to scattering effects. Such a characteristic makes them ideal for evaluating absolute BVF without taking the tissue type into consideration. Therefore, the algorithm proposed here utilizes the WT_isosbestic wavelengths to create the characteristic index BVFi for the diffuse reflectance. That is:

$BVF_i = W_{s_1} L_{rd}(\lambda_{WT\_isosbestic})$   (2)

B. Characteristic Index for Blood Oxygen Saturation

In a similar fashion, the characteristic index SatO2i for various SatO2 levels can be obtained at the wavelength λSatO2 using the following equation:

$SatO2_i = W_{s_2} L_{rd}(\lambda_{SatO2})$   (3)
III. EVALUATION MC simulation has been used widely as the numerical solution of the radiation transfer theory in several fields. It has also been employed to describe the light propagation in biological tissues [12-14]. MC simulation of photon migration utilizes the particle characteristics of light and simulates the photon propagation as statistical random walk in a medium. The method simulates photons as small energy packets that encounter absorption events (the reduction of energy) and scattering events (the change in direction of propagation) based on a set of predetermined probabilities. These probabilities are defined by the absorption coefficient and the scattering coefficient of the medium. In a simulation, a photon will either (1) reemerge at the surface of the entrance, (2) transmit through the medium, or (3) be absorbed completely by the medium. The main drawback of this method is the high demand of computing power, which makes it unsuitable for real-time clinical applications. For a comprehensive evaluation of the proposed algorithm, an extensive MC database (MCDB) of diffuse reflectance spectra was established which mimics photon migration in different biological tissues at various SatO2 and BVF levels. A one layer homogeneous semi-infinite medium geometry was employed in all simulations. In each simulation, three million photons were injected at the origin. The remanding energy of reflected photons was recorded to create the diffuse reflectance spectrum. The distance from
A. Absorption Coefficient

Within the wavelength region of 460 nm to 650 nm, Hb and myoglobin (Mb) are the dominant chromophores of the majority of biological tissue types. Therefore, the simulation work proposed here only concerns the absorption from Hb and Mb. The absorption spectra of Hb and Mb closely resemble each other in the visible and NIR wavelength regions [15]. Hence, the Hb absorption spectrum is used to represent the absorption of both Hb and Mb. The absorption coefficient used to establish the MCDB was defined by Eq. (4). That is

$\mu_a(\lambda) = BVF \left[ SatO2 \times \mu_a^{oxy}(\lambda) + (1 - SatO2) \times \mu_a^{deoxy}(\lambda) \right]$ (1/cm)   (4)

B. Scattering Coefficient

The reduced scattering coefficient in biological tissue can be approximated by Mie theory [16, 17]. That is

$\mu_s' = a \times m^{-b}$ (1/m),   (5)

where m = λ × 10⁻³ (λ in nm). The ranges and increments of each variable used to construct the MCDB are summarized in Table 1. The ranges of these variables encompass the majority of biological tissue types, hemodynamic conditions and probe geometries [18]. The diffuse reflectance spectra in the MCDB are denoted as rd[r, BVF, SatO2, a, b](λ).

Table 1 The ranges of the variables used to construct the MCDB

Variable | Start     | Increment | End
r        | 0.0075 cm | 0.005 cm  | 1 cm
BVF      | 2%        | 0.5%      | 8%
SatO2    | 0%        | 10%       | 100%
a        | 600       | 200       | 1400
b        | 0.9       | 0.1       | 1.5
λ        | 460 nm    | 2 nm      | 650 nm

IV. RESULTS

The MCDB of diffuse reflectance spectra was established to develop the spectral profile analysis algorithm. The parameters used in the MC simulations and their ranges were those given in Table 1. Thus far, a total of 2205 diffuse reflectance spectra have been produced and stored in the MCDB, and more diffuse reflectance spectra are being generated. In the current MCDB, there are a total of 315 spectra at each BVF level. Figure 2 demonstrates that at r = 0.095 cm, the characteristic index BVFi obtained from Eq. (2) is highly sensitive to the BVF level despite the significant variations in the other parameters. The Spearman correlation coefficient of BVFi vs. BVF is 0.988 with p < 0.05, which indicates a high linear correlation between the BVFi values and the BVF levels.

Fig. 2 The comparison of the BVFi values produced by Eq. (2) and the actual BVF levels. Each black dot indicates the median of the data points at each BVF level. The horizontal error bars represent the 90th and 10th percentiles of the data sets, respectively. The solid line represents the linear regression of the data.

In the MCDB of diffuse reflectance spectra, there are a total of 441 spectra at a given SatO2 level. Figure 3 demonstrates that at r = 0.095 cm, the characteristic index SatO2i obtained from Eq. (3) is highly sensitive to the SatO2 level despite the significant variations in the other parameters. The Spearman correlation coefficient of SatO2i vs. SatO2 is 0.980 with p < 0.05, which indicates a high linear correlation between the SatO2i values and the SatO2 levels.
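For reference, the optical-property inputs of the MCDB can be enumerated directly from Eqs. (4)-(5) and the Table 1 ranges. The hemoglobin spectra themselves must come from tabulated data, and the unit conversion shown reflects our reading of Eq. (5); both are assumptions.

```python
import itertools
import numpy as np

def mcdb_optical_properties(wl_nm, mu_a_oxy, mu_a_deoxy):
    """Enumerate the per-spectrum (mu_a, mu_s') inputs of the MCDB over the
    Table 1 ranges, per Eqs. (4)-(5). mu_a_oxy / mu_a_deoxy are whole-blood
    absorption spectra (1/cm) sampled on wl_nm, from tabulated data.
    """
    m = np.asarray(wl_nm) * 1e-3                 # m = lambda x 10^-3 (lambda in nm)
    grid = []
    for bvf, sat, a, b in itertools.product(
            np.arange(0.02, 0.0801, 0.005),      # BVF: 2 % to 8 % in 0.5 % steps
            np.arange(0.0, 1.001, 0.10),         # SatO2: 0 % to 100 % in 10 % steps
            np.arange(600, 1401, 200),           # a
            np.arange(0.9, 1.501, 0.1)):         # b
        mu_a = bvf * (sat * mu_a_oxy + (1.0 - sat) * mu_a_deoxy)   # Eq. (4), 1/cm
        mu_s_prime = a * m ** (-b) / 100.0                         # Eq. (5), 1/m -> 1/cm
        grid.append(dict(BVF=bvf, SatO2=sat, a=a, b=b,
                         mu_a=mu_a, mu_s_prime=mu_s_prime))
    return grid
```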
Fig. 3 The comparison of the SatO2i values produced by Eq. (3) and the actual SatO2 levels. The black dots indicate the median of the data at a given SatO2 level. The horizontal error bars represent the 90th and 10th percentiles of the data sets. The solid line represents the linear regression of the data.
V. DISCUSSION AND CONCLUSIONS

The preliminary results show that the characteristic indices extracted by wavelet transformation from a diffuse reflectance spectrum produced by an MC simulation can be closely related to the SatO2 and BVF levels. This outcome demonstrates the potential clinical utility of the proposed algorithm. Further ex vivo validation using tissue phantoms and in vivo validation experiments need to be carried out to confirm the reported observations.
REFERENCES

1. Wang H. W., Jiang J. K., Lin C. H., et al. (2009) Diffuse reflectance spectroscopy detects increased hemoglobin concentration and decreased oxygenation during colon carcinogenesis from normal to malignant tumors 17:2805-2817
2. Mallia R., Thomas S. S., Mathews A., et al. (2008) Oxygenated hemoglobin diffuse reflectance ratio for in vivo detection of oral precancer 13:041306 3. Subhash N., Mallia J. R., Thomas S. S., et al. (2006) Oral cancer detection using diffuse reflectance spectral ratio R540/R575 of oxygenated hemoglobin bands 11:014018 4. Bargo P. R., Prahl S. A., Goodell T. T., et al. (2005) In vivo determination of optical properties of normal and tumor tissue with white light reflectance and an empirical light transport model during endoscopy 10:034018 5. Gade J., Palmqvist D., Plomgard P., et al. (2006) Diffuse reflectance spectrophotometry with visible light: comparison of four different methods in a tissue phantom 51:121-136 6. Stratonnikov A. A. and Loschenov V. B. (2001) Evaluation of blood oxygen saturation in vivo from diffuse reflectance spectra 6:457-467 7. Mallat S. and W. L. Hwang (1992) Singularity detection and processing with wavelets 38:617-643 8. Mallat S. and Zhong S. (1992) Characterization of signals from multiscale edges 14:710-732 9. Mallat S. G. (1989) A theory for multiresolution signal decomposition: the wavelet representation 11:674-693 10. Gributs C. E. and Burns D. H. (2003) Haar transform analysis of photon time-of-flight measurements for quantification of optical properties in scattering media 42:2923-2930 11. Gupta S., Nair M. S., Pradhan A., et al. (2005) Wavelet-based characterization of spectral fluctuations in normal, benign, and cancerous human breast tissues 10:054012 12. Jacques S. L. and L. V. Wang (1995) Monte Carlo modeling of light transport in tissues Plenum Press New York 13. Wang L., Jacques S. L. and Zheng L. (1995) MCML--Monte Carlo modeling of light transport in multi-layered tissues 47:131-146 14. Fukui Y., Ajichi Y. and Okada E. (2003) Monte Carlo prediction of near-infrared light propagation in realistic adult and neonatal head models 42:2881-2887 15. Schuder S., Wittenberg J. B., Haseltine B., et al. (1979) Spectrophotometric determination of myoglobin in cardiac and skeletal muscle: separation from hemoglobin by subunit-exchange chromatography 92:473-481 16. Nilsson A. M., Sturesson C., Liu D. L., et al. (1998) Changes in spectral shape of tissue optical properties in conjunction with laserinduced thermotherapy 37:1256-1267 17. Doornbos R. M., Lang R., Aalders M. C., et al. (1999) The determination of in vivo human tissue optical properties and absolute chromophore concentrations using spatially resolved steady-state diffuse reflectance spectroscopy 44:967-981 18. Tuchin V. V. (2007) Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis. SPIE Press
Fredholm Integral Equations in Biophysical Data Analysis

P. Schuck

Dynamics of Macromolecular Assembly, LBPS, National Institute of Biomedical Imaging and Bioengineering, NIH, Bethesda, U.S.A.

Abstract— In the last decade, the abundant availability of computational resources has allowed for significant improvements in the interpretation of data in traditional disciplines of physical biochemistry. In particular, there are many examples where the data are fitted with a model describing a distribution of parameters, taking the form of a Fredholm integral equation. Algorithms traditionally applied in image analysis have proven highly useful to solve the corresponding ill-posed inverse problem. Two examples are presented from optical biosensing and sedimentation velocity analytical ultracentrifugation. In both examples, standard regularization techniques such as Tikhonov and maximum entropy regularization are applied, in conjunction with non-negativity constraints. Further, Bayesian adaptations of the regularization functional are possible that incorporate available prior knowledge on the system under study. Practical limitations and problems will be discussed.

Keywords— regularization, analytical ultracentrifugation, optical biosensing, size distribution, affinity distribution.
I. INTRODUCTION

Studying the physical solution properties of biological macromolecules in vitro, such as their sizes, shapes, and non-covalent interactions, is a foundation for understanding their function and role within a biological system. Biological macromolecules pose specific problems for detailed biophysical characterization in many techniques, arising from their intrinsic residual heterogeneity. Although a molecular machinery has evolved that can synthesize macromolecules in vivo extremely precisely, with a homogeneity far superior to man-made polymer synthesis, the samples are usually still far from monodisperse. The reasons are partly imperfect purification, partly microheterogeneity arising from posttranslational modifications and ensembles of conformational states, and partly unavoidable proteolytic cleavage and aggregation processes. These trace species can hamper the study of the main macromolecular population, or they may be the focus of the investigation, such as in the quality control of the stability of protein pharmaceuticals in their formulations. As a consequence, many of the experimental observables arise from a mixture of many, generally unknown, species. It can be useful to describe this problem from a general point of view, which has been presented in the biophysical
literature by Provencher [1]. Let us denote the measurement as a(x), where an observable a is measured along an experimental parameter x, and the macromolecular sample as a differential distribution c(p) along a molecular property p (with c(p)dp being the concentration of species exhibiting values between p and p+dp), for example, with p representing macromolecular size. Fortunately, it is often possible to conduct experiments in a linear mode, where the macromolecular species do not disturb each other and all provide independent, additive signal contributions. (For example, this may require sufficiently low total macromolecular concentrations to attain 'thermodynamically ideal conditions', where the solution volume available to each molecule is unaffected by the occupancy of space by other molecules.) In this case, the measured data are superpositions of the signals of each species, which naturally leads to a Fredholm integral equation of the form
$a(x) = \int k(x,p)\,c(p)\,dp$   (1)
where the kernel k(x,p′) is the ideal signal profile expected to be measured from a true single species with parameter p′. (Often, there may be multiple data dimensions and/or multiple distribution parameters.) Eq. (1) is an ill-posed problem, and regularization techniques, such as Tikhonov-Phillips or maximum entropy regularization, must be used in order to achieve reliable results [1,3,4,6-10]. The best experimental case would be an ideal akin to mass spectrometry, where sharp peaks are obtained, $k(x,p') = \delta(x - \lambda p')$, and the true distribution (modulo a calibration constant λ) is directly measured. The worst-case scenario would be a constant kernel with dk/dp = 0 everywhere, which would not allow any discrimination at all. Unfortunately, typical kernels are somewhere intermediate, often fairly smooth functions, such as Boltzmann exponentials in sedimentation equilibrium [2], or decaying exponentials in autocorrelation functions of the fluctuations of the scattered light in dynamic light scattering [3] or in lifetime distributions [4]. However, Lamm equation solutions describing the sedimentation boundaries of large macromolecules [5,6] (see below) can be relatively sharp. Another example for Eq. (1) is image
deconvolution (where the kernel might be a point spread function) [7,8]. In practice, (1) appears as a discrete problem with data points ai, and the numerical solution of (1) can be sketched as follows: we may approximate the distribution by a discrete set of unknown values cn on an equidistant grid pn, in vector notation $\vec{c}$. This leads to a non-negativity constrained least-squares problem

$\min_{\vec{c} \ge 0} \left[ \sum_i \left( a_i - \sum_n c_n k_{in} \right)^2 + \alpha B(\vec{c}\,) \right]$   (2)
where B is the regularization functional and α is a scaling parameter that adjusts the weight of the regularization constraint relative to the fit of the data, so as to ensure that the χ² of the fit does not exceed the statistically permitted limits, as predicted, for example, by F-statistics [6]. Eq. (2) can be solved largely using standard algebraic techniques and non-linear optimization [11]. For Tikhonov-Phillips regularization, B can take the form $\vec{c}^{\,T} H_0 \vec{c}$, where H0 is the square of a second-derivative matrix [5], whereas for maximum entropy regularization, it may be $\sum_n c_n \log(c_n)$ or similar [3,4,7,12]. For experiments with low information content, such regularization suppresses the amplification of experimental noise into sharp but unreliable features of the distribution, and produces the most parsimonious distribution consistent with the data. While this avoids over-interpretation, the resulting distributions can be unsatisfactorily broad. A very powerful Bayesian strategy to circumvent this problem is to define a prior expectation for the distribution, γn, and to express the regularization such that it produces the most parsimonious deviation from the prior expectation. This can be achieved in maximum entropy-like regularization with $\sum_n c_n \log(c_n/\gamma_n)$ [4,7,11], or in Tikhonov-like regularization by substituting the matrix H0 with a matrix H of elements $H_{pq} = H^{0}_{pq}(\gamma_p \gamma_q)^{-1}$ [12,13]. In this way, data consisting only of noise will reproduce the prior, indicating that we have not learned anything, and new features will arise only if the data supply the necessary information with statistical significance. Following this concept, we have applied the distribution analysis to the study of macromolecular sedimentation coefficient distributions in analytical ultracentrifugation [6,11,12], and to the study of the combined distribution of affinity and life-time of complexes of surface binding sites with analyte molecules in optical biosensing [10,13]. Typical results are presented, and common opportunities and problems are described.
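As a concrete illustration of Eq. (2), the following sketch solves the regularized non-negative least-squares problem by matrix augmentation, using the fact that a Tikhonov-Phillips penalty with $H = L^T L$ turns Eq. (2) into an ordinary non-negative least-squares problem over stacked matrices. The kernel matrix K, data vector a, grid size, and the choice of α are placeholders; in practice α would be adjusted against F-statistics as described above.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls(K, a, alpha, gamma=None):
    """Sketch of Eq. (2): min_{c>=0} ||K c - a||^2 + alpha * c^T H c,
    with H = L^T L built from a second-difference operator L. Passing a
    prior gamma rescales L so that H_pq = H0_pq / (gamma_p * gamma_q),
    the Bayesian variant described in the text."""
    n = K.shape[1]
    # Second-difference matrix L ((n-2) x n), rows of the stencil [1, -2, 1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    if gamma is not None:
        L = L / gamma[np.newaxis, :]   # encodes H_pq = H0_pq/(gamma_p*gamma_q)
    # Stack the data misfit and the regularization into one NNLS problem
    A = np.vstack([K, np.sqrt(alpha) * L])
    b = np.concatenate([a, np.zeros(n - 2)])
    c, _ = nnls(A, b)
    return c
```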
II. RESULTS

A. Sedimentation Velocity Analytical Ultracentrifugation
In sedimentation velocity analytical ultracentrifugation (SV), the evolution of the radial macromolecular concentration profiles a(r,t) is measured following the application of a centrifugal field (Fig. 1). For a single, ideally sedimenting species at unit concentration, it follows the Lamm equation [5,6]

$\frac{\partial a^{(1)}}{\partial t} + \frac{1}{r}\frac{\partial}{\partial r}\left[ r \left( s\omega^2 r\, a^{(1)} - D \frac{\partial a^{(1)}}{\partial r} \right) \right] = 0$   (3)
with r and t denoting the distance from the center of rotation and time, respectively, and ω the angular velocity. The initial condition is $a^{(1)}(r,0) = 1$, and there are reflective boundary conditions at the meniscus r = m and the bottom r = b (m < b).
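For readers who want to experiment with Eq. (3), a minimal explicit finite-volume scheme is sketched below. The discretization, grid, and parameter values are illustrative assumptions only; production analyses use the far more efficient adaptive methods of [5].

```python
import numpy as np

def lamm_solution(s, D, omega, m=6.0, b=7.2, nr=400, t_end=3000.0, dt=0.5):
    """Explicit finite-volume sketch of the Lamm equation (3), with zero-flux
    (reflective) boundaries at the meniscus r=m and bottom r=b, and initial
    condition a(r,0)=1. Units: r in cm, s in seconds (1 S = 1e-13 s),
    D in cm^2/s, omega in rad/s."""
    r = np.linspace(m, b, nr)
    dr = r[1] - r[0]
    a = np.ones(nr)
    rf = 0.5 * (r[:-1] + r[1:])            # interior cell interfaces
    for _ in range(int(t_end / dt)):
        am = 0.5 * (a[:-1] + a[1:])        # concentration at interfaces
        dadr = np.diff(a) / dr
        flux = rf * (s * omega**2 * rf * am - D * dadr)  # r*J at interfaces
        rflux = np.concatenate(([0.0], flux, [0.0]))     # reflective ends
        a = a - dt * np.diff(rflux) / (r * dr)
    return r, a

# e.g. an IgG-like species at 50,000 rpm:
# r, a = lamm_solution(s=6e-13, D=4e-7, omega=50000 * 2 * np.pi / 60)
```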
Fig. 1 SV data from the sedimentation of an immunoglobulin G sample

We can use solutions of (3) as the kernel in (1) for macromolecular size-and-shape distributions, for example, as a two-dimensional distribution c(s,D) in the parameters s and D [14]. The computations necessary for data analysis can be accomplished very efficiently [5,11]. Alternatively, and more often, it is exploited that most macromolecular samples are fairly homogeneous in their translational friction coefficient ratio f/f0, which allows using a hydrodynamic scaling relationship to calculate D as a function of s, and thus to arrive at a sedimentation coefficient distribution c(s) [15]. Finally, the generalization of (1) to multiple spectral signals and separate distributions for multiple components affords further spectral resolution in the study of macromolecular mixtures [16].
Since the Lamm equation solutions in the limit D = 0 are Heaviside step functions and in this case (noise-less) radial derivatives da/dr would be a simple transformation of the distribution c(s), solving (1) for D > 0 and noisy data essentially amounts to the deconvolution of diffusion, analogous to the deconvolution of the point-spread function in imaging applications.
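Combining the two sketches above, a toy c(s)-type analysis can be assembled: simulate Lamm solutions on a grid of s-values, stack them into a kernel matrix, and solve the resulting non-negative least-squares problem. Everything here (the s-grid, the single fixed D, the time points, and the synthetic data) is an illustrative assumption; a real c(s) analysis would couple D to s via the f/f0 scaling relationship [15] and regularize as in Eq. (2).

```python
import numpy as np
from scipy.optimize import nnls

# Toy c(s)-style fit, reusing lamm_solution() from the sketch above.
omega = 50000 * 2 * np.pi / 60
s_grid = np.linspace(1e-13, 12e-13, 30)
times = [600.0, 1200.0, 1800.0, 2400.0]

# Kernel: one column of concatenated radial profiles per s-value
K = np.array([
    np.concatenate([lamm_solution(s, 4e-7, omega, t_end=t)[1] for t in times])
    for s in s_grid
]).T

# Synthetic "measured" data: two species (about 6 S and 9 S) plus noise
truth = np.zeros(len(s_grid)); truth[[13, 21]] = [1.0, 0.3]
data = K @ truth + 0.005 * np.random.default_rng(0).standard_normal(K.shape[0])

c_fit, _ = nnls(K, data)                 # c_fit approximates c(s) on s_grid
```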
Fig. 2 c(s) distribution of the SV data from Fig. 1 with maximum entropy regularization (solid line), and in 10-fold magnification (dotted line). The red (dashed) line shows c(Pδ)(s) using the prior of monodispersity

The results of the c(s) analysis for the data in Fig. 1 are shown in Fig. 2. Besides the main peak at 5.8 S, there are significant populations of larger species (at 8.5 S and ~10.5 S) that probably reflect dimers and trimers of the protein, as well as degradation products at 2.5 S. This demonstrates the high sensitivity of SV for quantifying trace components. While the main peak is well resolved, the dimer and trimer peaks are broader. Based on the expectation that each oligomer should have a specific conformation and be intrinsically monodisperse, we can integrate the peaks in the initial c(s) distribution, determine the signal-weighted average s-value, sw, and construct a distribution of prior knowledge γn that consists of a series of δ-functions (or as close as the grid permits) at these sw-values. When used in the regularization as a Bayesian prior, the resulting distribution is termed c(Pδ)(s) [12]. As shown by the red dashed lines in Fig. 2, these distributions are sharp even for the minor oligomer populations. There may be cases, however, where broad peaks cannot be described by a δ-function, in which case c(Pδ)(s) would have features contradicting the prior.

B. Surface Binding Sites in Optical Biosensing

Surface plasmon resonance biosensing (SPR) is usually aimed at the measurement of the equilibrium binding constants Kd of analyte molecules interacting with surface-immobilized macromolecules, and at the determination of the dissociation rate constant koff of these complexes.

Fig. 3 SPR surface binding traces for three analyte concentrations and contact time tc = 3000 sec, and residuals from a P(koff, Kd) fit. For details, see [13]
Data from the time-course of surface-bound material a(t) are collected after applying a pulse of analyte macromolecules at a concentration cA in a flow across the sensor surface for the duration tc, followed by an observation period of the dissociation process with buffer flow not containing any analyte [17]. Families of traces are recorded at different pulse concentrations (Fig. 3). Ideally, the signal a(1)(t,cA) of a single site at unit concentration should follow the piecewise exponential shape

$a^{(1)}(t, c_A) = a^{(1)}_{max} \begin{cases} 1 - e^{-k_{obs} t} & 0 < t \le t_c \\ \left(1 - e^{-k_{obs} t_c}\right) e^{-k_{off}(t - t_c)} & t > t_c \end{cases}$   (4)
where $a^{(1)}_{max} = (1 + K_d/c_A)^{-1}$, $k_{obs} = k_{on} c_A + k_{off}$, and $k_{on} = k_{off}/K_d$. Heterogeneity can arise from many factors, including the physical properties of the microenvironment of each immobilized molecule, the conformational ensemble of the immobilized molecule (which may be intrinsic or induced by non-uniform chemical crosslinking), and the ubiquitous presence of extraneous 'non-specific' surface sites [10,13,17]. Inasmuch as the individual molecules act as (thermodynamically and physically) independent sites, the total signal is a simple superposition, and Eq. (1) can be used to determine a two-dimensional affinity and life-time distribution of surface sites, termed P(koff, Kd) [10]. This allows signal contributions from the main macromolecular interaction of interest to be discriminated from those of extraneous sites with very weak or virtually irreversible binding (Fig. 4). Analogous to the monodispersity assumption in c(Pδ)(s) in SV, it can be reasonable to calculate a refined distribution P(δ)(koff, Kd) that embeds the prior expectation that there is only a single, discrete site, with parameter values estimated hierarchically from an initial fit with uniform prior [13]. In the example of Fig. 4, this allows demonstrating the microheterogeneity of the sites comprising the main peak [13].
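A short sketch of Eq. (4) and of the superposition idea behind P(koff, Kd) follows; the rate constants, concentrations, and the two-site mixture are hypothetical values chosen only to make the script run.

```python
import numpy as np

def binding_trace(t, cA, kon, koff, tc):
    """Ideal single-site SPR trace per Eq. (4), at unit site concentration:
    exponential association for 0 < t <= tc, dissociation for t > tc."""
    Kd = koff / kon
    amax = 1.0 / (1.0 + Kd / cA)        # a_max = (1 + Kd/cA)^-1
    kobs = kon * cA + koff
    assoc = 1.0 - np.exp(-kobs * t)
    dissoc = (1.0 - np.exp(-kobs * tc)) * np.exp(-koff * (t - tc))
    return amax * np.where(t <= tc, assoc, dissoc)

# Superposition (Eq. 1) over a hypothetical heterogeneous surface: a main
# site class plus a small population of nearly irreversible extraneous sites
t = np.linspace(0.0, 6000.0, 601)
trace = (0.8 * binding_trace(t, cA=1e-7, kon=1e5, koff=1e-3, tc=3000.0)
         + 0.2 * binding_trace(t, cA=1e-7, kon=1e4, koff=1e-6, tc=3000.0))
```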
Fig. 4 Contour plot of P(δ)(koff, Kd) for the data in Fig. 3. The red dashed line encircles the location of the peak in the monodisperse prior (see [13])

III. CONCLUSIONS

Although SV and SPR are very different techniques, their data analysis poses problems of very similar structure. As in previous applications of Fredholm integral equations to data from other biophysical techniques [1,3,4], the distribution analysis gives generally excellent results. Data can usually be fit close to the noise level of the data acquisition. Significant bias is avoided that would arise from naively fitting single-species models of Eqs. (3) and (4) directly to the data. Instead, very detailed features of the distributions can be resolved in the distribution analysis. In practical applications, the systematics of the residuals of the fit must be closely watched, since imperfect fits may indicate deviations from a 'linear mode' of the experiment, which may arise, for example, in SPR from mass transport limitations [10,13], and in SV from the hydrodynamic drag mutually affecting the migration of particles at high concentration. Not surprisingly, the detailed analysis often also exposes experimental imperfections that result in systematic errors in the data acquisition itself. In our opinion, this makes it difficult to adjust both the regularization amplitude and the amplitude of the prior rigorously within a statistical framework. Good results can nevertheless be obtained by varying the choice of regularization parameters and studying the robustness of the features of interest. In particular, alternating the choice of prior is a powerful tool to probe the information content of the data. Biological macromolecules often offer a possibility for Bayesian regularization that exploits the ideal of their monodispersity. This can easily be implemented in a hierarchical approach.

ACKNOWLEDGMENT

This research was supported by the Intramural Research Program of the National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health.

REFERENCES
1. Provencher SW (1982) A constrained regularization method for inverting data represented by linear algebraic or integral equations. Comp Phys Comm 27:213-227
2. Provencher SW (1967) Numerical solution of linear integral equations of the first kind. Calculation of molecular weight distributions from sedimentation equilibrium data. J Chem Phys 46:3229-3236
3. Livesey AK, Licinio P, Delaye P (1986) Maximum entropy analysis of quasielastic light scattering from colloidal dispersions. J Chem Phys 84:5102-5107
4. Steinbach PJ, Chu K, Frauenfelder H, et al. (1992) Determination of rate distributions from kinetic experiments. Biophys J 61:235-245
5. Brown P, Schuck P (2007) A new adaptive grid-size algorithm for the simulation of sedimentation velocity profiles in analytical ultracentrifugation. Comp Phys Comm 178:105-120
6. Schuck P (2000) Size distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and Lamm equation modeling. Biophys J 78:1606-1619
7. Narayan R, Nityananda R (1986) Maximum entropy image restoration in astronomy. Ann Rev Astron Astrophys 24:127-170
8. Engl HW, Hanke M, Neubauer A (2000) Regularization of Inverse Problems. Kluwer, Dordrecht
9. Hansen PC (1992) Numerical tools for analysis and solution of Fredholm integral equations of the first kind. Inverse Probl 8:849-872
10. Svitel J, Balbo A, Mariuzza N, et al. (2003) Combined affinity and rate constant distributions of ligand populations from experimental surface-binding kinetics and equilibria. Biophys J 84:4062-4077
11. Schuck P (2009) On computational approaches for size-and-shape distributions from sedimentation velocity analytical ultracentrifugation. Eur Biophys J, in press, DOI 10.1007/s00249-009-0545-7
12. Brown P, Balbo A, Schuck P (2007) Using prior knowledge in the determination of macromolecular size-distributions by analytical ultracentrifugation. Biomacromolecules 8:2011-2024
13. Gorshkova II, Svitel J, Razjouyan F et al. (2008) Bayesian analysis of heterogeneity in the distribution of binding properties of immobilized surface sites. Langmuir 24:11577-11586
14. Brown P, Schuck P (2006) Macromolecular size-and-shape distributions by sedimentation velocity analytical ultracentrifugation. Biophys J 90:4651-4661
15. Schuck P, Perugini MA, Gonzales NR et al. (2002) Size-distribution analysis of proteins by analytical ultracentrifugation: strategies and applications to model systems. Biophys J 82:1096-1111
16. Balbo A, Minor KH, Velikovsky CA et al. (2005) Studying multi-protein complexes by multi-signal sedimentation velocity analytical ultracentrifugation. Proc Natl Acad Sci USA 102:81-86
17. Schuck P (1997) Use of surface plasmon resonance to probe equilibrium and dynamic aspects of interactions between biological macromolecules. Ann Rev Biophys Biomol Struct 26:541-566

Author: Peter Schuck
Institute: National Institutes of Health
Street: 13 South Drive, Bldg 13, Rm 3N17
City: Bethesda, Maryland
Country: U.S.A.
Email: [email protected]
High-Resolution Autofluorescence Imaging for Mapping Molecular Processes within the Human Retina

Martin Ehler1,2, Zigurts Majumdar1, Emily King1,2, Julia Dobrosotskaya2, Emily Chew3, Wai Wong3, Denise Cunningham3, Wojciech Czaja2, and Robert F. Bonner1

1 National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health and Human Development, PPB/LIMB/SMB, Bethesda, MD, 20892, USA
2 University of Maryland, Mathematics Department, College Park, MD, 20742, USA
3 National Institutes of Health, National Eye Institute, Bethesda, MD, 20892, USA
Abstract— Age-related macular degeneration (AMD) is a common eye disease that often leads to vision loss. High levels of accumulated fluorescent photoproducts appear to induce the local retinal pigment epithelium (RPE) dysfunction associated with AMD. Low macular pigment level has also been identified as a risk factor for AMD. We developed multispectral noninvasive fluorescence imaging of the retina by modifying a standard fundus camera with selective filter sets. By exciting the fluorescent photoproducts at two or more wavelengths, we were able to quantify macular pigment. Our image analysis identified artifacts of the camera optics, and we propose a noninvasive modality to obtain a map of the retinal microvascularization.

Keywords— autofluorescence imaging, retina, macular pigment, multi-spectral, age-related macular degeneration.
I. INTRODUCTION

Age-related macular degeneration (AMD) is the leading cause of vision loss in elderly patients in industrialized nations. The early progression of AMD is accompanied by pathological changes in the retinal pigment epithelium (RPE), evident clinically as hypopigmentary and hyperpigmentary changes. In addition, accumulations of fluorescent cytotoxic photoproducts within RPE cells have also been found in association with AMD [2]. High levels of these photochemicals appear to induce local RPE dysfunction that has been clinically characterized by increased backscattering of light from sub-RPE deposits and pigmentary changes within the RPE [5,6,7]. Blue-light absorbing macular pigments (lutein and zeaxanthin) concentrate within the photoreceptor nerve fibers. Low macular pigment levels have been identified as risk factors for AMD, but better techniques are needed to measure retinal pigment levels following oral supplementation. The retina, a multi-layer neural tissue, is uniquely suited for noninvasive optical imaging with high resolution. Multispectral noninvasive fluorescence imaging allows mapping of overlying macular pigments and monitoring of early changes within the RPE via the fluorescent photoproducts that accumulate within them. We developed noninvasive
multispectral fluorescence imaging of the human retina by adding selected interference filter sets to standard fundus cameras. We seek to optimize the quality of molecular maps generated by automated image analysis of different multispectral sets of autofluorescence and reflectance images with an iterative process of refinement of filters, imaging protocols, and analysis tools. By removing autofluorescence components in these images not arising locally from RPE fluorophores within each pixel, we can obtain concentration maps of these fluorophores and absorbing molecules (chromophores) in overlying retinal layers. These measurements in AMD patients are complicated by an age-related increase in lens autofluorescence and optical scattering changes within the RPE and its basement membrane. The fundus camera images exhibit diffuse backscatter of RPE and lens autofluorescence, which we must mathematically remove before quantitatively mapping retinal chromophores (lutein, zeaxanthin, and hemoglobin) and the underlying RPE fluorophores. Our multispectral image sets and algorithms improve precision in these noninvasive molecular analyses and should allow more precise measurements of changes during early AMD progression. We also introduce a new and potentially clinically useful method for the noninvasive reconstruction of the retinal microvascular system from autofluorescence images without injection of exogenous contrast agents (i.e., fluorescein).
II. METHODS

A. Autofluorescence Method for Macular Pigment Maps

Macular pigment measurements based on two-wavelength autofluorescence images were introduced by Delori et al. [4]. We recall this method in the following and extend the approach to multiple-wavelength image sets that can lead to more reliable measurements. Let $F_F(\Lambda,\lambda)$ and $F_P(\Lambda,\lambda)$ be the autofluorescence measured at the fovea and the perifovea, respectively, where $\Lambda$ is the excitation and $\lambda$ the emission wavelength. While
$F_F(\Lambda,\lambda)$ depends on the particular location within the fovea, the term $F_P(\Lambda,\lambda)$ is often replaced by a circular average at 6 degrees [4]. We denote the optical density of macular pigment at the foveal and perifoveal region by $D_F$ and $D_P$, respectively. Let $\Phi_F(\Lambda,\lambda)$ and $\Phi_P(\Lambda,\lambda)$ be the fluorescence efficiencies in the absence of macular pigment and $I(\Lambda)$ the radiant power of the excitation light at wavelength $\Lambda$. The foveal and perifoveal autofluorescence are given by

$F_F(\Lambda,\lambda) = I(\Lambda) \cdot \Phi_F(\Lambda,\lambda) \cdot 10^{-(D_F(\Lambda)+D_F(\lambda))}$,
$F_P(\Lambda,\lambda) = I(\Lambda) \cdot \Phi_P(\Lambda,\lambda) \cdot 10^{-(D_P(\Lambda)+D_P(\lambda))}$.

We now seek the optical density of macular pigment at 460nm, i.e.,

$D_{AF}(460) = D_F(460) - D_P(460)$,

the optical density difference between the fovea and perifovea at the peak absorption of macular pigment at 460nm. Using the relative extinction coefficient $K_{MP}(\Lambda)$ that satisfies $K_{MP}(\Lambda) \cdot D_{AF}(460) = D_{AF}(\Lambda)$, we obtain

$\log\!\left(\frac{F_P(\Lambda,\lambda)}{F_F(\Lambda,\lambda)}\right) = \log\!\left(\frac{\Phi_P(\Lambda,\lambda)}{\Phi_F(\Lambda,\lambda)}\right) + D_{AF}(460) \cdot \left[K_{MP}(\Lambda) + K_{MP}(\lambda)\right]$.

We postulate that the quotient $\Phi_P(\Lambda,\lambda)/\Phi_F(\Lambda,\lambda)$ does not depend on the excitation wavelength $\Lambda$. To extend the method in [4] from two- to multiple-wavelength image sets, choose excitation wavelengths $\Lambda_1,\dots,\Lambda_n$ and weights $w_1,\dots,w_n$ such that $\sum_{k=1}^{n} w_k = 0$. Given the above equation for each wavelength $\Lambda_k$, multiplying by $w_k$ and adding them up yields

$\sum_{k=1}^{n} w_k \cdot \log\!\left(\frac{F_P(\Lambda_k,\lambda)}{F_F(\Lambda_k,\lambda)}\right) = D_{AF}(460) \cdot \sum_{k=1}^{n} w_k K_{MP}(\Lambda_k)$.

The formula for the macular pigment then is

$D_{AF}(460) = \frac{1}{\sum_{k=1}^{n} w_k K_{MP}(\Lambda_k)} \cdot \log\!\left(\prod_{k=1}^{n} \frac{F_P(\Lambda_k,\lambda)^{w_k}}{F_F(\Lambda_k,\lambda)^{w_k}}\right)$.

If we choose only two wavelengths $\Lambda_1 = 480$ and $\Lambda_2 = 520$ with weights $w_1 = 1$ and $w_2 = -1$, then this formula becomes

$D_{AF}(460) = \frac{1}{K_{MP}(480) - K_{MP}(520)} \cdot \log\!\left(\frac{F_P(480,\lambda)}{F_P(520,\lambda)} \cdot \frac{F_F(520,\lambda)}{F_F(480,\lambda)}\right)$

as proposed in [4].
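A direct transcription of the multiple-wavelength formula is sketched below; how the foveal and perifoveal values are registered and averaged, and the numerical K_MP values, are assumptions left to the user.

```python
import numpy as np

def macular_pigment_density(FP, FF, K_MP, weights):
    """Multi-wavelength macular pigment optical density D_AF(460) from the
    formula above. FF: per-wavelength foveal autofluorescence images (or
    values), FP: matching perifoveal averages, K_MP: relative extinction
    coefficients K_MP(Lambda_k), weights: w_k that must sum to zero.
    Base-10 logs are used, matching the 10^(-D) attenuation model."""
    FP = np.asarray(FP, dtype=float)
    FF = np.asarray(FF, dtype=float)
    w = np.asarray(weights, dtype=float)
    assert abs(w.sum()) < 1e-9, "weights must sum to zero"
    num = np.tensordot(w, np.log10(FP / FF), axes=1)  # sum_k w_k log(FP_k/FF_k)
    return num / np.dot(w, np.asarray(K_MP, dtype=float))

# Two-wavelength special case (480 nm, 520 nm) as in [4]:
# D_AF = macular_pigment_density([FP480, FP520], [FF480, FF520],
#                                K_MP=[K480, K520], weights=[1.0, -1.0])
```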
B. Principal Component Analysis

Principal component analysis (PCA) is a statistical tool that linearly transforms data into an orthogonal coordinate system whose axes correspond to the principal components in the data, i.e., the first principal component accounts for as much variance in the data as possible and, successively, further components capture the remaining variance. Through an eigenanalysis, the principal components are determined as eigenvectors of the dataset's covariance matrix, and the corresponding eigenvalues refer to the variance that is captured within each eigenvector. We put d images into a stack, which provides a datacube built from d-dimensional pixel vectors. After subtracting the mean of the datacube, PCA was performed on the collection of pixel vectors $\{x_1,\dots,x_n\}$ of the zero-mean datacube. We first diagonalize the covariance matrix

$\mathrm{Cov}(X) = E(X \cdot X^T)$,

where $X = (x_1 \cdots x_n)$ is the zero-mean data matrix. The d eigenvectors $p_1,\dots,p_d$ (the principal components, ordered according to the magnitude of their eigenvalues) provide the transformed data

$Y = W^T \cdot X$,

where $W = (p_1 \cdots p_d)$. Rearranging the vectors in Y into matrices yields again a d-layer datacube of 2D images. Its first layer represents the abundance of the primary component at each pixel position. The datacube's second layer is each datapoint's projection along the second eigenvector.

III. RESULTS
A. Confocal Scanning Laser Ophthalmoscope versus Standard Fundus Camera

The macular pigment measurement based on the spectral absorbance proposed in Section IIA requires autofluorescence images obtained at two or more different excitation wavelengths. Dual-wavelength confocal scanning laser ophthalmoscopes (cSLO) allow imaging of autofluorescence excited at 488nm and 516nm with low background, which can directly be used as input for the formulas in Section IIA [4]. The two-wavelength cSLO, however, is a special instrument, and only a few are in use worldwide. Acquiring multispectral autofluorescence images from standard fundus cameras with inserted interference filters (in specific excitation/emission pairs) could enable many eye centers to routinely perform molecular mapping of macular pigment and other retinal chromophores and RPE fluorophores. By using two- or multi-wavelength fundus autofluorescence images instead of cSLO measurements, the framework presented in Section IIA can
provide spatial macular pigment maps and, by extension, maps of other significant chromophores within the retina. Using a standard fundus camera introduces additional complexity due to contributions within a given pixel from nonlocal fluorescence sources. The cSLO camera is a confocal system that illuminates only a small spot of the retina at a time and scans the retina line by line. Such a flying spot minimizes light scattering and reduces the background signals. The flash of the standard fundus camera, in contrast, illuminates the whole retina, causing a simultaneous background signal of scattered fluorescence originating from sources outside each particular image voxel. Blood vessels, for instance, appear very dark in cSLO images, but contribute a significant autofluorescence signal in the standard fundus camera due to their forward-scatter of fluorescence emitted from other regions of the RPE and backscatter of lens autofluorescence; see Figure 1. To apply the macular pigment formula of the previous section, the fundus camera images must be corrected for these contributions.
Fig. 1 (left) cSLO average over 10 single images; (right) standard fundus image, with less contrast; lesions are more visible, but blood vessels appear brighter and washed out

The initial approach is to subtract from the fundus camera images a constant background proportional to the pixel intensities along major blood vessels. A constant background is certainly idealized and often proves insufficient to correct images from elderly patients with highly fluorescent lenses or with local, highly scattering retinal pathology, leading to artifacts in the macular pigment maps, as shown in Figure 2.

Fig. 2 (left) color fundus image; drusen appear as yellowish spots. (right) macular pigment map based on the two-wavelength formula in Section IIA. Blood vessels and the optic disc are artifacts due to the background in standard fundus images

Through a PCA analysis of a stack of standard yellow fundus images, we identify a further artifact. Figure 3 shows the first and second principal components; the second one clearly shows a pattern that is introduced by the optics of the standard fundus camera. By varying the fixation point between successive replicate images and applying PCA, we have reduced the contributions of illumination nonuniformities in the subsequent molecular mapping algorithms.

Fig. 3 (left) first principal component of a collection of 3 standard fundus images (excitation 520-600nm, emission > 660nm); (right) second principal component
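A compact sketch of the datacube PCA of Section IIB, as applied to such an image stack, is given below; image registration and curation are assumed to have been done beforehand.

```python
import numpy as np

def pca_datacube(images):
    """PCA of a stack of d registered images (Section IIB): each pixel is a
    d-dimensional vector; the mean is subtracted, the d x d covariance is
    diagonalized, and the pixel vectors are projected onto the eigenvectors,
    ordered by decreasing eigenvalue."""
    d = len(images)
    h, w = images[0].shape
    X = np.stack([np.ravel(im).astype(float) for im in images])  # d x n
    X -= X.mean(axis=1, keepdims=True)          # zero-mean rows
    C = (X @ X.T) / X.shape[1]                  # covariance Cov(X)
    evals, W = np.linalg.eigh(C)                # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    W = W[:, order]                             # W = (p_1 ... p_d)
    Y = W.T @ X                                 # transformed data Y = W^T X
    return Y.reshape(d, h, w), evals[order]     # d-layer datacube + variances
```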
B. Macular Pigment

Different techniques have been developed to quantitatively measure macular pigment in the human retina [3,4,8]. National Eye Institute study patients have been measured by heterochromatic flicker photometry, but significant inter- and cross-method variations create a need for more consistent and reliable quantification [1]. The fluorescence method appears consistent but requires a highly specialized optical system. We obtain spatial macular pigment maps from the fluorescence method while replacing the two-wavelength cSLO with a modified standard fundus camera; see Figure 4.
Fig. 4 (above) autofluorescence images obtained using two excitation bands (460-500nm, 520-600nm) with emission collected above 660nm at a standard fundus camera; (below) spatial macular pigment map and radial profile centered around the fovea
C. Retinal Microvascular System
Abnormalities of the retinal microcirculation are clinically very important to image in a variety of diseases, such as diabetes and sickle cell disease. In the earliest stages of the associated retinal pathology, local microvascular dropout appears to precede the proliferation of new retinal vessels that are fragile and subject to leakage, leading to the edema and hemorrhage that frequently cause visual loss. Currently, the retinal microvascular system is imaged by injecting fluorescein into an arm vein and taking a time series of fluorescence images as the fluorescein washes in and out of the retinal vessels and the underlying choroidal microvasculature. The ability to map the finest retinal microcirculation noninvasively (without injection of fluorescein) might be particularly useful for following early microvasculature changes in retinal diseases. We found that the local attenuation of cSLO autofluorescence by hemoglobin in retinal microvessels can provide a simple noninvasive modality to obtain a map of the retinal microvascularization. The local presence of red blood cells within a given retinal capillary is a stochastic process in each cSLO image, in which one pixel at a time is acquired (in < 1 microsec). Thus, to map the retinal capillaries, we require accurate registration and averaging over an image set curated to remove images distorted by spontaneous eye motions. In Figure 5, we show such an averaged cSLO autofluorescence image, in which the image was inverted and a standard edge detection algorithm applied to identify the microvascular network in a normal eye. The noninvasive character of this simple method allows for broad screening studies to detect and follow early local changes in the retinal microvasculature in preclinical disease, in conditions where invasive fluorescein angiography would not be appropriate or would be contraindicated.

Fig. 5 (left) inverted cSLO average. (right) edge detection provides a map of the retinal microvascular system
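The mapping step just described can be prototyped in a few lines; the Sobel gradient magnitude stands in for the unspecified "standard edge detection algorithm", and frame registration and curation are assumed done.

```python
import numpy as np
from scipy import ndimage

def microvascular_map(registered_frames):
    """Sketch of Section III.C: average a curated, registered series of cSLO
    autofluorescence frames (suppressing the stochastic red-cell signal),
    invert so dark vessels become bright, and run an edge detector."""
    avg = np.mean(registered_frames, axis=0)
    inv = avg.max() - avg                      # invert: vessels -> bright
    gx = ndimage.sobel(inv, axis=0)
    gy = ndimage.sobel(inv, axis=1)
    edges = np.hypot(gx, gy)                   # gradient magnitude
    return edges / edges.max()
```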
IV. CONCLUSIONS
We have developed algorithms to analyze noninvasive multispectral retinal autofluorescence image sets. We have shown that this approach allows us to map with high resolution the distribution of different strongly absorbing species within the retina (specifically lutein, zeaxanthin, and hemoglobin) using widely available clinical imaging devices. We have identified and characterized a number of image artifacts that are driving further interactive refinements of our multispectral imaging protocol and our analysis algorithms. We hope to extend our clinical noninvasive mapping to include rhodopsin, oxy- and deoxy-Hb, lutein, zeaxanthin, and the different principal fluorophores within the A2E pathway that may be driving early age-related RPE pathology.
ACKNOWLEDGMENT

The research was funded by the Intramural Research Program of NICHD/NIH, by NSF (CBET0854233), by NGA (HM15820810009), and by ONR (N000140910144). The authors gratefully acknowledge Prof. John J. Benedetto.
REFERENCES

1. Beatty S, van Kuijk FJ, Chakravarthy U (2008) Macular pigment and age-related macular degeneration: longitudinal data and better techniques of measurement are needed. Invest Ophthalmol Vis Sci 49(3):843-845
2. Bird AC, Bressler NM, Bressler SB, Chisholm IH, Coscas G, Davis MD, de Jong PT, Klaver CC, Klein BE, Klein R (1995) An international classification and grading system for age-related maculopathy and age-related macular degeneration. The International ARM Epidemiological Study Group. Surv Ophthalmol 39(5):367-374
3. Delori FC (2004) Autofluorescence method to measure macular pigment optical densities fluorometry and autofluorescence imaging. Arch Biochem Biophys 430(2):156-162
4. Delori FC et al. (2001) Macular pigment density measured by autofluorescence spectrometry: comparison with reflectometry and heterochromatic flicker photometry. J Opt Soc Am A Opt Image Sci Vis 18(6):1212-1230
5. Framme C, Brinkmann R, Birngruber R, Roider J (2002) Autofluorescence imaging after selective RPE laser treatment in macular diseases and clinical outcome: a pilot study. Br J Ophthalmol 86(10):1099-1106
6. Holz FG, Bindewald-Wittich A, Fleckenstein M, Dreyhaupt J, Scholl HPN, Schmitz-Valckenberg S (FAM-Study Group) (2007) Progression of geographic atrophy and impact of fundus autofluorescence patterns in age-related macular degeneration. Am J Ophthalmol 143(3):463-472
7. Meyers SM, Ostrovsky MA, Bonner RF (2004) A model of spectral filtering to reduce photochemical damage in age-related macular degeneration. Trans Am Ophthalmol Soc 102:83-93
8. Trieschmann M, et al. (2003) Macular pigment: quantitative analysis on autofluorescence images. Graefes Arch Clin Exp Ophthalmol 241(12):1006-1012
Local Histograms for Classifying H&E Stained Tissues

M.L. Massar1, R. Bhagavatula2, M. Fickus1, and J. Kovačević2,3

1 Department of Mathematics and Statistics, Air Force Institute of Technology, Wright Patterson Air Force Base, USA
2 Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA
3 Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
Abstract— We introduce a rigorous mathematical theory for the analysis of local histograms, and consider the appropriateness of their use in the automated classification of textures commonly encountered in images of H&E stained tissues. We first discuss some of the many image features that pathologists indicate they use when classifying tissues, focusing on simple, locally-defined features that essentially involve pixel counting: the number of cells in a region of given size, the size of the nuclei within these cells, and the distribution of color within both. We then introduce a probabilistic, occlusion-based model for textures that exhibit these features, in particular demonstrating how certain tissue-similar textures can be built up from simpler ones. After considering the basic notions and properties of local histogram transforms, we then formally demonstrate that such transforms are natural tools for analyzing the textures produced by our model. In particular, we discuss how local histogram transforms can be used to produce numerical features that, when fed into mainstream classification schemes, mimic the baser aspects of a pathologist's thought process.

Keywords— histology, local histogram, occlusion.
I. INTRODUCTION

In this paper, we consider some mathematical theory that arose during the development of an automatic classification scheme for histology, specifically an algorithm that classifies the type and positioning of tissues found in digital microscopy images of hematoxylin and eosin (H&E) stained tissue sections. Here, we focus on the motivation behind the new mathematics itself; more detail on the particular application and classification scheme is given in [1]. The motivating application arose in studies of embryonic stem (ES) cells undertaken by Dr. John A. Ozolek of the Children's Hospital of Pittsburgh and Dr. Carlos Castro of the University of Pittsburgh. Understanding how ES cells differentiate into tissues will yield better insight into early biological development, and could advance research into tissue regeneration and repair, the treatment of genetic and developmental syndromes, and drug testing and discovery [1]. The work here arose from Ozolek and Castro's study of teratomas produced by injecting primate cells
into immunocompromised mice; a teratoma is a tumor which is known to contain tissues derived from each of the three primary germ layers of ectoderm, mesoderm and endoderm. Upon removal from the mice, the teratomas are sectioned, H&E stained, and digitally imaged using a microscope. An example of such an image is given in Figure 1.a; here, the purple-pink coloring is characteristic of H&E stain. In normal tissues, different tissue types are arranged in predictable ways. However, in teratomas, the tissues arrange themselves in seemingly chaotic fashions. Nevertheless, using their years of histology experience, Ozolek and Castro are able to look at these images and quickly discern which tissues are present, as well as their locations. In particular, for the image given in Figure 1.a, they have indicated the presence of several tissue types: cartilage, as typified by Figure 1.b, and concentrated in the lower left corner of the overall image; connective tissue, seen in detail in Figure 1.c, and forming a wide oval overall; and bone, detailed in Figure 1.d, and forming much of the center. Ozolek and Castro have large numbers of such images – many sections of many teratomas – and hope to gain new biological insight by determining the degree to which they contain certain tissues, as well as the spatial relationships between tissues. However, in order to gain this insight, they first need to have the tissues in these images classified according to type. When accomplished by hand, this task, though straightforward, is time-consuming, error-prone and laborious. When analyzing many images, the cost of this manual labor becomes prohibitively high, both in terms of time and money. As such, what is needed is an image processing system which can perform this analysis with minimal user input. In the following section, we discuss some basic concepts from the theory of image classification that we have borne in mind while designing such a system. These considerations lead to our use of local histogram transforms, whose basic properties are discussed in Section III. We further discuss an occlusion-based mathematical model for the histological images in question, using it to provide a rigorous analysis of the potential use of local histograms as image classification features.
Fig. 1 Histological images and the two-dimensional histograms of their red and blue channels: (a) histology image; (b) cartilage; (c) connective tissue; (d) bone; (e) two-dimensional, red-blue (RB) histogram of the red (x-axis) and blue (y-axis) channels of (a); (f) two-dimensional histogram of (b), whose peak location indicates the dominance of dark blue-purple; (g) two-dimensional histogram of (c), where the dominant color is brighter than that of (b) and more balanced in red/blue; (h) two-dimensional histogram of (d), where the intensity is more similar to that of (b), but the balance of color is more similar to that of (c)
II. CLASSIFICATION

Most classification systems have two main components: a feature extractor and a decision rule. The feature extractor is a collection of transforms which, when applied to a given image, produce a feature vector which is intended to represent the essential properties of that image. The second component of a classification system is a decision rule, namely a function that assigns a label to a given feature vector. For example, for the H&E stained histology images depicted in Figure 1, Ozolek and Castro have indicated that when performing manual tissue classification, they believe their minds are making use of image features such as the color, shape, size and texture of the tissue structures. Based upon these qualities, and their experience and training, they are able to assign a label, such as "cartilage," "connective tissue," or "bone" to a given portion of a histological image. Our goal is to automate this process. In automated classification schemes, image features are produced using mathematical formulae. For example, we have investigated an automated classification system [2] that computes Haralick texture features of discrete wavelet transforms of the images in question. Once computed, this feature vector is then fed into a decision rule, typically a neural network or a support vector machine, to produce a label. In this paper, we will not comment further about
decision rules, and will rather focus entirely on our choice of histogram-based image features. There are two reasons why we make use of histograms, as opposed to other features. One reason is that histograms are easy to understand intuitively, and this intuition has been the key for us to conjecture and prove rigorous results concerning them. The second, more significant, reason is that histograms are directly related to the image features that Ozolek and Castro have indicated they themselves use when classifying histological images. For example, again consider the histological image given in Figure 1.a; a 2-D histogram of its pixel values is given below it in Figure 1.e. Here, the 2-D histogram is obtained by counting how many pixels have a given value of red (x-axis) and blue (y-axis); we have discarded the green channel of the RGB image, as it contains little distinguishing information in the purple-pink class of H&E stained images. As this histogram is taken over the entire image, it combines information from all tissues. Meanwhile, tissue-specific histograms are given in Figures 1.f, 1.g and 1.h. For example, in Figure 1.f, the low-valued, off-diagonal blob corresponds to the dominant, more-blue-than-red purple color of cartilage, while the long tail of the distribution is indicative of the white cell interiors. Meanwhile, the light pink of connective tissue and deeper red of bone can be discerned in the histograms of Figures 1.g and 1.h, respectively. We note that while these histograms themselves may be regarded as image
features, we have only been using the locations and heights of their dominant peaks. More importantly, we see that little information can be gleaned from looking at the histogram of the entire image; our goal is to determine which tissues are present at any given location, and as such, our histograms must have some location dependence. It is therefore natural to consider local histograms, that is, histograms that are only computed over a small neighborhood of any given pixel.

Fig. 2 Synthesizing cartilage-like textures: (a) synthesized cartilage background; (b) synthesized cartilage foreground; (c) synthesized indicator function; (d) synthesized cartilage texture; (e) RB histogram of (a); (f) RB histogram of (b); (g) RB histogram of (d). A {0,1}-valued function (c) indicates where to occlude a background texture (a) by a foreground texture (b). The histogram of the synthesized image is a combination of the histograms of the background and foreground; the relative heights of the peaks of (g) can be used to infer how much background (cell exterior) and foreground (cell interior) is characteristic of a given tissue

III. LOCAL HISTOGRAMS AND OCCLUSIONS

Local histograms are a well-studied signal processing tool [3-8]. Here, we define local histograms for images f which are regarded as functions from one finite abelian group G of pixel locations into another finite abelian group P of pixel values. For example, for the 1200 by 1200 image given in Figure 1.a, we have G = Z1200 x Z1200, where ZN denotes the group of integers from 0 to N-1, in which arithmetic is performed modulo N. Here, the pixel values themselves lie in P = Z256 x Z256; we are considering only the 8-bit red and blue channels of the original RGB image. The local histogram of such an image f is defined in terms of a window, that is, a nonnegative function w over G that sums to one. To be precise, the local histogram of f with respect to w is:
$(\mathrm{LH}_w f)(g,p) = \sum_{h \in G} w(h)\, \delta_{f(g+h)}(p)$
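As the next paragraph notes, for each fixed pixel value p this amounts to filtering the indicator image of {f = p} with the window w. A sketch of that computation follows, on a coarsely quantized single channel to keep the array sizes manageable; the bin count and window are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def local_histogram(f, w, n_bins):
    """Local histogram LH_w f of a quantized single-channel image f (integer
    values in 0..n_bins-1): for each pixel value p, convolve the indicator
    image of {f = p} with the window w. Periodic ('wrap') boundaries match
    the cyclic group structure G = Z_N x Z_N. Returns (n_bins, H, W)."""
    out = np.empty((n_bins,) + f.shape)
    for p in range(n_bins):
        indicator = (f == p).astype(float)
        out[p] = ndimage.convolve(indicator, w, mode='wrap')
    return out

# e.g. a flat 9x9 window summing to one, on an 8-level quantized red channel:
# w = np.ones((9, 9)) / 81.0
# LH = local_histogram(red_channel // 32, w, n_bins=8)
```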
For any fixed pixel value p in P, the corresponding portion of the local histogram may be computed by filtering the function which indicates where f assumes this particular pixel value with the window w. Even with this realization, the computation and storage of a local histogram requires nontrivial resources: for our running example, the local histogram is a four-dimensional array of size 1200 x 1200 x 256 x 256. In order to determine the appropriateness of using local histograms as feature transforms for histological images, we consider an occlusion-based model for synthesizing test images. Similar models have previously been considered in [9-13]. To be clear, given an indicator function I, which assigns a label from 0 to K-1 to each pixel location in G, we define the corresponding occlusion of a collection of images {f0,…,fK-1} to be the composite image:
$(\mathrm{occ}_I \{f_k\}_{k=0}^{K-1})(g) := f_{I(g)}(g)$
An example of an image generated in this fashion is given in Figure 2.d. Here, the number of images is K=2, with f0 given in Figure 2.a, with f1 given in Figure 2.b, and the indicator function I given in Figure 2.c. Though by no means photorealistic, this synthesized image nevertheless possesses much of the basic color and shape information of cartilage. The reason we use such a simple image model is that it permits a rigorous analysis of the properties of local histograms. Indeed, examining the (global) histograms of the background, foreground and composite images of Figure 2, we note that
the histogram of the composite is nearly a convex combination of the histograms of the background and foreground.

Fig. 3 Building complicated indicator functions from simple ones: (a) and (b) give two {0,1}-valued indicator functions; (c) gives a {0,1,2}-valued indicator function obtained by laying (a) over (b); (d) produces a distinct {0,1,2}-valued indicator function obtained by laying (b) over (a)

Indeed, it is possible to show that such a result will always occur, even for local histograms, provided one does not focus on a single means of occlusion, but rather computes an expectation over every possible occlusion. To be precise, consider a probability density function P defined over the class of all {0,...,K-1}-valued indicator functions I over G. We say that P is fair if for all k = 0,...,K-1, there exists some real scalar λk such that:
k
( I ( g )) = λk .
I
When P is fair, we can prove that, on average, the local histogram of a composite image is indeed a convex combination of the local histograms of each image: Theorem. If P is fair, then K −1
E I [LH w (occ I { f k }kK=−01 )] = ∑ λk (LH w f k ). k =0
This begs the question of whether or not fairness is a realistic assumption. Our current research is focused on answering this question. In particular, we are studying methods of producing more complicated occlusion indicator functions from simpler examples, as given in Figure 3. In particular, new indicator functions may be produced by overlaying other known examples of them. More significantly, we can extend this notion of overlay to probability density functions over the set of all indicator functions, and can use this idea to build more complicated fair probabilities from simpler fair ones.
ACKNOWLEDGMENT

Massar and Fickus were supported by AFOSR F1ATA09125G003. Bhagavatula and Kovačević were
supported by NIH through award NIH-R03-EB009875 and the PA State Tobacco Settlement, Kamlet-Smith Bioinformatics Grant. The authors would like to thank Dr. John A. Ozolek of the Children’s Hospital of Pittsburgh and Dr. Carlos Castro of the University of Pittsburgh. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
REFERENCES 1. Bhagavatula R., Fickus M., Ozolek J. A., Castro C. A., Kovačević J. (2010) Automatic identification and delineation of germ layer components in H&E stained images of teratomas derived from human and nonhuman primate embryonic stem cells. To appear in Proc. IEEE Int. Symp. Biomed. Imag. 2. Chebira A., Ozolek J. A., Castro C. A., Jenkinson W. G., Gore M., Bhagavatula R., Khaimovich I., Ormon S. E., Navara C. S., Sukhwani M., Orwig K. E., Ben-Yehudah A., Schatten G., Rhode G.K., Kovačević J. (2008) Multiresolution identification of germ layer components in teratomas derived from human and nonhuman primate embryonic stem cells. Proc. IEEE Int. Symp. Biomed. Imag. 979–982. 3. Koenderink J. J., van Doorn A. J. (1999) The structure of locally orderless images. Int. J. Comput. Vis. 31:159–168. 4. van Ginneken B., ter Haar Romeny B. M. (2000) Applications of locally orderless images. J. Vis. Commun. Image Represent. 11:196– 208. 5. Koenderink J. J., van Doorn A. J. (2000) Blur and disorder. J. Vis. Commun. Image Represent. 11:237–244. 6. van de Weijer J., van den Boomgaard R. (2001) Local mode filtering, Proc. IEEE Comput. Soc. Conf. Comput. Vis. & Pattern Recognit. 2:428–433. 7. Hadjidemetriou E., Grossberg M. D., Nayar S. K. (2004) Multiresolution histograms and their use for recognition. IEEE Trans. Pattern Anal. & Mach. Intell. 26:831–847. 8. Dalal N., Triggs B. (2005) Histograms of oriented gradients for human detection, Proc. IEEE Comput. Soc. Conf. Comput. Vis. & Pattern Recognit. 1:886–893. 9. Lee A. B., Mumford D. (1999) An occlusion model generating scaleinvariant images, Proc. IEEE Workshop Stat. & Comput. Theor. Vis. 10. Lee A. B., Mumford D., Huang J. (2001) Occlusion models for natural images: A statistical study of a scale-invariant dead leaves model. Int. J. Comput. Vis. 41: 35–59. 11. Mumford D., Gidas B. (2001) Stochastic models for generic images. Quart. Appl. Math. 59:85–111.
12. Ying Z., Castanon D. (2002) Partially occluded object recognition using statistical models. Int. J. Comput. Vis. 49:57–78. 13. Bordenave C., Gousseau Y., Roueff F. (2006) The dead leaves model: A general tessellation modeling occlusion. Adv. Appl. Probab. 38:31– 46. Author: Matthew Fickus Institute: Air Force Institute of Technology Street: 2950 Hobson Way City: WPAFB Country: USA Email:
[email protected]
Detecting and Classifying Cancers from Image Data Using Optimal Transportation

G.K. Rohde1, W. Wang1, D. Slepcev2, A.B. Lee3, C. Chen1, and J.A. Ozolek4

1 Center for Bioimage Informatics, Biomedical Engineering Department, Carnegie Mellon University, Pittsburgh, PA, 15213 USA
2 Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA, 15213 USA
3 Department of Statistics, Carnegie Mellon University, Pittsburgh, PA, 15213 USA
4 Department of Pathology, Children's Hospital of Pittsburgh, Pittsburgh, PA, 15201 USA
Abstract— We describe a new approach to digital pathology that relies on measuring the optimal transportation (Kantorovich-Wasserstein) metric between pairs of nuclei obtained from histopathology images. We compare the approach to the standard feature space approach and show that our method performs at least as well if not better in automatically detecting and classifying different cancers of the liver and thyroid. 100% classification accuracy is obtained in 15 human test cases. In addition, we describe methods for using the geometric space framework to visualize and understand the differences in the data distribution that allow one to classify the data with high accuracy.

Keywords— Optimal transportation, nuclear structure, chromatin, pathology, classification.
I. INTRODUCTION

A. Motivation

Basic research in cancer treatment has focused on uncovering molecular signatures of tumors and designing new therapies that target specific growth and signaling pathways [1], [2]. Before therapy, however, an accurate diagnosis must be made. Surgical pathologists have used visual interpretation of nuclear structure to distinguish cancer from normal tissue for many years [3]. Aberrations in the genetic code and the transcription of different messenger RNAs lie at the heart of the transformation from normal to pre-malignant and malignant lesions [4]. These changes occur in the nucleus and are accompanied by the unfolding and repackaging of chromatin that, in part or in whole, produces changes in nuclear morphology (size, shape, membrane contours, the emergence of a nucleolus, chromatin arrangement, etc.). Nuclei can be big, small, round, elongated, bent, etc. Cells can have their chromatin distributed uniformly inside the nucleus, along its borders, concentrated into small regions (dots), anisotropically distributed, or in any combination of the above. We propose a new approach to describe the distribution of nuclear structure in different tissue classes (cancers). In contrast to most previous works, in which each nucleus image is reduced to a set of numerical features, we utilize a
geometric approach to quantify the similarity of groups of nuclei. Beyond automated classification, our approach seeks to also provide easy-to-visualize information that characterizes and differentiates normal versus cancerous populations of cells. In this work we focus particularly on two diagnostic challenges: one in the liver and one in the thyroid. However, we believe our approach could be used whenever large quantities of nuclei can be reliably segmented.

B. Previous Works on Automated Digital Pathology

Computational approaches have emerged as very powerful tools for reproducible and automated cancer diagnosis based on histopathology digital images. For decades, numerous papers have been published using computational methods to separate diagnostic entities, and some commercial software packages have been developed to screen for cancer cells with varying degrees of success [5]. The overwhelming majority of computational approaches follow a standard feature-based procedure where an image is represented by a set of numerical features (see [6], [7], [8] for reviews). These methods can be described as a "pipeline" consisting of: image preprocessing (normalization, segmentation), feature extraction, and classification of the state of the tissue (e.g., normal or diseased) (see [5], [8], [9] for a few examples). These methods have been applied to the diagnosis of several types of cancers. While successful in some cases (see our earlier work [10], where we applied such an approach to some of the same data used in the results shown below), feature-based methods have some important limitations. First, although classification can be accomplished in some cases, it is difficult to learn useful and biologically relevant information about the cells or tissues. This is due to the fact that when classifiers are used in multidimensional feature spaces, they rely on combinations (linear or nonlinear) of features, each with different units, making physical interpretation notoriously difficult. Secondly, the reduction of each image to a set of features results in compression of information. In this context, information from the digital image that may ultimately have diagnostic or biological significance is discarded.
C. Overview of Our Contribution: A Geometric Framework for Nuclear Morphometry Using Optimal Transportation

We describe a new approach for nuclear chromatin morphometry and pathology that utilizes the optimal transportation (OT) metric for quantifying the distribution of nuclear morphometry in different tissue classes. We believe the OT metric can capture some of the important information that defines the differences in nuclear structure in different cells (see Figure 1C for a few examples). More precisely, we utilize the OT metric to quantify, in relative terms, how much chromatin is distributed in which region of the nucleus (see subsection III-A for more details). Once a metric can be computed, classification of sets of nuclei is achieved with a kernel support vector machine approach, utilizing the distances given by the OT metric, in combination with a majority voting procedure. We compare the classification accuracies of the OT metric with several implementations of the more standard feature-based approach often utilized in digital image pathology. We also devise methods for visualizing and understanding the differences between nuclear distributions in different tissues (normal vs. cancerous), which utilize geodesics derived from the OT framework together with the Fisher Linear Discriminant Analysis (LDA) technique.
II. DATA AND PRE-PROCESSING

A. Tissue Processing and Imaging

Tissue blocks were obtained from the archives of the University of Pittsburgh Medical Center. Cases for analysis can be separated into two categories, thyroid and liver. Thyroid cases included five resection specimens with the diagnosis of follicular adenoma of the thyroid (FA) and five cases of follicular carcinoma of the thyroid (FTC); liver cases include five cases of fetal-type hepatoblastoma (HB). Tissue sections were cut at 5 micron thickness and stained using the Feulgen technique, which only stains DNA in deep magenta (see Figure 1A). All images were acquired using a light microscope with a 100X objective (see [10] for details). Slides that contained both lesion (HB, FA and FTC) and adjacent normal appearing tissue (NL) were chosen by the pathologist (J.A.O.). For each case, between 10 and 20 random fields were imaged to guarantee that at least 200 nuclei were obtained, for both lesion and normal tissue.

B. Segmentation, Intensity Normalization and Preprocessing

Nuclear segmentation consisted of a three-step procedure that included a graph cut initialization method [11] and a level set optimization [12] for obtaining smooth contours segmenting each nucleus. In the end, the pathologist (J.A.O.) reviewed all the segmented nuclei and removed nuclei that were incorrectly segmented or imaged out of focus. A typical segmentation result is shown in Figure 1B. Images containing individual nuclei were converted to grayscale by selecting the green channel from the RGB images and inverting the intensity values such that a zero (color coded in black) corresponds to the relative minimum amount of chromatin in the nucleus. All nuclei were normalized so that the sum of their intensity values is 1. This was done to guarantee that non-uniformities related to staining and image acquisition, from case to case, are not able to interfere with our method. In total, we extracted 1550 nuclei for our experiment. A few sample nuclei chosen from the entire data set are displayed in Figure 1C. Nuclei images were also preprocessed as in our previous works to eliminate, approximately, variations due to arbitrary rotation, translation, and coordinate inversions of each nucleus [13].

Fig. 1 Sample image data. A: raw image. B: segmented image. C: individual segmented nuclei after preprocessing. The sample nuclei show variations in size, shape, etc. Note that each of these images has been contrast stretched for best visualization
III. METHODS

A. Optimal Transportation for Comparing Nuclear Chromatin

Here we describe the optimal transportation metric used for quantifying and classifying nuclear structure. In our application, each nuclear structure is represented in a gray level digital image (of size 192 x 192 pixels). Each image I containing one single nucleus can be represented as

I = \sum_{i=1}^{M} v_i \delta_{x_i}    (1)

where \delta_{x_i} is a Dirac delta function at pixel location x_i, M is the number of pixels in image I, and v_i are the pixel intensity values. To accelerate the computation, we use a point mass approximation to model the chromatin distribution of each nucleus. Specifically, we use Lloyd's weighted K-means algorithm [14] to adjust the positions and weights
of a set of N < 800 particle masses to approximate the total intensity distribution of each nucleus. In the discrete setting, the OT minimization problem reduces to finding

d(I_0, I_1) = \min_{f} \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} c(x_i, y_j)\, f_{i,j}    (2)

subject to:

\sum_{i=1}^{N_p} f_{i,j} = I_1(y_j), \qquad \sum_{j=1}^{N_q} f_{i,j} = I_0(x_i), \qquad f_{i,j} \ge 0,
with N_p and N_q the number of masses chosen for representing images I_0 and I_1, and c(x, y) the "cost" of transporting a unit mass located at x to location y. We use the quadratic symmetric cost c(x, y) = c(y, x) = |x - y|^2. We utilize Matlab's linear programming implementation to solve this problem. We note that the optimal transportation distance metric has been used in the past for different image analysis problems [15], [16]. The geodesic interpolation between I_0 and I_1 can be approximated by I_\alpha with \alpha \in [0, 1]:

I_\alpha = \sum_{i=1}^{N_p} \sum_{j=1}^{N_q} f_{i,j}\, \delta_{(1-\alpha)x_i + \alpha y_j}.    (3)
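The minimization in Eq. (2) is a linear program; the text above notes that Matlab's implementation is used to solve it. The sketch below performs the same computation with SciPy instead (an illustrative assumption, not the authors' code), together with the displacement interpolation of Eq. (3):

import numpy as np
from scipy.optimize import linprog

def ot_distance(x, a, y, b):
    """Solve the discrete OT problem of Eq. (2) as a linear program.

    x : (Np, 2) particle locations representing I0, with weights a (sum to 1)
    y : (Nq, 2) particle locations representing I1, with weights b (sum to 1)
    Returns the minimal cost d(I0, I1) and the optimal plan f (Np x Nq).
    Note: the dense constraint matrix below suits small toy problems only;
    for N ~ 800 particles a sparse or dedicated OT solver is needed.
    """
    Np, Nq = len(a), len(b)
    # Quadratic symmetric cost c(x, y) = |x - y|^2 for every particle pair
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)
    # Marginal constraints: sum_j f_ij = a_i and sum_i f_ij = b_j
    A_eq = np.zeros((Np + Nq, Np * Nq))
    for i in range(Np):
        A_eq[i, i * Nq:(i + 1) * Nq] = 1.0
    for j in range(Nq):
        A_eq[Np + j, j::Nq] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(Np, Nq)

def ot_geodesic(x, y, f, alpha):
    """Eq. (3): mass f_ij travels along the straight line from x_i to y_j;
    at time alpha it sits at (1 - alpha) x_i + alpha y_j."""
    i, j = np.nonzero(f > 1e-12)
    return (1 - alpha) * x[i] + alpha * y[j], f[i, j]

Rendering the returned particle positions and masses on the pixel grid for a sweep of alpha values visualizes one nucleus morphing into the other along the OT geodesic.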
B. Supervised Classification

1) Kernel based support vector machines: In our own previous experience we have found that the support vector machine (SVM) method, when combined with a simple voting strategy, performed best when compared with other classification methods for determining the class of a given set of nuclei [10]. We use the kernel SVM [17] to train and test the data. In our work, we utilize the radial basis function (RBF) kernel

K(I_i, I_j) = \exp\left( -\gamma \, \| f(I_i) - f(I_j) \|^2 \right), \qquad \gamma \ge 0,

whenever numerical features are used, where f is the function that computes features from images. In order to utilize the OT distances described above, the kernel is modified as

K(I_i, I_j) = \exp\left( -\gamma' \, d_{OT}(I_i, I_j)^2 \right), \qquad \gamma' \ge 0.

For multiple class problems, we use the "one-versus-one" strategy [18] to reduce the single multiclass problem into multiple binary problems, and use a max-wins voting strategy to combine these binary results and classify the testing instance.

2) Cross validation: Cross validation is performed to select the optimal parameters as well as to test the average classification accuracy of the system. We use a "leave-one-out" strategy to separate the data into training and testing sets, where data from one case is used for testing and the remaining cases are used for training the classifier. In order to train a classifier with good predictive accuracy, we use k-fold cross validation to further separate the training set into two parts (lower level), and search for a good error penalty C (which determines how much the SVM tolerates errors during training) as well as the kernel parameter γ that yield the best accuracy over the k folds. We set k = 10, and perform an exhaustive search for the two parameters.
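Both kernels enter the SVM only through pairwise quantities, so the classifier can be built on a precomputed Gram matrix. A minimal sketch using scikit-learn (an assumed library choice, not the authors' implementation), where D holds pairwise OT distances:

import numpy as np
from sklearn.svm import SVC

def ot_kernel(D, gamma):
    """Gram matrix K(Ii, Ij) = exp(-gamma * d_OT(Ii, Ij)^2)."""
    return np.exp(-gamma * D ** 2)

def classify_case(D_train, y_train, D_test, gamma=1.0, C=1.0):
    """D_train: (n, n) OT distances among training nuclei;
    D_test: (m, n) OT distances from the m test nuclei to the
    training nuclei; y_train: tissue-class label of each nucleus."""
    clf = SVC(C=C, kernel="precomputed")  # one-versus-one used internally
    clf.fit(ot_kernel(D_train, gamma), y_train)
    nucleus_labels = clf.predict(ot_kernel(D_test, gamma))
    # Majority vote over the nuclei of one case decides the case label
    values, counts = np.unique(nucleus_labels, return_counts=True)
    return values[np.argmax(counts)]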
C. Characterizing Distributions of Nuclei

The geodesics that connect the nuclear structures in the entire dataset can be used to characterize and contrast the differences between different tissue classes. The idea is to interpret each nuclear structure as a point on the OT manifold and seek geodesics onto which the projection (in the same metric sense) of nuclear exemplars from different tissue classes differs most according to some quantitative criterion. We use the Fisher Linear Discriminant Analysis (LDA) method [19] to find such geodesics automatically. However, because explicit "coordinates" for each nuclear structure are not available (only pairwise distances), we first use multidimensional scaling (MDS) to find a Euclidean embedding for the data; we then use Fisher LDA in this Euclidean space to find the most discriminating direction.
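A sketch of this embed-then-discriminate procedure, again with scikit-learn as an assumed implementation:

import numpy as np
from sklearn.manifold import MDS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def discriminant_projection(D, labels, n_components=10):
    """D: (n, n) symmetric matrix of pairwise OT distances between nuclei;
    labels: tissue class of each nucleus (e.g. NL vs. HB)."""
    # Euclidean embedding that approximately preserves the OT distances
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=0)
    Z = mds.fit_transform(D)
    # Fisher LDA: for two classes this yields a single axis, along which
    # histograms of the two nuclear populations can be compared
    lda = LinearDiscriminantAnalysis(n_components=1)
    return lda.fit_transform(Z, labels).ravel()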
IV. RESULTS
Here we describe results obtained in analyzing nuclear structure in two different diagnostic challenges, one in the liver and the other in the thyroid. We first show that the distances computed using the OT framework can be used to achieve accuracy similar to the traditional feature-based approach to this problem described in detail in [10]. We then demonstrate how the OT framework described above can be used to extract meaningful quantitative information depicting the differences (in a distribution sense) that allow the data to be automatically classified. The results of classifying individual liver cases using RBF kernel-based SVM methods, for both features and the OT metric, are contained in Table 1. For thyroid cases (omitted for brevity), the feature-based and OT metric-based methods have similar performance (average accuracy for feature-based: NL 80.6%, FA 61.7%, FTC 54.7%; for the OT metric: NL 80.6%, FA 58.4%, FTC 62.4%). We note that both the feature-based and OT-based classifiers are identical in their implementation using the kernel SVM method; the only difference is in the actual distance (OT vs. feature-based normalized Euclidean distances). We use the automatic method described in section III.C to identify discriminant geodesic projections for the liver data (Figure 2). Results suggest that, according to the available data, the most important information for discriminating between NL and HB is the amount, in relative terms, of chromatin concentrated towards the border of the nucleus. The histogram shown in Figure 2 suggests that it is uncommon for HB nuclei to have a chromatin distribution concentrated exclusively at the nuclear periphery. The same experiment suggests that nuclear size is the most discriminating information for the thyroid data (results not shown).
Table 1 Average classification accuracy in liver data

            Case 1   Case 2   Case 3   Case 4   Case 5   Average
Features     89.0%    92.0%    94.0%    80.0%    71.0%    85.2%
OT metric    93.0%    91.0%    92.0%    89.0%    84.0%    89.8%
V. DISCUSSION AND CONCLUSION

A new approach for automated digital pathology using nuclear structure is described. The approach is based on quantifying chromatin morphology in different tissue classes (normal, cancer A, cancer B, etc.) using the optimal transportation metric between pairs of nuclei. These distances are utilized within a supervised learning framework to build a classifier capable of determining the tissue class to which a particular set of nuclei belongs. We compare our approach to the standard feature-based classification approach using image data from a total of 15 human cases. Results show that in most cases, on average, the optimal transportation metric performs at least as well as, or better than, a popular feature-based implementation. In addition to automated classification, we also describe how optimal transportation-based geodesic paths can be used to summarize differences between the nuclear structure (chromatin distribution) of different tissue classes. The approach involves computing the pairwise distances between all nuclei in the dataset and using the MDS technique to find a Euclidean embedding for the data. Fisher LDA is then applied to discover the modes of variation that are most responsible for distinguishing two classes of nuclei. Once the variation, in the form of an optimal transportation geodesic, is computed, a projection of the data can be used to visualize the main differences in chromatin configuration in two or more tissue classes.
REFERENCES

1. W.W. Ma and A.A. Adjei, "Novel agents on the horizon for cancer therapy", CA Cancer J Clin, vol. 59, no. 2, pp. 111-137, (2009)
2. C.M. Schlotter, U. Vogt, H. Allgayer, and B. Brandt, "Molecular targeted therapies for breast cancer treatment", Breast Cancer Res, vol. 10, no. 4, pp. 211, (2008)
3. G.N. Papanicolaou, "New cancer diagnosis," in Proceedings of the 3rd Race Betterment Conference, Michigan, (1928), pp. 528
4. D. Zink, A.H. Fischer and J.A. Nickerson, "Nuclear structure in cancer cells," Nat. Rev. Cancer, vol. 4, pp. 677-687, (2004)
5. E. Bengtsson, "Fifty years of attempts to automate screening for cervical cancer," Med. Imaging Tech., vol. 17, pp. 203-210, (1999)
Fig. 2 Geodesic identified automatically by our method. Distribution of the data over this geodesic shows variation in where chromatin is positioned within the nucleus (bottom). The histograms over the corresponding images indicate the relative number of nuclei in each population of the data (normal vs HB) that looked closest (in the OT sense) to it

6. P.H. Bartels, T. Gahm and D. Thompson, "Automated microscopy in diagnostic histopathology: From image processing to automated reasoning," Int. J. Imaging Systems and Technology, vol. 8, no. 2, pp. 214-223, (1998)
7. C. Demir and B. Yener, "Automated cancer diagnosis based on histopathological images: a systematic survey," Tech. Rep. TR-05-09, Rensselaer Polytechnic Institute, (2005)
8. K. Rodenacker and E. Bengtsson, "A feature set for cytometry on digitized microscopy images," Anal. Cell. Pathol., vol. 25, pp. 1-36, (2003)
9. J. Gil and H.S. Wu, "Application of image analysis to automatic pathology: realities and promises", Cancer Investigation, vol. 21, no. 6, pp. 950-959, (2003)
10. W. Wang, J.A. Ozolek and G.K. Rohde, "Detection and classification of thyroid follicular lesions based on nuclear structure from histopathology images", Cytometry A, Jan (2010)
11. Y. Boykov and G. Funka-Lea, "Graph cuts and efficient N-D image segmentation", Intern. J. Comp. Vis., vol. 70, no. 2, pp. 109-131, (2006)
12. C. Li, R. Huang, Z. Ding, C. Gatenby, D. Metaxas and J. Gore, "A variational level set approach to segmentation and bias correction of images with intensity inhomogeneity", Int. Conf. Med Image Comput Assist Interv, vol. 11, pp. 1083-1091, (2008)
13. G.K. Rohde, A.J.S. Ribeiro, K.N. Dahl and R.F. Murphy, "Deformation-based nuclear morphometry: capturing nuclear shape variation in HeLa cells", Cytometry A, vol. 73, no. 4, pp. 341-350, (2008)
14. S.P. Lloyd, "Least squares quantization in PCM", IEEE Trans. Inf. Theory, vol. 28, no. 2, pp. 129-137, (1982)
15. Y. Rubner, C. Tomasi and L.J. Guibas, "The earth mover's distance as a metric for image retrieval", Intern. J. Comp. Vis., vol. 40, no. 2, pp. 99-121, (2000)
16. S. Haker, L. Zhu, A. Tannenbaum and S. Angenent, "Optimal mass transport for registration and warping", Intern. J. Comp. Vis., vol. 60, no. 3, pp. 225-240, (2004)
17. M.A. Aizerman, E.M. Braverman and L.I. Rozonoer, "Theoretical foundations of the potential function method in pattern recognition learning", Automation and Remote Control, vol. 25, pp. 821-837, (1964)
18. U.H.-G. Kressel, "Pairwise classification and support vector machines", Advances in Kernel Methods: Support Vector Learning, pp. 255-268, (1999)
19. R.A. Fisher, "The use of multiple measurements in taxonomic problems", Annals of Eugenics, vol. 7, pp. 179-188, (1936)
Nanoscale Imaging of Chemical Elements in Biomedicine

M.A. Aronova1, Y.C. Kim2, A.A. Sousa1, G. Zhang1, and R.D. Leapman1

1 National Institutes of Health / National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
2 Center for Computational Materials Science, Naval Research Laboratory, Washington DC, USA
Abstract— Imaging techniques based on transmission electron microscopy can elucidate the structure and function of macromolecular complexes in a cellular environment. In addition to providing contrast based on structure, electron microscopy combined with electron spectroscopy can also generate nanoscale contrast from endogenous chemical elements present in biomolecules, as well as from exogenous elements introduced into tissues and cells as imaging probes or as therapeutic drugs. These capabilities complement biomedical imaging used in diagnostics while also providing insight into fundamental cell biological processes. We have developed electron tomography (ET) techniques based on unconventional imaging modes in the electron microscope to map specific types of macromolecules within cellular compartments. ET is used to determine the three-dimensional structure from a series of two-dimensional projections acquired successively by tilting a specimen through a range of angles, and then by reconstructing the three-dimensional volume. We have focused on two approaches that combine ET with other imaging modes: energy filtered transmission electron microscopy (EFTEM) based on collection of inelastically scattered electrons, and scanning transmission electron microscopy (STEM) based on collection of elastically scattered electrons. EFTEM tomography provides 3D elemental mapping and STEM tomography provides 3D mapping of heavy atom clusters used to label specific macromolecular assemblies. These techniques are illustrated by EFTEM imaging of the subcellular nucleic acid distribution through measurement of the intrinsic marker, elemental phosphorus; and by STEM imaging of gold clusters used to immunolabel specific proteins within the cell nucleus. We have also used the EFTEM and STEM techniques to characterize nanoparticles that might be used as drug delivery systems.
Keywords— Electron tomography, elemental mapping, energy-filtered imaging, scanning transmission electron microscopy.

I. INTRODUCTION

Emerging methods in nanoscale imaging create new opportunities to explore basic biological processes in the life sciences. Each of these imaging techniques contributes a unique type of information, corresponding to a specific mechanism of contrast generation. However, each technique also has trade-offs, notably in spatial resolution. An avalanche of information regarding the specific genes and proteins involved in disease mechanisms has produced large numbers of candidate molecules that can interact with a particular biological target of interest. The imaging field has embraced this opportunity through the discovery and development of a range of novel approaches for generating protein and gene specific contrast in an image. For example, in magnetic resonance imaging (MRI), a contrast agent not only gives clues about the location of a specific organ abnormality, but can also be used to quantify its size, growth rate and possibly chemical composition [1, 2]. MRI and many other imaging modalities are enabling the non-invasive visualization and quantification of specific biological processes [3]. On the cellular level, electron microscopy (EM), which bridges the gap in spatial resolution between x-ray crystallography and light microscopy, provides detailed structural information. The related approach of electron tomography (ET) [4] together with various reconstruction algorithms can generate the 3D organization of cells and their components. However, quantitative information that strengthens and enhances ET is rarely obtained, since it is difficult to extract and interpret the data. One of our goals has been to develop and implement efficient acquisition and quantitative interpretation of EM data to answer some of the basic questions related to cellular biology.

II. IMAGING MODALITIES

A. TEM
The most commonly used operation mode in the electron microscope is TEM, in which electrons transmitted through the specimen are imaged on a CCD detector. With recent advances in software and hardware, TEM can now be combined with ET in a relatively easy way to obtain 3D density maps. These maps, depending on the type of specimen and preparation technique, can provide a remarkable amount of detail. For example, in the case of frozen-hydrated viruses [5] or prokaryotic cells [6], high resolution 3D x-ray structures can be docked into the lower resolution electron density obtained from TEM. However, it is difficult to extract
quantitative information from these types of 3D density maps, since contrast is generated either from phase contrast in the case of frozen-hydrated specimens or from high-angle scattering in the case of heavy atom-stained preparations.

B. EFTEM

Energy filtered transmission electron microscopy (EFTEM), using the inelastic signal from excitation of inner shell electrons by the incident electron beam [7, 8], has improved the sensitivity and spatial resolution of elemental mapping. This has been facilitated by the latest generation of charge coupled device (CCD) detectors, the design of magnetic energy filters, and flexible control of the data acquisition. In the EFTEM the transmitted electrons are dispersed in energy at an energy-selecting slit (Fig. 1). This slit selects a specific energy range, and magnetic lenses behind the slit produce an energy-selected image on the CCD detector. Elemental maps are obtained by subtracting a background image recorded at an energy loss below a core edge from an
image recorded above the core edge. Alternatively, the energy-selecting slit can be removed and the energy loss spectrum can be recorded on the CCD camera, which allows quantitative elemental analysis. EFTEM mapping has been applied successfully to biological systems as well as to a wide range of materials [8, 9, 10]. In biological applications, elemental maps of unstained sections provide information about the 2-D distributions of various classes of biomolecules within a sectioned cell [11]. For example, the nitrogen signal provides information about the total distribution of biomolecules, including proteins and nucleic acids [12, 13]. Sulfur indicates the presence of proteins that contain high levels of the amino acids cysteine and methionine [14]. Phosphorus reveals the distribution of nucleic acids, phosphorylated proteins, and phospholipids [15, 16], since small molecules containing phosphorus are mostly removed in plastic-embedded preparations. In the cell nucleus, the relative concentrations of phospholipids and phosphoproteins are low enough for the phosphorus distribution to provide information about the packing density of DNA within chromatin [17]. However, a quantitative interpretation of elemental distributions requires a 3-D analysis due to ambiguity caused by overlapping structures in 2-D projections. 3-D elemental distributions can be obtained by combining EFTEM with ET.

C. STEM

In the scanning transmission electron microscopy (STEM) mode, a finely focused nanometer-sized probe is scanned across a specimen, and a variety of signals are detected at each pixel in a 2D array. An annular dark-field detector placed after the specimen picks up the high-angle elastic scattering produced by heavy atoms. A bright-field detector situated on axis collects transmitted electrons, which is advantageous for imaging micrometer-thick specimens. In addition, it is also possible to acquire EELS data at each pixel to obtain hyperspectral images from which compositional information can be extracted.
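Returning to the EFTEM elemental mapping described in section B: per pixel, the map is the post-edge image minus a scaled pre-edge background image. A minimal numpy sketch, where the scaling factor (in practice obtained from a power-law fit to the pre-edge energy-loss background) is an assumption of this sketch:

import numpy as np

def two_window_map(pre_edge, post_edge, scale=1.0):
    """Elemental map by background subtraction, as described above.

    pre_edge  : image recorded at an energy loss just below the core edge
    post_edge : image recorded just above the core edge
    scale     : factor extrapolating the pre-edge background into the
                post-edge window (assumed here; in practice it follows
                from a power-law fit to the background)
    """
    elemental_map = post_edge - scale * pre_edge
    return np.clip(elemental_map, 0, None)  # negative residuals are noise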
Fig. 1 Schematic diagram of electron microscope with EFTEM, STEM, and tomography capabilities. Essential components are: magnetic lens (L), magnetic energy filter (M), energy selecting slit (S) and charge-coupled device camera (CCD). At the slit plane (arrow) an electron energy loss spectrum (EELS) is formed, and a 2D EFTEM image is produced at the CCD. With this arrangement 3D information can also be obtained in EFTEM, STEM and TEM tomography modes

III. TOMOGRAPHY IN THE ELECTRON MICROSCOPE
In ET, a specimen of thickness from 50 nm to 1000 nm is tilted over a range of angles and imaged in an EM to provide a series of projections onto planes perpendicular to the beam direction (Fig. 2). By backprojecting the images and summing over all the orientations, it is thus possible to obtain a three-dimensional reconstruction of the specimen [18, 19]. This technique is increasingly finding applications in numerous fields of science [4, 20].
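A toy illustration of this back-projection step (an illustrative sketch, not the authors' code): each 1D projection is smeared back along its viewing direction and the smears are summed over all tilt angles. Without additional filtering the result is a blurred estimate of the object.

import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection of 1D projections into a 2D slice.

    sinogram   : (n_angles, n_pixels) array, one row per tilt angle
    angles_deg : projection angles in degrees
    """
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    for p, theta in zip(sinogram, angles_deg):
        smear = np.tile(p, (n, 1))                  # smear along the beam
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles_deg)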
Fig. 2 A representation of tomography in 2D. The projections of the specimen (curves) are recorded as the specimen is tilted. To obtain the original object these projections can then be reprojected using various reconstruction algorithms

Although conventional ET provides important structural information about a cell at the macromolecular scale, it is useful to obtain other types of complementary information in order to identify and quantify specific types of macromolecules within cellular compartments.
IV. APPLICATIONS

With the aim of extending the conventional 2D approach to EFTEM, we have combined ET and EFTEM. This technique, which we call quantitative electron spectroscopic tomography (QuEST), is demonstrated by determining the subcellular distribution of nucleic acids by measuring elemental phosphorus. Specifically, excitation of inner shell electrons of phosphorus atoms in the specimen results in a characteristic energy loss at 132 electron volts (the L2,3 edge), corresponding to ejection of 2p electrons, which can be detected in the energy filtered images. We have explored the potential of QuEST for determining the organization of DNA and proteins in cell nuclei. Previously, the 3-D ultrastructure of the nucleus has mainly been derived from imaging samples stained with extrinsic heavy metals. Use of energy filtering for mapping phosphorus in cells has been limited to 2-D [9]. We have demonstrated that QuEST reveals the 3-D distributions of nucleic acid within the nucleus. For example, we have been able to quantify the phosphorus content of individual ribosomes, which contain RNA, and the cell nucleus, which contains DNA in the form of chromatin arranged in 10 nm or 30 nm fibers [21]. The ribosomes (blue-green) and nuclear chromatin (orange) in a Drosophila larval cell are shown in Figure 3. The simultaneous iterative reconstruction technique (SIRT) algorithm was used to reconstruct the EFTEM tomographic data, since it preserves the numerical values associated with the densities of the elements present. Our quantitative analysis showed that ribosomes contain 8000 ± 2000 P atoms, in agreement with the known value of around 7000 P atoms. The density of phosphorus was 0.7 ± 0.2 atoms per nm3, which is consistent with a model of tightly packed 30 nm chromatin fibers. In this way it was possible to visualize and quantify the phosphorus distribution in 3D within a sectioned eukaryotic cell. It is also feasible to image other biological elements, e.g., iron and nitrogen, in 3D. We also considered some important practical limitations of the technique including: (1) precision in extracting the phosphorus signal, (2) detection limits and (3) effects of damage when the specimen is irradiated with 300 kV electrons.

Fig. 3 Quantitative analysis of 3D phosphorus distribution: SIRT algorithm was used to reconstruct the EFTEM tomographic tilt series and the 3-D volume was rendered with Amira software. Ribosomes (colored blue-green) exhibit contrast from the phosphorus in their RNA and chromatin (colored orange) from their DNA
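The text above credits SIRT with preserving quantitative values; a generic sketch of its update rule (with the projector written as an explicit matrix A for brevity, an assumption of this sketch — practical codes apply projection operators on the fly):

import numpy as np

def sirt(A, b, n_iter=100, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique.

    A : (n_rays, n_voxels) projection matrix
    b : measured projections (the tilt series, flattened)
    The row/column normalizations keep the update stable and, unlike
    simple backprojection, preserve quantitative density values.
    """
    col = A.sum(axis=0); col[col == 0] = 1.0
    row = A.sum(axis=1); row[row == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row         # weighted data mismatch
        x += relax * (A.T @ residual) / col  # backproject and normalize
        x = np.clip(x, 0, None)              # densities are non-negative
    return x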
ACKNOWLEDGMENT This work was supported by the intramural research program of the NIH.
REFERENCES

1. Jung C W, Jacobs P (1995) Physical and chemical properties of superparamagnetic iron oxide MR contrast agents: Ferumoxides, Ferumoxtran, Ferumoxsil. Magn Reson Imaging 13 (5): 661-674
2. Svenson S, Tomalia D A (2005) Dendrimers in biomedical applications - reflections on the field. Adv Drug Deliv Rev 57 (15): 2106-2129
3. Cherry S R (2004) In vivo molecular and genomic imaging: new challenges for imaging physics. Phys Med Biol 49: R13-R48
4. McIntosh R et al., Tr Cell Biol 15: 43
5. Grünewald K, Desai P, Winkler D C, Heymann J B, Belnap D M, Baumeister W, Steven A C (2003) Three-dimensional structure of herpes simplex virus from cryo-electron tomography. Science 302: 1396-1398
6. Grünewald K, Medalia O, Gross A, Steven A C, Baumeister W (2003) Prospects of electron cryotomography to visualize macromolecular complexes inside cellular compartments: implications of crowding. Biophys Chem 100: 577-591
7. Reimer L (1995) Energy-Filtering Transmission Electron Microscopy. Springer, Berlin
8. Egerton R F (2003) New techniques in electron energy-loss spectroscopy and energy-filtered imaging. Micron 34: 127-139
9. Krivanek O, Friedman S, Gubbens A, Kraus B (1995) An imaging filter for biological applications. Ultramic 59: 267-282
10. Hofer F, Warbichler P (2004) Elemental mapping using energy filtered imaging. In: Ahn C (Ed.), Transmission Electron Energy Loss Spectrometry in Materials Science and the EELS Atlas, second ed. Wiley-VCH, Berlin
11. König P, Braunfeld M B, Sedat J W, Agard D A (2007) The three dimensional structure of in vitro reconstituted Xenopus laevis chromosomes by EM tomography. Chromosoma DOI: 10.1007/s00412-007-0101-0
12. Goping G, Pollard H B, Srivastava M, Leapman R (2003) Mapping protein expression in mouse pancreatic islets by immunolabeling and electron energy loss spectrum-imaging. Microsc Res Tech 61: 448-456
13. Bazett-Jones D P, Hendzel M J, Kruhlak M J (1999) Stoichiometric analysis of protein- and nucleic acid-based structures in the cell nucleus. Micron 30: 151-157
14. Leapman R D, Jarnik M, Steven A C (1997) Spatial distributions of sulfur-rich proteins in cornifying epithelia. J Struct Biol 120: 168-179
15. Korn A, Spitnik-Elson P, Elson D, Ottensmeyer F P (1983) Specific visualization of ribosomal RNA in the intact ribosome by electron spectroscopic imaging. Eur J Cell Biol 31: 334-340
16. Ottensmeyer F P (1984) Electron spectroscopic imaging: parallel energy filtering and microanalysis in the fixed-beam electron microscope. J Ultrastruct Res 88: 121-134
17. Ottensmeyer F P (1984) Electron spectroscopic imaging: parallel energy filtering and microanalysis in the fixed-beam electron microscope. J Ultrastruct Res 88: 121-134
18. Frank J (1992) Electron Tomography: Three-dimensional Imaging with the Transmission Electron Microscope. Plenum Press, New York
19. Mastronarde D N (1997) Dual axis tomography: an approach with alignment methods that preserve resolution. J Struct Biol 120: 343-352
20. Midgley P A, Weyland M (2003) 3D electron microscopy in the physical sciences: The development of Z-contrast and EFTEM tomography. Ultramic 96 (3-4): 413-431
21. Aronova M A, Kim Y C, Harmon R, Sousa A A, Zhang G, Leapman R D (2007) Three-dimensional elemental mapping of phosphorus by quantitative electron spectroscopic tomography (QuEST). J Struct Biol 160: 35-48
Author: Maria A. Aronova
Institute: National Institutes of Health/National Institute of Biomedical Imaging and Bioengineering
Street: 9000 Rockville Pike, Bldg 13/3N17
City: Bethesda, MD
Country: USA
Email: [email protected]
Sparse Representation and Variational Methods in Retinal Image Processing

J. Dobrosotskaya1, M. Ehler1,2, E. King1,2, R. Bonner2, and W. Czaja1

1 Norbert Wiener Center for Harmonic Analysis and Applications, Department of Mathematics, University of Maryland, College Park, MD 20742
2 National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health and Human Development, PPB/LIMB/SMB, Bethesda, MD, 20892
Abstract- Relations between different types of cameras used for retinal imaging were studied with the purpose of improving the quantitative precision of the imaging data (used for diagnostics and medical research). Based on the differences in visual quality and quantitative parameters, we designed analytical models of the effects that cameras introduce into the retinal data and described possible ways of digital post-processing. Some processing tasks involve detection and separation of features (such as the retinal microvessels) prior to subsequent analysis of underlying retinal pathology. The mathematical techniques for feature detection and inpainting are variational, implemented via numerically stable gradient descent schemes. Other tasks involve the estimation of translation-invariant sparse image coefficients that allow separating the background and significant scales of the image from the texture-like auxiliary information. The above techniques are based on recent work on the wavelet Ginzburg-Landau energy and methods of adaptive thresholding of the stationary wavelet transform coefficients. We consider algorithms with partial specialist supervision and deliberate choice of processing methods for different eye areas, as well as separate processing of healthy vs. pathological eye data.

Keywords- Retinal imaging, variational method, edge detection, wavelet.
I. Introduction

Ophthalmologists often rely on retinal imaging to diagnose, detect, and follow disease progression. Classifying early stages of age-related macular degeneration (AMD), for instance, relies on qualitative and quantitative analyses of the data from the confocal scanning laser ophthalmoscope (cSLO) and standard fundus camera images [13, 7, 9]. A decrease in macular pigment has been identified as a risk factor for AMD, and observing its distribution over time would allow further conclusions about natural and pathological dynamics of macular pigment changes. However, due to inter- and cross-modality variations, better quantitative measurements are still needed [12]. Macular pigment measurements based on two-wavelength autofluorescence images have been introduced by Delori et al. [8]. To compute the macular pigment map (MPM), we either pair a blue cSLO image (488nm excitation, > 500nm emission) with a yellow standard fundus image (520-600nm excitation, > 600nm emission) or, if available, we pair blue (460-500nm excitation) and yellow standard fundus images [9]. While image artifacts and background components cancel out (to some extent) in intra-modality pairings, they are emphasized in cross-modality pairings and introduce significant errors in the macular pigment maps. To trace the dynamics of macular pigment and other chromophore changes in retrospective studies, one must compare cSLO autofluorescence with standard fundus autofluorescence, and quantitative measurements require image pre-processing to reduce image artifacts, non-uniform illumination profiles, and contrast differences between modalities.

This paper addresses two retinal image analysis problems. First, we correct autofluorescence images from cross-modalities (cSLO, standard fundus camera) to be used in the same two-wavelength computations of macular pigment maps. Blood vessels are detected and masked to facilitate quantitative analysis. Secondly, we extract a binary map of the retinal vascular system from cSLO images. These two techniques share the common feature of using the stationary wavelet transform for translation-invariant operations. Since the microvascular system in the image has relatively high contrast, some special features of the wavelet decomposition of almost-binary images provide the apparatus for the detection of the blood vessels.
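In Delori's two-wavelength method referenced above, the macular pigment map is, schematically, a log-ratio of the two registered autofluorescence images normalized to a pigment-free reference region. The sketch below is only that schematic; the constant k (set by the pigment extinction coefficients at the two wavelengths) and all names are assumptions:

import numpy as np

def macular_pigment_map(I_blue, I_yellow, ref_mask, k=1.0, eps=1e-6):
    """Schematic two-wavelength macular pigment map (MPM).

    I_blue, I_yellow : co-registered autofluorescence images
    ref_mask : boolean mask of a perifoveal reference region assumed
               to be free of macular pigment
    k : difference of the macular pigment extinction coefficients at
        the two excitation wavelengths (a placeholder value here)
    """
    log_ratio = np.log((I_blue + eps) / (I_yellow + eps))
    ref = log_ratio[ref_mask].mean()   # normalize to the reference region
    # Macular pigment attenuates the blue excitation more strongly,
    # so the map is positive where pigment is present.
    return (ref - log_ratio) / k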
II. Adaptive translation invariant wavelet thresholding

First, we will introduce the adaptive thresholding technique that was designed to perform the minimal changes needed to obtain a more reliable pigment map using images from different digital sources. The need for non-uniform, "relative" thresholding arises from the necessity to automate the procedure, as well as the need to use different thresholds for edges in separate directions. We will use the image decomposition via the translation invariant (stationary) wavelet transform to perform the adaptive corrections. The involved wavelet function ψ is assumed to be sufficiently regular and compactly supported. The respective 2D wavelet basis is assumed to be
separable, so that the wavelet coefficients can be categorized into directions H, V and D (horizontal, vertical and diagonal respectively).

Relative thresholding in Besov spaces

If we consider a characteristic function of some measurable set u = \chi_E \in L^2(\mathbb{R}^2), then, due to \psi having a compact support and \chi_E being locally homogeneous of degree 0, we get:

\langle u, \psi_{j,k} \rangle \approx 2^{p} \langle u, \psi_{j+p,\,2^p k} \rangle.

The decrease in the coefficient values as the scale increases and the respective increase of the integration domain for the translation parameter k make the standard deviation change as O(2^{-2j}) as the scale j increases. If the boundary of the set E is piecewise smooth, one can expect the coefficients of the wavelet modes supported within the same distance from the boundary to be of the same order of magnitude.

Let us define the thresholding rule for each scale j of the wavelet decomposition. In order to do that, consider the expected value (the mean) M_j and the standard deviation D_j for the set of all wavelet coefficients c_{j,k} = \langle \psi_{j,k}, u \rangle within one fixed direction at a fixed scale j \in \mathbb{N}:

M_j = 2^{-Nj} \sum_{k_i=0}^{2^J-1} c_{j,k}, \qquad D_j^2 = 2^{-Nj} \sum_{k_i=0}^{2^J-1} (c_{j,k} - M_j)^2 = 2^{-Nj} \sum_{k_i=0}^{2^J-1} c_{j,k}^2 - M_j^2.

We define the relative significance threshold at scale j as \tau_j = C\,2^{2j} D_j, C = 2^{-2J_{max}}, where J_{max} is the thresholding scale, defined as the maximum level of wavelet decomposition J (i.e. the image resolution) or as the scale that stores the most significant information (visually significant, or defined by a specific application or the given data quality). In this manner, we define the following unified criterion S for the relative wavelet thresholding. A mode \psi_{j,k} is chosen to be relatively significant for a function u within a chosen direction (H, V, or D), i.e. b_u(j,k) = 1, if and only if it differs from the mean coefficient value at the scale j by more than the standard deviation times the dyadic scaling multiple:

|\langle u, \psi_{j,k} \rangle - M_j| \ge C\,2^{2j} D_j.

Thus, the relative thresholding leaves intact those coefficients that differ sufficiently (as much as in the binary image case) from the mean of all coefficients at this wavelet scale.

Numerical tests

Figs. 1-2 show the results of the numerical tests that were performed using a cSLO and a yellow standard fundus image. The cSLO image was modified via the adaptive wavelet thresholding procedure defined above in order to compensate for the contrast differences specifically near the blood vessels (Fig. 1(c)). The result of the MPM computation without any additional correction is shown in Fig. 2(a); the result of the MPM computation using a thresholded cSLO image is shown in Fig. 2(b). One can see that while unevenness of the MPM originating from sources other than blood vessels was retained, the vessel-related artifacts that are visible in Fig. 2(a) are almost completely eliminated.

Fig 1 Images: (a) blue cSLO; (b) yellow fundus; (c) corrected blue cSLO.

Fig 2 (a) MPM computed without correction, (b) MPM computed using thresholded blue cSLO image.
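A sketch of the relative thresholding rule above using PyWavelets' stationary wavelet transform; the library choice and the mapping from list position to scale j are assumptions of this sketch, not the authors' implementation:

import numpy as np
import pywt

def relative_swt_threshold(img, wavelet="db4", levels=4, Jmax=None):
    """Adaptive 'relative' thresholding of stationary wavelet coefficients,
    following the rule |c_{j,k} - M_j| >= C 2^{2j} D_j, C = 2^{-2 Jmax}.
    Image side lengths must be divisible by 2**levels for swt2."""
    Jmax = Jmax if Jmax is not None else levels
    C = 2.0 ** (-2 * Jmax)
    coeffs = pywt.swt2(img, wavelet, level=levels)
    out = []
    for lvl, (cA, details) in enumerate(coeffs):
        j = levels - lvl   # assumed mapping of list position to scale j
        thr_details = []
        for c in details:  # H, V, D directions are thresholded separately
            M, D = c.mean(), c.std()
            keep = np.abs(c - M) >= C * 2.0 ** (2 * j) * D
            # Insignificant coefficients are suppressed (set to zero here;
            # other shrinkage choices are possible)
            thr_details.append(np.where(keep, c, 0.0))
        out.append((cA, tuple(thr_details)))
    return pywt.iswt2(out, wavelet)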
III. Semi-supervised contour detection using the wavelet Ginzburg-Landau energy
The wavelet Ginzburg-Landau energy (WGL), introduced in [3, 4], has been shown effective in variational methods for various imaging problems. Here we describe one of its applications, which is essentially a variational method for detecting the microvascular elements within retinal images. As was mentioned before, blood vessels are a completely different structure for the purposes of the retinal image analysis and, in particular, should be excluded from the computation of the macular pigment map in order to improve the precision of the latter. Here, once again, we will use the assumption that they are of sufficient contrast with respect to the background, i.e. can be treated as almost-binary elements of the image. The semi-supervision required here involves the manual choice of several pixel areas that belong to the blood vessels prior to the automated computations that detect the rest of the microvascular system.

Wavelet Ginzburg-Landau energy
Fourier analysis provides many elegant approaches to differential operators and related tools in PDE-based image processing. In the design of the wavelet Ginzburg-Landau energy, a more localized basis than the Fourier one was used in the context of variational methods based
363
for the same values of the interface parameters comparing to the classical GL energy. WGL minimization can be performed via the gradient descent method. The latter problem is equivalent to solving the following ODE with a sufficiently regular initial condition u(x, 0) = u0 (x): 1 ut = Δw u − W (u) (GD) The above problem is well-posed: it has a unique solution that exists globally in time and converges to a steady state as t → ∞. The steady state solution is infinitely smooth provided that wavelet ψ used in the construction of the energy has sufficient regularity. Modified WGL in the variational formulation of the segmentation problem.
Our segmentation model involves minimizing the sum of 1 WGL (as a regularizer) and the L2 spatial and the B2,2 edge-preserving forcing terms. μw μs E(u) = W GL(u)+ (u−f )χΩ 2L2 + |P rΛ (u−uorig )|2B , 2 2 (M W GL) χΩ and χΛ are masks in the spatial and wavelet domains respectively, and μs and μw are corresponding weights. Ω is assumed to be the manually preclassified part of the image, Λ - the set of wavelet modes that need to be preserved close to the original image. Function f assumes value 1 at the non-vessel pixels and 0 at the pixels within the blood vessels in the image, uorig denotes the original image rescaled to the range [0, 1]. The set Λ of “relatively” significant modes is obtained by the adaptive thresholding method described earlier. The gradient descent equation for this modified WGL energy takes the form 1 ut = Δw u− W (u)−μs (u−f )χΩ −μw Δw P rΛ (u−uorig ) The initial guess used for numerical simulations may be chosen to be equal to the given image except for black and white values at the preclassified areas:
W (u) = u2 (u − 1)2 is a diffuse interface approximation to the Total Varia tion functional |∇u(x)|dx in the case of binary images [10]. GL energy is used in modeling of a vast variety of phenomena including the second-order phase transitions. However, if used in signal processing applications, diffuse interface models tend to produce results that are oversmoothed comparing to the optimal output. In the new model the H 1 seminorm |∇u(x)|2 dx is replaced with a Besov seminorm (or Besov-type seminorm defined using 0-regular wavelets). This allows to construct a method with properties similar to those of the PDE-based methods but without as much diffuse interface scale blur. The “wavelet Laplace operator ” was defined by having the wavelet basis functions as eigenfunctions, and acting on those in the same “scale - proportional” manner as the Laplace operator acts on the Fourier basis. Given an orthonormal wavelet ψ the “wavelet Laplacian” of any u(x, 0) = u0 (x) = uorig χΩc (x) + f χΩc (x) u ∈ L2 (R) is formally defned as +∞ Numerical simulations were performed using discrete Δw u = − 22j f, ψj,κ ψj,κ dκ, ψj,κ = 2j ψ(2j x − κ). gradient-stable semi-implicit schemes. Indeed, Δ is a w j=0 diagonal operator in the wavelet basis, but the presence Then the “wavelet Allen-Cahn” equation ut = Δw u − of nonlinearity does not allow to make it fully implicit. 1 W (u) describes the gradient descent in the problem The gradient stability is achieved by the convexity splitof minimizing the Wavelet Ginzburg-Landau (WGL) en- ting method described in [3]. The computational speed of ergy: WGL-based algorithms is mostly defined by the choice of 1 W GL(u) := |u|2B + W (u)dx, the translation-invariant discrete wavelet transform. The 2 4 stationary wavelet transform (SWT) matches the model +∞ 2j 2 2 |f, ψj,κ | dκ 2 (3) perfectly, however, it requires more operations than FFT |u|B = 2 j=0 that is used within related PDE-based methods. The fact is the square of the Besov 1-2-2 (translation-invariant) that the SWT is relatively slow in comparison with the FFT is compensated by WGL-based methods requiring semi-norm if the wavelet ψ is r-regular, r ≥ 2. WGL functionals are inherently multiscale and take fewer iterations to converge. Thus, the pseudo-differential advantage of simultaneous space and frequency localiza- method is comparable to or outperforms the PDE methtion, thus allowing much sharper minimizer transitions ods in terms of the CPU time. IFMBE Proceedings Vol. 32
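A schematic implementation of the (unmodified) WGL descent, applying Δ_w exactly as defined above: detail coefficients at scale j are scaled by -2^{2j}. The wavelet choice, step size, ε, and the explicit time stepping are assumptions of this sketch:

import numpy as np
import pywt

def wavelet_laplacian(u, wavelet="db4", levels=4):
    """Apply Delta_w: detail coefficients at scale j are eigen-modes with
    eigenvalue -2^(2j); the approximation part is assumed to map to zero."""
    coeffs = pywt.swt2(u, wavelet, level=levels)
    out = []
    for lvl, (cA, details) in enumerate(coeffs):
        j = levels - lvl   # assumed mapping of list position to scale j
        out.append((np.zeros_like(cA),
                    tuple(-(2.0 ** (2 * j)) * d for d in details)))
    return pywt.iswt2(out, wavelet)

def wgl_gradient_descent(u0, eps=0.05, dt=1e-3, n_iter=500):
    """Plain explicit Euler for u_t = Delta_w u - (1/(4 eps^2)) W'(u),
    with the double well W(u) = u^2 (u - 1)^2 from the text. The paper
    uses a gradient-stable semi-implicit convexity-splitting scheme;
    explicit stepping is shown for brevity and needs a small dt."""
    u = u0.copy()
    for _ in range(n_iter):
        dW = 2.0 * u * (u - 1.0) * (2.0 * u - 1.0)   # W'(u)
        u = u + dt * (wavelet_laplacian(u) - dW / (4.0 * eps ** 2))
    return u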
Numerical tests

The test was performed on an average of several cSLO images (Fig. 3). Depending on the combination of the parameters ε and μ, which define the importance of the output being binary and having the same set of edges, respectively, the results vary in the level of detail and the sharpness of the vessel/non-vessel classification (Fig. 3(c)).
Fig 3 (a) Initial image, (b) the set of edges that need to be preserved (denoted f in the algorithm), (c) the resulting maps of detected blood vessels.
IV. Conclusions

The authors addressed some questions related to retinal image processing, in particular the computation of the macular pigment map. A method for correcting autofluorescence images across modalities (cSLO, standard fundus camera), which allows using them in the same two-wavelength computations of macular pigment maps, was introduced along with a variational technique for extraction of a binary map of the retinal vascular system from cSLO images. The latter can be improved by designing an explicit, non-iterative way of finding or approximating solutions of the described variational problem and, thus, decreasing the computational time. This issue is one of the aspects of the authors' work in progress.

Acknowledgements

The research was funded by the Intramural Research Program of NICHD/NIH, by NSF (CBET0854233), by NGA (HM15820810009), and by ONR (N000140910144). The authors are grateful to Professors John J. Benedetto and Andrea L. Bertozzi for many insightful discussions and their long-term support.

References
[1] A. Bertozzi, S. Esedoglu, and A. Gillette. Analysis of a two-scale Cahn-Hilliard model for image inpainting. Multiscale Modeling and Simulation, 6(3):913-936, 2007.
[2] A. Chambolle and P.-L. Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167-188, April 1997.
[3] J. Dobrosotskaya and A. Bertozzi. A Wavelet-Laplace variational technique for image deconvolution and inpainting. IEEE Transactions on Image Processing, 17(5), 2008.
[4] J. Dobrosotskaya and A. Bertozzi. Wavelet Ginzburg-Landau energy in the edge-preserving variational techniques of image processing. To be submitted to SIAM Jour. of Appl. Analysis, March 2010.
[5] S. Esedoglu and J. Shen. Digital inpainting based on the Mumford-Shah-Euler image model. Euro. Jnl of Applied Mathematics, 13:353-370, 2002.
[6] S. Esedoglu. Blind deconvolution of bar code signals. Inverse Problems, (20):121-135, 2004.
[7] A.C. Bird et al. An international classification and grading system for age-related maculopathy and age-related macular degeneration. The International ARM Epidemiological Study Group. Surv Ophthalmol, 5(39):367-374, 1995.
[8] F.C. Delori et al. Macular pigment density measured by autofluorescence spectrometry: comparison with reflectometry and heterochromatic flicker photometry.
[9] M. Ehler et al. High-resolution autofluorescence imaging for mapping molecular processes within the human retina. UMD, 2010. SBEC.
[10] G. Dal Maso. An Introduction to Gamma-Convergence. Progress in Nonlinear Differential Equations and Their Applications. Birkhauser Boston, Inc., Boston, MA, 1993.
[11] L.I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60:259-268, 1992.
[12] S. Beatty, F.J. Van Kuijk, and U. Chakravarthy. Macular pigment and age-related macular degeneration: longitudinal data and better techniques of measurement are needed. Invest Ophthalmol Vis Sci, 3(49), 2008.
[13] S.M. Meyers, M.A. Ostrovsky, and R.F. Bonner. A model of spectral filtering to reduce photochemical damage in age-related macular degeneration. Trans Am Ophthalmol Soc, (102):83-93, 2004.
Optimization and Validation of a Biomechanical Model for Analyzing Running-Specific Prostheses

Brian S. Baum1, Roozbeh Borjian1, You-Sin Kim1, Alison Linberg1, and Jae Kun Shim1,2,3

1 Department of Kinesiology, University of Maryland, College Park, MD USA
2 Department of Bioengineering, University of Maryland, College Park, MD USA
3 Neuroscience and Cognitive Science (NACS) Graduate Program, University of Maryland, College Park, MD USA
Abstract— Modeling the ankle joint during amputee locomotion is difficult since a definitive joint axis may not exist. Gait analysis estimates joint center positions and defines body segment motions by placing reflective markers on anatomical landmarks. Inverse dynamics techniques then estimate joint kinetics (forces and moments) and mechanical energy expenditure using data from ground reaction forces (GRFs) and the most distal joint (usually the ankle) to make calculations for proximal joints. Running-specific prostheses (RSPs) resemble a “C” or “L” shape rather than the human foot. This allows RSPs to flex and return more propulsive energy, like a spring, but no “ankle” exists. Current biomechanical models assume such a joint exists by placing markers arbitrarily on the RSP (e.g. the most acute point on the prosthesis curvature). These models are not validated and may produce large errors since inverse dynamics assumes rigid segments between markers but RSPs are designed to flex. Moreover, small errors in distal joint kinetics calculations will propagate up the chain and inflate errors at proximal joints. This study develops and validates a model for gait analysis with RSPs. Reflective markers were placed 1 cm apart along the lateral aspects of five different RSPs. Prostheses were aligned in a material testing system between two load cells. Forces simulating peak running loads were applied and the load cells measured forces and moments at the top (applied force) and bottom (GRF) of the prostheses. Inverse dynamics estimated force transfers from the bottom to top of the prostheses through the defined segments. Differences between estimated and applied values at the top are considered model error. Error will be calculated for every possible combination of markers to determine the minimal marker set with an “acceptable” level of error. The results yield a model that can be confidently used during gait analyses with RSPs. Keywords—Kinetics, Amputee, Amputation, Prosthesis.
I. INTRODUCTION

Modeling the lower extremity joints, and specifically the ankle joint, proves to be a continual source of difficulty and remains an inherent problem in analyzing the locomotion (walking and running) of individuals with lower extremity amputations (ILEA). Many of today's commonly prescribed
prosthetic foot designs are either energy storage and return (ESAR) or dynamic response feet, which have a resemblance to an intact foot. During a three-dimensional gait analysis, reflective markers are placed on anatomical landmarks to estimate the positions of joint centers and to define the body segment motions. Researchers will often treat current prostheses like an intact limb and label the relative location of the landmarks on the prosthesis. In biomechanics of human locomotion, identifying the ankle joint is one of the most important tasks because the calculations of joint kinetics (forces and torques) and joint mechanical energy expenditure start from the ankle joint. A small joint position error at the ankle can easily propagate up the chain to the knee, hip, and beyond producing greater errors for the joint kinetics calculations in these more proximal joints. In previous amputee locomotion studies, markers defining the ankle joint axis are often affixed to spots on the prosthetic foot mimicking the marker placement on the intact foot and ankle complex. With the development of running specific prostheses, new prosthetic foot designs have emerged that no longer resemble the human foot. Many of the designs resemble a “C” or “L” shape at the distal end of the limb, which allows the prosthesis to flex and return more energy for propulsion during running, similar to a spring. These designs do not have a typical ankle joint (Fig. 1a-b); however, similar methods of biomechanical analyses have been employed to analyze these prostheses as have been used in ESAR and dynamic response prosthetic feet, traditional prosthetic feet, and the intact limb. Studies investigating running with these devices have estimated the prosthetic limb ankle joint to be either at the same relative position as the intact limb’s ankle joint or the most acute point on the prosthesis curvature (i.e., the greatest curvature; see Fig. 1a-b) [1-3]. These estimations have not been validated and potentially result in large errors in the kinetic calculations and subsequent interpretations of results. Consequently, improved and validated modeling techniques are needed to estimate accurate centers of rotation for running prostheses that can be applied to multiple prosthetic designs, and be utilized in those with bilateral lower extremity amputations where an
Fig. 1 a-b. Literature has reported marker placement for running prostheses (a) placed at the height of the intact limb’s lateral malleolus or (b) the point at which the radius of the prosthesis is most acute
intact ankle joint is not available for reference. An accurate model will provide data that can be interpreted with confidence and is needed to produce biomechanical and physiological data necessary to identify optimal running techniques, prosthetic alignment, prosthetic designs, training regimens, and energy efficiency. Without understanding the biomechanical and physiological consequences of exercise after amputation, clinicians will have difficulty in prescribing appropriate prostheses and exercise regimes to people with a lower extremity amputation with and without diseases such as diabetes, high blood pressure, and obesity. The aim of this experiment is to develop and validate a model with unique optimal marker placements for specific running prosthesis designs and to determine the resultant optimal marker placement for all tested running prosthetic designs.
II. METHODS

A biomechanical model is being developed through motion analysis of running-specific prostheses in a material testing system (MTS, Eden Prairie, MN). Four running-specific prosthesis designs are being tested for this project, including the 1E90 Sprinter (OttoBock Inc.), Flex-Run (Ossur), Cheetah® (Ossur) and Nitro Running Foot (Freedom Innovations). These prostheses were chosen because they are the most commonly prescribed running-specific prostheses on the market. Each prosthesis was placed in the MTS between two load cells (Bertec PY6, Columbus, OH) in a neutral alignment (Fig. 2). Neutral alignment was defined according to the specific
Fig. 2 Approximate marker placement on running prosthesis and position in MTS machine between two 6-DOF forceplates (FP). Fewer markers than actual are shown in this illustration for clarity
manufacturers' recommendations for prosthesis alignment. The load cells captured data at 1,000 Hz. Forces up to 2,300 N were applied to simulate peak vertical forces commonly observed during running (approximately three times the body weight of a 75 kg person), and the load cells measured the force and moment at the head (applied load) and toe (simulating the ground reaction force and moment) of each prosthesis. Reflective markers were placed at 1 cm intervals along the lateral aspect of the keel of each running-specific prosthesis (see Fig. 2). Reflective markers were also placed orthogonally on the anterior, lateral, and medial aspects of the "head" of the prosthesis, at the point of connection to the socket or pylon, in order to define the local coordinate system of the prosthesis. A 6-camera motion capture system (Vicon, Oxford, UK) with a capture frequency of 500 Hz was used to collect the 3-D positional data of the markers during each trial. Two consecutive markers defined individual segments of the prosthesis (assumed to be rigid), and consecutive segments shared a common marker. The joint between these segments was assumed to be a hinge joint. Standard inverse dynamics calculations were made to estimate the force and torque transfer from the base of the prosthesis to the head, through the defined prosthesis segments. The difference between the force and moment values at the head of the prosthesis from the estimated inverse dynamics calculations and the directly measured values from the top load cell is considered model error. Force and moment estimations will be made with every combination of markers, giving a resultant error value for each combination. These error values will be analyzed to determine an "acceptable" level of error for a minimal
marker set that can be used by most motion capture laboratories. An error of less than 5% relative to the peak force and moment values will be considered acceptable.
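As a sketch of the marker-combination search just described, the Python outline below enumerates keel-marker subsets from smallest to largest and returns the first subset whose peak estimate meets the 5% criterion. This is a hypothetical illustration, not the authors' actual pipeline: `estimate_peak`, which stands in for the segment-wise inverse dynamics, and all other names are assumptions.

```python
from itertools import combinations

def relative_peak_error(peak_ref, peak_est):
    """Error of the estimated peak relative to the directly measured
    load-cell peak (the paper's 5% acceptability criterion)."""
    return abs(peak_est - peak_ref) / abs(peak_ref)

def minimal_marker_set(marker_ids, estimate_peak, peak_ref, tol=0.05):
    """Return the smallest subset of keel markers whose estimated peak
    force/moment at the prosthesis head is within `tol` of the measured
    peak. `estimate_peak(subset)` is assumed to rerun the rigid-segment
    inverse dynamics using only the markers in `subset`."""
    for size in range(2, len(marker_ids) + 1):   # a segment needs two markers
        for subset in combinations(marker_ids, size):
            if relative_peak_error(peak_ref, estimate_peak(subset)) < tol:
                return subset                    # first hit = fewest markers
    return None
```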
III. RESULTS
Complete results are not yet available as testing is still in progress. Early preliminary data suggest that fewer than eight markers will be necessary to accurately (less than 5% error) estimate force and moment transfer through running-specific prostheses using inverse dynamics equations.
IV. DISCUSSION
A majority of motion capture laboratories have a limited number of cameras and may have difficulty tracking a large number of closely spaced markers during activities such as running. Determining a minimal marker set for running-specific prostheses is important to ensure widespread use of such a model regardless of the number of cameras available to a laboratory. Moreover, fewer markers on a prosthesis make setup less tedious and save testing time. Optimal marker sets (the fewest markers yielding acceptable error) will be identified for each running-specific prosthesis design, and an attempt will be made to identify an overall optimal marker set that yields the smallest summed error across all designs. The development and validation of an accurate biomechanical model for use with running-specific prostheses will allow researchers to fully examine the kinematic and kinetic adaptations that occur during running in ILEA. Virtually no information is available in the literature to guide clinicians in aligning, prescribing, or rehabilitating ILEA who wish to run. It is currently unknown whether running with running-specific prostheses poses an increased risk of injury to the residual limb joints or the joints of the contralateral limb. ILEA are already at greater risk of degenerative joint diseases such as osteoarthritis (OA), and the larger forces generated during running could promote the development and progression of these diseases. Prior research suggests that OA may initiate in joints that experience a traumatic or chronic event (such as amputation due to injury or disease) that causes kinematic changes [4]. The rate of OA progression is currently thought to be associated with increased loads during ambulation [4, 5]. Identifying running techniques, prosthetic alignments, or new prosthetic designs that reduce peak lower extremity joint loading may reduce the risk of developing OA and slow its progression.
Additional research needs include investigating the effects of various prosthetic components in meeting different running goals and investigating variations in prosthetic alignment that minimize asymmetries and maximize energy efficiency during running.
V. CONCLUSIONS
A validated biomechanical model is necessary to aid our analysis and understanding of the effects of using running-specific prostheses. Development of this model will allow researchers to systematically analyze the kinematic and kinetic adaptations of individuals with lower extremity amputations during running. This information will lead to improved prosthetic prescription and alignment, rehabilitation techniques, and prosthetic designs that will improve performance and reduce the risks of injury and disease.
ACKNOWLEDGMENTS
This research was funded by the University of Maryland's Department of Kinesiology Graduate Research Initiative Fund.
REFERENCES
1. Buckley JG (2000) Biomechanical adaptations of transtibial amputee sprinting in athletes using dedicated prostheses. Clin Biomech 15: 352–358
2. Buckley JG (1999) Sprint kinematics of athletes with lower-limb amputations. Arch Phys Med Rehabil 80: 501–508
3. Burkett B, Smeathers J, Barker T (2003) Walking and running interlimb asymmetry for Paralympic trans-femoral amputees, a biomechanical analysis. Prosthet Orthot Int 27: 36–47
4. Andriacchi TP, Mundermann A (2006) The role of ambulatory mechanics in the initiation and progression of knee osteoarthritis. Curr Opin Rheumatol 18: 514–518
5. Andriacchi TP, Koo S, Scanlan SF (2009) Gait mechanics influence healthy cartilage morphology and osteoarthritis of the knee. J Bone Joint Surg Am 91 Suppl 1: 95–101
Corresponding author:
Author: Brian S. Baum
Institute: University of Maryland, Department of Kinesiology
City: College Park
Country: USA
Email: [email protected]
Prehension Synergy: Use of Mechanical Advantage during Multi-finger Torque Production on Mechanically Fixed- and Free-Object

Jaebum Park1,2, You-Sin Kim2, Brian S. Baum2, Yoon Hyuk Kim5, and Jae Kun Shim2,3,4,5

1 Department of Kinesiology, The Pennsylvania State University, University Park, USA
2 Department of Kinesiology, University of Maryland, College Park, USA
3 Department of Bioengineering, University of Maryland, College Park, MD 20742
4 Neuroscience and Cognitive Science (NACS) Graduate Program, University of Maryland, College Park, MD 20742
5 Department of Mechanical Engineering, Kyung Hee University, Global Campus, Korea 130-701

Abstract— The aim of this study was to test the mechanical advantage (MA) hypothesis, i.e., greater involvement of effectors with longer moment arms, in multi-finger torque production tasks in humans. Seventeen right-handed subjects held a customized rectangular handle and produced prescribed torques on the handle, and the forces from all five digits were recorded. There were eight experimental conditions: two prehension types under different sets of mechanical constraints (i.e., fixed-object prehension and free-object prehension), with two torque directions (i.e., supination and pronation) and two torque magnitudes (i.e., 0.24 and 0.48 Nm). The subjects were asked to produce prescribed torques during fixed-object prehension or to maintain a constant position of the freely held object, which required the same magnitude and direction of torque as fixed-object prehension. The index of MA was calculated for agonist and antagonist fingers, which produce torques in the same direction as and in the opposite direction to the assigned torques, respectively. Among agonist fingers, those with longer moment arms produced greater grasping forces, while among antagonist fingers, those with shorter moment arms produced greater grasping forces. These results support the MA hypothesis. The MA index was greater in the fixed-object condition as compared to the free-object condition, and greater in the pronation condition than in supination. We conclude that the central nervous system utilizes the MA of fingers during multi-finger torque production tasks.

Keywords— Prehension, mechanical advantage, torque production.
I. INTRODUCTION
When the human motor system involves redundant motor effectors for a specific motor task, the central nervous system needs to provide a solution for the motor task by determining the involvement of the multiple effectors. Specifically, when the motor task involves production of a torque using multiple effectors that are aligned in parallel and contribute to the torque [1], the central nervous system may consider the mechanical advantage (MA) of the effectors as a solution to the redundant motor system. Previous
studies showed that effectors with greater MA are associated with greater involvement in muscle activation patterns [2] and multi-digit grasping tasks [3]. The MAs of individual effectors in the system are mainly determined by their anatomical structures, such as the origins and insertions of individual muscles and the parallel arrangement of the fingers. Accordingly, using effectors with greater MA would be an effective way to perform such tasks, minimizing the total "effort" (e.g., the total force used for the task). Previous studies suggested that the central nervous system (CNS) utilizes the MA of fingers during torque production tasks [3]. According to the MA hypothesis, fingers positioned further from the axis of rotation have greater MA due to their longer moment arms. Force production by the lateral fingers (i.e., the index and little fingers) would thus be a more effective way of producing moments than force production by the central fingers, owing to the lateral fingers' longer moment arms. The selection of individual finger forces/moments is partially governed by the controller's specific principle; thus, utilizing the MA of the fingers in multi-finger torque production tasks can be a specific strategy by which the controller manages the kinetically redundant hand-finger system, and such a strategy may minimize the total finger force in torque production. However, this would only be true when the fingers act as moment agonists, i.e., when the effectors produce the moment of force in the required direction. The actions of individual fingers are not independent, because of the interdependent muscle-tendon connections of the fingers [4] and common inputs to the same finger muscles [5]. Thus, voluntary movement or force production by one finger is often accompanied by involuntary movements or forces of other fingers [6]. The CNS might therefore produce a smaller force with a finger that has a longer moment arm where fingers produce moments of force opposite to the required direction (i.e., antagonist moments). In this study we employed a free object and a mechanically fixed object in static prehension in order to investigate the effect of static constraints during static
prehension and how the CNS controls the digits' forces and moments against static constraints. The following two general hypotheses were tested in this study: 1) the MA of fingers is utilized by the CNS in both agonist and antagonist fingers; 2) mechanical constraints (fixed- vs. free-object prehension) are considered by the CNS in the utilization of the MA of fingers in both agonist and antagonist fingers.
II. METHOD
Seventeen right-handed male volunteers (age: 29 ± 3.1 years, weight: 67.1 ± 2.9 kg, height: 174.2 ± 5.3 cm, hand length: 18.7 ± 2.5 cm, and hand width: 8.7 ± 0.9 cm) were recruited for the current study. Before testing, the experimental procedures were explained to the subjects, and the subjects signed a consent form approved by the University of Maryland's Institutional Review Board (IRB). Five six-component (three force and three moment components) transducers (Nano-17s, ATI Industrial Automation, Garner, NC, USA) were attached to an aluminum handle (Fig. 1) in order to measure each digit's forces and moments. One six-component (three position and three angle components) magnetic tracking sensor (Polhemus LIBERTY, Rockwell Collins Co., Colchester, VT, USA) was mounted on top of the aluminum handle in order to provide feedback of the linear and angular positions of the handle during the free-object prehension task (Fig. 1). The thumb sensor was positioned at the midpoint between the middle and ring finger sensors in the vertical direction. In addition, a horizontal aluminum beam (32 cm in length) was attached to the bottom of the handle in order to hang a load (0.31 kg) at different positions along the beam, providing different external torques for the free-object condition. The sampling frequency was set at 50 Hz. The experiment consisted of two sessions. In the first session, the subjects performed four single-finger maximal voluntary force (MVF) production tasks (i.e., index, middle, ring, and little fingers) under the fixed-object condition. The fingers' MVFs along the Z-axis (i.e., the direction of grasping force) were measured. The subjects were instructed to keep all digits on the sensors during each task and to concentrate on maximal force production by the task finger. Each subject performed a total of 8 trials: 2 prehension types (fixed and free) × 4 fingers (index, middle, ring, and little) = 8 trials. The second session involved a series of multi-finger torque production tasks under both fixed- and free-object conditions. In this session there were eight experimental conditions: 2 prehension types × 4 torque conditions about the y-axis (supination efforts: -0.48, -0.24 Nm; pronation efforts: 0.24,
0.48 Nm). For the fixed-object condition, the handle was mechanically fixed to a vertical aluminum plate (Fig. 1b) so that it could not translate or rotate. The subjects were instructed to produce an assigned torque for 6 s while watching feedback of the produced torque on a computer screen. For the free-object prehension task, the subjects held the handle while maintaining its pre-set linear and angular position against the given external torques. The subjects were instructed to minimize angular and linear deviations of the handle from the initial position. For each condition, twenty-five consecutive trials were performed. Thus, each subject performed a total of 200 trials (2 prehension types × 4 torques × 25 trials = 200 trials) in the second session. Two-minute breaks were given at the end of each trial in order to avoid fatigue effects. The order of experimental conditions was balanced, and no subject reported fatigue.
Fig. 1 Schematic illustration of the experimental setup for (a) free-object prehension and (b) fixed-object prehension. Real-time feedback of translation along the z-axis (horizontal translation), translation along the y-axis (vertical translation), and rotation about the x-axis was provided during free-object prehension
Individual fingers were classified into moment agonists and moment antagonists with respect to direction of the moment of finger force [3]. Agonist fingers produce the moment of normal force in the required direction of torque, while antagonist fingers produce the moment of normal
IFMBE Proceedings Vol. 32
370
J. Park et al.
force in a direction opposite to the task torque. Within the moment agonists (or antagonists), fingers were further classified into two types based on the lengths of the moment arms of their grasping forces from the thumb position. The normal forces of fingers with shorter moment arms were designated F1, while those with longer moment arms were designated F2. We then calculated the ratio of F2 to F1 within each group of moment agonists and moment antagonists to quantify the index of mechanical advantage (Eqs. 1 and 2). In addition, F2 and F1 were normalized by the corresponding fingers' maximal voluntary forces (MVF) measured during the first session, and the ratio of normalized F2 to F1 was computed for both the moment agonist and antagonist groups (Eqs. 3 and 4).
MA_{ago} = F_{ago2} / F_{ago1}  (1)

MA_{ant} = F_{ant2} / F_{ant1}  (2)

MA_{ago}^{norm} = (F_{ago2} / F_{ago2}^{max}) / (F_{ago1} / F_{ago1}^{max})  (3)

MA_{ant}^{norm} = (F_{ant2} / F_{ant2}^{max}) / (F_{ant1} / F_{ant1}^{max})  (4)
where ago and ant stand for agonist and antagonist, respectively. MA = 1 indicates that the normalized F2 and F1 force magnitudes (i.e., the normalized efforts in individual finger actions by the central nervous system) are the same, suggesting no use of MA. The greater the MA value, the greater the MA utilized. Three-way repeated-measures ANOVAs were used with the following factors: TYPE (two levels of prehension type: fixed and free), MAG (two levels of torque magnitude: 0.24 and 0.48 Nm), and DIR (two levels of torque direction: pronation and supination efforts). MAago and MAant were compared with 1 by one-sample t-tests in order to test whether the MA values were significantly different from 1. All statistical analyses were performed at a significance level of α = 0.05.
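As a concrete illustration of Eqs. (1)-(4) and the one-sample t-test against 1, the short Python sketch below computes the agonist MA index from per-trial finger forces; the force values and array names are hypothetical, for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical per-trial normal forces (N) for one condition:
# F1 = agonist finger with the shorter moment arm, F2 = longer moment arm.
F_ago1 = np.array([3.1, 2.9, 3.4, 3.0, 3.2])
F_ago2 = np.array([4.8, 4.5, 5.1, 4.7, 4.9])
MVF_ago1, MVF_ago2 = 45.0, 30.0  # single-finger maxima from session 1

MA_ago = F_ago2 / F_ago1                                  # Eq. (1)
MA_ago_norm = (F_ago2 / MVF_ago2) / (F_ago1 / MVF_ago1)   # Eq. (3)

# MA > 1 implies the longer-moment-arm finger contributes more force,
# i.e., the mechanical advantage is being exploited.
t, p = stats.ttest_1samp(MA_ago, popmean=1.0)
print(f"mean MA_ago = {MA_ago.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```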
Fig. 2 The mechanical advantage index calculated for (a) torque agonist fingers (MAago) and (b) torque antagonist fingers (MAant). (c) & (d) The MA indices computed from F2 and F1 normalized by the corresponding fingers' maximal voluntary forces (MVF). Averages across all subjects are shown with standard error bars. * represents statistical significance from the one-sample t-test (p < .01). † represents statistical significance of pair-wise comparisons of MA values between fixed- and free-object conditions (p < .01)
III. RESULTS
For both fixed- and free-object conditions, MAago values were significantly greater than 1 for both supination and pronation torque tasks in agonist fingers (p < .05) (Fig. 2), which was confirmed by one-sample t-tests (p < .05 for all conditions). This result suggests that MA was utilized by agonist fingers during torque production for both pronation and supination efforts. MAago values were greater in the fixed-object condition than in the free-object condition during pronation efforts, while MAago values were not different between fixed- and free-object conditions. These results were supported by the ANOVA, with a significant main effect of DIR [F(1, 16) = 19.93, p < .0001] and significant interaction effects of TYPE × DIR [F(1, 16) = 7.740, p < .01] and DIR × MAG [F(1, 16) = 10.66, p < .01]. For supination efforts, MAant values were significantly smaller than 1 for both fixed- and free-object conditions (p < .05) (Fig. 2), while for pronation efforts, only the 0.24 Nm condition showed MAant values significantly greater than 1 (p < .05). MAant values were greater in the pronation condition than in the supination condition, which was supported by a significant main effect of DIR [F(1, 16) = 111.88, p < .0001].
IV. DISCUSSION
The results from this study showed greater grasping force production by fingers with greater MA (i.e., longer moment arms) when the fingers acted as moment agonists, for both fixed-object and free-object prehension. Thus, one can suggest that MA is considered by the central nervous system regardless of the mechanical constraints imposed by the task. The MA of fingers was utilized more in fixed-object prehension, with increased utilization of MA during
pronation efforts. This suggests that the central nervous system utilizes the MA of fingers while taking mechanical constraints into account. When fingers act as torque antagonists, greater indices of MA may not be the best strategy with regard to the "economy" of the total finger force produced for the torque task. During a supination effort, for example, a greater force magnitude produced by the little finger than by the ring finger would result in a greater magnitude of antagonistic torque, which would need to be compensated by increased torque from the agonist fingers (i.e., index and middle fingers) with greater finger force magnitudes. This would eventually increase the sum of finger force magnitudes. In this case, the thumb would also need to produce a greater grasping force during free-object prehension to satisfy static equilibrium. Eventually, the central nervous system would need to produce greater grasping forces with all fingers and the thumb. The results from our study showed that the MA index was smaller than 1 in torque antagonist fingers under most torque production conditions, except for the 0.24 Nm torque condition in free-object prehension. Considering the "economy" of total finger force production, one may suggest that this result demonstrates an "efficient" strategy by the central nervous system to reduce the total grasping force. However, for both fixed-object and free-object tasks, the central nervous system does not appear to use minimization of the total force for torque production tasks. This claim is evidenced by the force production of agonist fingers with shorter moment arms as well as the force production of antagonist fingers: if total force minimization were the sole optimization criterion, it would result in zero force production by agonist fingers with shorter moment arms and by antagonist fingers. Our study showed that fingers with greater MA were utilized more during fixed-object prehension as compared to free-object prehension, especially during pronation efforts. For the free-object condition, the sum of the individual finger grasping forces must equal the thumb grasping force as a horizontal translation constraint for static equilibrium. For the fixed-object condition, however, the selection of individual finger grasping forces was not constrained, since static equilibrium did not need to be satisfied. A recent study reported that the force production of peripheral fingers with longer moment arms (i.e., index and little fingers) was less independent under the free-object condition than under the fixed-object condition [7]. The results from the mechanically fixed- and free-object tasks support the idea that the central nervous system
utilizes different finger force sharing patterns when different external constraints (e.g., fixed- vs. free-object prehension) are imposed in motor tasks. During fixed- and free-object prehension, the central nervous system needs to consider numerous constraints that differ between these two prehension types, such as slip prevention [8] and translational/rotational equilibrium [9]. It remains to be investigated how these constraints affect the sharing pattern of individual finger forces for stable prehension during torque production tasks.
ACKNOWLEDGMENT
The project was supported in part by grants from the Maryland Industrial Partnerships Program, the Seoul Olympic Sports Promotion Foundation of the Ministry of Culture, Sports and Tourism of Korea, and the Kyung Hee University International Scholars Program.
REFERENCES
1. Shim JK, Latash ML, Zatsiorsky VM (2005) Prehension synergies in three dimensions. J Neurophysiol 93(2): 766–776
2. Prilutsky BI (2000) Coordination of two- and one-joint muscles: functional consequences and implications for motor control. Motor Control 4(1): 1–44
3. Zatsiorsky VM, Gregory RW, Latash ML (2002) Force and torque production in static multifinger prehension: biomechanics and control. I. Biomechanics. Biol Cybern 87(1): 50–57
4. Leijnse JN (1997) Measuring force transfers in the deep flexors of the musician's hand: theoretical analysis, clinical examples. J Biomech 30(9): 873–882
5. Schieber MH, Santello M (2004) Hand function: peripheral and central constraints on performance. J Appl Physiol 96(6): 2293–2300
6. Zatsiorsky VM, Li ZM, Latash ML (2000) Enslaving effects in multi-finger force production. Exp Brain Res 131(2): 187–195
7. Park J, Kim YS, Shim JK (2010) Prehension synergy: effects of static constraints on multi-finger prehension. Hum Movement Sci 29(1): 19–34
8. Johansson RS, Backlin JL, Burstedt MK (1999) Control of grasp stability during pronation and supination movements. Exp Brain Res 128(1-2): 20–30
9. Latash ML, et al. (2004) Rotational equilibrium during multi-digit pressing and prehension. Motor Control 8(4): 392–404

Corresponding author:
Author: Jaebum Park
Institute: Pennsylvania State University, Department of Kinesiology
City: University Park, PA
Country: USA
Email: [email protected]
Investigating Vortex Ring Propagation Speed Past Prosthetic Heart Valves: Implications for Assessing Valve Performance

Ann Bailey1, Michelle Beatty2, and Olga Pierrakos3

1 University of Virginia, Dept. of Mechanical Engineering, Undergraduate Student, Charlottesville, VA, USA
2 James Madison University, School of Engineering, Undergraduate Student, Harrisonburg, VA, USA
3 James Madison University, School of Engineering, Assistant Professor, Harrisonburg, VA, USA
Abstract— Every year approximately 200,000 human heart valves are replaced worldwide. Although prosthetic heart valves are widely accepted, their performance is far from ideal, especially when compared to natural, healthy heart valves. There is a need to develop more accurate methods of characterizing the performance and efficiency of prosthetic heart valves. Currently, though, heart valves are clinically evaluated based on transvalvular pressure characteristics and left ventricular ejection fraction. These assessment methods overlook the complex flow field downstream of the valves and inside the left ventricle, which, if understood, can provide valuable insight into valvular and cardiac performance. Thus, our research focuses on flow characterization past heart valves. Namely, we are interested in characterizing vortex ring formation, resulting from the roll-up of the shear layers shed past heart valves, which previous studies have shown to be important and to provide insight into left ventricle energy losses. Having designed a flow loop to enable the visualization of vortex ring formation past various mechanical and biological heart valves, we tested four prosthetic valves (three mechanical and one biological). The use of a high speed camera enabled us to capture vortex ring formation past the various prosthetic valve designs. Important flow parameters were estimated, such as the vortex ring propagation speed, which has been shown to be an important diagnostic parameter. Comparative analysis revealed important differences between biological and mechanical heart valves. The potential of this research is that it can aid researchers and practitioners in determining the effectiveness of all types of valves (natural, diseased, prosthetic, tissue engineered, percutaneous) and also serve as groundwork leading towards clinical translation.

Keywords— heart valve performance, mechanical heart valve, biological heart valve, vortex ring formation, vortex ring propagation speed.
I. INTRODUCTION
Although prosthetic heart valves have evolved to a level of universal acceptance, they have never reached a level of performance comparable to that of the natural valves of the heart. When it comes to valve performance, though, both
durability and hemodynamic performance are crucial. Herein, our interest is the latter, as we investigate vortex ring formation past prosthetic heart valves. Conventionally, in clinical settings, assessment of heart valve performance has involved transvalvular pressure gradients as well as effective and geometric orifice areas. More recent discoveries, however, have uncovered the value of quantifying vortex ring formation past heart valves to assess valve performance [1-3]. Studies have shown that vortex ring formation, caused by the roll-up of the shear layers shed past the valve leaflets, is dependent on valve design [1]. Mass entrainment of the surrounding fluid leads to energy entrapment within the gradually growing vortex ring and converts linear momentum to angular momentum. Yet, there is a limit on the size the vortex ring can attain, corresponding to the maximum amount of energy the ring can sustain [4]. Once this amount is exceeded, the vortex ring is pinched off and the surplus energy is shed into a trailing jet wake. This mechanism of energy rejection from the leading vortex ring to the trailing jet not only forms a wake where viscous dissipation takes place, but also reduces the propagation speed of the jet and thus its kinetic energy and propulsive efficiency [1]. In other words, a reduction of the time scale of vortex formation results in an increase of dissipation losses in the wake and an increase of kinetic energy losses due to the deceleration of the vortex ring propagation speed [1]. Although one parameter that has been widely used to quantify vortex ring formation is the ratio of jet length to diameter (L/D), also known as the formation number, herein we investigate vortex ring propagation speed (VRPS) as an indicator of valve performance. Moreover, mitral propagation velocity (also known as left ventricular propagation speed or vortex ring propagation speed) has been identified in clinical research settings as a very important indicator of cardiac health [5-8]. In fact, numerous clinical research studies have linked a progressive decrease of propagation velocity to various degrees of left ventricular dysfunction [5-7].
Although this research is not the first to recognize the importance of vortex ring formation and vortex ring propagation speed (VRPS) in left ventricular flows, our hope herein is to contribute to the body of literature by investigating VRPS past different types of valves (mechanical and biological) and under varying flow regimes. This was achieved by acquiring high-speed images of the flow patterns past four prosthetic heart valves, three mechanical and one biological. Typical images of vortex ring formation are shown in Figure 1. We hope that the insights gained from these in-vitro experiments can be translated to clinical settings. By characterizing the flow patterns associated with different heart valves and then analyzing their performance, we can attempt to find correlations between the different valves' performances and their characteristic flow fields. The results from this in-vitro study can be used for testing and evaluation of existing valves and can lead to improved valve designs in the future. Lastly, we hope that the results from this study can be translated to a clinical setting (using medical imaging technology to characterize vortex ring formation and vortex ring propagation speed in left ventricular flow fields), enabling one to determine how valves are functioning within the body.
Fig. 1 Typical images of vortex ring formation acquired using the high-speed camera
II. MATERIALS AND METHODS
The experimental setup used in this study is illustrated in Figure 2. This setup was designed to allow visualization of the flow patterns downstream of prosthetic heart valves. The setup consisted of a transparent polycarbonate chamber (8 x 8 x 12 inches) with silicone tubing on one side used to secure and easily interchange the various valves being tested. A variable-speed pump with a 10 liter per minute maximum flow rate allowed us to test the valves under a variety of flow conditions. A 60 mL syringe was used to inject a mixture of water, food coloring, and neutrally buoyant glass particles (approximately 20 to 30 microns in diameter) to more easily visualize the flow patterns past the prosthetic heart valves. A Phantom high speed camera (Vision Research) operating at 800 frames per second (fps) was used to capture the sequence of flow images.
Fig. 2 Simplified schematic of the experimental setup

Four different valves were investigated during this effort: three mechanical valves and one biological valve. More specifically, the three mechanical valves were the Bjork-Shiley Convexo-concave (BSCC), Lillihei-Kaster (LK), and Omni-Science (OS) tilting-disc valves. The biological valve was a St. Jude Medical Biocor Porcine (P) tri-leaflet valve. To assure repeatability, data were collected for three separate trials for each heart valve at three different flow rates of about 3.3, 5.0, and 7.0 liters per minute. This corresponded to 48 different trials, with flow rate uncertainties ranging from 2% to 6%. During these experiments, the Reynolds number varied from 1700 to 5000, which is within the physiological range for such cardiac flows. A complete cardiac cycle was not simulated in this study because our main interest was to investigate vortex ring formation at the onset of valve opening (early diastole). Given that a major objective of this study was estimating vortex ring propagation speed, we tracked the tip of the leading vortex ring in the horizontal (stream-wise) direction in order to estimate the speed at which this structure was propagating. As discussed previously, the propagation speed of the vortex ring gives insight into the energetics of the structure.
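A minimal sketch of the tip-tracking estimate of propagation speed is given below, assuming the leading vortex tip's stream-wise position has already been digitized frame by frame from the 800 fps sequence; the spatial calibration factor and the smoothing window are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def vortex_ring_propagation_speed(x_tip_px, fps=800.0, mm_per_px=0.2):
    """Estimate the stream-wise propagation speed (mm/s) of the leading
    vortex ring from its tracked tip position in consecutive frames.

    x_tip_px : 1-D array of tip positions (pixels), one entry per frame.
    """
    x_mm = np.asarray(x_tip_px, dtype=float) * mm_per_px  # spatial calibration
    dt = 1.0 / fps                                        # frame interval (s)
    speed = np.gradient(x_mm, dt)     # central finite differences, mm/s
    # Light moving-average smoothing suppresses digitization noise.
    kernel = np.ones(5) / 5.0
    return np.convolve(speed, kernel, mode="same")
```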
III. RESULTS AND DISCUSSION
In this section, we present and discuss the key findings. For each of the prosthetic heart valves tested, Figure 3 shows the vortex ring propagation speed at three distinct flow rates. These results reveal some interesting trends. Initially, we see fairly distinct patterns in the propagation speed waveforms across the four valves, suggesting that vortex ring formation and propagation differ across valve designs. When comparing across flow rates,
we also observe some interesting trends. For example, whereas the Bjork-Shiley valve shows more consistent and convergent propagation speed patterns across the three flow rates, the Lillihei-Kaster valve reveals a broader range of propagation speed waveform patterns. In contrast, the porcine valve propagation speed curves remain relatively similar in shape and seem to converge with time. In looking at Figure 3, we also observe another interesting phenomenon. When looking at the propagation speed waveforms as the flow rate changes, we observe that for the porcine valve, as the flow rate increases, the vortex ring propagation speed decreases. Such a trend is expected when considering energy conservation and knowing that with increased flow rate (and thus increased velocity and energy of the incoming jet) come more mass entrainment and a larger leading vortex ring (and thus a slower propagation speed). On the other hand, for the three mechanical heart valves, this trend was not consistently observed. In fact, the Omni-Science valve showed a decrease in propagation speed moving from a flow rate of 3.3 lpm (liters per minute) to 5.0 lpm and then an increase when moving from 5.0 to 7.0 lpm. Both the Lillihei-Kaster and Bjork-Shiley valves revealed an increase in propagation speed when moving from 3.3 to 5.0 lpm and then a decrease when moving from 5.0 to 7.0 lpm. These inconsistent trends in the case of the mechanical heart valves may be attributed to the pinch-off phenomenon described previously. Vortex ring pinch-off occurs when the leading vortex ring cannot sustain additional energy, at which point the excess energy is shed into the trailing jet, and this phenomenon affects the propagation speed. Differences in vortex ring formation and vortex ring propagation speed past mechanical vs. biological heart valves have been evidenced in previous studies [1]; among these differences, vortex ring pinch-off occurred in the case of mechanical heart valves but was non-existent in the case of a porcine valve. Although the results herein seem to suggest something similar, further investigation is needed to ensure that such trends are not mere artifacts of the data. To address these needs, our future goal is to use Stereo Particle Image Velocimetry to quantify the vortex ring formation and propagation patterns in depth.
Fig. 3 Vortex ring propagation speed for the four valves, panels (a)-(d), at three different flow rates
Fig. 4 Vortex ring propagation speed for all four heart valves tested at a flow rate of about 3.3 lpm
Further, Figure 4 shows the vortex ring propagation speed for all valves tested at a flow rate of about 3.3 lpm. What we see in this figure is that the porcine valve has the highest propagation speed compared to the three mechanical valves. This is expected, given that the porcine valve (being most similar to the natural heart valve) is known to have more efficient vortex ring formation and higher propagation speeds [1]. Based on Figure 4, the Omni-Science valve seems to perform better than the Bjork-Shiley and Lillihei-Kaster valves. Based on these findings and the distinct vortex ring propagation speed patterns evident when comparing the four heart valves tested herein, we believe these preliminary data indicate that vortex ring propagation speed might serve as an important indicator of valve performance.
Certainly, vortex ring propagation speed is a parameter that has merit for use in clinical settings. In fact, propagation speed is currently used as an indicator to assess cardiac health, specifically left ventricular dysfunction, in clinical settings; yet the parameter has not been explained in the context of, or linked to, vortex ring formation, the roll-up of shear layers shedding past the valve leaflets. While the results revealed interesting comparisons across the biological and mechanical heart valves, for future work we plan to collect and analyze more in-depth data using Stereo Particle Image Velocimetry and potentially also Doppler Ultrasound Echocardiography.

IV. CONCLUSIONS
Characterizing the flow field downstream of heart valves can lead not only to better diagnostic and performance metrics, but also to improved designs of prosthetic valves. In this study, using a high speed camera (800 fps), we captured a variety of flow regimes downstream of four unique prosthetic valve designs (three mechanical and one biological) and compared vortex ring formation and propagation speed across these designs. The findings suggest that vortex ring propagation speed might serve as a useful parameter/indicator of heart valve performance. Upon analyzing the propagation speed patterns of the vortices produced by the different valves, we recognized a noteworthy distinction between the patterns produced by the biological porcine valve and those produced by the three mechanical valves. This distinction aligns with findings from previous experimentation [1], supporting the basic usefulness of propagation speed parameters in determining valve performance.
ACKNOWLEDGMENT
We would like to acknowledge the James Madison University Research Experiences for Undergraduates (REU) program and Mr. Lawrence Scotten of ViVitro Systems, Inc. for donating the mechanical heart valves used in this study.
REFERENCES
1. Pierrakos O, Vlachos P (2006) The effect of vortex formation on left ventricular filling and mitral valve efficiency. J Biomed Eng 128: 527–539
2. Cooke J, Hertzberg J, Shandas R (2004) Characterizing vortex ring behavior during ventricular filling with Doppler echocardiography: an in vitro study. Ann Biomed Eng 2: 245–256
3. Kheradvar A, Milano M, Gharib M (2007) Correlation between vortex ring formation and annulus dynamics during ventricular rapid filling. J Am Soc Artificial Internal Organs 53: 8–16
4. Green SI (1995) Fluid Vortices. Kluwer Academic Publishers, Netherlands
5. Meller J, et al. (2000) Ratio of left ventricular peak E-wave velocity to flow propagation velocity assessed by color M-mode Doppler echocardiography in first myocardial infarction. J Am Coll Cardiol 35: 363–370
6. Schwammenthal E, et al. (2004) Association of left ventricular filling parameters assessed by pulsed wave Doppler and color M-mode Doppler echocardiography with left ventricular pathology, pulmonary congestion, and left ventricular end-diastolic pressure. Am J Cardiol 94: 488–491
7. Djaiani G, et al. (2002) Mitral flow propagation velocity identifies patients with abnormal diastolic function during coronary artery bypass graft surgery. Cardiovascular Anesthesia 95: 524–530
8. Ogawa T, et al. (2005) What parameters affect left ventricular diastolic flow propagation velocity? In vitro studies using color M-mode Doppler echocardiography. Cardiovascular Ultrasound 3: 24

Corresponding author:
Author: Dr. Olga Pierrakos
Institute: James Madison University
Street: MSC 4113 HHS Building, Room 3227
City: Harrisonburg, VA
Country: United States
Email: [email protected]
Transient Heat Transfer in a Dental Prosthesis Implanted in Mandibular Bone

M.N. Ashtiani and R. Imani

Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran 15914
Abstract— Heat transfer in a dental implant during consumption of hot drinks or foods may adversely affect the surrounding biological tissues, especially the mandibular bone. The severity can increase critically at the bone-implant interface. The goal of this investigation is to numerically determine the temperature penetration pattern through a dental implant. To this end, a three-dimensional model of a commercial implant together with a portion of the mandible was employed. To reach realistic results, the analysis was performed with a dynamic incremental solver using the Finite Element Method. Loading was applied as convection on the outer surface of the crown of the implant, representing consumption of a hot liquid. The temperature of the other parts of the model was held constant at 37 ˚C. The results coincide reasonably with experimental data: the maximum temperature occurs at the bone-implant interface and exceeds 55 ˚C, a meaningful time delay after the thermal loading of the crown. This peak temperature can be hazardous to bone cells, particularly considering repetitive thermal loading during a day.
Keywords— Dental prosthesis, heat transfer, finite element method.

I. INTRODUCTION
Thermal stimuli caused by consumption of hot liquids and foods may adversely affect the tissues surrounding a dental prosthesis. Unlike natural teeth, which are formed of heat transfer-resistant layers of dentin, enamel, periodontal ligament, etc., the highly conductive material of a totally replaced dental implant can produce a markedly different thermal response. The threaded root of the implant is generally made of stainless steel or titanium and is screwed into the jaw bone. Besides being in the vicinity of the metal root, bone cells have special mechano-physiological functions that make them more significant than the other oral tissues, the most important being bone remodeling. Eriksson and Albrektsson (1983) stated that the threshold temperature for bone cell necrosis is 47 ˚C [1]. In addition, Li et al. (1999) proposed that even a thermal impulse of 42 ˚C may transiently impair osteoblast function and vitality [2]. The majority of the published work on natural teeth (such as [3]) focuses on tooth layers and their junctions, and often no data are reported for bone temperature. With respect to dental prostheses and without considering bone, Wong et al. (2001) developed a model of heat conduction during a five-second period in the screw of a dental implant using an analytical approach [4]. Furthermore, Nissan et al. (2006) performed an in vitro study to measure the heat generated at the implant-bone interface by the exothermic setting reaction of two impression plasters [5]. An ex vivo experimental investigation by Feuerstein et al. (2007) established the maximum temperature produced in the root of a commercial dental prosthesis while hot substances were consumed intraorally [6]. The aim of this investigation is to numerically analyze heat transfer in a dental prosthesis screwed into the mandible. The results are reported as temperature values in different regions of the mandibular bone against time and position.

II. MATERIALS AND METHODS

A. Model Geometry and Materials
To gain realistic results, a three-dimensional model of a commercial dental prosthesis, including root, crown, and the surrounding portion of the mandible, was developed using CAD software. Fig. 1 shows the three parts of the model separately and in assembled view. The material properties used are listed in Table 1; all materials are assumed homogeneous, isotropic, linear, and independent of temperature.

Table 1 Applied material properties

Material      K (W·m⁻¹·˚C⁻¹)   Cp (J·kg⁻¹·˚C⁻¹)
Bone [7]      0.586            1350
Root [8, 9]   4.18             520
Crown [10]    1.5              1070

The numbers in square brackets are references.
In this simulation, the gingival tissues situated on top of the mandible were disregarded.
Fig. 1 Meshed geometry of the (a) crown, (b) root, and (c) whole model. Note that the separate parts are not drawn to scale
B. Governing Equations and Boundary Conditions

Since the main goal of this study is to investigate conductive heat transfer through diverse materials, the main governing equation is the conservation of energy in terms of temperature variations, namely the heat diffusion equation:

\frac{\partial}{\partial x}\left(k\frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\left(k\frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\left(k\frac{\partial T}{\partial z}\right) + \dot{q} = \rho C_p \frac{\partial T}{\partial t}  (1)

where x, y, and z are the Cartesian coordinate components, and k, ρ, and Cp are the thermal properties, i.e., the coefficient of thermal conductivity, density, and specific heat, respectively. T stands for temperature as a function of both time and space, i.e., T = T(x, y, z, t). The term q̇ is the heat produced or absorbed; it is eliminated in this study due to the lack of internal heat generation or dissipation. The temperature of the two parts of the prosthesis and of the mandibular bone is initially held constant at 37 ˚C, and the difference between this value and the applied 67 ˚C temperature of the hot liquid [11] drives the convection, according to the time function plotted in Fig. 2. The heat transfer coefficient between the hot liquid and the crown is taken as 5000 W·m⁻²·˚C⁻¹ [12].
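For intuition about the transient behavior, the sketch below solves a one-dimensional analogue of Eq. (1) by explicit finite differences, with a convective boundary at the crown surface and the far end held at body temperature; the domain, discretization, and lumped properties are illustrative assumptions and do not reproduce the paper's three-dimensional FEM.

```python
import numpy as np

# Illustrative 1D rod: crown surface (x = 0, convective) to deep bone (x = L).
L, n = 0.02, 41                    # 20 mm domain, 41 nodes (assumed)
dx = L / (n - 1)
k, rho, cp = 1.5, 2000.0, 1070.0   # lumped properties (assumed)
h, T_liquid, T_body = 5000.0, 67.0, 37.0
alpha = k / (rho * cp)             # thermal diffusivity
dt = 0.4 * dx**2 / alpha           # stable explicit time step

T = np.full(n, T_body)
for _ in range(int(10.0 / dt)):    # 10 s of loading, as in Fig. 2
    Tn = T.copy()
    # Interior nodes: explicit update of dT/dt = alpha * d2T/dx2.
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    # Convective surface balance: h*(T_liquid - T0) = k*(T0 - T1)/dx.
    Bi = h * dx / k
    Tn[0] = (Tn[1] + Bi * T_liquid) / (1.0 + Bi)
    Tn[-1] = T_body                # deep end held at body temperature
    T = Tn

print(f"surface: {T[0]:.1f} C, 1 mm deep: {T[2]:.1f} C")
```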
Fig. 2 Normalized convective thermal loading during 10 seconds
C. Numerical Solution
The analysis was performed using the finite element method in a commercial numerical solver. The whole meshed model incorporates 28014 nodes and 139483 elements, of which 1508 are 2D shell convection elements and the other 137975 are 3D conduction elements. The solution is transient, the integration method is backward Euler integration, and the iteration scheme is based on the modified Newton method. To verify mesh independence, the solution process was repeated for different element sizes, and the results were confirmed to be mesh-independent.
III. RESULTS

A. Temperature Penetration

In order to reach meaningful results, three points on the bone side of the mandible-root interface were marked, and in addition a path line from the superficial region downward to the deeper areas was defined, as illustrated in Fig. 3.

Fig. 3 Three marked points and defined path in the anterior-posterior section of the mandible

Since the solution is transient, changes in temperature are first reported against time. Fig. 4 plots these variations for points B1, B2 and B3.
Plotting a diagram of temperature changes along the defined path line can confer a conceptual intuition about the penetration of the thermal energy initially exerted on the outer surface of the crown, as shown in Fig. 5.
Fig. 5 Temperature changes along the path Furthermore, contours of the temperature in anterior – posterior section in three times of the analysis have been illustrated in Fig. 6.
Fig. 6 Temperature contours at 0.2 s, 1.0 s and 1.8 s in the defined section of the mandible

B. Validation

The data from this analysis can be validated against the experiment by Feuerstein et al. [6], in which three thermocouples were attached on the root side at locations corresponding to points B1, B2 and B3. The maximum temperature values at these points were evaluated relative to the temperature of the upper projection of the abutment of the root. Table 2 compares the data of the present analysis with those published in the aforementioned experiment.

Table 2 Comparison between the finite element analysis data and the experimental ones

Indices    FEM     Exp.    Rel. Error (%)
R1 / R0    0.899   0.936   3.95
R2 / R0    0.705   0.656   7.47
R3 / R0    0.653   0.606   7.75

R0 = temperature of abutment as the reference.
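The relative errors in Table 2 follow directly from the tabulated ratios; the short check below (a hypothetical Python snippet) reproduces them to within rounding.

```python
fem = [0.899, 0.705, 0.653]
exp = [0.936, 0.656, 0.606]
# Relative error (%) = |FEM - Exp| / Exp * 100
print([round(abs(f - e) / e * 100, 2) for f, e in zip(fem, exp)])
# -> [3.95, 7.47, 7.76]  (7.76 vs. 7.75 in Table 2 is a rounding difference)
```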
IV. DISCUSSION
According to Fig. 4, the higher temperatures in the mandible belong to the superior region, which is in proximity to the thermal source. The maximum temperature in this region is 48.5 ˚C, slightly above the bone cell necrosis threshold of 47 ˚C. The middle region of the root-bone interface receives less thermal energy, and the maximum temperature there reaches 45.7 ˚C. The maximum temperature in the lower region of the interface decreases further, to 42.6 ˚C. It should be noted that all the peak temperatures exceed 42 ˚C, the threshold for intervention in bone remodeling. On the other hand, all regions tend to relax thermally after the end of loading on top of the crown. From Fig. 5 it is deduced that the main decrease in temperature occurs within the first 1 mm of the mandibular side of the interface. Moreover, the anomalous drop in temperature between 5 and 6 mm along the path line may be due to the geometry of the root, which acts as the local thermal source. In contrast to the superior region, which reaches higher temperatures, and the inferior region, which is left in a dead-end situation, the middle region merely acts as an energy corridor, and no possibility remains for transfer of heat from the middle of the root to the opposite side of the interface in the mandible. Furthermore, Fig. 6 shows that the temperature penetrates through the section of the mandible, but the area of higher temperature does not enlarge far from the interface. Although at values below the critical thresholds, the whole modeled portion of the mandible is affected by the thermal stimuli.
V. CONCLUSION
A three-dimensional model of a dental prosthesis implanted in the mandible was analyzed by the finite element method to determine the temperature penetration through the jaw bone. The results indicate that the regions near the bone-implant interface are at risk of osteoporosis, because the temperatures exceed the bone remodeling and cellular necrosis thresholds.
ACKNOWLEDGEMENT
The authors would like to thank Mrs. Malikeh Nabaee for her effective guidance.
REFERENCES
1. Eriksson AR, Albrektsson T (1983) Temperature threshold levels for heat-induced bone tissue injury: a vital-microscopic study in the rabbit. J Prosthet Dent 50: 101–107
2. Li S, Chien S, Branemark PI (1999) Heat shock-induced necrosis and apoptosis in osteoblasts. J Orthopaed Res 17: 891–899
3. Linsuwanont P, Versluis A, Palamara JE et al. (2008) Thermal stimulation causes tooth deformation: a possible alternative to the hydrodynamic theory? Arch Oral Biol 53: 261–272
4. Wong K, Boyde A, Howell PGT (2001) A model of temperature transients in dental implants. Biomaterials 22: 2795–2797
5. Nissan J, Gross M, Ormianer Z et al. (2006) Heat transfer of impression plasters to an implant-bone interface. Implant Dent 16: 83–88
6. Feuerstein O, Zeichner K, Imbari C (2007) Temperature changes in dental implants following exposure to hot substances in an ex vivo model. Clin Oral Implan Res 19(6): 629–633
7. Davidson SRH, James DF (2000) Measurement of thermal conductivity of bovine cortical bone. Med Eng Phys 22: 741–747
8. Boyer THE, Gall TL (1985) Metals Handbook, Desk Edition. American Society for Metals, Ohio
9. Moroi HH, Okimoto K, Moroi R et al. (1993) Numerical approach to the biomechanical analysis of thermal effects in coated implants. Int J Prosthodont 6: 564–572
10. De Podesta M (2002) Understanding the Properties of Matter, 2nd edition. CRC Press, Boca Raton
11. Palmer DS, Barco MT, Billy EJ (1992) Temperature extremes produced orally by hot and cold liquids. J Prosthet Dent 67: 325–327
12. Spierings D, Bosman F, Peters T et al. (1987) Determination of the convective heat transfer coefficient. Dent Mater 3: 161–164
Author: Mohammed Najafi Ashtiani
Institute: Faculty of Biomedical Engineering, Amirkabir University of Technology
Street: 424 Hafez Ave.
City: Tehran
Country: Iran, Islamic Republic of
Email: [email protected]
Characterization of Material Properties of Aorta from Oscillatory Pressure Tests

V.V. Romanov, K. Darvish, and S. Assari

Biomechanics Laboratory, Temple University, Philadelphia, USA

Abstract— Traumatic Aorta Rupture (TAR), also known as Traumatic Rupture of the Thoracic Aorta (TRA), is one of the major causes of fatalities in motor vehicle accidents. This issue has been studied in the past; however, the results of the suggested mechanisms are speculative and inconclusive. One cause of this speculation is an incomplete understanding of the material properties of the aorta. Therefore, the goal of this experiment was to characterize the dynamic structural response of the aorta to biaxial loading. Seven samples were subjected to a pressure oscillation input ranging from 7 kPa to 76 kPa at frequencies ranging from 0.5 Hz to 5 Hz. The results were presented in the form of pressure versus volumetric strain, which showed an increase in both the phase and magnitude of the linear modulus. Initial modeling was completed utilizing hyperelastic constitutive equations. The results of this study will be used to determine a viscoelastic constitutive model and a finite element model for the aorta material.
Keywords— traumatic aorta rupture, dynamic aorta test, material properties of aorta, viscoelasticity, hyperelasticity.
I. INTRODUCTION
Traumatic Aorta Rupture (TAR), also known as Traumatic Rupture of the Thoracic Aorta (TRA), is a major cause of death in motor vehicle collisions. According to recent studies, TAR was diagnosed in 12% to 29% of autopsied fatally injured occupants [1]. Furthermore, when people experience such trauma, only 9% (7500-8000 victims in the US and Canada) survive beyond the scene of the accident, and the overall mortality rate is 98% [2]. In 94% of cases, the shearing forces of high speed impacts have been associated with transverse tears at the peri-isthmic region, which is subjected to the greatest strain [3]. The thoracic aorta consists of three major segments: the ascending aorta, the aortic arch, and the descending aorta. The ascending aorta originates from the heart at the aortic valve. It then becomes the aortic arch, which is suspended by the brachiocephalic, left common carotid, and left subclavian arteries, which supply blood to the upper extremities. The last section, the descending aorta, supplies blood to the lower limbs and is fixed to the spine by the intercostal arteries. The peri-isthmic region is located between the aortic arch and the descending aorta, at the point where the vessel becomes unattached from the spine. It should be noted that even though most ruptures occur at the peri-isthmic region, the ascending and descending aortas, as well as the aortic arch, see injury as well. The general aortic anatomy is diagrammed in Figure 1.
Fig. 1 Anatomy of the aorta [4], showing the ascending aorta, aortic arch, peri-isthmus, and descending aorta

Several rupture mechanisms have been proposed in the literature. Presumably, the earliest proposal came from Rindfleisch, as cited by Richens, who suggested that the injury was caused by sudden stretching of the aorta [5, 2]. As the body experiences a rapid deceleration, the heart moves forward, creating stress between the fixed descending aorta and the aortic arch at the peri-isthmus. The second mechanism is rupture due to a pressure increase in the vessel when the thorax and the abdomen are compressed. This theory, however, was dismissed by a number of investigators, who argued that if the aorta were an isotropic axial cylindrical vessel under pressure, it would rupture axially rather than transversely. Another proposed mechanism is the osseous pinch, which suggests that rupture of the aorta is caused by the high local stress created by pinching between the highly compressed thorax and the spine [2]. However, though these mechanisms provide plausible explanations of rupture, many of the studies conducted are conflicting and inconclusive.
This study is one of a series with a common final goal: to determine the most feasible cause of aortic rupture. The specific aim of this research is to understand the dynamic material properties of the aorta through pressure oscillation tests. A noncontact experiment was developed to better understand the structural properties of the aorta while keeping it intact.
II. MATERIALS AND METHODS

A. Experimental Procedure

For the experiments, seven samples from 6 month-old pigs were acquired from the local slaughterhouse. Porcine aorta was chosen because it has been widely used in the past in cardiovascular research as a substitute for human aorta due to their similarities. [6] Furthermore, human samples would have been difficult to obtain and it would have been highly unlikely that the samples would have been healthy. To have all of the samples tested under the same conditions, aortas were ordered attached to the heart. Once received from the slaughterhouse, the samples were kept in saline solution at 5 °C until the beginning of the experiment. Throughout the test, the aorta was kept hydrated in the same solution but at room temperature (25 °C).

Preparation work for each sample went through the same procedure. The descending aorta was cut at 152 mm from the aortic valve. The excessive fatty tissue was removed from the aorta. Careful attention had to be paid during this procedure in order to avoid cutting off the intercostal arteries at the base. Once the tissue was removed, the intercostal arteries were tied off at the base to prevent pressure loss.

To keep track of the geometrical changes, ultrasonic measurement sensors (Sonometrics Corp., Canada, 2R-34C40-NB) were utilized. Each of the sensors sends and receives a pulse every 31 ns to 2 ms. Based on the speed of sound in the medium and the time of travel, the Sonometrics software determined the distance between two sensors with an accuracy of 30 µm. Six sensors were sutured to the adventitia, the outside layer of the aorta. The sensors were attached in pairs, on either side of the aorta. The first pair of sensors was placed at 31 mm from the cut edge. The second and third pairs of sensors were then sutured at 22 mm and 30 mm, respectively, from the preceding pair. This position of the sensors allowed tracking the axial and transverse changes of the vessel and allowed calculating volumetric strain, one of the parameters used in the study. It is defined as the ratio of the change of volume to the initial volume of the aorta. It should be noted that volumetric strain is a structural property rather than a material property of the aorta but is still suitable for the purpose of this study.
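As a concrete illustration of the volumetric strain definition above, the short Python sketch below approximates the instrumented segment as a circular cylinder, so that one transverse (diameter) and one axial sensor distance determine the volume. This is a hedged sketch only: the cylindrical approximation and the numbers are illustrative assumptions, not the authors' actual data processing.

import numpy as np

def volumetric_strain(diameter, length, diameter_0, length_0):
    # Volumetric strain of a segment approximated as a circular cylinder:
    # (V - V0) / V0 with V = pi * (d/2)^2 * L; distances in mm.
    v = np.pi * (diameter / 2.0) ** 2 * length
    v0 = np.pi * (diameter_0 / 2.0) ** 2 * length_0
    return (v - v0) / v0

# Illustrative sensor distances only (not measured values from this study):
# a nominally 20 mm diameter, 22 mm long segment dilating under pressure.
print(volumetric_strain(diameter=23.0, length=22.6,
                        diameter_0=20.0, length_0=22.0))  # ~0.36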
Fig. 2 Sensor Position Diagram

To determine the pressure inside the vessel, fiber optic pressure sensors (FISO Technologies Inc., Canada, FOP-MIV) were used. With a pressure accuracy of 0.1 kPa, these sensors are designed around a Fabry-Perot interferometer and use fiber optic cables to send the signal. One sensor was placed at each end of the vessel. Figure 3 below is a photograph of the sensor positions.
[Fig. 3 labels: Displacement Sensors, Descending Aorta, Pressure Sensors]
Fig. 3 Sensor Position

The aorta was then attached to two tubes, one of which was sealed, while the other had a connection available for the supply tank. Next, the aorta was placed inside the display case
and coupled to the supply tank. Saline was purged through the system, and then poured into the display case, submerging the aorta. Figure 4 gives an example of the experimental setup. The pressure was supplied through an air regulator (Control Air Inc., USA, 550X). The regulator had a range of 0 psig to 30 psig (0 kPa to 206.8 kPa), which linearly corresponded to a current input of 0 mA to 20 mA. The repeatability accuracy of the regulator was 2.07 kPa. The pressure was applied in the form of a sinusoidal wave with a range of 7 kPa to 76 kPa and a frequency range of 0.5 Hz to 5 Hz. This specific pressure range was chosen to find the material properties that account for the maximum systolic and the failure pressures. A LabView program was used to control the current input and to record the pressure output from the pressure sensors. The Sonometrics software was used to record the deformation change of the vessel. To ensure that the two programs recorded at the same time, a trigger was used to activate both of the programs. The entire experiment was also recorded using a high speed camera capable of recording at 2200 fps (Vision Research, USA). The movie clips were used to keep track of the experiment.
Fig. 4 Experimental Setup

B. Hyperelasticity

Biological tissues have complex structures and are capable of undergoing large deformations. Furthermore, the stress-strain relationship of the tissue is in general nonlinear. If the slope, or Young's modulus, of a stress-strain curve is calculated, one would see that with low loading the modulus is lower than at high loading. [7]

Another characteristic of biological tissue is the hysteresis that occurs between the loading and unloading paths. This hysteresis represents the energy loss per cycle, and it is almost independent of the strain rate within several decades of strain rate variation. [7]

This independence of the tissue from the strain rate can serve as the basis for the simplification of the material behavior for the initial modeling process. That is, if the stress-strain relationship is truly independent of the strain rate, then the material can be treated as nonlinear elastic, also known as hyperelastic or pseudoelastic. [7]

One of the more accepted ways to describe the stress-strain relationship of a hyperelastic material is to utilize the exponential form of the strain energy function as follows. [8]

ρ0W = (C/2) exp[a1(Eθθ² − E*θθ²) + a2(Ezz² − E*zz²) + 2a4(EθθEzz − E*θθE*zz)]   (1)

The constant C (with units of stress) and a1, a2, a4 (dimensionless) are the material constants. E*θθ, E*zz are the strains corresponding to a pair of stresses S*θθ, S*zz chosen to be the offset of the dynamic oscillations.

The strains are calculated as defined by Green:

Eθθ = (λθ² − 1)/2 ;  Ezz = (λz² − 1)/2   (2)

where λθ, λz are the stretch ratios of the blood vessel in the circumferential and axial directions.

For this research, the radial stress can be considered negligible compared to the stresses in the axial and circumferential directions. [8, 9] Therefore, the arterial wall was considered as a two-dimensional body subjected only to Sθθ, Szz, which are defined as follows:

Sθθ = ∂(ρ0W)/∂Eθθ   (3)

Szz = ∂(ρ0W)/∂Ezz   (4)

The stresses Sθθ, Szz can be found using the deformation data from the displacement sensors and utilizing Lame's theory. [10]
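To make Eqs. (1)-(4) concrete, the sketch below evaluates the Green strains and the stresses obtained by differentiating the exponential strain energy with respect to each strain component; the closed-form derivatives follow directly from Eq. (1). The constants are the 0.5 Hz sample values from Table 1 in the results section, and the stretch ratios are illustrative assumptions, so this is a sketch of the constitutive relations rather than the authors' fitting code.

import numpy as np

# Material constants from Table 1 (0.5 Hz sample fit); a1, a2, a4 are
# dimensionless and C carries units of stress.
C, a1, a2, a4 = 1.87e1, 5.7284, 2.14e-5, 4.8635

def green_strain(stretch):
    # Green strain E = (lambda^2 - 1)/2 for a stretch ratio lambda, Eq. (2).
    return 0.5 * (stretch ** 2 - 1.0)

def fung_stresses(lam_theta, lam_z, lam_theta_ref, lam_z_ref):
    # Stresses from the exponential strain energy of Eq. (1); the reference
    # stretches define the strain pair (E*theta, E*z) at the offset of the
    # dynamic oscillations.
    E_t, E_z = green_strain(lam_theta), green_strain(lam_z)
    E_ts, E_zs = green_strain(lam_theta_ref), green_strain(lam_z_ref)
    Q = (a1 * (E_t ** 2 - E_ts ** 2) + a2 * (E_z ** 2 - E_zs ** 2)
         + 2.0 * a4 * (E_t * E_z - E_ts * E_zs))
    common = C * np.exp(Q)
    S_theta = common * (a1 * E_t + a4 * E_z)  # circumferential, Eq. (3)
    S_z = common * (a2 * E_z + a4 * E_t)      # axial, Eq. (4)
    return S_theta, S_z

# Illustrative stretches only: 30% circumferential and 10% axial stretch
# about a reference state of 20% and 5%.
print(fung_stresses(1.3, 1.1, 1.2, 1.05))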
III. RESULTS AND DISCUSSION

A. Initial Result Analysis

As shown in Figure 5, where the results for two sample frequencies are shown, a hysteresis behavior was seen in the pressure-volumetric strain graphs. The change of area inside the graph is proportional to the energy loss and is representative of the internal friction (damping effect) of the material.
The material also exhibits frequency dependency, as can be seen by the change of slope in the graph from low to high frequency. These are the characteristics of a viscoelastic material. [11]

[Fig. 5 plot: pressure (kPa) versus volumetric strain for the 0.5 Hz and 4 Hz cycles]
Fig. 5 Volumetric Strain versus Pressure

The change in average strain that can be seen from the 0.5 Hz frequency to the 5 Hz can be attributed to the relaxation of the material. [11] The preconditioning procedure for the test only included preconditioning the material once before the entire experiment. However, it is possible that the time spent between changing frequencies was sufficient for the material to re-set back to its original state and required preconditioning once again. For future experiments, the preconditioning protocol will be changed.

To study the frequency dependency of the material properties, the Fast Fourier Transform (FFT) was used in LabView to extract the fundamental amplitude and phase for both pressure and volumetric strain. A linear modulus has been defined as the ratio of the pressure amplitude to the amplitude of volumetric strain at each frequency. The results showed an increase in linear modulus and phase shift as the frequency increased (Figures 6 and 7). This stiffening behavior confirmed the viscoelastic assumption. The results of this study will be used to optimize and validate a finite element model that is being developed in parallel to this research.

[Fig. 6 plot: linear modulus versus frequency (Hz)]

Fig. 6 Linear Modulus versus Frequency
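The FFT step can be illustrated in a few lines (here in Python rather than LabView): take the spectrum of each signal, read off the amplitude and phase at the driving frequency, then form the linear modulus as the amplitude ratio and the phase shift as the phase difference. The synthetic signals below are assumed stand-ins, not recorded data.

import numpy as np

fs, f0, duration = 1000.0, 2.0, 5.0   # sampling rate (Hz), drive (Hz), length (s)
t = np.arange(0, duration, 1.0 / fs)

# Synthetic stand-ins for the recordings: strain lags pressure by 20 degrees.
pressure = 41.5 + 34.5 * np.sin(2 * np.pi * f0 * t)               # kPa
strain = 2.1 + 0.4 * np.sin(2 * np.pi * f0 * t - np.deg2rad(20))

def fundamental(x, fs, f0):
    # Amplitude and phase of the component of x at frequency f0 via the FFT.
    X = np.fft.rfft(x - np.mean(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))  # bin closest to the drive frequency
    return 2.0 * np.abs(X[k]) / len(x), np.angle(X[k])

p_amp, p_ph = fundamental(pressure, fs, f0)
s_amp, s_ph = fundamental(strain, fs, f0)
print("linear modulus (kPa):", p_amp / s_amp)          # ~86 kPa
print("phase shift (deg):", np.rad2deg(p_ph - s_ph))   # ~20 deg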
[Fig. 7 plot: phase shift (deg) versus frequency (Hz)]
Fig. 7 Phase Shift versus Frequency

B. Hyperelastic Modeling

The hyperelastic constitutive model was successful in modeling stress in both the axial and circumferential directions at low frequencies, where the phase shifts between the stress and the strain were small (Figures 8 and 9). The constants used for the sample hyperelastic model shown below are listed in Table 1.

Table 1 Constants of sample hyperelastic model

Frequency (Hz)   C          a1       a2         a4
0.5              1.87E+01   5.7284   2.14E-05   4.8635
As mentioned above, however, at higher strain rates the phase shift increases, a condition that could not be predicted with the hyperelastic model alone. For future work, a quasilinear viscoelastic constitutive model will be utilized to model the aorta, to account for the phase shift in addition to the nonlinear elastic response.
[Fig. 8 plot: circumferential stress Sth 1 (kPa) and model fit versus time (sec), R² = 0.910619]

Fig. 8 Circumferential Stress

[Fig. 9 plot: axial stress Sz 1 (kPa) and model fit versus time (sec), R² = 0.915655]

Fig. 9 Axial Stress
ACKNOWLEDGEMENT

The support for this study was provided by the NHLBI under Grant Number K25HL08651201A2 and by the Temple University College of Engineering.

REFERENCES

1. Bertrand S, Cuny S, Petit P, Trosseille X, Page Y, Guillemot H, Drazetic P (2008) Traumatic Rupture of Thoracic Aorta in Real-World Motor Vehicle Crashes. Traffic Injury, Vol. 9, pp. 153-161.
2. Richens D, Field M, Neale M, Oakley N (2002) The Mechanism of Injury in Blunt Traumatic Rupture of the Aorta. European Journal of Cardio-Thoracic Surgery, pp. 288-293.
3. Katyal D, McLellan B, Brenneman F, Boulanger BR, Sharkey P, Waddell J (1997) Lateral Impact Motor Vehicle Collision: Significant Cause of Blunt Traumatic Rupture of the Thoracic Aorta. The Journal of Trauma: Injury, Infection, and Critical Care, Vol. 42, pp. 769-772.
4. "The Descending Aorta", Yahoo! Education, Yahoo! Inc, n.d., Web, 23 April 2010.
5. Rindfleisch E (1893) Zur Entstehung und Heilung des Aneurysma Dissecans Aortae. Virchows Arch Pathol Anat, Vol. 131, pp. 374-378.
6. Crick S, Sheppard M, Ho S, Genstein L, Anderson R (1998) Anatomy of the Pig Heart: Comparison with Normal Human Cardiac Structure. Journal of Anatomy, Vol. 193, pp. 105-119.
7. Fung YC (1975) On Mathematical Models of Stress-Strain Relationships for Living Soft Tissues. Riga: Plenum Publishing Corporation, UDC 611.08:539.4.
8. Fung YC, Fronek K, Patitucci P (1979) Pseudoelasticity of Arteries and the Choice of its Mathematical Expression. American Journal of Physiology, Vol. 237.
9. Bass CR, Darvish K, Bush B, Crandall JR, Srinivasan SC, Tribble C, Fiser S, Tourret L, Evans JC, Patrie J, Wang C (2001) Material Properties for Modeling Traumatic Aortic Rupture. Stapp Car Crash Journal, Vol. 45.
10. Singh DK (2008) Strength of Materials. Boca Raton: CRC Press, ISBN-10: 1-42006-916-0.
11. Fung YC (1993) Biomechanics: Mechanical Properties of Living Tissue. New York: Springer-Verlag, ISBN 0-387-97947-6.
Author: Kurosh Darvish
Institute: Temple University
Street: 1947 North 12th Street
City: Philadelphia
Country: USA
Email: [email protected]
Quasi-static Analysis of Electric Field Distributions by Disc Electrodes in a Rabbit Eye Model
S. Minnikanti1, E. Cohen2, and N. Peixoto1
1 Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
2 Division of Physics, Food and Drug Administration, Silver Spring, MD, USA
Abstract— We developed a compartmentalized finite element model (FEM) of the electric fields generated in the rabbit retina due to a biphasic stimulus pulse. The model accounts for the different resistivities and capacitances of the retina, pigment epithelium (PE), and sclera. Axisymmetric 2-D FEMs were created for monopolar stimulation electrodes using COMSOL. Electrodes of 250 μm diameter with 10 μm thick insulation were placed at three different locations near the retina: the inner limiting membrane (epiretinal), the subretinal space (PE/retina) (subretinal), and the choroid layer behind the PE/retina (suprachoroidal). A broad return electrode was located at the back of the eye (sclera). The relative dielectric constants of each eyewall layer, with linearly varying resistivity for the retina layers, were incorporated into the model. Biphasic 1 mA/cm2 current pulses with pulse widths of 0.5 ms (0.5 μC/cm2), 1 ms (1 μC/cm2), or 5 ms (5 μC/cm2) were passed through the tip of the electrode for stimulation. We found that these waveforms, which match waveforms commonly used to activate the retina in retinal implants, show a transient-sustained electric field profile due to charging of the high capacitance and resistivity of the PE. The PE develops high electric fields in all three electrode models. Wider pulses induce greater electric fields in the PE than shorter pulses. This needs to be accounted for when determining safe levels of stimulation. Simulation models that assume constant resistivity (4 kΩ-cm) for the retina calculate larger electric fields across the retina than Gaussian resistivity models (3-7 kΩ-cm). Electric field strength is known to be greatly enhanced at the electrode edges. We found that the electric fields at the electrode edge can cause significant damage to the retina even when the nominal current density is below the damage threshold.

Keywords— Retina, Implants, Stimulation, Damage, Resistivity.
I. INTRODUCTION

Retinal implants aim to provide useful vision for patients suffering from retinal degenerative diseases [1]. These implants use electrical stimulus pulses to activate the remaining retinal circuitry. The region activated depends on the location of the stimulus electrodes in the retina, which could be epiretinal, subretinal, or suprachoroidal [1]. Current stimulation models of retinal activation consider static field distributions with the assumption that the sensory retina has
constant resistivity. However, the eye wall consists of different layers, each defined by its resistivity (ρ) and its capacitive properties (relative dielectric constant, εr). The sensory retina sits over a thin, high-resistance and high-capacitance layer, termed the pigment epithelium (PE), which has a fenestrated structure of high surface area and tight junctions [2]. To account for the inhomogeneous retinal resistivity and the distribution of capacitance, a transient FEM analysis is needed. This paper examines the electric field distributions produced by transient stimulation pulses commonly used to activate the retina.
II. METHODS

2D axisymmetric finite element models were created for monopolar stimulation using the AC/DC quasi-static transient analysis module in COMSOL Multiphysics 3.3. In these models, a 250 μm diameter by 50 μm thick electrode with 10 μm thick insulation was inserted near the inner retinal surface (epiretinal) near the ganglion cell layer (GCL), on top of the PE (subretinal), or behind the PE (suprachoroidal). The eyewall was divided into three layers: sensory retina, PE, and a single layer constituting the sclera and choroid together (Table 1). For electrode grounds, the epiretinal models used the back of the sclera, and the subretinal and suprachoroidal models used the top of the vitreous. The resistivity profile of the sensory retina is not constant but varies with depth [3], which we fitted with Gaussian models. Capacitance was accounted for in each layer by published relative dielectric constants [4], with the exception of the PE. The eyewall time constant was derived from unpublished rabbit data (Fig 56, Faber thesis, 1969, SUNY Buffalo) and had a fitted time constant of ~4 ms, a resistance of 60 Ω/cm2, and an area of 4 cm2. The permittivity of the sensory retina and of the sclera with choroid was set equivalent to grey matter, such that εr = 150000 (at 1 kHz) [4]. Based on these data, the relative dielectric constant was calculated for the PE (Table 1). Cathodic biphasic current pulses of 1 mA/cm2 with pulse widths of 0.5 ms (0.5 µC/cm2), 1 ms (1 µC/cm2), and 5 ms (5 µC/cm2) were passed through the stimulus electrodes. To validate the model, a 15 ms monophasic current pulse
was applied across the model eyewall. The voltage generated across our model showed a time constant of ~ 4ms matching the Faber data. The slow rise time is due to the large resistivity and capacitance of the PE. Table 1 Model Parameters
Layer                Resistivity (Ω cm)     Permittivity (εr)   Size (µm)
Electrode *          1.72e-6                1                   250 diameter
Sensory Retina       4000, Gaussian curve   150000              200
PE                   240000                 235240              10
Sclera and Choroid   2700                   150000              407
Saline               78                     78                  3783

* Current sources were specified at the electrode contact with the retina. The total axisymmetric volume conductor measured 5.83 mm tall by 5 mm wide.

Fig. 2 Peak current density generated during a pulse for a cross section through the retina at the GCL for an epiretinal electrode model. The current density generated is extremely high and enhanced at the edges of the electrode
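A back-of-envelope way to see why the PE dominates the transient response is to compare each layer's intrinsic charging time constant, τ = ρ·ε0·εr, using the Table 1 values. The sketch below is our own simplified bulk-relaxation estimate for illustration, not the COMSOL model.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# (resistivity in ohm*cm, relative permittivity) taken from Table 1
layers = {
    "sensory retina": (4000.0, 150000.0),
    "PE": (240000.0, 235240.0),
    "sclera+choroid": (2700.0, 150000.0),
}

for name, (rho_ohm_cm, eps_r) in layers.items():
    rho = rho_ohm_cm / 100.0   # convert ohm*cm to ohm*m
    tau = rho * EPS0 * eps_r   # bulk charging time constant, seconds
    print(f"{name}: tau ~ {tau * 1e3:.3f} ms")

# The PE comes out near the ~4-5 ms eyewall time constant, while the other
# layers relax orders of magnitude faster, so the PE dominates the transient.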
III. RESULTS

A. Effect of Resistive and Capacitive Properties of Tissue

Figure 1 shows the maximum field distribution during a pulse for an epiretinal electrode placed on top of the GCL. During the pulse, the electric field intensity in the PE reaches as high as 180 V/m (fig 1).
Fig. 3 Electric fields generated for a cross section through the PE for an epiretinal model. A biphasic pulse (1 mA/cm2, 1 ms) is passed through the electrode. The inset in the graph indicates the time points of the stimulation pulse for which the electric fields were evaluated

Subretinal electrodes placed right on top of the PE generated annular-like E-fields in the PE (fig 4). The PE regions around the electrode edges attained fields up to 350 V/m (fig 4). The electric field through the PE was greatest at the transition point (mid) of the biphasic pulse (fig 5).
Fig. 1 Surface plot representing the electric field generated in a slice of the 2D epiretinal model, during the transition point of a biphasic stimulation (1 mA/cm2) pulse. Lighter color denotes high electric field strength (scale bar). The resistivity of the sensory retina here is assumed to be constant. The white insulation is not representative of any electric field in this and subsequent figures

Figure 2 shows the current density distribution for the electrode in Fig 1 at the GCL level. The current density is high near the electrode edges (singularity) and declines to a lower plateau in the middle. The electric field through the PE is greatest at the transition point (mid) of the biphasic pulse (fig 3).
Fig. 4 Surface plot representing the electric field generated in a slice of the 2D subretinal model, at the transition point of a biphasic stimulation (1 mA/cm2) pulse
Suprachoroidal electrode models produced significant E-fields in the PE (fig 6), while the field in the retina was attenuated. The PE was charged to electric fields as high as 2000 V/m (fig 7). Figure 7 shows that the electric field through the PE was greatest at the pulse transition point (mid).
B. Effect of Changing the Pulse Width on PE

We examined how varying the pulse width affected charging of the PE. The peak electric field (mid transition point) was highest for a wide pulse (5 ms) (fig 8). Shorter pulses produced less PE charging. Similar results were seen for the subretinal and suprachoroidal models (data not shown).
Fig. 5 Electric fields generated for a cross section through the PE for a subretinal electrode stimulus. During a biphasic pulse (1 mA/cm2, 1 ms), fields peaked at the transition point. Inset: indicates the time points during the stimulation pulse where the electric fields were evaluated
Fig. 8 Electric fields generated for a cross section through the PE for an epiretinal electrode for three pulses of the same amplitude (1 mA/cm2) but of varying width (0.5, 1, and 5 ms). Fields peaked at the transition point (mid). The electric field charging of the PE was larger for the longer pulse than for the shorter pulses

C. Effect of Asymmetrical Resistivity

The electric fields estimated across the retina for constant resistivity (4 kΩ-cm) models (fig 1) decreased uniformly in the vertical direction (GCL→subretinal). However, by modeling the asymmetrical resistivity as a Gaussian distribution, regions within the retina attain electric field strengths as high as those at the inner limiting membrane (fig 9).
Fig. 6 Surface plot representing the electric field generated in a slice of the 2D suprachoroidal model, at the transition point of a biphasic stimulation (1 mA/cm2) pulse
Fig. 7 Electric fields generated for a cross section through the PE for a suprachoroidal model. A biphasic pulse (1 mA/cm2, 1 ms) is passed through the electrode. The inset in the graph indicates the time points of the stimulation pulse for which the electric fields were evaluated
Fig. 9 Surface plot showing the electric fields generated in a 2D slice of an epiretinal electrode model using a Gaussian retinal resistivity profile. During the transition point of the stimulation pulse (1 mA/cm2), the E-field now extends deeper into the retina
IV. CONCLUSIONS

It is essential that tissue capacitance be included in retinal neuro-stimulation models [5]. This is because the PE (or R-membrane) time constant significantly affects the passage of current pulses delivered by stimulus electrodes across the eyewall boundaries. The PE reactive components affect the shape and amplitude of the stimulus current and the fields generated in the tissue [5]. The purpose of this study was to determine the role of the capacitance and resistivity of the tissue on the spatial distribution of the electric field in the retina. The simulation results indicate that the PE develops high electric fields at pulse transitions in all electrode models for a biphasic stimulation pulse. Due to its large resistivity and capacitance, the PE time constant dominates over those of the other layers of the retina. Comparison of wide and short duration pulses showed that long 5 ms pulses induced greater electric fields in the PE. The maximum field intensity predicted in the PE is not high enough to cause electroporation (100 kV/m) [6]. However, fields of about 125 V/m and 250 V/m disrupt the tight junctions of the neural endothelial barrier and thus increase the permeability of the blood brain barrier [7]. The PE cells are connected via tight junctions and form the blood eye barrier [8]. The predicted electric fields may lead to protein disorganization [7] of these tight junctions, leading to improper PE function. An in vivo study characterizing electrically induced retinal damage reveals photoreceptor damage and PE atrophy via epiretinal stimulation [9]. Thus, PE fields need to be accounted for when determining safe levels of stimulation. Simulation models that assume constant resistivity (4 kΩ-cm) for the retina estimate a uniform decrease in electric field moving away from the electrode. By modeling the asymmetrical resistivity as a Gaussian function (3-7 kΩ-cm), regions inside the retina attained higher electric fields (Fig 9). Thus, assuming constant resistivity of the retina could lead to either underestimating or overestimating the region of activated tissue. Finally, the models in our study do not take into account the polarization effect of the electrode-tissue double layer interface during the course of stimulation [10]. This polarization at the electrode-tissue (electrolyte) interface would affect the electric fields adjacent to the electrode and the activation volume [10]. Future models will include the time-varying nature of the tissue-electrode interface, and experimental validation of the simulated results. Determining the spatial distribution of electric fields in the retinal tissue of
humans could help in designing more effective retinal implants by optimizing the locations of the stimulus electrodes, their insulation, and the inter-electrode spacing, which will aid in keeping voltage and charge at safe levels in delicate eye tissue.
ACKNOWLEDGMENT This work was supported by the Oak Ridge Institute for Science and Education.
REFERENCES

1. Cohen ED. (2007) Prosthetic interfaces with the visual system: biological issues. J Neural Eng 4(2):R14-R31.
2. Brindley GS. (1956) The passive electrical properties of the frog's retina, choroid and sclera for radial fields and currents. J Physiol 134(2):339-352.
3. Xu X, Karwoski CJ. (1994) Current source density (CSD) analysis of retinal field potentials. I. Methodological considerations and depth profiles. J Neurophysiol 72(1):84-95.
4. Gabriel S, Lau RW, Gabriel C. (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys Med Biol 41(11):2251-2269.
5. Butson CR, McIntyre CC. (2005) Tissue and electrode capacitance reduce neural activation volumes during deep brain stimulation. Clin Neurophysiol 116(10):2490-2500.
6. Gilad O, Horesh L, Holder DS. (2007) Design of electrodes and current limits for low frequency electrical impedance tomography of the brain. Med Biol Eng Comput 45(7):621-633.
7. Lopez-Quintero SV, Datta A, Amaya R, Elwassif M, Bikson M, Tarbell JM. (2010) DBS-relevant electric fields increase hydraulic conductivity of in vitro endothelial monolayers. J Neural Eng 7(1):016005.
8. Holtkamp GM, Kijlstra A, Peek R, de Vos AF. (2001) Retinal Pigment Epithelium-immune System Interactions: Cytokine Production and Cytokine-induced Changes. Prog Retin Eye Res 20(1):29-48.
9. Colodetti L, Weiland J, Colodetti S, Ray A, Seiler M, Hinton D et al. (2007) Pathology of damaging electrical stimulation in the retina. Exp Eye Res 85(1):23-33.
10. Cantrell DR, Inayat S, Taflove A, Ruoff RS, Troy JB. (2008) Incorporation of the electrode-electrolyte interface into finite-element models. J Neural Eng 5:54-67.

Author: Saugandhika Minnikanti
Institute: George Mason University, ECE Department
Street: 4400 University Drive
City: Fairfax, VA 22030
Country: USA
Email: [email protected]
Optimizing the Geometry of Deep Brain Stimulating Electrodes
J.Y. Zhang and W.M. Grill
Department of Biomedical Engineering, Duke University, Durham, NC, USA

Abstract— Deep brain stimulation (DBS) is an effective treatment for movement disorders, including essential tremor, Parkinson's disease, and dystonia, and is under investigation as a treatment for epilepsy and depression. Despite the rapid clinical growth of DBS, there has been little effort to optimize the geometry of the DBS electrode for either stimulation efficiency or selectivity. The objective of this study was to identify the electrode geometry that optimized stimulation efficiency. Due to the large number of possible electrode geometries, genetic algorithms (GA), a global heuristic search method, were used to find the optimal electrode geometry. The electrode contact was discretized into 15 equal-length segments, and the algorithm determined whether each segment was a conductor or insulator. The optimization algorithm was initially designed to minimize the stimulation voltage, and the cost function to be minimized was the sum of the voltage thresholds needed to activate 20%, 50%, and 80% of a randomly distributed population of model axons positioned around the electrode. The algorithm results demonstrated that despite the non-uniformity of the current density across the electrode, the most efficient geometry was a single segment that was 27% shorter than the standard clinical electrode. Subsequently, the optimization was conducted to maximize the power efficiency of the electrode, and the cost function to be minimized was the sum of the power thresholds needed to activate 20%, 50%, and 80% of the randomly distributed axons. The results showed that the optimal geometry was a triple-band segmented electrode with insulating gaps in between. The results of this study reveal that the optimal electrode geometry depends on the cost function to be optimized, and suggest that modifications, such as decreasing electrode width, may reduce power consumption and increase device longevity.

Keywords— Electrical Stimulation, Genetic Algorithms, Computational Modeling, Optimization.
I. INTRODUCTION

Over the past decade, deep brain stimulation (DBS) has become one of the most effective treatments for movement disorders, including Parkinson's disease, essential tremor, and dystonia [1]. With the rapid emergence of DBS, there has been much attention dedicated to optimizing the procedure to implant the electrode. However, most of the work has centered on anatomical targeting, and the electrode geometry has been largely neglected [2,3]. The conventional DBS lead (Medtronic models 3387 and 3389) employs 4 cylindrical electrode contacts on an
insulating shaft, and each contact can be individually programmed to a different stimulation voltage. The electrode is inserted directly into the midbrain in order to activate the local neurons through extracellular stimulation. However, neural activation is driven by the second spatial derivative of the electrical potential (the so-called “activating function”), suggesting that a solid cylindrical geometry, which has uniform potential over its surface, may not be optimal for efficient stimulation. Design modifications that increase the spatial fluctuations in voltage and thereby the activating function may decrease the stimulus voltage required to activate a sufficient proportion of neurons. Increasing the efficiency of stimulation will increase battery life and decrease the costs and risks of battery replacement surgery. The objective of this study was to conduct a model-based design of segmented electrode geometries that optimize the efficiency for two independent cases – the stimulus voltage and the stimulus power. The results from this study reveal that electrode modifications may improve the efficiency and performance of the electrode, including the power required for stimulation.
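The activating-function argument can be made concrete with a small sketch: for a fiber parallel to the lead, the drive on each node is approximately the second spatial difference of the extracellular potential sampled along the fiber. The sketch below uses an idealized point-source potential in a homogeneous medium (σ = 0.2 S/m) purely to demonstrate the computation; the stimulus current, node spacing, and fiber distance are illustrative assumptions, and the study's actual potentials come from the FEM.

import numpy as np

SIGMA = 0.2   # tissue conductivity, S/m (as in the model below)
I = 1e-3      # stimulus current, A (illustrative)
DX = 0.5e-3   # node spacing along the axon, m (illustrative)

def point_source_potential(z, r):
    # Extracellular potential of a current monopole in a homogeneous,
    # isotropic medium (V).
    return I / (4.0 * np.pi * SIGMA * np.sqrt(z ** 2 + r ** 2))

# Potentials at the nodes of an axon running parallel to the lead, 1 mm away.
z = np.arange(-30, 31) * DX
phi = point_source_potential(z, r=1e-3)

# Discrete activating function: second spatial difference of the potential
# along the fiber; positive values tend to depolarize the node.
f_act = phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
print("peak activating function (V):", f_act.max())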
II. METHODS

A. Finite Element Models of DBS

A finite element model (FEM) of the lead (electrode contact and flanking regions of insulation) and the surrounding tissue (Figure 1) was implemented in the three-dimensional (3D) Conductive Media DC package in COMSOL Multiphysics 3.5a (Comsol Inc., Stockholm, Sweden). The electrode was based on the Medtronic DBS model 3387 (Table 1). The tissue was modeled as a homogeneous, isotropic medium with conductivity similar to that of brain tissue (σ = 0.2 S/m). Two insulating shafts (σ = 10^-10 S/m) flanked the electrode, so that the model DBS lead spanned the entire tissue length.

Table 1 Dimensions implemented into the model
Subdomain           Radius (mm)   Length (mm)
Tissue              15            50
Electrode           0.635         1.5
Insulating shafts   0.635         24.25
Fig. 1 Model of discretized DBS electrode (black center) positioned within a homogeneous tissue medium. The inset displays an enlarged version of the electrode, with alternating conductive (dark bands) and insulating segments (clear bands) in this case

The electrode was discretized into 15 independent segments (each 0.1 mm long), which could be either conductive (σ = 10^10 S/m) or insulating (σ = 10^-10 S/m). The outer boundary of the tissue was set to 0 V (i.e., ground), the insulating segments of the lead were set to "continuity", and the conductive segments were set to 1 V. The model was meshed using a built-in algorithm that generated 28,152 discrete tetrahedral elements of graduated size with greater resolution near the electrode.

B. Genetic Algorithm

A genetic algorithm (GA) is a mathematical optimization method that operates similarly to biological evolution. The organisms are the electrode geometries, with each organism consisting of 15 genes, corresponding to the 15 electrode segments. The first generation is comprised of 15 random "parents", 5 of which are then selected to survive based on their fitness (see below). Although only the survivors progress to the subsequent generation, all the parents "mate" to create 10 offspring. The offspring are derived from the parents, but may be modified either by mutations or recombinations (30% probability of either occurring in any offspring). This process continues for 40 generations, with a cost function to determine the survivors based on minimizing the stimulation voltage or stimulation power (see below). The GA was implemented in Matlab R2008b (The Mathworks, Inc.).

C. Randomly Distributed Population Model of Myelinated Axons

To determine the stimulation efficiency of the different electrode geometries, a population of model axons was randomly distributed around the electrode. For both optimizations, 100 axons were randomly distributed within a 4 mm
× 4 mm box, all oriented parallel to the electrode shaft. The axons spanned 48.4 mm and consisted of 122 nodes per axon. The FEM model was used to calculate the extracellular potential at each node of each axon, and for optimization to minimize stimulation power, the total current delivered by the electrode, I1V, was determined by integration of the current density over the electrode surface. The myelinated axons were represented using the Sweeney model [5] and implemented in NEURON (Version 6.1, Carnevale and Hines, 2006). The model assumes a temperature of 37 °C, node diameters of 2.4 µm, and a node length of 1 µm. The stimulation parameters consisted of a pulsewidth of 0.1 ms and a pre-pulse delay of 0.1 ms.

D. Analysis

The threshold voltages, Vth (in V), of all the axons were determined using NEURON. For optimization to minimize the stimulation voltage, an activation curve of the number of axons stimulated as a function of the stimulation voltage was generated based on the collective Vth values. Then, 3 values were determined from the curve: the Vth needed to activate 20% of the axons (V20), 50% of the axons (V50), and 80% of the axons (V80). These values were then used in the cost function of the GA with coefficients to weight the contributions of each term approximately equally to the overall cost to be minimized:

Cost = 4·V20 + 2·V50 + V80   (1)

For the optimization to minimize stimulation power, a similar cost function was used, except that the power thresholds were substituted for the voltages. The power threshold, Pth = Ith×Vth = (Vth)^2×I1V, was determined for each axon from the threshold voltage, and the threshold current was determined as Ith = I1V×(Vth/(1V)), since the volume conductor was entirely linear. These values were then used in the cost function of the GA, again with coefficients to weight the contributions of each term approximately equally for the overall cost to be minimized:

Cost = 4·P20 + 2·P50 + P80   (2)
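A compact sketch of the search loop described above: each geometry is a string of 15 binary genes, 15 random parents start the run, the 5 fittest survive, 10 offspring are produced with 30% recombination and mutation probabilities, and the loop runs for 40 generations. Because the true cost requires the FEM and NEURON threshold simulations, the cost function below is a deliberately artificial stand-in; only the GA mechanics mirror the description above.

import numpy as np

rng = np.random.default_rng(0)
N_GENES, N_POP, N_SURVIVE, N_GENERATIONS = 15, 15, 5, 40

def cost(genome):
    # Stand-in for the real cost (4*V20 + 2*V50 + V80, or its power analog,
    # from FEM/NEURON): a toy surrogate that forbids all-insulating
    # geometries and mildly rewards fewer conducting segments.
    n_on = genome.sum()
    return 1e6 if n_on == 0 else n_on + 5.0 / n_on

population = rng.integers(0, 2, size=(N_POP, N_GENES))
for _ in range(N_GENERATIONS):
    order = np.argsort([cost(g) for g in population])
    survivors = population[order[:N_SURVIVE]]
    offspring = []
    for _ in range(N_POP - N_SURVIVE):
        pa, pb = population[rng.choice(N_POP, size=2, replace=False)]
        child = pa.copy()
        if rng.random() < 0.3:               # single-point recombination
            cut = rng.integers(1, N_GENES)
            child[cut:] = pb[cut:]
        if rng.random() < 0.3:               # point mutation (bit flip)
            child[rng.integers(N_GENES)] ^= 1
        offspring.append(child)
    population = np.vstack([survivors, np.array(offspring)])

best = population[np.argmin([cost(g) for g in population])]
print("best geometry (1 = conducting, 0 = insulating):", best)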
III. RESULTS

A. Optimization to Minimize Stimulation Voltage

The fitness of the electrode geometry optimized to minimize stimulation voltage reached a steady state after 24 generations (Figure 2). The electrode design that minimized the stimulation voltage was a slightly shortened version of a solid contact with all conducting segments. GA-1 is shorter than the conventional electrode by 4 segments, or 0.4 mm (Figure 3). However, the difference in efficiency between the two geometries was small (Figure 4), and the cost of the GA-1 geometry was 2.7% lower than that of the solid electrode.

Fig. 2 Mean fitness of all the electrode geometries and the fitness of the best electrode geometry of each generation (for optimization to minimize stimulation voltage)

Fig. 3 Geometries of GA-optimized (GA-1) and solid electrodes

Fig. 4 Comparison of activation curves for GA-1 vs. solid electrodes (for threshold potential optimization)

B. Optimization to Minimize Stimulation Power

The fitness of the electrode geometry optimized to minimize stimulation power reached a steady state after 25 generations (Figure 5). The GA-optimized (GA-2) geometry was a triple-band segmented electrode, separated by insulating segments of 0.4 mm and 0.3 mm (Figure 6). Compared with the Medtronic 3387 model, the geometry of GA-2 is completely novel, and not just a scaled version of the solid contact. By introducing insulating gaps between segments, it substantially improves the power efficiency of the electrode (Figure 7). The cost of the GA-optimized electrode was lower than that of the solid electrode by 71.8%.

Fig. 5 Mean fitness of all electrode geometries and fitness of the best electrode geometries of each generation (for optimization to minimize stimulation power)

Fig. 6 Geometries of GA-optimized (GA-2) and solid electrodes
Fig. 7 Comparisons of activation curves for the geometry of GA-2 and the conventional solid geometry (for power optimization)

IV. DISCUSSION

The goal of this study was to design DBS electrode geometries that minimized the voltage or power required for stimulation of a population of surrounding neurons. A genetic algorithm was implemented to carry out a search for the optimal solution based on two independent cost functions, and two different solutions were found. For the cost function that minimized stimulation voltage, 11 of the 15 segments were set to conductive, resulting in a shortened solid electrode geometry. For the cost function that minimized stimulation power, a triple-band segmented electrode configuration was optimal. The results indicate that electrode geometry does play an essential role in determining the efficiency (and possibly efficacy) of stimulation. Although the thin single-segment electrode requires a greater threshold voltage, its much lower power threshold could reduce power consumption and thereby improve battery longevity. Because invasive surgery is required to install a new battery, a longer battery lifespan would enhance the convenience, ease-of-use, and safety of the device, important factors to consider for neural implants. Some additional considerations remain to be addressed. First, the model was simulated with a homogeneous, isotropic medium that is not a realistic representation of human brain tissue. These factors may affect the activation patterns and ultimately the activation curves that determine the efficiencies of different electrodes [4]. Second, changing the electrode geometry may alter the potential for electrode corrosion or tissue damage, and these are paramount considerations in electrode design. Third, the Sweeney model of neurons may not represent well the properties of neurons activated during DBS [5], and more complex model implementations may better approximate human neural responses [6]. Fourth, this model only considered monophasic pulses, whereas in clinical applications, biphasic pulses are commonly used as well [7]. Finally, the population model only included myelinated axons, while in applications, stimulation will likely affect heterogeneous neural elements, including dendrites and cell bodies, which may have different excitation properties [8]. Nonetheless, the model provided insight into the utility of different electrode geometries, and demonstrated the feasibility of using genetic algorithms to optimize the design of complex therapeutic medical devices.

ACKNOWLEDGMENT
Financial support for this work was provided by NIH Grant R21NS054048 and a Pratt Fellowship for Undergraduate Research.
REFERENCES

1. Kumar R, Lozano A, et al (1998) Double blind evaluation of subthalamic nucleus deep brain stimulation in advanced Parkinson's disease. Neurology 51:850-855.
2. Wei XF, Grill WM (2005) Current density distributions, field distributions and impedance analysis of segmented deep brain stimulation electrodes. J Neural Eng 2:139-147.
3. Kuncel AM, Grill WM (2004) Selection of stimulus parameters for deep brain stimulation. Clin Neurophysiol 115:2431-2441.
4. Butson CR, Cooper SE, Henderson JM, McIntyre CC (2006) Patient-specific analysis of the volume of tissue activated during deep brain stimulation. NeuroImage 34(2):661-670.
5. Sweeney JD, Mortimer JT, Durand D (1987) Modeling of mammalian myelinated nerve for functional neuromuscular electrostimulation. Proc 9th Ann Conf IEEE EMBS 2:1577-1578.
6. McIntyre CC, Richardson AG, Grill WM (2002) Modeling the excitability of mammalian nerve fibers: influence of afterpotentials on the recovery cycle. J Neurophysiol 87:995-1006.
7. Gimsa J, Habel B, et al (2005) Choosing electrodes for deep brain stimulation experiments – electrochemical considerations. J Neurosci Methods 142:251-265.
8. McIntyre CC, Grill WM (1999) Excitation of central nervous system neurons by nonuniform electric fields. Biophys J 76:878-888.

Author: James Zhang
Institute: Duke University
Street: 1138 CIEMAS
City: Durham, NC
Country: United States of America
Email: [email protected]
Exploratory Parcellation of fMRI Data Based on Finite Mixture Models and Self-Annealing Expectation Maximization
S. Maleki Balajoo1, G.A. Hossein-Zadeh1, and H. Soltanian-Zadeh1,2
1 Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran 14395-515, Iran
2 Image Analysis Laboratory, Radiology Department, Henry Ford Health System, Detroit, MI 48202, USA
Abstract— We present a new exploratory method for brain parcellation based on a probabilistic model in which anatomical and functional features of fMRI data are used. The goal of this procedure is to segregate the brain into spatially connected and functionally homogeneous components, and to account for variability of the fMRI response in different brain regions. To achieve this goal, the parcellation algorithm relies on the optimization of a compound criterion reflecting both the spatial and functional structures of the individual brains and hence the topology of the dataset. We employ unsupervised learning techniques to classify the fMRI data in an exploratory fashion through a Finite Mixture Model (FMM) for the distribution of the feature vectors of voxels, where the voxels of each parcel follow a normal density. A self-annealing Expectation Maximization (EM) algorithm is used to fit the FMM to the data. To find the number of parcels, K, we employ the Akaike Information Criterion (AIC). The algorithm is tested on synthetic fMRI data as well as real fMRI data. Applying this method to data from a motor experiment, we were able to find homogeneous and connected regions in the motor cortex and the cerebellum that have been previously found using hypothesis-driven methods. Simulation studies have shown that the parcellation results of our method are more accurate than those of the formerly developed methods.

Keywords— Parcellation, functional Magnetic Resonance Imaging (fMRI), Finite Mixture Model (FMM)
I. INTRODUCTION

Functional Magnetic Resonance Imaging (fMRI) is a popular technique for the exploration of brain function. In this technique, the Blood Oxygen Level Dependent (BOLD) signal is measured in vivo, with a spatial resolution of a few millimeters and a temporal resolution of a few seconds, while the subject is performing a task or presented with a stimulus. A high resolution (around 1 mm) T1-weighted image that shows the brain anatomy is also acquired during the same MR session.
The most common approach for activation detection from fMRI data is to treat each voxel separately and test for correlation between the time series at each voxel and the predicted response given by the experimental paradigm. The "activation" voxels are then shown on the anatomical image [1]. Another approach consists of determining regions of interest (using either functional or anatomical a priori information) and then testing for the activation signal within these regions [2]. Both approaches are limited in several ways. In the voxel-wise approach, all voxels are tested, while in some situations in analyzing cognitive processes, a coarser resolution might be sufficient. Moreover, the resolution of the voxel may not be optimal when functional images have to be put in correspondence with other modalities of functional neuroimaging such as electroencephalography (EEG) [3]. On the other hand, region-of-interest-based techniques are difficult to implement in practice for two main reasons. First, automatic labeling of many of the sulci is very difficult. Secondly, defining 3D volumes on the cortex is difficult with current tools. Manual parcellation takes many hours and requires expertise. In this case, only a limited number of large regions, for instance the brain lobes, are delineated. Thus, the relation between anatomy and function is addressed with a very rough resolution. On the other hand, one usually applies spatial deformations to the images to co-register anatomical structures of each subject to a common template prior to data analysis [4]. In this transformation, the data is heavily smoothed (using a smoothing kernel of around 8-12 mm full width at half maximum). Yet, inter-subject variability still is an important factor to be better modeled and considered. Flandin et al. [5] have overcome the shortcomings of spatial normalization by introducing a novel method of data analysis which segregates the brain into many parcels of homogeneous functional activity and spatial location. Several algorithms, e.g., spectral clustering, can be used to delineate the parcels. In previous works [5,6], a Gaussian Mixture Model (GMM) of pooled (spatial/functional) features is used and built on the dual
nature of the GMM, which is not only a clustering technique but also a density estimation technique. However, model-based clustering or mixture modeling assumes that the data is derived from a mixture of probability density functions corresponding to different clusters. The Expectation Maximization (EM) algorithm for Gaussian mixture modeling has been shown to perform well when: (i) the number of clusters is known in advance; and (ii) the initialization is close to the true parameter values. However, determining the number of clusters and providing a good initialization are two problems that limit its application. We therefore propose a method that assumes a Finite Mixture Model (FMM) for the distribution of the feature vectors, where the voxels of each parcel follow a normal distribution. The algorithm relies on fitting the FMM to the data, which reflects both anatomical and functional features, by using a self-annealing EM algorithm. The advantage of this approach over the Gaussian Mixture Model (with the standard EM algorithm) is an initialization strategy which can be interpreted as a self-annealing algorithm. Moreover, the number of parcels, K, is an important issue. We employ the Akaike Information Criterion (AIC) to fit the model to the data. The algorithm is run for each K, and the AIC criterion is evaluated. The K corresponding to the minimum AIC value is then selected as the number of parcels. The rest of the paper is organized as follows. In Section 2, we explain the details of our procedure. Section 3 reports the performance of our approach in terms of an accuracy parameter defined using the confusion matrix. Section 4 presents conclusions of the work.
II. MATERIALS AND METHODS

In this work, we propose a new method of brain parcellation based on a probabilistic model in which anatomical and functional features of fMRI data are used. The goal of this procedure is to segregate the brain into spatially connected and functionally homogeneous components. It helps us achieve a new representation of the fMRI data, which is useful in region-based analysis of the data, especially in group analysis of fMRI data.

A. Extracting Anatomic and Functional Features of Brain

Here, we briefly explain the desired characteristics of a parcel. First, each parcel is an ensemble of spatially connected voxels. Secondly, each parcel is defined as a functionally homogeneous region. This characteristic could be achieved by requiring that voxels belonging to a common parcel should have similar time courses or that
they have homogeneous activity (which could be summarized by some model parameters). We propose a parcellation approach that considers all of the aforementioned characteristics. This algorithm relies on the optimization of a compound criterion reflecting both the spatial and functional structures of the individual brains and hence the topology of the dataset. We present an algorithm that yields a set of parcels from the pooled voxels. To this end, the anatomical information is brought into the model by the three spatial coordinates of each voxel. To reflect the functional part of each voxel, we use the β-parameters estimated during a first-level GLM fit to the time series data via

Y = Xβ + e   (1)

Some methods like Independent Component Analysis (ICA) use raw fMRI time courses for analysis. In order to find functionally similar voxels, however, we use information from the first-level analysis of the fMRI data. This effectively projects the original high-dimensional time courses to a low-dimensional feature space.

B. EM Algorithm with Self-Annealing Behavior

This section briefly introduces the EM algorithm with self-annealing behavior and the high entropy initialization [7] used in the experiments. Our method operates on feature vectors that represent both the anatomical and functional properties of the voxels. We assume a FMM for the distribution of the feature vectors where the voxels of each parcel follow a normal distribution. The algorithm relies on fitting the FMM to the data by using a self-annealing EM algorithm. This approach is superior to the Gaussian Mixture Model (with the standard EM algorithm) in an initialization strategy that can be interpreted as a self-annealing algorithm. The EM algorithm and its application to Gaussian FMMs are well known and, hence, this subsection is intentionally concise. The EM algorithm produces a sequence of parameter estimates {θ(t): t = 0, 1, ...} by alternately applying the following two steps until convergence.

E-step: The conditional expectation of the log likelihood l(θ) given the data {xi} and the current parameter estimate θ(t) is computed. In the case of FMMs, this translates to computing the posterior probabilities

πik(t) = P(k | θ(t), xi)   (2)

for each data point and class.

M-step: Updates of the parameter vector are computed by maximizing

Q(θ | θ(t)) = Σ(i=1..N) Σ(k=1..K) πik(t) [log αk + log fk(xi | θ)]   (3)
We initialize the EM algorithm with a high entropy initialization leading to the self-annealing behavior as explained in [7]. The EM algorithm is initialized by setting

πik(0) = 1/K + eik,  i = 1, ..., N,  k = 1, ..., K   (4)

where eik is a random perturbation drawn (uniformly) from the interval [−0.5/K, 0.5/K]. The initial values for the parameter vectors are then computed based on these initial posteriors. In other words, the initial values πik(0) can be considered as a result of the E-step, based on which the parameter vector in the M-step is computed. In practice, the mean value will initially be close to the sample mean of {xi}, the covariance will initially be close to the sample covariance, and the mixing parameters will initially have almost equal values. This initialization strategy is called high entropy initialization because letting πik(0), k = 1, ..., K be nearly equal for all classes (nearly) maximizes the entropy of the posterior probabilities of the voxel labels. This equivalently minimizes the information contained in the initialization. Note that we cannot select πik(0) to be exactly equal because this would be a stationary point of the EM algorithm. The high entropy initialization leads to a behavior of the EM algorithm similar to deterministic annealing. In deterministic annealing, the optimization problem is recast as minimizing the thermodynamical free energy, defined as an effective cost function that depends on the temperature. As with its stochastic counterpart, simulated annealing, it avoids getting caught in suboptimal maxima.

C. Determining Number of Parcels

The number of parcels, K, is an important factor in the algorithm. To compare mixtures with different numbers of components, various criteria have been developed. These include the Akaike information criterion (AIC), the Bayesian information criterion, and the minimum message length criterion. In most applications, the EM algorithm is randomly initialized for K = 1, ..., Kmax, where Kmax is some pre-specified maximum number of components. The algorithm is run for each K and the criterion evaluated. The model that achieves the minimum value is then selected. A main problem with these techniques is the selection of the maximum number of components. If it is too small, the model may be too coarse for the data; if it is too large, the computational time may become very large. In this work, we employ AIC to fit the model to the data. The algorithm is run for each K and the criterion evaluated. The K that achieves the minimum value is then selected as the number of
parcels. The larger the number of parcels, the higher the degree of within-parcel homogeneity will be. However, there exists a trade-off between the within-parcel homogeneity and the signal-to-noise ratio (SNR).
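The high-entropy initialization and the EM iterations are easy to state in code. The sketch below applies Eq. (4) and then alternates the M- and E-steps for a one-dimensional Gaussian FMM; the synthetic data, fixed iteration count, and one-dimensional setting are simplifying assumptions for illustration, whereas the paper's implementation operates on pooled spatial/functional feature vectors and selects K by AIC.

import numpy as np

rng = np.random.default_rng(1)
K = 3
x = np.concatenate([rng.normal(m, 0.5, 200) for m in (-3.0, 0.0, 3.0)])
N = len(x)

# High-entropy initialization, Eq. (4): pi_ik(0) = 1/K + e_ik with e_ik
# uniform on [-0.5/K, 0.5/K]; rows are renormalized to remain posteriors.
pi = 1.0 / K + rng.uniform(-0.5 / K, 0.5 / K, size=(N, K))
pi /= pi.sum(axis=1, keepdims=True)

def normal_pdf(v, mu, var):
    return np.exp(-0.5 * (v - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(300):
    # M-step: mixing weights, means, and variances from current posteriors.
    nk = pi.sum(axis=0)
    alpha = nk / N
    mu = (pi * x[:, None]).sum(axis=0) / nk
    var = (pi * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    # E-step: recompute the posteriors of Eq. (2).
    lik = alpha * normal_pdf(x[:, None], mu, var)
    pi = lik / lik.sum(axis=1, keepdims=True)

# Starting near the sample mean, the component means gradually separate
# toward the true centers (-3, 0, 3), the self-annealing behavior.
print("estimated means:", np.sort(mu))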
III. EXPERIMENTAL RESULTS The algorithm is tested on synthetic functional data as well as experimental fMRI data. We have compared the results of our algorithm with previous parcellation algorithms such as K-means and GMM. We have applied all of the above algorithms on synthetic functional data and calculated the confusion matrix and the accuracy (AC) parameter.
A. Synthetic fMRI Dataset

These data were obtained using an fMRI resting-state study, which models the noise and other parameters realistically. Then, 4 different regions are specified in the brain. Two of them are completely distinct, but the other regions have overlaps. Then, specific time series, obtained by convolving a stimulus pattern with a given HRF response for each of these regions, are added. The AC parameter is the proportion of the correct detections to the total number of voxels. It evaluates the performance of the classification. Table 1 shows that our algorithm outperforms the other algorithms and has the highest AC.

Table 1 Performance of the algorithms compared using the accuracy (AC) parameter
Method                                Accuracy (AC)
FMM - EM Algorithm & Self-Annealing   0.98
Gaussian Mixture Model (GMM)          0.84
K-Means                               0.63
B. Real fMRI Dataset We also demonstrate the results of our method on a motor fMRI study (Finger Opposition Task; Block design): One healthy volunteer was studied using a block design periodic fMRI paradigm in which the subject performed a sequential finger to thumb opposition task. He was instructed to perform the finger opposition task with his right hand or left hand if he saw the letters R or L on an LCD display, respectively. Otherwise, he saw a fixation mark on the LCD and was at rest. A total of 112
volumes were acquired from the subject using a T2*-weighted gradient echo single-shot EPI sequence with TR = 3 sec, TE = 45 ms, Flip Angle = 90°, and FOV = 250×250 mm2. Each volume consisted of 20 axial slices of size 64×64 and covered a FOV of 250×250×100 mm3 with no gaps between slices. The task included four cycles (of 84 seconds each) of self-paced sequential finger to thumb opposition. Each period started with 12 seconds of rest and continued with 30 seconds of left hand finger opposition, followed by another 12 seconds of rest, and finally 30 seconds of right hand finger opposition. Here, we employed our method to discover homogeneous and connected regions, parcels, in the brain in response to a motor task. In order to find the best model, the AIC was employed, which gave the optimal number of parcels K = 10. In Figure 1, we illustrate spatial maps of three parcels in the left and right motor cortices and the cerebellum. Note that these parcels are spatially connected regions, each having a distinct profile of response to the fMRI motor experiment. These regions are reported in prior hypothesis-based studies. Our exploratory approach is also able to define them.
IV. CONCLUSIONS In this paper, we presented a probabilistic model for exploratory fMRI analysis based on Finite Mixture Models. Applying this method to the data acquired using a motor experiment, we were able to find homogeneous and connected regions in the motor cortex and the cerebellum that have been previously found using hypothesis-driven methods. Simulation studies have shown that the parcellation results of our method are more accurate than those of the formerly developed methods.
Fig. 1 Spatial maps of the three parcels discovered in the motor cortex and the cerebellum in an fMRI motor experiment, with the corresponding time courses.
REFERENCES
1. Friston K J, Holmes A P, Poline J-B, Frith C D, Frackowiak R S J et al. (1995) Statistical parametric maps in functional imaging: A general linear approach. Human Brain Mapping 2:189-210
2. Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002) Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 15:273-289
3. Kruggel F, Herrmann C J, von Cramon D Y (2000) Recording of the event-related potentials during functional MRI at 3.0 tesla field strength. Magnetic Resonance in Medicine 44:277-282
4. Ashburner J, Friston K J, Penny W (2004) Human Brain Function, 2nd Edition. Academic Press
5. Flandin G, Kherif F, Pennec X, Malandain G, Ayache N, Poline J-B (2002) Improved detection sensitivity of functional MRI data using a brain parcellation technique. In: Proc. 5th MICCAI, LNCS 2488 (Part I), Tokyo, Japan, September 2002, pp 467-474
6. Thyreau B, Thirion B, Flandin G, Poline J-B (2006) Anatomo-functional description of the brain: a probabilistic approach. In: Proc. 31st IEEE ICASSP, Vol. V, Toulouse, France, May 2006, pp 1109-1112
7. Figueiredo M, Jain A (2000) Unsupervised selection and estimation of finite mixture models. In: Proc. 15th Int. Conf. Pattern Recognition (ICPR00), vol. 2, pp 87-90
Computational Fluid Dynamic Modeling of the Airflow Perturbation Device

S. Majd1, J. Vossoughi1,2, and A. Johnson2

1 ESRA, 3616 Martins Dairy Circle, Olney, MD 20832
2 Fischell Department of Bioengineering, University of Maryland, College Park, MD 20742
Abstract— Computational fluid dynamics (CFD) modeling of airflow through the airflow perturbation device during normal exhalation was performed to assess the effect of outlet blockage, temperature and turbulence on airflow and pressure patterns. Five models were analyzed with 0%, 30%, 50%, 70% and 90% outlet blockage. Maximum velocity increased with increasing outlet blockage percentage (from 0 to 90%) from 2.36 m/s to 16.31 m/s, and maximum static pressure increased from 3.25 Pa to 200 Pa. Turbulence did not show a significant effect. Density changed by less than 1% when the blockage percentage was more than 70%; it ranged from 1.139 to 1.141 kg/m³ in the model with 90% outlet blockage. Increasing the temperature from 25 to 37 °C did not affect the static pressure and velocity significantly.
Keywords— Airflow Perturbation Device, CFD, Finite Element, ANSYS, CFX.
I. INTRODUCTION

Respiratory pathological conditions such as asthma, emphysema, and chronic obstruction affect respiratory resistance [1, 4-5]. Therefore, measurement of respiratory resistance can assist in diagnosing such diseases. The Airflow Perturbation Device (APD) is a non-invasive, hand-held, lightweight and accurate respiratory diagnostic device used to evaluate respiratory resistance while individuals breathe effortlessly [1-5]. It works by periodically inserting a known added resistance into the flow path, which causes a reduction of airflow and an increase in mouth pressure. The magnitudes of the flow and pressure perturbations depend on the relative resistance inside the patient's respiratory system and the resistance of the APD itself [1]. By measuring mouth pressure and airflow rate, with and without the APD resistance inserted into the path of airflow, the external resistance becomes known, and internal respiratory resistance can easily be calculated as the magnitude of the pressure perturbation divided by the magnitude of the flow perturbation [1, 5]. The periodic added resistance is applied by using a rotating wheel with screened and open segments, and measurements of airflow and mouth pressure are performed by a pneumotachometer and pressure transducer, as seen in Figures 1 and 2. Measurements are performed during a period of 1 minute while the patient breathes normally through a mouthpiece attached to the APD, with a nose clip to close the nasal pathway. In an alternative method, an oronasal mask is used if nasal airway resistance is of interest or if patients such as children are not comfortable with the noseclip.
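The resistance estimate described above reduces to a simple ratio; the sketch below is a minimal illustration (not the APD's actual firmware), and the signal names, units, and use of a boolean perturbation mask are assumptions.

```python
# Respiratory resistance from APD-style perturbation data: the magnitude of the
# mouth-pressure perturbation divided by the magnitude of the flow perturbation.
import numpy as np

def respiratory_resistance(p_mouth, flow, perturbed):
    """p_mouth: mouth pressure samples (e.g., cmH2O); flow: airflow samples
    (e.g., L/s); perturbed: boolean mask, True while the screened wheel
    segment sits in the flow path."""
    dp = np.mean(p_mouth[perturbed]) - np.mean(p_mouth[~perturbed])
    dq = np.mean(flow[perturbed]) - np.mean(flow[~perturbed])
    return abs(dp / dq)   # resistance in cmH2O per (L/s)
```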
Fig. 1 Schematic of the APD. A pneumotachometer with two differential pressure transducers measures mouth flow and pressure. The screened wheel rotates to perturb airflow and mouth pressure
Fig. 2 Schematic of APD operation. The rotating disk C causes perturbations in mouth pressure (upper left panel) and flow (upper right panel) while the patient breathes through mouthpiece at left. Ratio of mouth pressure to flow perturbation magnitude equals respiratory resistance
Figure 3 shows the current version of the hand-held APD (left) and its interior (right). It is small, weighing only 14 ounces. After use it immediately displays the respiratory resistance values on the screen. Approximately 450 to 500 data points are used to calculate the average values of inhalation and exhalation resistance during one minute of breathing. The pressure transducers and pneumotachometer in the current APD are sensitive enough to measure the respiratory resistance of adults and children [1-5], and the device is not sensitive to airflow leaks [3].
II. METHODS

ANSYS 12.0 and CFX were used to simulate the airflow inside the APD at different conditions (during exhalation) to assess the effects of turbulence, outlet blockage and temperature on air pressure and flow patterns inside the APD channel. The airflow channel geometry was simulated using the dimensions of the current handheld device (Figure 4). Airflow blockage caused by the rotating wheel was simulated at five different snapshots wherein the outlet is blocked at 0%, 30%, 50%, 70%, and 90%. Outlet blockage is modeled symmetrically. Figure 4 shows the APD outlet blocked by 90%.
Fig. 3 Handheld APD (left) and its interior elements (right)

The APD has been used to collect respiratory resistance data on over 2000 individuals ranging from 2 to 88 years old [4]. Results also validate that the APD readings are reproducible and sensitive to changes in the respiratory system [1]. However, certain questions remain to be answered, such as the effect of turbulence and air temperature on measured resistance; the answers depend on a detailed understanding of pressure and flow patterns inside the device. Computational fluid dynamics is a useful tool to simulate the APD performance, which can lead to a better understanding of pressure and flow patterns within the device. In this study, we used numerical analysis (ANSYS 12.0 and CFX) to validate the device performance and explore the effects of several key factors, such as turbulence and temperature, on air pressure and velocity inside the APD. In addition, because the wheel perturbs the air, the outlet faces different degrees of blockage at different times. In this study, we simulated several levels of outlet blockage to better understand the effect of blockage during normal exhalation. Results are compared to optimize or correct the resistance measurements performed under different conditions.
Fig. 4 Airflow channel mesh and boundary conditions: inlet velocity of 47.3 cm/s and outlet static pressure of 0 Pa
Air was defined as an ideal gas to allow for compressibility. Normal breathing was defined using a tidal volume of 500 cc and a breathing rate of 20 breaths/minute. Therefore, the inlet velocity would be 47.3 cm/s, which was defined at the larger cross section of the device (inlet) to simulate exhalation. A steady state condition was considered. The outlet static pressure was defined as 0 Pa to represent the outside air condition when air exited the device. Air velocity, static pressure, total pressure, Mach number and density inside the flow channel were compared under different conditions to assess the effects of several key factors such as turbulence, blockage percentage, and temperature. Because the pneumotachometer converts the flow to a laminar form, the models were initially analyzed assuming that flow was laminar (the pneumotachometer screen is ignored). Then, turbulence of up to 10% intensity was added to the inlet air entering the device and its effect on pressure and flow patterns was studied.
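The stated inlet velocity can be reproduced from the breathing parameters; the inlet diameter below is an assumption chosen to be consistent with the reported 47.3 cm/s, since the paper gives only the tidal volume and breathing rate.

```python
# Back-of-envelope check of the inlet velocity boundary condition.
import math

tidal_volume_cm3 = 500.0             # tidal volume, cc per breath
breaths_per_s = 20.0 / 60.0          # 20 breaths per minute
q = tidal_volume_cm3 * breaths_per_s # mean exhaled flow, ~166.7 cm^3/s
d_cm = 2.12                          # assumed inlet diameter (hypothetical)
area_cm2 = math.pi * d_cm ** 2 / 4   # ~3.5 cm^2
print(q / area_cm2)                  # ~47.2 cm/s, close to the stated 47.3 cm/s
```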
Models were analyzed at two different temperatures, 25°C and 37°C, to explore the effect of air temperature on air pressure and flow patterns inside the device.
III. RESULTS

Laminar steady state analyses were performed, and the results show that increasing the blockage percentage increased the maximum velocity, maximum static pressure and maximum total pressure. Figure 5 and Figure 6 show the effect of outlet blockage (0% and 90% blockage) on maximum velocity and maximum static pressure, respectively. To improve visualization, contour values are shown on a plane that divides the airflow channel into two segments along the APD length.
Maximum velocity increased from 2.36 m/s to 16.31 m/s when the outlet blockage changed from 0% to 90%. Similarly, maximum static pressure increased from 3.25 Pa to 200 Pa. Figure 7 and Figure 8 show the change of maximum velocity and maximum static pressure, respectively, relative to the outlet blockage percentage. Figure 7 shows that maximum air velocity increased with increasing outlet blockage percentage. A significant change in maximum velocity and static pressure was seen when outlet blockage increased from 70% to 90%; an exponential regression line was added to each graph to show the pattern. Density changed only when the outlet blockage was 70% or higher, in which case it increased from 1.184 to 1.187 kg/m³. Higher densities occurred at the inlet where pressure increased.
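To illustrate the exponential trend, the sketch below pins an exponential of the form v = a*exp(b*x) to the two reported endpoint values; the paper's regression used all five blockage levels, so its fitted coefficients (and the quoted R² values) will differ.

```python
# Exponential trend through the two reported endpoints of Figure 7:
# 0% blockage -> 2.36 m/s and 90% blockage -> 16.31 m/s.
import math

a = 2.36                            # velocity at 0% blockage, m/s
b = math.log(16.31 / 2.36) / 90.0   # ~0.0215 per percentage point
v = lambda x: a * math.exp(b * x)   # v(x) in m/s for blockage x in %
print(v(90))                        # 16.31 m/s, by construction
print(v(50))                        # ~6.9 m/s, an interpolated estimate only
```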
Fig. 5 Maximum velocity (m/s) when the outlet is 0% (A) or 90% (B) blocked

Fig. 6 Maximum static pressure (Pa) when the outlet is 0% (A) or 90% (B) blocked

As seen in Figure 5, maximum velocity occurred at the outlet whereas maximum pressure occurred at the inlet.

Including the high intensity turbulence (10%) in CFX did not change the results significantly. The maximum change was
observed in the model with 90% blockage, in which maximum velocity and pressure values increased by up to 1% and 3%, respectively. The maximum Mach number changed from 0.006 to 0.047. In addition, results of the analyses were compared when the air temperature was defined at 37°C (body temperature) rather than 25°C. Results show that density and pressure values decreased whereas velocity increased. Density ranged from 1.139 to 1.141 kg/m³ in the model with 90% outlet blockage. Furthermore, maximum static pressure changed to 197.3 Pa, whereas maximum velocity increased to 16.6 m/s in the model with 90% outlet blockage.
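The reported Mach numbers can be checked directly from the maximum velocities; the sketch below treats air as an ideal gas with standard constants, an assumption consistent with the model setup.

```python
# Mach number check: M = v / c with c = sqrt(gamma * R * T) for air.
import math

def mach(v_ms, T_K, gamma=1.4, R=287.05):
    return v_ms / math.sqrt(gamma * R * T_K)

print(round(mach(2.36, 298.15), 3))    # ~0.007 at 0% blockage, 25 C
print(round(mach(16.31, 298.15), 3))   # ~0.047 at 90% blockage, well below the
                                       # ~0.3 threshold where compressibility matters
```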
Fig. 7 Maximum air velocity versus outlet blockage percentage (%); an exponential regression line is shown (R² = 0.8457)

Fig. 8 Maximum static pressure versus outlet blockage percentage (%); an exponential regression line is shown (R² = 0.8463)

IV. DISCUSSION

Increasing the outlet blockage percentage increased the maximum air velocity and maximum static pressure. This means that at a higher level of outlet blockage, air is compressed in the high pressure region near the inlet. This potentially means that an extra amount of flow produced by this compressibility would pass the pneumotachometer when the outlet opens. However, the maximum Mach number is 0.047, which means that compressibility may not be a significant factor in the airflow and pressure patterns during normal breathing. Maximum density changed by less than 1% in the model with 90% outlet blockage, which tends to confirm that air compressibility does not significantly alter the flow measurement results. More research is needed because the maximum blockage simulated in this work is 90%, and increasing it to 100% could potentially cause more accumulation of air in the high pressure region. Furthermore, resistance measurement during exercise would involve a higher inlet velocity [2], which may change the effect of air compressibility. Increasing the temperature from 25 to 37°C reduced the pressure and increased the velocity; however, the change during normal breathing is minimal. Turbulence did not change the pressure and velocity values significantly, potentially as a result of the low air velocity at the inlet. This observation needs to be re-evaluated during exercise resistance measurement.

V. CONCLUSION

This study provides a platform to study the effect of certain parameters on the pressure and flow patterns, contributing to a better understanding of the airflow perturbation device.

REFERENCES
1. Lausted CG, Johnson AT (1999) Respiratory resistance measured by an airflow perturbation device. Physiol Meas 20:21-35
2. Silverman NK, Johnson AT, Scott WH, Koh FC (2005) Exercise-induced respiratory resistance changes as measured with the airflow perturbation device. Physiol Meas 26(1):29-38
3. Vossoughi J, Johnson AT, Goldman M, Silverman NK. Effect of partial air-leak on respiratory resistance using the Airflow Perturbation Device (APD). In: Proceedings of the 32nd Northeast Bioengineering Conference, pp 205-206
4. Vossoughi J, Johnson AT, Silverman NK (2006) In-home hand-held device to measure respiratory resistance. Distributed Diagnosis and Home Healthcare (D2H2):12-15
5. Silverman NK, Johnson AT (2005) Design for stand-alone airflow perturbation device. International Journal of Medical Implants and Devices 1(3/4):139-148
Mechanism and Direct Visualization of Electrodeposition of the Polysaccharide Chitosan

Yi Cheng1, Xiaolong Luo2, Jordan Betz1,3, Omar Bekdash1,3, and Gary W. Rubloff1,4

1 Institute for Systems Research (ISR), University of Maryland, College Park, MD, USA
2 Center for Biosystems Research, University of Maryland Biotechnology Institute, College Park, MD, USA
3 Fischell Department of Bioengineering, University of Maryland, College Park, MD, USA
4 Department of Materials Science and Engineering, University of Maryland, College Park, MD, USA
Abstract— Chitosan, a biocompatible and bioactive polysaccharide biopolymer derived from the deacetylation of chitin, has attracted much attention recently due to its electro-responsive gelation property. This property allows chitosan hydrogel to be assembled from aqueous solution onto a cathode surface in response to an electrical signal. To visualize in situ and quantitatively characterize the electrodeposition of the chitosan hydrogel, we developed a unique transparent fluidic system with paired sidewall electrodes. This system allows us to demonstrate and directly measure the time-dependent thickness and density of the deposited hydrogel. Based on the results, we explain the electrogelling mechanism and interpret the dominant causes responsible for the formation and the density distribution of the deposited chitosan hydrogel. This report provides important guidelines for pursuing applications of electrodeposited chitosan hydrogel in integrated biochips and bioelectronic devices.

Keywords— Chitosan, electrodeposition, hydrogel, gelation, biopolymer
I. INTRODUCTION

There is significant interest in creating rapid, convenient, integrated microscale and nanoscale biosensing systems that couple the capabilities of biology for selective detection with the power of microelectronics for signal transduction. The ability to pattern, assemble, and functionalize chemical and biological molecules onto a desired inorganic surface with high spatial resolution, down to the micrometer or even nanometer scale, as well as the capability to precisely control molecular synthesis on miniaturized electronic platforms, has become increasingly important for creating novel functional devices or integrated systems such as bio-microelectromechanical systems (bioMEMS), biosensors, biochips, and labs-on-a-chip. Researchers are constantly evaluating candidates for biodevice interfaces that are capable of coupling biological and non-biological components and are easily assembled on-site for effective transduction of biological recognition into measurable physical signals. Chitosan, a distinctive type of polysaccharide biopolymer that comes from the deacetylation of
chitin, has become one of the prime candidates for this application. Chitosan offers unique physical and chemical properties which make it widely used in agriculture [1], liquid filtration processes [2] and biomedical applications [3]. Its pH-responsive solubility, attributed to the pKa value (~6.3) of the primary amine groups of chitosan [4], allows chitosan hydrogel to be assembled from aqueous solution onto a cathode surface in response to an electrical signal. The nucleophilic properties of chitosan, attributable again to its primary amine functional groups, make chitosan highly bio-reactive. The soluble-insoluble transition point around neutral pH makes chitosan an ideal carrier for assembling soft biological subjects on any conductive surface with three-dimensional spatial control [5]. In fact, electrodeposited chitosan has served as a reproducible scaffold for the assembly of versatile biological components including proteins [6], nucleic acids [7], viruses [8] and catalytically active enzymes [9]. Although there is an increasing number of reports on the fabrication and applications of functional devices [10, 11] or hybrid composites [12, 13] which utilize electrodeposited chitosan, the origin of its electro-stimulated deposition and the fundamental compositional properties and internal conformation of the as-grown hydrogel in aqueous form remain unclear. To better understand the kinetics of the gelation process and the materials properties of chitosan in the hydrated state, we investigated the deposition mechanism by directly visualizing the deposition process using sidewall electrodes in a fluidic channel. By analyzing the captured optical microscopic images of the deposited chitosan, the thickness and density of the deposited chitosan hydrogel were visualized and measured. From these results we were able to draw conclusions on the mechanism of electrodeposition.
II. RESULTS AND DISCUSSION

A. Sidewall electrodes in transparent fluidic channel

For direct visualization of the electrodeposition process of chitosan hydrogel, we employed a transparent fluidic channel system with integrated sidewall electrodes. Fig. 1a
shows the three-dimensional schematic view of the device structure. It consisted of two thin layers of polydimethylsiloxane (PDMS) as the bottom and ceiling of the fluidic channel and two glass slides as the sidewalls. Metallic stripes on the channel sidewalls were patterned as the sidewall electrodes by angled thermal evaporation of Cr (10 nm) and Au (100 nm) with a shadow mask, as illustrated in fig. 1b. The channel was 1 mm high and 0.5 mm wide, and the active sidewall electrode area in the fluidic channel was 0.5 mm × 0.5 mm. The glass slides and PDMS were permanently bonded by oxygen plasma treatment. The light source was located underneath the device to monitor the transmitted light intensity change due to the sol-gel transition. This paired sidewall electrode system enabled in situ visualization and further characterization of chitosan electrodeposition.
Fig. 1 (a) Schematic diagram of a transparent fluidic channel structure. The glass slides are placed side by side (with a separation of 1 mm) to form the sidewalls of the fluidic channel and sandwiched by two thin layers of solid PDMS as the channel bottom and ceiling. (b) Schematic illustration of angled thermal evaporation of gold onto a glass slide with a shadow mask to define the sidewall electrodes (anode and cathode).
B. Visualization, thickness and density characterization of electrodeposited chitosan hydrogel

Acidic chitosan polyelectrolyte solution with a concentration of 0.5% (w/v) was prepared by dissolving chitosan powder with HCl and DI water. The pH of the solution was adjusted to 5.3 so that the chitosan was fully dissolved and most of the amine groups on the chitosan chains were protonated. The fluidic system was filled with the acidic chitosan solution under static conditions. Fig. 2a-d shows the captured optical micrographs of the chitosan solution during electrodeposition, recorded at time frames of 0, 40, 80, and 120 seconds at an 8 A/m² current density. A brown color was observed in the chitosan gel while the chitosan polyelectrolyte solution was transparent. The brightness level was studied to examine the hydrogel density.
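As a small worked example (an illustrative calculation, not from the paper), the galvanostatic drive current implied by the stated conditions follows directly from the current density and the active electrode area:

```python
# Drive current for a constant 8 A/m^2 over the 0.5 mm x 0.5 mm sidewall electrode.
area_m2 = 0.5e-3 * 0.5e-3   # active electrode area = 2.5e-7 m^2
current_A = 8.0 * area_m2   # = 2.0e-6 A, i.e. a ~2 uA constant current
print(current_A)
```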
Fig. 2 Bright field optical microscopic images show the chitosan hydrogel growth at the cathode surface with a constant 8 A/m² current density at: (a) 0 s, (b) 40 s, (c) 80 s, and (d) 120 s during deposition using 0.5% (w/v) chitosan polyelectrolyte solution. (e) Time dependent brightness level (arbitrary unit) of a spot 10 µm above the cathode (indicated with a yellow dot in fig. 2d) and the time dependent hydrogel thickness profile. (f) Brightness line profile (indicated by a yellow dashed arrow in fig. 2d) of the hydrogel from the bottom (cathode surface) to the top (gel/electrolyte interface) after 120 s deposition with a constant 8 A/m² current density

To demonstrate the dynamic density change of the gel due to the sol-gel transition, we recorded the time dependent brightness level of a specific spot located 10 µm above the cathode surface (indicated with a yellow dot in fig. 2d). Fig. 2e shows the brightness level of the spot (black) and the hydrogel thickness (red) as a function of electrodeposition time. Fig. 2f shows the brightness value line plot from the core (center of the cathode surface) to the shell of the gel (gel/electrolyte interface). The results suggest that the electrodeposited hydrogel is not uniform and that the density at a specific spot depends on its distance from the cathode. The outline of the gel maintains a half-oval shape. The shape of the gel/solution interface is similar to that of the simulated equipotential line between the cathode and anode (not shown here).
To explain the observed phenomenon, we proposed the following hypothesis: the electrolysis of water molecules at the cathode creates a local accumulation of OH- anions, which leads to an elevated pH level at the surface of the cathode. As mentioned earlier, the soluble-insoluble transition point, or the pKa value of the primary amine on chitosan, is about 6.3. Such a pH increase induces the neutralization of positively charged chitosan; therefore, we observed a brown gel growing on the cathode surface. The expansion of the hydrogel can be explained by the migration of OH- ions from cathode to anode due to the ionic concentration gradient and the electric field. The density gradient of the hydrogel is possibly due to the different number of neutralized chitosan molecules which are attracted by the negative potential of the cathode. They tend to form a denser gel structure in the higher pH region due to pH-dependent interactions between deposited chitosan layers [14].
III. CONCLUSIONS

To conclude, we have visualized and characterized the electrodeposition of chitosan hydrogels using paired sidewall electrodes in a transparent fluidic channel. The thickness and density profiles of the hydrogel during deposition were studied, and a rational explanation is proposed to interpret the observed phenomenon. The reported method, the technique of building a sidewall-electrode-enabled transparent fluidic system, and the employed analytical method demonstrate an in situ, non-destructive way of characterizing the internal conformation of chitosan hydrogel. Such techniques and methods are of significant interest and will be potentially useful in creating, controlling, and optimizing complex polymeric structures for biomedically related applications such as cell entrapment, drug delivery, and tissue engineering.

ACKNOWLEDGMENT

We thank Dr. Gregory F. Payne for valuable discussions. We acknowledge the support of the Maryland NanoCenter and its FabLab. This work was supported by the Robert W. Deutsch Foundation and NSF-EFRI NSF-SC03524414.

REFERENCES
1. Hirano S (1999) Chitin and chitosan as novel biotechnological materials. Polym Int 48:732-734
2. Peniche-Covas C, Alvarez L W, Argüelles-Monal W (1992) The adsorption of mercuric ions by chitosan. J Appl Polym Sci 46:1147-1150
3. Pusateri A E, McCarthy S J, Gregory K W et al. (2003) Effect of a chitosan-based hemostatic dressing on blood loss and survival in a model of severe venous hemorrhage and hepatic injury in swine. J Trauma-Injury Infection and Critical Care 54:177-182
4. Strand S P, Tommeraas K, Varum K M et al. (2001) Electrophoretic light scattering studies of chitosans with different degrees of N-acetylation. Biomacromolecules 2:1310-1314
5. Buckhout-White S L, Rubloff G W (2009) Spatial resolution in chitosan-based programmable biomolecular scaffolds. Soft Matter 5:3677-3681
6. Chen T H, Small D A, Wu L Q et al. (2003) Nature-inspired creation of protein-polysaccharide conjugate and its subsequent assembly onto a patterned surface. Langmuir 19:9382-9386
7. Yi H, Wu L-Q, Ghodssi R et al. (2003) A robust technique for assembly of nucleic acid hybridization chips based on electrochemically templated chitosan. Anal Chem 76:365-372
8. Yi H M, Nisar S, Lee S Y et al. (2005) Patterned assembly of genetically modified viral nanotemplates via nucleic acid hybridization. Nano Lett 5:1931-1936
9. Luo X L, Lewandowski A T, Yi H M et al. (2008) Programmable assembly of a metabolic pathway enzyme in a pre-packaged reusable bioMEMS device. Lab Chip 8:420-430
10. Luo X L, Xu J J, Du Y et al. (2004) A glucose biosensor based on chitosan-glucose oxidase-gold nanoparticles biocomposite formed by one-step electrodeposition. Anal Biochem 334:284-289
11. Powers M A, Koev S T, Schleunitz A et al. (2005) A fabrication platform for electrically mediated optically active biofunctionalized sites in BioMEMS. Lab Chip 5:583-586
12. Pang X, Zhitomirsky I (2005) Electrodeposition of composite hydroxyapatite-chitosan films. Mater Chem Phys 94:245-251
13. Bardetsky D, Zhitomirsky I (2005) Electrochemical preparation of composite films containing cationic polyelectrolytes and cobalt hydroxide. Surf Eng 21:125-130
14. Claesson P M, Ninham B W (1992) pH-dependent interactions between adsorbed chitosan layers. Langmuir 8:1406-1412

Author: Gary Rubloff, Institute for Systems Research, University of Maryland, College Park, MD, USA. Email: [email protected]
Chito-Cotton: Chitosan Coated Cotton-Based Scaffold

O. Agubuzo1, P. Mehl2, O.C. Wilson1, and R. Silva1

1 Department of Biomedical Engineering, Catholic University of America, Washington, DC
2 Department of Physics, Catholic University of America, Washington, DC
Abstract— Chitin, the most abundant polysaccharide in nature alongside cellulose and starch, is a long-chain structural polymer which can be found in the exoskeletons of crustaceans (such as crab and shrimp). Chitosan is a linear polysaccharide and deacetylated derivative of chitin. It is highly biocompatible and biodegradable, is cationic, can form insoluble complexes (similar to collagen), and can easily be fabricated into bulk porous scaffolds, films, and beads. Due to its ample versatility, chitosan has a multitude of biomedical applications in areas such as tissue engineering, drug delivery, wound healing, and bone/periodontal substitutes. Early experiments conducted with the aim of producing organized and porous 3-D matrix chitosan scaffolds proved to be quite challenging. As a result, a new scaffold composed of cotton coated with chitosan was engineered; the cotton, serving as an organized and porous 3-D matrix, combined with chitosan, serving as an environment for cell attachment and growth, could potentially be a highly used scaffold. Results showed that at a concentration of 2 wt%, cotton coated with low molecular weight chitosan provided a better adhesive surface for cell attachment to the scaffold. On comparison, the low molecular weight scaffold allowed the attachment of more cells than the medium molecular weight scaffold. The results from this test were then compared to a range obtained from control tests in order to measure its success. The range was attained by performing a negative control test with uncoated cotton (one extreme) and a positive control test with poly-lysine coated cotton (the other extreme). In this paper, the developed protocol is briefly reviewed, the unique design of a device made solely for this project is discussed, the results from preliminary tests with the scaffolds are analyzed, and finally, impending work to be done, questions yet to be answered, and potential biomedical engineering applications of chito-cotton are reviewed.

Keywords— cotton, chitosan, poly-lysine, 3-D matrix, chitin.
I. INTRODUCTION

Chitin [(C8H13O5N)n] is a long-chain structural polymer which can be found in the exoskeletons of crustaceans (such as crab, shrimp, or lobster), the cuticles of insects, the radulae of mollusks, the beaks of cephalopods (such as squid or octopuses), and the cell walls of fungi. Next to cellulose, chitin is the most abundant biopolymer [1]. Chitin is quite similar to cellulose; it is composed of β-(1,4)-linked units of N-acetyl-D-glucosamine (or GlcNAc), and unlike cellulose, one hydroxyl group is replaced by an acetamide group [2]. The
derivatives of chitin have many properties that make them useful for numerous applications ranging from biomedicine and agriculture to cosmetics and industry [3]. Chitosan is a linear polysaccharide and deacetylated derivative of chitin [4]. Chitosan consists primarily of repeating units of β(1-4) 2-amino-2-deoxy-D-glucose obtained through the N-deacetylation of chitin [5]. As revealed by past research, chitosan is highly biocompatible and biodegradable, is cationic in nature, has bacteriostatic, hemostatic, and cholesterol-lowering properties, can form insoluble complexes, and can easily be fabricated into bulk porous scaffolds, films, and beads [5]. As the best known derivative of chitin, chitosan has a multitude of scientifically demonstrated biomedical applications, confirming its ample versatility and promise in areas such as tissue engineering, drug delivery, wound healing, and bone/periodontal substitutes [6]. In tissue engineering, chitosan cross-linked with a porous collagen-hydroxyapatite composite could decrease the immunogenicity of the collagen, increase the capability for cell attachment, improve the histocompatibility of an artificial bone matrix, improve the absorption of growth factor by the matrix, and promote the repair of bone defects [4]. In drug delivery, chitosan-xanthan microspheres have been studied and possess better degradation control, which would be ideal for a potential delivery system [6]. In wound dressing, a bi-layer of chitosan-carboxymethyl chitin is used; the layer of chitosan contains an antimicrobial agent, while the layer of carboxymethyl chitin functions as a fluid-absorbing layer [6]. In bone/periodontal substitutes, a chitosan material chemically modified with azide functionality has been shown to be a non-cytotoxic adhesive with strength comparable to fibrin glue. It has also been reported that a chitosan sponge releases a growth factor that is suitable for periodontal bone regeneration [6]. One of the most common concerns that arises when attempting to relate a material to tissue engineering applications is the design of the scaffold: does the scaffold need to be big or small, 2-D or 3-D, porous or non-porous, elastic or inelastic, hard or brittle or soft, short or long lasting (biodegradability), etc. One popular tissue engineering application of chitosan is chitosan-based fibers used to produce 3-D mesh scaffolds; such a scaffold is
essential for use in cartilage tissue engineering [7]. While the chitosan alone is innately biocompatible and bioabsorbable, the mesh exhibits the needed predictable porosity and degradation rate [7]. The 3-D mesh scaffold facilitates new tissue ingrowth while presenting mechanical characteristics matched to those of the native tissue, thus increasing the chances for the reparative process to be compatible with the host's tissue physiology [7]. The same 3-D mesh scaffold also has applications in bone tissue engineering and intervertebral disc tissue engineering [6]. The objective of this research was to design a biocompatible, 3-D porous and ordered scaffold that would present an environment suitable for cell attachment and growth. This research began by producing chitosan fibers of varying molecular weights (MW; low, medium, and high) following the process described by Tuzlakoglu et al. [7]. After concluding that low MW chitosan worked best for this application, attempts were made to produce an actual 3-D porous scaffold using the same fibers. Unfortunately, without the equipment and machinery needed to fabricate straight, continuous strands without the occurrence of air pockets, this proved to be a more challenging task than initially anticipated. At this point, the concept of using woven fabric as the foundation of the scaffold came to mind. With the fabric from an ordinary t-shirt or scarf already woven into a 3-D porous cloth, the same cloth could simply be soaked in chitosan, resulting in an ordered and highly porous (matrix) scaffold. Using fabrics such as cotton, silk, nylon, or rayon along with chitosan would make the scaffold more biocompatible. Due to its attainability, cotton from a plain white t-shirt was made the fabric of choice.
II. MATERIALS AND METHODS

A. Production of Scaffolds

The chitosan solutions were prepared by stirring (overnight) a mixture of 2 g of chitosan (of the desired MW), 2 mL of acetic acid, and 96 mL of H2O. The mixture was then centrifuged for 15 minutes at 20,000 RPM. The chitosan solution (supernatant) was then decanted and stored until needed. The desired pieces of cotton from a plain white t-shirt were cut into 2 cm × 2 cm pieces. The cotton pieces were then placed in a sealed container with 70% EtOH and left overnight. The alcohol was then poured out and the cotton pieces were left to dry within the container under a hood for a couple of hours. Once dried, the cotton pieces were soaked in a solution of chitosan of a specific MW and percentage by weight (2 wt%) for 30 min to 1 hour. After soaking, the pieces were spread/stretched and clamped onto cardboard, where they were left to dry overnight. To avoid the scaffolds sticking to the cardboard, excess chitosan was
removed by either squeezing the scaffolds before clamping them to the cardboard or by dabbing the scaffolds with a paper towel after clamping. Once dried, the scaffolds were soaked overnight in a solution of NH4OH for cross-linking. Following this, the scaffolds were rinsed with deionized water (dH2O) and then left in 70% EtOH until needed. While soaking the scaffolds in alcohol, it was noticed that they coiled (folded), resulting in an undesirable structure for the cells to attach to. To correct this, clamps consisting of two drilled plexi-glass sheets and a pair of plastic screws were made (Fig. 1). These clamps were designed to maintain the flat structure of the scaffold while within the alcohol and medium, and also to allow the uninterrupted application of cells to the scaffolds.
Fig. 1 Observation clamp designed to maintain the flat structure of the scaffolds and allow planting of the cells on the scaffolds

The observation clamps (Fig. 1) were rinsed in 70% EtOH before placing the scaffolds inside them. Once the scaffolds were placed within the observation clamps, the clamps (with scaffolds) were placed in a container of 70% EtOH and left there until needed for cell attachment. When ready, the scaffolds were removed from the alcohol and placed in a sterile dish to dry for a couple of hours. Media was then added into the Petri dish until the scaffold was submerged; after submerging, an additional 1 mL of media was added. The scaffolds were then placed in an incubator at 37°C for 2-3 days to check for gross contamination; if no contamination was present, the scaffolds were ready.

B. Preparation of Cells

The cell culture used was MC3T3-E1 (a mouse preosteoblast adherent cell line). The cells were grown in αEMEM media supplemented with 10% fetal bovine serum and glutamine. The cell culture was split every 4-5 days. To detach the cells, a T75 flask containing the cells in the media (αEMEM) was brought under a sterile hood to
remove the media. The cells were then rinsed with PBS (phosphate buffered saline) or with trypsin solution [3 mL]. After removing the rinse, 3-4 mL of trypsin versene (EDTA) was added. The flask was then returned to a 37°C incubator for 10-20 minutes. The cells do not have a tendency to become rounded; they have to be forced to detach by shaking the flask. Once detached, the cells were checked under the microscope and counted. In order to count the cells, a mixture of 90 µL of cell solution and 10 µL of trypan blue solution was prepared in an Eppendorf tube and used to stain dead cells. The dye penetrates dead cells, which appear blue. Live cells exclude trypan blue; it is absorbed, but later ejected, by living cells. About 15 µL of the stained cell mixture was placed between a hemacytometer and a cover slip. On the microscope, the 9-square grid was located; once located, the cells within the 4 corner squares and the center square [5 squares in total] were counted. The cell concentration [cells/mL of original solution] is given by the standard hemacytometer formula:

cell concentration = (cells counted / squares counted) × dilution factor × 10^4    (1)

200,000 cells per well were desired. Using the original cell solution, the required volume was then determined. As a safety measure, extra volume was added for an additional scaffold: with "n" scaffolds, we planned with (n+1)×200,000 cells. The corresponding volume was placed in a 10 mL centrifuge tube and the cells were centrifuged at 2,000 RPM for 5 minutes. The pellet, if present, was checked and the supernatant was removed by decanting slowly. (n+1)×100 µL of media was added, and the cells were incubated for 2-3 hours at 37°C. An optional additional sterilization step was to expose the scaffolds to UV radiation for 30 minutes.

C. Cell Attachment

After mixing, 100 μL of the cell preparation was placed within the well above the scaffold. Poking of the scaffolds was strictly avoided. The Petri dish with the scaffold was then placed inside the incubator. When needed (for example, Day 2, Day 9, etc.), the Petri dish was retrieved.
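The counting and seeding arithmetic in section B reduces to a few lines; in the sketch below the total count is a hypothetical value, while the 5 squares, the 90:10 trypan blue dilution, and the (n+1)×200,000 cell target come from the protocol above.

```python
# Hemacytometer count (Eq. 1) and the seeding volume derived from it.
cells_counted = 250            # hypothetical total over the 5 squares counted
squares = 5
dilution = 100.0 / 90.0        # 90 uL cell solution + 10 uL trypan blue
conc = cells_counted / squares * dilution * 1e4   # cells/mL, ~5.6e5 here
n_scaffolds = 4                                   # hypothetical batch size
volume_mL = (n_scaffolds + 1) * 200_000 / conc    # ~1.8 mL of cell suspension
print(conc, volume_mL)
```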
D. Preparation of Stain

The scaffolds were removed from the observation clamps and placed inside a 2.5 cm Petri dish with the staining solution (calcein AM, 4 mM: 0.7 μL of calcein AM in 1 mL of PBS). After waiting for 20 minutes, the scaffolds were removed and placed on top of a glass slide. To prevent drying, 100 μL of staining solution was added to the scaffolds. A cover slip was then placed on top of the scaffold, and the slide was placed under the microscope for viewing.

III. RESULTS

My first task was to test the effects of the MW of chitosan on cell attachment. Scaffolds were prepared using 2 wt% low and medium MW chitosan solutions; high MW was excluded due to its difficulty in preparation. This required performing several cell attachment tests over 3-week periods (Day 2, Day 9, Day 16, etc.). After viewing the scaffolds via light microscopy, on average, more cells were identified qualitatively on and within the fiber meshes of the low MW scaffolds than on the medium MW scaffolds. Due to this observation, all further cell attachment tests were focused on low MW scaffolds.

Fig. 2 Light microscopy images of a plain uncoated cotton scaffold. (a) 7 days after initial cell planting in normal light. (b) 14 days after initial cell planting in fluorescence

A control test was then conducted; this involved testing a negative and a positive control. In the negative control test, cells were attached to a plain uncoated piece of cotton (from the same plain white t-shirt); 2-9 days after the cells were transplanted, no cells were visible on the extremities of the cotton fibers or within the mesh of whiskers (Fig. 2). For the positive control test, cells were attached to a piece of cotton coated with poly-lysine; as expected, cell attachment was drastically greater than even on the chitosan coated scaffolds. Under the light microscope, it was revealed that the cells even managed to situate themselves well within the fiber meshes (Fig. 3).

Fig. 3 Light microscopy images of a poly-lysine coated cotton scaffold 21 days after initially planting cells. The image on the left (a) is in normal light, while the image on the right (b) is with fluorescence to detect the stained cells
IV. DISCUSSION
After the cell attachment tests, it was observed that, on average, more cells attached to the low MW scaffolds than to the medium MW scaffolds; very few or no cells attached to the medium MW scaffolds. From this, it is possible that the greater the MW of the chitosan, the less suitable the environment is for the cells to attach and eventually begin replicating. While the chitosan coated scaffolds did better than the plain uncoated cotton, they were not as successful as the poly-lysine coated cotton. Poly-lysine is a small polypeptide of the amino acid L-lysine that is used to coat tissue cultureware as an attachment factor to improve cell adherence. As a result, most of the cells transplanted onto the poly-lysine coated cotton remained attached and alive, thus making this the positive control. Eventually, the attached cells even began to proliferate. With the two extreme conditions for cell attachment, a range was available against which the chitosan coated scaffolds could be compared; a reasonably accurate placement of chitosan on the scale/range could be made.
V. CONCLUSION

As stated earlier, the objective of this research was to design a biocompatible, 3-D porous and ordered scaffold that would present an environment suitable for cell attachment and growth. Over the course of this research, cotton was shown to provide the structure that would be essential for a scaffold; but while sufficient in structure, cotton provided an inadequate environment for cell attachment and growth. When coated with chitosan, cells attached to the scaffold, and the molecular weight difference between low and medium MW chitosan had an effect, making low MW chitosan most suitable for cell attachment. While poly-lysine proved to be best suited for cell attachment, low MW chitosan still has much potential in its ability to sustain attached cells. With more work, all the possible applications of chito-cotton could be revealed. While only prospects at this time, some of these possible applications include the use of chito-cotton in skin grafting procedures, organ growing, and even wound bandaging (in which chitosan is already applied). In conclusion, this research showed that chito-cotton is a successful biocompatible, 3-D porous and ordered scaffold apt for cell attachment and growth, definitely possessing the potential for further advancement in tissue engineering.

ACKNOWLEDGMENT

Special thanks to Dr. Otto Wilson for not only providing me with the lab and equipment, materials and instruments, and funding, but also the guidance that encouraged me to embark on this research. Thanks to Dr. Patrick Mehl for his assistance in cell attachment to the scaffolds as well as aiding me in developing my protocol and lending his lab equipment. Great thanks to Mr. Don Smolly for his craftsmanship in producing the observation clamps I designed as well as making needed modifications. The authors would also like to acknowledge the National Science Foundation (NSF) for supporting this work (DMR 0645675).

REFERENCES
1. Wang Y, Zhang L, Hu M et al. (2007) Synthesis and characterization of collagen-chitosan-hydroxyapatite artificial bone matrix. Journal of Biomedical Materials Research 86A:244-252. DOI: 10.1002/jbm.a.31758
2. Morin A, Dufresne A (2002) Nanocomposites of chitin whiskers from Riftia tubes and poly(caprolactone). Macromolecules 35:2190-2199. DOI: 10.1021/ma011493a
3. Shelma R, Paul W, Sharma C (2008) Chitin nanofibers reinforced thin chitosan films for wound healing application. Trends in Biomaterials & Artificial Organs 22(2):107-111
4. Di Martino A, Sittinger M, Risbud M (2005) Chitosan: a versatile biopolymer for orthopedic tissue engineering. Biomaterials 26:5983-5990. DOI: 10.1016/j.biomaterials.2005.03.016
5. Nettles D, Elder S, Gilbert J (2002) Potential use of chitosan as a cell scaffold material for cartilage tissue engineering. Tissue Engineering 8:1009-1016
6. Khor E (2002) Chitin: a biomaterial in waiting. Current Opinion in Solid State and Materials Science 6:313-317. DOI: 10.1016/S1359-0286(02)00002-5
7. Tuzlakoglu K, Alves C, Mano J et al. (2004) Production and characterization of chitosan fibers and 3-D fiber mesh scaffolds for tissue engineering applications. Macromolecular Bioscience 4:811-819. DOI: 10.1002/mabi.200300100
Author: Obinna Agubuzo, Catholic University of America, 620 Michigan Ave NE, Washington, USA. Email: [email protected]
Author: Dr. Otto C. Wilson Jr, Catholic University of America, 620 Michigan Ave NE, Washington, USA. Email: [email protected]
Author: Dr. Patrick Mehl, Catholic University of America, 620 Michigan Ave NE, Washington, USA. Email: [email protected]
Author: Roberto Silva, Catholic University of America, 620 Michigan Ave NE, Washington, USA. Email: [email protected]
Effects of Temperature on the Performance of Footwear Foams: Review of Developments

M.R. Shariatmadari, R. English, and G. Rothwell

Liverpool John Moores University, School of Engineering, Liverpool, UK

Abstract— The human foot is a multifunctional system that serves as the primary physical interaction between the body and the environment during gait. Footwear foam components maintain efficient foot function, which is essential for daily living, and provide cushioning by acting as a protective layer between the foot and the ground that attenuates the shock of impact. The mechanical characteristics of footwear foam materials are highly temperature dependent: the lower the temperature, the less elastic the material becomes. Consequently, it is reasonable to expect different mechanical and cushioning characteristics for the same shoe under different temperature conditions. Although the great temperature sensitivity of footwear foam materials is well recognized, as are the clinical implications of excessive temperature rise in footwear during activities for lower extremity injuries and ultimately amputation, very few studies have demonstrated this dependency. This study reviews the developments made on the temperature effect on the performance of footwear foams and assesses the consequent clinical complications. The search strategy was constructed around the temperature dependency of elastomeric foams, identifying the sources which affect footwear foam temperature as well as the important clinical implications. This study is intended to provide a review of such factors in order to aid future materials development and product design. These findings will be useful for those individuals with a history of lower extremity complications, particularly diabetes.

Keywords— Footwear Foams, Diabetes Mellitus (DM), Hyperfoam, Temperature.
I. INTRODUCTION

The human foot is a complex articular body formed by a variable framework of bones and cartilage and by several muscles responsible for the quality of support and the kinetics of movement [3]. Elevated temperature is another important factor in causing foot injuries, particularly among diabetics [13]. Footwear research has mainly concentrated on shoe stability, shock absorption, and pressure reduction, and it is noticeable that there are only a few papers in the literature concerned with temperature evolution in shoes; in addition, these have concentrated essentially on work shoes [14-17]. The reason may be that the multitude of layers of materials that constitute the shoe (upper
part, cushioning and different insoles) makes modeling of the heat transfer problem very complex. However, footwear performance can easily be affected by varying temperatures [4-8]. Since heat generation and heat flow should be regarded as part of a complete model of shoe foam performance, and given the absence of adequate material data for footwear (orthotic intervention) to simulate the effects of varying temperatures, these review data are important for temperature modulation of footwear foams to enhance the wearer's comfort and performance, and to enable suitable orthotic footwear to be designed for individuals such as diabetics to reduce the stresses resulting from the effects of temperature, thus preventing injury or frostbite and providing better circulation to the lower extremity. Therefore, the aim of the present study is to present a full review of the developments on the effect of temperature on the performance of footwear foam materials, taking into account the clinical complications.
II. LITERATURE REVIEW
A. Footwear Materials

Footwear foam materials are shock absorbers made from elastomeric closed-cell foam elements which are low-cost, lightweight, have the ability to conform to complex contours, and can recover from large deformations. These foams are widely used in footwear [19]. The mechanical, shock absorption and physical properties of elastomeric foams can be greatly influenced by temperature variation [4-8]. Even a slight rise in room temperature can affect measured firmness and recovery rates. Recovery rate has been positively correlated with heat, so that as the foam increases in temperature, its pliability, compression and recovery rates increase [8]. Recent research [4 & 5] showed that heat has a direct effect in attenuating footwear foam materials and subsequently affects the forces in the foot. These authors showed that footwear with softer foam materials will bottom out when loaded, producing higher impact forces than firmer shoes that do not bottom out, and also that the softness of footwear foams results in a longer contact time, which is the time period that a foot
is in contact with the ground, giving correspondingly lower peak reaction forces transmitted during foot contact with the ground [1 & 2]. A longer contact time between the foot and the ground can increase the stress on the knee joint and cause knee pain over time [24 & 25]. The three main factors which contribute towards changing the temperature inside the footwear are:
• Foot temperature [26 & 27]
• Friction between the foot and footwear [31-35]
• Environmental (air and ground) temperature [6 & 7]
B. Foot Temperature

Foot temperature variation plays an important but as yet unrecognised role in the variation of in-footwear material temperatures [27]. The human body is a highly sophisticated regulating system: through its active metabolism, it strives to maintain its core temperature at a value close to 36.9°C [28]. During prolonged muscular effort, substantial sweat losses occur, allowing the body to evacuate the heat produced during the effort. As reported by [29], the mechanical efficiency of human muscle is very low, with about 75% to 80% of the expended energy transformed into heat. At such a rate, the core temperature could reach high values, and the organism could not survive in the absence of regulating mechanisms. Thermal regulation of the body [3] is controlled through the extremities, such as the feet, and any minor fluctuations in core body temperature can cause significant foot health problems such as foot injuries, particularly among diabetics [30]. The two most common foot pathologies which affect in-shoe temperature are cold and hot feet [44-49].

C. Friction between the Foot and Footwear

Shoe friction is unquestionably another cause of foot discomfort and in-shoe temperature elevation [31-35]. The friction between the foot and the footwear produces heat and induces a non-homogeneous elevation of temperature in the foot. This elevation varies according to the kind of quasi-static and dynamic movements and the ability of the footwear materials to evacuate the heat. With the use of footwear, the heat is transferred to the shoe components (midsole and insole), resulting in temperature increases in the footwear [35]. Foot-footwear friction is one of the most important factors in the local elevation of skin temperature and has been implicated as an early sign of impending ulceration, especially in regions of high shear stress [36]. Researchers [37] suggested that the heat internally generated by repetitive compressive and shear force
impacts are the major factor in foam temperature changes during activities.

D. Environmental Effect

Studies have suggested that the heat exchanged between the foot and the environment through the shoe is also important in the mechanisms of thermal regulation and in thermal comfort sensation [6 & 7]. Indeed, the foot is the only contact point between the human body and the environment. Air temperature, radiant temperature, humidity and air velocity are the four environmental factors [38] which affect responses to thermal environments. Environmental temperature is the most important factor affecting the mechanical properties and energy absorption of footwear foams. Since footwear foams have a melting temperature Tm of about 70°C [6], their mechanical properties change as Tm is approached, and consequently the pliability, compression and recovery rates of the foams increase. In colder environmental conditions, foams tend to become firmer or even stiff depending on their formulation, and cold ambient temperatures significantly reduce the shock attenuation of commonly used running shoes [39]. Recent research [4-7] showed that cold has a direct effect in stiffening footwear foam materials. Footwear foams are susceptible to changes in shock-attenuating properties, such as functional softening, as a result of exposure to different environmental temperatures, resulting in less stability [11]. Since previous research has shown that footwear foam temperature is affected by ambient environmental temperature [12], it is probable that the shock attenuation of these foams may change under different environmental conditions. The temperature dependent shock attenuation has potentially significant clinical implications for wearers under extreme environmental temperatures [18]. Although there is a strong possibility that the properties of footwear foams will change as a result of changing environmental temperature [11 & 12], the amount of cooling or warming that might occur when shoes are worn outside during activities remains poorly defined, and very few studies have investigated this hypothesis on commercial shoes of different midsole construction designs. Surprisingly, only four previous studies have directly examined the phenomenon of temperature dependent shock attenuation [4-7]. One study [6] measured the temperature rise in midsole foam for runners under a variety of conditions and concluded that there is heat transfer from the tarmac road surface. The same study examined the effect of environmental temperature changes on the shock attenuation of three shoes with different midsole foams of
Their data demonstrated that midsole stiffness did increase with decreasing environmental temperature. A further investigation [7] evaluated the changes in cushioning properties of common midsole foams exposed to a wide range of ambient temperatures, from -20°C to 50°C. They hypothesized that shock attenuation would significantly decrease with decreasing temperature. Furthermore, they concluded that different shock absorption designs might demonstrate differential temperature sensitivity as a function of materials and construction. These findings [6,7] suggest that although the midsole temperature is not identical to the environmental temperature during an actual outdoor activity, it is still heavily influenced by it. An investigation into the effect of temperature on the performance of a wide range of commonly used footwear midsole and insole foams exposed to ambient temperatures of 10°C to 45°C was carried out [4,5]. Considering the wide range of extreme weather conditions in which today's athletes train, their running shoes are taken through extreme temperatures that may be very close to the range tested. These authors also hypothesized that the shock attenuation of footwear foams changes significantly with varying temperature. The energy and shock absorption of a foam material depend on its cushioning properties. Therefore, lower shock absorbency (a firm foam) results in less energy being absorbed by the foam and more energy being transferred to the body during impact, necessitating greater muscle effort to absorb it. Softer foams, on the other hand, exhibit higher shock absorbency, and hence higher energy absorption by the foam and less energy returned to the body.
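The energy argument above can be made concrete with a minimal two-spring sketch: if the foam and the body's soft tissue deform in series under the same impact force, the stiffer element stores the smaller share of the impact energy. The linear-spring model and all stiffness and energy values below are assumptions for illustration only; real foams are nonlinear and rate dependent.

```python
# Minimal two-spring sketch of the energy argument above: foam and body
# tissue in series share the impact energy in inverse proportion to
# their stiffness, so a stiffer (colder) foam absorbs less and returns
# more load to the body. All values are assumed for illustration.

def foam_energy_share(k_foam, k_body=50e3):
    """Fraction of impact energy stored in the foam when foam and body
    deform in series (equal force F, so E_i = F^2 / (2 k_i))."""
    return (1.0 / k_foam) / (1.0 / k_foam + 1.0 / k_body)

E_IMPACT = 20.0  # J, assumed energy of a single heel strike

for label, k in [("warm, soft foam (100 kN/m)", 100e3),
                 ("cold, stiff foam (400 kN/m)", 400e3)]:
    e_foam = E_IMPACT * foam_energy_share(k)
    print(f"{label}: foam absorbs {e_foam:.1f} J, "
          f"body takes {E_IMPACT - e_foam:.1f} J")
```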
E. Clinical Implications
Varying temperature can reduce the attenuation, firmness and stability of shoe foams and consequently increase the risk of a multitude of foot injuries [6]. This is caused by the footwear losing its stability and shock absorbing properties, increasing the stress load on the lower extremity as a result of the change in its components. The clinical complications that occur as a result of high and low temperatures of the footwear elastomeric materials are reviewed below.
• Effect of High Temperature: High temperature makes the elastomeric materials softer [41]. The reviewed studies also suggest clinically important changes in foam function at higher temperatures. The midsole foam serves the additional function of ensuring rearfoot control and arch support. Although many manufacturers use different technologies to provide arch support, adequate midsole firmness serves as a foundation for the arch itself. At higher temperatures, such foam systems can become so malleable that they may bottom out during heel impact and early forefoot push-off, providing minimal cushioning to the wearer at those times during foot impact when the demand for cushioning is at its greatest [42].
• Effect of Low Temperature: Low temperature makes the footwear foams stiffer, and as a result the energy absorption of the material decreases [43]. Researchers [18] found that cold ambient temperatures significantly reduce the shock attenuation of commonly used running shoes. Low temperature injuries occur predominantly in the extremities, with the feet being at greatest risk [44].
III. DISCUSSION OF RESULTS
This review considered the effect of varying temperature on the performance of footwear materials by identifying the three factors, foot temperature, foot-footwear friction and environmental temperature, that are mainly responsible for temperature changes in footwear materials. The heat generated in the footwear as a result of these factors can vary the forces acting upon the foot. Environmental temperature was found to play a greater role in temperature elevation in footwear than body temperature or the heat generated by foot-footwear friction. Researchers have so far concentrated on experimental and analytical methods of offloading these forces in the foot, and the temperature factor has been ignored. These findings will be useful and have important clinical implications for individuals training in extreme weather environments, particularly those with a history of lower extremity complications such as diabetes, who are more vulnerable to injuries due to poor blood circulation and loss of sensation.
IV. CONCLUSIONS
The following conclusions were drawn:
• The elastomeric footwear foams are strongly temperature dependent: their mechanical and shock absorption characteristics, and consequently the load transmitted to the foot, can easily be influenced by environmental temperature, body temperature and foot-footwear friction.
• The environmental factor was found to play a more central role in the variation of in-shoe temperature than the other two factors.
In summary, both to prevent injuries and to facilitate running economy, shoes with appropriate foam hardness should be selected based upon the environmental temperature for different activities.
V. FUTURE WORK
In summary, the current data suggest how temperature affects the behavior of footwear foams. This is a clinically important topic and warrants further investigation, which involves:
• Determination of the contact time between the foot and the ground as temperature rise softens the footwear foams, and the subsequent clinical implications.
• Review of the developments on the effect of temperature and strain rates on footwear foams.
• Incorporation of temperature and strain rate dependent material properties into finite element models of foot-footwear interaction, in order to investigate the maximum stresses in the foot during extreme situations such as running and jumping.
• Development of design guidelines for suitable orthotic interventions to reduce plantar foot stresses.
REFERENCES
1. Cavanagh, P. et al (1993) Ulceration, Unsteadiness and Uncertainty: the Biomechanical Consequences of DM. J. Bio. 26:23-40.
2. Boulton, A.J. et al (1987) Abnormalities of Foot Pressure in Early Diabetic Neuropathy. Diabetic Medicine 4:225-228.
3. Grabiner, M.D. (1993) Current Issues in Biomechanics (M.D. Grabiner, editor). Champaign, IL: Human Kinetics Publishers.
4. Shariatmadari, M.R. et al (2009) Effects of Temperature on the Performance of Footwear Foams Subjected to Quasi-Static Compression Loading. IFMBE Proceedings 24:107-110.
5. Shariatmadari, M.R. et al, Effects of Temperature on the Performance of Footwear Foams Subjected to Quasi-Static Compression Loading. In progress.
6. Kinoshita, H. et al (1996) The Effect of Environmental Temperature on the Properties of Running Shoes. J. App. Bio. 12(2).
7. Mansour, Y. et al (2005) Effect of Environmental Temperature on Shock Absorption Properties of Running Shoes. Cli. J. of S. M. 15(3):172-176.
8. Mills, N. (2007) Polymer Foams Handbook: Engineering and Biomechanics Applications and Design Guide. pp 307-327.
9. Koch, M. (1993) Measuring plantar pressure in conventional shoes with the TEKSCAN sensory system. Biomed Tech 38:242-250.
10. Martinez-Nova, A. et al (2007) In-shoe system: Normal values and assessment of the reliability and repeatability. The Foot 17:190-196.
11. Ferry, J.D. (1980) Viscoelastic Properties of Polymers. New York: Wiley.
12. Nielsen, L.E. (1975) Mechanical Properties of Polymers and Composites. New York: Marcel Dekker.
13. Edmonds, M.E. (1986) The Diabetic Foot: Pathophysiology and Treatment. Cli. Endo. Meta. 15:889-916.
14. Shoes: A Celebration of Pumps, Sandals, Slippers & More. ISBN 0-7611-0114-4.
15. Pattison, A., A Century of Shoes: Icons of Style in the 20th Century. ISBN 0-7858-0835-3.
16. Steel, V., Shoes: A Lexicon of Style. ISBN 0-8478-2166-8.
17. Cleary, M. and Van Dyke, S., The Perfect Fit: What Your Shoes Say about You. ISBN 0-8118-4501-X.
18. Mansour, Y. et al (2005) Effect of Environmental Temperature on Shock Absorption Properties of Running Shoes. Cli. J. of S. M. 15(3):172-176.
19. Verdejo, R. et al (2003) Heel-Shoe Interactions and the Durability of EVA Foam Running Shoes. J. Bio.
20. Mills, N.J. et al (2003) Polymer Foams for Personal Protection: Cushions, Shoes and Helmets. Comp. Sci. Tech. 63:2389-2400.
21. Petre, M.T. et al (2006) Determination of Elastomeric Foam Parameters for Simulations of Complex Loading. Comp. Meth. Bio. Biomed. Eng. 9(4):231-42.
22. Harper, Charles A. (2002) Handbook of Plastics, Elastomers and Composites, 4th edition. McGraw-Hill Handbooks.
23. Nigg, B.M. et al (1981) Methodological Aspects of Sport Shoe and Sport Surface Analysis. Biom. pp 1041-1052.
24. Calder, C.A. et al (2008) Measurement of Shock-Absorption Characteristics of Athletic Shoes. Exp. Tech. 9:21-24.
25. Armenti, A. (1992) The Physics of Sports. pp 103-109.
26. Rutkove, S.B. (2005) Foot Temperature in Diabetic Polyneuropathy: Innocent Bystander or Unrecognized Accomplice? Dia. Med. 22:231-238.
27. Strickland, P. et al (1997) Thermal profiles in footwear design: an in-sole measurement system. 4th Annual Conference (M2VIP '97), p 271.
28. Hall, M. et al (2004) Plantar Foot Surface Temperatures with Use of Insoles. The Iowa Orthopaedic Journal 24:72-75.
29. Kenshalo, R. (1990) Correlations of Temperature Sensation and Neural Activity: A Second Approximation. In: Thermo-Reception and Temperature Regulation. Springer-Verlag.
30. Bunten, J. (1982) Foot Comfort and Health. Internal Report 1211: SATRA.
31. Reichel, S.M. (1958) Shearing Force as a Factor in Decubitus Ulcers in Paraplegics. JAMA 166:762-763.
32. Bennett, L. (1972) Transferring load to flesh. Part III: Analysis of shear stress. Bull Prosthet Res 10-17:38-51.
33. Bennett, L. et al (1979) Shear versus pressure as causative factors in skin blood flow occlusion. Arch Phys Med Rehabil 60(7):309-14.
34. Pollard, J.P. et al (1983) Forces under the foot. Journal of Biomedical Engineering 5:37-40.
35. Rossi, W. (2008) Shoe Friction: The Enemy Within. Sheehan Associates Publications.
36. Mekjavic, I.B. et al (2005) Static and Dynamic Evaluation of Biophysical Properties of Footwear: The Jozef Stefan Institute Sweating Thermal Foot Manikin System. NATO OTAN Organization, pp 1-8.
37. Hall, M. et al (2004) Plantar Foot Surface Temperatures with Use of Insoles. The Iowa Orthopaedic Journal 24:72-75.
38. Bergquist, K. & Holmér, I. (1997) A Method for Dynamic Measurement of the Resistance to Dry Heat Exchange by Footwear. Applied Ergonomics 28(5/6):383-388.
39. Ewald, M. et al (2007) The Influence of Sock Construction on Foot Climate in Running Shoes. The 8th Foot Biom. Symp. Proc., Taiwan.
40. Clarke, T.E. et al (1983) Biomechanical measurement of running shoe cushioning properties. In: B.M. Nigg & B.A. Kerr (Eds.), Biomechanical Aspects of Sport Shoes and Playing Surfaces, pp 25-33. Calgary, AB: The University of Calgary Press.
41. McKeen, L.W. (2007) Effect of Temperature and Other Factors on Plastics and Elastomers.
42. Dozen, Y. (1989) Studies of the heat and moisture transfer through clothing using a sweating thermal manikin. In: Thermal Physiology, Ed. Mercer, pp 519-524.
43. Polyurethane Foam Association (PFA) (2003) Information of the Footwear Foam Flexibility. PFA Publications 11(1).
44. Kuklane, K. (1999) Footwear for cold environments: thermal properties, performance and testing. Ph.D. Thesis, Lulea Technical University, Sweden.
45. Francis, T.J.R. et al (1985) Non-freezing cold injury: the pathogenesis. J. Royal Naval Med. Serv. 71:3-8.
46. Cold Feet / Cold Foot (2009) Dr. Foot Publications.
47. Williamson, D.K., Chrenko, F.A. & Hamley, E.J. (1984) A study of exposure to cold in cold stores. Applied Ergonomics 15(1):25-30.
48. Dyck, W. (1992) A review of footwear for cold/wet scenarios. Part I: The boot (U). AD-A264 870, Ottawa, Defence Research Establishment.
49. Oakley, E.H.N. (1984) The design and function of military footwear: a review following experiences in the South Atlantic. Ergonomics 27(7):631-637.
Author: Mohammad Reza Shariatmadari
Institute: Liverpool John Moores University
Street: Byrom Street
City: Liverpool
Country: England
Email: [email protected] [email protected]
A Tissue Equivalent Phantom of the Human Torso for in vivo Biocompatible Communications
David M. Peterson1,2, Walker Turner1, Kevin Pham1, Hong Yu1, Rizwan Bashirullah1, Neil Euliano2, and Jeffery R. Fitzsimmons1
1 University of Florida, Gainesville, FL, USA
2 Convergent Engineering, Gainesville, FL, USA
Abstract— A tissue equivalent phantom (TEQ) was designed and constructed for in vivo biocompatible communication systems operating from 902-928 MHz (Industrial, Scientific and Medical (ISM) band). The tissue equivalent phantom was designed by first noting the permittivity and conductivity of various tissues in the human torso using the FCC website http://www.fcc.gov/oet/rfsafety/dielectric.html, then by mixing the appropriate amounts of TX-151 (a polysaccharide gel), distilled water, sodium chloride and sucrose until the different regions of the phantom matched the parameters of the human torso. Initial values were recorded based on previous work at lower frequencies and determined empirically at 915 MHz. Computer modeling studies of human tissue were performed over the 902-928 MHz band using a finite difference time domain computer modeling program (XFdtd, REMCOM). Comparative analysis was conducted to determine the performance of the phantom. The phantom allows for testing and evaluation of very small antenna devices designed for in vivo diagnostics and monitoring. Keywords— tissue, electrically, equivalent, phantoms, biomedical.
I. INTRODUCTION Advances in medical technology related to biomedical and radio frequency (RF) engineering have dominated the engineering and medical community, from magnetic resonance imaging (MRI) to implantable RF devices. These advances have led to ever increasing frequencies in the RF range and increasing signal to noise ratios (SNR). This work focused on the RF range around 900 MHz, where a saline phantom no longer accurately represents the human body. An accurate model of the human body at UHF frequencies is important for the development of RF devices which interact with the human body and for which RF power deposition is a limiting factor. While simulations can be done to estimate SAR, no comparative analysis between phantoms and simulations exists in the 900 MHz range of the ISM band. There are many important uses of RF energy including commercial, industrial and medical. In commercial settings,
such as telecommunications, radio and television broadcasting are used for transmitting information. Non-communication devices, such as microwave ovens, are used as convenience devices for cooking food. Applications for industry include industrial heaters and sealers that use RF energy to rapidly heat a food or material, and other applications such as sealing items like processed food products. Medical applications include magnetic resonance imaging (MRI), thermal ablation devices and devices under current development that include in vivo wireless communication systems. Development of these systems requires an accurate human body phantom model that mimics the electrical characteristics of the human body. The conductivity and permittivity of tissues change with frequency. The human body is heterogeneous, comprising several different tissue types; each tissue type has its own conductivity and permittivity, so some tissues absorb more energy than others. Saline phantoms are homogeneous, and by nature have only one value of conductivity and permittivity. Biomedical engineers require phantoms that emulate the human body at the frequency of interest in order to develop hardware to be used in and around the human body. Typically, containers of saline have been used to mimic this biological system [1]. However, as frequency increases, especially above 100 MHz, saline phantoms fail as an accurate representation of the human body, because various materials behave differently as the frequency increases. At very high frequencies, the high dielectric constant of water causes the electrical wavelength to be severely shortened within the phantom [1-3]. This introduces excessive errors in the electromagnetic field distributions as they would apply to a biological system. It becomes necessary to take a new approach to phantom development, where the phantom used experimentally has the proper permittivity and conductivity to be equivalent to real tissues at the specified frequency of interest. The conductivity and permittivity of the human body have been well characterized from 10-6000 MHz; once the frequency of interest is specified, the required permittivity and conductivity values can be identified [4-10].
II. MATERIALS AND METHODS
A. Measurement of Tissue Equivalent Mixtures
The permittivity (dielectric constant) and conductivity of the mixtures for phantom development were verified using an 85070B dielectric probe kit with software (Agilent, Santa Clara, CA) and an HP 8752A vector network analyzer (Hewlett Packard, Santa Clara, CA). The probe measures the complex relative permittivity ε of the mixtures, expressed as ε = ε' - jε'', where ε' is the relative permittivity of the material and ε'' is the out-of-phase loss factor associated with it. The conductivity σ is derived from ε'' as σ = ε''ε0ω, where ε0 is the permittivity of free space and ω is the angular frequency of the field. The materials tested must be >20 mm in diameter, non-magnetic, homogeneous, and have a smooth surface that does not leave gaps when interfaced to the probe. Each mixture is mixed in about 500 ml of total volume, and then a small amount is poured into a cylindrical container approximately 40 mm in diameter and 25 mm deep. Four tissue mixtures were prepared through empirical development: fat, average abdomen, average muscle, and an average heart, liver, spleen mixture. The results for those mixtures are shown in Table 1.

Table 1 Phantom mixture values for conductivity and permittivity

Tissue            Epsilon    Sigma
Fat               5.459623   0.051398
Average Muscle    52.569     1.01197
Organs Average    57.0573    1.371084
Abdomen Average   43.3249    0.871451
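A short sketch of how the Table 1 values relate to the quantities discussed above: the loss factor ε'' is recovered from the measured conductivity via σ = ε''ε0ω, and the in-tissue wavelength at 915 MHz is estimated with the low-loss approximation λ ≈ c/(f√ε'), which illustrates why the electrical wavelength is severely shortened in high-permittivity tissue. The low-loss approximation is an assumption made for simplicity; muscle and organ tissue are fairly lossy at this frequency, so the wavelengths printed are only indicative.

```python
# Derive the loss factor eps'' from measured conductivity
# (sigma = eps'' * eps0 * omega) and estimate the in-tissue wavelength
# at 915 MHz using the Table 1 values. Low-loss approximation assumed.
import math

EPS0 = 8.854e-12   # F/m, permittivity of free space
C = 2.998e8        # m/s, speed of light
F = 915e6          # Hz, center of the 902-928 MHz ISM band
OMEGA = 2 * math.pi * F

tissues = {                      # (eps', sigma in S/m) from Table 1
    "Fat":             (5.459623, 0.051398),
    "Average Muscle":  (52.569,   1.01197),
    "Organs Average":  (57.0573,  1.371084),
    "Abdomen Average": (43.3249,  0.871451),
}

print(f"free-space wavelength: {C / F * 100:.1f} cm "
      f"(half-wave dipole ~ {C / F * 50:.1f} cm)")
for name, (eps_r, sigma) in tissues.items():
    eps_loss = sigma / (EPS0 * OMEGA)        # out-of-phase loss factor
    wavelength = C / (F * math.sqrt(eps_r))  # low-loss estimate, m
    print(f"{name:16s} eps''={eps_loss:5.1f}  lambda ~ {wavelength*100:5.1f} cm")
```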
B. Phantom Construction
The next step was the actual phantom construction: an acrylic cylinder was heated and shaped to form an ellipse similar to the torso of the human body, with an inner x-axis diameter of 14.5", a y-axis diameter of 11", and a wall thickness of ¼". The first layer added to the phantom was a fat layer composed of shortening, a semisolid fat typically used in food preparation, placed against the outer edge of the phantom and held in place by plastic wrap. The next layer added was the lower abdomen, along with a separate stomach compartment made of PVC pipe that can be filled or left as an airspace depending upon the experiment. The next layer to be added was the heart, liver, spleen mixture, which covered the stomach and lower abdomen and was composed of plastic bags doped with diethylene glycol (DEG), distilled water, NaCl, and TX-151 for clarity. Finally, the phantom was filled to the top with the final layer simulating the muscles of the upper chest, back and shoulders. The completed phantom includes the PVC esophagus sticking out for use in studies involving the stomach, or alternatively as an access port for any type of study that requires placing an RF circuit in the central portion of the body. The completed phantom requires verification, which is demonstrated and discussed in the next section; the phantom is verified through reflection measurements and is compared against saline and the human body.

Fig. 1 Partially filled phantom

C. Phantom Results via Reflection Measurements
Upon completion of the phantom shown in Fig. 1, a test was performed to demonstrate the electrical loading of the phantom torso versus a human torso. The test devised to exhibit equivalence between the phantom and the human body was a reflection measurement. An MRI coil was used that had been tuned and matched to a 90 kg human body at 915 MHz, similar to the human body used in the REMCOM model. The measurement was made on a human subject using an HP 4396B vector network analyzer, with approximately ¼" spacing: a ¼" pad was placed between the subject and the coil to account for the ¼" wall thickness of the phantom. The identical reflection measurement was then made with the MRI loop placed directly against the tissue equivalent phantom (TEQ). The figures below show very good agreement between the human subject and the TEQ phantom. The performance of the TEQ phantom can thus be verified via bench measurements and simulation utilizing electromagnetic field simulators.
Figure 2 demonstrates the difference between real human tissue, measured using a vector network analyzer and shown in blue in Figure 2, and a crude bulk loading phantom filled with a 1 S/m saline solution (the average conductivity of most muscle and organ tissue), shown in red in Figure 2. The deep "dip" in the reflection (S11) demonstrates good tuning and matching on the human load; when the same antenna is applied to the bulk loading phantom (saline box), the S11 parameters show poor agreement with the actual human load. This can be rectified through better phantom design, as demonstrated by the tissue equivalent phantom used to make the reflection measurements shown in green in Figure 2. This process was then repeated using a folded dipole designed for 915 MHz and tuned and matched to the human body; these results are shown in Figure 3.
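For readers interpreting the reflection plots, the sketch below converts an S11 reading in dB into the fraction of power reflected at the antenna port, which is what the depth of the "dip" represents. The example S11 values are assumed for illustration and are not read from Figures 2 and 3.

```python
# Convert a measured reflection coefficient S11 (dB) into the fraction
# of power reflected and accepted by the antenna. The -25/-8 dB values
# are assumed examples, not readings from the figures.

def reflected_fraction(s11_db):
    gamma = 10 ** (s11_db / 20.0)   # magnitude of reflection coefficient
    return gamma ** 2               # reflected power fraction

for label, s11 in [("well matched (human or TEQ load)", -25.0),
                   ("poorly matched (saline load)", -8.0)]:
    r = reflected_fraction(s11)
    print(f"{label}: {r * 100:.1f}% reflected, {(1 - r) * 100:.1f}% accepted")
```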
D. Phantom and Simulation Comparisons
The program chosen for the simulations was XFdtd (REMCOM), which has a human body model and is widely used in industry and academia [11-15]. The first step is to open the program and start a geometry file. In the geometry file it is important to select a cell size; however, the finest resolution results in slow-running simulations, so it is important to balance the resolution of the mesh to obtain the required results without excessive simulation time. Once the geometry is drawn, it is important to "pad" the boundaries, in this case with air. This allows the program to converge, which introduces further parameter options: setting the convergence threshold (-30 dB for simulations running in air, -16 dB for simulations with the human body model) and setting the number of time steps, which was set to 10,000 to help the program run faster. Initial studies were done using a λ/2 dipole at approximately 915 MHz, the center of the ISM band spanning 902-928 MHz. Once the solid geometry is drawn or imported from a computer aided design (CAD) program, it must be converted to a wire mesh and sources added (passive: capacitors or inductors; active: signal generator). For the standard dipole an active series voltage source was used; no matching was required and the dipole was connected to the 50 Ohm source. After the setup of a regular dipole, a folded dipole configuration was used with the human body model.

E. Correlation with Heat Test
Upon confirming that the tissue equivalent phantom was electrically equivalent to the human body, an SAR experiment was performed to compare simulation versus experiment. The SAR experiment assumed that the majority of the power transmitted was far-field. Utilizing the calorimetric method from IEC 60601-1, 2 W of continuous power was applied to the phantom until a 2 °C temperature rise was obtained. The SAR was calculated and then compared to the simulations for the folded dipole and the MRI coil cases, showing agreement. The results of the SAR simulations and heating experiments are shown in Table 2 and Table 3.

Table 2 MRI SAR calculations and measurements

                     REMCOM Body Model   Tissue Equivalent Phantom   Homogeneous Saline Phantom
45 mm MRI coil SAR   6 W/kg              7 W/kg                      2.5 W/kg

Table 3 Folded dipole SAR calculations and measurements

                            REMCOM Body Model   Tissue Equivalent Phantom   Homogeneous Saline Phantom
Folded Dipole Antenna SAR   9 W/kg              10.4 W/kg                   3.9 W/kg
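A minimal sketch of the calorimetric SAR estimate used in the heat test: if heat losses are negligible over a short exposure, the absorbed dose rate is SAR = c·ΔT/Δt. The specific heat of the gel and the exposure time below are assumed values (the paper states only the 2 W drive power and the 2 °C rise), so the printed number is illustrative rather than a reproduction of Table 2.

```python
# Calorimetric SAR estimate: SAR = c * dT / dt, assuming negligible
# heat loss during a short exposure. Specific heat and exposure time
# are assumptions; only the 2 degC rise comes from the experiment.

C_P = 3500.0       # J/(kg K), assumed specific heat of a muscle-like gel
DELTA_T = 2.0      # K, measured temperature rise
EXPOSURE = 1200.0  # s, assumed exposure time (not stated in the paper)

sar = C_P * DELTA_T / EXPOSURE   # W/kg, local SAR in the heated region
print(f"estimated SAR: {sar:.1f} W/kg")
```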
III. CONCLUSIONS
This work demonstrated the need for tissue equivalent phantoms that simulate the electrical properties of a biological system. The work began with a general discussion of phantoms and their relevance to biomedical applications, leading into an in-depth study of the current standards applicable to specific absorption rate (SAR) that was later applied to the tissue equivalent phantom. Equipment and procedures for measuring the complex permittivity were discussed. Three distinct types of phantoms for simulating the electrical properties of humans were discussed: a saline phantom, a tissue equivalent phantom and a segmented tissue equivalent phantom. It was shown that the saline phantom breaks down as a good electrical model of the human body at very high frequencies (VHF), and at ultra-high frequencies the parameters are worse, with results shown in Figures 4-8. Antenna simulations were performed for two different types of antennas: the first simulation was of a folded dipole at 915 MHz, and the second was of a 45 mm MRI surface coil at 915 MHz. These simulations were used to compare simulation against the tissue equivalent and saline phantoms, with the method of comparison being a study of induced SAR. A tissue equivalent phantom was constructed consisting of four combined tissues to accurately act as a human load.
Fig. 2 915 MHz MRI coil reflection results

Fig. 3 915 MHz folded dipole reflection results

Those four tissues were fat, average muscle, a heart-liver-spleen average and an average abdomen (small and large intestines). These four tissue types were then concentrically placed in the former to electrically simulate human tissue. The next step was to compare the electrical properties of the phantom versus a human load. A reflection measurement of the tissue equivalent phantom performed almost identically to the human load, whereas the saline load differed significantly from the human load. This work has shown the usefulness of the tissue equivalent phantom for measurements, testing and empirical analysis of RF interactions with the human body. The measurements of the TEQ heat rise and subsequent SAR calculations were within 15% for the folded dipole and within 19% for the MRI coil. Future generations of phantoms should include a lung space; this should help with some of the complex air-tissue interfaces that are in the human body.

REFERENCES
1. Beck, B.L., et al., Tissue-Equivalent Phantoms for High Frequencies. Concepts in Magnetic Resonance Part B: Magnetic Resonance Engineering, 2003. 20B(1): p. 30-33.
2. Durney, C.H. and D.A. Christensen, Basic Introduction to Bioelectromagnetics. 1st ed. 2000, New York: CRC Press LLC. 169.
3. Hartsgrove, G., A. Kraszewski, and A. Surowiec, Simulated Biological Materials for Electromagnetic Radiation Absorption Studies. Bioelectromagnetics, 1986. 8: p. 29-36.
4. Foster, K.R. and H.P. Schwan, Dielectric Properties of Tissue - A Review, Handbook of Biological Effects of Electromagnetic Radiation. 1986, Cleveland: CRC Press.
5. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: I. Literature Survey. Phys Med Biol, 1996. 41: p. 2231-2249.
6. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: II. Measurements in the Frequency Range of 10 Hz to 20 GHz. Phys Med Biol, 1996. 41: p. 2251-2269.
7. Gabriel, S., R.W. Lau, and C. Gabriel, The Dielectric Properties of Biological Tissues: III. Parametric Models for the Dielectric Spectrum of Tissues. Phys Med Biol, 1996. 41: p. 2271-2293.
8. Stuchly, M.A., et al., Dielectric properties of animal tissues in vivo at frequencies 10 MHz - 1 GHz. Bioelectromagnetics, 1981. 2(2): p. 93-103.
9. Stuchly, M.A. and S.S. Stuchly, Dielectric Properties of Biological Substances. Journal of Microwave Power, 1980. 15: p. 19-26.
10. Angelone, L.M., et al., On the effect of resistive EEG electrodes and leads during 7 T MRI: simulation and temperature measurement studies. Magnetic Resonance Imaging, 2006. 24(6): p. 801-812.
11. Ayatollahi, M., et al., Effects of supporting structure on wireless SAR measurement. Antennas and Propagation Society International Symposium, AP-S 2008. IEEE. 2008.
12. Gallo, M., P.S. Hall, and M. Bozzetti, Use of Animation Software in the Simulation of On-Body Communication Channels. Antennas and Propagation Conference, LAPC 2007. Loughborough. 2007.
13. Jayawardene, M., et al., Comparative study of numerical simulation packages for analysing miniature dielectric-loaded bifilar antennas for mobile communication. Antennas and Propagation, Eleventh International Conference on (IEE Conf. Publ. No. 480). 2001.
14. Tarvas, S. and A. Isohatala, An internal dual-band mobile phone antenna. Antennas and Propagation Society International Symposium. IEEE. 2000.
15. Yu, H., et al., Printed capsule antenna for medication compliance monitoring. Electronics Letters, 2007. 43(22).
Author: David M. Peterson
Institute: University of Florida
Street: HSC 100015
City: Gainesville, FL 32610
Country: USA
Email: [email protected]
Identification of Bacteria and Sterilization of Crustacean Exoskeleton Used as a Biomaterial
Tiffany Omokanwaye1, Donae Owens2, and Otto Wilson Jr.1
1 Catholic University of America/Biomedical Engineering Department, Washington, D.C., USA
2 Benjamin Banneker Academic High School, Washington, D.C., USA
Abstract— Derivatives of the crustacean exoskeleton like chitin have a long history of being used as biomaterials. In the BONE/CRAB lab, the blue claw crab exoskeleton is our biomaterial of choice for a possible bone implant material. The blue claw crustacean, Callinectes sapidus, is found in the Chesapeake Bay. Chitinolytic bacteria, such as those belonging to the Vibrio and Bacillus genera, are common to marine crustaceans. Previous in vitro studies in our lab indicated that bacterial contamination is a major concern. One of the fundamental considerations with the use of an implant biomaterial is sterilization. Materials implanted into the human body must be sterile to avoid subsequent infection or other more serious consequences. An effective sterilization method strikes a balance between the required sterility level and minimum detrimental effect on the properties of the biomaterial while being cost-effective, simple, and readily available. The objective of this study was to isolate and identify bacterial contaminants and to develop the best sterilization method for the bacteria found on blue claw crab exoskeleton. Bacteria belonging to the genus Bacillus were identified based on bacterial growth morphologies of dry, dull, raised, rough, and white-grey appearance on LB agar. Bacillus members form endospores which are difficult to eliminate and pose a significant concern for implantable materials. There was no bacterial growth on the TCBS agar plates, a differential and selective medium for Vibrio species. Antimicrobial susceptibility tests were conducted to measure the effectiveness of 70% isopropyl alcohol, povidone-iodine, and household bleach against the bacteria found. The susceptibility tests revealed sensitivities towards the compounds studied. Bacterial identification and susceptibility provide vital guidance to the best method to sterilize while maintaining biological performance. Further studies will evaluate the effect the sterilization protocol has on the physical, chemical, and biological properties of the implant material. Keywords— Crustacean, Microbiology, Sterilization, Biomaterial.
I. INTRODUCTION Materials designed by humans pale in comparison to those created by nature. Natural materials use free energy and operate under conditions of low temperature (0-40°C), atmospheric pressure, and neutral pH [1]. Bone, one of nature's masterpieces, is a remarkable, living, mineralized connective tissue, characterized by its hardness, its resilience, and its ability to remodel and repair itself [2]. No single existing
material possesses all the necessary properties required in an ideal bone implant. A suitable bone graft material of proper quality, readily available in unlimited quantities, is still needed [3]. Nacre, the source of our inspiration, has been adapted as a bone implant material due to its ability to integrate with bone; this was noted as early as 600 A.D. in the ancient Mayan civilization [4]. One of the goals of this work is to evaluate crab exoskeleton as a potential material to promote bone remodeling. Crab exoskeleton is a natural material, similar to bone in composition, structure, and function. Consequently, there exists a body of work that features crab exoskeleton and bone [5,6,7]. Blue claw crabs are abundant in east coast bays and waterways [8]. Blue claw crabs are crustaceans whose carapace comprises a mineralized hard component, which is primarily calcium carbonate, and a softer, organic component, which is primarily α-chitin [7]. Bone also has a mineralized hard component, which is primarily calcium phosphate, and a softer, organic component, which is primarily collagen I. Similar features between bone and crab exoskeleton are listed in Table 1. These similarities support our theory that crab exoskeleton can be used as a bone implant material. Previous in vitro studies in our lab indicated that bacterial contamination is a major concern with our crab exoskeleton samples. One of the fundamental considerations in the use of an implant biomaterial is sterilization. Materials implanted into the human body must be sterile to avoid subsequent infection that can lead to significant illness or possibly death. Several sterilization methods have been used for implant biomaterials. An effective sterilization method strikes a balance between the required sterility level and minimum detrimental effect on the properties of the biomaterial while being cost-effective, simple, and readily available [9]. Chitin is an abundant polymer within the marine environment; thus chitinolytic bacteria are both common and vital to nutrient recycling. Bacteria belonging to the genera Vibrio, Aeromonas, Pseudomonas, Spirillum, Bacillus, Alteromonas, Flavobacterium, Moraxella, Pasteurella and Photobacterium have all been reported as probable agents involved in the bacterial contamination prevalent in marine crustaceans like the blue claw crab [10]; however, as a starting point, our attention is focused on the genera Vibrio and Bacillus. The objective of this study was to develop the best sterilization method for bacterial contaminants identified on the blue claw
crab exoskeleton. The sterilization agents, 70% isopropyl alcohol, povidone-iodine, and household bleach, were selected on the basis of their availability, simplicity and cost.

Table 1 Similarities between Bone and Crab Exoskeleton

1. hierarchical structuring at all size levels
2. organic phase (collagen and chitin)
3. liquid crystalline behavior of the organic matrix
4. a highly loaded inorganic phase (hydroxyapatite or calcium phosphate, and calcium carbonate) that contains 40% Ca2+ by mass
5. textured, crystallographic orientation of inorganic phase
6. protein and mucopolysaccharide constituents
7. biomineralized composites
8. relatively hard and damage tolerant
9. structural support role, cellular control
10. self-healing behavior
11. piezoelectric properties (ability to convert mechanical stress to electrical signals)
12. the ability to adapt to environmental changes
II. MATERIALS AND METHODS
Materials- Blue claw crabs were purchased from local crabbers. Fluka Thiosulfate Citrate Bile Salts Sucrose Agar (TCBS agar) powder, Luria Bertani or lysogeny broth (LB agar) pre-poured agar plates, Fluka sterile paper discs of 10 mm diameter, and Fluka Oxidase test strips were purchased from Sigma Aldrich. A Fisher Scientific Accumet 1000 Series Handheld pH/mV/Ion Meter was used for pH measurements. Povidone-iodine, 70% isopropyl alcohol, and household bleach were purchased from a local grocery store.
Methods- Blue claw crab exoskeletons were removed, cleaned with deionized water, and allowed to dry overnight. The dried exoskeletons were ground in a coffee grinder and stored in plastic specimen cups for later use. Approximately 0.1 g of crab exoskeleton chips was mixed with 50 ml of deionized water (crab broth) and allowed to stand for a day.
Bacterial Isolation- One milliliter of crab broth, previously prepared, was spread over individual agar plates. Plates were incubated at 35°C for up to 2-3 days to allow bacterial growth. TCBS and LB agars were used: (1) TCBS agar is the primary plating medium universally used for the selective isolation of Vibrios; and (2) LB agar is a general nutrient medium used for routine cultivation and growth of bacteria that does not preferentially grow one kind of bacteria over another.
Biochemical Oxidase Test- A member of the genus Vibrio was predicted as the probable bacterial contaminant of crab exoskeleton chips. Biochemical tests such as the oxidase test can be used for identifying and differentiating types of bacteria with the enzyme cytochrome oxidase [11]. Vibrio strains are oxidase positive. It must be noted that TCBS is an unsatisfactory medium for oxidase testing of Vibrio species [12]. On the other hand, Bacillus species are oxidase negative [11]. Plastic diagnostic strips with a paper zone were used to wipe off several suspect colonies from the plates. Results were read after 1 minute. A negative result corresponds to no color change at the position of the wiped colony and a positive result corresponds to a dark blue or black spot developing at the position of the wiped colony.
Antimicrobial Susceptibility Testing- Antimicrobial susceptibility testing is a standard method used to measure the effectiveness of agents on pathogenic microorganisms.

Fig. 1 Measurement of the zone of inhibition: the zone is measured edge to edge across the zone of inhibition over the center of the disk; no zone around the disc is recorded as N/A
Table 2 Results for Isolation of Bacteria and Sterilization of Blue Claw Crab Exoskeleton Chips

Test: LB Agar Growth Patterns
Purpose: General nutrient media to determine colony morphology
Result: Shape: rounded; Edge: irregular; Elevation: raised; Color: white-grey, dull; Texture: rough, dry
Analysis: Genera: Bacillus* (*spore forming)

Test: TCBS Growth Pattern
Purpose: Differential, selective media for Vibrio colony morphology
Result: No growth
Analysis: Does not have Vibrio bacteria

Test: Biochemical Oxidase Test
Purpose: Identifies organisms that produce the enzyme cytochrome oxidase
Result: No color change on strip
Analysis: Negative for production of the enzyme cytochrome oxidase

Test: Antimicrobial Susceptibility Zone of Inhibition (ZI)
Purpose: Measures effectiveness of agents against microorganisms
Result: ZI Alcohol (pH=6) = 11.7 ± 0.6 mm; ZI Bleach (pH=12) = 14.2 ± 0.8 mm; ZI Iodine (pH=4) = 10.8 ± 0.3 mm; ZI Control = N/A
Analysis: Bleach is the most effective
Sterile 10 mm paper disks impregnated with povidone-iodine, 70% isopropyl alcohol, and household bleach were placed on a plate inoculated to form a bacterial lawn. Each disc absorbs exactly 50 µL of liquid. A control disc with no chemical agents was also included on the plate. The plates were incubated to allow growth of the bacteria and time for the agent to diffuse into the agar. As the substance moved through the agar, it established a concentration gradient. If the organism is susceptible to the agent, a clear zone appears around the disk where growth has been inhibited. The size of the zone of inhibition (ZI) depends upon the sensitivity of the bacteria to the specific antimicrobial agent [11]. The sterile disks impregnated with the three different chemical agents described earlier, and a control with no chemical agents, were placed approximately the same distance from the edge of the plate and from each other, while ensuring the disks were in complete contact with the surface of the agar. The zones of inhibition (ZI) were measured as shown in Fig. 1. The ZI tests were conducted three times and the averages and standard deviations were calculated. The traditional methodology for routine detection of pathogens nearly always employs a combination of different media in order to increase the sensitivity and specificity of the detection and identification method. Quantitative growth, the ability of the medium to produce distinctive biochemical reactions [13], and the zones of inhibition were evaluated. Table 2 lists the results for the isolation of bacteria and sterilization of blue claw crab exoskeleton chips.
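A minimal sketch of the triplicate ZI statistics described above. The individual readings below are hypothetical values chosen to be consistent with the reported means and standard deviations; only the mean ± SD figures in the paper are real data.

```python
# Mean +/- sample standard deviation over triplicate zone-of-inhibition
# readings. The readings are hypothetical placeholders consistent with
# the reported results; they are not the actual measurements.
import statistics

zi_mm = {
    "70% isopropyl alcohol": [11.1, 11.8, 12.2],
    "household bleach":      [13.5, 14.0, 15.1],
    "povidone-iodine":       [10.5, 10.9, 11.0],
}

for agent, readings in zi_mm.items():
    mean = statistics.mean(readings)
    sd = statistics.stdev(readings)   # sample SD, n-1 denominator
    print(f"{agent}: ZI = {mean:.1f} +/- {sd:.1f} mm")
```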
III. RESULTS AND DISCUSSION Typical Vibrio species morphology is small or large yellow colonies due to sucrose fermentation on TCBS agar plates [11]. No bacterial growth was observed on TCBS agar
plates. However, organisms were isolated on the LB agar plates. Based on growth morphologies which displayed a dry, dull, raised, rough, and white-grey appearance on LB agar, bacteria were identified as belonging to the genus Bacillus. Bacillus is an endospore forming bacterium commonly found in soil and aquatic habitats. Among 70 tested Bacillus spp. strains, 19 were found to possess chitinolytic activity [14]. To identify the species within the genus, more differential testing will be required. An endospore is a dormant form of the bacterium that allows it to survive poor environmental conditions. Spores are resistant to heat and most chemicals because of a tough outer covering made of the protein keratin [11]. Because bacterial spores are relatively difficult to kill, it is usually assumed that a process which kills all spores present also kills all other microbial forms present, that is, it sterilizes the material. Most liquid chemical disinfectants, however, have little or no sporicidal action. Concentrated hypochlorite solutions (bleach) are sporicidal at room temperature but unfortunately are very corrosive. Mixtures of alcohol and bleach have been shown to be highly sporicidal. Alcohol alone has no sporicidal activity; alcohol may 'soften' the spore coat, facilitating penetration by the hypochlorite reaction product [15]. Oxidase tests performed on bacterial colonies formed on the LB agar plates were negative for the production of the enzyme cytochrome oxidase because there was no color change at the position of the wiped colony. The zones of inhibition (ZI) were measured as follows: ZI Control = N/A, ZI Alcohol (pH=6) = 11.7 ± 0.6 mm, ZI Bleach (pH=12) = 14.2 ± 0.8 mm, and ZI Iodine (pH=4) = 10.8 ± 0.3 mm. The susceptibility tests revealed sensitivities towards the agents studied, with household bleach showing the greatest effect. Further tests are required to determine the minimum inhibitory concentration for each chemical agent as well as changes to the existing properties of the crab chips.
The pH values for povidone-iodine, 70% isopropyl alcohol, and household bleach were 4, 6, and 12, respectively. There appears to be a direct relationship between pH and antimicrobial susceptibility: as the pH increases and becomes more basic, the ZI also increases.
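The pH trend noted above can be expressed as a simple linear fit through the three (pH, ZI) pairs, as sketched below. With only three agents, and with pH confounded by each agent's chemistry, this is purely illustrative rather than a statistical claim.

```python
# Linear fit of mean zone of inhibition vs. agent pH, using the three
# (pH, ZI) pairs reported in Table 2. Illustrative only: n = 3 and pH
# is confounded with each agent's chemistry.
import numpy as np

ph = np.array([4.0, 6.0, 12.0])     # iodine, alcohol, bleach
zi = np.array([10.8, 11.7, 14.2])   # mm, mean zones of inhibition

slope, intercept = np.polyfit(ph, zi, 1)
r = np.corrcoef(ph, zi)[0, 1]
print(f"ZI ~ {slope:.2f} mm per pH unit + {intercept:.1f} mm, r = {r:.3f}")
```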
IV. CONCLUSION One of the main research questions in the BONE/CRAB Lab involves the evaluation of crab exoskeleton as a material for bone inspired implants. Making implants safe and/or sterile for use in the body is a daunting task. Based on bacterial growth morphologies of dry, dull, raised, rough, and white-grey appearance on LB agar, bacteria belonging to the genus Bacillus were identified. Bacillus members form endospores which are difficult to eliminate. Endospores pose a significant concern for implantable materials since the human body can create harsh environmental conditions. There was no bacterial growth on the TCBS agar plates, a differential and selective medium for Vibrio species. Antimicrobial susceptibility tests revealed sensitivity of the bacteria found on crab exoskeleton to 70% isopropyl alcohol, povidone-iodine, and household bleach. Bleach was the most effective, with a ZI measurement of 14.2 ± 0.8 mm. Sterilization is associated with the total absence of viable microorganisms, which refers to an absolute condition and assures a greater safety margin than any other antimicrobial method. Finding an effective agent against spores requires a thorough understanding of the unique characteristics of each chemical agent, including their limitations and appropriate applications [16]. The apparent lack of an ideal liquid chemical sterilant and the results from the zone of inhibition study establish our need to test different concentrations of bleach and mixtures with alcohols to reach an optimum level of sterility without sacrificing properties such as bioactivity.
REFERENCES 1. Smith C A, Wood E J (1991) Molecular and Cell Biochemistry: Biological Molecules. Chapman & Hall, London 2. Hing K A. (2004) Bone repair in the twenty-first century: biology, chemistry or engineering? Philos Trans R Soc Lond A, 2821–2850 3. Wise D L et al. (2002) Biomaterials Engineering and Devices: Human Applications. Humana Press, Totowa 4. Ratner B D (2001) Replacing and Renewing: Synthetic Materials, Biomimetics, and Tissue Engineering in Implant Dentistry. J Dent Educ 65:1340-1347 5. Bouligand Y (1972) Twisted Fibrous Arrangements in Biological Materials and Cholesteric Mesophases. Tissue Cell 4:189-217 6. Giraud-Guille M-M, Belamie E, Mosser G (2004) Organic and mineral networks in carapaces, bones, and biomimetic materials. Comptes Rendus Palevol 3:503-513 7. Meyers M A et al. (2006) Structural Biological Composites: An Overview. JOM 58:35-41 8. Perry H (2001) Unit Five Coast/Blue Crabs. Project Oceanography. [Online]. http://www.marine.usf.edu/pjocean/packets/f01/f01u5p2.pdf. 9. Morejon-Alonso L et al. (2007) Effect of Sterilization on the Properties of CDHA-OCP-B-TCP Biomaterial. Material Research 10:15-20 10. Vogan C L, Costa-Ramos C, Rowley A F (2002) Shell Disease Syndrome in the Edible Crab, Cancer Pagurus -- Isolation, Characterization and Pathogenicity of Chitinolytic Bacteria. Microbiology 148:743-754 11. Leboffe M J, Pierce B E (2005) A Photographic Atlas for the Microbiology Laboratory, 3rd edition. Morton Publishing Company, Englewood 12. Morris G K et al. (1979) Comparison of Four Plating Media for Isolating Vibrio. J Clin Microbiol 9:79-83 13. Blom M et al. (1999) Evaluation of Statens Serum Institut Enteric Medium for detection of Enteric Pathogens. J Clin Microbiol 37:23122316 14. Aktuganov G E et al. (2003) The Chitinolytic Activity of Bacillus Cohn Bacteria Antagonistic to Phytopathogenic Fungi. Microbiology 72:56–360 15. Coates D, Death J E (1978) Sporicidal activity of mixtures of alcohol and hypochlorite. J Clin Pathol 31:148-152 16. Mazzola P G, Penna T C V, da S Martins, A M (2003) Determination of decimal reduction time (D value) of chemical agents used in hospitals for disinfection purposes. BMC Infect Dis 3
ACKNOWLEDGMENT The authors would like to acknowledge support from the NSF Biomaterials Program (grant number DMR-0645675).
The corresponding author:
Author: Otto Wilson, Jr., PhD
Institute: Catholic University of America
Street: 620 Michigan Ave., NE
City: Washington, DC
Country: USA
Email: [email protected]
Neural Stem Cell Differentiation in 2D and 3D Microenvironments
A.S. Ribeiro1, E.M. Powell2, and J.B. Leach1
1 University of Maryland Baltimore County/Chemical & Biochemical Engineering, Baltimore, USA
2 University of Maryland School of Medicine/Anatomy and Neurobiology, Baltimore, USA
Abstract— Neural Stem Cells (NSCs) have tremendous potential for tissue engineering applications because of their high regenerative capacity to promote functional recovery following disease and injury in the central nervous system. Despite their great potential, current methods to culture NSCs are limited; e.g., adherent 2D cultures are greatly simplified vs. the in vivo microenvironment, imposing altered tissue-specific architecture, mechanical and biochemical cues, and cell morphology. Environmental cues are critical for cellular maturation and function, and in vivo these are presented in a 3D environment. Recent studies with non-neuronal cells demonstrate that in a 3D matrix, cells dramatically alter their morphology and signaling pathways, with in vitro 3D environments being a better representation of in vivo systems. The main goal of this study is to define how NSC differentiation and cell-matrix signaling are altered in 2D and 3D systems. We hypothesize that 3D culture imposes changes in matrix-ligand organization and alters NSC behavior by modulating cytoskeletal signaling and differentiation outcome. To test our hypothesis we cultured mouse embryonic NSCs in 2D and 3D biomaterials and observed differences in cell behavior and β1-integrin signaling with altered culture dimensionality using immunocytochemistry and flow cytometry. In this study we show that NSCs sense the dimensionality of their environment and alter motility: in 3D, individual cells adopt a random migration pattern and extend longer neurites than in 2D, where the cells undergo chain migration. In addition, the differentiation of the NSCs into the neuronal phenotype is increased in 2D vs 3D culture. These results confirm our hypothesis and provide a foundation to design optimal biomaterials towards the development of therapeutics for nerve repair and neurodegenerative disorders. Keywords— Neural Stem Cells, 3D culture, differentiation, β1-integrin signaling.
I. INTRODUCTION The regeneration of damaged nervous tissue is a complex biological problem. Peripheral nerve injuries can heal on their own if the injury is small, but factors exist within the central nervous system (CNS) that pose barriers to regeneration [1]. Functional recovery following brain and spinal cord injuries and neurodegenerative diseases is likely to require the transplantation of exogenous neural cells and tissues, and neural stem cells (NSC) transplants have shown a great potential to promote functional recovery [2-4]. Though promising, the success of neurotransplantation is
currently limited by short-term survival of NSCs and failure to integrate with the host tissue [5,6]. To overcome these challenges, tissue engineers have successfully combined neural stem cells and polymer scaffolds to generate functional neural and glial constructs that emulate the mammalian brain or spinal cord structure and can therefore be used as tissue replacements for CNS injuries [2,3,7]. Although some success has been noted in the use of biomaterial implants [8-10], most investigations of biomaterials for NSC applications have been implemented in vitro, and the few transplant studies carried out did not show improvement beyond the level of success reported for NSC transplants alone. We believe that tissue engineering efforts focused on nerve repair and brain injury have been limited by a poor understanding of how NSCs interact with three-dimensional (3D) cues. Cells cultured in engineered 3D microenvironments have been shown to better represent in vivo cellular behavior than cells cultured in 2D configurations [11,12]. For example, cells cultured in 3D scaffolds have been found to exhibit more in vivo like viability, proliferation, response to biochemical stimuli, gene expression and differentiation [13,14]. One of the fundamental differences between 2D and 3D culture is the distribution of cell-cell and cell-extracellular matrix interactions, which alter cell morphology, signaling mechanisms and subsequent cell function [11,15,16]. The types of cell-matrix adhesions organized by integrins in vitro and the signals they transduce have been shown to be strongly affected by the flat, rigid surfaces of tissue culture dishes [11]. Therefore, a closer approximation to in vivo environments should be attained by growing cells in 3D matrices [16]. Given these findings, there have been a number of studies investigating the interactions between NSCs and 3D biomaterials. In general, work in this area has focused on the effect of the biomaterial microenvironment on NSC viability [21,22] and differentiation [21,23] without exploring how substrate dimensionality directly impacts matrix-cytoskeletal interactions and how it imposes indirect effects on NSC fate. The purpose of this study is to define the molecular mechanisms of how neural stem cells interact with their 3D environment, by determining the effect of environment dimensionality on NSC differentiation and cytoskeletal
signaling. We cultured NSCs in 2D and 3D collagen matrices and examined differentiation and β1-integrin expression in both presentations of the same substrate. We found that NSCs adopt different differentiation and migration patterns in 2D vs 3D culture and conserve β1-integrin expression.
II. MATERIALS AND METHODS
A. Isolation and Culture of NSCs in 2D and 3D Collagen Matrices
NSCs were derived from the cerebral dorsal telencephalon of E13.5-14.5 C57Bl/6 mice (Jackson Laboratory) [19,24]. The dissected tissue was mechanically minced to a single cell suspension, suspended in proliferation medium (serum-free DMEM/F12 media supplemented with B27 (Gibco), human recombinant FGF-2 (20 ng/ml, Peprotech) and human recombinant EGF (20 ng/ml, Peprotech)) and then seeded at 2x105 cells/ml in culture flasks. These conditions promoted the formation of neurospheres from floating cultures of single cells [25]. NSCs were split weekly and the medium refreshed every 2-3 d. Viable NSCs (p 2-6) were seeded in differentiation medium (proliferation medium without growth factors) onto 2D collagen-coated coverslips (~7 µg/cm2) at 20-25 neurospheres/cm2 and in 3D 1 mg/ml collagen gels at 200-300 neurospheres/ml (10-15 neurospheres per 50 µl gel). These densities were optimized to reduce contact between neighboring neurospheres. NSCs were cultured for 3 d. The medium was changed after the first day of culture. Collagen-coated coverslips and 1 mg/ml gels were prepared using rat tail type I collagen (BD Biosciences) according to the manufacturer's directions. The cells were suspended in collagen solution prior to gelling and mixed; then 20 µl of collagen solution was transferred to an uncoated glass coverslip, allowed to gel at physiological conditions for ~30 min and then covered with 450 µL of medium.
B. Immunocytochemistry and Confocal Imaging
After fixation in a buffered 4% formalin solution for 20 min, the samples were blocked in 10% lamb serum in PBS for 30 min (in 2D) or 2 h (in 3D). For 2D culture, the cells were incubated in primary antibodies against β1-integrin or phenotypic markers for neurons and astrocytes (Table 1) for 30-60 min and visualized following 30 min incubation with the appropriate fluorescently-conjugated secondary antibodies (Jackson Immunoresearch). Immunocytochemistry procedures for the 3D gels incorporated several long washing steps (30 min each) and overnight antibody incubations in blocking solution on a rotating tray. All procedures were carried out at 25 °C. Immunoreactive cells in 2D and 3D samples were imaged with confocal microscopy (Leica TCS SP5). The 3D gels were imaged using 63x long working distance objectives (WD = 250 µm; Leica). Images of ≥3 samples from ≥3 different cultures were analyzed for each experimental condition (n≥9). We determined differences in the presence (reactive vs absent), location (whole cell, cell body, processes) and type of signal (diffuse vs punctate).
Table 1 Antibodies used for integrin and stem cell differentiation analysis

Name                                   Company                    Dilution
Rabbit anti-β1 integrin                Santa Cruz Biotechnology   1:100
Mouse IgG2a anti-βIII-tubulin (TUJ1)   Sigma                      1:500
Mouse IgG1 anti-GFAP                   Sigma                      1:200
C. Flow Cytometry Analysis

Protein expression was quantified using flow cytometry. Cells in collagen cultures were collected in 2 mg/ml collagenase (Fisher), followed by trituration to generate single-cell suspensions. Cells were fixed, permeabilized and then incubated with primary antibody in PBS containing 10% fetal bovine serum for a minimum of 30 min at 25 °C with constant agitation. Cells were washed twice with buffer, incubated with the appropriate secondary antibody for a minimum of 30 min at 25 °C and then washed and re-suspended in PBS prior to analysis. Three populations, including the positive, negative (secondary only) and unlabeled cells for each antibody, were analyzed. Three-color live-gating acquisition was carried out on a Beckman-Coulter Cyan ADP flow cytometer (Cytomation).
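To illustrate how marker-positive percentages can be derived from such measurements, the sketch below gates synthetic events against a secondary-only control; the 99th-percentile gate, the lognormal intensities, and all variable names are illustrative assumptions, not the authors' actual analysis settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fluorescence intensities (arbitrary units) standing in for
# cytometer events; real values would come from the instrument's data files.
secondary_only = rng.lognormal(mean=1.0, sigma=0.4, size=5000)  # control
stained = np.concatenate([
    rng.lognormal(1.0, 0.4, 3000),   # marker-negative population
    rng.lognormal(2.5, 0.4, 2000),   # marker-positive population
])

# Gate: an event is called positive if it exceeds the 99th percentile
# of the secondary-antibody-only control.
threshold = np.percentile(secondary_only, 99)
percent_positive = 100.0 * np.mean(stained > threshold)
print(f"threshold = {threshold:.2f} a.u.; {percent_positive:.1f}% positive")
```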
III. RESULTS

A. NSC Differentiation in 2D and 3D Collagen Cultures

Cell migration could be observed after day 1 in both culture conditions. After 3 d, differences were apparent in the patterns of cell migration away from the neurospheres (Fig. 1). In 2D, the migrating cells formed chains extending from the spheres, as seen in previous studies [20], where the cells migrate in contact with one another rather than as random cells adhered to a substrate. In 3D, however, the cells migrated away from the spheres with minimal cell-cell contact; cells further from the neurosphere extended large processes that spanned into the gel matrix (Figs. 1, 2) instead of contacting other cells as we observed in 2D.
Fig. 1 Phase contrast images of NSCs cultured for 3 d in 2D and 3D collagen matrices. Yellow arrows note processes that extend into the matrix. Scale bars, 50 µm

In order to determine the phenotype of the migrating cells, we stained the cultures with antibodies against βIII-tubulin, a neuronal marker expressed very early after commitment to the neuronal lineage, and GFAP, a glial cell marker expressed in astrocytes. We observed GFAP+ cells near the center of the neurospheres and βIII-tubulin+ cells migrating towards the neurosphere periphery; the latter effect was more pronounced in 3D culture. To determine differences in NSC differentiation between 2D and 3D culture, we quantified the expression of βIII-tubulin and GFAP for each condition using flow cytometry. Preliminary results suggest that in 2D culture there is an increase in the proportion of βIII-tubulin+ cells (Fig. 3). The level of GFAP+ cells remained unchanged between 2D and 3D culture, revealing no differences in the cells that differentiated into astrocytes (data not shown).

Fig. 2 Neurosphere immunoreactivity for βIII-tubulin and GFAP after 3 d of culture in 2D and 3D collagen matrices. In 2D, neurons are labeled in red and astrocytes in green. In 3D, neurons are labeled in green and astrocytes in red. Scale bars, 50 µm

B. β1-Integrin Expression in 2D and 3D Collagen Culture

The expression levels of β1-integrin in 2D and 3D culture are similar (Fig. 3). Immunocytochemistry showed that cells in both culture conditions expressed β1-integrin throughout the cell. However, in 2D β1-integrin was expressed in a clustered pattern, with several large punctate complexes in the cells' processes, while in 3D β1-integrin reactivity was more diffuse, with complexes of higher intensity around the cell bodies (Fig. 4).

Fig. 3 Flow cytometry analysis of differentiated NSCs cultured for 3 d in 2D and 3D collagen matrices. Differentiated NSCs were labeled with antibodies against β1-integrin (left), βIII-tubulin (TUJ1, right) or the secondary antibodies only as the isotype controls

Fig. 4 NSC β1-integrin immunoreactivity (red) in 2D and 3D culture. Cell nuclei are labeled with DAPI (blue). Insets depict magnified features of representative cells. Arrows note the location of the neurosphere (out of field of view). Scale bars, 20 µm

IV. DISCUSSION
This study focuses on determining how environment dimensionality affects neural stem cell outcome. The experiments reported here show that 3D culture impacts NSC differentiation and migration events. In 3D, instead of the characteristic chain migration observed in 2D and in previous studies [20], the cells migrate away from the neurospheres in a manner that appears more independent of cell-cell interactions. Moreover, in 3D, isolated neurons migrated further away from the spheres, into the gel matrix, and extended longer neurites than in 2D culture. Based on these findings, we hypothesize that cell-matrix interactions play a more important role than cell-cell signals during cell migration in 3D culture. We also note that in 2D there was a greater percentage of differentiated neurons, indicating that flat 2D culture may induce neuronal differentiation in comparison to the soft 3D gels used in these experiments. Flow cytometry
analysis verified that total β1-integrin expression was unaffected by culture dimensionality, which agrees with previous findings with non-neuronal cells [12,16]. However, differences in β1-integrin staining patterns were evident within the cells, as seen in the dissimilar arrangement of integrin-mediated adhesion sites in 2D vs 3D.
V. CONCLUSIONS

The goal of this study was to demonstrate that 3D culture modulates NSC integrin signaling events and alters NSC outcome. Our work to date has demonstrated that NSC migration and differentiation are altered with culture dimensionality: in 2D there is an increase in the neuronal population and cells undergo chain migration, whereas in 3D, differentiated cells adopt a random migration pattern and extend longer neurites. Ongoing work focuses on the confirmation of these results via in-depth study of β1-integrin signaling pathways to determine how the individual differentiated cellular populations adjust these regulatory events to changes in culture dimensionality. Further studies in NSC biology combined with improved engineered cell scaffolds will certainly present rewards in the near future, including the development of new therapies for several types of neurological disorders.
ACKNOWLEDGMENT

We thank C. Petty for confocal microscopy training and technical assistance on the Leica SP5 (funded by NSF DBI-0722569), Dr. J. Lathia for training in NSC dissection and isolation, and Dr. S. Rosenberg for training and technical assistance with FACS analysis. This work was supported by NIH-NINDS R01NS065205 (JBL) and the Henry Luce Foundation (JBL); AR was supported by the Wyeth Fellowship at UMBC.
REFERENCES

1. Schmidt, C.E. & Leach, J.B. (2003) Neural tissue engineering: strategies for repair and regeneration. Annu Rev Biomed Eng 5, 293-347.
2. Ma, W. et al. (2004) CNS stem and progenitor cell differentiation into functional neuronal circuits in three-dimensional collagen gels. Exp Neurol 190, 276-88.
3. Martinez-Ramos, C. et al. (2008) Differentiation of postnatal neural stem cells into glia and functional neurons on laminin-coated polymeric substrates. Tissue Eng Part A 14, 1365-75.
4. Svendsen, C.N. et al. (1997) Long-term survival of human central nervous system progenitor cells transplanted into a rat model of Parkinson's disease. Exp Neurol 148, 135-46.
5. Kulbatski, I., Mothe, A.J., Nomura, H. & Tator, C.H. (2005) Endogenous and exogenous CNS derived stem/progenitor cell approaches for neurotrauma. Curr Drug Targets 6, 111-26.
6. Lepore, A.C. et al. (2006) Long-term fate of neural precursor cells following transplantation into developing and adult CNS. Neuroscience 139, 513-30.
7. Ma, W. et al. (2008) Reconstruction of functional cortical-like tissues from neural stem and progenitor cells. Tissue Eng Part A 14, 1673-86.
8. Park, K.I., Teng, Y.D. & Snyder, E.Y. (2002) The injured brain interacts reciprocally with neural stem cells supported by scaffolds to reconstitute lost tissue. Nat Biotechnol 20, 1111-7.
9. Freudenberg, U. et al. (2009) A star-PEG-heparin hydrogel platform to aid cell replacement therapies for neurodegenerative diseases. Biomaterials 30, 5049-60.
10. Bible, E. et al. (2009) The support of neural stem cells transplanted into stroke-induced brain cavities by PLGA particles. Biomaterials 30, 2985-94.
11. Cukierman, E., Pankov, R., Stevens, D.R. & Yamada, K.M. (2001) Taking cell-matrix adhesions to the third dimension. Science 294, 1708-12.
12. Paszek, M.J. et al. (2005) Tensional homeostasis and the malignant phenotype. Cancer Cell 8, 241-54.
13. Hoffman, R.M. (1993) To do tissue culture in two or three dimensions? That is the question. Stem Cells 11, 105-111.
14. Cullen, D.K., Lessing, M.C. & LaPlaca, M.C. (2007) Collagen-dependent neurite outgrowth and response to dynamic deformation in three-dimensional neuronal cultures. Ann Biomed Eng 35, 835-46.
15. Yamada, K.M., Pankov, R. & Cukierman, E. (2003) Dimensions and dynamics in integrin function. Braz J Med Biol Res 36, 959-66.
16. Cukierman, E., Pankov, R. & Yamada, K.M. (2002) Cell interactions with three-dimensional matrices. Curr Opin Cell Biol 14, 633-9.
17. Wozniak, M.A., Modzelewska, K., Kwong, L. & Keely, P.J. (2004) Focal adhesion regulation of cell behavior. Biochim Biophys Acta 1692, 103-19.
18. Geiger, B. (2001) Cell biology. Encounters in space. Science 294, 1661-3.
19. Lathia, J.D. et al. (2007) Patterns of laminins and integrins in the embryonic ventricular zone of the CNS. J Comp Neurol 505, 630-43.
20. Jacques, T.S. et al. (1998) Neural precursor cell chain migration and division are regulated through different beta1 integrins. Development 125, 3167-77.
21. O'Connor, S.M. et al. (2000) Primary neural precursor cell expansion, differentiation and cytosolic Ca(2+) response in three-dimensional collagen gel. J Neurosci Methods 102, 187-95.
22. Watanabe, K., Nakamura, M., Okano, H. & Toyama, Y. (2007) Establishment of three-dimensional culture of neural stem/progenitor cells in collagen type-1 gel. Restor Neurol Neurosci 25, 109-17.
23. Levenberg, S. et al. (2003) Differentiation of human embryonic stem cells on three-dimensional polymer scaffolds. Proc Natl Acad Sci U S A 100, 12741-6.
24. Reynolds, B.A. & Weiss, S. (1992) Generation of neurons and astrocytes from isolated cells of the adult mammalian central nervous system. Science 255, 1707-10.
25. Bez, A. et al. (2003) Neurosphere and neurosphere-forming cells: morphological and ultrastructural characterization. Brain Res 993, 18-29.

Author: Jennie B. Leach
Institute: University of Maryland Baltimore County
Street: 1000 Hilltop Circle
City: Baltimore, MD 21250
Country: USA
Email: [email protected]
A Microfluidic Platform for Optical Monitoring of Bacterial Biofilms

M.T. Meyer1,2, V. Roy1,3, W.E. Bentley1, and R. Ghodssi1,2,4

1 Fischell Department of Bioengineering, 2 Institute for Systems Research, 3 Department of Molecular and Cell Biology, 4 Department of Electrical and Computer Engineering, University of Maryland College Park, College Park MD, USA
Abstract— Bacterial biofilms are pathogenic matrices which characterize a large number of infections in humans and are often formed through bacterial intercellular molecular signaling. A microfluidic platform for the evaluation of bacterial biofilms based on optical density was fabricated and tested. The platform was used to non-invasively observe the formation of Escherichia coli biofilms. These methods were corroborated by measurement of biofilm optical thickness. The dependence of biofilm optical density on bacterial communication was evaluated. After 60 hours of growth at 10 µL/hr, wild-type biofilms were approximately 100% thicker and 160% more optically dense than biofilms formed by non-communicating bacteria. The thicknesses of the detected biofilms are comparable to those found in the literature for both in vitro and in vivo biofilms seen in microbial infections. The platform was also used to observe the effect of flow parameters on biofilm adhesion; results indicate that bacterial communication during biofilm formation is necessary for strongly adherent biofilms. The presented platform will be used in characterization of biofilm formation and response in drug discovery applications.

Keywords— microfluidics, bacterial biofilms, biofilm optical density, bioMEMS.
I. INTRODUCTION

Bacterial biofilms have been linked to a type of intercellular molecular communication known as quorum sensing. Once an infection reaches a threshold population, molecular signals dictate a change in phenotype resulting in the formation of a pathogenic biofilm composed mainly of bacteria and an extracellular polysaccharide matrix. Biofilms are of particular interest since they are involved in 65% of bacterial infections in humans [1]. Bacterial biofilms are particularly difficult to treat due to elevated resistance to antibiotics [2]. The prevalence and resistance of bacterial biofilms underscore the need to understand the mechanisms of biofilm formation and development toward the goal of treating and preventing bacterial biofilms. Bacterial biofilms have recently been investigated in microfluidic environments, which allow for control of the microenvironment of the biofilm. Microfluidic devices are therefore well suited for the study of biofilm growth. Janakiraman et al [3] evaluated the thickness and morphology of biofilms formed under varying conditions
within a microfluidic channel. However, these studies focus on endpoint measurements using external equipment for evaluation of the biofilm. In recent years, microfluidic systems integrated with microsensors have emerged as a promising platform for drug development. These platforms take advantage of microfabrication techniques to batch-fabricate devices that not only are inexpensive and small, but also can serve as a platform for the integration of biological elements with microsensors. Many varieties of microdevices targeted toward bacterial detection exist. Capacitive sensors have been applied toward real-time sensing of cell sedimentation and adhesion [4]. Richter et al have developed sensors for fungal biofilm detection using impedimetric sensing [5]. In contrast to these devices that probe the electrical properties of cells, Bakke et al [6] have presented work at the macroscale using optical density as a non-invasive, label-free means of evaluating biofilm growth. The authors demonstrated that the optical absorbance of a biofilm in the visible spectrum increases with biofilm growth. In the presented work, we have designed and constructed a microfluidic platform for real-time, non-invasive monitoring of Escherichia coli biofilms as a function of their optical density. Biofilm optical density is compared with the measured optical thickness to verify the applicability of this method. The platform is also used to investigate the role of quorum sensing in the formation of bacterial biofilms by comparing the biofilm optical density and thickness trends of wild-type E. coli to those of E. coli incapable of quorum sensing molecule production.
II. MATERIALS AND METHODS

A. Microfluidic Platform Design and Fabrication

The platform consists of a micropatterned base and a microfluidic channel. The base is fabricated on Pyrex, providing a transparent substrate; 20 nm Cr and 200 nm Au are sputtered onto the Pyrex and patterned using contact photolithography to define two observation windows per microfluidic channel. Micropatterned windows allow for repeatable measurement positions within the channel. The chips are covered with a 1 µm layer of LPCVD-deposited
SiO2 to promote adhesion of the microfluidic layer. The microfluidic channel is molded in polydimethylsiloxane (PDMS); the mold is patterned in 100 µm-thick SU-8 50 (MicroChem Corp, USA) using contact photolithography. PDMS (Sylgard 184, Dow Corning, USA), in a 10:1 ratio of resin to curing agent, is poured over the mold and cured at 80 ºC for 20 min. Ports for interfacing the channel to fluidic tubing are drilled into the PDMS layer using a 2 mm dermatological punch. The molded PDMS is reversibly bonded to the chip, allowing for disassembly, cleaning, and reuse of each chip after experimentation. Methanol is applied to the PDMS layer, which is aligned to and placed on the chip. Evaporation of the methanol produces a reversible bond between the PDMS and the top layer of silicon dioxide on the chip. Schematics of the microfluidic platform are given in Figure 1.

Fig. 1 a) Schematic of bacterial deposition b) Cross-section of microfluidic platform

Assembled platforms are each affixed to a glass slide, then aligned to and positioned over two photodiodes (BS520, Sharp Microelectronics, USA) per microfluidic channel. One end of Tygon tubing is connected to a PDMS port via a barbed tubing coupler (McMaster Carr 5117K41, USA), and the other is connected to a syringe pump operating in suction mode. After fluidic assembly, an array of red, high-intensity LEDs is aligned to and positioned over the platforms. A data acquisition card interfaced to LabVIEW (National Instruments, USA) is used to monitor the outputs of the sensing photodiodes and the LED array. The entire assembly is positioned in an incubator maintained at 37 ºC.

B. Strains Used

Wild-type E. coli W3110 was selected as a standard for biofilm formation in the microfluidic platform. In investigating the role of quorum sensing in optically detectable biofilm formation, MDAI2, a luxS-null mutant of E. coli W3110 [7], was used as a negative control. All suspension and biofilm cultures were grown in LB media.

C. Platform Operation
The device is prepared for experimentation by first depositing the bacteria of interest in the microfluidic channel. Bacteria are grown in suspension at 37 ºC to an OD600 of 0.25, then suctioned into the assembled platform. The channel is incubated with the inoculum for 2 hours to allow for adhesion to the substrate. The channel is rinsed for 15 min with LB growth media at a flow rate of 10 µL/hr, corresponding to an average velocity of 0.06 mm/s within the channel. The platform is then continuously operated with LB media at 10 µL/hr, and changes in optical signals are monitored using the LabVIEW interface. The change in photodiode voltage over the growth period is evaluated with respect to a baseline voltage measured over 15 min after rinsing. The change in voltage, corresponding to a change in transmitted light intensity, is converted to a change in optical absorbance for evaluation of results. While the data from the two measurement windows in the channel are not identical, results are similar; the optical absorbance data reported is the average of data from both measurement locations within a channel. In assessing the thickness of the biofilm, the platform is removed at selected timepoints. Optical thickness is evaluated using an optical microscope and measuring the distance between the focal plane of the channel bottom and the focal plane of the top of any accumulated biomass. Thickness is measured at 5 locations around each observation window and averaged.
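As a concrete sketch of the voltage-to-absorbance conversion described above, the code below applies the Beer-Lambert relation under the assumption that photodiode voltage is proportional to transmitted light intensity; the voltage values and the channel width used in the velocity sanity check are invented for illustration and are not reported in the paper.

```python
import numpy as np

def delta_absorbance(v_baseline, voltages):
    """Change in optical absorbance from photodiode voltages.

    Assuming photodiode voltage V is proportional to transmitted
    intensity I, Beer-Lambert gives dA = -log10(I/I0) = -log10(V/V0).
    """
    return -np.log10(np.asarray(voltages) / v_baseline)

v0 = 2.50                        # V, 15-min post-rinse baseline (illustrative)
v = [2.50, 2.31, 2.05, 1.72]     # V, later time points as biofilm accumulates
print(delta_absorbance(v0, v))   # dA grows as less light is transmitted

# Sanity check on the stated flow condition: 10 uL/hr through a 100 um-tall
# channel of an assumed ~463 um width gives the quoted ~0.06 mm/s.
q = 10e-9 / 3600.0               # m^3/s
area = 100e-6 * 463e-6           # m^2 (height x assumed width)
print(f"mean velocity ~ {q / area * 1e3:.2f} mm/s")
```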
III. RESULTS

The presented method for evaluating bacterial biofilms based on optical density was used to observe several phenomena within a microfluidic flow cell.

A. Parallel Operation of Devices

The design of the setup, including inexpensive microfluidic platforms, sensors, and light sources, makes this method easily arrayed. This capability is demonstrated by the operation of two platforms simultaneously. In the example shown in Figure 2, one microfluidic channel was inoculated with E. coli W3110, while the other was not inoculated with any bacteria; both channels were subsequently exposed to a continuous flow of LB growth media at 10 µL/hr. The lack of optically detectable microbial growth in the latter channel indicates the degree of sealing and sterility achieved in the setup. The capability of parallel operation allows for multiple experiments to have identical environmental parameters that may otherwise vary between experiments.
Fig. 2 Change in optical density concurrently observed in a channel containing an E. coli W3110 biofilm and in a channel filled with LB media
Fig. 3 Change in optical density concurrently observed in two channels containing E. coli W3110 and MDAI2 respectively
B. Evaluation of Biofilm Growth

The platform was utilized to examine the role of quorum sensing in biofilm formation. Wild-type E. coli W3110 was grown in parallel with MDAI2 continuously in the microfluidic platform for 60 hours, yielding the optical density curves shown in Figure 3. This was compared to the progression of optical thickness of adherent films over the same period of time (Figure 4). As depicted, both strains of bacteria produced a measurable change in optical density, and biofilm formation was observed in both channels. Both the optical density and optical thickness data follow the same general trends over the time scale investigated, indicating that quantification via optical density is an appropriate method for evaluating bacterial biofilms. For approximately the first 15 hours of operation, W3110 and MDAI2 exhibit similar trends in increasing optical density; after this point, the optical density of the W3110 biofilm rises above that of the MDAI2 biofilm. Over a 60 hour growth period, bacterial formations within the MDAI2 channel were also thinner than W3110 biofilms. The difference between the two types of biofilms investigated can be highlighted by observing the biofilms' response to increased shear stress; biofilm stability may be evaluated by the degree of adhesion to the substrate. Biofilms previously formed at 10 µL/hr over 48 hours were rinsed with LB media at a flow rate of 400 µL/hr for 30 minutes, at which time the initial flow rate was restored. The change in optical density for MDAI2 and W3110 biofilms with rinsing is shown in Figure 5. After rinsing, the average optical thickness of the W3110 film was 56 ± 8 µm, and that of the MDAI2 film was 11 ± 3 µm. Over the rinsing period, the change in optical density for the W3110
Fig. 4 Optical thickness measurements of E. coli W3110 and MDAI2 biofilms
Fig. 5 Rinsing of E. coli W3110 and MDAI2 biofilms measured by the change in optical density. The flow rate was stepped from 10 µL/hr to 400 µL/hr and back to 10 µL/hr
film was minimal, corresponding to the small change in thickness compared with the value in Figure 4. Conversely,
the MDAI2 film exhibited large decreases in thickness and in optical density, indicating weaker adhesion.
IV. DISCUSSION

The methods presented provide the unique capability of continuous, real-time monitoring of biofilm optical density within the microfluidic channels. This is evident in the optical signals' frequent fluctuations, which are attributed to sloughing and re-deposition of clumps of bacteria in the continuous flow environment. Both W3110 and MDAI2 exhibit optically detectable biofilms. The results suggest that although biofilms are observed in the absence of quorum sensing activity, their structure and growth dynamics differ from those formed with quorum sensing. MDAI2 biofilms were found to be thinner, more sensitive to rinsing, and generally less optically dense than W3110 biofilms. These results agree with other experiments finding that while biofilm formation is promoted by quorum sensing, quorum sensing-inhibited bacteria are also capable of forming thin, dense biofilms [8]. The increasing optical density and thickness of the wild-type biofilm at the end of the longest experiment (60 hours) suggest ongoing maturation. Considering this, the time window used in this study appears too short for a fully matured biofilm to develop. However, when considering the application of this platform toward drug development, it is most important to monitor the beginning stages of biofilm growth. Assuming growth dynamics after 60 hours follow the same trend of slowly approaching steady state, the time scale used is suitable for investigation of the initial formation and development of E. coli biofilms. The optical changes detected correspond to biofilm thicknesses on the order of tens of microns; while this is comparable to in vitro biofilm thickness values found in the literature [3,6], Candida albicans biofilms formed on catheters in vivo have been observed to be as thick as 100 µm [9]. Therefore, the platform may be used to observe both scientifically and clinically relevant biofilms. The presented work lays the foundation for the development of a lab-on-a-chip for biofilm observation; external photodiodes may be replaced by on-chip photodiodes embedded in a silicon substrate, and other types of sensors may be integrated for detection of molecules indicative of quorum sensing and biofilm growth.
V. CONCLUSION

We present a unique microfluidic platform and method for optical monitoring of bacterial biofilms. Biofilm growth
within a microfluidic channel is evaluated based on increasing optical density, which was observed to follow the same trends as the biofilm optical thickness. Parallel operation of microfluidic channels allows for simultaneous comparison of biofilms formed under differing conditions. The system was used to compare the growth of wild-type E. coli as well as E. coli incapable of quorum sensing signaling. Biofilms formed by the latter strain exhibited an overall lower optical density and optical thickness. The integrity of both types of biofilm was evaluated by exposing formed biofilms to a high shear rate. The capability of continuous sensing provided by this platform is vital to the monitoring of bacterial biofilm growth, and will aid the development of drugs inhibiting biofilm formation.
ACKNOWLEDGMENT

The authors acknowledge financial support from the R. W. Deutsch Foundation and the National Science Foundation Emerging Frontiers in Research and Innovation (NSF-EFRI). The authors also appreciate the support of the Maryland NanoCenter and its FabLab.
REFERENCES

1. Potera C (1999) Forging a link between biofilms and disease. Science 283:1837-1839
2. Stewart P (2002) Mechanisms of antibiotic resistance in bacterial biofilms. Int J Med Microbiol 292:107-113
3. Janakiraman V, Englert D, Jayaraman A et al (2009) Modeling growth and quorum sensing in biofilms grown in microfluidic chambers. Ann Biomed Eng 37:1206-1216
4. Prakash S, Abshire P (2007) On-chip capacitance sensing for cell monitoring applications. IEEE Sensors 7:440-447
5. Richter L, Stepper C et al (2007) Development of a microfluidic biochip for online monitoring of fungal biofilm dynamics. Lab Chip 7:1723-1731
6. Bakke R, Kommedal R, Kalvenes S (2001) Quantification of biofilm accumulation by an optical approach. J Microbiol Meth 44:13-26
7. DeLisa M, Valdes J, Bentley W (2001) Mapping stress-induced changes in autoinducer AI-2 production in chemostat-cultivated Escherichia coli K-12. J Bacteriol 183:2918-2928
8. Davies D, Parsek M et al (1998) The involvement of cell-to-cell signals in development of a bacterial biofilm. Science 280:295-298
9. Andes D, Nett J et al (2004) Development and characterization of an in vivo central venous catheter Candida albicans biofilm model. Infect Immun 72:6023-6031
Corresponding Author: Reza Ghodssi
University of Maryland, College Park
Department of Electrical and Computer Engineering
College Park, MD
USA
Email: [email protected]
Conduction Properties of Decellularized Nerve Biomaterials

M.G. Urbanchek1, B.S. Shim2, Z. Baghmanli1, B. Wei1, K. Schroeder3, N.B. Langhals1, R.M. Miriani1, B.M. Egeland1, D.R. Kipke1, D.C. Martin2, and P.S. Cederna1

1 University of Michigan/Surgery, Plastic Surgery, Ann Arbor, USA
2 University of Delaware/Materials Science & Engineering, Newark, USA
3 Hope College/Literature, Science, and the Arts, Holland, USA
Abstract— The purpose of this study is to optimize poly(3,4-ethylenedioxythiophene) (PEDOT) polymerization into decellular nerve scaffolding for interfacing to peripheral nerves. Our ultimate aim is to permanently implant highly conductive peripheral nerve interfaces between amputee stump nerve fascicles and prosthetic electronics. Decellular nerve (DN) scaffolds are an FDA approved biomaterial (Axogen™) with the flexible tensile properties needed for successful permanent coaptation to peripheral nerves. Biocompatible, electroconductive PEDOT facilitates electrical conduction through PEDOT-coated acellular muscle. New electrochemical methods were used to polymerize various PEDOT concentrations into DN scaffolds without the need for a final dehydration step. DN scaffolds were then tested for electrical impedance and charge density. PEDOT-coated DN scaffold materials were also implanted as 15-20 mm peripheral nerve grafts. Measurement of in-situ nerve conduction immediately followed grafting. DN showed significant improvements in impedance for dehydrated and hydrated DN polymerized with moderate and low PEDOT concentrations when compared with DN alone (p ≤ 0.05). These measurements were equivalent to those for DN with maximal PEDOT concentrations. In-situ nerve conduction measurements demonstrated that DN alone is a poor electro-conductor, while the addition of PEDOT allows DN scaffold grafts to compare favorably with the "gold standard", autograft (Table 1). Surgical handling characteristics for conductive hydrated PEDOT DN scaffolds were rated 3 (pliable) while the dehydrated models were rated 1 (very stiff), compared with autograft ratings of 4 (normal). Low concentrations of PEDOT on DN scaffolds provided significant increases in electroactive properties which were comparable to the densest PEDOT coatings. DN pliability was closely maintained by continued hydration during PEDOT electrochemical polymerization without compromising electroconductivity.

Keywords— poly(3,4-ethylenedioxythiophene), peripheral nerve, decellular nerve, nerve conduction.
I. INTRODUCTION

Health care professionals are challenged with enabling stable biological interfaces to currently available prosthetic arm devices, which are microprocessor-controlled and power-outfitted [1]. Ultimately we see amputees using the peripheral nerves remaining in their stump both to control these motorized prosthetics and to receive feedback from sensors
located in the prosthetics [2]. Our aim is to permanently implant highly conductive peripheral nerve interface (PNI) connectors between amputee stump nerve fascicles and prosthetic electronics. The purpose of this study is to increase the fidelity of signal transmission across the PNI. Poly(3,4-ethylenedioxythiophene) (PEDOT) is intrinsically an electrical conductor. Acellular muscle polymerized with the maximal density of PEDOT has been shown to have electrical conduction properties similar to copper wire [3]. However, materials maximally polymerized with PEDOT acquire a brittleness which is incompatible with coaptation to living peripheral nerve. Decellular nerve (DN) scaffolds are an FDA approved biomaterial used clinically to repair peripheral nerve defects (Axogen™). They are extremely pliable, sized appropriately in diameter, and can be easily sewn to peripheral nerve for long term attachment without breaking off or causing injury to the native nerve. We plan to optimize the process by which PEDOT is polymerized into DN scaffolding. We seek both: a) increased electrical conductivity through DN, by testing various concentrations of PEDOT deposition, and b) maintenance of a pliable DN after PEDOT deposition.
II. MATERIALS AND METHODS

A. Overview of Experimental Design

Our purpose is to optimize the electrical fidelity and gain seen when PEDOT is polymerized on DN scaffolding while minimizing the sharp rigidity which accompanies highly conductive but compact concentrations of PEDOT.

Hypothesis: We hypothesize that electrochemical deposition of PEDOT allows DN scaffolds to remain pliable while it confers improved electro-conductive properties to the scaffold.

Bench test and in-situ experimental designs: PEDOT can be deposited onto DN scaffolds by methods that either include dehydration steps (chemical method) [4] or allow scaffolds to remain continuously hydrated (electrochemical method) [5]. Using each method, various concentrations of PEDOT were polymerized onto DN scaffolds. We tested the DN scaffolds with bench tests which measured impedance
(fidelity) and cyclic voltammetry to determine charge transfer capacity (gain in amplitude). Then, based on the bench tests, we selected the "best" concentrations for dehydrated and hydrated DN scaffolds and conducted in-situ tests for measuring nerve conduction properties (biological signal conductivity). Rat sciatic nerves were harvested at the University of Michigan and decellularized by Axogen™. The DN scaffolds were polymerized with PEDOT. A chemical polymerization method used an EDOT monomer (Clevios™ M, H.C. Starck, Coldwater, MI) and iron chloride as a dopant [4]; the DN scaffolds must be dehydrated for the PEDOT to adhere. EDOT solutions were made in low (Low), moderate (Mod), and high (High) concentrations, which corresponded to the amount of PEDOT deposited. The electrochemical method for PEDOT deposition used a PEDOT polymer and polystyrenesulfonic acid (Clevios™ P, H.C. Starck, Coldwater, MI); DN scaffold dehydration was not needed [5]. Low and Mod concentrations of PEDOT were deposited using this method, which allowed constant hydration of the DN scaffolds.

B. Measurement of Test Material Impedance and Specific Charge Density

Electrical impedance spectroscopy (EIS) testing was applied to determine electrode impedance (Frequency Response Analyzer, Version 4.9.007; Eco Chemie B.V.; Utrecht, The Netherlands) and cyclic voltammetry (CV) to determine charge transfer capacity (n ~ 4 per 9 groups) (General Purpose Electrochemical System, Version 4.9.007; Eco Chemie B.V.; Utrecht, The Netherlands) [5]. Graphs were viewed using MatLab (Version 7.8.0.347 R2009a; MathWorks, Inc). Materials tested were between 15 and 20 mm in length. Impedance values were sampled at frequencies of 10, 100, and 1000 Hz. For CV, a scan rate of 10 mV/s was used and the potential on the working electrode was swept from -1.0 to 1.0 V. Specific charge density was calculated by dividing the charge transfer capacity by each sample's surface area (surface area of a cylinder).

C. Measurement of Nerve Conduction

For in-situ measurements, dehydrated (DPEDOT) and hydrated (HPEDOT) DN scaffolds were polymerized with moderate concentrations of PEDOT.
Selection of the moderate concentration for further testing was based on favorable results from the bench tests. Five experimental groups were tested: Intact nerve, Autograft, DN (hydrated as shipped frozen), DPEDOT, and HPEDOT. Using 10-0 nylon suture, DN scaffold materials were sewn to the ends of divided rat peroneal nerves as 15-20 mm peripheral nerve grafts (n = 5 per 5 groups). Measurement of in-situ nerve conduction immediately followed grafting (Synergy T2X System, Viasys NeuroCare, Madison, WI). Stimulation was applied with a bipolar electrode placed on the nerve proximal to the nerve graft and as close to the sciatic notch as possible. Muscle electromyographic (EMG) responses were recorded with a needle electrode in the extensor digitorum longus muscle located distal to the nerve graft. Reference and ground needle electrodes were placed distal to the recording electrode [6]. Values recorded were EMG response latency, maximal amplitude, and spike area, as well as nerve conduction velocity, rheobase, and the stimulation amperage set 20% greater than that needed to maximize the EMG response.

D. Graft Stiffness Rating Scale

Graft stiffness was rated using a scale from 4 to 0. A score of 4 meant the DN scaffold handled as native nerve; 3 = pliable, slight resistance to bending; 2 = rigid, resistant to needle insertion; 1 = brittle, very stiff, cut the suture; and 0 meant a needle could not be placed through the material.

E. Animal Care and Compliance

Rats used were male Fischer-344 rats which were retired as breeders (Charles River Laboratory, Kingston, NY). All procedures were approved by the Institutional Animal Care and Use Committee of the University of Michigan and were in strict accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals [7]. For all surgical procedures, rats were given an analgesic (buprenorphine, 0.05 mg/kg) prior to anesthesia with sodium pentobarbital (65 mg/kg). All rats were euthanized with a UCUCA approved procedure.
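To make the specific charge density calculation of Section II.B concrete, the sketch below integrates a synthetic CV trace to estimate charge transfer capacity and normalizes by the lateral area of a cylindrical sample; the current trace, the sample diameter, and the choice to integrate the absolute current over the full sweep are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

# Synthetic CV trace: potential swept from -1.0 to 1.0 V at 10 mV/s.
scan_rate = 0.010                          # V/s
potential = np.linspace(-1.0, 1.0, 401)    # V
current = 1e-4 * np.tanh(5.0 * potential)  # A, invented sigmoidal trace

# Charge transfer capacity: Q = integral(I dt) = integral(I dE) / scan rate.
time = (potential - potential[0]) / scan_rate   # s
charge = np.trapz(np.abs(current), time)        # C

# Normalize by lateral area of a cylinder; the 1.5 mm diameter is assumed,
# and 15 mm is the lower end of the reported sample lengths.
diameter_cm, length_cm = 0.15, 1.5
area_cm2 = np.pi * diameter_cm * length_cm
print(f"specific charge density = {charge / area_cm2:.3e} C/cm^2")
```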
F. Statistical Analysis

A one-way analysis of variance (ANOVA) was performed, followed by Tukey's post hoc test to determine significant differences between experimental groups in the bench test and in the in-situ studies. A p value ≤ 0.05 was considered to be significant.
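A minimal sketch of this statistical pipeline in Python (the authors' actual analysis software is not stated, and the group values below are invented placeholders):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder impedance magnitudes (kOhm) for three hypothetical bench groups.
groups = {
    "DN":        [95, 102, 88, 110],
    "Mod_PEDOT": [12, 15, 10, 13],
    "Low_PEDOT": [18, 22, 16, 20],
}

# One-way ANOVA across groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's post hoc test for pairwise differences at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```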
III. RESULTS

Data indicate that deposition of PEDOT on DN scaffolds significantly lowers (improves) impedance across all the hydrated as well as all dehydrated DN materials when compared to dehydrated DN, with the exception of the Low dehydrated DN scaffold (Fig. 2). Specific charge density was increased (improved) only for the Mod dehydrated DN when compared with dehydrated DN (Fig. 3). Some PEDOT broke off the dehydrated DN scaffolds during EIS and CV testing. No charge density could be measured for three of seven scaffolds in the High dehydrated DN group. These zero scores, most likely due to PEDOT cracking and falling off, were included in the statistics and explain the drop in specific charge density seen for this group when compared with the Mod dehydrated DN group. Taken together, the EIS and CV data may indicate a ceiling effect on how much PEDOT was enough. In-situ nerve conduction measurements demonstrated that hydrated DN scaffolds were poor electro-conductors. While intact nerve was best, addition of PEDOT allowed grafted DN scaffolds to compare favorably with the "graft gold standard", autograft nerve (Table 1). Autograft, DPEDOT, and HPEDOT grafts did not vary from each other for any of the EMG data (latency, maximal amplitude, spike area) or nerve conduction measurements (velocity, rheobase, and the stimulation voltage). Statistical power for these findings exceeded 0.80. Though not significant, greater stimulation amperage is needed to initiate a twitch
response (rheobase) and a maximal response amplitude for conduction to pass through the DN scaffold graft with PEDOT when compared with the Autograft. Surgical handling characteristics for the hydrated DN scaffold and the Autograft nerve were rated 4 (as native nerve). The highly conductive HPEDOT DN scaffolds were rated 3 (pliable). DPEDOT DN scaffolds were rated 2 (rigid). The dehydrated DN with High PEDOT group from the bench studies was rated 1 (very stiff). Polymerization of PEDOT by the electrochemical method allowed DN scaffolds to remain hydrated and therefore to behave almost like native nerve during surgery. Our hypothesis was that electrochemical deposition of PEDOT would allow DN scaffolds to remain pliable while PEDOT conferred improved electro-conductive properties to the scaffold. The hypothesis was supported by most of the data. Bench test findings for impedance indicated that PEDOT does confer improvements in conduction fidelity and signal-to-noise ratio. In-situ tests showed that PEDOT deposition on DN facilitated biological signal conduction across a nerve gap. However, our bench cyclic voltammetry results did not show convincing improvements in charge transfer capacity (signal gain) for the PEDOT-coated DN scaffolds. The electrochemical polymerization process did allow the DN scaffolds to remain pliable following polymerization with PEDOT.
IV. DISCUSSION
IFMBE Proceedings Vol. 32
Conduction Properties of Decellularized Nerve Biomaterials
rial, inflammation and immune response to it are minimal [8]. This is the first study to show that addition of PEDOT to DN allowed action potential type signals to pass across a 15 to 20 mm nerve graft. A 15 mm distance is the desired length for a PNI. Higher stimulation was needed to initiate a twitch response and a maximal response across the graft because signals are obstructed by scarring at two nerve to graft coaptation sites. Whether the “biologic like” signals across the graft were purely electrical, ionic, or a mixture of the two was undefined.
433
Low concentrations of PEDOT on DN scaffolds can provide significant increases in electro active properties which are comparable to maximal High PEDOT coatings. DN pliability is closely maintained by continued hydration during PEDOT electrochemical polymerization without compromising electro conductivity.
ACKNOWLEDGEMENTS The views expressed in this work are those of the authors and do not necessarily reflect official Army policy. This work was supported by the Department of Defense Multidisciplinary University Research Initiative (MURI) program administered by the Army Research Office under grant W911NF0610218. Conflict of interest statement: No potential conflicts of interest are disclosed.
REFERENCES 1. 2. 3. 4.
The bench studies and the in-situ nerve conduction studies indicated that moderate concentrations of PEDOT on DN were enough to reduce impedance and facilitate conduction through the DN scaffold. Reduced PEDOT concentrations along with the electrochemical deposition allowed the DN scaffold to remain pliable. Still slightly higher concentration of PEDOT in the hydrated samples should be possible. Implanting this DN scaffold as part of a PNI is a realistic goal. Pliability allows DNs to move with the peripheral nerve endings rather than breaking off or injuring surrounding tissues. This study was an acute in situ study and, therefore, carries certain limitations as the DN scaffolds were not maintained in-vivo as true PNIs would be. One cannot predict whether biological defenses would lead to degradation or encapsulation of the DN PNI materials without running a long term implant study.
5.
6.
7. 8.
Melanie G. Urbanchek, PhD
University of Michigan
109 Zina Pitcher Place, BSRB 2023
Ann Arbor, MI 48109-2200
USA
[email protected]
Reverse Cholesterol Transport (RCT) Modeling with Integrated Software Configurator

S. Adhikari

Sysoft Center for Systems Biology and Bioengineering, Hunterdon, NJ
Abstract— Reverse Cholesterol Transport (RCT) is a series of very complex biological pathways by which accumulated cholesterol is transported from the vessel wall macrophages and foam cells to the liver for excretion, thus preventing atherosclerosis, a build-up of plaque in the arteries often referred to as 'hardening of the arteries.' Cardiovascular disease (CVD) is the leading cause of death in the US and other developed nations, costing the American healthcare system in excess of $450 billion per year. The underlying cause of CVD is atherosclerosis. There is a paradigm shift coming in CVD research and drug development: atherosclerosis will not just be managed, but can ultimately be eliminated. In this paper we describe a dynamic RCT model that quantifies the clinical effects of reverse cholesterol efflux. RCT has emerged in recent years as one of the most desirable methods of medical intervention to reverse atherosclerotic lesions. Optimized dynamic RCT modeling helps in therapeutic targeting of High Density Lipoproteins (HDL) with the help of ApoA-1 mimetic peptides and other oral small molecules. The net RCT pathway is quantified with multiple parameters that can change depending on clinical in-vivo or in-vitro conditions. A standard relational database model holds all the objects of the quantitative model. An optimization algorithm matches the aggregate model with the clinical RCT datasets, automatically adjusting all the parameters until it finds the best solution. Multivariate analysis (MVA) aims to create a derived aggregate model reducing the complexity of multidimensional data to a few latent variables that express the majority of the variance of the data set. MVA is also utilized to perform nonlinear multiple regression analysis between large data sets. Our dynamic model can interface with external data sets and other models using Systems Biology Markup Language (SBML), a computer-readable format for representing models of biochemical reaction networks in software.

Keywords— RCT, Cholesterol Efflux, Lipid Metabolism, HDL, Atherosclerosis.
I. INTRODUCTION

Reverse Cholesterol Transport (RCT) is a series of very complex biological pathways by which accumulated cholesterol is transported from the vessel wall macrophages and foam cells to the liver for excretion, thus preventing
atherosclerosis, a build-up of plaque in the arteries often referred to as 'hardening of the arteries.' Cardiovascular disease (CVD) is the leading cause of death in the US and other developed nations, costing the American healthcare system in excess of $450 billion per year. The underlying cause of CVD is atherosclerosis. There is a paradigm shift coming in CVD research and drug development: atherosclerosis will not just be managed, but can ultimately be eliminated.

Fig. 1 Reverse Cholesterol Transport (RCT)

Fig. 2 Cholesterol Efflux
II. RCT DYNAMIC MODELING

Statin therapies, the current standard of care, can only prevent the disease from progressing. RCT is the basis of new cardiovascular therapeutics that can reverse atherosclerosis. Major constituents of RCT include acceptors such as high density lipoprotein (HDL) and apolipoprotein A-1 (ApoA-1), and enzymes such as lecithin:cholesterol acyltransferase (LCAT), phospholipid transfer protein (PLTP), hepatic lipase (HL), and cholesterol ester transfer protein (CETP). In addition to traditionally recognized transport pathways, RCT also takes place through passive diffusion, protein-facilitated diffusion, and complex mechanisms involving membrane micro-solubilization. On top of that, RCT is facilitated by other apolipoproteins such as ApoE and ApoM. They are required for HDL formation, maturation, and consequent enhancement of RCT performance. ApoA-1, ApoE, and ApoM recycle to some extent, further enhancing RCT performance. ApoA-1 is the main apolipoprotein in RCT. The main pathway involves docking of lipid-free ApoA-1 onto the ABCA1 transporter, transfer of phospholipids and free cholesterol into ApoA-1, and further cholesterol transfer through the ABCG1 transporter. Multiple HDL3s fuse to form HDL2. Figure 1 shows some of the RCT pathways. Figure 2 shows the detailed cholesterol efflux process that forms the initial step in RCT. Realistic quantitative modeling of RCT is extremely difficult through conventional methods. It requires a software configurator-aided, database-driven systems biology platform that can handle the complex series of dynamic solution parameters and boundary conditions, and use stochastic optimization, multivariate, and multiple regression techniques to match the results from one or more clinical trials. It seeks to integrate the nonlinear dynamic interactions of many components and pathways. RCT has emerged in recent years as one of the most desirable methods of medical intervention to reverse atherosclerotic lesions. Optimized dynamic RCT modeling helps in therapeutic targeting of High Density Lipoproteins (HDL) with the help of ApoA-1 mimetic peptides and other oral small molecules. We have considered many known and clinically verified RCT pathways. Each individual quantitative model, with dynamic parameters, boundary conditions, and other variables, has become part of an RCT quantitative model database accessible to the software configurator. Some of these models are kinetic models developed through clinical
testing. In many cases, we had to adopt appropriate mathematical models using analytical and numerical methods. For example, one of the ways ApoA-1 removes cholesterol is by diffusion via a concentration gradient between the membrane cholesterol donor and the acceptor particle. Cholesterol molecules spontaneously desorb from the plasma membrane, diffuse through the aqueous phase, and are then incorporated onto HDL particles by simple collision. Our quantitative model treats the solute concentration in the space surrounding a particle as a function of distance and time. The diffusion flux k is defined as the amount of solute diffusing in the direction of the solute concentration gradient per unit area of a cross section lying perpendicular to the gradient. The quantitative model follows these equations for the flux k and the rate of change in concentration dc/dt:

k = D (dc/dρ)

dc/dt = D [d²c/dρ² + (2/ρ)(dc/dρ)]

where D is the aggregate diffusion coefficient and ρ is the radius from the center. The following boundary conditions apply: (a) c(ρ, 0) = c_bulk, the concentration at "infinite distance" or bulk concentration; (b) c(r, t) = c_solubility = s, the solute concentration at saturation; and (c) c(∞, t) = c_bulk. The solution (assuming constant r and c_bulk) is:

c = s + (c_bulk - s) [1 - (r/ρ)(1 - erf((ρ - r)/(2(Dt)^1/2)))]

which, as t goes to infinity (i.e., at the end of the transition state), reduces to:

c = s + (c_bulk - s) [1 - (r/ρ)]

The RCT quantitative model uses many complex models, including kinetics, to evaluate cholesterol efflux from the macrophages to ApoA-1 via ABCA1 and ABCG1. The model is enhanced by the quantitative RCT effects of Caveolin, sterol 27-hydroxylase (CYP27A1), scavenger receptor SR-B1 transport processes, and the ABCG5/G8 hepatobiliary and intestinal sterol extraction genes. In addition, the aggregate model also considers the effect of ApoA-1, Apo-E, and Apo-M recycling. Apo-E recycling in hepatocytes is found to be important in enhancing the selective uptake of HDL cholesteryl esters through SR-BI scavenger receptors. Additionally, recycling Apo-E may also serve as a chaperone for proper targeting and repositioning of recycling LRP1 or other receptors to the cell surface. In macrophages, complex models manifest cholesterol efflux
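For illustration, the transient solution above can be evaluated directly; the sketch below uses erfc(x) = 1 - erf(x) and parameter values invented purely for demonstration (they are not physiological values from the model).

```python
import numpy as np
from scipy.special import erfc

def concentration(rho, t, r, D, c_bulk, s):
    """c(rho, t) = s + (c_bulk - s)[1 - (r/rho) erfc((rho - r)/(2 sqrt(D t)))]."""
    return s + (c_bulk - s) * (1.0 - (r / rho) * erfc((rho - r) / (2.0 * np.sqrt(D * t))))

# Illustrative parameters.
r = 5e-9               # m, acceptor particle radius
D = 1e-10              # m^2/s, aggregate diffusion coefficient
c_bulk, s = 1.0, 0.2   # bulk and saturation concentrations (normalized)

rho = np.linspace(r, 10 * r, 5)   # radial positions from the particle surface
for t in (1e-6, 1e-3):
    print(t, concentration(rho, t, r, D, c_bulk, s))

# Check: at rho = r, erfc(0) = 1 gives c = s (saturation at the surface);
# as t -> infinity, erfc -> 1 and c -> s + (c_bulk - s)(1 - r/rho).
```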
through Apo-E recycling. The aggregate model also quantifies Apo-E recycling acting as a biosensor of cholesterol entry and exit pathways to help the cells avoid the dangers of cholesterol accumulation and depletion. ABCA1 and ABCG1 coordinate the removal of excess cholesterol from macrophages using a diverse array of lipid acceptor particles. Cholesterol efflux is dependent on the phospholipid composition of HDL. PON1 enhances HDL-mediated macrophage cholesterol efflux via ABCA1. Smaller, denser HDL3 possesses higher cholesterol efflux capacity. Interactions of ApoA-I with ABCA1 deteriorate with age, affecting the capacity of HDL3 to mediate cholesterol efflux. ABCG5/G8 affects hepatobiliary and intestinal sterol excretion, the last RCT step.
III. CONCLUSIONS

The net RCT pathway is quantified with multiple parameters that can change depending on clinical in-vivo or in-vitro conditions. A standard relational database model holds all the objects of the quantitative model. An optimization algorithm matches the aggregate model with the clinical RCT datasets, automatically adjusting all the parameters until it finds the best solution. Multivariate analysis (MVA) aims to create a derived aggregate model reducing the complexity of multi-dimensional data to a few latent variables that express the majority of the variance of the data set. MVA is also utilized to perform nonlinear multiple regression analysis between large data sets. Our dynamic model can interface with external data sets and other models using Systems Biology Markup Language (SBML), a computer-readable format for representing models of biochemical reaction networks in software.
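As an illustration of the parameter-matching step described above, the sketch below fits two parameters of a toy saturating-efflux model to a clinical-style dataset by nonlinear least squares; the model form, parameter names, and data points are all invented stand-ins for the much larger configurator-driven aggregate model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy aggregate model: net cholesterol efflux as a saturating function of
# time with two free parameters (k_efflux, tau); invented for illustration.
def model(params, t):
    k_efflux, tau = params
    return k_efflux * (1.0 - np.exp(-t / tau))

t_obs = np.array([0.5, 1, 2, 4, 8, 16])           # h, hypothetical sampling times
y_obs = np.array([0.9, 1.6, 2.6, 3.5, 4.1, 4.3])  # hypothetical efflux data

def residuals(params):
    return model(params, t_obs) - y_obs

# Adjust parameters until the residual sum of squares is minimized.
fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0, 0], [np.inf, np.inf]))
print("best-fit parameters:", fit.x)
```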
ACKNOWLEDGMENT

The author acknowledges the contributions and efforts of Sysoft software engineers and bioengineers in this project.
REFERENCES 1. Adhikari S, (2010) Mathematical Modeling of the Effects of Increased ApoA-1 Transcription and Subsequent Protein Synthesis on Reverse Cholesterol Transport. Arteriosclerosis, Thrombosis, and Vascular Biology 2010 Scientific Sessions of American Heart Associaion, San Fracisco. 2. Adhikari S, (2010) Hemodynamic Analysis of the Effects of Exercise on Plaque Formation, Stability, and Reversal, Arteriosclerosis, Thrombosis, and Vascular Biology 2010 Scientific Sessions of American Heart Associaion, San Fracisco. 3. Adhikari S, (2010) Configurator based dynamic modeling for reverse cholesterol transport, Kinmet 2010, San Fracisco. 4. D.J. Rader (2006) Molecular regulation of HDL metabolism and function: implications for novel therapies, J Clin Invest, 116, pp.3090–100. 5. N. Mukhamedova, G. Escher, W. D’Souza, et al.,(2008) Enhancing apoliprotein A-I dependent cholesterol efflux elevates cholesterol export from macrophages in vivo, J Lipid Res, 49, pp.2312–22. 6. M. Pennings, I. Meurs, D. Ye, et al., (2006) Regulation of cholesterol homeostasis in macrophages and consequences for atherosclerotic lesion development, FEBS Lett , 580, pp.5588–96. 7. M. Navab , G.M. Anantharamaiah, S.T. Reddy, et al., (2006) Mechanisms of disease: proatherogenic HDL—an evolving field, Nat Clin Pract Endocrinol Metab, 2, pp.504–11. 8. M. Cuchel, D.J. Rader, (2006) Macrophage reverse cholesterol trans port: key to the regre ssion of atherosclerosis? Circulation, 113, pp.2548–55. Author: Institute: Street: City: Country: Email:
Author: Sam Adhikari
Institute: Sysoft Center for Systems Biology and Bioengineering
Street: P.O. Box 219
City: Whitehouse Station
Country: USA
Email: [email protected]
Modeling Linear Head Impact and the Effect of Brain-Skull Interface
K. Laksari, S. Assari, and K. Darvish
Temple University, Department of Mechanical Engineering, Philadelphia, USA
Abstract— The aim of this research was to simulate a severe linear impact to the head and study its effect on a brain substitute material in terms of the deformations that generally lead to Traumatic Brain Injury (TBI). Simplified 2D models of a transverse section of the human brain were made with 5% gelatin as the brain surrogate material. The models underwent 55G deceleration and the strain distribution was measured through image processing. Finite element (FE) models of the experiments were developed using Lagrangian formulations and validated. Using physical material properties, the FE computational parameters were determined based on the results of strain distribution and posterior gap generation. The strain and pressure levels in the FE model both reach the injury threshold levels known for brain tissue.
Keywords— Traumatic Brain Injury, Head Impact, Finite Element Model.
I. INTRODUCTION

Traumatic Brain Injury (TBI) is one of the major causes of fatality and disability worldwide, with 5.3 million people living with a disability related to TBI in America alone [1]. The mechanisms of brain injury, from an engineering perspective, are those measurable physical quantities and processes that lead to functional and/or material failure in various tissues of the central nervous system. To calculate internal stress, strain, and pressure at all locations and at any given instant of time during an impact, a finite element model of the brain that can accurately capture the interaction between brain and skull is needed. Several such models have been developed in the past [e.g., 3]. In this study, two-dimensional physical and FE models of the human head under linear deceleration were developed with the focus of studying the modeling of the brain-skull interface and its effect on brain deformation. The aims were a) to measure the brain surrogate material deformations experimentally with various interface conditions and b) to validate finite element models of the experiments using physical viscoelastic material properties.
II. MATERIALS AND METHODS

5% gelatin was used as the brain substitute material. The dynamic viscoelastic material properties of the gel were
determined from shear tests and are comparable to brain material properties (Table 1) [3]. A simplified 2D physical model of the human head, in the shape of a hollow disk with 100 mm diameter and 20 mm thickness, was made. To satisfy the 2D assumption, the disk was sealed at the top and bottom and displacements perpendicular to the plane of motion (vertical) were prevented. At the same time, by wetting the surfaces of the gel to reduce friction, the gel could freely deform in the horizontal plane. The disk was mounted on a high-speed track and crashed into a shock absorber, which creates impacts with constant decelerations between 30 and 70G [4]. A deceleration of 55G was chosen, corresponding to a crash at about 30 mph with a HIC (Head Injury Criterion) of 700, which is the threshold for experiencing significant head injury [5]. The gel deformation during impact was quantified by tracking photo targets at 2200 frames per second (Phantom, v4.2) with an accuracy of ±0.2 mm (Figure 1).
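For reference, the HIC value quoted above is computed from the resultant head acceleration trace. A minimal Python sketch of the standard HIC definition follows (a brute-force search over time windows; the constant 55 g pulse is a synthetic stand-in for measured data, not the experiment's actual trace):

```python
import numpy as np

def hic(t, a, max_window=0.036):
    """Head Injury Criterion: max over windows [t1, t2] of
    (t2 - t1) * (mean acceleration in g over the window) ** 2.5."""
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = np.trapz(a[i:j + 1], t[i:j + 1]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Synthetic example: a constant 55 g pulse lasting ~20 ms
t = np.linspace(0.0, 0.05, 501)
a = np.where((t > 0.01) & (t < 0.03), 55.0, 0.0)
print(hic(t, a))
```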
Fig. 1 Maximum deformed shape in the experimental and corresponding FE models with 1 mm gap

Two categories of experiments were conducted in order to reach a better understanding of the brain-skull interaction and the deformation of the brain during an impact: a) the gel was immersed in water, which completely filled the cylinder, representing the case where CSF has filled all the voids in the skull and has no room to leave the skull; b) there was an initial gap, ranging from 0 to 2 mm, between the cylinder and the gel. This was done to
investigate the effect of the existence, and the size, of such an initial gap on causing large deformation of brain tissue. The gel impact test with a slip boundary condition was modeled in LS-DYNA (LSTC, CA) with a Lagrangian formulation, single-point-integration solid elements, and a soft contact algorithm. The gel's physical viscoelastic material properties were implemented in the MAT_GENERAL_VISCOELASTIC formulation (Table 1) and the skull was assumed rigid. For model validation, the parameters that were changed were the Poisson's ratio and the hourglass type and coefficient.
In the case where the cylinder is filled with water, the relative acceleration of the brain with respect to the skull is related to the difference in their densities, which was very small (both ≈ 1000 kg/m³) and resulted in negligible strains. However, the change in pressure could potentially be an important cause of brain injury. The maximum pressure present in the frontal region of the gel was around 100 kPa, comparable to the peak pressure results given by Nahum (80 to 200 kPa) [10].

Table 1 Material properties of the surrogate brain material (5% gelatin)

G(t) = G∞ + Σ Gi e^(−βi t) (sum over i = 1 to 4)

G1 = 69.49 Pa, β1 = 0.1 s⁻¹
G2 = 104.96 Pa, β2 = 1.0 s⁻¹
G3 = 114.32 Pa, β3 = 10 s⁻¹
G4 = 761.4 Pa, β4 = 100 s⁻¹
G∞ = 130.72 Pa

Fig. 2 Effective strain in the posterior region
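A small Python sketch (illustrative only) that evaluates the Prony-series relaxation modulus of Table 1, e.g., to compare the gel's G(t) against published brain data:

```python
import numpy as np

# Prony series terms for 5% gelatin from Table 1: (G_i [Pa], beta_i [1/s])
TERMS = [(69.49, 0.1), (104.96, 1.0), (114.32, 10.0), (761.4, 100.0)]
G_INF = 130.72  # long-term modulus [Pa]

def relaxation_modulus(t):
    """G(t) = G_inf + sum_i G_i * exp(-beta_i * t)."""
    t = np.asarray(t, dtype=float)
    return G_INF + sum(g * np.exp(-b * t) for g, b in TERMS)

t = np.logspace(-4, 1, 6)  # 0.1 ms to 10 s
print(np.column_stack([t, relaxation_modulus(t)]))
```

At t = 0 this gives an instantaneous modulus of about 1181 Pa, decaying to the 130.72 Pa long-term value.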
III. RESULTS AND DISCUSSION
In the experiments, when the cylinder was filled with gel (no gap) or water filled the remaining gap between the cylinder and the gel, no significant deformation was observed. This can be explained by the incompressibility of water and gel, and it also agreed with the FE results. In the experiments with an initial gap, significant gel deformations were observed, and the FE models were validated against the experimental data. The effective strain distribution (Figure 2) and the boundary deformation (Figure 3) were used for this validation. The figures are plotted for a 1 mm initial gap, and the model results are calculated with a Poisson's ratio equal to 0.4995. Both curves show reasonable agreement between the model and experimental results. A rigorous quantitative assessment of this validation is underway. For hourglass control, the Flanagan-Belytschko stiffness form with exact volume integration for solid elements (HQ = 0.15) was found to work best for this problem. The default soft contact penalty factor (0.1) was sufficient to avoid instability. The strain levels in the FE simulations reached more than 25%, which is reported in the literature as the threshold of injury [6]. A comparison of the displacement data (at the center of the cylinder) with those determined by Hardy et al. in head impact experiments [9] shows that they agree in terms of the maximum and minimum relative displacements (2 to 5 mm).
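For reference, the effective strain plotted in Fig. 2 can be computed from a measured 2D strain state as the von Mises equivalent strain. A short sketch follows (assuming plane strain; the tensor components are hypothetical values of the kind estimated from tracked photo targets):

```python
import numpy as np

def effective_strain(exx, eyy, exy):
    """Von Mises equivalent strain for a 2D (plane-strain) state.

    eps_eff = sqrt(2/3 * dev(E):dev(E)), with dev(E) the deviatoric strain."""
    E = np.array([[exx, exy, 0.0], [exy, eyy, 0.0], [0.0, 0.0, 0.0]])
    dev = E - np.trace(E) / 3.0 * np.eye(3)
    return np.sqrt(2.0 / 3.0 * np.tensordot(dev, dev))

# Illustrative components (hypothetical values)
print(effective_strain(exx=0.05, eyy=-0.02, exy=0.12))
```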
Fig. 3 Maximum boundary deformation

The next step in this research would be to measure the pressure change at various locations experimentally and compare it with the FE simulation results. Also, in reality, since brain motion is somewhat constrained at its attachment to the spine and also by the trabeculae in the subarachnoid space, the experimental model needs to be modified to incorporate such constraints. This can be accomplished, for example, by fixing the gel to the cylinder in an off-center region. Such constraints can play a crucial role in increasing the local shear strains. The results of this study can be used to develop three-dimensional FE models of the brain for which experimental validation data are difficult or costly to achieve. The main difference will be the geometry, which can be generated from CT-scan or MRI data. The material properties and other FE parameters will be the same.
A major limitation of this study is the use of a homogeneous and isotropic material for the brain. Various studies show that the brain is inhomogeneous and anisotropic [7, 8]. Whether the inhomogeneity of the brain (e.g., white matter and gray matter) can cause additional shear stress at their interface, or whether the highly oriented nerve fibers in the corpus callosum and the brain stem can lead to higher shear stresses, are important questions that will require more elaborate models to study.
REFERENCES
1. National Center for Injury Control and Prevention Website (accessed 2008) http://www.cdc.gov/ncipc/tbi/TBI.htm
2. Yang KH, King AI (2003) "A limited review of finite element models developed for brain injury biomechanics research", Int. J. Vehicle Design, Vol. 32, Nos. 1/2, pp. 116-129
3. Laksari K, Darvish K (2009) "Brain Deformations in Linear Head Impact", Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Lake Buena Vista, FL
4. Shafieian M, Darvish K, Stone JR (2009) "Changes to the Viscoelastic Properties of Brain Tissue after Traumatic Axonal Injury", Journal of Biomechanics, Vol. 42, pp. 2136-2142
5. Mertz HJ, Prasad P (1997) "Injury Risk Curves for Children and Adults in Front and Rear Collisions", Technical Report by H.J. Mertz, General Motors, P. Prasad, Ford Motor Co. and A.L. Irwin, General Motors, #973318
6. Bain A, Meaney D (2000) "Tissue-Level Thresholds for Axonal Damage in an Experimental Model of Central Nervous System White Matter Injury", Journal of Biomechanical Engineering, Vol. 122, Issue 6, 615
7. Arbogast KB, Margulies SS (1998) "Material Characterization of the Brainstem from Oscillatory Shear Tests", Journal of Biomechanics, Vol. 31, 801-807
8. Prange M, Margulies S (2000) "Defining Brain Mechanical Properties: Effects of Region, Direction, and Species", Stapp Car Crash Journal, 44:205-13
9. Hardy W, Mason M, Foster C, Shah C, Kopacz J, Yang K, King A (2008) "A Study of the Response of the Human Cadaver Head to Impact", Stapp Car Crash Journal
10. Nahum A, Smith R, Ward C (1977) "Intracranial Pressure Dynamics during Head Impact", Stapp Car Crash Journal
Author: Kurosh Darvish, PhD
Institute: Temple University
Street: 1947 N. 12th Street
City: Philadelphia
Country: USA
Email: [email protected]
Mechanics of CSF Flow through Trabecular Architecture in the Brain
Parisa Saboori, Catherine Germanier, and Ali Sadegh
Dept of Mechanical Engineering, The City College of The City University of New York
Abstract— The importance of the subarachnoid space (SAS) and the meningeal region, which provides a restraining interface between the brain and the skull during coup/countercoup movement of the brain, has been addressed in the literature [9,10,11]. During contact or non-contact (angular acceleration) loading of the head, caused by vehicular collisions, sporting injuries, and falls, the brain moves relative to the skull, thereby increasing the contact and shear stresses in the meningeal region and leading to traumatic brain injuries (TBI). Previous studies have oversimplified this region by modeling it as a soft solid, which could lead to unreliable results. The biomechanics of the SAS has not been addressed in the literature. In this paper the mechanotransduction of the cerebrospinal fluid (CSF) through the SAS has been investigated. This is accomplished through a proposed analytical model and a finite element solution. The results indicate that Darcy's permeability is an appropriate model for the SAS and that the proposed analytical model can be used to further investigate the transduction of mechanical and hydrodynamic forces through the SAS.
Keywords— Head impact, Subarachnoid Space, analytical modeling, CSF flow, Darcy permeability.
I. INTRODUCTION

In an accident, the human head, being a vulnerable body region, is most frequently involved in traumatic brain injuries (TBI) and life-threatening injuries. The anatomy of the head reveals that the brain is encased in the skull and is suspended and supported by a series of fibrous tissue layers (dura mater, arachnoid, trabeculae, and pia mater), known as the meninges. In addition, cerebrospinal fluid (CSF), located in the space between the arachnoid and pia mater known as the subarachnoid space (SAS), stabilizes the position of the brain during head movements. To explain the likely injury process of the brain and to quantify the response of the human head to blunt impacts, investigators have employed experimental, analytical, and numerical methods. Many researchers have used the finite element (FE) method to study head/brain injuries [1, 13, 7, 17, 18, 19]. The complicated geometry of the SAS and trabeculae makes it impossible to model all the details of the region. Thus, in these and other similar studies, the meningeal layers and the subarachnoid region have been simplified as a soft elastic material or, in some cases, as water (i.e., a soft solid having the bulk modulus of water and a very
low shear modulus), e.g., [7, 17, 18]. That is, the hydraulic damping (i.e., the fluid-solid interaction) and the mechanical role of the fibrous trabeculae and the cerebrospinal fluid (CSF) in the subarachnoid space (SAS) were ignored. These simplifications are due to the complex geometry and random orientation of the trabeculae. In addition to the simplified models, the mechanical properties of the SAS are not well established in the literature. A few studies [6, 16, 17, 18] have reported elastic moduli of trabeculae ranging over up to three orders of magnitude. As indicated, the SAS, which includes the CSF and the trabeculae, has a complicated geometry. This is due to the abundance of trabeculae in the form of rods (fibers) and thin transparent plates extending from the arachnoid (subdural) to the pia mater. The pia mater adheres to the surface of the brain and follows all its contours, including the folds of the cerebral and cerebellar cortices. This gives the subarachnoid space a highly irregular shape and makes the distribution of CSF around the brain non-uniform. The volume of CSF is highest within the cistern regions of the brain where, due to the shape of the brain surface, the subarachnoid space is large. Arachnoid trabeculae are more concentrated in the subarachnoid cisterns, sometimes even coalescing into membranes that partially occlude the subarachnoid space. This correlation between the CSF and the trabeculae suggests that their functions are not independent. These fluid and solid phases work together to mechanically support the brain. While the functionality of the SAS is understood, the histology and biomechanics of this important region have not been fully investigated. It is understood, however, that the arachnoid is a thin vascular layer composed of layers of fibroblast cells interspersed with bundles of collagen, and the trabecula is also a collagen-based structure. Only the histology of the trabeculae of the optic nerves has been studied [6]. In our previous in-vitro and in-vivo (animal) studies [9,10,11] the histology and the architecture of the trabeculae were investigated. Specifically, we employed a micro CT scan and Mimics software and studied the 3D random structure of the trabeculae. In addition, solidified samples of the brain tissues were sliced using a vibratome and were dyed through standard procedures for fluorescent and confocal microscopy. Finally, an in vivo experiment was performed
using a Sprague-Dawley rat. The rat was anesthetized with pentobarbital sodium given subcutaneously, and then the right atrium was incised. With a sharp needle, a pre-fixative solution of PBS (phosphate buffered saline) was injected into the left ventricle, followed by the fixative solution, which solidified the subarachnoid space. Finally, after a few minutes, through the blood circulation, the blood vessels of the SAS were solidified. Then the animal was sacrificed and a sample of the brain tissue was prepared for scanning electron microscopy; see [10,11]. While there have been many finite element studies of brain/head models, there are limited analytical models. The goal of the present paper is to mathematically model the subarachnoid space (SAS) and to investigate the biomechanics of CSF flow through the trabecular architecture in the subarachnoid space.
II. MECHANICS OF CSF FLOW THROUGH THE TRABECULAE

In this section, we propose a structural model for the analysis of the mechanical transduction of the CSF's hydrodynamic forces through the SAS. Consider a transverse and/or lateral slice of the head where the brain is encased by the SAS. For simplicity of the analytical model we assume that the SAS is a uniform strip of a continuum around the brain. When the head is subjected to an impact, the deformation of the brain at the coup location causes the CSF to flow around the brain and stagnate at the countercoup location. Assume that at the CSF's stagnation point (countercoup) the SAS is cut; then the strip band around the brain is straightened out as a long channel. Therefore, the mechanics of the CSF flow through the SAS can be simulated as the flow of the CSF through a deformable channel (a long strip) when the plane of the channel is subjected to an impact or deformation. Based on the anatomy of the head and brain, the dimensions of the deformable channel are 3 mm thick (0.003 m), 10 mm wide (0.01 m), and 600 mm long (0.6 m), as shown in Fig. 1.

Fig. 1 The channel (strip) model

It is assumed that the flow of the fluid through the deformable channel is governed by Darcy's permeability law. The reason for this choice is that in our previous studies [9] several material models of the SAS, including Darcy's permeability, viscous fluid, fluid-solid interaction, and poroelastic materials, were considered and analyzed. These material models were compared/validated with the experimental results of [2], who applied a mild angular acceleration to three human subjects and measured the strain in the brain. It was concluded that the results of Darcy's model were in good agreement with the experimental results, and thus it was selected. In addition, our in-vivo and in-vitro studies revealed that abundant trabeculae exist in the SAS region, which create a hydrodynamic resistance force against the CSF flow similar to Darcy's permeability model.

Structural model: We propose a hexagonal structural (unit cell) model for the structural organization of the trabecular architecture, Fig. 2. While this is an ordered structure, it provides a base to formulate a mathematical model for analyzing the transduction of mechanical and hydrodynamic forces through the SAS. It is also assumed that each trabecula is a fiber (rod) with a circular cross-section of radius a connecting the arachnoid to the pia mater. This structural model provides a reasonable prediction of the SAS permeability. Darcy's permeability k is estimated by

k = Cε³/(μS²)

[14], where C = 1/2 for a circular void, ε = void volume / total volume, μ = dynamic viscosity, and S = total reference area / total volume.

Fig. 2 Hexagonal structural model of trabeculae

Based on the experimental results, it is estimated that the radius of the fiber is approximately 5 microns and the fiber gap (spacing between two fibers) is ∆ = 40 microns. Using the hexagonal geometry, the void volume / total volume ε and the total reference area / total volume S can be written in terms of a and ∆. Therefore, using these equations, the SAS permeability is Kp = 3.125e-10, which is in the range of permeability for soft tissue in fluid media [15].
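A quick numerical sketch of the permeability estimate above (Python). The values of ε and S below are assumed placeholders chosen only to land in the reported order of magnitude; they are not the paper's exact hexagonal-geometry expressions:

```python
# Hedged sketch of the estimate k = C * eps**3 / (mu * S**2) from [14].
C = 0.5        # shape constant for circular voids
mu = 1.0e-3    # dynamic viscosity of CSF, approximated as water [Pa*s] (assumed)
eps = 0.95     # void volume / total volume (assumed)
S = 1.2e6      # total reference area / total volume [1/m] (assumed)

k = C * eps**3 / (mu * S**2)
print(f"Darcy permeability estimate: {k:.3e}")  # ~3e-10, same order as Kp above
```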
Mathematical formulation: Consider a long channel containing the trabeculae and the CSF, shown in Fig. 1, subjected to a transverse load q(x,t) or displacement on the top surface. The governing Darcy's law is

u(x,t) = −(k/μ) ∂p(x,t)/∂x (1)

where u(x,t) is the velocity and p(x,t) is the pressure of the CSF. The continuity equation is

∂h(x,t)/∂t + ∂[h(x,t)·u(x,t)]/∂x = 0 (2)

where h(x,t) is the height (thickness) of the channel. Finally, the balance of forces leads to a third equation (3) coupling the pressure p(x,t), the channel height h(x,t), and the applied load q(x,t). In this case we assumed q(x,t) = 0. Utilizing several mathematical manipulations and changes of variables, the solution to the three coupled differential equations was obtained. To solve for the constants C1, C2, and C3 of the solution, boundary conditions at the channel ends (x = 0 and x = 0.3 m, half the channel length) and on the channel height (h = 0.003 m) were used, and the constants were determined (e.g., C2 = −0.3). The input velocity corresponding to a blunt impact was used, and the results were compared to the FE solution.

Finite element approach: The three-dimensional model of Fig. 1 was created, subjected to the same boundary conditions, and solved using ABAQUS. Due to space limitations the FE model and its results are not presented here; however, the results of both the analytical and FE methods were compared and they were in good agreement, as shown in Fig. 3.

Fig. 3 Comparison of the analytical and FE results

Trabecular buckling and recoil: Once the deformation of the proposed strip model of the SAS is known, the buckling and recoil of the unit cell fiber (trabecula) is investigated. A 2D unit cell FE model was created using MRI images, the velocity boundary condition was applied, and ABAQUS software was employed to determine the displacement history of the trabecula. As shown in Fig. 4, the maximum displacement of the fiber was 2.6e-4 m.

Fig. 4 Trabecular buckling due to velocity input BC at different time steps

Analytical approach: The hexagonal unit cell model, represented by a single fiber, was employed and subjected to a non-uniform velocity profile as shown in Fig. 5. Based on that profile, a drag force profile was applied on the trabecula. This drag force per unit length is proportional to the local relative velocity between the CSF and the fiber, with a coefficient that depends on the dynamic viscosity μ, the radius a of the trabecula, and the fiber volume fraction of the trabeculae in the periodic unit; a, μ, and ∆ have the same values as in the previous section (the strip model). Assuming that the instantaneous local fiber velocity and the local
deflection of the fiber are known, the fiber velocity can be written as the time derivative of the deflection. Then, using the similar approach presented in [5,15] for small elastic deflections, the viscoelastic recoil of a trabecula is determined by a differential equation for the fiber deflection.

To solve this equation we first introduce dimensionless variables, normalizing the deflection by the maximum deflection at the beginning of the analysis, the position by the height of the SAS, and the time by the coefficients of the velocity term. For the boundary conditions we assume that there is no deflection at the top and bottom of the SAS, that the slope at the top of the SAS is zero, and that there is no shear force in the middle of the fiber. After some manipulation, the solution of the differential equation is expressed as a series whose terms fi are unknown time-dependent functions determined through the boundary conditions. The results are shown in Fig. 6, which indicates that it takes approximately 19-20 milliseconds for the fiber to come back to its original shape in the SAS region.

Fig. 5 Non-uniform velocity profile applied to the unit cell fiber

Fig. 6 Recoil history of the trabecular fiber
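As a rough illustration of the recoil behavior described above (not the paper's actual equation, which is not reproduced here), the sketch below integrates a generic first-order viscous-recoil model in which an elastically bent fiber relaxes against fluid drag; the time constant is an assumption chosen only to show a millisecond-scale recoil:

```python
import numpy as np

# Generic viscous recoil: c * dy/dt = -k_eff * y  =>  y(t) = y0 * exp(-t / tau).
# tau = c / k_eff is an assumed drag/stiffness ratio, NOT fitted to the paper.
tau = 5e-3     # [s] assumed time constant
y0 = 2.6e-4    # [m] maximum fiber displacement reported in the paper

t = np.linspace(0.0, 0.02, 201)
y = y0 * np.exp(-t / tau)
print(f"deflection at t = 20 ms: {y[-1]:.2e} m ({y[-1] / y0:.1%} of max)")
```

With these assumed values the fiber has recovered to within a few percent of its original shape by about 20 ms, the same order as the recoil time reported in Fig. 6.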
III. CONCLUSION

The aim of this study was to find a suitable model for the SAS region to be utilized in our global head model, where we investigate the strain in the brain due to contact and/or non-contact head accelerations (impacts). Several models (soft solid, viscous fluid, Darcy's permeability model, and porous elastic models) were investigated. It was determined that Darcy's model is a realistic representation of the SAS region and can explain the hydrodynamic forces of CSF through the SAS region.
In this study we proposed a hexagonal structural (unit cell) model for the structural organization of the trabecular architecture. In addition, for the mathematical formulation we assumed that the SAS is a uniform strip of a continuum around the brain. The mathematical formulation of the strip model was based on Darcy's permeability, the continuity equation, and the balance of forces equation. The solution of the analytical model was in good agreement with the FE solution. Finally, the buckling and recoil of the unit cell fiber were formulated, solved analytically, and compared with the FE solution. The results of this study confirm the validity of the proposed structural unit cell model and the results of the analytical solution. In addition, the results indicate that Darcy's permeability is an appropriate model for the SAS. This study can be used as a basis to further investigate the transduction of mechanical and hydrodynamic forces through the SAS.
REFERENCES
1) Al-Bsharat AS, et al. (1999) Intracranial pressure in the human head due to frontal impact based on finite element model. Proc. Bioengineering Conference, BED ASME, 42:113-114
2) A. Sabet et al. (2005) Deformation of the human brain induced by mild acceleration, J Neurotrauma, 22(8): 845–856
3) Bayly et al. (2005) Deformation of the human brain induced by mild acceleration, J. Biomechanics; 41:307-315
4) Drew et al. (2004) The Contrecoup-Coup Phenomenon. Neurocritical Care 3: 385–390
5) Guo et al. (2000) A hydrodynamic mechanosensory hypothesis for brush border microvilli. Am J Physiol Renal Physiol 279(4):F698-712
6) Killer HE, et al. (2003a) Architecture of arachnoid trabeculae, pillars, and septa in the subarachnoid space of the human optic nerve, Br J Ophthalmol; 87:777–81
7) Kleiven S, Hardy WN (2002) Correlation of an FE Model of the Human Head with Local Brain Motion - Consequences for Injury Prediction. Stapp Car Crash Conference, 46:2002-22-0007
8) H.E. Killer, et al. (2006) The optic nerve: a new window into cerebrospinal fluid composition
9) P. Saboori, A. Sadegh (2009) Effect of Mechanical Properties of SAS Trabeculae in Transferring Loads to the Brain (IMECE2009)
10) P. Saboori, A. Sadegh (2010) Histology and Trabecular Architecture of the Brain for Modeling of Subarachnoid Space. ASME/SBC
11) P. Saboori, A. Sadegh (2010) Modeling of Subarachnoid Space and Trabeculae Architecture as it Relates to TBI. 16th US Nat. Cong. Theor. & App. Mech. #887
12) Rao V, Lyketsos C (2000) Neuropsychiatric Sequelae of Traumatic Brain Injury. Psychosomatics 41(2): 95–103
13) Ruan JS, Khalil TB, King AI (1993) Finite Element Modeling of Direct Head Impact. Stapp Car Crash Conference, 37:933114
14) G. Truskey, et al. (2004) Transport Phenomena in Biological Systems
15) Weinbaum et al. (2002) Mechanotransduction and flow across the endothelial glycocalyx. PNAS 100(13): 7988-7995
16) Xin Jin, et al. (2008) Biomech. Response of the Bovine Pia-Arachnoid Complex to Tensile Loading at Varying Strain-Rates. Stapp 50: 637-649
17) Zhang L, Yang KH, King AI (2001b) Biomechanics of Neurotrauma. J Neurotrauma 23: 144-156
18) Zhang et al. (2001c) Comparison of Brain Responses Between Frontal and Lateral Impacts by Finite Element Modeling. J of Neurotrauma, 18: 21-30
19) Zhang L, et al. (2002) Recent Advances in Brain Injury Research. Stapp Car Crash Conference, 45: 2001
Impact of Mechanical Loading to Normal and Aneurysmal Cerebral Arteries
M. Zoghi-Moghadam, P. Saboori, and A. Sadegh
Department of Mechanical Engineering, City College of New York, New York, NY
Abstract— Aneurysms of the cerebral arteries carry a potential risk of rupture, which leads to intracranial hemorrhage that could be fatal. Nearly 1 to 5 percent of the population has a cerebral aneurysm; however, 50 to 80 percent of these cases never face rupture of the cerebral arteries. In case of rupture there is a high mortality rate of 45 percent. Many people with cerebral aneurysms are unaware of them. It is plausible to assume that arteries with an aneurysm are more susceptible to injury in the case of the high stress and strain created by contact or non-contact head acceleration (blunt impacts). The purpose of this study is to investigate the response of cerebral arteries with and without aneurysm under external mechanical loadings. A two-dimensional (2D) finite element model of the head in the sagittal plane has been created. The most common site of cerebral aneurysm is the circle of Willis, more specifically the anterior communicating artery (ACoA). A portion of the ACoA with and without aneurysm has been created in the 2D model. The model was analyzed under dynamic loading, and the stress and strain fields were investigated. It was concluded that the strain field is higher in the aneurysm case compared to the normal case. It was also observed that the presence of an aneurysm influences the surrounding brain media.
Keywords— Aneurysm, Finite Element Analysis, Head Injury, Modeling.
I. INTRODUCTION

An aneurysm is a focal dilatation of the vessel wall. Most are spherical in shape (saccular aneurysms), but they can also be fusiform. Aneurysms of the cerebral arteries can have severe consequences. Autopsy studies have estimated that the prevalence of cerebral aneurysms in the adult population is between 1 and 5 percent. These aneurysms become clinically relevant when they rupture and cause severe intracranial hemorrhage that could be fatal. Patients with ruptured aneurysms present with a sudden onset of severe headache, stiff neck, and meningeal irritation due to subarachnoid hemorrhage. Studies have shown that 50 to 80 percent of all aneurysms do not rupture, thus these patients do not know that they are at risk [1]. However, if rupture does occur, its mortality rate is 45 percent within the first month post rupture [2,3]. Currently, there is no sensitive, cost-effective method for detecting cerebral aneurysms in the early stages. Risk factors include hypertension,
smoking, heavy alcohol consumption, family history, and head injury. Understanding the difference in response to mechanical loading of a normal artery versus an artery with an aneurysm can be helpful for preventing this fatal consequence in people at high risk. Traumatic brain injury (TBI), which is highly prevalent in the adult population, particularly needs to be prevented in these high-risk individuals. Nearly 1.5 million people in the US suffer from TBI annually [4]. The major causes of TBI are crashes involving motor vehicles, bicycles, and pedestrians, firearm use, contact sports, and falls. Researchers have studied cerebral aneurysms from different viewpoints. Li and Robertson (2009) developed a structural multi-mechanism damage model. They modeled cerebral arteries as an incompressible fiber-reinforced composite, with the reinforcement provided by a helical network of collagen fibers. Their model was validated against analytical data [5]. Eriksson et al. (2009) developed a growth model of saccular cerebral aneurysms using a two-layer cylinder (media and adventitia). They generated a damage region in the media and studied the stress distribution according to fiber angle [2]. Watton et al. (2009) studied the effect of changes in the hemodynamic environment on the formation of cerebral aneurysms. They showed that the initiation of an aneurysm is related to high wall shear stress (WSS) and wall shear stress gradient (WSSG) [3]. This has been confirmed by Li and Robertson (2009), Marcelo et al. (2006), Meng et al. (2006, 2007), and Metaxa et al. (2009) [5-9]. Many people with cerebral aneurysms are unaware of them, because they have never experienced a rupture and thus there are no clinical signs or symptoms. The significance of this is currently unknown. It is plausible to assume that arteries with aneurysms are weaker and thus more susceptible to high stress and strain concentrations, i.e., during contact and non-contact acceleration of the head (blunt impacts). If this is the case, then there may be a reason to screen people with a high relative risk based on known risk factors. This may allow these people to be monitored more carefully in order to prevent a catastrophic event. Furthermore, understanding how the presence of an aneurysm affects stress and strain concentrations could lead to a better assessment of a patient's risk for rupture. This may allow for preventative interventions for people with a significant risk of rupture.
The purpose of this study is to investigate the response of cerebral arteries with and without aneurysm under external mechanical dynamic loadings. The most common sites for cerebral aneurysms are the arteries that make up the circle of Willis (about 75%). More specifically, the statistical data show the following occurrence rates: anterior communicating artery, 30%; posterior communicating artery, 25%; and middle cerebral artery, 20% [1]. A simplified two-dimensional finite element model of the circle of Willis within the brain media is created. The model is subjected to external loading in terms of a dynamic displacement boundary condition. The focus of the study is on the anterior communicating artery (ACoA). Its strain and stress fields due to external loading are determined for two different cases, from normal arteries to severely aneurysmal arteries.
II. METHODS

A. Modeling

The model is a two-dimensional (2D) sagittal plane model of the head. It consists of the scalp, skull, dura mater, arachnoid layer, pia mater, and brain. The geometry of the model is taken from human magnetic resonance imaging (MRI). A detailed description of the model, including the geometry information, mesh generation, and material properties, can be found in Saboori and Sadegh (2009) [10]. To model the aneurysm, a portion of the ACoA, which is a component of the circle of Willis, was created in the 2D model. Figure 1 shows the 2D model including all its components. The ACoA has been modeled as a single-layered cylinder comprising the adventitia and media layers. The material properties are taken from Watton et al. (2009) [3]. To account for the presence of blood within the artery, a solid material with a low shear modulus and high Poisson's ratio was placed inside the arterial walls. There are two models of the ACoA, one representing a normal healthy artery and the other an ACoA with a saccular aneurysm. The geometry of the aneurysm is taken from Brisman et al. (2006). Figure 2 shows a magnified view of the ACoA with aneurysm. The wall thickness of the blowup region (0.25 mm) is less than that of the healthy portions of the ACoA (0.375 mm), reflecting the fact that the arterial wall has been degenerated from the inside, thereby producing the aneurysm. ABAQUS/CAE 6.9-2 [11] was used for pre-processing, analysis, and post-processing. The model was subjected to a dynamic blunt impact in the form of a velocity boundary condition. It took 90 minutes to run a 1-sec analysis in 40 intervals with a 3.4 GHz processor.
Fig. 1 2D model of the head in the sagittal plane. The meningeal layers are present in the model. In the middle there is a portion of the ACoA in the sagittal plane. The normal ACoA is shown here; a second model with the same geometry has an aneurysmal ACoA
Fig. 2 Aneurysmal artery; the radius of the aneurysm region is 1 mm. The blood media is modeled as a solid material with a low shear modulus and a very high Poisson's ratio

B. Results

The nodal solution for stress and strain was investigated. Figure 3 shows the strain contour at the last time step for the normal artery. As shown in this figure, the strain is larger in the vicinity of the ACoA. This could be due to the different material properties of the brain media and the ACoA.
Fig. 3 Strain contour at t = 0.5 sec

Fig. 4 Stress contour (a) in the normal case and (b) in the aneurysmal case. The region of high stress is larger in (b)

Figures 4a and 4b show the stress fields of the brain region for the normal case and the aneurysm case, respectively. As seen in Figure 4a, the stress distribution is fairly smooth. The maximum stress occurs around the tips of the ACoA, which could be due to a stress concentration factor. However, the stress field for the aneurysm case, shown in Figure 4b, has a relatively large region of high stress in the middle of the brain compared to the normal case. This shows the impact of the aneurysm on the stress field of its neighboring components, such as the brain. Figures 5a and 5b show the stress and strain distributions in the aneurysmal arterial wall. As shown in Figure 5a, the maximum stress takes place in the lower left portion of the artery as well as in the aneurysm portion. Figure 5b almost follows the same pattern for strain, which is expected. Figures 5a and 5b suggest that an injury is likely to happen in the aneurysm portion.
Fig. 5 (a) Stress distribution and (b) strain distribution in the aneurysmal ACoA
III. DISCUSSION AND CONCLUSIONS
The objective of this study is to compare the response of healthy normal cerebral arteries versus aneurysmal arteries to external mechanical loadings. These loading conditions may simulate the mechanical stresses endured during a low-impact injury. Nearly 30% of cerebral aneurysms occur in the ACoA, thus this artery was chosen for modeling in the current study. The
ACoA is part of the circle of Willis, which is a dual blood supply that contains all the major vessels feeding the cerebrum. The circle of Willis receives all the blood that is pumped up the two internal carotid arteries, which come up the front of the neck, and that is pumped from the basilar artery, formed by the union of the two vertebral arteries that come up the back of the neck. All the principal arteries that supply the cerebral hemispheres of the brain branch off from the circle of Willis. Autopsies have shown that 1 to 5 percent of the population possess cerebral aneurysms; however, these do not rupture in about 50 to 80 percent of cases. This means that many people with aneurysms are unaware of their risk for rupture and possibly a fatal subarachnoid hemorrhage. Studying the response to mechanical loading of healthy versus aneurysmal arteries can help us understand why some people are more susceptible to traumatic head injury. It may also help doctors to better assess the risk of rupture in people who have cerebral aneurysms. This could lead to preventative interventions. Two cases have been considered in this study: one a normal healthy ACoA and the other a model of an aneurysmal ACoA. The two models were analyzed under the same dynamic loading conditions. A summary of the results has been outlined in the previous section. In order to focus on the effect of the aneurysm, the strain fields of the arterial walls of the two models were compared. Figures 6 and 7 represent the average stress and the total strain with respect to the normalized length of the artery for the two cases, respectively.
Fig. 6 Comparison of the stress field in normal arteries and those with aneurysm

As shown in Figure 6, the maximum stress within the aneurysm is nearly 100% greater than in the normal healthy artery. Similarly, Figure 7 shows that the maximum strain in the region of the aneurysm is nearly 50% greater than in the normal artery. These results support the conclusion that low-impact external loading is more likely to damage a cerebral artery with an aneurysm compared to a normal healthy artery. This supports the hypothesis that people with cerebral aneurysms are more likely to experience head injury for any given initial load.

The results of the average stress in the brain media show areas of large stress in the middle of the cerebrum for the aneurysm case. This is interesting because the aneurysmal artery has influenced the arterial walls as well as the surrounding media. In other words, an aneurysm could lead to injuries in the brain tissue itself, such as diffuse axonal injury (DAI).

Fig. 7 Comparison of the strain field in normal arteries and those with aneurysm

Further studies are needed to support or refute our conclusions. Multiple parameters are likely to play significant roles in governing the severity of head injury. Detailed studies of these parameters could advance the state of the art in understanding the mechanism of injury and thereby minimize the catastrophic impact of injuries.

REFERENCES
1. Brisman J, Song J, Newell D (2006) Cerebral Aneurysms. N Engl J Med 355:928-39
2. Eriksson T, Kroon M, Holzapfel G (2009) Influence of Medial Collagen Organization and Axial In Situ Stretch on Saccular Cerebral Aneurysm Growth. J Biomech Eng 131:101010-1 – 101010-7
3. Watton P et al (2009) Coupling the Hemodynamic Environment to the Evolution of Cerebral Aneurysms: Computational Framework and Numerical Examples. J Biomech Eng 131:101003-1 - 14
4. www.cdc.gov
5. Li D, Robertson A (2009) A Structural Multi-Mechanism Damage Model for Cerebral Arterial Tissue. J Biomech Eng 131:101013-1-8
6. Marcelo C et al (2006) Patient-Specific Computational Modeling of Cerebral Aneurysms. Acad Radiol 13:811-21
7. Meng H, Swartz D, Wang Z et al (2006) A Model System for Mapping Vascular Responses to Complex Hemodynamics at Arterial Bifurcations In Vivo. Neurosurgery 59(5):1094-1101
8. Meng H, Wang Z, Hoi Y et al (2007) Complex Hemodynamics at the Apex of an Arterial Bifurcation Induces Vascular Remodeling Resembling Cerebral Aneurysm Initiation. Stroke 38:1924-1931
9. Metaxa E, Tremmel M, Xiang J et al (2009) High Wall Shear Stress and Positive Wall Shear Stress Gradient Trigger the Initiation of Intracranial Aneurysm. SBD 2009, Lake Tahoe, CA
10. Saboori P, Sadegh A (2009) Effect of Mechanical Properties of SAS Trabeculae in Transferring Loads to the Brain. IMECE 2009
11. ABAQUS, Inc., Pawtucket, RI
Identification of Material Properties of Human Brain under Large Shear Deformation: Analytical versus Finite Element Approach
C.D. Untaroiu 1, Q. Zhang 1, A.M. Damon 1, J.R. Crandall 1, K. Darvish 2, G. Paskoff 3, and B.S. Shender 3
1 Center for Applied Biomechanics/Department of Mechanical & Aerospace Engineering, University of Virginia, Charlottesville, VA, USA
2 Department of Mechanical Engineering, Temple University, Philadelphia, PA, USA
3 Naval Air Warfare Center Aircraft Division, Patuxent River, MD, USA
Abstract— Brain injuries have severe consequences and can be life-threatening. Computational models of the brain with accurate geometries and material properties may help in the development of injury countermeasures. The mechanical properties of brain under various loadings have been reported in many studies in the literature over the past 60 years. Step-and-hold tests under simple loading conditions have often been used to characterize viscoelastic and nonlinear behavior of brain under high-rate deformation; however, the stress relaxation curves used for material identification of brain are typically obtained by neglecting the initial strain ramp and by assuming a uniform strain distribution in the sample. Using finite element simulations of human brain shear tests, this study shows that these simplifications may have a significant effect on the measured material properties. Models optimized using only the stress relaxation curve predict much lower stress during the strain ramp due to an inaccurate elastic function. In addition, material models optimized using analytical models, which assume a uniform strain distribution, underpredict peak forces in finite element simulations. Models optimized using finite element simulations show similar relaxation behavior as the optimized analytical model, but predict a stiffer elastic response (about 46%). Identification of brain material properties using finite element optimization techniques is recommended in future studies.
Keywords— Human Brain, Material properties, Linear Viscoelastic, Quasi-Linear Viscoelastic, Finite Element Method.
I. INTRODUCTION

Brain injuries have severe consequences and can be life-threatening. The continuous improvement of computational models of the brain and of optimization techniques may help in the development of injury countermeasures; however, valid brain numerical models require accurate material models under a variety of loading conditions. The mechanical properties of brain under various loadings have been reported in many studies in the literature over the past 60 years. Step-and-hold tests under simple loading are often used to characterize the viscoelastic and nonlinear behavior of brain under high-rate deformation [1]. However, the stress relaxation curves used for material
identification of brain are typically obtained under two assumptions. First, by neglecting the initial strain ramp, a perfect step loading is assumed. Second, a uniform strain distribution is assumed, and the parameters of a one-dimensional (1D) analytical material model are identified using optimization techniques. In an effort to better understand the mechanical response of the human brain, the effects of the aforementioned two assumptions on the material properties are quantitatively investigated in this study using a finite element (FE) approach, which considers the three-dimensional (3D) deformation of the sample.
II. MATERIALS AND METHODS

A. Testing

Test data were taken from simple shear tests of seven cylindrical human samples collected by Takhounts [1]. The material properties of the data were analyzed and the results are reported in this study. Fresh human brain samples (approximately 12 mm height and 18 mm diameter) of primarily white matter (more than 85% according to histological analysis of the samples) were obtained within 24 hours of death and were kept moist and refrigerated during the next 24 hours. The samples were attached to the plates of a custom-made testing device using methyl-2-cyanoacrylate adhesive (Super Glue Corporation, Rancho Cucamonga, CA) [1]. A programmable linear actuator attached to the lower plate was used to apply a linear displacement to the brain sample corresponding to 50% engineering shear strain in about 0.1 sec. Two force transducers (Sensotec Inc., Columbus, Ohio, Model 31/1435-03-04 and Model 31/1434-01-01) attached to the upper plate were used to record the shear and tensile forces during the testing.

B. Material Identification – Analytical Approach

It is well known that brain tissue exhibits time-dependent stress-strain behavior [1-2], so it is a viscoelastic material. An isotropic linear viscoelastic (LV) material is the simplest
constitutive formulation used to model the brain. According to this formulation, if the material is subjected to a perfect step loading:
ε(t) = ε0 H(t − τ) (1)
where H is the Heaviside step function, then the stress is given by:

σ(t) = ε0 G(t) (2)

where G(t) is the stress relaxation function, which is often approximated as a sum of exponentials:

G(t) = G∞ + Σ Gi e^(−βi t) (sum over i = 1 to 3) (3)
Applying the Boltzmann superposition principle, the stress time history has the following integral representation:

σ(t) = ∫0^t G(t − τ) [∂σ(ε)/∂τ] dτ (4)

Since the ramp time was about 0.1 sec and the total duration of the ramp-and-hold test was about 4.5 sec, three decay rates with different orders of magnitude were chosen:
βi = 10^i [sec⁻¹], i = 0, 1, 2 (5)

With the relaxation function (3), the stress at time t + Δt can be written as:

σ(t + Δt) = ∫0^(t+Δt) G(t + Δt − τ) [∂σ(ε)/∂τ] dτ
= ∫0^t G(t + Δt − τ) [∂σ(ε)/∂τ] dτ + ∫t^(t+Δt) G(t + Δt − τ) [∂σ(ε)/∂τ] dτ (6)
After calculation of both terms of Eq. 6, the stress at time t + Δt can be written as:

σ(t + Δt) = G∞ ε(t + Δt) + Σ e^(−βi Δt) ∫0^t Gi e^(−βi (t − τ)) [∂σ(ε)/∂τ] dτ + [(ε(t + Δt) − ε(t))/Δt] Σ (1 − e^(−βi Δt)) Gi/βi (7)

(sums over i = 1 to 3)
So, if the coefficients of the relaxation function (Gi, βi) are known, the stress at time t + Δt can be calculated based on the stress at time t and the strain at both time steps t and t + Δt. The sum of squared errors (SSE) between the numerically calculated stress and the stress calculated from the shear tests at about 200 equidistant points on the logarithmic time scale was considered as the objective function. While the values of βi were assumed (see Eq. 5), the shear coefficients Gi (4 optimization variables) were determined by minimizing the SSE using a quasi-Newton algorithm implemented in the Excel Solver package (Microsoft, Redmond, WA).

A quasi-linear viscoelastic (QLV) formulation, frequently applied to characterize soft tissues under large deformation, was also used to model the brain. This 1D mathematical model assumes that the relaxation function is split into two components: one a function of time and the other a function of strain. These components are multiplied [3] as:

G(t) = Gr(t) σe(t) (8)

where σe(ε) is the instantaneous elastic response, which was assumed to be an odd polynomial function independent of the loading direction:

σe(ε) = Σ C(2i+1) ε^(2i+1) (sum over i = 0 to 2) (9)

A discrete spectrum was assumed for the normalized relaxation function Gr(t):

Gr(t) = Gr,∞ + Σ Gr,(2i+1) e^(−βi t), with Gr,∞ + Σ Gr,(2i+1) = 1 (sums over i = 0 to 2) (10)

Then, according to the Boltzmann hereditary integral formulation, the stress σ(t) is described as:

σ(t) = ∫0^t Gr(t − τ) [∂σe(ε)/∂ε][∂ε/∂τ] dτ (11)

As in the case of the linear viscoelastic model, the decay rates of relaxation βi were assumed (Eq. 5), and the values of the reduced shear coefficients Gr,(2i+1) and the instantaneous response coefficients C(2i+1) (7 optimization variables) were obtained by minimizing the sum of squared errors (SSE) between the model and the experimental stress. The parameters of both the LV and QLV models were identified under two conditions: one which neglects the loading curve, assuming a perfect step shear loading [1], and another which considers the whole loading curve.

C. Material Identification – Finite Element Approach

One of the seven human brain tests (test 17) analyzed in this study was simulated numerically using LS-DYNA nonlinear FE software (LSTC, Livermore, CA). A parametric mesh of the cylindrical brain sample was developed in TrueGrid (XYZ Scientific Applications, Inc, Livermore, CA).

Fig. 1 The FE model of brain samples: a) undeformed state (brain sample and rigid plate), b) deformed state (50% eng. shear strain)
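A minimal Python sketch of the recursive stress update in Eq. 7 (an illustrative reconstruction with hypothetical coefficient values, not the authors' Excel Solver implementation):

```python
import numpy as np

def lv_stress(t, strain, g_inf, g, beta):
    """Recursive linear viscoelastic stress update (Eq. 7).

    Carries one hereditary integral per Prony term so each step costs O(1)."""
    sigma = np.zeros_like(strain)
    q = np.zeros(len(g))  # per-term hereditary integrals
    for n in range(1, len(t)):
        dt = t[n] - t[n - 1]
        rate = (strain[n] - strain[n - 1]) / dt
        decay = np.exp(-beta * dt)
        q = decay * q + rate * (1.0 - decay) * g / beta
        sigma[n] = g_inf * strain[n] + q.sum()
    return sigma

# Hypothetical coefficients (Pa) and decay rates (1/s), for illustration only
g_inf, g, beta = 300.0, np.array([400.0, 250.0, 150.0]), np.array([1.0, 10.0, 100.0])
t = np.linspace(0.0, 4.5, 4501)
strain = np.clip(t / 0.1, 0.0, 1.0) * 0.5   # 0.1 s ramp to 50% shear, then hold
print(lv_stress(t, strain, g_inf, g, beta)[-1])
```

In an identification loop, the SSE between this predicted stress and the measured stress would be minimized over (Gi), exactly as described for the Excel Solver workflow above.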
The linear viscoelastic material model optimized using the analytical model (1D) was assigned to the brain model. The bulk modulus of brain was considered to be similar to that of water with a value of 2.1 GPa [4]. While the model nodes of the downward face were fully constrained to the rigid plate, the model nodes of the upward face were constrained only in the z and y directions. Displacement in the x direction was prescribed based on the displacement time history measured during testing. The time histories of shear force were calculated as the sum of nodal forces of the downward face along the x-direction. The shear force time history predicted by the FE model using the analytical material model showed similar relaxation behavior but lower peak stresses during the loading ramp. Therefore, the parameters of the FE material model were optimized by multiplying the shear coefficients by a constant and minimizing the SSE between the shear force recorded in testing and the corresponding force calculated in FE simulations.
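The scaling optimization described above can be sketched as a one-dimensional minimization (illustrative only; in the actual workflow each objective evaluation was an LS-DYNA run, which is replaced here by a toy linear surrogate):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy surrogate for the FE solver: force assumed to scale linearly with the
# shear moduli. All curves below are hypothetical, not test data.
t = np.linspace(0.0, 4.5, 200)
base_force = 1.0 - np.exp(-t)        # unit-modulus response (assumed)
measured = 1.46 * base_force          # pretend experiment (~46% stiffer)

def sse(scale):
    return np.sum((scale * base_force - measured) ** 2)

result = minimize_scalar(sse, bounds=(0.5, 2.0), method="bounded")
print(result.x)  # ~1.46, i.e., the recovered stiffness multiplier
```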
III. RESULTS AND DISCUSSION
The time histories of the shear stress (unfiltered) showed little noise during loading and relaxation, but inherent oscillatory mechanical noise occurred at the beginning and end of the ramp phase (Fig. 2). These oscillations, caused by inertial effects, were eliminated in the analytical models by minimizing the SSE of the model fit. Both the LV and QLV constitutive models that were optimized considering the loading ramp fit the data well (Fig. 2); however, the QLV model showed a better fit than the LV model given the higher number of coefficients used in the optimization (7 parameters in QLV compared to 4 parameters in LV). As expected, the models optimized by neglecting the loading ramp were able to characterize the relaxation response but under-predicted the peak stresses during the loading phase (Fig. 2). The average shear instantaneous response of the LV model was stiffer than the QLV model at low strain levels, and the opposite behavior was true at higher strain levels (Fig. 3a). In addition, the QLV model showed a faster relaxation response than the LV model (Fig. 3b). While the differences between the relaxation curves are mainly caused by the different assumed shapes of the instantaneous elastic response (linear vs. 5th degree polynomials), collecting additional test data from human tissue samples would improve the robustness of this methodology. For example, it would be beneficial to include the time histories of tensile force and record longer hold periods to better approximate long time relaxation behavior.
Fig. 2 Comparison of test data (Test 17) with the analytical model optimized using the whole loading curve and the analytical model optimized using only the relaxation curve: a) linear viscoelastic (LV) model, and b) quasi-linear viscoelastic (QLV) model
Fig. 3 Comparison of LV and QLV models a) Instantaneous elastic response b) Reduced relaxation function The FE model of the brain sample (Test17) with the shear material properties from the analytical model predict
IFMBE Proceedings Vol. 32
Identification of Material Properties of Human Brain under Large Shear Deformation: Analytical versus Finite Element Approach
much lower forces than the experimental data (Fig 4). FE optimization of the linear viscoelastic model showed that it fits very well with the experimental data; however, the elastic coefficient obtained from FE optimization is around 46% percent stiffer than the analytical model.
451
The elements with strain close to the assumed analytical value (between 45% and 55% strain) are located in the center of specimen and represent only 41% of the total elements. Approximately 2% of the elements recorded shear strains above 55%. These results suggest that the shape of the specimen may have a significant influence on the shear coefficients values obtained using an analytical approach. Therefore, additional numerical studies are recommended in order to determine the dimensional characteristics of samples (or correction factors) which would reasonably satisfy the uniform strain distribution. Another alternative, especially for analyzing the tests already performed, would be to identify the model parameters using FE simulations [5].
IV. CONCLUSIONS Fig. 4
Comparison between the shear force predictions of the linear viscoelastic models: optimized analytical model (1D), FE model with the material parameters of the analytical model, and the optimized FE model
The inconsistencies between the results of the 1D and 3D models can be explained by the assumption of uniformly distributed shear strain within the sample used by the 1D model. This assumption is rejected by the results obtained in the FE simulation, as observed in Fig. 5. At the maximum displacement of the sample, the shear engineering strain shows a non-uniform distribution ranging from almost 0% to 75% strain within the sample. The lower force predicted by the analytical model can be explained by the high percentage of elements with shear strains at lower levels than assumed by the analytical approach. For example, at the maximum shear displacement (50% strain), 57% (876 elements) of the total number of elements (1420 elements) show shear strains under 45% (the analytical approach assumed 50% for all elements).
Fig. 5 The distribution of shear strain in the sample at maximum displacement: a) whole model (1420 elements), b) elements with shear strain less than 45% (876 elements), c) elements with shear strain between 45% and 55% (630 elements)
The material properties of the human brain under large shear deformation were investigated in this study. The models optimized using only the relaxation curve predict much lower stresses during loading due to an inaccurate elastic function. In addition, the material models optimized using an analytical model which assumes a uniform strain distribution, predict lower forces in finite element simulations. Finite element optimization appears to be a promising tool for the identification of brain material properties by considering the entire loading time histories and non-uniform strain distribution within the sample.
ACKNOWLEDGMENT

This research was funded under Naval Air Warfare Center Aircraft Division contract N00421-06-C-0048.
REFERENCES

[1] Takhounts E (1998) Experimental determination of constitutive equations for human and bovine brain tissue. Ph.D. thesis, UVA
[2] Darvish K et al. (2001) Nonlinear viscoelastic effects in oscillatory shear deformation of brain tissue. J Med Eng Phys 23:633-645
[3] Fung YC (1993) Biomechanics: Mechanical properties of living tissues. Springer, New York
[4] Zhang L et al. (2001) Comparison of brain responses between frontal and lateral impacts by FEM. Journal of Neurotrauma 18(1):21-30
[5] Untaroiu C et al. (2007) A design optimization approach of vehicle hood for pedestrian protection. Int J Crash 12(6):581-589

The address of the corresponding author:
Costin D. Untaroiu
Center for Applied Biomechanics
1011 Linden Ave.
Charlottesville, VA 22902, USA
[email protected]
Mechanisms of Traumatic Rupture of the Aorta: Recent Multi-scale Investigations

N.A. White1, C.S. Shah2, and W.N. Hardy1

1 Virginia Tech – Wake Forest University, Center for Injury Biomechanics, Blacksburg, USA
2 First Technology Safety Systems, Inc., CAE, Plymouth, USA
Abstract— Traumatic rupture of the aorta (TRA) occurs most often in high-speed automobile collisions. Although infrequent overall, TRA accounts for a disproportionately high percentage of crash-related fatalities. The etiology of TRA is reviewed along with novel experimental techniques for reproducing clinically relevant TRA in unembalmed cadaver tissue and whole cadavers. A multi-scale testing approach is used to study TRA, including biaxial tissue testing of aorta samples, longitudinal stretch testing of intact aorta specimens, in-situ testing of the aorta, and whole-body cadaver impact studies using a high-speed biplane x-ray system. It is shown that anterior, superior, or lateral distraction of the arch can generate a tear in the peri-isthmic region of the aorta. Axial elongation (longitudinal stretch) is fundamental to the initiation of TRA, with complete failure of the aorta in the peri-isthmic region beginning near strain on the order of 0.22. Additionally, deformation of the thorax is essential for TRA to occur. On the other hand, whole body acceleration and intraluminal pressure are not required to produce TRA. Pulmonary artery injury need not accompany TRA. While the ligamentum arteriosum may contribute to TRA, it is not required to produce injury. Atherosclerotic plaque is shown to increase the risk for TRA. Testing of perfused cadavers is used to elucidate potential mechanisms of TRA induced by automobile crashes. Three-dimensional motion of the aorta within the mediastinum and longitudinal strain in the peri-isthmic region are measured during frontal and lateral impacts using high-speed x-ray. Dorsocranial and left-side lateromedial deformation of the thorax can generate TRA in the cadaver. However, further investigation is needed to better understand these mechanisms. The use of finite element simulations has become a viable way of investigating the underlying mechanisms of TRA using real-world scenarios and has potential to aid the design of future cadaver studies involving TRA. Once better understood, these injuries can be mitigated through advances in automotive safety systems.

Keywords— TRA, aorta, mechanism, kinematics, cadaver.
I. INTRODUCTION

Over 20,000 cases of TRA, with an 88.6% fatality rate, were reported for motor vehicle crashes (MVCs) between 1995 and 2000, and were associated primarily with frontal and near-side impacts (McGwin et al., 2003). The rate of TRA in near-side motor vehicle crashes is double the rate seen in frontal crashes (Steps 2003). In 2008, Bertrand et al. found that 21.4% of all MVC fatalities were attributed to
TRA. Clinical TRA almost always occurs in the transverse direction, with tears occurring mainly in the intima and media of the aorta (Zehnder 1960, Strassmann 1947). In 94% of all TRA these tears are confined to the peri-isthmic region of the aorta (Katyal 1997). The ascending aorta extends superiorly and posteriorly from the left ventricle, then forms the aortic arch and continues inferiorly along the left side of the vertebral column as the descending aorta. The left subclavian, left common carotid, and brachiocephalic trunk branch from the arch of the aorta, and the intercostal arteries branch from the descending thoracic aorta. The region of the aortic arch neighboring the ligamentum arteriosum is referred to as the isthmus and is of particular importance in TRA. The peri-isthmic region is bounded by the insertion of the left subclavian artery cranially and the junction of the arch and descending aorta caudally. The three layers that compose the aortic wall are the intima, media and adventitia. The innermost layer, the intima, is a layer of endothelial cells. Thickening of this layer with age has been shown to affect the mechanical properties of the aorta (Clark & Glagov, 1979). The middle layer is referred to as the media and is composed of smooth muscle cells, elastic fibers, and collagen. The outer layer, which is composed mainly of collagen fibers and ground substance, is the adventitia.
II. EXPERIMENTAL STUDIES

While TRA has been studied for more than a century, little was known about its mechanism of injury. Several theories to explain TRA have been proposed, including downward traction (Letterer 1924), intravascular pressure (Klotz and Simpson 1932), deceleration (Zehnder 1960), "Water Hammer" (Lundevall 1964), and Voigt's "Shoveling" (Voigt and Wilfert 1969). Recently, a multi-scale experimental approach was implemented to examine the mechanisms of TRA, including biaxial tissue testing of aorta samples, longitudinal stretch testing of intact aorta specimens, in-situ testing of the aorta, and whole-body cadaver tests.

A. Tissue-Level Testing (Biaxial)

High-speed biaxial tissue properties of the aorta were first examined by Shah et al. (2005, 2006, 2007) using a
custom-designed biaxial testing device (Mason 2005). Through the use of a carriage system riding on linear shafts and miniature clamps, cruciate-shaped tissue samples were subjected to simultaneous equal stretch in four directions at roughly 1 and 5 m/s. A template was used to stamp the cruciate samples oriented 22.5 degrees from the longitudinal axis of the aorta in Shah et al. (2005) and collinearly with the longitudinal axis in Shah et al. (2006, 2007). The former method was used to compare results with those of Lanir et al. (1996). These results were later transformed to longitudinal and circumferential directions for comparison with Shah et al. (2006, 2007). Shah et al. (2006, 2007) used aortas from twelve unembalmed, frozen and then thawed cadavers. The tissue samples were harvested from the ascending, peri-isthmic, and descending aorta of three female and nine male cadavers, with an average age, stature and mass of 68 years, 172 cm and 82 kg, respectively. Ink was used to mark each specimen with an array of dots in a regular pattern. High-speed video was used to capture the displacement of the dots during each test, allowing strain time histories to be calculated. Two lasers were used to measure the change in thickness of each specimen during testing, with results confirming the incompressible nature of the tissue. This tissue incompressibility assumption was exercised for several tests where the laser data were unusable. Miniature load cells attached to each tissue clamp, along with accelerometers, were used to calculate inertia-compensated load-time histories. Overall average maximum principal strain rate, longitudinal Lagrangian failure stress, and failure strain were 84.97±48.07 s-1, 1.96±0.97 MPa and 0.244±0.100, respectively. Specimens failed in the transverse (circumferential) direction for each test (Figure 1a), with failure occurring first in the intima, and exhibited nonlinear, anisotropic mechanical properties with no apparent rate effect.

B. Component-Level Testing (Longitudinal Stretch)

To investigate the structural properties of the aorta, a series of component-level longitudinal stretch tests were conducted using seven intact specimens (Shah 2006, 2007). The unembalmed, intact aortas were excised from the root to the celiac trunk from five female and three male cadavers with an average age, stature and mass of 77 years, 164 cm and 69 kg, respectively. Each specimen was lined with dots on the external surface using ink and then clamped in a high-rate hydraulic loading device, constraining the aortic arch and descending aorta. The aortic arch clamp contained the attachment of the left subclavian artery, but not the ligamentum arteriosum. The non-perfused aorta was placed into tension until failure at a rate of 1 m/s. High-speed video was used to capture the displacement of the dots during each test. Longitudinal Lagrangian strain, maximum principal strain, and strain rate were calculated in the area of tear initiation. Failure load and engineering stress at failure were also reported. Locations of the tears were reported with respect to the ligamentum arteriosum and the left subclavian artery. All tears occurred in the transverse direction within the peri-isthmic region (Figure 1b). On average, complete transection of the aorta can occur at 92±7 N axial load, 0.221±0.069 axial strain, 0.75±0.14 MPa engineering stress and 11.8±4.6 s-1 maximum principal strain rate.
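The dot-tracking strain computation used in these tests can be reduced to a few lines. The sketch below uses invented marker coordinates and the Green-Lagrange definition E = (lambda^2 - 1)/2 for the stretch between adjacent dots; it illustrates the calculation only, not the authors' actual processing code.

```python
import numpy as np

def longitudinal_lagrangian_strain(x_ref, x_t):
    """Green-Lagrange strain of each segment between adjacent tracked dots.
    x_ref: (n,) reference axial dot positions; x_t: (frames, n) tracked
    positions. Returns (frames, n-1) segment strains."""
    l0 = np.diff(x_ref)             # reference segment lengths
    l = np.diff(x_t, axis=1)        # current segment lengths per frame
    stretch = l / l0
    return 0.5 * (stretch**2 - 1.0)

# Made-up example: 5 dots, uniform 22% stretch in the second frame.
x_ref = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # mm
x_t = np.vstack([x_ref, 1.22 * x_ref])
E = longitudinal_lagrangian_strain(x_ref, x_t)
print(E[1])   # ~0.244 per segment, near the reported failure strains
# Strain rate would follow as np.gradient(E, frame_times, axis=0).
```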
Fig. 1 (a) Aorta sample during transverse tear initiation. An array of 11 dots was tracked to calculate strain. (b) Complete transection of the aorta in the transverse direction within the peri-isthmic region

C. In-Situ Component-Level Testing

Hardy et al. (2006) performed a series of in-situ experiments on unembalmed cadavers, involving quasi-static controlled distortion of the heart and aortic arch until complete transection. Four cadavers, two female and two male, with average age, stature and mass of 59 years, 169 cm and 84 kg, were each used in four in-situ tests. The specimens were subjected to open-chest, quasi-static tests. The chest wall was removed and the heart and aorta carefully exposed. Nylon webbing was passed around the spine just dorsal to the aortic arch and through the back of the cadaver, where it was secured to the test fixture. A grid pattern was applied to the peri-isthmic region and part of the descending aorta to facilitate peak stretch estimation during the tests. The aorta of the first specimen was distracted manually without perfusion pressure in the anterior direction. The next three aortas were pressurized and pulled in tension via a ratcheting system fitted with a load cell. Nylon webbing was wrapped around the ascending aorta and ligamentum arteriosum of the second specimen and distracted anteriorly (Figure 2a). The third specimen was distracted laterally to the right with webbing wrapped around the ascending aorta, but not the ligamentum arteriosum. The fourth specimen was distracted superiorly with the webbing wrapped around the aortic arch. All four specimens failed transversely in the peri-isthmic region, close to the ligamentum arteriosum and the pleural attachment between the spine and aorta (Figure 2b).
Minor lacerations in the transverse direction were noted along the intima in the vicinity of the primary tear. The presence of atherosclerotic plaque increased both the number of these lacerations and their distance from the primary tear. On average, the distances from the primary tear to the ligamentum arteriosum and left subclavian artery were 15 and 29 mm, respectively. The peak webbing load and percent stretch averaged 148 N and 30%, respectively, illustrating that TRA can result from nominal levels of tension.
Fig. 2 (a) Manual anterior distraction of the ascending aorta. (b) Partial tear resulting from manual anterior distraction of the ascending aorta

D. Whole-Body Testing

In addition to the four quasi-static open-chest tests, Hardy et al. (2008) performed whole-body dynamic impact tests using high-speed x-ray. The peri-isthmic region and descending aorta were accessed through an axillary thoracotomy on the left side, between ribs 3 and 4. A series of 2-mm diameter lead spheres were fixed to the adventitia of the freed section of aorta at regular intervals using black cyanoacrylate gel (Figure 3).
Fig. 3 (a) Marker placement along the aorta of a whole-body specimen. (b) Single frame from the high-speed x-ray system displaying marker placement on the aorta

Webbing was wrapped around the spine at two levels to constrain the body, the arms were allowed to dangle, and the lower extremities were amputated at the hip. To position the mediastinal contents in a more anatomical position, the cadaver was inverted at an angle. The aorta was pressurized to approximate normal physiological conditions. Eight whole-body cadavers, four male and four female, with average age, stature and mass of 70 years, 175 cm and 65 kg, were subjected to shoveling, side impact, submarining or combined impact conditions (Figure 4). All aortic tears, except one minor tear, occurred within the peri-isthmic region (Figure 5a). These tears occurred primarily in the circumferential direction and in the vicinity of the lesser curvature of the aortic arch. It was common to see tears in areas of increased atherosclerotic plaque. Multiple, bilateral rib fractures occurred in every test, in addition to sternum fractures in the shoveling test and visceral damage to the abdominal organs in the submarining test. Average peak impact load, impact speed and intraluminal aortic pressure were 4.5 kN, 8.0 m/s and 67.5 kPa, respectively. The mediastinal motion of the aorta was determined from the high-speed x-ray motion tracking of the aorta targets (Figure 5b). The shoveling test produced dorsocranial mediastinal motion, moving the aorta posteriorly, superiorly and slightly left. The side impact tests produced anteromedial motion, moving the aorta anteriorly, laterally (right) and slightly superiorly (arm engaged) or slightly inferiorly (ribs engaged). The submarining test produced dorsocranial motion, moving the aorta superiorly, somewhat posteriorly and slightly laterally. The combined tests produced dorsocranial and medial motion, moving the aorta superiorly, posteriorly and laterally (left). Average longitudinal tensile strain time histories were calculated from marker displacements in terms of triads (triangular combinations) using LS-DYNA (Livermore Software Technology Corporation, CA). Tension was the primary mode of loading for the longitudinal response of all tests.
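The triad calculation can be illustrated compactly: three tracked markers define two edge vectors in the reference and deformed states, from which a local deformation gradient and Green strain tensor follow. The coordinates below are invented; the actual processing in the study was performed in LS-DYNA.

```python
import numpy as np

def triad_green_strain(ref_pts, cur_pts):
    """Green strain tensor for one marker triad. ref_pts, cur_pts: (3, 2)
    marker coordinates in a local plane. F maps the two reference edge
    vectors onto the current ones."""
    dX = np.column_stack([ref_pts[1] - ref_pts[0], ref_pts[2] - ref_pts[0]])
    dx = np.column_stack([cur_pts[1] - cur_pts[0], cur_pts[2] - cur_pts[0]])
    F = dx @ np.linalg.inv(dX)
    return 0.5 * (F.T @ F - np.eye(2))

# Invented triad stretched 10% along the first (longitudinal) axis:
ref = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
cur = ref * np.array([1.10, 1.0])
E = triad_green_strain(ref, cur)
print(E[0, 0])   # longitudinal Green strain = 0.5 * (1.1**2 - 1) = 0.105
```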
III. CONCLUSIONS

A multi-scale experimental approach has been implemented to study the mechanisms of TRA. Failure of the aorta will always occur in the transverse direction, with the intima failing before the media or adventitia. As a complete structure, the aorta fails within the peri-isthmic region at roughly 30-percent stretch. While the material properties of the aorta are characterized by a nonlinear stress-strain response, more research is needed to determine rate effects. Simple tension of the aorta in-situ can generate clinically relevant TRA. Straightening of the inferior arch of the aorta through anterior, superior or right lateral distraction may initiate a tear. While intraluminal pressure and whole-body acceleration are not required to produce TRA, thoracic deformation must occur. TRA can occur without injury to the pulmonary artery and without loading via the ligamentum arteriosum. However, an important aspect of TRA is the tethering of the descending thoracic aorta by the parietal pleura. TRA tends to occur within regions of plaque when atherosclerosis is present, at longitudinal tensile strains below established failure thresholds for the aorta. While a better understanding of TRA mechanisms has been acquired through a multi-scale experimental approach, further testing is required to fully understand this deadly phenomenon.
Fig. 4 Experimental set-up for the (a) shoveling, (b) arm side impact and (c) submarining high-speed x-ray tests
Fig. 5 (a) Aortic tear from frontal shoveling impact. (b) Motion of the aortic markers from a frontal shoveling impact from a sagittal view (scale in mm)
ACKNOWLEDGMENT

This work was conducted under the auspices of Wayne State University. The authors wish to thank the Bone and Joint Specialty Center of the Henry Ford Health System. The funding for this research has been provided [in part] by private parties, who have selected Dr. Kennerly Digges [and FHWA/NHTSA National Crash Analysis Center at The George Washington University] to be an independent solicitor of and funder for research in motor vehicle safety, and to be one of the peer reviewers for the research projects and reports. This research was supported in part by NHTSA through the Southern Consortium for Impact Biomechanics.
REFERENCES

1. Bertrand S, Cuny S, Petit et al. (2008) Traumatic rupture of the thoracic aorta in real-world motor vehicle crashes. Traffic Injury Prevention 9:153-161
2. Clark J, Glagov S (1979) Structural integration of the arterial wall. I. Relationships and attachments of medial smooth muscle cells in normally distended and hyperdistended aortas. Lab Invest 40:587-602
3. Hardy W, Mason M, Foster C et al. (2007) A study of the response of the human cadaver head to impact. Stapp Car Crash Journal 51:17-80
4. Hardy W, Schneider L, Rouhana S (2001) Abdominal impact response to rigid-bar, seatbelt, and airbag loading. Stapp Car Crash Journal 45:1-31
5. Hardy W, Shah C, Kopacz J et al. (2006) Study of potential mechanisms of traumatic rupture of the aorta using in situ experiments. Stapp Car Crash Journal 50:247-265
6. Katyal D, Mclellan B, Brenneman F et al. (1997) Lateral impact motor vehicle collisions: significant cause of blunt traumatic rupture of the thoracic aorta. Journal of Trauma 42(5):769-772
7. Klotz O, Simpson W (1932) Spontaneous rupture of the aorta. American Journal of Medical Science 184:455-473
8. Letterer E (1924) Beitrage zur Entstehung der Aortenruptur an typischer Stelle. Virch. Arch. Path. Anat. 253:534-544
9. Lundevall J (1964) The mechanism of traumatic rupture of the aorta. Acta Path. Microbiol. Scand. 62:34-46
10. Mason M, Shah C, Maddali M et al. (2005) A new device for high-speed biaxial tissue testing: application to traumatic rupture of the aorta. Transactions of the Society of Automotive Engineers, Paper No. 2005-01-0741
11. McGwin G, Metzger J, Moran S et al. (2003) Occupant- and collision-related risk factors for blunt thoracic aorta injury. J. Trauma 54:655-662
12. Shah C (2007) Investigation of traumatic rupture of the aorta (TRA) by obtaining aorta material and failure properties and simulating real-world aortic injury crashes using the whole-body finite element (FE) human model. PhD Dissertation, Mechanical Engineering, Wayne State University, Detroit, Michigan
13. Shah C, Hardy W, Mason M et al. (2006) Dynamic biaxial tissue properties of the human cadaver aorta. Stapp Car Crash Journal 50:217-245
14. Shah C, Mason M, Yang K et al. (2005) High-speed biaxial tissue properties of the human cadaver aorta. Proceedings of IMECE05, 2005 ASME International Mechanical Engineering Congress, IMECE2005-82085
15. Steps J (2003) Crash characteristics indicative of aortic injury in near side vehicle-to-vehicle crashes. Ph.D. Dissertation, The George Washington University
16. Strassmann G (1947) Traumatic rupture of the aorta. American Heart Journal 33:508-515
17. Voigt G, Wilfert K (1969) Mechanisms of injuries to unrestrained drivers in head-on collisions. Proc. 13th Stapp Car Crash Conference, pp. 295-313
18. Zehnder M (1960) Accident mechanism and accident mechanics of the aortic rupture in the closed thorax trauma. Thoraxchirurgie und Vasculaere Chirurgie 8:47-65
Author: Warren N. Hardy
Institute: Virginia Tech
Street: 443 ICTAS Bldg, Stanger St., MC 0194
City: Blacksburg
Country: USA
Email: [email protected]
Head Impact Response: Pressure Analysis Simulation

R.T. Cotton1, P.G. Young2, C.W. Pearce2, L. Beldie3, and B. Walker3

1 Technical Services, Simpleware, Exeter, UK
2 School of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
3 Vehicle Design, ARUP, Solihull, UK
Abstract— A new approach to generating physical and numerical models of the human head is presented. In this study, analytical, numerical and experimental models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. The aim of the study is to investigate whether it is possible to predict the response of the head for a particular impact scenario using image-based modeling techniques. A number of finite element models were generated based on MRI scan data. The models were generated using a technique adapted from the marching cubes approach which automates the generation of meshes based on 3D scan data, and allows for a number of different structures (e.g. skull, scalp, brain) to be meshed simultaneously. The resultant mesh was used to explore the intra-cranial response to impact. Previously developed approximate analytical expressions were also used to provide additional comparison results. Good agreement was observed between these modeling techniques, and large transient pressure amplification at the site of impact was observed for impacts of low duration. In this paper, the analytical and numerical models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. The presented research demonstrates the potential of the approach for the generation of head impact models based on in vivo clinical scans. Beyond its significance in the area of head impact biomechanics, the study has demonstrated that numerical models generated from 3D medical data can be used effectively to simulate physical processes. This is particularly useful when considering the risks, difficulties and ethical issues involved when using cadavers. Keywords— image-based meshing, patient-specific modeling, finite element, pressure response, head impact.
I. INTRODUCTION

Although a wide range of mesh generation techniques is currently available, these have, on the whole, not been developed with meshing from segmented 3D imaging data in mind. Meshing from 3D imaging data presents a number of challenges but also unique opportunities for presenting a more realistic and accurate geometrical description of the computational domain. The majority of approaches adopted
have involved generating a surface model (either in a discretized or continuous format) from the scan data, which is then exported to a commercial mesher – a process which is time consuming, not very robust and virtually intractable for the complex topologies typical of image data. A more ‘direct approach’ presented in this paper is to combine the geometric detection and mesh creation stages in one process which offers a more robust and accurate result than meshing from surface data.
II. MESH GENERATION FROM BIOMEDICAL IMAGING DATA: CAD VERSUS IMAGE-BASED MESHING
Meshing from image data presents a number of challenges but also unique opportunities so that a conceptually different approach can provide, in many instances, better results than traditional approaches. Image-based mesh generation raises a number of issues which are different from CAD-based model generation. CAD-based approaches use the scan data to define the surface of the domain and then create elements within this defined boundary [1]. Although reasonably robust algorithms are now available [2], these techniques do not easily allow for more than one domain to be meshed, as multiple surfaces are often non-conforming with gaps or overlaps at interfaces where one or more structures meet. A more direct approach developed by the authors combines the geometric detection and mesh creation stages in one process. The technique generates 3D hexahedral or tetrahedral elements throughout the volume of the domain [3], thus creating a robust and accurate mesh directly with conforming multipart surfaces. This technique has been implemented as a set of computer codes (ScanIP, +ScanFE and +ScanCAD).
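As a toy illustration of this direct approach (not Simpleware's actual algorithm, which additionally smooths surfaces and adapts element density), the sketch below converts a segmented label volume into eight-node hexahedra whose nodes are shared between neighboring voxels, so interfaces between parts conform by construction.

```python
import numpy as np

def voxels_to_hex_mesh(labels):
    """Every non-zero voxel of a segmented volume becomes one 8-node brick;
    node ids live on the voxel-corner lattice, so adjacent elements share
    nodes and multi-part interfaces are conforming automatically."""
    nz, ny, nx = labels.shape
    node_id = np.arange((nz + 1) * (ny + 1) * (nx + 1)).reshape(nz + 1, ny + 1, nx + 1)
    elems, parts = [], []
    for k, j, i in zip(*np.nonzero(labels)):
        n = node_id[k:k + 2, j:j + 2, i:i + 2]
        elems.append([n[0,0,0], n[0,0,1], n[0,1,1], n[0,1,0],
                      n[1,0,0], n[1,0,1], n[1,1,1], n[1,1,0]])
        parts.append(labels[k, j, i])        # part/material id from the label
    return np.array(elems), np.array(parts)

# Two adjacent voxels with different labels (say, skull and brain):
labels = np.zeros((1, 1, 2), dtype=int)
labels[0, 0, 0], labels[0, 0, 1] = 1, 2
elems, parts = voxels_to_hex_mesh(labels)
print(len(set(elems[0]) & set(elems[1])))    # -> 4 nodes shared at the interface
```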
III. PRESSURE RESPONSE ANALYSIS IN HEAD INJURY

In this study, analytical, numerical and experimental models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. The aim of the study is to investigate whether it is possible
to predict the response of the head for a particular impact scenario using these image-based modeling techniques.
A. Methods and Model Generation

High resolution T1-weighted whole head MRI scans of a young male were obtained in vivo. Using ScanIP software (Simpleware Ltd.), 15 different structures were segmented: brain (gray matter, white matter, brain stem, cerebellum), CSF, skull, mandible, cervical vertebrae, intervertebral discs, eye (eye-ball, optic nerve, fatty tissue), nasal passage, and skin (cf. Fig. 1).
Fig. 1 Segmented head model in a) ScanIP (Simpleware) and b) LS-DYNA® (LSTC)

A number of finite element models were generated in +ScanFE (Simpleware Ltd.) based on the segmented image data. The resultant mesh was exported to LS-DYNA® (LSTC - Livermore Software Technology Corp.). The various components are connected by coincident nodes and elements, and the exterior surface of the skin was used to define a contact surface. An impactor was introduced in LS-DYNA® with a velocity of 7 m/s, a mass of 6.8 kg, and an event duration of 15 ms. The brain region was modeled as a viscoelastic material, the CSF as an elastic fluid, and all other structures as elastic materials.
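As a quick sanity check on the loading severity, the stated impactor parameters imply a nominal kinetic energy of

```latex
E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(6.8~\mathrm{kg})\,(7~\mathrm{m/s})^2 \approx 166.6~\mathrm{J}
```

delivered over the 15 ms event.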
B. Results and Discussion

The resulting models are geometrically very accurate, and were used to explore the intra-cranial response to impact. Previously developed approximate analytical expressions were also used to provide additional comparison results [4]. The finite element models generated were solved using LS-DYNA®. At early stages after contact, a high pressure transient is observed under the site of impact, followed by a negative pressure transient and then a high positive pressure transient, as shown in Fig. 2.

Fig. 2 Von Mises stress at the brain in LS-DYNA® (LSTC)

Good agreement was observed between these modeling techniques, and large transient pressure amplification at the site of impact was observed for impacts of low duration. In this paper, the analytical and numerical models were used in parallel to explore the pressure response of the human head as a result of low velocity impact. Von Mises stresses in the intervertebral discs and cervical vertebrae were also investigated (cf. Fig. 3).
Fig. 3 Von Mises stress a) at the discs and b) in the vertebrae in LS-DYNA® (LSTC)

In addition, a model of the head wearing a helmet was generated (cf. Fig. 4). For the head and helmet model, +ScanCAD (Simpleware Ltd.) was used to import STEP data of the helmet and interactively position it on the head. The resultant mesh was again exported to LS-DYNA®, including a contact surface on the outside of the helmet. The results show the influence of the presence of a helmet in reducing the pressure transient.
Fig. 4 Head and helmet model in +ScanCAD (Simpleware)

IV. CONCLUSIONS

The ability to automatically convert any 3D image dataset into high quality meshes is becoming the new modus operandi for anatomical analysis. Techniques have been developed for the automatic generation of volumetric meshes from 3D image data, including image datasets of complex structures composed of two or more distinct domains and including complex interfacial mechanics. The techniques guarantee the generation of robust, low distortion meshes from 3D data sets for use in finite element analysis (FEA), computer aided design (CAD) and rapid prototyping (RP). Additional tools enable the incorporation of CAD models interactively within the image. The presented research demonstrates the potential of the approach for the generation of head impact models based on in vivo clinical scans. Beyond its significance in the area of head impact biomechanics, the study has demonstrated that numerical models generated from 3D medical data can be used effectively to simulate physical processes. This is particularly useful when considering the risks, difficulties and ethical issues involved when using cadavers. It has been shown how integrating CAD data into the image data can be used to investigate different helmet designs with a realistic head model.

REFERENCES
[1] Cebral J, Loehner R (2001) From medical images to anatomically accurate finite element grids. Int. J. Num. Methods Eng. 51:985-1008
[2] Antiga L, Ene-Iordache B et al. (2002) Geometric reconstruction for computational mesh generation of arterial bifurcations from CT angiography. Computerized Medical Imaging and Graphics 26:227-235
[3] Young P, Beresford-West T et al. (2008) An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A 366:3155-3173
[4] Johnson EAC, Young PG (2005) On the use of a patient-specific rapid-prototyped model to simulate the response of the human head to impact and comparison with analytical and finite element models. J Biomech 38:39-45
Author: Ross Cotton
Institute: Simpleware Ltd.
Street: Rennes Drive
City: Exeter
Country: United Kingdom
Email: [email protected]
An Introduction to the Next Generation of Radiology in the Web 2.0 World

A. Moein1, M. Malekmohammadi2, and K. Youssefi3

1 Department of Biomedical Engineering, Azad University, Science and Research Branch, Tehran, Iran
2 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
3 Mississippi State University, MS, USA
Abstract— "Web 2.0" refers to a second generation of web development and design that facilitates communication, secure information sharing, interoperability, and collaboration on the World Wide Web. Web 2.0 is a difficult term to define, even for web experts; phrases like "the web as platform" and "architecture of participation" are usually used to describe it. Examples of Web 2.0 include web-based communities, hosted services, web applications, social-networking sites, video-sharing sites, wikis, blogs, mashups and folksonomies. The Internet is changing medicine, and Web 2.0 is the current buzzword in the World Wide Web dictionary. Radiology in the Web 2.0 era refers to things like globalization, clinical decision support software, social networking sites dedicated to radiology, and radiology-centric blogs and wikis. Concepts like PACS, DICOM, RIS, teleradiology, Web-based PACS, HL7, IHE and HIPAA likewise reflect the impact of Web 2.0 on radiology, often known as Radiology 2.0. In this paper we give an overview of recent developments in radiology in the Web 2.0 world and also present our point of view on radiology electronic learning in the future.

Keywords— Web 2.0, Radiology e-Learning, Picture Archiving and Communication System (PACS), Digital Imaging and Communication in Medicine (DICOM), Teleradiology, Web-based PACS, Medical Imaging Informatics.
I. INTRODUCTION

Web 2.0 generally refers to a set of social, architectural, and design patterns resulting in the mass migration of business to the Internet as a platform. These patterns focus on the interaction models between communities, people, computers, and software. Human interactions are an important aspect of software architecture and, even more specifically, of the set of websites and web-based applications built around a core set of design patterns that blend the human experience with technology [1]. Web 2.0 is about the spirit of sharing, which is in contrast to the traditional concept of "knowledge is power". Knowledge in the world of Web 2.0 is about sharing and is nobody's property. The term we personally use for Web 2.0 and medicine is the democratization of knowledge. As of today, Web 2.0 is an important repository of medical knowledge
editable in real-time by physicians. It stands in contrast to the static delivery of content over the traditional internet, hence the term Web 2.0 [2].
II. WEB 2.0 CONCEPTS

A. Characteristics

Web 2.0 websites allow users to do more than just retrieve information. They can build on the interactive facilities of "Web 1.0" to provide "network as platform" computing, allowing users to run software applications entirely through a browser [3]. Users can own the data on a Web 2.0 site and exercise control over that data [3,4]. The characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards and scalability. Further characteristics, such as openness, freedom [5] and collective intelligence [3] by way of user participation, can also be viewed as essential attributes of Web 2.0.

B. Technology Overview

Web 2.0 draws together the capabilities of client-side and server-side software, content syndication and the use of network protocols. Standards-oriented web browsers may use plug-ins and software extensions to handle the content and the user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment now known as "Web 1.0" [6].

C. Usage

The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to coin a flurry of 2.0s [7], including Library 2.0 [8], Enterprise 2.0, e-Learning 2.0, Publishing 2.0, Medicine 2.0, Travel 2.0 and even Government 2.0 [9]. Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas [6].
III. WEB 2.0 AND E-LEARNING

The shift to Web 2.0 has its counterparts in both e-learning technology and methodology. e-Learning 2.0 focuses strongly on the collaborative nature of learning, marking a transition from the traditional view of e-learning as a technologically driven way to transfer pre-existing knowledge to recipients. One of the core methodologies behind e-Learning 2.0 is connectivism, concentrating on making connections (i.e. links) among learning resources and people. e-Learning 2.0 also brings a strong focus on content syndication and its reuse/re-purposing, adaptation, and personalization [10]. The term "Web 2.0" is used to describe applications that distinguish themselves from previous generations of software by a number of principles. Previous studies showed that Web 2.0 applications can be successfully exploited for technology-enhanced learning. However, in-depth analyses of the relationship between Web 2.0 technologies on the one hand and teaching and learning on the other hand are still rare. Web 2.0 is not only well suited for learning but also for research on learning [11].
IV. WEB 2.0 AND MEDICINE

While it may be too early to come up with an absolute definition of Medicine 2.0 or Health 2.0, Figure 1 shows a suggested framework, created in the context of a call for papers for the purpose of scoping the Medicine 2.0 congress and this theme issue [13]. The program of the first Medicine 2.0 conference [14] also gives a good idea of what academics feel is relevant to the field [12].
Fig. 1 Medicine 2.0 Map (with some current exemplary applications and services)
According to the model depicted in Figure 1, five major aspects (ideas, themes) emerge from Web 2.0 in health, health care, medicine, and science, which will outlive the specific tools and services offered. These emerging and recurring themes are (as displayed in the center of Figure 1):

• Social Networking
• Participation
• Apomediation
• Collaboration
• Openness
While "Web 2.0", "Medicine 2.0", and "Health 2.0" are terms that should probably be avoided in academic discourse, any discussion and evaluations concerning the impact and effectiveness of Web 2.0 technologies should be framed around these themes [12]. Figure 1 also depicts the three main user groups of current Medicine 2.0 applications as a triangle: consumers/patients, health professionals, and biomedical researchers. While each of these user groups has received a different level of "formal" training, even end users (consumers, patients) can be seen as experts and—according to the Web 2.0 philosophy—their collective wisdom can and should be harnessed: "the health professional is an expert in identifying disease, while the patient is an expert in experiencing it" [15]. Current Medicine 2.0 applications can be situated somewhere in this triangle space, usually at one of the corners of the triangle, depending on which user group they are primarily targeting. However, the ideal Medicine 2.0 application would actually try to connect different user groups and foster collaboration between them (for example, engaging the public in the biomedical research process), and thus move more towards the center of the triangle. Putting it all together, the original definition of Medicine 2.0—as originally proposed in the context of soliciting submissions for the theme issue and the conference—was as follows [13]: Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies and/or semantic web and virtual-reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups [12]. There is however also a broader idea behind Medicine 2.0 or "second generation medicine": the notion that healthcare systems need to move away from hospital-based medicine, focus on promoting health, provide healthcare in people's own homes, and empower consumers to take responsibility for their own health—much in line with what
others and I have previously written about the field of consumer health informatics [16] (of which many Medicine 2.0 applications are prime examples). Thus, in this broader sense, Medicine 2.0 also stands for a new, better health system, which emphasizes collaboration, participation, apomediation, and openness, as opposed to the traditional, hierarchical, closed structures within health care and medicine [12].
V. WEB 2.0 AND RADIOLOGY

One of the more profound changes to radiology and healthcare will come from online collaboration or Web 2.0 tools. The new logic of peer-to-peer sharing takes the role of radiology consulting to another level of peer contact and collaboration. What this means for radiology is the possibility of a convergence of teleradiology and online collaboration which will further leverage expert relationships across medical science. Collective knowledge on this scale could help to develop greatly enhanced peer review and quality standards, provide a platform for checking real-time drug interactions, monitoring patient reactions and new interventional techniques and services, as well as provide a knowledge base of support between residents and attending physicians [17]. Specific to radiology and Web 2.0, there are many social networking and community, wiki and search engine websites, such as:

• Webicina: Practicing Medicine in the Web 2.0 Era [18]
• MyPACS: A web-based teaching file authoring tool for radiologists and related professionals that allows easy uploading of images and descriptive information from any computer with web access [19]
• AuntMinnie: Provides the first comprehensive community Internet site for radiologists and related professionals in the medical imaging industry [20]
• DiagnosticImaging: Daily news, announcements and conference reports [21]
• radRounds: Connecting radiology, enhancing collaboration, education and networking [22]
• Radiopaedia: The online collaborative radiology resource and encyclopedia [23]
• Radiolopolis: The international radiology network and professional radiology community [24]
• RadsWiki: More than 3000 articles focusing on numerous sub-fields of radiology [25]
• Yottalook: Radiology references, teaching files and peer-reviewed images [26]
• RadiologySearch: A special search engine dedicated to finding radiological content [27]
The growth of online medical information and online physician-to-physician collaboration and social networking will put pressure on the future shape of radiology services. The developing picture to keep in mind is: DICOM and high-speed networks continuing to improve the speed and delivery of complex radiology studies; teleradiology evolving into globalized full-service radiology; and online physician and patient social networking broadening clinical collaboration for doctors and access to medical knowledge for consumers [17]. One of the common Web 2.0 architecture patterns is "structured information": the advent of XML and the ability to apply customized tagging to specific elements has led to the rise of syntaxes commonly referred to as microformats. These are small formats with highly specialized abilities to mark up precise information within documents. The use of such formats, in conjunction with the rise of XHTML, lets Internet users address content at a much more granular level than ordinary HTML. The XHTML Friends Network (XFN) format is a good example of this pattern [1]. The term "structured reporting" in radiology means different things to different people, and the DICOM Structured Report (SR) is widely used as a standard mechanism for presenting, capturing, transmitting and exchanging information in diagnostic medical imaging. Known methods and systems for presenting a DICOM Structured Report include, for example, using a DICOM SR viewer, available on the Advantage Windows (AW) review workstation, which enables exporting the report to formats such as HTML, XML, plain text and PDF to facilitate generating a hard copy of the reports being presented [28,30].
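As a small programmatic illustration (a sketch using the open-source pydicom library, not the AW viewer mentioned above), the snippet below walks the content tree of a DICOM SR; a traversal like this is the first step of any export to HTML, XML or plain text.

```python
import pydicom

def walk_sr_content(ds, depth=0):
    """Recursively print the content items of a DICOM Structured Report.
    Each item carries a coded concept name and a value whose attribute
    depends on its ValueType (TEXT, NUM, CODE, CONTAINER, ...)."""
    for item in ds.get("ContentSequence", []):
        vtype = getattr(item, "ValueType", "?")
        name = (item.ConceptNameCodeSequence[0].CodeMeaning
                if "ConceptNameCodeSequence" in item else "?")
        text = getattr(item, "TextValue", "")    # present on TEXT items only
        print("  " * depth + f"{vtype}: {name} {text}")
        walk_sr_content(item, depth + 1)         # CONTAINER items nest further

# Usage (file name illustrative):
# ds = pydicom.dcmread("report_sr.dcm")
# walk_sr_content(ds)
```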
VI. RADIOLOGY AND E-LEARNING 2.0

The appeal of online education and distance learning as an educational alternative is ever increasing. To support and accommodate the over-specialized knowledge available from different experts, information technology can be employed to develop virtual distributed pools of autonomous specialized educational modules and provide the mechanisms for retrieving and sharing them [29]. We present and evaluate a new learning environment model based on Web 2.0 applications. We assume that the technological change introduced by Web 2.0 tools has also caused a cultural change in terms of dealing with types of communication, knowledge and learning. The goal is the design and development of a web-based e-Learning 2.0 application to assist medical imaging informatics and other healthcare professionals, based on DICOM Working Group 18 (WG-18), which extends the DICOM Standard with respect to clinical trials information and the storage of
images for educational and research purposes, and to identify attributes necessary for use in clinical trials (e.g., client, clinical protocol, site number) and technique-related attributes [31]. This overview introduces the concepts of e-Learning 2.0 and Personal Learning Environments, along with their main aspects of autonomy, creativity and networking, and relates them to the didactics of constructivism and connectivism. The requirements and basic functional components for the development of our particular Web 2.0 learning environment are derived from these. As a result, we have an advanced PACS-based imaging informatics e-Learning 2.0 module that assists users in improving their skills by working with the system. The key point of this research, design and development is the implementation of a system with all applicable features of the Web 2.0 world.
VII. CONCLUSIONS

There is no doubt that modern computer technology and the Internet create an incredible ability to find the proverbial needle in a haystack, in medicine as in other endeavors. The broader possibilities represented by this capability are encapsulated by the concept of Web 2.0, a vast, distributed network of individuals who openly share information and technology. Whereas the initial phase of the internet included many static pages created by individuals or private interests, Web 2.0 represents an interactive, collaborative, constantly evolving network of information reflecting communication among many different people. The benefits derive from open access and sharing of information. An ecological and a Web 2.0 perspective of e-learning provides new ways of thinking about how people learn with technology and also how new learning opportunities are offered by new technology. These perspectives highlight the importance of developing connections between a wide variety of learning resources, containing both codified and tacit knowledge. New adaptive technology has the potential to create personalized, yet collective, learning. The future implications for e-learning in medical education are considered.
REFERENCES

1. Governor J, Hinchcliffe D, Nickull D (2009) Web 2.0 Architectures. O'Reilly Books
2. Sethi SK (2008) Web 2.0 and Radiology. The Internet Journal of Radiology, Vol 8 Num 2
3. O'Reilly T (2005) What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software
4. Hinchcliffe D (2006) The State of Web 2.0
5. Greenemeier L, Gaudin S (2007) Amid the Rush to Web 2.0, Some Words of Warning. InformationWeek
6. Web 2.0 at Wikipedia, the Free Encyclopedia. Available at: http://en.wikipedia.org/wiki/Web_2.0
7. Schick S (2005) I Second that Emotion. IT Business, Canada
8. Miller P (2008) Library 2.0: The Challenge of Disruptive Innovation
9. Eggers WD (2005) Government 2.0: Using Technology to Improve Education, Cut Red Tape, Reduce Gridlock, and Enhance Democracy. Rowman & Littlefield Publishers
10. Drasil P, Pitner T. e-Learning 2.0: Methodology, Technology and Solutions
11. Ullrich C, Borau K, Luo H, Tan X, Shen L, Shen R (2008) Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes. Proceedings of the 17th International World Wide Web Conference, pp 705-714. ISBN 978-1-60558-085-2
12. Eysenbach G (2008) Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness. J Med Internet Res 10(3):e22. DOI:10.2196/jmir.1030. Available at: http://www.jmir.org/2008/3/e22/
13. Eysenbach G (2008) Medicine 2.0 Congress Website Launched (and: Definition of Medicine 2.0 / Health 2.0). Available at: http://gunthereysenbach.blogspot.com/2008/03/medicine-20-congress-website-launched.html
14. Eysenbach G (2008) Medicine 2.0 Final Program, Medicine 2.0 Congress. Available at: http://www.medicine20congress.com/ocs/schedule.php
15. Davison KP, Pennebaker JW (1997) Virtual narratives: illness representations in online support groups. In: Petrie KJ, Weinman JA (Eds.), Perceptions of Health and Illness: Current Research and Applications. Harwood Academic Publishers, pp 463-486
16. Eysenbach G (2000) Consumer Health Informatics. BMJ 320(7251):1713-1716
17. Sappington RW (2008) Leading Radiology Services in the Age of Teleradiology, Wikinomics, and Online Medical Information. Radiology Management J:34-42
18. Webicina at http://www.webicina.com
19. MyPACS at http://www.mypacs.net
20. AuntMinnie at http://www.auntminnie.com
21. DiagnosticImaging at http://www.diagnosticimaging.com
22. radRounds at http://www.radrounds.com
23. Radiopaedia at http://radiopaedia.org
24. Radiolopolis at http://www.radiolopolis.com
25. RadsWiki at http://www.radswiki.net
26. Yottalook at http://www.yottalook.com
27. RadiologySearch at http://www.radiologysearch.net
28. Clunie DA (2000) DICOM Structured Reporting. PixelMed Publishing, Bangor, Pennsylvania. Library of Congress Card Number: 00191700, ISBN 0-9701369-0-0
29. Bamidis PD, Konstantinidis S, Papadelis CL, Perantoni E, Styliadis C, Kourtidou-Papadeli C, Pappas C (2008) An e-learning platform for Aerospace Medicine. Hippokratia 12(Suppl 1):15-22
30. Moein A, Youssefi K (2009) A Novel Method to Study DICOM Tags and Definitions for Structured Report and Image Analysis Purposes. 25th Southern Biomedical Engineering Conference, IFMBE Proceedings, Miami, Florida, USA, 15-17 May 2009, pp 73-74
31. American College of Radiology, National Electrical Manufacturers Association (ACR-NEMA), "DICOM official website" at http://medical.nema.org/

Author: Ali Moein
Institute: Azad University, Science and Research Branch
City: Tehran
Country: Iran
Email: [email protected]
Novel Detection Method for Monitoring of Dental Caries Using Single Digital Subtraction Radiography

J.H. Park1,2, Y.S. Choi3, G.J. Lee1,2, S. Choi1,2, K.S. Kim1,2, D.H. Park1,2, I. Cho1,2, and H.K. Park1,2,*

1 Department of Biomedical Engineering, School of Medicine, Kyung Hee University, Seoul, Korea
2 Healthcare Industry Research Institute, Kyung Hee University, Seoul, Korea
3 Department of Oral and Maxillofacial Radiology, Institute of Oral Biology, School of Dentistry, Kyung Hee University, Seoul, Korea

Abstract— This study suggests a novel detection method for monitoring of dental caries based on pixel gray values in digital subtraction radiography images obtained from single dental images of patients with dental caries. The advantage of single digital subtraction radiography (SDSR) is knowing the status of current teeth with caries without requiring a second digital radiograph. Digital subtraction is currently used in radiographic studies of periapical lesions or other dental disorders that have been treated and whose progress must be evaluated over time. SDSR is a novel detection method for caries that detects dental mass changes from only one dental radiograph. Subjects were chosen from among patients who were diagnosed with dental caries using an intraoral X-ray system; this study marks the points of emphasis in hidden dental caries in dental X-ray images from 11 subjects. For each caries lesion that was diagnosed, a mean pixel value was obtained from SDSR using a scale ranging from 0 to 255 gray values. The image mean variable of the tooth was 71.99 (± 25.64) and 3.25 (± 0.85) (P < 0.0001) for caries and healthy tissue, respectively. SDSR was found to be a novel detection method that uses single dental images of patients to mark the points of emphasis in hidden dental caries.

Keywords— SDSR, dental caries, intraoral X-ray.
I. INTRODUCTION

Radiologic images are two-dimensional representations of three-dimensional reality; hence, the images of different anatomical structures are superimposed on each other, which makes it difficult to detect lesions [1,2]. The protective outer surface of the anatomic crown is made up of enamel. Dental caries is the disease process of decay in which acid formed from carbohydrate, aided by Streptococcus mutans bacteria, attacks the tooth surface [3]. Digital subtraction radiography (DSR) is a method that can resolve these deficiencies and increase diagnostic accuracy [4]. The subtraction method was introduced by B.G. Zeides des Plantes in the 1920s. Subtraction imaging is performed to suppress background features, reduce background complexity, compress the dynamic range, and amplify small differences by superimposing scenes obtained at different times [5]. Subtraction radiography was introduced to dentistry in the 1980s [4]. It is used to compare standardized radiographs
taken at sequential examination visits. All unchanged structures are subtracted and these areas are displayed in a neutral gray shade in the subtraction image; regions that have changed are displayed in darker or lighter shades of gray [6]. For radiographic dentinal lesions, the fraction of surfaces with cavitation has been reported to range between 50 and 90% [7]. Recurrent caries is more accurately detected with subtraction techniques. The dynamic nature of caries remineralization/demineralization could also be explored with reliable digital subtraction techniques [8]. This digital subtraction method, although commonly used in clinical dental research, has yet to be applied in clinical caries diagnosis by general practitioners because of the difficulty of image registration. Hence, the purpose of this study was to develop a novel detection method for proximal caries, based on pixel gray values in digital subtraction radiography images from a single dental image of a patient, for use in monitoring dental caries.
II. METHODOLOGY

A. Tooth Images Selected

Study subjects were chosen from among the patients who were diagnosed as having proximal dental caries using the intraoral X-ray system at the Dental Medical Center, Kyung Hee University. The digital radiographs were acquired using a Heliodent DS intraoral X-ray system (Sirona Dental Systems GmbH, Bensheim, Germany) and storage phosphor plates from a Kodak RVG 6100 system. The digital image receptors were 1140 × 1920 pixels (dimensions of active area: 27 × 36 mm) with true image resolution, 256 gray levels, and were capable of providing more than 20 lp/mm of spatial resolution. Each image was taken using the system setup with a 12-inch cone operating at 60 kVp, 7 mA, and 0.32 s.

B. Proposed Novel Method of Image Subtraction

Digital subtraction radiography is a technique that allows us to determine quantitative changes in radiographs.
The premise is quite simple. A radiographic image is generated before a particular treatment is performed. At some time after the treatment, another image is generated. The resultant image shows only the changes that have occurred and subtracts those components of the image that are unchanged. The magnitude of the changes can then be measured by evaluating the histogram (a graphic depiction of the distribution of gray levels) of the resultant image. Direct digital imaging has been a great help in the quest to take the technique of digital subtraction radiography out of the laboratory setting and actually use it clinically. Fig. 1 shows the flowchart of the novel method for proximal caries detection from a single dental image of a patient by digital subtraction radiography. The X-ray dental image is first subjected to image preprocessing. This preprocessing is used to reduce the background noise, which comes from the lookup table, and also to prepare the image for further processing such as image subtraction.
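The subtraction arithmetic itself is a one-liner; the sketch below shows the classic DSR step, with differences offset to a neutral gray of 128. Since the exact single-image preprocessing is not spelled out here, a heavily smoothed copy of the radiograph is used purely as an illustrative stand-in for the reference image.

```python
import numpy as np
from scipy import ndimage

def subtraction_image(img, ref):
    """Classic DSR arithmetic: unchanged structures map to neutral gray (128);
    regions that changed show up darker or lighter."""
    diff = img.astype(np.int16) - ref.astype(np.int16)
    return np.clip(diff + 128, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(90, 110, (64, 64)).astype(np.uint8)    # synthetic radiograph
img[20:30, 20:30] -= 40                                    # darker 'lesion' patch
ref = ndimage.uniform_filter(img.astype(float), size=15)   # stand-in reference
sub = subtraction_image(img, ref.astype(np.uint8))

dev = np.abs(sub.astype(int) - 128)                        # departure from neutral gray
print(dev[20:30, 20:30].mean(), dev[:15, :15].mean())      # lesion vs. sound region
```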
Fig. 2 Perceptual image of hidden dental caries using SDSR. (a) Original image and (b) perceptual reverse image of the hidden dental caries using SDSR

Fig. 2(b) is a reversed image of the caries-detected image, showing the dental mass changes detected by SDSR from the original image in Fig. 2(a). In this study, the advantage of SDSR is knowing the status of current teeth with caries without requiring a second digital radiograph.
Fig. 3 Result for the image mean variable of the carious and healthy tissue of the same tooth using SDSR
Fig. 1 Novel detection method for monitoring of dental caries from a patient’s single dental image by SDSR
III. RESULTS AND DISCUSSIONS Fig. 2 shows the result images of novel method for proximal caries detection from single dental image of patient according to the novel detection method of dental caries. The X-ray dental image of Fig. 2(a) was first subjected to image preprocessing. Fig. 2(a) shows that the carious area was not visibly clear about the state of caries.
To evaluate the contrast in detecting dental caries as a function of the histogram, a measure of the relative difference between carious and healthy tissue was defined. Figure 3 shows the results for the image mean variable of carious and healthy tissue from the same tooth (N=11) using SDSR. The image mean variable of the tooth was 71.99 (± 25.64) for caries and 3.25 (± 0.85) for healthy tissue (P < 0.0001). SDSR was found to be a novel detection method that uses a patient's single dental image to mark the points of emphasis in hidden dental caries.
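As a sketch of the statistical comparison, the following uses synthetic per-tooth values generated to resemble the reported summary statistics (not the study's actual measurements); a paired test is appropriate because the carious and healthy regions come from the same tooth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic image mean gray values per tooth (N = 11), drawn to resemble
# the reported summaries; illustrative only, not the study data.
carious = rng.normal(71.99, 25.64, size=11)
healthy = rng.normal(3.25, 0.85, size=11)

# Paired test: both regions are measured on the same tooth.
t_stat, p_value = stats.ttest_rel(carious, healthy)
print(f"carious: {carious.mean():.2f} ± {carious.std(ddof=1):.2f}")
print(f"healthy: {healthy.mean():.2f} ± {healthy.std(ddof=1):.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.1e}")
```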
IV. CONCLUSIONS
Pixel gray value measurements in subtraction radiography images constitute a suitable complementary method for
monitoring outcomes of remineralization. This digital image subtraction method, although commonly used in clinical dental research, has not yet routinely been applied in clinical caries diagnosis by general practitioners, mainly because of the difficulty of image registration, i.e., aligning the second radiograph with the first. Hence, the aim of this study was to design a novel detection method for proximal caries, based on pixel gray values in digital subtraction radiography images derived from a patient's single dental image, for use in monitoring dental caries. In SDSR, the image is used to mark the points of emphasis in hidden dental caries, hence the novel monitoring method of dental caries in this study. The image mean gray value of the dental image showed a statistically significant difference between carious and sound tissue (p < 0.0001). It has been demonstrated that SDSR is a new detection method for dental caries using a patient's single dental image.
REFERENCES
1. Matteson SR, Deahl ST (1996) Advanced imaging methods. Crit Rev Oral Biol Med 7:346-395
2. Christgau M, Hiller KA (1998) Quantitative digital subtraction radiography for the determination of small changes in bone thickness. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 85:462-472
3. Woelfel JB (1990) Dental Anatomy: Its Relevance to Dentistry, 4th Ed. Lea & Febiger, Malvern, PA
4. Bragger U (1988) Digital imaging in periodontal radiography. A review. J Clin Periodontol 15:551-557
5. Woo BMS, Zee K-Y (2003) In vitro calibration and validation of a digital subtraction radiography system using scanned images. J Clin Periodontol 30:114-118
6. Reddy MS, Wang IC (1999) Radiographic determinants of implant performance. Adv Dent Res 13:136-145
7. Ratledge DK, Kidd EAM, Beighton D (2001) A clinical and microbiological study of approximal carious lesions. Part 1: The relationship between cavitation, radiographic lesion depth, the site specific gingival index and the level of infection of the dentine. Caries Res 35:3-7
8. Bader J, Shugars D (1993) Need for change in standards of caries diagnosis: epidemiology and health services research perspective. J Dent Educ 57:415-421
ACKNOWLEDGMENT This research was supported by the research fund from Seoul R&BD (Grant # CR070054).
The corresponding author:
Author: Hun-Kuk Park
Institute: Kyung Hee University
Street: 1 Hoeki-dong, Dongdaemun-gu
City: Seoul
Country: Korea
Email: [email protected]
Targeted Delivery of Molecular Probes for in vivo Electron Paramagnetic Resonance Imaging
S.R. Burks1,2, E.D. Barth3, S.S. Martin2,4, G.M. Rosen1,5, H.J. Halpern3, and J.P.Y. Kao1,2
1 Center for Biomedical Engineering and Technology, University of Maryland, and Medical Biotechnology Center, University of Maryland Biotechnology Institute, and Center for EPR Imaging In Vivo Physiology, University of Maryland, Baltimore, USA
2 Department of Physiology, University of Maryland, Baltimore, USA
3 Department of Radiation Oncology and Center for EPR Imaging In Vivo Physiology, University of Chicago, Chicago, USA
4 Marlene and Stewart Greenebaum Cancer Center, University of Maryland, Baltimore, USA
5 Department of Pharmaceutical Sciences, University of Maryland, Baltimore, USA
Abstract— With recent advances in electron paramagnetic resonance imaging (EPRI), in vivo visualization of a physiologically distinct tissue (e.g., a tumor) has become a real possibility. EPRI could be a powerful imaging modality to detect metastatic lesions and report tissue-specific physiological information. Approximately 25–30% of breast tumors overexpress the Human Epidermal Growth Factor Receptor 2 (HER2). HER2-overexpressing breast tumors are proliferative, metastatic, and have poor clinical prognoses. We have developed a novel mechanism for selective in vivo delivery of “spin probes” (molecular probes for EPRI) to Hc7 cells, which are MCF7 breast cancer cells engineered to overexpress HER2. Spin probes can be encapsulated in anti-HER2 immunoliposomes at high concentration (>100 mM). At such concentrations, the spectroscopic signal of spin probes is severely “quenched”, a process analogous to the self-quenching of fluorophores. This makes the intact immunoliposomes spectroscopically “dark” and thus invisible by EPRI. Tumor-specific contrast is generated after selective endocytosis of anti-HER2 immunoliposomes. Intracellular degradation of endocytosed liposomes liberates the spin probes from the liposomal lumen. Once de-quenched by dilution into the much larger cellular volume, the spin probes regain their spectral signal and make the cells visible by EPRI. Through uptake of immunoliposomes, Hc7 cells can achieve an intracellular spin probe concentration of ~750 μM. Using imaging phantoms, we verify that this concentration of spin probes is easily imageable by EPRI. We have optimized immunoliposomes for in vivo applications by increasing their persistence in circulation to maximize tumor targeting. Through near-infrared fluorescence imaging of tumor-bearing animals, we demonstrate that optimized anti-HER2 immunoliposomes selectively target Hc7 tumors in vivo, enabling high-contrast imaging with minimal background. This work lays the foundation for imaging Hc7 tumors with EPRI. Keywords— Electron paramagnetic resonance, Breast cancer, HER2, Liposomes, Nitroxides, Imaging.
I. INTRODUCTION
Very-low-frequency electron paramagnetic resonance imaging (EPRI) is an attractive emerging modality for
imaging metastatic breast tumor lesions. EPRI can detect and image paramagnetic species in vivo and in real time [1]. Endogenous paramagnetic molecules are too scarce to be detected by EPRI; therefore, exogenous “spin probes” such as nitroxides must be used to label features of interest. EPRI using nitroxides offers the advantage of being a magnetic resonance imaging modality capable of reporting cellular physiology; therefore, in addition to localizing a tumor, nitroxide probes can also report on its physiological status. We previously synthesized nitroxides that are well-retained by cells and thus exhibit long-lived intracellular signals that can be imaged by EPRI [2,3]. We have also shown that nitroxides, like fluorophores, can be encapsulated in liposomes at high concentrations (>100 mM) and show concentration-dependent signal quenching. Thus, intact liposomes containing quenched probes have attenuated spectral signals and are spectroscopically "dark". After endocytosis by cells, however, lysis of the liposomes liberates and dilutes the encapsulated probes into the cell; the resulting dequenching of the probe signal renders the cell visible [4]. Encapsulation of probes at high concentration minimizes background signal from unendocytosed liposomes and creates a cell-activated contrast-generating mechanism. By itself, however, liposomal delivery is limited by the inability to deliver probe molecules selectively to a particular cell type. As a tool for delivering imaging agents to a physiologically distinct tissue such as a breast tumor, liposomes must be targetable; i.e., they must incorporate features that enable selective uptake in a tissue of interest, but not in other, indifferent, tissues. Liposomal surfaces can be readily decorated with moieties that target them to a specific tissue. For example, immunoliposomes, bearing surface-conjugated antibody fragments, can target distinct antigens. Specifically, immunoliposomes targeted against the human epidermal growth factor receptor 2 (HER2) have been used to enhance delivery of chemotherapeutics to HER2-expressing tumors [5]. We have previously demonstrated that Hc7
cells, which are MCF7 breast tumor cells engineered to overexpress HER2, selectively endocytose immunoliposomes containing quenched fluorescein, which leads to bright intracellular fluorescence in vitro. MCF7 cells, which express only a low, physiological level of HER2, do not accumulate significant fluorescence [6]. Analogously, we have shown that immunoliposomes encapsulating quenched concentrations of nitroxide can deliver ~750 µM nitroxide intracellularly to Hc7 cells upon endocytosis, while contributing minimal background signal. Using immunoliposomes as delivery vehicles in vivo requires additional considerations. Liposomes are rapidly cleared from the circulation by the reticulo-endothelial system (RES). Incorporating into the liposomes a small proportion of lipid conjugated to poly(ethyleneglycol) (PEG) retards clearance by the RES [7]. The longer circulation times of such sterically-stabilized, “PEGylated” liposomes enhance their targeting potential in vivo. We demonstrate here that sterically-stabilized liposomes are more persistent in circulation than classical liposomes, which lack PEG. We also show that anti-HER2 immunoliposomes encapsulating quenched Indocyanine Green (ICG) generate high-contrast fluorescence images of Hc7 tumors in vivo. Through the use of EPRI tissue phantom models, we demonstrate the feasibility of using this targeting approach with EPRI.
II. MATERIALS AND METHODS
A. General Materials and Methods
Dipotassium (2,2,5,5-tetramethylpyrrolidin-1-oxyl-3-ylmethyl)amine-N,N-diacetate was synthesized as described previously [2]. ICG was from Sigma (St. Louis, MO). Lipids were from Avanti Polar Lipids (Alabaster, AL); cell culture media and biochemicals were from Invitrogen (Grand Island, NY). Mice were from Harlan (Indianapolis, IN). Herceptin was a gift from Dr. Katherine Tkaczuk (University of Maryland, Baltimore). Data analyses and presentation were performed with Origin 8.0 (OriginLab, Northampton, MA), Living Image 3.0 (Caliper Life Sciences, Hopkinton, MA), and Matlab 2010a (The Mathworks, Natick, MA). Hc7 cells (gift from Angela M. Brodie, University of Maryland, Baltimore) were maintained at 37°C under 5% CO2, in Dulbecco’s Modified Eagle Medium (DMEM) supplemented with 10% (v/v) fetal bovine serum (FBS), 2 mM L-glutamine, Pen/Strep (100 U/mL penicillin, 100 µg/mL streptomycin) and 500 µg/mL hygromycin B. Anti-HER2 immunoliposomes were prepared as previously described [7], and comprised 1,2-distearoylphosphatidylcholine (DSPC), cholesterol (Chol), ammonium
1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[poly(ethyleneglycol)2000] (PEG-PE), and ammonium 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[maleimide-poly(ethyleneglycol)2000] (PE-PEG-maleimide) in the molar ratio 3 DSPC : 2 Chol : 0.1 PEG-PE : 0.06 PE-PEG-maleimide.
B. Animals and Hc7 Tumor Inoculation
Female NIH Swiss or NOD-SCID mice (5–6 weeks of age) were used for experimentation. SCID mice were previously ovariectomized by the vendor. At least 48 hr prior to tumor inoculation, estrogen pellets (2.5 mg / 60-day release, Innovative Research of America, Sarasota, FL) were implanted in the SCID mice. Hc7 cells (2 x 10^7 suspended in 0.1 mL DPBS) were subcutaneously injected into the legs of SCID mice. Tumors were allowed to grow for ~3 weeks prior to imaging.
C. Clearance of Liposomally-Encapsulated Nitroxides from Circulation
For pharmacokinetic measurements, NIH Swiss mice received intravenous injections of liposomes with or without PEG-PE at a dose of 3.75 µmoles of encapsulated nitroxide/kg body weight. At various times, blood was drawn from the mouse, diluted into 1 mL of deionized H2O (resistivity, 18.3 MΩ·cm), and subjected to 3 freeze/thaw cycles. Samples were measured for nitroxide content by EPR spectroscopy and for Na+ content using a Na+-selective electrode (model no. 8411BN, Fisher Scientific, Hampton, NH). EPR measurements were normalized to plasma Na+ content. EPR spectroscopy was performed on an X-band EPR spectrometer (model E-109, Varian Inc., Palo Alto, CA).
D. Imaging
For in vivo fluorescence imaging, SCID mice bearing Hc7 tumors were injected with anti-HER2 immunoliposomes encapsulating 1 mM ICG (2.5 µmol ICG/kg body weight). At 3 hr post-injection, ICG fluorescence in the mice was imaged (IVIS 200 optical imager, Caliper Life Sciences, Hopkinton, MA) at the following settings: acquisition time, 4 s; f-stop, 1; and medium binning (8×8 pixels). For EPR imaging, an agarose cylinder (4% w/v) was prepared as described previously [4]. It measured 6 mm in diameter and 10.2 mm in length, and was impregnated with 400 µM nitroxide. The cylinder was sealed in polyvinylsiloxane dental impression material (GC Dental Products, Kasugai, Japan) and fixed in the cavity (19 mm diameter) of an EPR imaging spectrometer. Continuous-wave EPR image data acquisition, reconstruction, and analysis were performed according to established procedures [4].
III. RESULTS
A. Clearance of Nitroxide-Containing Liposomes
To assess the improvement in circulatory retention of sterically-stabilized liposomes, mice (n = 3) were given classical liposomes lacking PEG-PE or sterically-stabilized liposomes containing 5 mole–% PEG-PE; both types of liposomes encapsulated 150 mM nitroxide. At various times, blood was drawn from the mice. Because liposomes in the blood contained quenched nitroxides, samples were subjected to repeated freeze-thaw cycles to lyse the liposomes and dequench the nitroxide spectral signal. The nitroxide signal in each sample was assayed by EPR spectroscopy. Clearance of classical liposomes is best fit by a first-order exponential decay with a time constant of t1/e = 6.9 ± 5.45 hr (equivalently, a half-life of t1/2 = 4.1 ± 3.27 hr), implying that classical liposomes would be entirely eliminated from circulation by ~20 hr. Sterically-stabilized liposomes, however, persisted much longer (t1/e = 17.5 ± 2.87 hr, t1/2 = 10.5 ± 1.54 hr). Incorporating PEG-PE into the liposomal formulation extends circulation times by ~2.5-fold, so that even after 50 hr, ~10% of the original nitroxide signal remains in the circulation.
B. In Vivo Fluorescence Imaging of Hc7 Tumors
To determine the ability of anti-HER2 immunoliposomes to target and generate contrast in Hc7 tumors in vivo, Hc7 tumor-bearing mice were given intravenous injections of ICG-containing immunoliposomes and imaged for ICG fluorescence 3 hr post-injection. A representative fluorescence image is shown in Fig. 1. In the tumor loci (indicated by red arrows), dequenching of the ICG resulted in intense tumor-associated fluorescence with minimal background signal arising from the surrounding tissue. After imaging, the mouse was euthanized and the spleen, liver, kidneys, and tumors were dissected and imaged for ICG fluorescence ex vivo (data not shown). As expected, organs associated with clearance of liposomes and ICG (i.e., spleen, liver, and kidneys) also accumulated imageable fluorescence signals.
C. EPR Imaging of a Nitroxide-Containing Agarose Cylinder
Having demonstrated that immunoliposomes are highly selective for Hc7 cells in vivo and that they can deliver ~750 µM nitroxide intracellularly to Hc7 cells in vitro [6], we investigated whether this concentration would be sufficient for EPRI of Hc7 tumors. An agarose cylinder (6 mm diameter, 10.2 mm length) was impregnated with 400 µM nitroxide and imaged by EPRI. A cross-sectional view of the reconstructed image is shown in Fig. 2. The geometry of the phantom is faithfully reproduced in the image, the signal-to-noise ratio (SNR) of the image is 109, and the resolution of the image is 2.5 ± 0.19 mm. Therefore, should Hc7 tumors accumulate concentrations of nitroxides similar to those Hc7 cells achieve in vitro, they should be easily imaged by EPRI.
Fig. 1 In vivo fluorescence imaging of Hc7 tumors. SCID mouse bearing two Hc7 tumors (red arrows) imaged 3 hr post-injection with anti-HER2 immunoliposomes encapsulating 1 mM ICG. Tumors are imaged with an SNR of 180
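The first-order clearance analysis in the Results can be reproduced with a standard nonlinear least-squares routine; the sketch below uses illustrative data points, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexponential(t, s0, tau):
    """First-order clearance: signal(t) = s0 * exp(-t / tau)."""
    return s0 * np.exp(-t / tau)

# Hypothetical blood-draw times (hr) and normalized EPR signal;
# illustrative values only, not the study's measurements.
t = np.array([0.5, 1, 2, 4, 8, 16, 24, 48])
signal = np.array([0.95, 0.90, 0.85, 0.76, 0.62, 0.40, 0.25, 0.06])

(s0, tau), cov = curve_fit(monoexponential, t, signal, p0=(1.0, 10.0))
t_half = tau * np.log(2)  # convert the 1/e time constant to a half-life
print(f"t_1/e = {tau:.1f} hr, t_1/2 = {t_half:.1f} hr")
```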
IV. DISCUSSION
We have demonstrated here that immunoliposomes can be engineered to persist in circulation, thereby maximizing their tumor-targeting potential. Long-lived immunoliposomes are highly selective for Hc7 tumors in vivo and generate sufficient fluorescence signals for high-contrast optical imaging of tumors in vivo. We have demonstrated through EPR imaging phantom models that, should Hc7 cells accumulate concentrations of nitroxides in vivo similar to those previously reported in vitro, they should be routinely imageable by EPRI.
There are factors that could limit the feasibility of this approach using EPRI. Liposomes access the tumor volume through the vasculature. While a previous study of anti-HER2 immunoliposomes in a xenograft model showed uniform micro-distribution within tumors [8], the macro-distribution may be inhomogeneous owing to differential vascularization throughout the tumor volume. This would result in liposomes accessing only a fraction of the tumor volume, and despite the very high intracellular concentrations that are potentially achievable with immunoliposomal delivery, the total nitroxide that is delivered to the tumor may still be modest. This motivates additional improvements to SNR.
SNR in EPRI can be improved by delivering more nitroxide molecules to cells, and by optimizing the spectroscopic properties of the nitroxides themselves. Liposomes of 100-nm outer diameter are near-optimal in vivo. Larger-diameter liposomes exhibit increased circulatory clearance and reduced extravasation, both of which offset the advantage of the larger luminal volume. However, nitroxides can be improved through rational design. First, nitroxides that are zwitterionic at physiologic pH are highly water-soluble but require no counter-ions, which increase the osmolarity of the encapsulated solution but contribute no imageable signal. At physiological pH, zwitterionic nitroxides can be encapsulated at 300 mM, twice the concentration of the mono-anionic nitroxide used in this study. Second, deuterium- and 15N-substituted nitroxides have narrower spectral peaks and correspondingly larger peak amplitudes. Preliminary studies indicate that isotopic substitution increases the EPR peak amplitude by close to 10-fold (data not shown). The combination of these two improvements implies a 20-fold increase in the measurable nitroxide signal that could be generated in the tumor. Such a signal enhancement would greatly increase the feasibility of visualizing Hc7 tumors in vivo by EPRI. EPRI is an emergent imaging modality that could offer sensitive detection of HER2-overexpressing tumors, as well as useful insight into their physiology. The demonstration that it is possible to use anti-HER2 immunoliposomes to deliver imageable concentrations of imaging probes to HER2-overexpressing tumors selectively, combined with our current efforts aimed at optimizing nitroxide molecular structure for EPRI, bodes well for high-contrast EPRI of HER2-overexpressing tumors.

Fig. 2 EPR image of nitroxide-containing phantom. Cross-sectional view of an agarose cylinder (4%, 6 mm diameter) containing 400 µM nitroxide. Cylinder is imaged with an SNR of 109; image resolution is 2.5 ± 0.19 mm. Axis labels are in cm

V. CONCLUSIONS
The circulating lifetime of anti-HER2 immunoliposomes is extended by surface modification with PEG. Sterically-stabilized immunoliposomes encapsulating quenched ICG are highly selective for Hc7 tumors in vivo and are capable of generating robust fluorescence in Hc7 tumors with minimal background in circulation. Tissue phantom models containing micromolar concentrations of nitroxide are easily visualized by EPRI, further suggesting that if Hc7 tumors accumulate similar concentrations through immunoliposome targeting and delivery, they too should be imageable by EPRI.
ACKNOWLEDGMENT This work was supported by National Institutes of Health Grants GM-56481 (JPYK), P41-EB-2034 (GMR and HJH), CA-98575 (HJH), and CA124704-03 (SSM).
REFERENCES
[1] Halpern HJ, Spencer DP, Vanpolen J et al. (1989) Imaging radiofrequency electron-spin-resonance spectrometer with high resolution and sensitivity for in vivo measurements. Rev Sci Instr 60:1040-1050
[2] Rosen GM, Burks SR, Kohr MJ, et al. (2005) Synthesis and biological testing of aminoxyls designed for long-term retention by living cells. Org Biomol Chem 3:645-648
[3] Kao JP, Barth ED, Burks SR, et al. (2007) Very-low-frequency electron paramagnetic resonance (EPR) imaging of nitroxide-loaded cells. Magn Reson Med 58:850-854
[4] Burks SR, Barth ED, Halpern HJ, et al. (2009) Cellular uptake of electron paramagnetic resonance imaging probes through endocytosis of liposomes. Biochim Biophys Acta 1788:2301-2308
[5] Park JW, Kirpotin DB, Hong K, et al. (2001) Tumor targeting using anti-HER2 immunoliposomes. J Control Release 74:95-113
[6] Burks SR, Macedo LF, Barth ED, et al. (2010) Anti-HER2 immunoliposomes for selective delivery of electron paramagnetic resonance imaging probes to HER2-overexpressing breast tumor cells. Breast Cancer Res Treat, in press
[7] Woodle MC, Lasic DD (1992) Sterically stabilized liposomes. Biochim Biophys Acta 1113:171-199
[8] Kirpotin DB, Drummond DC, Shao Y et al. (2006) Antibody targeting of long-circulating lipidic nanoparticles does not increase tumor localization but does increase internalization in animal models. Cancer Res 66:6732-6740
New Tools for Image-Based Mesh Generation of 3D Imaging Data
P.G. Young1, D. Raymont1, V. Bui Xuan2, and R.T. Cotton2
1 School of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK
2 Software Development & Technical Services, Simpleware, Exeter, UK
Abstract— There has been increasing interest in the generation of models for computational modeling from imaging modalities such as MRI and CT. Although a wide range of mesh generation techniques are available, these on the whole have not been developed for meshing from 3D imaging data. The paper will discuss new tools specific to image-based meshing, and their interface with commercial FEA and CFD packages. Automated mesh generation techniques can generate models with millions of nodes. Reducing the size of the model can have a dramatic impact on computation time and on memory and CPU (Central Processing Unit) requirements. A proprietary technique allowing the setting of different density zones throughout the model was developed, reducing the number of elements required to capture a given geometry while increasing the density around areas of greater interest. Micro-architectures can be generated to conform to existing domains. To control the mechanical properties of the structure, a re-iso-surfacing technique combined with a bisection algorithm is shown to allow density variations and porosity control. The concept of a relative density map, to represent the desired relative densities in the micro-architecture, is introduced. Both 2D and 3D examples of functionally and arbitrarily graded structures are given. Finally, a new homogenization algorithm has been implemented, using the meshing techniques described above together with parallel processing strategies to compute orthotropic mechanical properties from higher-resolution scans, enhancing the value of micro-level information and enabling it to be used for macro models on desktop computers. The ability to automatically convert any 3D image dataset into high quality meshes is becoming the new modus operandi for anatomical analysis. New tools for image-based modeling have been demonstrated, improving the ease of generating meshes for computational mechanics and opening up areas of research that would not be possible otherwise.
Keywords— image-based meshing, image processing, mesh generation, finite element analysis.

I. INTRODUCTION
There has been increasing interest in the generation of models appropriate for computational modeling from imaging modalities such as MRI and CT. Novel methods of generating the required finite element and finite volume meshes directly and robustly from the image data have been proposed in recent years; however, a range of issues related to image processing of the data still need to be addressed. The paper discusses issues specific to image-based meshing, focusing on techniques for image-based mesh generation, and also discusses the interface with commercial FEA and CFD packages (e.g. ANSYS, Fluent, LS-DYNA). A number of examples covering different applications within and outside the field of computational biomechanics will be presented.

II. CAD-BASED VERSUS IMAGE-BASED MESHING
‘CAD-based approaches’ use the scan data to define the surface of the domain and then create elements within this defined boundary [1]. These techniques do not easily allow for more than one domain to be meshed, as the multiple surfaces generated are often non-conforming, with gaps or overlaps at interfaces where two or more structures meet (cf. Fig. 1). The ‘image-based approach’ presented by the authors is more direct, as it combines the geometric detection and mesh creation stages in one process. The technique generates 3D hexahedral or tetrahedral elements throughout the volume of the domain [2], thus creating the mesh directly with conforming multipart surfaces (cf. Fig. 1). This technique has been implemented as a set of computer codes (ScanIP, +ScanFE and +ScanCAD).
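To make the contrast with CAD-based approaches concrete, the sketch below shows the most elementary form of image-based meshing: one hexahedral element per foreground voxel, with nodes shared between neighbors. It is a deliberate simplification for illustration, not the ScanIP/+ScanFE implementation, which additionally produces smoothed, conforming multipart surfaces:

```python
import numpy as np

def voxels_to_hex_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary 3D image mask into a hexahedral mesh.

    One hex element per foreground voxel, with nodes shared between
    neighboring voxels, so the mesh boundary follows the voxel grid.
    """
    nodes = {}           # grid corner (i, j, k) -> node index
    node_coords = []
    elements = []

    def node_id(corner):
        if corner not in nodes:
            nodes[corner] = len(node_coords)
            node_coords.append([c * s for c, s in zip(corner, spacing)])
        return nodes[corner]

    for i, j, k in zip(*np.nonzero(mask)):
        # Eight corners of the voxel, ordered as a standard hex element.
        corners = [(i, j, k), (i+1, j, k), (i+1, j+1, k), (i, j+1, k),
                   (i, j, k+1), (i+1, j, k+1), (i+1, j+1, k+1), (i, j+1, k+1)]
        elements.append([node_id(c) for c in corners])

    return np.array(node_coords), np.array(elements)

# Example: mesh a small spherical inclusion on a 0.5 mm isotropic grid.
z, y, x = np.ogrid[:20, :20, :20]
sphere = (x - 10)**2 + (y - 10)**2 + (z - 10)**2 < 8**2
coords, elems = voxels_to_hex_mesh(sphere, spacing=(0.5, 0.5, 0.5))
print(coords.shape, elems.shape)
```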
Fig. 1 Original segmentation (left), non-conforming (centre) and conforming multipart surface reconstruction (right)

A. Robustness and Accuracy
Modeling complex topologies with possibly hundreds of disconnected domains (e.g. inclusions in a matrix) via a
CAD-based approach is virtually intractable. For the same problem, an image-based meshing approach is by contrast remarkably straightforward, robust, accurate and efficient. Meshes can be generated automatically and exhibit image-based accuracy, with domain boundaries of the finite element model lying exactly on the iso-surfaces, taking into account partial volume effects and providing sub-voxel accuracy (cf. Fig. 2).

Fig. 2 a) Original image, unsmoothed (203,238 mm3); b) Traditionally smoothed (180,605 mm3, Δvolume = -11.14%); c) Smoothed with Simpleware's smoothing algorithm (202,534 mm3, Δvolume = -0.35%)

B. Anti-aliasing and Smoothing
Where anti-aliasing and smoothing are applied to the segmented volumes, the presented technique is both topology and volume preserving. If appropriate algorithms are not used, smoothing and anti-aliasing the data can introduce significant errors in the reconstructed geometry and topology. Most implemented smoothing algorithms are not volume preserving and can lead to shrinkage of convex hulls and to topological changes. Whilst this is not particularly problematic when the purpose is merely enhanced visualization, the influence can be dramatic when the resultant models are used for metrology or simulation purposes.
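The shrinkage produced by naive smoothing is easy to demonstrate. The sketch below applies a generic Gaussian-blur-and-rethreshold pass (not any of the algorithms discussed here) to a segmented sphere and reports the volume change:

```python
import numpy as np
from scipy import ndimage

# Binary sphere mask as a stand-in for a segmented anatomical volume.
z, y, x = np.ogrid[:64, :64, :64]
mask = ((x - 32)**2 + (y - 32)**2 + (z - 32)**2 < 20**2).astype(float)

voxel_volume = 0.5**3  # assuming 0.5 mm isotropic voxels (illustrative)
original = mask.sum() * voxel_volume

# Naive smoothing: Gaussian blur + re-threshold. Convex regions shrink,
# which is the volume error that volume-preserving algorithms avoid.
smoothed = ndimage.gaussian_filter(mask, sigma=2.0) > 0.5
shrunk = smoothed.sum() * voxel_volume
print(f"volume change: {100 * (shrunk - original) / original:+.2f}%")
```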
III. NEW DEVELOPMENTS IN IMAGE-BASED MESHING
A. Generation of Variable Density Meshes
Automated mesh generation techniques can easily generate millions of nodes, which ultimately leads to larger models to solve. The number of nodes is directly linked to the computational complexity of a problem. Reducing the size of the model can therefore have a dramatic impact on the computation time, as well as on the memory and CPU (Central Processing Unit) requirements. The authors have developed a proprietary technique which allows the setting of different density zones throughout the model, effectively reducing the overall number of elements required to capture a given geometry, while allowing the mesh density to be increased around areas of greater interest if necessary. An example is given in Fig. 3, where the head of the femur has a higher mesh density than the rest of the femur.

Fig. 3 Femur with higher mesh density at the head

B. Generation of Micro-architectures
Micro-architectures can be generated to conform to an existing domain. To control the mechanical properties of the structure, a re-iso-surfacing technique is shown to allow density variations throughout the architecture. Combined with a bisection algorithm, the technique allows micro-architectures to be generated with a specific porosity. The authors introduce the concept of a relative density map, a method for representing the desired relative densities in the micro-architecture where both the minimum and maximum porosity values can be specified. Examples are given of functionally graded and arbitrarily graded structures in both 2D and 3D, as shown in Fig. 4.
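As an illustration of the bisection idea, the sketch below takes a scalar field whose iso-surfaces define a micro-architecture (a gyroid is used purely as an example) and bisects on the iso-level until a target porosity is met; this is our generic reconstruction of the approach, not Simpleware's implementation:

```python
import numpy as np

def gyroid(nx=64, ny=64, nz=64, periods=2):
    """Example scalar field; its iso-surfaces define a lattice micro-architecture."""
    s = 2 * np.pi * periods
    x, y, z = np.meshgrid(*(np.linspace(0, 1, n) for n in (nx, ny, nz)),
                          indexing="ij")
    return (np.sin(s*x)*np.cos(s*y) + np.sin(s*y)*np.cos(s*z)
            + np.sin(s*z)*np.cos(s*x))

def iso_level_for_porosity(field, target_porosity, tol=1e-3, max_iter=60):
    """Bisect on the iso-level t so that the void phase {field <= t}
    occupies the requested volume fraction (porosity)."""
    lo, hi = field.min(), field.max()
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        porosity = np.mean(field <= t)   # void fraction at this level
        if abs(porosity - target_porosity) < tol:
            break
        if porosity < target_porosity:
            lo = t   # raise the level to enlarge the void phase
        else:
            hi = t
    return t

field = gyroid()
t = iso_level_for_porosity(field, target_porosity=0.7)
print(t, np.mean(field <= t))
```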
Fig. 4 Integration of lattice structure into a) CAD structure and b) image data

C. Homogenization
Finally, a new homogenization algorithm has been implemented. Through the innovative use of the meshing
techniques developed by the authors and parallel processing strategies, it is possible to compute orthotropic mechanical properties from higher resolution scans. This enhances the value of the information obtained at micro level, enabling it to be effectively used for macro models on desktop computers.
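Full homogenization solves boundary-value problems on the micro-mesh to obtain the orthotropic stiffness tensor. As a far simpler illustration of the concept, the classical Voigt and Reuss mixture rules bound the effective modulus from the segmented volume fraction alone (all values below are assumed for illustration, not taken from the paper):

```python
import numpy as np

def stiffness_bounds(mask, e_solid, e_void=1e-6):
    """Voigt (upper) and Reuss (lower) bounds on the effective Young's
    modulus of a two-phase micro-structure from its volume fraction.

    These bounds only bracket the true effective stiffness; FE-based
    homogenization is needed for the full orthotropic tensor.
    """
    vf = mask.mean()  # solid volume fraction
    e_voigt = vf * e_solid + (1 - vf) * e_void
    # The Reuss (harmonic) bound collapses toward zero when one phase is void.
    e_reuss = 1.0 / (vf / e_solid + (1 - vf) / e_void)
    return e_voigt, e_reuss

# Example: 30% solid trabecular-like phantom, assumed E_solid = 15 GPa.
rng = np.random.default_rng(1)
phantom = rng.random((32, 32, 32)) < 0.3
print(stiffness_bounds(phantom, e_solid=15e9))
```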
IV. CONCLUSIONS
The ability to automatically convert any 3D image dataset into high quality meshes is becoming the new modus operandi for anatomical analysis. Techniques have been developed for the automatic generation of volumetric meshes from 3D image data, including image datasets of complex structures composed of two or more distinct domains and including complex interfacial mechanics. The techniques guarantee the generation of robust, low-distortion meshes from 3D data sets for use in finite element analysis (FEA), computer aided design (CAD) and rapid prototyping (RP). The ease and accuracy with which models can be generated opens up a wide range of previously difficult or intractable problems to numerical analysis.

REFERENCES
1. Cebral J, Loehner R (2001) From medical images to anatomically accurate finite element grids. Int J Num Methods Eng 51:985-1008
2. Young P, Beresford-West T, et al. (2008) An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A 366:3155-3173

Author address:
Author: Philippe G Young
Institute: University of Exeter
Street: North Park Road
City: Exeter
Country: United Kingdom
Email: [email protected]
Characterization of Speed and Accuracy of a Nonrigid Registration Accelerator on Pre- and Intraprocedural Images
Raj Shekhar1, William Plishker1, Sheng Xu2, Jochen Kruecker2, Peng Lei1, Aradhana Venkatesan3, and Bradford Wood3
1 Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Maryland, USA
2 Philips Research North America, Briarcliff Manor, New York, USA
3 Center for Interventional Oncology, Clinical Center and National Cancer Institute, National Institutes of Health, Bethesda, Maryland
Abstract— Targeting of FDG-avid, PET-visible but CT-invisible lesions with CT is, by definition, challenging and is a source of uncertainty in many percutaneous ablative procedures. Preprocedural PET has been overlaid on intraprocedural CT to improve the visualization of such lesions. Such an approach depends on registration of PET and CT, but current registration methods remain mostly slow and manual and do not optimally account for deformation of abdominothoracic anatomy. We have developed a fully automatic, nonrigid image registration technology that is accelerated on 3 field programmable gate array (FPGA) chips to execute in <1 min. We have tested the application of this technology for multimodality PET-CT image guidance during interventional procedures. Archived abdominothoracic preprocedural PET and intraprocedural CT images of 7 patients with FDG-avid lesions were re-registered using the FPGA method. During the actual procedure, a PET-CT fusion image to assist in needle placement was created for each of the patients using a prototype electromagnetic tool tracking system and manual rigid image registration lasting >5 min. The speed and accuracy of registration, which are critical to eventual clinical adoption, were recorded. On average, the FPGA-based registration took 53 s and agreed with the existing solution at the lesion center to within 6.6 mm (1.7 voxels). We prove the feasibility of fast and accurate nonrigid registration capable of enabling efficient multimodality PET-CT image-guided interventional procedures. The sub-minute speed of the FPGA method is important for clinical efficiency and on-demand intraprocedural PET-CT registration. The accuracy of FPGA-based nonrigid image registration is also acceptable. Our next steps are to introduce the FPGA registration in the clinic and re-test its accuracy, speed, and effectiveness in a larger patient population. When fully developed and tested, our approach might improve target visualization and thus the precision, safety, and outcomes of interventional radiology procedures. Keywords— Medicine, image registration, FPGA.
I. INTRODUCTION
Because of the >90% sensitivity of fluorodeoxyglucose (FDG) positron emission tomography (PET) and the approximately 50% sensitivity of computed tomography (CT), many metastatic lesions are visible in PET but invisible in CT. Practical considerations discourage the use of a PET scanner for interventional procedures but allow the use of CT to provide intraprocedural imaging guidance in most biopsies and percutaneous ablations. Targeting of PET-visible but CT-invisible lesions with CT is, by definition, challenging and a source of uncertainty. Preprocedural PET has been overlaid on intraprocedural CT in an attempt to improve the intraprocedural visualization of such lesions, a process that employs PET data for needle placement. Such an approach depends on the registration of PET and CT, and current registration methods remain mostly manual or semi-automatic, making registration slow and clinically less practical. Moreover, most of these methods assume rigid-body motion to ease the registration task. While rigid-body assumptions are appropriate for some registration scenarios, they do not properly account for deformation of soft tissues such as those found in the thoracic and abdominal anatomy. Such motion may be the result of respiration, scanning position, or changes in shape or size over time. To properly address this nonrigid registration problem, we have developed a fully automatic, nonrigid image registration technology [1] that is accelerated on three field programmable gate array (FPGA) chips such that registration requires 1 minute or less of execution time. To characterize this solution for image-guided intervention scenarios, we have tested here the accuracy and speed of PET and CT registration by our FPGA-based registration method. The quality and speed of our solution are indicative of the feasibility of multimodality PET-CT imaging guidance during interventional procedures.
II. BACKGROUND
Intensity-based image registration algorithms rely on correlations between voxel (3D pixel) intensities and not on landmark detection, which makes them robust but computationally intensive. A transformation is often described as a deformation field, in which each part of the image to be deformed (the floating image) has a specific deformation such that it aligns with the other image (the reference image). Construction of the deformation field can start from just a few parameters in the case of rigid registration, or from a set of control points which capture the nonuniformity of nonrigid registration. Regardless of the representation, a transformation contains the information necessary to deform all of the voxels in the floating image into the reference image space, aligning the features of the floating image with those of the reference. This transformed image can be compared to the reference image using a variety of similarity metrics, such as mean squared difference (MSD) or mutual information. For iterative approaches, the similarity value is returned so that it may guide the optimization engine towards successively better solutions. Problem parameters may change during run-time to improve speed and accuracy.
While image registration is a computationally intensive problem, it can be readily accelerated by exploiting parallelism. Researchers have applied a variety of innovative approaches at different levels of the problem. To bring more accurate and more robust image registration algorithms into the clinical setting, a significant body of research has been dedicated to acceleration techniques for image registration. Graphics processors (GPUs) have been utilized in various image registration algorithms [2,3]. Clusters and other multiprocessor systems are often the target of higher-level parallelism [4,5]. The Cell processor has been used to accelerate rigid registration using mutual information, with additional speedup from processing only a subset of the total voxels [6]. However, these solutions are unable to take advantage of the lowest levels of parallelism in the application. Our FPGA-based solution utilizes an architecture specifically designed to exploit the significant amount of parallelism that occurs in processing a single voxel. It has the potential to provide the speed of registration necessary for clinical viability. Furthermore, our FPGA-based solution is constructed with off-the-shelf parts and housed in a standard PC, making it a practical solution for image navigation in terms of size and integration.
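As a concrete illustration of the similarity metric mentioned above, the following is a minimal NumPy computation of mutual information from the joint intensity histogram; it is a sketch of the metric itself, not the FPGA implementation:

```python
import numpy as np

def mutual_information(reference, transformed, bins=64):
    """Mutual information between two images from their joint histogram.

    Higher values indicate better alignment; an optimizer iteratively
    updates the transformation to maximize this similarity value.
    """
    joint, _, _ = np.histogram2d(reference.ravel(), transformed.ravel(),
                                 bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: identical images give high MI; a shuffled image gives ~0.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(float)
print(mutual_information(img, img))
print(mutual_information(img, rng.permutation(img.ravel()).reshape(128, 128)))
```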
III. METHODS
Archived abdominothoracic preprocedural PET and intraprocedural CT images of 7 patients with FDG-avid lesions, who either underwent biopsy or were treated with radiofrequency ablation, were re-registered using the FPGA-based registration method. During the actual procedure, a PET-CT fusion image to assist in needle placement was created for each of the patients using a prototype electromagnetic tool tracking system and manual/semi-automatic rigid image registration, which lasted several minutes. In one case with standalone preprocedural PET, the PET image was registered directly with intraprocedural CT. In 6 other cases, hybrid preprocedural PET/CT was available. The CT component of the hybrid PET/CT image was registered with intraprocedural CT in these cases. The resulting transformation, when applied to the PET component, helped create the registered preprocedural PET–intraprocedural CT fusion image. The speed and accuracy of registration, which are critical to eventual clinical adoption, were recorded. For the archived cases, a single-point registration solution at the lesion center from the manual/semi-automatic method was available. Although the FPGA-based registration method nonrigidly registered every voxel in the image, the accuracy of the matching single-point registration result was compared between the 2 methods (rigid vs. nonrigid). The timing results were acquired with a software timer that was started before registration began and stopped after the final transformation vector was derived. The setup and teardown overheads (such as image transfer) were omitted from these results, as we believe a final implementation integrated with a surgical navigation system would have negligible overhead. A final integrated solution would embed the FPGA acceleration engine in the image processing pipeline, making the core registration time the primary contributor to latency. Therefore, we quote this core registration time, as it would be the additional viewing latency to the preprocedural data in the procedure room.
IV. RESULTS
Figure 1 shows a particular case with preprocedural and intraprocedural images. In this case, both preprocedural PET and CT were available, so the intraprocedural and preprocedural CTs were registered to produce the deformation field that could be applied to the PET image. At the time of the procedure the patient was in a slightly different pose, causing a rigid misalignment of the body, but much of the soft tissue deformed as well, necessitating nonrigid registration for the best possible automated results. The result of the CT-CT registration is shown here, and the same deformation field can be applied to the PET image to produce an overlay of the preprocedural PET on the intraprocedural CT. Table 1 summarizes data characteristics and speed and accuracy results. The time required for registration is a direct function of the size of the 2 images and the degree of starting misalignment. The accuracy of image registration depends considerably on image resolution, which correlates with voxel size. On average, the FPGA-based registration took 53 s and agreed with the existing solution at the target center to within 6.6 mm (1.7 voxels).

Fig. 1 Axial slice of chest of the (a) preprocedural PET/CT, (b) the intraprocedural CT, and (c) the preprocedural CT (green) registered to the intraprocedural CT (gray)

V. CONCLUSION
We have proven the feasibility of fast and accurate nonrigid registration capable of enabling efficient multimodality PET-CT imaging-guided interventional procedures. Compared with the >5 min required to register PET and CT images either manually or semi-automatically, the automatic FPGA-based registration requires <1 min in most cases. Such fast registration is important for clinical efficiency and will permit generation of a new PET-CT fusion image almost instantaneously each time intraprocedural CT is repeated. Furthermore, the inclusion of the registration accelerator in the visualization pipeline would not affect the time for the intraprocedural image to be displayed, only the preprocedural overlay.
Table 1 A summary of speed and accuracy of the registration results

Intraprocedural CT Image Size | Intraprocedural CT Voxel Size (mm x mm x mm) | Preprocedural CT Image Size | Preprocedural CT Voxel Size (mm x mm x mm) | Target Registration Error (mm; voxels) | Total Registration Time (s)
512 x 512 x 92 | 0.98 x 0.98 x 1.5 | 512 x 512 x 311 | 0.98 x 0.98 x 3.3 | 4.2; 1.2 | 57
512 x 512 x 85 | 0.86 x 0.86 x 1.5 | 512 x 512 x 267 | 0.98 x 0.98 x 3.3 | 7.8; 2.2 | 65
512 x 512 x 101 | 0.98 x 0.98 x 1.5 | 512 x 512 x 311 | 0.98 x 0.98 x 3.3 | 9.3; 2.6 | 42
512 x 512 x 127 | 0.98 x 0.98 x 1.5 | 512 x 512 x 311 | 0.98 x 0.98 x 3.3 | 5.8; 1.6 | 55
512 x 512 x 103 | 0.98 x 0.98 x 1.5 | 512 x 512 x 267 | 0.98 x 0.98 x 3.3 | 7.9; 2.2 | 45
512 x 512 x 110 | 0.98 x 0.98 x 1.5 | 512 x 512 x 267 | 0.98 x 0.98 x 3.3 | 3.3; 0.8 | 51
512 x 512 x 88 | 0.98 x 0.98 x 1.5 | 256 x 256 x 226* | 2.13 x 2.13 x 4.3* | 7.9; 1.5 | 45
*Data for preprocedural PET for the standalone PET case

An average 6.6 mm of disagreement between the clinically applied and the FPGA-based registrations may be an acceptable accuracy for nonrigid image registration. One of the determinants of this accuracy is image resolution; with smaller slice spacing and finer slices of the CT part of the preprocedural hybrid PET/CT, this accuracy should improve. Having established the equivalence of the automated nonrigid method with the currently used manual/semi-automated rigid method, our next steps are to introduce the FPGA-based registration in the clinic, gather experience with its use, and re-test its accuracy, speed, and effectiveness in a larger patient population. The FPGA-based image registration technology, which uses mutual information as the image similarity measure, is general enough to register any combination of imaging modalities. This could be used for other types of preprocedural images, such as contrast magnetic resonance imaging. When fully developed and tested, the use of multimodality imaging guidance enabled by our registration technology might improve target visualization and thus the precision, safety, and outcomes of interventional radiology procedures. It may also be possible to expand the application of percutaneous ablations to those lesions that currently go untreated because of limitations of current visualization technology.
REFERENCES
1. Shekhar R, Zagrodsky V, et al. (2003) High-speed registration of three- and four-dimensional medical images by using voxel similarity. Radiographics 23(6):1673-1681
2. Köhn A, Drexl J, et al. (2006) GPU accelerated image registration in two and three dimensions. In: Handels H, et al. (eds) Bildverarbeitung für die Medizin 2006. Springer, Berlin Heidelberg, pp 261-265
3. Ruijters D, Romeny B, Suetens P (2008) Efficient GPU-accelerated elastic image registration. In: Proc Sixth IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, pp 419-424
4. Ino F, Ooyama K, Hagihara K (2005) A data distributed parallel algorithm for nonrigid image registration. Parallel Computing 31(1):19-43
5. Stefanescu R, Pennec X, Ayache N (2003) Parallel non-rigid registration on a cluster of workstations. Proc of HealthGrid'03
6. Ohara M, Yeo H, et al. (2007) Real-time mutual-information-based linear registration on the Cell Broadband Engine processor. In: 4th IEEE ISBI, Arlington, VA, pp 33-36
7. Castro-Pareja CR, Jagadeesh JM, Shekhar R (2003) FAIR: a hardware architecture for real-time 3-D image registration. IEEE Trans Inf Technol Biomed 7(4):426
Assessment of Kidney Structure and Function Using GRIN Lens Based Laparoscope with Optical Coherence Tomography
C.W. Chen1, J. Wierwille2, M.L. Onozato3, P.M. Andrews3, M. Phelan4, J. Borin4, and Y. Chen1,2,*
1 Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742 USA
2 Fischell Department of Bioengineering, University of Maryland, College Park, MD 20742 USA
3 Georgetown University School of Medicine, Washington, DC 20007 USA
4 Department of Surgery, University of Maryland School of Medicine, Baltimore, MD 21201 USA
[email protected] Abstract— Acute kidney injury (AKI) is a common and potentially devastating disease in clinical medicine. Currently, there is a lack of early available biomarkers for AKI, therefore precludes the initiation of treatment or therapy in a timely manner. In clinical practice, however, standard indicators of kidney function following transplantation require several days to achieve a steady-state in order to accurately reflect graft function. Recently, we have developed a laparoscopic probe that can perform imaging in vivo and in real time using optical coherence tomography (OCT). The design of the probe consists of two galvanometers for transverse scans, an objective, and a GRIN lens relay. The scan range depends on the size of GRIN lens implemented. In this study, a GRIN lens relay of 4.57mm in diameter and 219mm in length is implemented. Using the laparoscopic probe, we have imaged human skin in situ and rat kidney ex vivo with clear visualization of features such as the dermal layers and eccrine sweat glands in the skin, and glomeruli and convoluted tubules in the kidney. We have also demonstrated the use of Doppler OCT to quantify flow rate in capillary phantom. These data demonstrate the probe’s imaging capability and its possible use in imaging human kidney in vivo and in real time during the standard laparoscopic partial nephrectomy procedures, and to provide a quantitative assessment of kidney structure and function following transplantation. Keywords— Acute kidney injury, Optical coherence tomography, GRIN lens, Laparoscope.
I. INTRODUCTION
Acute kidney injury (AKI) is a life-threatening disease with persistently high mortality and morbidity rates despite advances in supportive care. It has been estimated that the incidence of intrinsic AKI is approximately 115,000 cases/year [1]. AKI is typically diagnosed by observing rises in blood urea nitrogen (BUN) and serum creatinine (SCr). However, SCr is a suboptimal indicator of renal
function during AKI, and does not accurately reflect the degree of dysfunction until reaching a steady state after several days [2]. More critically, no objective tool currently exists in the clinic to detect early renal dysfunction and the degree and extent of AKI; consequently, therapeutic interventions cannot be implemented effectively. The need for early intervention urges the development of new diagnostic tools to assess the extent of disease, quantify the physiological parameters of disease, and evaluate disease progression or response to therapy. Optical coherence tomography (OCT), a rapidly emerging medical imaging modality, holds such promise in that OCT can provide subsurface imaging of biological tissues with a penetration depth of 1-2 mm and micron-level resolution relevant to many biological tissues. It performs an “optical biopsy”, imaging kidney structure and function with a field-of-view (FOV) comparable to that of standard excisional biopsy but without the removal of a tissue specimen. OCT has already been successfully translated to various clinical applications including ophthalmology, cardiology, and gastroenterology, to name a few [7,8]. OCT imaging of the kidney is a new and under-explored area, but one with strong translational potential. OCT can be readily interfaced with fiberoptic catheters, endoscopes, laparoscopes, and needle imaging probes to image inside the body minimally invasively. Here, we have developed a GRIN lens based laparoscopic probe that can perform imaging in vivo and in real time using OCT. We have tested this system on human skin in situ, rat kidney ex vivo, and a capillary phantom. This GRIN lens based laparoscopic OCT system enables clinical kidney imaging during standard laparoscopic procedures to quantitatively evaluate kidney structure and function during partial nephrectomy, a procedure in which the kidney is subjected to ischemic insult.
* Corresponding author.
Fig. 1 (A) Schematic and photo of the OCT-based laparoscope system. (B) Ray tracing using Matlab. After passing through the objective, the beam is focused slightly into the GRIN lens. The GRIN lens relays the focused beam to the other end, 2 mm beyond the tip
Fig. 2 (A) Cross-sectional laparoscopic OCT image of human finger in vivo with morphological details clearly discerned (1: epidermis, 2: dermis, 3: eccrine sweat glands). (B) 3D visualization of human finger in vivo. Individual swirling sweat ducts can be visualized in the enlarged view
II. METHOD
A. The Laparoscopic System
A high-speed, high-resolution swept-source/Fourier-domain OCT system (Thorlabs Inc., NJ, USA) is used in this study. The light source generates a 100 nm full width at half maximum (FWHM) bandwidth at 1325 nm, yielding an axial resolution of 10 μm in tissue. The laser operates at a sweep rate of 16 k A-scans per second with an average output power of 12 mW. 3D imaging is achieved by a pair of mirrors mounted on XY scanning galvanometers (Cambridge Technology, MA, USA) and a microscope objective. The transverse resolution is 15 μm with 4 mW of power illuminating the sample. The OCT imaging system's sensitivity is 97 dB. The GRIN lens is mounted with its entrance plane close to the focal plane of the objective and is carefully designed such that the focused beam is relayed to the other end. The principle of design and the imaging capability of GRIN lens based OCT imaging have been demonstrated previously [3]. Here, the GRIN lens (GRIN Lens Corporation, Rochester, NY) is 0.99 pitch long (219 mm) and has a diameter of 4.57 mm. The central index of refraction is 1.542, and the NA is 0.1. Figure 1(A) shows the overall schematic of the OCT system used in this experiment; the inset photo shows the prototype of the laparoscope probe.
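As a consistency check on the quoted axial resolution, the standard coherence-length relation for a Gaussian spectrum is Δz = (2 ln 2/π)·λ0²/Δλ, divided by the refractive index for resolution in tissue. The sketch below assumes a tissue index of 1.38 (not stated in the paper); since real swept-source spectra are non-Gaussian, a measured figure of 10 μm is plausible even though the ideal value is smaller:

```python
import numpy as np

lambda0 = 1325e-9        # center wavelength (m)
delta_lambda = 100e-9    # FWHM bandwidth (m)
n_tissue = 1.38          # assumed tissue refractive index (not from the paper)

# Ideal axial resolution for a Gaussian spectrum.
dz_air = (2 * np.log(2) / np.pi) * lambda0**2 / delta_lambda
print(f"axial resolution: {dz_air*1e6:.1f} um in air, "
      f"{dz_air/n_tissue*1e6:.1f} um in tissue")
```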
B. Rat Kidney
Normal kidneys of male Munich-Wistar rats, preserved in situ by vascular perfusion, were used [4]. All the animal models, euthanasia, and fixation procedures received prior approval by the Animal Use and Care Committee at the Georgetown University Medical Center, in compliance with the Federal Animal Welfare Act.
C. Capillary Phantom
A capillary with 0.4 mm inner diameter (Drummond Scientific Co., Broomall, PA) served as the vessel phantom. The solution in the capillary was 2% intralipid. The surrounding medium was 2% intralipid mixed with ultrasound gel. The capillary was placed obliquely in the gel at a 5-degree angle upward from the horizontal plane. Flow was pumped through the capillary at constant rates from 0 to 1.2 mm/s.
Fig. 3 En face view (at 200 µm below the cortex) of the tubular structure of rat kidney imaged by the laparoscope
III. RESULTS AND DISCUSSION
Figure 2A shows a cross-sectional OCT image of a human finger in vivo acquired with this laparoscopic system. Figure 2B shows the 3D view. Individual swirling sweat ducts can be clearly resolved. These preliminary data demonstrate that the prototype OCT laparoscope has sufficient image resolution and sensitivity for clinical kidney imaging. The capability of resolving characteristic renal microanatomy is demonstrated in Fig. 3. Proximal convoluted tubules from rat kidney were clearly resolved by the laparoscopic OCT imaging probe. This suggests our device has sufficient resolution to image renal anatomic structures. Besides structural imaging, functional information such as blood flow is important for assessing kidney status. We have investigated Doppler OCT through the laparoscopic probe. In Fig. 4, we demonstrate that the phase shift changes accordingly with the flow velocity. The relation between phase shift and velocity is linear, and this linear relation is observed at flow rates up to 1.2 mm/s. The linear phase shift is also observed in the highly scattering gel surrounding the capillary tube, which has scattering properties similar to those of human tissue. Compared with previous studies [4,5], these results also indicate that the image quality is not compromised by the introduction of the GRIN lens. One possible limitation, however, is that it could be difficult to quantify the absolute blood flow. Currently, we are developing 3D DOCT algorithms to incorporate the angle of flow in order to accurately assess the flow inside the vessel.
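The linear phase-velocity relation follows from the standard Doppler OCT expression v = λ0·Δφ/(4π·n·T·cosθ), where T is the A-scan period and θ the angle between the beam and the flow. The sketch below plugs in values suggested by the setup described above; the medium index and beam-flow angle are our assumptions, not the authors' calibration:

```python
import numpy as np

def doppler_velocity(delta_phi, wavelength_m=1325e-9, a_scan_rate_hz=16e3,
                     n_medium=1.35, angle_deg=85.0):
    """Flow speed from the Doppler phase shift between successive A-scans.

    Standard Doppler OCT relation: v = lambda0 * delta_phi /
    (4 * pi * n * T * cos(theta)). The index 1.35 (for intralipid) and
    the 85-degree beam-flow angle (capillary tilted 5 degrees from
    horizontal, beam vertical) are illustrative assumptions.
    """
    T = 1.0 / a_scan_rate_hz          # time between successive A-scans
    theta = np.deg2rad(angle_deg)
    return wavelength_m * delta_phi / (4 * np.pi * n_medium * T * np.cos(theta))

# e.g., a phase shift of pi/2 rad between A-scans:
print(doppler_velocity(np.pi / 2))
```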
Fig. 4 (A) Doppler OCT images at different flow velocities in the capillary. (B) Correlation between velocity and Doppler phase shift
IV. CONCLUSIONS
AKI involves various degrees of structural and functional impairment in different renal compartments; hence, the capability of comprehensive and quantitative imaging of kidney structure and function in situ may lead to better diagnosis and prognosis. The preliminary data have demonstrated that the GRIN lens based laparoscopic OCT can provide high-resolution, depth-resolved imaging to visualize the kidney microanatomy in real time. Rat kidney tubular microstructure is clearly visualized, and morphometric parameters, such as the diameters of the vessels and tubules, can subsequently be assessed automatically and quantitatively [6]. In addition, DOCT has been demonstrated on a capillary phantom.
Therefore, it may enable the quantification of renal blood flow. Taken together, OCT/DOCT enables multi-parametric imaging of kidney structure and function, which can be used to improve kidney disease diagnosis and prognosis. These promising preliminary results motivate further development and translation of this technology to assess its feasibility for human kidney imaging.
ACKNOWLEDGMENT This work is supported in part by the NanoBiotechnology Award of State of Maryland, the Minta Martin Foundation, and the General Research Board (GRB) Award of the University of Maryland.
REFERENCES
1. Liano, F. and J. Pascual, Epidemiology of acute renal failure: a prospective, multicenter, community-based study. Madrid Acute Renal Failure Study Group. Kidney Int, 50, 811-8, 1996.
2. Bellomo, R., J.A. Kellum, and C. Ronco, Defining acute renal failure: physiological principles. Intensive Care Med, 30, 33-7, 2004.
3. Tuqiang Xie, Shuguang Guo, Zhongping Chen, David Mukai, and Matthew Brenner, "GRIN lens rod based probe for endoscopic spectral domain optical coherence tomography with fast dynamic focus tracking," Opt. Express, 14, 3238-3246 (2006).
4. Y. Chen, P.M. Andrews, A.D. Aguirre, J.M. Schmitt, J.G. Fujimoto, "High-Resolution Three-Dimensional Optical Coherence Tomography Imaging of Kidney Microanatomy Ex Vivo," Journal of Biomedical Optics, 12, 034008 (2007).
5. P.M. Andrews, Y. Chen, S. Huang, D. Adler, R. Huber, J. Jiang, S. Barry, A. Cable, J.G. Fujimoto, "High-Speed Three-Dimensional Optical Coherence Tomography Imaging of Kidney Ischemia In Vivo," Laboratory Investigation, 88, 441-449 (2008).
6. Q. Li, M.L. Onozato, P.M. Andrews, C.W. Chen, A. Paek, R. Naphas, S. Yuan, J. Jiang, A. Cable, Y. Chen, "Automated Image Analysis of Three-Dimensional Microstructures of the Human Kidney using Optical Coherence Tomography (OCT)," Optics Express, 17, 16000-16016 (2009).
7. J.S. Schuman, C.A. Puliafito, and J.G. Fujimoto, Optical Coherence Tomography of Ocular Diseases (2nd Edition). Slack Inc., Thorofare, NJ (2004).
8. I.K. Jang, B. Bouma, B. MacNeill, M. Takano, M. Shishkov, N. Iftima, and G.J. Tearney, "In-vivo coronary plaque characteristics in patients with various clinical presentations using Optical Coherence Tomography," Circulation, 108, 373-373 (2003).
Author: Yu Chen
Institute: Fischell Department of Bioengineering, University of Maryland
City: College Park, MD 20742
Country: USA
Email: [email protected]
Reliability of Structural Equation Modeling of the Motor Cortex in Resting State Functional MRI

T. Kavallappa1, S. Roys1, A. Roy2, J. Greenspan1, R. Gullapalli1, and A. McMillan1

1 University of Maryland School of Medicine, Department of Diagnostic Radiology & Nuclear Medicine, Baltimore, MD
2 University of Maryland, Baltimore County, Department of Mathematics and Statistics, Catonsville, MD
Abstract— Similar to functional MRI (fMRI), resting-state functional connectivity MRI (rsfcMRI) can be used to measure inter-regional similarities in BOLD fluctuations across the resting brain to yield important information about the connectivity of a functional network, e.g., the motor system. Effective connectivity analysis can be used to evaluate the causal relationships within these networks. In this study, we evaluate the consistency of structural equation modeling (SEM) of the resting state motor network involving five regions: the primary motor cortices (LM1, RM1), the supplementary motor area (SMA), and the pre-motor areas (LPMA, RPMA). Overall, our results show that some directional connections within the cortical motor network are consistent, but are not reproducible across the entire study population, and depend on the chosen causal model.

Keywords— Path Coefficients, Motor Network, Effective Connectivity, fMRI, Structural Equation Modeling.
I. INTRODUCTION

fMRI is a popular neuroimaging method which measures localized changes in cerebral blood flow due to neuronal activity. Here, brain activation is indirectly measured by the change in the oxygenation level of blood flow in different regions of the brain, called the Blood Oxygen Level Dependent (BOLD) signal [1]. Connectivity studies examine the interaction, coordination, and connectivity patterns between different brain regions during a particular brain state. Effective connectivity in particular refers to the directional influences between brain regions [2]. Resting state connectivity refers to the connectivity between different brain regions exhibiting spontaneous BOLD fluctuations during a no-active-task state, in which the subject simply rests. In this paper, we evaluate the resting state effective connectivity of the motor network using Structural Equation Modeling (SEM). SEM is one of the methods available for estimating effective connectivity from fMRI data: a multivariate linear statistical technique for examining relationships between variables using the variance-covariance structure of the data, primarily used to validate causal network models. In this context, covariances refer to the degree to which the activations of regions of interest (ROIs)
are related to each other. While SEM is typically used across task conditions or groups [3], in this paper we evaluated the reliability of SEM for connectivity analysis across subjects in a well-known anatomical network of the brain, the cortical motor network. The anatomical motor network we used is shown in Figure 2, comprising the left and right primary motor areas (LM1, RM1), the left and right pre-motor areas (LPMA, RPMA), and the supplementary motor area (SMA).

Structural Equation Modeling: SEM was introduced to brain connectivity analysis using neuroimaging by McIntosh and Gonzalez-Lima [4]. SEM works with a predefined model describing the directional influences between the variables present in the model and the covariances between these variables. SEM estimates the connection strengths, or path coefficients, of each of the directional connections described in the input causal model by minimizing the difference between the observed covariances and those implied by the causal model. In SEM, the variables can be described as endogenous or exogenous, and as latent or indicator variables. While endogenous variables are those included in the causal model, exogenous variables are those that are not included in the network but influence the activity of the endogenous variables. The indicator variables represent the latent variables, whose activity cannot be directly measured. In the context of fMRI, the latent variables can be the neuronal activity corresponding to the task, and the BOLD fluctuations in the corresponding ROIs are the indicator variables. The relationship between the brain regions in a single sample can be mathematically represented as a linear equation [5]:
Y = βy·x1 X1 + βy·x2 X2 + ψ    (1)
Here, the regional activities or variances in X1 and X2 (endogenous variables) influence the variance of Y; the βs represent the path coefficients from X1 and X2 to Y, i.e., the degree to which X1 or X2 influences the activity in Y. ψ is the residual error variance, representing the proportion of the total variance in Y that is not accounted for by X1 and X2. The model can also be represented as
Y = βY + ψ    (2)
Here, Y represents a vector of the variables present in the model, β is a matrix of path coefficients, and ψ represents the residual error values corresponding to each variable in the model. Path coefficients are defined [4] as the "direct proportional functional influence one region has on another region through their direct anatomical connection with all other regions in the model left unchanged."

Fig. 2 Initial causal model
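To make the estimation step concrete, the following is a minimal numerical sketch of the covariance-fitting idea behind SEM; it is not the 1dSEM implementation used later in this paper. Given an observed covariance matrix S, path coefficients for a hypothetical six-connection network among LM1, RM1, LPMA, and RPMA are found by minimizing a maximum-likelihood discrepancy between S and the model-implied covariance (I − β)⁻¹Ψ(I − β)⁻ᵀ. The random input data merely stand in for ROI time series.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical directed connections; indices: 0=LM1, 1=RM1, 2=LPMA, 3=RPMA.
# Each (target, source) pair is one free path coefficient B[target, source].
paths = [(1, 0), (0, 2), (3, 2), (0, 1), (2, 3), (1, 3)]

def implied_cov(theta, p=4):
    """Model-implied covariance for Y = B Y + psi: Sigma = (I-B)^-1 Psi (I-B)^-T."""
    B = np.zeros((p, p))
    for k, (i, j) in enumerate(paths):
        B[i, j] = theta[k]
    Psi = np.diag(np.exp(theta[len(paths):]))   # positive residual variances
    A = np.linalg.inv(np.eye(p) - B)
    return A @ Psi @ A.T

def ml_discrepancy(theta, S):
    """Maximum-likelihood fit function between observed S and implied Sigma."""
    Sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e10
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) - np.linalg.slogdet(S)[1] - S.shape[0]

def fit_sem(S):
    theta0 = np.zeros(len(paths) + S.shape[0])
    res = minimize(ml_discrepancy, theta0, args=(S,), method="Nelder-Mead",
                   options={"maxiter": 20000})
    return res.x[:len(paths)]                   # estimated path coefficients

# Example with random data standing in for 171-timepoint ROI series:
rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((171, 4)), rowvar=False)
print(fit_sem(S))
```

The quality of such a fit is then summarized by goodness-of-fit indices, as reported in the Results below.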
II. METHODS

Data Acquisition: Seven healthy right-handed participants (3 females and 4 males, average age 33±11 years) with no neurological illness participated in this study. Informed consent was obtained from each of the seven participants in accordance with the Institutional Review Board at the University of Maryland School of Medicine. Each participant was imaged 9 times: 3 times within a single session, across 3 sessions. All data were acquired on a Tim Trio 3T MRI scanner (Siemens AG, Erlangen, Germany) equipped with a 12-channel receive-only head coil. fMRI and fcMRI data were acquired using a T2*-sensitive single-shot EPI sequence (TE = 30 ms, TR = 2 s, timepoints = 171, flip angle = 90°, 24 axial slices, slice thickness = 6 mm with no gap, 3.4 × 3.4 mm² in-plane resolution, FOV = 22 cm). A high-resolution T1-MPRAGE (TE = 3.44 ms, TR = 2250 ms, TI = 900 ms, flip angle = 9°, 96 slices, slice thickness = 1.5 mm, 0.86 × 0.86 mm² in-plane resolution, FOV = 22 cm) was acquired for anatomic reference.
Fig. 1 fcMRI maps
Preprocessing: Data were analyzed using AFNI [6] and MATLAB (MathWorks Inc., Natick, MA). Each participant's functional scans were corrected for slice timing, spatially registered to the first functional scan from their first session, and blurred with a 6-mm FWHM Gaussian kernel. For each of a participant's three sessions, seed voxels from LM1 (determined during a simple finger-tapping fMRI experiment) were used to extract average time series from the three separate resting state scans. The extracted time series were used to calculate whole-brain resting state functional connectivity using correlation analysis. The resulting images were thresholded at r ≥ 0.4 to obtain fcMRI maps, as shown in Figure 1. Spherical regions of interest (ROIs) were manually drawn in each region of the cortical motor network (described below) to extract the time series data for further analysis.

Analysis: To perform the SEM analysis, the initial causal model selected is shown in Figure 2. This is a well-understood anatomical model of the motor network [7], comprising the left and right primary motor areas (LM1, RM1), the left and right pre-motor areas (LPMA, RPMA), and the supplementary motor area (SMA). In this model, the SMA is anatomically connected to the bilateral motor and pre-motor areas and has a causal influence on all of these areas. SEM analysis was performed using the 1dSEM function in AFNI [8]. This function requires the initial causal model defined in the form of a connection matrix, the covariance matrix, residual ψ values, and the number of degrees of freedom. To calculate the covariance matrix, either the mean or the first eigenvector of the component voxels in a single ROI can be used; here we used the mean to ensure that the ROI data represented the trend in the data and were not unduly influenced by outliers. We conducted combination tests to observe the reliability of SEM results between subjects: with each subject having 9 resting state scans, the scans were grouped into every unique combination (without regard to ordering), 2 at a time, 3 at a time, etc., as enumerated in the sketch below.
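The combination levels are easy to enumerate programmatically; the following sketch is illustrative (not the analysis code), uses only the scan count stated above, and confirms the count that appears in Table 1.

```python
from itertools import combinations
from math import comb

N_SCANS = 9  # resting-state scans per subject (3 scans x 3 sessions)

# Enumerate every combination level used in the permutation tests:
for level in range(2, N_SCANS + 1):
    combos = list(combinations(range(N_SCANS), level))
    assert len(combos) == comb(N_SCANS, level)
    # ... each combo's scans would be pooled and fed to the SEM fit ...

print(comb(N_SCANS, 5))  # -> 126, the "combination level 5" count in Table 1
```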
III. RESULTS

Initial causal model: The initial causal model representing the resting state motor network is shown in Figure 2. Each subject's permuted data was subjected to SEM analysis, and path coefficients were obtained. The goodness-of-fit estimates for this model (Akaike's information criterion and the parsimonious fit index) indicated that the model was not a good fit to the data. This was reiterated by low mean path coefficient values across all regions (the strongest consistent connection, LPMA->RPMA, was only 0.1149), with standard deviations of the same order as the means. The model of Figure 2 was therefore judged a poor fit and was modified to the model in Figure 3, in which the SMA and its associated connections were eliminated.

Fig. 3 Modified anatomic model

The mean and standard deviation of the path weights at combination level 5 (the maximum number of combinations) for all subjects are shown in Table 1. In order to assess the consistency of the path coefficients, and also to ensure good coefficient-of-variation measures across all permutations, we use two metrics representing these values; they are tabulated in Table 2. θ represents the slope of the line joining the mean path coefficients corresponding to each combination level; a lower slope value shows that the path coefficient mean remained the same across all combination levels. The γ value is related to the coefficient of variation and is calculated using:

γ = Σσ² / Σμ²    (3)

Connections were judged as reliable according to the θ and γ values as:

A = (γ < 0.05) + 2·(|θ| < 0.01025)    (4)

The reliability metric, A, was chosen so that the change in path coefficients across all combinations was less than 0.1 (stable across combinations), and so that the sum-of-squares variance was within 5% of the sum-of-squares mean (stable within combinations).

Table 1 Path coefficients [mean (std)] for all subjects at combination level 5 (126 combinations)

Subject | LM1->RM1 | LPMA->LM1 | LPMA->RPMA | RM1->LM1 | RPMA->LPMA | RPMA->RM1
SUBJ 1 | 2.0122 (0.0983) | -0.8269 (0.1648) | 1.0377 (0.0692) | 1.9006 (0.0728) | 0.4559 (0.0381) | -0.3259 (0.0729)
SUBJ 2 | 0.5721 (0.0416) | 0.6693 (0.0370) | 0.6349 (0.0615) | 0.3045 (0.0624) | 0.7926 (0.0491) | 0.2380 (0.0316)
SUBJ 3 | 0.2257 (0.0991) | 0.1825 (0.0848) | 0.7485 (0.0322) | 0.6468 (0.0840) | 0.6922 (0.0387) | 0.6074 (0.0827)
SUBJ 4 | 0.7505 (0.4170) | 0.2006 (0.2329) | 1.0814 (0.3297) | 0.5175 (0.4360) | 1.1579 (0.3361) | 0.2397 (0.2157)
SUBJ 5 | 0.7593 (0.0604) | 0.5105 (0.0393) | 0.7677 (0.1613) | 0.2932 (0.0405) | 0.5219 (0.1950) | 0.3838 (0.0421)
SUBJ 6 | 0.8826 (0.0469) | 0.3181 (0.0368) | 0.8010 (0.0320) | 0.2668 (0.0206) | 0.5749 (0.0325) | 0.2190 (0.0572)
SUBJ 7 | 1.1202 (0.0548) | 0.7717 (0.0439) | 1.2571 (0.0408) | -0.1822 (0.0349) | 0.3285 (0.0292) | 0.3339 (0.0276)

IV. DISCUSSION

While the initial causal model for the resting state motor network included the SMA, the path coefficients from the SEM analysis and permutation tests showed that the model was a poor fit for our data. Thus, the SMA and its associated connections were removed from the model. In terms of the resting state network, we hypothesize that this might be due to the function of the SMA, which is the complex programming and planning of motor functions, and which thus does not exert a strong causal influence on the other ROIs in the network, although it is highly correlated with them (Figure 1). The weak influence of the SMA on the model might also be due to other exogenous variables that were not considered in our model. For example, the basal ganglia, which have significant anatomical and functional connections with the SMA, might be acting upon it, in turn resulting in the poor path coefficients [9]. From our results with the modified causal model, the LPMA->RPMA connection is the strongest across all subjects, followed by the RPMA->LPMA connection, based on the path coefficients.
We see that the path coefficients corresponding to RPMA->RM1 and LM1->RM1 are the most reliable across subjects, while the path coefficients corresponding to LPMA->LM1 and RM1->LM1 have the highest variability between subjects. Observing the between-subject reliability of the path weights, subjects 1 and 4 display the least reliable connections, while subjects 5, 6, and 7 have the most reliable path weights. From our results, it does not appear that SEM of the investigated cortical motor network is entirely reliable; thus, caution should be taken when interpreting similar analyses. However, while the magnitudes of the mean path weights vary across subjects, some of the connections (LPMA->RPMA, RPMA->LPMA, LM1->RM1) appear relatively close in magnitude. This variability in mean path weights may not be entirely unexpected, since our study population is somewhat heterogeneous in age, gender, and experience (i.e., motor training). Furthermore, reliable output of SEM is highly dependent on having a correct a priori anatomical model. While we have used a subset of a well-characterized anatomical network, absolute validation of our model is difficult, particularly when recent studies have shown that functional connections exist even when anatomical connections cannot be identified [10] and that different anatomical regions are active in resting versus active networks [11]. In conclusion, we have shown that some connections within the cortical motor network are consistent, but were not reproducible across the entire population, and were highly dependent on the chosen model. Therefore, care should be taken not only when interpreting the significance of path weights between regions; the selection of a functionally correct network model is also critical to reliable findings in SEM. Future studies will explore more extensive models of motor connectivity to determine the most relevant network to guide clinical implementation of these techniques.

Table 2 Reliability indices of path weights for all subjects (legend: A = 3, A = 2, A = 1; cf. Eq. 4)

LM1->RM1: SUBJ 1: θ = 0.2028, γ = 0.3238, Mean_PC = 0.8970; SUBJ 2: θ = 0.0168, γ = 0.0107, Mean_PC = 0.6016; SUBJ 3: θ = 0.0318, γ = 0.1179, Mean_PC = 0.3294; SUBJ 4: θ = 0.0153, γ = 0.1208, Mean_PC = 0.6903; SUBJ 5: θ = 0.0011, γ = 0.0129, Mean_PC = 0.7564; SUBJ 6: θ = 0.0030, γ = 0.0181, Mean_PC = 0.8810; SUBJ 7: θ = 0.0386, γ = 0.0079, Mean_PC = 0.9721

LPMA->LM1: SUBJ 1: θ = 0.1758, γ = 0.3404, Mean_PC = -0.1001; SUBJ 2: θ = 0.0367, γ = 0.0130, Mean_PC = 0.5712; SUBJ 3: θ = 0.0206, γ = 0.2626, Mean_PC = 0.2507; SUBJ 4: θ = 0.0019, γ = 0.3868, Mean_PC = 0.2354; SUBJ 5: θ = 0.0282, γ = 0.0390, Mean_PC = 0.4228; SUBJ 6: θ = 0.0125, γ = 0.0938, Mean_PC = 0.2913; SUBJ 7: θ = 0.0756, γ = 0.0162, Mean_PC = 0.5534

LPMA->RPMA: SUBJ 1: θ = 0.0117, γ = 0.0101, Mean_PC = 1.0643; SUBJ 2: θ = 0.0535, γ = 0.0315, Mean_PC = 0.7710; SUBJ 3: θ = 0.0164, γ = 0.0050, Mean_PC = 0.7886; SUBJ 4: θ = 0.0405, γ = 0.0588, Mean_PC = 0.8071; SUBJ 5: θ = 0.0283, γ = 0.1335, Mean_PC = 0.8753; SUBJ 6: θ = 0.0541, γ = 0.0084, Mean_PC = 0.9427; SUBJ 7: θ = 0.0276, γ = 0.0046, Mean_PC = 1.1604

RM1->LM1: SUBJ 1: θ = 0.2513, γ = 0.1016, Mean_PC = 0.8741; SUBJ 2: θ = 0.0423, γ = 0.0295, Mean_PC = 0.4233; SUBJ 3: θ = 0.0184, γ = 0.0279, Mean_PC = 0.5796; SUBJ 4: θ = 0.0040, γ = 0.3776, Mean_PC = 0.4303; SUBJ 5: θ = 0.0276, γ = 0.0206, Mean_PC = 0.3712; SUBJ 6: θ = 0.0215, γ = 0.0252, Mean_PC = 0.3245; SUBJ 7: θ = 0.0844, γ = 0.1189, Mean_PC = 0.0539

RPMA->LPMA: SUBJ 1: θ = 0.0241, γ = 0.0183, Mean_PC = 0.5188; SUBJ 2: θ = 0.0202, γ = 0.0102, Mean_PC = 0.7308; SUBJ 3: θ = 0.0161, γ = 0.0070, Mean_PC = 0.7304; SUBJ 4: θ = 0.0421, γ = 0.0545, Mean_PC = 0.8790; SUBJ 5: θ = 0.0498, γ = 0.1250, Mean_PC = 0.6743; SUBJ 6: θ = 0.0015, γ = 0.0063, Mean_PC = 0.5716; SUBJ 7: θ = 0.0447, γ = 0.0101, Mean_PC = 0.4469

RPMA->RM1: SUBJ 1: θ = 0.0918, γ = 0.9566, Mean_PC = 0.1781; SUBJ 2: θ = 0.0048, γ = 0.0714, Mean_PC = 0.2352; SUBJ 3: θ = 0.0081, γ = 0.0389, Mean_PC = 0.5729; SUBJ 4: θ = 0.0200, γ = 0.2033, Mean_PC = 0.3049; SUBJ 5: θ = 0.0061, γ = 0.0601, Mean_PC = 0.3652; SUBJ 6: θ = 0.0206, γ = 0.1282, Mean_PC = 0.2832; SUBJ 7: θ = 0.0011, γ = 0.0164, Mean_PC = 0.3458
REFERENCES
1. S. Ogawa, D.W. Tank, R. Menon, J.M. Ellermann, S.G. Kim, H. Merkle, and K. Ugurbil. Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proc Natl Acad Sci U S A, 89(13):5951–5955, Jul 1992.
2. B.P. Rogers, V.L. Morgan, A.T. Newton, and J.C. Gore. Assessing functional connectivity in the human brain by fMRI. Magn Reson Imaging, 25(10):1347–1357, Dec 2007.
3. A.R. McIntosh, C.L. Grady, L.G. Ungerleider, J.V. Haxby, S.I. Rapoport, and B. Horwitz. Network analysis of cortical visual pathways mapped with PET. J Neurosci, 14:655–666, Feb 1994.
4. A.R. McIntosh and F. Gonzalez-Lima. Structural equation modeling and its application to network analysis in functional brain imaging. Human Brain Mapping, 2(1-2):2–22, 1994.
5. M. Lindquist. The Statistical Analysis of fMRI Data. Statistical Science, 23(4):439–464, 2008.
6. R.W. Cox. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29:162–173, 1996.
7. N. Sharma, J.C. Baron, and J.B. Rowe. Motor imagery after stroke: relating outcome to motor network connectivity. Ann Neurol, 66(5):604–616, Nov 2009.
8. G. Chen, D.R. Glen, J.L. Stein, A.S. Meyer-Lindenberg, Z.S. Saad, and R.W. Cox. Model Validation and Automated Search in FMRI Path Analysis: A Fast Open-Source Tool for Structural Equation Modeling. Human Brain Mapping Conference, 2007.
9. E. Kandel and J. Schwartz. Principles of Neural Science, 2nd edition. Elsevier Science Publishing Company, 1985.
10. C.J. Honey, O. Sporns, L. Cammoun, X. Gigandet, J.P. Thiran, R. Meuli, and P. Hagmann. Predicting human resting-state functional connectivity from structural connectivity. Proc Natl Acad Sci U S A, 106(6):2035–2040, Feb 2009.
11. A.T. Newton, V.L. Morgan, and J.C. Gore. Task demand modulation of steady-state functional connectivity to primary motor cortex. Hum Brain Mapp, 28:663–672, 2007.
Quantitative Characterization of Radiofrequency Ablation Lesions in Tissue Using Optical Coherence Tomography

J. Wierwille1, A. McMillan3, R. Gullapalli3, J. Desai2, and Y. Chen1

1 Department of Bioengineering, University of Maryland, College Park, USA
2 Department of Mechanical Engineering, University of Maryland, College Park, USA
3 Department of Radiology, University of Maryland, Baltimore, USA
Abstract— Radiofrequency (RF) ablation is a widely used therapeutic intervention in the management of many cancers, including breast and liver cancers. While optimum delivery of RF energy can be monitored through real-time temperature mapping from MR imaging, it is possible that microscopic foci of malignant tissue may be left untreated. Such microscopic tissue may be beyond the resolution limit of MRI and may result in sub-optimal treatment efficacy, which may lead to cancer recurrence. Thus, for optimal treatment it is beneficial to incorporate higher-resolution techniques such as optical coherence tomography (OCT), which surpasses the resolution afforded by MRI into the micron range in situ and can provide histopathology-level information on tissue in vivo. In this preliminary study, in order to test the feasibility of this approach, we characterized tissue properties such as the scattering coefficient (μs) in non-ablated and ablated bovine skeletal muscle ex vivo. The estimated μs of the non-ablated muscle region was 1.9641 mm-1, while the μs of the ablated region was 5.8998 mm-1 (p<0.001). The OCT image of ablated tissue shows higher backscattering and reduced penetration depth due to higher attenuation. Significant histological differences in muscle fiber size and interstitial area were observed between the ablated and non-ablated tissue, with the ablated tissue showing a 31% lower packing fraction (defined as the muscle fiber area divided by the total histology area), consistent with the increased backscattered light, as the enlarged interstitial spaces augment the index mismatch for the incident light. These preliminary data demonstrate the characterization of morphological tissue properties that are altered by RF energy during ablation procedures and the feasibility of quantitatively assessing ablation lesions using OCT.

Keywords— optical imaging, optical coherence tomography, magnetic resonance imaging, radiofrequency ablation.
I. INTRODUCTION Radiofrequency ablation (RFA) is a standard therapeutic intervention in the treatment of many types of cancer [1]. Particularly, percutaneous RFA has been reported to be a safe and successful intervention in the treatment of hepatocellular carcinoma [2-5]. Recently, MRI-guided RF ablation procedures have aided in the effective application of RF energy in the therapeutic treatment of cancer by allowing the operator to visualize RF energy delivery in real time.
Optical coherence tomography (OCT) is an optical imaging modality that has become increasingly important in the clinical arena where it is used for many noninvasive medical diagnostics [6-15]. Its ability to resolve morphological microstructures in tissue to provide high-resolution images approaching that of histological studies can be very useful in clinical treatment and management of disease. The clinical benefit of OCT is the ability to perform real-time analysis (“biopsy”) of tissue in situ.
II. METHODS

A. MRI-Guided Radiofrequency Ablation

MRI temperature maps were acquired using a 3T Siemens MRI scanner (Malvern, PA) and a RITA Medical Systems (Fremont, CA) RF generator operating at 450 kHz (Model 1500), with a Starburst XL RFA probe from AngioDynamics (RITA Medical Systems) that has a thermocouple embedded at the tip of each prong, providing an independent measure of temperature. RFA was performed on bovine skeletal muscle ex vivo during continuous MRI to simultaneously obtain real-time rapid gradient echo images of the ablation region (Figure 1A-B). MR thermography was based on the temperature dependence of the proton resonance frequency (PRF). MR images were acquired using a spoiled gradient echo sequence with acquisition parameters: TR/TE = 81/7.38 ms, flip angle = 20°, matrix = 128 × 128, and slice thickness = 4 mm. Ten acquisitions were obtained before the start of RFA; these were used as reference points, and thermograph maps were generated after phase unwrapping of the images. Subtraction of the reference from the objective phase images generated phase difference maps. The phase difference maps were converted to temperature maps based on the reference temperature and the thermal coefficient α of the proton chemical shift resulting from temperature change: Δφ = αγB0·TE·ΔT, where Δφ is the phase difference, γ the gyromagnetic ratio, B0 the strength of the magnetic field, TE the echo time, and ΔT the temperature change.
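A minimal sketch of this phase-to-temperature conversion is given below. The echo time and field strength are taken from the acquisition parameters above; the PRF thermal coefficient is an assumed value (about −0.01 ppm/°C is commonly quoted in the literature for aqueous tissue) rather than a number from this paper, and sign conventions vary between implementations.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio [rad/s/T]
ALPHA = -0.01e-6               # assumed PRF thermal coefficient [1/degC] (~ -0.01 ppm/degC)
B0 = 3.0                       # field strength [T]
TE = 7.38e-3                   # echo time [s] (from the acquisition parameters)

def phase_to_temperature(phase_map, ref_phase):
    """Convert a phase-difference map [rad] to a temperature-change map [degC],
    using delta_phi = alpha * gamma * B0 * TE * delta_T."""
    dphi = np.angle(np.exp(1j * (phase_map - ref_phase)))  # wrap difference to (-pi, pi]
    return dphi / (ALPHA * GAMMA * B0 * TE)

# Example: a 10 degC rise produces this (negative) phase change...
dphi = ALPHA * GAMMA * B0 * TE * 10.0
print(phase_to_temperature(np.array([dphi]), np.array([0.0])))  # -> ~[10.]
```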
B. OCT System and Measurements

The OCT system comprises a wavelength-swept laser light source generating a 100 nm bandwidth at a 1310 nm central wavelength, yielding approximately 10 µm axial image resolution in tissue. The laser operated at a sweep rate of 16 kHz, and imaging was performed using a microscope setup with a frame rate of 30 frames per second (video rate). A Michelson interferometer composed of one circulator and a fiber-optic 50/50 splitter is used to generate the OCT interference signal. The reference arm consists of a stationary mirror and a polarization controller. The light from the sample arm is steered by a pair of galvanometer mirrors (X and Y directions), then focused by an objective lens. The power on the sample is ~4 mW with a spot size of 15 µm. The OCT interference signal is detected by a balanced detector with an overall sensitivity of 95 dB. Three-dimensional OCT images (1.0 mm × 2.5 mm × 2.5 mm, with 512 × 512 × 512 voxels) were acquired for both ablated and non-ablated regions, with the OCT axial signal measured as a function of depth (z). One representative cross-sectional image (XZ) was selected from each data set (as shown in Figure 2) for further analysis.

C. Tissue Histology

Following OCT imaging, histology samples were taken from the ablated and non-ablated regions and stained with hematoxylin and eosin (H&E) in order to visualize and analyze the tissue microstructure (Figure 3A-B).

Fig. 1 (A) Bovine skeletal muscle (beef steak). The location of RF ablation is indicated by the upper arrow and the non-ablated region by the lower arrow. (B) MRI temperature map obtained during the RF ablation procedure

Fig. 2 Cross-sectional OCT images from (A) non-ablated and (B) ablated regions in Figure 1A (scale bars: 250 µm)

Fig. 3 Histology images from (A) ablated and (B) non-ablated regions in Figure 1A (scale bars: 100 µm)
D. Data Analysis

The scattering coefficient (μs) was determined for both ablated and non-ablated OCT images using MATLAB™ to fit a single-scattering linear model to every A-scan. The fitting of the scattering model for each A-scan began at the tissue surface and extended to approximately 250 μm in depth. For histological analysis, the packing fraction, defined as the muscle fiber area divided by the total fiber bundle area, was computed for both the ablated and non-ablated histology images using ImageJ© software. All values are reported as mean ± standard deviation. A Student's t-test was performed to determine the statistical significance of all data and is given as the p value followed by the number of samples in the data set.
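A minimal version of this per-A-scan fit can be sketched as follows (in Python rather than the MATLAB™ routine used here); the axial pixel size is an assumed value, and the fit follows the single-scattering model I(z) ∝ exp(−2μs·z) used in the Results.

```python
import numpy as np

def fit_mu_s(ascan, dz, fit_depth=250e-3):
    """Estimate the scattering coefficient [1/mm] of one A-scan by fitting
    the single-scattering model I(z) ~ exp(-2*mu_s*z) over the first
    ~250 um below the surface (dz is the axial pixel size in mm)."""
    n = int(fit_depth / dz)
    z = np.arange(n) * dz
    # Linear fit of log-intensity: log I = log I0 - 2*mu_s*z
    slope, _ = np.polyfit(z, np.log(ascan[:n] + 1e-12), 1)
    return -slope / 2.0

# Example: synthetic A-scan with mu_s = 2 /mm plus multiplicative noise
rng = np.random.default_rng(2)
dz = 2e-3                               # assumed 2 um axial pixel [mm]
z = np.arange(512) * dz
ascan = np.exp(-2 * 2.0 * z) * (1 + 0.05 * rng.standard_normal(512))
print(fit_mu_s(ascan, dz))              # -> close to 2.0
```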
III. RESULTS AND DISCUSSION

The temperature map obtained from the MRI-guided RF ablation, shown in Figure 1B, depicts the local hyperthermia resulting from the RF energy entering the tissue. The central region of the tissue, where the ablation probe was placed, indicates the local temperature reaching above 80 °C during lesion formation.

The alteration in the tissue microstructure is easily seen by comparing the ablated and non-ablated OCT cross-sectional images in Figure 2. Characteristic bands due to the birefringence of the muscle fibers are clearly visible in Figure 2A. In contrast, the OCT image of the ablated region shows high backscattering of light, with the birefringence effects much less distinguishable. For the ablated regions, the higher backscattered light is accompanied by a reduced penetration depth (~0.5 mm), whereas the penetration depth in the non-ablated regions is nearly 1 mm. Furthermore, tissue properties such as the scattering coefficient (μs) can be estimated from the attenuation of the OCT axial signals: I(z) ∝ exp(−2μs·z), where I(z) is the OCT A-scan intensity and z is depth. Figure 4 shows that the estimated μs of the non-ablated tissue region is 1.9641 mm-1 while the μs of the ablated region is 5.8998 mm-1 (p<0.001, n=512).

Correlating with the OCT cross-sectional images, the alterations in tissue microstructure seen in the OCT images are confirmed in the histology images. The ablated histology image (Figure 3B) shows condensed muscle fibers and enlarged interstitial spaces compared to the non-ablated histology image (Figure 3A). Quantitatively, the change in tissue microstructure from RF ablation can be estimated using the muscle packing fraction. As presented in Figure 5, the non-ablated tissue region (Figure 3A) shows a high packing fraction of 84%, while the ablated region (Figure 3B) shows a reduced packing fraction of 58% (p<0.01, n=31). The significant difference between the scattering coefficients in the ablated (Figure 2B) and non-ablated (Figure 2A) tissue regions can be interpreted through the condensed muscle fibers and enlarged interstitial spaces seen in the histology images (Figure 3): condensed muscle fibers increase the fiber density, while the enlarged muscle fiber separation increases the index mismatch for the incident light. Both effects could result in the increased backscattering and light attenuation in the ablated region over that in the non-ablated region.

Fig. 4 Plot of the scattering coefficient (μs) obtained by fitting a single-scattering model to the OCT images (p<0.001, n=512)

Fig. 5 Plot of the packing fraction for the non-ablated and ablated histology images (p<0.01, n=31)
IV. CONCLUSIONS

These preliminary data demonstrated the possible histological origins of the tissue property alterations caused by RF ablation and the feasibility of quantitative assessment using OCT. Combined with MRI-guided RF ablation procedures, OCT could play an adjunctive role by providing real-time quantification of lesion formation in the treatment of hepatocellular carcinoma and other cancers.
ACKNOWLEDGMENT The authors would like to thank Chao-Wei Chen and Dr. Shuai Yuan (University of Maryland) for technical assistance. This work was supported in part by the NanoBiotechnology Award of State of Maryland, the Minta Martin Foundation, the General Research Board (GRB) Award of the University of Maryland, the Prevent Cancer Foundation, and the UMB-UMCP SEED Grant Program.
REFERENCES
1. Gillams AR (2005) The use of radiofrequency in cancer. Br J Cancer 92:1825-1829
2. Manikam J, Mahadeva S, Goh KL, et al. (2009) Percutaneous, non-operative radio frequency ablation for haemostasis of ruptured hepatocellular carcinoma. Hepatogastroenterology 56:227-230
3. Yan K, Chen MH, Yang W, et al. (2008) Radiofrequency ablation of hepatocellular carcinoma: long-term outcome and prognostic factors. Eur J Radiol 67:336-347
4. Ott DJ (2003) Percutaneous radio-frequency liver tumor ablation: what are the risks? Am J Gastroenterol 98:2564-2565
5. Livraghi T, Solbiati L, Meloni MF, et al. (2003) Treatment of focal liver tumors with percutaneous radio-frequency ablation: complications encountered in a multicenter study. Radiology 226:441-451
6. Huang D, Swanson EA, Lin CP, et al. (1991) Optical coherence tomography. Science 254:1178-1181
7. Fujimoto JG (2003) Optical coherence tomography for ultrahigh resolution in vivo imaging. Nat Biotechnol 21:1361-1367
8. Wang Z, Lee CS, Waltzer WC, et al. (2007) In vivo bladder imaging with microelectromechanical-systems-based endoscopic spectral domain optical coherence tomography. J Biomed Opt 12:034009
9. Schuman JS, Puliafito CA, Fujimoto JG (2004) Optical Coherence Tomography of Ocular Diseases (2nd Edition). Slack Inc., Thorofare, NJ
10. Jang IK, Bouma B, MacNeill B, et al. (2003) In-vivo coronary plaque characteristics in patients with various clinical presentations using Optical Coherence Tomography. Circulation 108:373-373
11. Sivak MV Jr., Kobayashi K, Izatt JA, et al. (2000) High-resolution endoscopic imaging of the GI tract using optical coherence tomography. Gastrointestinal Endoscopy 51:474-479
12. D'Amico AV, Weinstein M, Li X, et al. (2000) Optical coherence tomography as a method for identifying benign and malignant microscopic structures in the prostate gland. Urology 55:783-787
13. Bouma BE, Tearney GJ, Compton CC, et al. (2000) High-resolution imaging of the human esophagus and stomach in vivo using optical coherence tomography. Gastrointestinal Endoscopy 51:467-474
14. Pitris C, Goodman A, Boppart SA, et al. (1999) High-resolution imaging of gynecologic neoplasms using optical coherence tomography. Obstetrics and Gynecology 93:135-139
15. Pitris C, Brezinski ME, Bouma BE, et al. (1998) High resolution imaging of the upper respiratory tract with optical coherence tomography: a feasibility study. American Journal of Respiratory and Critical Care Medicine 157(5) Pt 1:1640-1644

Author: Yu Chen
Institute: University of Maryland
Street: 2330A Jeong H Kim Building
City: College Park, MD 20742
Country: USA
Email: [email protected]
Clinically Relevant Hand Held Two Lead EEG Device

E.M. O'Brien1 and R.L. Elliott2

1 School of Engineering, Mercer University, Macon, GA, U.S.A.
2 School of Medicine, Mercer University, Macon, GA, U.S.A.
Abstract–– A preliminary device has been designed that records a two-lead EEG and displays the average signal frequency over a 60 second period. The device is portable, battery operated, and clinically relevant. Between 10% and 40% of hospitalized elderly patients and up to 40% of patients admitted to intensive care units have or develop delirium, a mental state of confusion which often indicates an underlying metabolic, infectious, or other potentially serious but often treatable condition. Despite its potentially serious outcome, the diagnosis of delirium is frequently missed. Fortunately, the EEG is a rapid and sensitive test for detecting delirium. The EEG can also indicate the presence of potentially treatable causes of or contributions to dementia, including thyroid disease, malnutrition, infectious disease, or adverse effects from medications. Finding a normal EEG in a patient with suspected delirium or dementia reduces the need for additional costly and potentially risky tests. However, scheduling a full EEG test is time-consuming and expensive, and in many centers the results are not usually available for days or weeks. This portable EEG device can be used by the clinician to rapidly indicate whether the patient's EEG is normal or abnormal and whether an extensive medical evaluation is warranted. In the present design, three electrodes are placed on the scalp, one being a reference electrode driven by a "driven right leg" circuit. The first stage includes radio frequency suppression and protection of the circuit from large voltages possibly caused by electrostatic discharges or defibrillation. Next is an instrumentation amplifier with moderate gain, followed by other high-gain amplifiers. A fifth-order Butterworth low-pass filter is used to suppress frequencies above 30 Hz. The 10-bit A/D converter of a PIC16F88 microcontroller digitizes the signal. The microcontroller uses a C program to determine the average EEG frequency. This result is displayed via an LCD and is updated every minute, supplying valuable information to the clinician.

Keywords–– EEG, portable, delirium, dementia, average frequency.
I. INTRODUCTION

Between 10% and 40% of hospitalized elderly patients and up to 40% of patients admitted to intensive care units
have or develop delirium [1, 2], a state of confusion which often indicates an underlying metabolic, infectious, or other potentially serious but often treatable condition. The EEG is a helpful tool in diagnosing delirium, and, according to Mendez et al. [3], "disorganization of the usual cerebral rhythms and generalized slowing are the most common changes…" associated with the EEG of a patient with delirium. A similar slowing of the EEG is associated with some treatable causes of or contributions to dementia, such as hypothyroidism or a metabolic encephalopathy [4]. Thus the EEG can also indicate the presence of a potentially treatable cause of or contribution to dementia. Conversely, the finding of a normal EEG in a patient with either suspected delirium or dementia reduces the need for additional costly and potentially risky tests. Consequently, an EEG can indicate either the need for further medical evaluation in delirium or dementia, or the lack of treatable conditions likely to be uncovered with further evaluation. However, scheduling a full EEG test is time-consuming and expensive, and in many centers the results are not usually available for days or weeks. Therefore, a portable EEG device which can be used by the clinician to rapidly indicate whether the patient's EEG is normal or abnormal, and whether an extensive medical evaluation is warranted, would be clinically useful. Such a tool can save time, reduce costs associated with additional tests that are likely to have little value, and indicate the presence of potentially serious underlying medical conditions needing further evaluation. What is proposed is a portable handheld device that records a two-lead EEG and generates and displays an average frequency of the zero crossings, freqz, and of the peaks, freqp, over a 60 second interval. If the EEG is like a single-frequency sine wave, then freqz and freqp will be the same. If there is a low-amplitude high-frequency waveform riding on top of a low-frequency large-amplitude waveform (as is often the case in the EEG), the two frequencies will be different. This provides more insight into the underlying EEG.
II. METHODS AND MATERIALS

The EEG recorded with scalp electrodes has a signal amplitude of 5–300 μV and a frequency content from dc to 150 Hz according to one source [5], and an amplitude of 10–100 μV with a frequency content mostly below 40 Hz for scalp recordings according to another [3]. For this work, signals in the 10 μV range were seen, and the frequency range was limited to 1 to 30 Hz, 30 Hz being the upper limit of the beta band of EEG frequencies. Since we are concerned with a slowing of the EEG frequency, 30 Hz should be adequate and provides more distance from the 60 Hz noise that is to be removed. A block diagram of the current device is shown in Figure 1. Silver/silver-chloride ECG-type electrodes with paste are used for the interface. A bi-frontal recording is done with
the electrodes placed on the forehead at approximately AF7 and AF8 of the international 10-20 system. The reference electrode is placed behind the left ear. Coaxial cables with the shields tied to the system ground are used to connect the electrodes to the first stage. A radio frequency suppression circuit is used to protect against interference. It consists of 10 kΩ resistors in each lead with a 0.01 μF capacitor between them at the instrumentation amplifier side; this provides a low-pass filter with a cutoff frequency of 1600 Hz. The first stage of amplification is supplied by an INA101 instrumentation amplifier. The gain is set at 10 for this stage to prevent saturation due to any polarization voltage associated with the electrodes. All amplifiers are supplied with ±9 V from two conventional 9 V batteries. The input of the INA101 includes high-voltage protection. The INA101 also has the two outputs from its first stage externally available.
Fig. 1 Overall circuit block diagram
This facilitates the determination of the common-mode signal at the inputs and its subtraction from the body via a "driven right leg" circuit [5]. The circuit uses a high-gain inverting operational amplifier (MC33071). The output of this circuit is applied to the body by the reference electrode, which minimizes 60 Hz noise and other common-mode voltages. The common-mode rejection ratio, CMRR, of the INA101 at 10 Hz was measured to be 106 dB, matching the specification sheet value. The calculated random noise referred to the input of the INA101 is 0.17 μVrms, which is well below the anticipated signal level. The next two stages are identical and consist of 1 Hz passive high-pass filters followed by noninverting amplifiers with a gain of 150 each, constructed using MC33071 operational amplifiers. The total circuit gain is 225,000 V/V, which brings a ±10 μV signal to a value of ±2.25 V. The fifth-order Butterworth filter is a MAX1062 integrated circuit tuned via external components to give the correct Butterworth response. The filter's cutoff frequency is 30 Hz, and it achieves 30 dB of attenuation at 60 Hz. This chip requires the input to be between 0 and 5 V; therefore, a level-shifting circuit precedes the filter and adds a constant 2.5 V. This circuit is also a high-pass filter with a 1 Hz cutoff frequency. The 5 V supply for the microcontroller and the MAX1062 is obtained from the +9 V supply using an LM78L05 linear voltage regulator. After the EEG signal is amplified and level shifted, it is sampled with the 10-bit analog-to-digital converter of the PIC16F88 microcontroller, and the average frequencies of the EEG zero crossings, freqz, and of the peaks, freqp, are determined as shown in the flow chart of Figure 2. The program is written in C using the CCS C compiler. After initializing the program and the hardware of the device, the analog signal is sampled at a rate of 480 Hz. Each time a sample is obtained, the timing counter is incremented. An eight-point moving average filter is implemented, which has its first null at 1/8th of the sample frequency; this effectively removes any remaining 60 Hz noise. An algorithm then determines whether there has been a positive excursion through 2.5 V (which corresponds to zero volts of the original signal), which causes an increment of the zero-count variable. It then determines whether the current sample is a local peak, causing an increment of the peak-count variable. If 60 s have elapsed, the average frequencies of the zero crossings and the peaks are displayed on the LCD screen (indicated in Figure 1), and all count variables are reset before another sample is taken. If 60 s have not elapsed, the next sample is taken and the process is repeated.
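The processing chain can be prototyped off-line before committing it to firmware; the sketch below mirrors the moving-average, zero-crossing, and peak-counting steps described above (in Python rather than the CCS C firmware), with a synthetic two-component signal standing in for a recorded EEG.

```python
import numpy as np

FS = 480          # sample rate [Hz]
BASELINE = 2.5    # level-shifted "zero" of the original signal [V]

def average_frequencies(x, fs=FS, baseline=BASELINE):
    """Off-line model of the PIC algorithm: 8-point moving average
    (first null at fs/8 = 60 Hz), then count positive excursions through
    the baseline (freqz) and local maxima (freqp) over the record."""
    y = np.convolve(x, np.ones(8) / 8, mode="valid")
    zero_crossings = np.sum((y[:-1] < baseline) & (y[1:] >= baseline))
    peaks = np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))
    duration = len(y) / fs
    return zero_crossings / duration, peaks / duration

# Example: 5 Hz sine with a small 19 Hz component riding on it;
# freqz stays near 5 while freqp is pulled higher by the riding component.
t = np.arange(0, 60, 1 / FS)
x = BASELINE + 1.0 * np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 19 * t)
freqz, freqp = average_frequencies(x)
print(round(freqz), round(freqp))
```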
III. RESULTS AND CONCLUSION

Figure 3 shows a 1.2 s EEG from a normal subject. The figure was made from a screen capture of an oscilloscope connected after the fifth-order Butterworth filter. Each horizontal division corresponds to 0.1 s and each vertical division corresponds to 4.4 μV of the original EEG. During this recording time the display showed 5 Hz for the average zero-crossing frequency and 19 Hz for the average peak frequency. The device is now ready for clinical testing on patients.

Fig. 2 Flow chart of the processing algorithm
Fig. 3 A 1.2 s sample recording of a bi-frontal EEG from a normal subject (4.4μV and 0.1 s per division)
ACKNOWLEDGEMENTS The authors would like to express their thanks to Mr. Eric Daine for help in the physical construction of the device.
Author: Edward M. O'Brien
Institute: Mercer University
Street: 1400 Coleman Ave
City: Macon, GA
Country: USA
Email: [email protected]
REFERENCES
1. Elie M, Rousseau F, et al. (2000) Prevalence and detection of delirium in elderly emergency department patients. CMAJ 163(8):977-981
2. Alagiakrishnan K, Blanchette P (2009) Delirium. http://emedicine.medscape.com/article/288890-overview
3. Mendez M, McMurtray M (2008) Delirium. In: Bradley WG, Daroff RB, Fenichel G, Jankovic J (eds) Bradley's Neurology in Clinical Practice. Butterworth-Heinemann, Oxford, UK
4. Cummings JL (1983) Treatable dementias. Adv Neurol 38:165-83
5. Webster J (2010) Medical Instrumentation: Application and Design. Wiley
A Simple Structural Magnetic Resonance Imaging (MRI) Method for 3D Mapping between Head Skin Tattoos and Brain Landmarks

Mulugeta Semework

State University of New York, Downstate Medical Center, Brooklyn, USA
Abstract— A successful brain surgery requires pre-surgery localization of various brain areas. An accurate craniotomy, which gives good access to the brain area of interest, is needed, and depends on mathematically establishing the relationship between head landmarks and structural magnetic resonance image (MRI) scans. While typical stereotactic procedures rely upon external cranial landmarks and standardized atlases for localization of subcortical neural regions, visualization of the internal morphology of the brain in vivo can be achieved by MRI. Our lab, when possible, also uses MRIs of our post-surgical monkeys to obtain precise information on the exact placement of implanted microwires. Our stereotactic instrument and the head-posts on the monkeys are compatible with a magnetic resonance unit, and we have developed a model to analyze the magnetic resonance imaging results and calculate 3D mappings between external landmarks (skin tattoos on the primates' heads) and brain areas of interest. This allowed us to overcome the limitations, inaccuracies, and cost prohibitions of traditional stereotactic methods and helped us achieve reliable localization of subcortical targets in the monkey brain.

Keywords— Structural MRI, Structural MRI image analysis, brain structure mapping, Stereotactic method, neurosurgery.
I. INTRODUCTION

With the increased ease, power, and accessibility of magnetic resonance imaging (MRI), especially the high resolution of T1-weighted structural MRI [1], there has been growing interest in using this technology to study brain structure, function, development, and pathologies [2]. In what is now a very common practice of implanting primates with microelectrode arrays (MEAs) for various research goals, a common problem arises from the inherent variability of brain structures and skull anatomy between different subjects. Since MRI is a non-invasive method that capitalizes on the complex mosaic across the cortical sheet [3], much of this problem can be solved by taking individual MRIs pre-surgery, mapping the structures of interest, and using the established coordinates during surgery. There is variability between individuals in the pattern of brain area folding and in the shape, size, and relative locations of cortical areas
[3]. We and others observe individual differences in brain anatomy even though the overall organization, and the relative locations of structures with respect to each other, stay the same. It is very common to find errors in subjective guesses of the location of a brain structure made from skull topography alone. Moreover, as standardized atlases are generally used for localization of subcortical neural regions [4], a problem arises from such poorly informed assumptions about the location of underlying brain structures, and it is not uncommon to make a misplaced craniotomy. There is thus a need for a method for reconstructing the areas of interest and describing the relationships within a reasonably acceptable mathematical error. This short paper discusses a recently developed method for expressing relationships between surface markers, such as tattoos on the head skin, and underlying major brain structures.
II. METHODS

A. MRI Procedure

Monkeys (Macaca radiata) are anesthetized in their home cage with an intramuscular injection of Ketamine (10-20 mg/kg). We found that most of our monkeys remain anesthetized for the duration of the scanning with this drug alone. For the MRI procedure, the anesthetized monkeys are transported to the SUNY Downstate Medical Center (DMC, also known as "University Hospital of Brooklyn") Department of Radiology. In the facility, after their heads are stabilized with earbars, the monkeys are placed into the scanner chamber, and their heads are fitted inside a 16-in. head coil. Monkeys remain anesthetized during the MRI procedure, if needed, with a supplemental injection of Ketamine. Since the procedure takes only 30-50 minutes, the first anesthesia injection is generally effective in maintaining stillness inside the machine. MRIs of the brains are acquired on a Magnetom Symphony Maestro Class scanner; the following parameters are for a typical scan and can vary between monkeys and scans [2]. T1-weighted 3D MPRAGE MR images are acquired through the entire brain using TR = 1,500 ms and
TE = 3.04 ms with no echo train. Scan acquisition time is approximately 10 min. For each monkey, three signal averages were acquired. Slices are obtained as 0.5-mm-thick contiguous sections with a matrix size of 256 × 256 and a field of view of 128 mm × 128 mm, resulting in a final voxel size of 0.5 mm × 0.5 mm × 0.5 mm. After being transported back to their cages, the monkeys are allowed to completely recover from the effects of the anesthesia. All procedures are done per SUNY DMC's animal research regulations and the Joseph T. Francis lab protocols.
B. Code and User Interface

The image metadata from a DICOM file series is made into a structure, and a 3D array of the images is generated and previewed using a MATLAB tool [5]. All of the analysis is done by a novel MATLAB function that imports the DICOM images and performs the preprocessing and detailed calculations. Currently, there is a tested command-line version of the code. The most recent version is designed for a user with no MATLAB programming skills and is still being optimized. In short, it has a Graphic User Interface (GUI) used as a front panel to make option selections, such as importing slides, color maps, edging methods, etc., which makes the analysis easy and user friendly (Fig. 1).
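An equivalent import step can be sketched outside MATLAB; the following illustrative Python version uses the pydicom package (not the tool cited in [5]), and the folder path, file extension, and presence of the position/spacing tags are assumptions about the export format.

```python
import numpy as np
import pydicom
from pathlib import Path

def load_series(dicom_dir):
    """Read a directory of single-slice DICOM files into a 3-D array,
    ordered by slice position (a stand-in for the MATLAB import step).
    Assumes each file carries ImagePositionPatient, PixelSpacing, and
    SliceThickness tags."""
    slices = [pydicom.dcmread(str(p)) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices], axis=-1)
    pw = float(slices[0].PixelSpacing[0])    # in-plane pixel width [mm]
    th = float(slices[0].SliceThickness)     # slice thickness [mm]
    return volume, pw, th

# volume, pw, th = load_series("scan_folder")  # hypothetical path
```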
Fig. 1 Graphic User Interface for the new MRI mapping method (a "Canny"-edged brain scan shown)

C. Image Processing

To facilitate the subsequent procedures, the scans are first enhanced, cropped to a region of interest (ROI), and then appropriately edge-detected, a subjective process that also depends on the quality of the scans and the resulting binary images. Depending on several factors, the threshold for edging can be set manually.

Fig. 2 Sobel-edged and selected brain, skull, and marker surfaces

As shown in Fig. 2 (which shows already highlighted targets, in color), the edged scan is now ready to be point-and-clicked, or, in the case of the new code (GUI shown in Fig. 1), an ROI is first manually selected and an automated tracking of this object throughout all the scans follows. There are options for deciding how many connected points to consider in the analysis, which colors to use, etc., in the graphical display of the process.

D. Transformations

The user has the option of selecting skull or brain landmarks to be used for this purpose. The outside markers we use, skin tattoos, are identified in the MRI scans from vitamin E tablets we affix to the skin during the scanning (yellow ellipses in Fig. 3).
Once the outside and inside points are set, they are tracked through all, or a few selected, scans that are known to contain them. The absolute distance between any two markers (mDist) in the scan's 3-D space is calculated as follows:

mDist{x, y, z} = √( (|x1 − x2|² + |y1 − y2|²) · pw² + (|s1 − s2| · th)² )
495
IV. CONCLUSIONS For the sole purpose of making a craniotomy above the location of the brain structure we are interested in, we found this method to be very useful and dependable. It eliminated the need to widen craniotomies or make new ones to correct errors. The current code refinement is expected to make this method available for wide use and help improve structural mapping success. A broader application of this method is envisioned for structures that are hidden from view and reach or are too topologically convoluted.
III. RESULTS AND DISCUSSION
ACKNOWLEDGMENT
Out of the need to make a simple craniotomy that lands on the brain structures that we are interested in implanting multielectrode arrays, the above method was developed. It is designed to map physical markers, tattoos on the monkeys’ heads, and the underlying major brain landmarks, such as the central sulcus. As shown in Fig. 3 (left panel), a pencil sketch on the skull of the predicted location of the central sulcus (diagonal trace) corresponded well with the actual finding after the craniotomy.
I thank everyone in the Joseph T. Francis lab at SUNY DMC for their unreserved support in taking the MRI scans and comments in developing this method. Westley Hayes, thank you so much for your professional and speedy manuscript editing.
REFERENCES 1. Saad Z, Glen D, Chen G, et al. (2009) A new method for improving functional-to-structural MRI alignment using local Pearson correlation. NeuroImage 44, 839–848 2. Smith S., Jenkinson, M, Woolrich, M, et al. (2004) Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage. 23 (2004) S208–S219 3. van Essen, D, Drury, H, Joshi, S, and Miller, M (1998) Functional and structural mapping of human cerebral cortex: Solutions are in the surfaces. Proc. Natl. Acad. Sci. USA. Vol. 95, pp. 788–795 4. Saunders, R, Aigner, T & Frank, J (1990). Magnetic resonance imaging of the rhesus monkey brain: use for stereotactic neurosurgery. Exp. Brain Res. 81, 443–446 5. Balkay, L (2005) DICOM Reader at http://www.mathworks.com/matlabcentral/fileexchange/ 7926-dicomdir-reader
Fig. 3 Craniotomy sketch and matching central sulcus As a proof of individual differences, we tried using the coordinate system and marker distances that were generated from one monkey on another one and the craniotomy was off by at least half a centimeter. One challenge we faced was the lower placement on the side of the face of some of
Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 32
Mulugeta Semework State University of New York, Downstate Medical Center 450 Clarkson Ave, box 31 Brooklyn USA
[email protected]
Frame Potential Classification Algorithm for Retinal Data

John J. Benedetto1, Wojciech Czaja1, and Martin Ehler1,2

1 Norbert Wiener Center for Harmonic Analysis and Applications, Department of Mathematics, University of Maryland, College Park, MD 20742
2 National Institutes of Health, Eunice Kennedy Shriver National Institute of Child Health and Human Development, PPB/LIMB/SMB, Bethesda, MD 20892, USA
Abstract— State-of-the-art dimension reduction and classification techniques in multi- and hyper-spectral image analysis depend primarily on representing the data in terms of orthonormal eigendecompositions of the associated kernel matrices. To better capture the non-orthogonal nature of spectral classes in retinal imaging, we replace the orthonormal bases with frame representations of these kernels. The frames are obtained by means of minimizing the frame potential energy. We also investigate the role of adding various types of penalty terms to the frame potential, in order to promote the sparsity of the aforementioned representations.

Keywords— Multispectral analysis, retinal imaging, frames, frame potential, kernel methods, sparsity.

I. Introduction

Many retinal diseases are associated with the distribution of fluorescent photochemicals that accumulate within the human retina. Age-related macular degeneration (AMD), for instance, originates following the increased accumulation of fluorescent photoproducts within the retinal pigment epithelial (RPE) cells at the back of the retina. The analysis of multispectral retinal images offers the prospect of sensitively mapping fluorophores and chromophores within the retina, as well as monitoring the dynamics of early changes. Detection and classification of these molecular photoproducts that accumulate within the retina are important for the evaluation of early drug interventions. Both the collected multi-spectral retinal data and the theoretical spectral signatures suggest that different classes of chemicals in RPE are almost never orthogonal at the level of the given spectral data. Thus, popular kernel eigenmap methods, such as, e.g., Laplacian Eigenmaps [1], Locally Linear Embedding [2], and Isomap [3], are not optimal tools for this type of analysis, because eigenmap methods provide processed orthogonal decompositions. Non-orthogonal decompositions allow for greater flexibility in representing mixtures and pure elements. Therefore, for given data and kernel, we present an alternative to eigenmap methods. The non-orthogonal nature of the regions and substances to be classified leads us to the use of frames. By using a frame potential energy algorithm [4], combined with an appropriately chosen penalty term, we obtain sparse frame representations to replace the eigenmaps. Under suitable conditions, the minimization of the penalty term is analogous to recovering a frame with a sparse set of coefficients. This, in turn, implies separation of the frame elements. This algorithm is applied to images of National Eye Institute study patients with retinal pathology, for the purpose of detecting and classifying mixed fluorophore distributions.

II. Frame theoretic approach

A. Frames

A frame for a Hilbert space H is a collection of vectors $\Phi = \{\varphi_i : i \in I\} \subset H$ for which there exist constants $0 < A \le B < \infty$ such that for each $f \in H$,

$$A\|f\|^2 \le \sum_{i \in I} |\langle \varphi_i, f \rangle|^2 \le B\|f\|^2. \qquad (1)$$

Constants A and B which satisfy (1) are called lower and upper frame bounds of $\Phi$, respectively. Optimally chosen values of A and B are referred to as the optimal frame bounds of the frame $\Phi$. When $A = B$, the frame $\Phi$ is referred to as a tight frame. In particular, orthonormal bases are frames. However, frame elements need not be mutually orthogonal, and frames, as redundant systems, provide a multitude of different representations. In finite dimensional Hilbert spaces, the notion of a frame becomes intuitively simple: let $s \ge d$; then $\Phi = \{\varphi_i : i = 1, \ldots, s\}$ is a frame for $\mathbb{F}^d$ (where $\mathbb{F}$ denotes the field of real or complex numbers) if and only if it is a spanning system for $\mathbb{F}^d$. However, the spanning property does not reflect the usefulness of frames for representation and stability in noisy environments, see, e.g., [5]. A frame that is finite, tight, and each of whose elements has unit norm, $\|\varphi_i\| = 1$, is known as a finite unit norm tight frame, or a FUNTF.
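Numerically, the optimal frame bounds of a finite frame are the extreme squared singular values of the synthesis matrix whose columns are the frame vectors, which gives a quick check of the definitions above. A minimal sketch, not from the paper:

```python
import numpy as np

def frame_bounds(Phi):
    """Optimal frame bounds (A, B) for a frame given as a d x s matrix whose
    columns are the frame vectors: A = sigma_min^2 and B = sigma_max^2."""
    sigma = np.linalg.svd(Phi, compute_uv=False)
    return sigma[-1] ** 2, sigma[0] ** 2

# Three equiangular unit vectors in R^2 (the "Mercedes-Benz" frame), a FUNTF
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
Phi = np.vstack([np.cos(angles), np.sin(angles)])   # shape (2, 3), unit-norm columns
A, B = frame_bounds(Phi)
print(A, B)   # both ~ s/d = 1.5, so the frame is tight
```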
The theory of frames was initiated by Duffin and Schaeffer [6] in 1952, as part of developments in the area of nonharmonic Fourier series. The significant role of frame theory in signal processing was discovered in the 1980s, see, e.g., [7].

B. Frame potential

For the finite dimensional Hilbert space $\mathbb{R}^d$, let $s \ge d$, and let $\Phi = \{\varphi_i : i = 1, \ldots, s\} \subset \mathbb{R}^d$ be a FUNTF for $\mathbb{R}^d$. Therefore, $\Phi \in S^{d-1} \times \ldots \times S^{d-1}$, and, following [4], we consider the total frame potential energy (TFP), defined as

$$TFP(\Phi) = \sum_{k=1}^{s} \sum_{l=1}^{s} |\langle \varphi_k, \varphi_l \rangle|^2. \qquad (2)$$

The notion of the frame potential is modeled on the notions of Coulomb force and electric potential.

Theorem 1 (Benedetto, Fickus (2003)). (a) Every local minimizer of the frame potential TFP is also a global minimizer. (b) If $s \le d$, the minimum value of the frame potential is $s$ and the minimizers are precisely the orthonormal sequences in $\mathbb{R}^d$. (c) If $s \ge d$, the minimum value of the frame potential is $s^2/d$ and the minimizers are precisely the FUNTFs in $\mathbb{R}^d$.

Moreover, we observe that if $\|\varphi_i\| = 1$ for $i = 1, \ldots, s$, then $\Phi$ is a FUNTF for $\mathbb{R}^d$ if and only if

$$y = \frac{d}{s} \sum_{k=1}^{s} \langle y, \varphi_k \rangle\, \varphi_k \quad \text{for all } y \in \mathbb{R}^d.$$

Given a data space Y consisting of N vectors $y_n$ in $\mathbb{R}^d$, our goal is to construct a FUNTF $\Phi$ with the property that each frame element is associated to only one type of classifiable material. We shall achieve this goal by promoting the sparsity of the frame representations. As mentioned above, each frame yields infinitely many different representations for a given dataset. The representations which have physical meaning may be associated with the notion of sparsity of the frame expansion coefficients. As such, we propose to consider the following minimization problem:

$$\min\left\{ TFP(\Theta) + P(Y, \Theta) \;:\; \Theta \in S^{d-1} \times \ldots \times S^{d-1} \right\}, \qquad (3)$$

where $P(Y, \Theta)$ is a penalty term that depends on the data set Y and the set of unit vectors $\Theta$. There are several different ways to quantify the penalty term. One approach to sparsity is by considering the separation of the frame elements. For $\Theta = \{\theta_k : k = 1, \ldots, s\} \in S^{d-1} \times \ldots \times S^{d-1}$, we let

$$p(\theta_n) = \sum_{m=1}^{N} |\langle y_m, \theta_n \rangle|, \quad n = 1, \ldots, s.$$

Then the total separation of the frame coefficients for given data Y is defined as

$$TS(Y, \Theta) = \min_{k \neq l} |p(\theta_k) - p(\theta_l)|,$$

and $P_1(Y, \Theta) = TS(Y, \Theta)^{-1}$. Another approach would be to define a penalty term of the form

$$P_2(Y, \Theta) = \sum_{j=1}^{N} \sum_{i=1}^{s} |\langle y_j, \theta_i \rangle|.$$

Minimization of $P_2(Y, \Theta)$ is equivalent to minimizing the $\ell^1$ energy of Y along the frame elements. As such, following [8], we observe that under suitable conditions this is equivalent to recovering a frame $\Phi$ with a sparse set of coefficients $\{\langle y_j, \varphi_i \rangle\}$.
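As a concrete numerical illustration of the minimization problem (3) with the penalty $P_2$, the sketch below uses plain projected gradient descent on the product of unit spheres. The optimizer, step size, penalty weight, and iteration count are illustrative assumptions; the paper does not specify how the minimization is carried out.

```python
import numpy as np

def tfp(Theta):
    """Total frame potential, eq. (2): sum of squared pairwise inner products."""
    G = Theta @ Theta.T                       # s x s Gram matrix
    return np.sum(G ** 2)

def penalty_p2(Y, Theta):
    """Penalty P2: l1 energy of the data Y along the frame elements."""
    return np.sum(np.abs(Y @ Theta.T))

def minimize_objective(Y, s, lam=0.1, step=1e-3, iters=2000, seed=0):
    """Minimize TFP(Theta) + lam * P2(Y, Theta) over Theta in S^{d-1} x ... x S^{d-1}
    by gradient steps followed by projection back onto the unit spheres."""
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((s, Y.shape[1]))
    Theta /= np.linalg.norm(Theta, axis=1, keepdims=True)
    for _ in range(iters):
        grad = 4.0 * (Theta @ Theta.T) @ Theta        # gradient of TFP
        grad += lam * np.sign(Y @ Theta.T).T @ Y      # subgradient of P2
        Theta -= step * grad
        Theta /= np.linalg.norm(Theta, axis=1, keepdims=True)  # project to spheres
    return Theta

Y = np.random.default_rng(1).standard_normal((100, 3))
Theta = minimize_objective(Y, s=5)
print(tfp(Theta))   # approaches s^2/d = 25/3 when the penalty weight is small
```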
III. Algorithm description

In [9] we proposed a new dimension reducing algorithm for hyper-spectral satellite imaging data by exploiting the synergy between endmembers and kernel based classification schemes. The unifying tool was the notion of non-orthogonal overcomplete frame representations. There are two major reasons for using frames. On one hand, initially overestimating the number of classes allows for greater flexibility in representing mixtures and pure elements. On the other hand, empirical evidence strongly suggests that distinct classes in RPE are not orthogonal to each other. In the present work we propose a new method for the selection of data representation systems for the purpose of optimal classification of multi-spectral retinal data. This optimality is achieved by increasing the sparsity of the frame coefficients. Given a multi-spectral data set $X = \{x_i\}_{i=1}^{N} \subseteq \mathbb{R}^D$ consisting of N pixels in D dimensions, we propose the following algorithm for processing X: 1. Kernel Application, 2. Frame Potential with Penalty, 3. Frame Coefficients. In step 1, as a preprocessing step, we use kernel eigenmap methods to reduce the dimension of the data set X, thus creating a low dimensional data set $Y = \{y_i\}_{i=1}^{N} \subseteq \mathbb{R}^d$ that preserves the local geometry of X. Y consists of N data points, one for each data point of X, each of which is d-dimensional. We assume $d \le D$. The purpose of step 2 is to select a frame $\Phi$ for Y in such a way that each frame element is associated with only one classifiable material. We achieve this by selecting maximally separated frames, which, in turn, is done by combining the frame potential with a penalty term. This creates a frame by which we can represent the low dimensional data points Y. Frames provide overcomplete representations, which gives us flexibility in representing mixtures and pure elements. Step 3 then computes the frame coefficients of the data points Y in terms of the maximally separated frame $\Phi$. There are infinitely many such frame representations; we highlight certain ones that are well suited for classification purposes. Our multi-spectral data set X is obtained from autofluorescence images from standard fundus cameras with inserted interference filters; see Section III.D for a detailed discussion and results.
A. Kernel Application

Our first step is to reduce the complexity of the original data by mapping the data X to an appropriate low dimensional set Y. This is achieved by applying kernel eigenmap methods. The general nature of our framework allows for the use of any kernel eigenmap method, including, e.g., Laplacian eigenmaps [1], locally linear embedding (LLE) kernels [2], or Isomap [3]; however, in this work we focus on Laplacian eigenmaps. We diagonalize the resulting kernel K and select the d eigenvectors corresponding to the d most significant eigenvalues. Denote the j-th eigenvector by $v_j$, and let the i-th entry of $v_j$ be denoted by $v_j(i)$. The reduced dimension coordinates for the sampled points $x_i \in X$ are then given by:

$$y_i = (v_1(i), v_2(i), \ldots, v_d(i)) \in \mathbb{R}^d \quad \text{for all } i = 1, \ldots, N.$$

Fig 1 Color fundus image. Drusen appear as bright spots. Square marks the processed part of the image.

Kernel application (combined with a suitable re-indexing of the low dimensional coordinates) results in a set $Y = \{y_i\}_{i=1}^{N} \subseteq \mathbb{R}^d$, where $y_i$ is the new low dimensional representation of the original high dimensional data point $x_i \in X \subseteq \mathbb{R}^D$. We shall use data dependent frames $\Phi = \{\varphi_k\}_{k=1}^{s}$, $s > d$, for the representation of the dimension reduced data set Y ([5]):

$$y_i = \sum_{j=1}^{s} c_{i,j}\, \varphi_j \quad \text{for all } y_i \in Y.$$

B. Frame Potential with Penalty

The construction of a maximally separated frame $\Phi$ is a modification of the frame potential minimization technique from [4]. The total frame potential energy is defined in (2). The minimizers of the frame potential are FUNTFs. We increase the sparsity of the frame coefficients by combining the frame potential with a penalty term of the form

$$P_3(Y, \Phi) = \sum_{j=1}^{N} \sum_{i=4}^{s} |\langle y_j, \varphi_i \rangle|.$$

$P_3$ is a modification of the penalty term $P_2$ defined in Section II. This modification emphasizes that we expect the frame expansion coefficients of the given data to be concentrated on some 3 fixed frame components. There are other variants of this penalty term, which amount to assuming that each multispectral pixel can be sparsely represented by 3 different frame elements. That approach, however, would come with a significant increase in computational costs.

C. Frame Coefficients

Given a frame $\Phi = \{\varphi_i\}_{i=1}^{s}$ for $Y = \{y_i\}_{i=1}^{N}$, we shall find a set of coefficients $C = \{c_{i,j}\}$, $i = 1, \ldots, N$, $j = 1, \ldots, s$, that represents Y in terms of $\Phi$:

$$y_i = \sum_{j=1}^{s} c_{i,j}\, \varphi_j \quad \text{for all } i = 1, \ldots, N.$$

We propose two separate ways to find C. The first is based on the frame operator $S : \mathbb{R}^d \to \mathbb{R}^d$, which is:

$$Sy = \sum_{i=1}^{s} \langle y, \varphi_i \rangle\, \varphi_i \quad \text{for all } y \in \mathbb{R}^d.$$

For any frame $\Phi$, the frame operator S is invertible, and in fact gives the following representation:

$$y = \sum_{i=1}^{s} \langle y, S^{-1}\varphi_i \rangle\, \varphi_i \quad \text{for all } y \in \mathbb{R}^d.$$

The coefficients $c_{i,j} = \langle y_i, S^{-1}\varphi_j \rangle$, $i = 1, \ldots, N$, $j = 1, \ldots, s$, are called the canonical coefficients, and they minimize the $\ell^2$ energy of the coefficient set C. An alternative to the canonical coefficient set is to find sparse coefficient representations [8]. Such coefficients are found by minimizing their $\ell^p$ energy, where $0 < p \le 1$:

$$c_{i,\cdot} = \arg\min \|\tilde{c}\|_p \quad \text{subject to} \quad y_i = \sum_{j=1}^{s} \tilde{c}_j\, \varphi_j.$$

The minimization for $p < 1$, however, is a non-convex combinatorial optimization problem that is NP-hard in general. For $p = 1$ it can be solved in polynomial time by linear programming, cf. [8]. The $\ell^1$ minimization also matches our choice of the penalty term $P_3(Y, \Phi)$.

D. Application to Retinal Images

Increased accumulation of fluorescent photochemicals within the retinal pigment epithelial (RPE) cells is suspected to be a precursor of many retinal diseases. The very early progression of AMD is associated with high levels of cytotoxic photoproducts that induce RPE dysfunction, which initially has been observed clinically as increased backscattering of light from sub-RPE deposits (drusen) [10], [11].
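To illustrate the two coefficient computations of Section III.C, the sketch below derives the canonical (minimum $\ell^2$) coefficients via the frame operator and the sparse (minimum $\ell^1$) coefficients via linear programming. It assumes SciPy's linprog; all names are illustrative and not taken from the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def canonical_coefficients(Phi, Y):
    """Canonical coefficients c_{i,j} = <y_i, S^{-1} phi_j>.
    Phi is d x s (columns are frame vectors), Y is N x d (rows are data points)."""
    S = Phi @ Phi.T                  # frame operator as a d x d matrix
    dual = np.linalg.solve(S, Phi)   # columns are the dual frame S^{-1} phi_j
    return Y @ dual                  # N x s coefficient matrix

def l1_coefficients(Phi, y):
    """min ||c||_1 subject to Phi c = y, via the standard LP split c = u - v, u, v >= 0."""
    d, s = Phi.shape
    res = linprog(c=np.ones(2 * s),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * s))
    return res.x[:s] - res.x[s:]

rng = np.random.default_rng(0)
Phi = rng.standard_normal((3, 5))        # a random frame of 5 vectors in R^3
y = 2.0 * Phi[:, 1]                      # exactly representable by one frame element
print(canonical_coefficients(Phi, y[None, :]))   # spread over all five elements
print(l1_coefficients(Phi, y))                   # typically concentrated on element 1
```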
Fig 2 Frame coefficient maps.

Multi-spectral fluorescence imaging offers the prospect of obtaining high resolution maps of fluorophores and chromophores within the retina. The image sets were obtained from standard fundus cameras with added selective interference filter sets. Aligned pixels are considered as vectors in $\mathbb{R}^D$ forming our data set X. Due to scattered fluorescence originating from non-local sources, each data vector inherits a strong background component. Our goal is to increase the classification rate by considering sparse frame expansions, see Figures 1 and 2.

IV. Conclusions

We have derived and implemented an algorithm for classification of multi-spectral data. This algorithm is based on the principle of minimization of the frame potential with a penalty term, which increases the sparsity of the expansion coefficients. We have applied this technique to analyze multi-spectral retinal images. Our technique allows us to detect artifacts by detecting the multi-spectral pixels which cannot be sparsely expressed in terms of the frames which minimize the frame potential with the penalty $P_3$. Therefore, the artifacts which are coherently present in many significant coefficient maps correspond to drusen, see Figure 2. This is supported by the fact that the accumulated fluorescent photoproducts are known to be mixtures of many components, and, as such, should not have sparse representations.

Acknowledgments

This material is based in part upon work supported by NSF through grant CBET0854233, by NGA through grant HM 15820810009, and by ONR under contract N000140910144. The retinal data is courtesy of the National Eye Institute. The authors gratefully acknowledge Dr. Robert Bonner (NIH) for many helpful discussions and suggestions and Dr. Matt Hirn (Yale) for computational support.

References

[1] M. Belkin and P. Niyogi, "Laplacian eigenmaps and spectral techniques for embedding and clustering," Advances in Neural Information Processing Systems, vol. 14, pp. 585–591, 2002.
[2] S. Roweis and L. Saul, "Nonlinear dimension reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
[3] J. Tenenbaum, V. de Silva, and J. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319–2323, 2000.
[4] J. J. Benedetto and M. Fickus, "Finite normalized tight frames," Advances in Computational Mathematics, vol. 18, pp. 357–385, 2003.
[5] J. Kovačević and A. Chebira, "Life beyond bases: The advent of frames," IEEE Signal Processing Magazine, 2007.
[6] R. J. Duffin and A. C. Schaeffer, "A class of nonharmonic Fourier series," Trans. Amer. Math. Soc., vol. 72, pp. 341–366, 1952.
[7] I. Daubechies, A. Grossmann, and Y. Meyer, "Painless nonorthogonal expansions," J. Math. Phys., vol. 27, no. 5, pp. 1271–1283, 1986.
[8] E. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Information Theory, vol. 51, pp. 4203–4215, 2005.
[9] J. J. Benedetto, W. Czaja, J. C. Flake, and M. Hirn, "Frame based kernel methods for automatic classification in hyperspectral data," Proc. IEEE International Geoscience and Remote Sensing Symposium, July 2009.
[10] S. M. Meyers, M. A. Ostrovsky, and R. F. Bonner, "A model of spectral filtering to reduce photochemical damage in age-related macular degeneration," Trans. Am. Ophthalmol. Soc., pp. 83–95, 2004.
[11] S. Schmitz-Valckenberg, M. Fleckenstein, H. P. N. Scholl, and F. G. Holz, "Fundus autofluorescence and progression of age-related macular degeneration," Surv. Ophthalmol., pp. 96–117, 2009.
Raman-AFM Instrumentation and Characterization of SERS Substrates and Carbon Nanotubes

Q. Vu1, M.H. Zhao1,2, E. Wellner1, X. Truong1, P.D. Smith1, and A.J. Jin1,*

1 Laboratory of Bioengineering and Physical Science, National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Bethesda, MD 20892, USA
2 Building & Fire Research Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

Abstract— Atomic force microscopy (AFM) is a powerful tool for structural determination at sub-nanometer resolution and nano-mechanical characterization with pico-Newton molecular force sensitivity. Raman spectroscopy, on the other hand, can provide fingerprint chemical information of samples with sub-micron resolution. Through in-house engineering and integration of commercial components, we have developed a multi-modality biological AFM and Raman-spectroscopic instrument featuring a number of unique advantages, including an independent closed-loop X-Y scanning stage and Z-piezo control, high-performance optical gratings, thermoelectrically cooled and electron-multiplying charge-coupled detectors (EMCCD), active vibration inhibition, and biological environmental controls. With this new instrument, we have obtained correlated surface enhanced Raman spectroscopy (SERS) mapping and AFM topographic images of a home-grown gold-on-mica substrate with putative SERS hot spots. We have also investigated carbon nanotubes (CNTs) via AFM tip enhanced Raman spectroscopy (TERS), where the enhancement factor is quantified from the difference Raman spectrum of the CNT's G-band signature between an engaged and a retracted state of a gold-coated AFM tip. The developed instrument can obtain simultaneous nanoscale structural and chemical information under controlled environmental conditions, a capability that is critical for meaningful interpretation of many anticipated biomedical applications.

Keywords— Atomic Force Microscope (AFM); Raman Spectroscopy; Near-field Optics; Tip-Enhanced Raman Spectroscopy (TERS); Surface-Enhanced Raman Spectroscopy (SERS); Carbon Nanotubes (CNTs).
I. INTRODUCTION Atomic force microscopy (AFM) is becoming increasingly useful for studying ultra-structure and functional properties of biological molecules, tissues, and material samples with nanometer spatial resolution and pico-Newton force sensitivity (e.g. [1-4]). Vibrational spectroscopy, such as Raman spectroscopy and Fourier transform infrared spectroscopy, is well known to yield chemical identification by measuring the characteristic transitions between a molecule’s internal energy levels (e.g. [5-10]). To integrate the
power of both technologies (e.g. [11]), we have designed a multi-functional instrument that combines AFM, optical imaging, and optical spectroscopic mapping measurements in situ (Fig. 1) toward nano-characterization of biomedical and non-biological samples and systems (examples of the latter are reported below).
II. MATERIALS AND METHODS

A. Instrument Components and Integration

Our instrument consists primarily of a research AFM (model XE120, Park Systems Corp, Suwon, South Korea) and a Raman spectrometer (LabRam, Horiba Jobin Yvon, Edison, NJ). The AFM and Raman subsystems are coupled together via an inverted optical microscope (IX 70, Olympus, Japan), and all are supported by an active vibration isolation (AVI) platform (AVI-350, Herzan, Laguna Hills, CA) (Fig. 1). The inner AFM base unit and the optical microscope sample stage are enclosed in a light-tight, life-science environmental enclosure with temperature, CO2, and humidity controls (Precision Plastics Inc, Beltsville, MD). A home-built outer enclosure with metal frames and dark room curtains further isolates the combined instrument from ambient conditions. System integration comprised several steps, with assembly of all components occurring on site. The control hardware and software for both the Raman spectrometer and the AFM were integrated into a single Windows® XP workstation. Alignment of the AFM cantilever above the sample plane with the Raman laser focus from below is a key task, aided by two CCD cameras built into the system; optical alignment precision of a few micrometers is achieved routinely. The final data link between the AFM and the optical subsystem is established via added-on DLL patches to the original instrument software from the component manufacturers (chiefly, Horiba JY). To accommodate optical imaging and spectroscopic and AFM measurements of biological samples, the environmental controls are independently operated and validated to provide both CO2 and
humidity control, and thermostatically controlled heating of the entire AFM head unit. The AVI system features active isolation over a frequency range of 1.2 to 200 Hz and passive isolation at frequencies above 200 Hz, with a rated transmissibility above 10 Hz of less than 2% and a full payload support of 628 kg.

Fig. 1 Schematic of the newly integrated Raman-AFM instrument for simultaneous acquisition of AFM images and Raman spectroscopy maps of live cells and single molecules, featuring a sample scanning AFM, inverted research microscope, laser Raman spectrometer, and full environmental controls. Total internal reflection fluorescence (TIRF) imaging via an external excitation laser and EMCCD camera attached to the microscope side port is also implemented

B. Samples and Measurements

The gold-on-mica substrates were home-made by using an in-house evaporator (Auto 306, Edwards, Wilmington, MA), operating under high vacuum with a liquid N2 trap and a range of evaporation conditions of gold (99.99% purity) over freshly-peeled mica sheets (Ted Pella, Redding, CA). The second sample, of carbon nanotubes, primarily single-wall CNTs (SWNTs), was purchased from CheapTubes (Brattleboro, VT) at a concentration of 0.1 mg/mL. The tubes were dispersed in N-methyl-pyrrolidone (NMP) solvent, sonicated for one hour, and spin-coated on a thin sheet of freshly-peeled mica.

Prior to measurement, the Raman excitation laser beam (λ = 633 nm, 20 mW) was aligned through a dichroic beam splitter and other optical elements, and focused onto the sample surface by an objective (40x, NA 0.75, Olympus, Japan) from below. A bare silicon AFM tip for the gold-on-mica substrate, or a gold-coated AFM tip (e.g. NSC15 or NSC15/Cr-Au, nominal spring constant of 0.6 N/m, Mikromasch, Wilsonville, Oregon) for CNT samples, was selected, positioned near the focus of the beam, and maintained above the sample surface at a set distance by the integrated AFM feedback mechanisms. The Raman signals were collected through the same objective and separated from the excitation laser by a long-pass filter. The Stokes signals were recorded by a thermoelectrically-cooled spectroscopic CCD over a programmable range of wave numbers. Translation of the coarse microscope stage carrying the X-Y AFM nano-scanner allows comparison of sample regions with features of interest in both AFM and Raman spectra and permits co-localization for refined measurements. Raman-AFM hyper-spectroscopic imaging is obtained by raster scanning the AFM tip on the sample surface and simultaneously recording the pixel-by-pixel Raman spectrum. ImageJ (ver 1.4x, NIH, Bethesda, MD, http://rsb.info.nih.gov/ij/) with user macros was used for data analysis and visualization.

III. RESULTS AND DISCUSSION

A. Mapping SERS Substrate for Enhancement Hot Spots

We have investigated home-grown gold thin-films deposited onto fresh mica surfaces, in search of ideal substrates for AFM and for surface enhanced Raman scattering (SERS) sample support [12, 13]. In Fig. 2, we surveyed the SERS enhancement zones on a large gold-on-mica substrate both optically and topographically with AFM, and focused our characterization on gold islands and gold-film edge features. We observed that the Raman laser is partially transmitted through a thin mica sheet and the thinner parts of the gold film (less than ~500 nm). In situ AFM images (Fig. 2) show a gold island about 10 μm in diameter and nearly 2 μm in maximum height. The lowest "basins" (low areas, Fig. 2A with cross sections in Fig. 2C) remained semi-transparent optically, indicating very little gold deposition. The corresponding Raman maps of the same area correlate the topological features with the chemical characterization from the Raman peaks (Fig. 2B, D & E). In regions where the main silicon peak of the AFM tip at 525 cm¯¹ (Fig. 2D) was observed (green areas in Fig. 2B), the surface appeared semi-transparent and the gold coating measured less than ~200 nm. The intensities of the main mica peak at 265 cm¯¹ (blue areas in Fig. 2B) show the opposite trend to the detected silicon peak, with the non-transparent gold coating (of thickness larger than ~200 nm) enhancing the mica
signal quite uniformly by about four fold. We detected broad band Raman intensity from the gold coating of varying thickness (Fig. 2D, with total intensity from 200-1000 cm¯¹ plotted as a three-dimensional profile in Fig. 2E), which is greatly enhanced near the edge of the main island, i.e., a SERS hot spot involving varying states of gold crystallization and grain size. The fact that the continuous gold film coating enhances the mica peaks by about four fold without adding broad band Raman intensities (Traces 1 & 2 in Fig. 2D) suggests that the gold deposition is in a non-crystalline state, as we did not employ any annealing protocol [14].
Fig. 2 AFM topographic and Raman spectroscopic maps showing features formed by evaporated gold deposition on a fresh mica surface. The topographic map (A) and three cross-sections (C) reveal gold islands of ca. 2 µm in height and 10 µm in diameter. The Raman map (B) is composed of 50x60 pixel spectral arrays, four of which are shown in (D) for the same substrate region as in (A)-(C), with 1 (blue), 2 (green) and 3a,b (of two neighboring pixels) marking the locations 1, 2, 3 in (B), and the intensity scale for spectrum 3a suppressed 2x relative to 1, 2 and 3b. The RGB components of (B) are blue, derived from the mica signature near 265 cm¯¹ (between blue dotted markers in D); green, from the silicon signature near 525 cm¯¹ (between green dotted markers in D); and red, from the gold signature of the broad-band intensity from 200-1100 cm¯¹ (between the red dotted markers in D). The gold signature, the red component in (B), is highly localized near the two main gold islands at locations 3 and 4, as in the three-dimensional presentation (E)
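As an aside on how such a composite map can be generated, the sketch below integrates each pixel's spectrum over the three signature bands named in the caption and stacks the results into RGB channels. This is a hypothetical reconstruction of the procedure, not the authors' macros; the exact integration windows for the mica and silicon peaks are assumptions.

```python
import numpy as np

def band_intensity(cube, wn, lo, hi):
    """Integrate a spectral cube (rows x cols x bands) over [lo, hi] cm^-1."""
    mask = (wn >= lo) & (wn <= hi)
    return cube[:, :, mask].sum(axis=2)

def rgb_raman_map(cube, wn):
    """Compose an RGB map: red from the broad gold band, green from the Si peak,
    blue from the mica peak; each channel normalized to [0, 1]."""
    red = band_intensity(cube, wn, 200, 1100)    # broad-band gold signature
    green = band_intensity(cube, wn, 500, 550)   # silicon peak near 525 cm^-1 (window assumed)
    blue = band_intensity(cube, wn, 240, 290)    # mica peak near 265 cm^-1 (window assumed)
    return np.dstack([c / c.max() for c in (red, green, blue)])

# Example with synthetic data: a 50 x 60 pixel map of 1024-point spectra
wn = np.linspace(100, 1200, 1024)
cube = np.random.rand(50, 60, 1024)
print(rgb_raman_map(cube, wn).shape)   # (50, 60, 3)
```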
B. TERS of Carbon Nanotubes

We have also chosen to study CNTs deposited on mica, primarily because of the vast number of Raman scattering studies that have characterized their electronic and bond-vibration properties [6, 15, 16]. We are interested in the
large enhancement of the Raman intensity resonant with the excitation laser energy and with near-field laser energy via SERS and tip-enhanced Raman scattering (TERS) [17, 18].
Fig. 3 (A) Topographical image of CNT bundles on mica. Scan area 10 x 10 μm. Maximum height of the bundles is approximately 20 nm. (B) Far-field Raman map corresponding to (A) upon laser excitation at 633 nm with a power of 20 mW. The characteristic Raman G band (at 1590 cm¯¹) was used for the spectral integration (scan area 10 x 10 μm, integration time 20 seconds per pixel, 32 x 32 pixels). (C) Difference spectra at point "a" in the topographical image in (A) between when the AFM tip was engaged (top insert) and when it was retracted (bottom insert). All are fitted with a Lorentzian line shape (light blue). G-band peaks are as follows: (a) 1540, 1554, and 1590 cm¯¹; (b) 1541, 1554, and 1590 cm¯¹. D-band peaks are as follows: (a) 1308 cm¯¹ and (b) 1308 cm¯¹. (c) shows the subtracted Raman spectrum between (a) and (b). G-band peaks are as follows: 1540, 1553, and 1590 cm¯¹

Fig. 3A-B shows a typical topographical profile and Raman map of the characteristic G band of SWNTs at 1590 cm¯¹, caused by the tangential stretching modes [6, 15, 16], for a low-coverage sample of CNTs and amorphous carbon particles. The colored outlines in Fig. 3 are placed to mark the high degree of correlation between the AFM and Raman images of CNTs, especially SWNTs, whose diameters range from 1.4 to 1.6 nm and are in resonance with our current laser excitation [7], and the relative lack of the G-band signature from amorphous carbon and other non-resonant CNTs. In Fig. 3C, the point spectra taken at the spot labeled 'a' in Fig. 3A-B show that the TERS component measures about 33%
of the total spectrum when the gold-coated AFM tip was directly engaged above the CNT. Because the enhancement zone radius is thought to be 2-10 nm [17, 18], while the far-field spot size is set by the optical resolution of ~420 nm (= λ/(2·NA)), the normalized local enhancement factor is best estimated at about 500-12,000 [≈ 0.33·(420/r)² for r = 2-10 nm].
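The bracketed estimate can be reproduced numerically; a sketch under the stated assumptions (near-field fraction 0.33, far-field spot of ~420 nm, enhancement zone of 2-10 nm):

```python
# Order-of-magnitude TERS enhancement: the near-field signal (33% of the total)
# comes from a 2-10 nm enhancement zone, while the far-field signal comes from
# a ~420 nm diffraction-limited spot, so the area ratio scales the local factor.
near_fraction = 0.33
far_field_nm = 420.0                     # ~ lambda / (2 * NA) for 633 nm, NA 0.75

for zone_nm in (10.0, 2.0):
    ef = near_fraction * (far_field_nm / zone_nm) ** 2
    print(f"zone {zone_nm:4.0f} nm -> local enhancement ~ {ef:,.0f}")
# zone 10 nm -> ~582, zone 2 nm -> ~14,553, bracketing the quoted 500-12,000
```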
IV. CONCLUSIONS

A customized multimodal AFM-Raman instrument has been envisioned and built at NIH for biomedical research applications. Its functionality has allowed in situ investigations of gold-on-mica substrates and yielded useful insights on SERS enhancement. Combined AFM and TERS measurements of carbon nanotube samples were also achieved, and their rich features were interpreted. This instrument is the latest development in expanding several AFM platforms in a shared-facility environment, together with theoretical biological models and new mathematical analytical tools, for investigating biological samples in a number of collaborative scientific projects. Further optimization and broader utilization of this instrument are ongoing.
ACKNOWLEDGEMENT

We thank Mr. Nick Burke and Dr. Joachim Schreiber (HORIBA Jobin Yvon) and other members of the support teams from the component manufacturers for their assistance, especially during the final AFM-Raman system integrations. We also thank Drs Svetlana Kotova (NIH), Tinh Nguyen (NIST), and Xiaohong Gu (NIST) for insightful comments. This research was supported by the Intramural Research Program of the NIH, including the National Institute of Biomedical Imaging and Bioengineering (NIBIB), Department of Health and Human Services (DHHS). This research was performed while one of us (MZ) held a National Research Council Research Associateship Award from the National Institutes of Health (National Institute of Biomedical Imaging and Bioengineering)/National Institute of Standards and Technology [NIH (NIBIB)/NIST] Joint Postdoctoral Program. The Federal Government does not endorse any commercial products mentioned herein via this presentation.
REFERENCES

1. Kotova, S., Prasad, K., Smith, P. D., Lafer, E. M., Nossal, R. J., and Jin, A. J. (2010) AFM Visualization of Clathrin Triskelia under Fluid and in Air, FEBS Lett 584, 44-48.
2. Hayakawa, E., Tokumasu, F., Nardone, G. A., Jin, A. J., Hackley, V. A., and Dvorak, J. A. (2007) A Mycobacterium tuberculosis-derived lipid inhibits membrane fusion by modulating lipid membrane domains, Biophys. J. 93, 4018-4030.
3. Jin, A. J., Prasad, K., Smith, P. D., Lafer, E. M., and Nossal, R. J. (2006) Measuring the Elasticity of Clathrin Coated Vesicles via Atomic Force Microscopy, Biophys. J. 90, 3333-3344.
4. Raghavan, D., Gu, X., VanLandingham, M., and Nguyen, T. (2001) Mapping chemically heterogeneous polymer system using selective chemical reaction and tapping mode atomic force microscopy, Macromol. Symp. 167, 297-305.
5. Campion, A., and Kambhampati, P. (1998) Surface-enhanced Raman scattering, Chem. Society Rev. 27, 241-250.
6. Rao, A. M., Richter, E., Bandow, S., Chase, B., Eklund, P. C., Williams, K. A., Fang, S., Subbaswamy, K. R., Menon, M., Thess, A., Smalley, R. E., Dresselhaus, G., and Dresselhaus, M. S. (1997) Diameter-selective Raman scattering from vibrational modes in carbon nanotubes, Science 275, 187-191.
7. Dresselhaus, M. S., Dresselhaus, G., Jorio, A., Souza, A. G., and Saito, R. (2002) Raman spectroscopy on isolated single wall carbon nanotubes, Carbon 40, 2043-2061.
8. Zhou, Z. P., Kang, H., Clarke, M. L., Lacerda, S. H. D., Zhao, M. H., Fagan, J. A., Shapiro, A., Nguyen, T., and Hwang, J. (2009) Water-Soluble DNA-Wrapped Single-Walled Carbon-Nanotube/Quantum-Dot Complexes, Small 5, 2149-2155.
9. Pope, A., Schulte, A., Guo, Y., Ono, L. K., Cuenya, B. R., Lopez, C., Richardson, K., Kitanovski, K., and Winningham, T. (2006) Chalcogenide waveguide structures as substrates and guiding layers for evanescent wave Raman spectroscopy of bacteriorhodopsin, Vib. Spectrosc. 42, 249-253.
10. Levin, I. W., and Bhargava, R. (2005) Fourier transform infrared vibrational spectroscopic imaging: Integrating microscopy and molecular recognition, Ann. Rev. Phys. Chem. 56, 429-474.
11. Anderson, M. S., and Pike, W. T. (2002) A Raman-atomic force microscope for apertureless-near-field spectroscopy and optical trapping, Rev. Sci. Instrum. 73, 1198-1203.
12. Kneipp, K., Kneipp, H., Manoharan, R., Hanlon, E. B., Itzkan, I., Dasari, R. R., and Feld, M. S. (1998) Extremely large enhancement factors in surface-enhanced Raman scattering for molecules on colloidal gold clusters, Appl. Spectrosc. 52, 1493-1497.
13. Tessier, P. M., Velev, O. D., Kalambur, A. T., Rabolt, J. F., Lenhoff, A. M., and Kaler, E. W. (2000) Assembly of gold nanostructured films templated by colloidal crystals and use in surface-enhanced Raman spectroscopy, J. Am. Chem. Soc. 122, 9554-9555.
14. Derose, J. A., Thundat, T., Nagahara, L. A., and Lindsay, S. M. (1991) Gold Grown Epitaxially On Mica - Conditions For Large Area Flat Faces, Surf. Sci. 256, 102-108.
15. Bachilo, S. M., Strano, M. S., Kittrell, C., Hauge, R. H., Smalley, R. E., and Weisman, R. B. (2002) Structure-assigned optical spectra of single-walled carbon nanotubes, Science 298, 2361-2366.
16. Jorio, A., Souza, A. G., Dresselhaus, G., Dresselhaus, M. S., Swan, A. K., Unlu, M. S., Goldberg, B. B., Pimenta, M. A., Hafner, J. H., Lieber, C. M., and Saito, R. (2002) G-band resonant Raman study of 62 isolated single-wall carbon nanotubes, Phys. Rev. B 65.
17. Hayazawa, N., Yano, T., Watanabe, H., Inouye, Y., and Kawata, S. (2003) Detection of an individual single-wall carbon nanotube by tip-enhanced near-field Raman spectroscopy, Chem. Phys. Lett. 376, 174-180.
18. Festy, F., Demming, A., and Richards, D. (2004) Resonant excitation of tip plasmons for tip-enhanced Raman SNOM, Ultramicroscopy 100, 437-441.

Corresponding Author: Albert J. Jin, Ph.D.
Institute: National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health
Street: 9000 Rockville Pike, Bldg 13/Room 3N18D
City: Bethesda, MD 20892
Country: USA
Email: [email protected]
A Novel Model of Skin Electrical Injury

Thu T.A. Nguyen1, Ali Basiri1, J.W. Shupp2, A.R. Pavlovich2, M.H. Jordan2, Z. Sanford2, and J.C. Ramella-Roman1

1 The Catholic University of America, 620 Michigan Ave., N.E., Washington DC, USA
2 The Burn Center, Washington Hospital Center, Washington DC, USA
Abstract— High voltage electrical contact can cause devastating third and fourth degree burns. The pathophysiology of electrical injury is not well understood. We have developed an Electrical Burn Delivery System (EBDS) capable of reliably delivering pre-set voltages and currents to biological specimens in order to accurately simulate the damage from high-tension electrical contact. In addition to delivering the high voltage, the system can also measure skin impedance before and after shock. Two different imaging techniques were used for non-invasive assessment of skin damage pre- and post-shock. A thermal camera was used to monitor the extent of thermal damage due to the electrical shock, while hyper-spectral imaging was used to monitor wavelength dependent differences in total reflectance. We present results of experiments conducted on freshly excised human skin samples as well as porcine skin samples.

Keywords— Electrical burns, imaging, spectral response.
I. INTRODUCTION

With around one thousand cases each year, electrical accidents are a substantial cause of injury in the workplace. A 2003 study in the Journal of Safety Research indicated that electrical shock caused 99% of fatal and 62% of non-fatal electrical accidents [1]. Several studies have shown the effects of high voltage electrical injury on the human body. This kind of injury can cause not only visible damage to the skin but also severe damage to the cardiac [2], central [3], and peripheral nervous systems [4]. As a current or voltage is applied to the body, bones, fat, and tendons present the highest resistance; skin presents an intermediate resistance; and nerves, blood, and muscles show the least resistance [5]. When one touches a high voltage conducting material, the skin becomes the direct contact surface and generally suffers an extensive injury, with ablative effects on the dermis, the epidermis, and the muscle layer. In order to gather a deeper understanding of how much damage the skin sustains during electrical shock, we have built a DC high voltage Electrical Burn Delivery System and used imaging systems to monitor the wounds.

The resistance of skin (external resistance), which is the primary resistor against the electrical current, depends mostly on the thickness of the epidermal layer, the moisture of the skin, the voltage applied, and its frequency. Moist skin may have a resistance of less than 1000 Ohm [5]. According to Freiberger [19], the epidermis works as a highly insulating barrier and its impedance changes at voltages above 20 V DC. A significant voltage applied to the skin will cause epidermal structural breakdown by perforating skin structures and forming a hydrated, electrically conducting channel. The typical breakdown voltage is 150 V in most areas and 400 V for the epidermis of the soles and palms. When the voltage applied to the skin is above a threshold voltage, the epidermis is completely destroyed at the contact points. Consequently, evaporation and permeability of the epidermal layer occur, allowing the DC current to flow to the dermal layer and deeper tissue [4]. A severe burn [6] can also occur at small voltages and currents because of a poor electrode-skin surface contact. High voltage electrical injuries can cause various degrees of cutaneous burns, with the greatest effects on deeper tissues and muscles and lower damage to the skin surface [3]. According to Lee [4], in an electrical shock there need to be at least two contact points, as entry and exit wounds, for a burn to occur. The area and topology of the contact point define the skin-wound pattern and size. Increased contact area reduces the current density in the skin. The main principles of direct electrical injury are Joule heating, electroporation, and denaturation of membrane proteins. The damage rate depends on the tissue field strength, the pathway of the current through the body, and the duration of exposure [7]. Beneath the skin, cell membranes in the tissues are highly resistive to extra low frequency current (ELF = 0-1 kHz). A cell membrane will be disrupted by a strong field, such as a transmembrane potential greater than 200-300 mV [7]. Electroporation is an important mechanism of tissue injury. The maximum induced transmembrane potential is obtained when most of the skeletal muscle is oriented parallel to the direction of the electric field [7]. Large cells such as muscle and nerve cells are more vulnerable to electrical breakdown [8]. Some thresholds of ELF current have been observed to cause muscle spasm; for example, this effect is noticeable in the forearm skeletal musculature at 16 mA. Another cause of tissue injury is the denaturation of membrane proteins [7].
Thermal burning by direct dissipation of Joule heating in a biological load under electrical flow is also a major cause of tissue injury. According to Lee et al. [7], the cell membrane is the cellular component most vulnerable to heat injury. A three-dimensional model solving the bioheat equation numerically was developed by Tropea and Lee to distinguish the thermal and non-thermal damage during electrical shock [9]. For high voltage contact, it was shown that the thermal response of tissue was roughly adiabatic during the current flow and reached the highest temperature in the skeletal muscle of the extremity. In an electrical shock, tissue electroporation damage accumulates on a time scale of milliseconds. Nevertheless, with longer contact times, thermal damage becomes significant and all the tissues along the current path will be burned [7]. Finally, tissue perfusion also impacts the level of damage [10].

Previous studies have introduced animal models and have mostly investigated electrical damage to muscle and nerves [11, 12, 13]. Kun-Wu Fan et al. [11] delivered different voltages (500 V, 1000 V, and 3600 V) to the sciatic nerve of thirty rats for a duration of 10 ms. The results proved their model's ability to produce controlled injuries to the sciatic nerve at different degrees of severity. Kalkan et al. [12] established another animal model of electrical injury to study the subsequent changes in muscle perfusion at different post-traumatic stages. Fifty rats were shocked (current 330 mA, 440 V, duration 10 s) between the left forelimb and right hind limb using phase electrodes. The skin temperature was acquired using a thermometer during the process. Their study showed that the muscle temperature increased suddenly, causing severe damage to the right hind limbs of all rats. In 1995, Block et al. [13] established a novel animal electrical-injury model and indicated a mediated non-thermal mechanism in muscle injury. Cuff-type electrodes were wrapped around the tail and the ankle of Sprague Dawley rats, and 4 ms pulses from a DC power supply were delivered to the limb at intervals of 10 s. The results showed clear cell damage when the shock generated temperatures above 43 °C. Later, a soft-tissue injury model [14] was established to observe changes in muscles, nerves, hearts, and liver. Two copper foil electrodes (3 cm x 3 cm) were placed on the left limb at a distance of 7 cm from each other. A power supply was used to deliver high-voltage alternating current to the hind limb of a rabbit unilaterally for 0.1 s at 3000 V; a multimeter and a thermometer were used to measure the electrical resistance and skin temperature of the limb, respectively. Finally, in a different experiment by Li et al. [15], a rabbit model was explored to study the
effect of small versus larger electrodes on tissue. A 1.6 kV power supply was used to investigate different degrees of injury, from mild to extra-severe levels. MRI is considered a significant tool for monitoring electrical wounds; it is especially valuable for monitoring deep tissue damage and is one of the very few imaging techniques available for monitoring electrical burns. This paper focuses on exploring two different imaging methods, thermal and multi-spectral imaging, which could provide valuable information regarding the dynamics of electrical injuries. The shallow skin penetration of these techniques makes them best suited for monitoring superficial tissue damage, particularly at positions near the entrance and exit wounds. The combination of data gathered through these techniques and MRI can produce a more general and profound analysis of electrical wounds.
II. MATERIAL AND METHODS

A. The Electrical Burn Delivery System

We have built an Electrical Burn Delivery System (EBDS) able to deliver high voltage electrical burns of up to 1000 V and 6 A to a biological load during a desired period of time. This system was also able to measure the skin impedance of the load before and after shock. The EBDS was computer-controlled through a user-friendly interface; the electrical shock could be delivered remotely to guarantee user safety. The system comprised a shock box, Fig 1 (a protective Plexiglas® casing containing the electrodes), and a controlling system that included a high voltage DC power supply (Magna, Flemington, NJ), a data acquisition card (Measurement Computing, Norton, MA), a protection circuit, and a computer (Hewlett Packard, CA). The electrodes used for the high voltage shock were two aluminum rods (positive and negative poles, 10 cm in length, 1 cm in diameter), which were chosen because of their large contact surface, low resistance, and insignificant oxidation. The electrodes were connected to the skin through a layer of ultrasound gel, which reduced the interface resistance. Two alligator clips were used as electrodes for the impedance measurement. These electrodes were indirectly connected to the outputs of the high voltage power supply through a protection circuit consisting of a series of relays controlled by the computer and a DAQ card. The protection circuit was used for switching between the high voltage shock mode and the impedance measurement mode, which operates at low voltage.
Fig. 1 The shock box (top right), controlling laptop (top left) and high voltage power supply (bottom)

The cover of the shock box was fitted with two position sensors that interrupted voltage delivery to the load once the casing was opened; an audible alarm and warning were also implemented through the software interface. To measure impedance, a low DC voltage was delivered to the biological load, and the resulting stable DC feedback current was used to calculate the material impedance. This method was based on Ohm's law, V = IR, in which V is the DC voltage, I is the current, and R is the impedance of the biological load. Different voltage settings were tested to improve the precision of the system in calculating resistance; a variable potentiometer of known resistance was used for this purpose. A simple user interface was designed to allow for the selection of different voltages and shock durations in the high voltage mode. In the impedance measurement mode, impedance values were acquired three times before and after the shock, then saved for reference and plotted as impedance versus time on the interface.
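A sketch of the impedance calculation just described, assuming the DAQ returns a steady feedback current for each applied DC test voltage; variable names are hypothetical, not from the instrument software.

```python
import numpy as np

def impedance_from_dc(v_applied, i_feedback):
    """Estimate the load impedance R = V / I (Ohm's law) from paired DC test
    voltages in volts and steady feedback currents in amperes, averaging over
    the repeated test settings."""
    v = np.asarray(v_applied, dtype=float)
    i = np.asarray(i_feedback, dtype=float)
    return np.mean(v / i)

# Example: three test voltages and the currents measured through the load
print(impedance_from_dc([1.0, 2.0, 5.0], [0.9e-3, 1.8e-3, 4.6e-3]))  # ohms
```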
B. Imaging

A FLIR system (Thermovision, CA) was utilized to record the IR emission of the burns in real time during skin electrocution. Several images were acquired with this system, which allows for 60 frames/sec acquisition. Analysis of the data was done in post-processing. A multi-aperture spectral system was also used to generate wavelength dependent images before and after shock [18].

C. Experimental Procedure

A set of experiments was conducted on porcine skin obtained from the local abattoir and on freshly excised human skin obtained following breast reduction surgery at the Washington Hospital in Washington DC. The skin samples were stored at 37 °C in a moist environment in an incubator until they were brought to the shock box for testing. The impedance electrodes were applied to the skin sample at a fixed 5 cm distance from each other, as shown in Fig 2. Three measurements of impedance were conducted to obtain a set of before-shock impedances. The measurement electrodes were then replaced with the shock electrodes in the same positions. Spectral images of the area of interest were acquired with the multi-spectral camera for later processing. A two second shock of 1000 V DC was delivered to the skin. During the shock, the FLIR system was used to record images of the temperature increase due to the electric discharge over the whole shock area. A second set of images was acquired with the spectral camera, and a set of impedance values was measured after the shock was delivered.

Fig. 2 Experiment on human breast skin

III. RESULTS

A sequence of thermal images at different times is shown in Fig. 3. The thermal effect on the skin is clearly visible in these images. A typical spectral image is shown in Fig. 4 below. Images were normalized with a 99% reflectance standard (Spectralon, MA). A region of interest was selected near the electrodes, and the average retro-reflectance values of that region were calculated before and after the burn. A typical result is shown in Fig. 5; reflectance decreased significantly post-burn.
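The normalization against the reflectance standard amounts to a per-wavelength division; a minimal sketch with hypothetical names, not the authors' processing code:

```python
import numpy as np

def normalize_reflectance(sample, standard, standard_reflectance=0.99):
    """Convert raw multi-spectral counts to reflectance by dividing, band by
    band, by the image of a 99% Spectralon standard."""
    return standard_reflectance * sample / np.maximum(standard, 1e-12)

pre_burn = np.random.rand(64, 64, 8)          # raw image cube (8 spectral bands)
white = 1.0 + np.random.rand(64, 64, 8)       # image of the reflectance standard
roi = normalize_reflectance(pre_burn, white)[20:30, 20:30, :]
print(roi.mean(axis=(0, 1)))                  # average ROI reflectance per band
```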
Fig. 3 Thermal imaging sequence; the rise in temperature is due to a 2 second, 1000 V electrical pulse
Fig. 4 Spectral image before burn

Fig. 5 Reflectance values before (red dots) and after burn (blue dots); a reduction in relative reflectance is observable at all wavelengths

Values of impedance measured before and after shock mostly indicated a decrease of skin impedance after the high voltage shock; these results are similar to what has been reported by Lee [4]. Only the skin samples with high humidity showed an increase in impedance.

IV. CONCLUSIONS

In this study we have introduced an experimental apparatus (EBDS) aimed at delivering electrical burns of controlled voltage and duration to biological media and measuring skin impedance. Two imaging techniques that could be used to gather quantitative values of heat-related damage were also introduced. Future work will focus on animal models and the addition of other imaging modalities such as laser Doppler and confocal imaging.

REFERENCES

1. Cawley J.C., Homce G.T., "Occupational electrical injuries in the United States, 1992-1998, and recommendations for safety research," J Safety Res. 34(3); 2003.
2. Koumbourlis A., "Electrical injuries," Crit Care Med. 30(11 Suppl); 2002.
3. Spies C., Trohman R.G., "Narrative review: Electrocution and life-threatening electrical injuries," Ann Intern Med. 3(7), 2006.
4. Lee R.C., "Injury by electrical forces: pathology, manifestations, and therapy," Curr Prob Surg 34; 1997.
5. Koumbourlis A.C., "Electrical injuries," Crit Care Med. 2002; 30(11 suppl): S424-30.
6. Barkana B.D., Gupta N. and Hmurcik L.V., "Two case reports: Electrothermal (aka contact) burns and the effects of current density, application time and skin resistance," JBUR-3160; No of Pages 5.
7. Lee R.C. and Dougherty W., "Electrical Injury: Mechanisms, Manifestations, and Therapy," IEEE Transactions on Dielectrics and Electrical Insulation, 10(5), 2003.
8. Lee R.C. and Kolodney M.S., "Electrical Injury Mechanism: Electrical Breakdown of Cell Membranes," Cambridge and Boston, Mass.
9. Tropea B.I. and Lee R.C., "Thermal Injury Kinetics in Electrical Trauma," J. Biomech. Engr., Vol. 114:2, pp. 241-250, 1992.
10. Lee R.C. and Kolodney M.S., "Electrical Injury Mechanism: Dynamics of the Thermal Response," Cambridge and Boston, Mass.
11. Fan K.W., Zhu Z.X., Den Z.Y., "An experimental model of an electrical injury to the peripheral nerve," Burns, 31(6), 2005.
12. Kalkan T., Demir M., Ahmed A.S., Yazar S., Dervisoglu S., Uner H.B., Cetinkale O., "A dynamic study of the thermal components in electrical injury mechanism for better understanding and management of electric trauma: an animal model," Burns, 30(4), 2004.
13. Block T., Aarsvold J., Matthews K., Mintzer R., River P., Capelli-Schellpfeffer M., Wollmann R., Tripathi S., Chen C., Lee R.C., "Nonthermally mediated muscle injury and necrosis in electrical trauma," Journal of Burn Care & Rehabilitation, 16(6), 1995.
14. Chai J.K., Li L.G., Gao Q.W., Shen X.P., Zhang H.J., Sheng Z.Y., Wang Z.Q., Zhang C., "Establishment of soft-tissue-injury-model of high-voltage electrical burn and observation of its pathological changes," Burns 35, 2009, 1158-1164.
15. Li W., Zhu M., Zhu Z., Guan J., Xu X., "A series of models of non-thermal high voltage electrical injuries," Burns 32, 2006.
16. Ohashi M., Koizumi J., Hosoda Y., Fujishiro Y., Tuyuki A., Kikuchi K., "Correlation Between Magnetic Resonance Imaging and Histopathology of an Amputated Forearm after Electrical Injury," Burns, Vol. 24, pp. 362-368, 1988.
17. Hannig J., Kovar D.A., Abramov G.S., Lewis M.Z., Karczmar G.S., Lee R.C., "Contrast Enhanced MRI of Electroporation Injury," JBCR, Vol. 20, 1999.
18. Basiri A., Nabili M., Mathews S., Libin A., Groah S., Noordmans H.J., Ramella-Roman J.C., "Use of a multi-aperture camera in the characterization of skin wounds," Optics Express, 18(3), 2010.
19. Freiberger H., "The Electrical Resistance of the Human Body to DC and AC Currents" (Der elektrische Widerstand des menschlichen Körpers gegen technischen Gleich- und Wechselstrom), Berlin: Elektrizitätswirtschaft, Vol. 32, pp. 373-375 and pp. 442-446, 1933.
Design, Construction, and Evaluation of an Electrical Impedance Myographer K. Lweesy, L. Fraiwan, D. Hadarees, A. Jamil, and E. Ramadan Jordan University of Science and Technology, Faculty of Engineering, Biomedical Engineering Department, Irbid 22110, Jordan
Abstract— This paper describes the design, construction, and evaluation of an electrical impedance myographer (EIM), which can be used as a non-invasive technique for the assessment of the muscular state. It can also be used for many diagnostic purposes such as distinguishing tumor tissue from normal tissue, estimating segmental muscle volume by bioelectrical impedance spectroscopy, predicting muscle mass and improving estimation of glomerular filtration rate in nondiabetic patients with chronic kidney disease. In the design described herein, two impedance spectra are generated: one for relaxed muscles and one for contracted muscles. Those two spectra are then compared to provide the evidence about the induced physiological modifications in muscle morphology. The EIM design consists of a Wien bridge oscillator that generates a 91 kHz sinusoidal signal, and a voltage to current converter which generates a constant current (1 mA) that passes through the patient's forearm. An envelop detector was used to convert the signal from AC to DC. The output of the envelope was isolated from the next stage using a buffer. A low pass filter (0.7-20 Hz) was used to eliminate the high frequency noise and to get only low frequency signal. An optocoupler was used to ensure patient's safety. From the output signal, it has been observed that the impedance of a healthy subject increases gradually as the muscle contracts; this is due to the increasing blood flow to the contracted muscle area. When the patient's muscle was continuously contracted, the impedance signal appeared as DC line, and the signal decreased as the patient relaxed his muscle. Keywords— Myography, Muscle Electrical Impedance, Muscle State.
I. INTRODUCTION The impedance measurements across a skeletal muscle using the four-electrode technique give important information about the muscle that can be used for diagnostic purposes. The objective of this study is to compare the two impedance waveform for (relaxed and contracted muscles) in order to provide the evidence about the induced physiological modification in muscles morphology (muscle contraction). When a muscle is subjected to any physiological activity, the bioimpedance waveform measurement reflects the changes between the waveforms of the relaxed and the
contracted state of the muscle. This means that the morphology and geometry of the muscle fibers in the contracted muscles is different from that in the relaxed muscles. There exist different electricomayography (EMG) techniques to diagnose neuromuscular diseases; these techniques depend mainly on the direct structural and biological nature of the cell. Being an invasive technique, EMG is not always a suitable method to be used for particular groups of patients such as children. In this work, a non-invasive technique is proposed. This technique is familiar to most neurologists for assessment of the muscular state as electrical impedance mayography (EIM); a technique that overcomes some of the difficulties in neuromuscular disease assessment. EIM is a non-invasive technique that is used for the assessment of neuromuscular disease. It has been used to study a variety of disorders, including motor neuron disease, radiculopathy, and myopathy. The most basic form of EIM, linear EIM, uses high frequency, low-intensity Alternating current (constant amplitude 1 mA) applied via adhesive electrodes to the hands or feet with the resulting voltage patterns measured by a second series of surface electrodes placed serially over a muscle or group of muscles of interest. With known of the applied current and the measured voltage , the resistance (R) and reactance (X) can be measured and the phase (θ) calculated as θ = arctan (X/R). The EIM design described herein operates at high frequency (90 kHz) and low intensity constant current that is applied via adhesive electrodes to the hands. The resulting voltage is measured by a second set of surface electrodes that are parallel to the first set of electrodes.
II. MATERIALS AND METHODS

The hardware design of the EIM includes an excitation (constant current) circuit and a sensing circuit. The sensing circuit is composed of an instrumentation amplifier, an envelope detector, a gain stage for amplification, a lowpass filter, and an optocoupling circuit for isolation. Figure 1 shows the block diagram of the EIM design.
There are many different ways to generate and condition the excitation signal; in this project an RC oscillator (Wien bridge) is used.
Fig. 1 Block diagram of the EIM design

A. Excitation Circuit (Constant Current Circuit)

The purpose of the excitation circuit is to produce a constant current that does not change with load. A two-stage excitation circuit (oscillator and voltage-to-current converter) was built according to the schematic design shown in Figure 2.
Fig. 2 Schematic design of the excitation circuit

B. Oscillator

An oscillator is a waveform generator used to generate pure sinusoidal waveforms of fixed amplitude and frequency. There are two main types of electronic oscillator: the harmonic oscillator and the relaxation oscillator. The relaxation oscillator produces a non-sinusoidal output, such as a square or sawtooth wave, and contains a nonlinear component such as a transistor. The harmonic (or linear) oscillator produces a sinusoidal output. Its basic form is an amplifier whose output is attached to a filter, with the output of the filter attached back to the input of the amplifier in a feedback loop. When the power supply to the amplifier is first switched on, the amplifier's output consists only of noise. The noise travels around the loop, being filtered and re-amplified until it increasingly resembles the desired signal. In this paper, a harmonic RC oscillator (Wien bridge) with a gain of 2 was built according to the schematic shown in Figure 3.

Fig. 3 Schematic design of the oscillator circuit

C. Voltage-to-Current Converter

The output of the previous stage, a sinusoidal voltage signal, is converted into a sinusoidal current signal that is then fed to the load (body). A voltage-to-current converter circuit was designed and built for this purpose. A schematic of this circuit is shown in Figure 4.
Fig. 4 Voltage to current converter circuit
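As a numerical sanity check on the excitation stage just described, the sketch below evaluates the standard Wien bridge frequency relation f0 = 1/(2πRC) and the ideal voltage-to-current conversion; the component values are hypothetical (the paper does not list them) and are chosen only to land near the stated 91 kHz / 1 mA operating point.

```python
import math

# Hypothetical Wien bridge components (not given in the paper)
R = 1.75e3    # ohms
C = 1e-9      # farads
f0 = 1.0 / (2 * math.pi * R * C)       # Wien bridge oscillation frequency
print(f"Oscillation frequency: {f0 / 1e3:.1f} kHz")   # ~91 kHz

# Ideal voltage-to-current stage: I = V_in / R_set
v_in = 1.0    # volts after the input divider (hypothetical)
r_set = 1e3   # ohms (hypothetical current-setting resistor)
print(f"Drive current: {v_in / r_set * 1e3:.1f} mA")  # 1.0 mA target
```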
Since a current of 1 mA is needed, a voltage divider is placed at the input of the voltage-to-current converter to provide fine adjustment of the required current.

D. Sensing Circuit

a) Highpass Filter: A 10 pF capacitor and a 1 MΩ resistor were used to build a highpass filter with a cutoff frequency of 16 kHz. This cutoff frequency was chosen to pass the impedance signal and reject power-line interference.
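The chosen cutoff follows from the first-order relation fc = 1/(2πRC) ≈ 15.9 kHz for the quoted 1 MΩ and 10 pF. A short check of the filter's magnitude response shows why it rejects mains interference while passing the carrier (a sketch; the ideal first-order model is an idealization of the actual circuit):

```python
import math

def highpass_gain(f, fc):
    """Magnitude response of an ideal first-order RC high-pass filter."""
    ratio = f / fc
    return ratio / math.sqrt(1.0 + ratio ** 2)

fc = 1.0 / (2 * math.pi * 1e6 * 10e-12)   # 1 MOhm, 10 pF -> ~15.9 kHz
for f in (50.0, 91e3):                    # mains interference vs. carrier
    g = highpass_gain(f, fc)
    print(f"{f:8.0f} Hz: gain {g:.4f} ({20 * math.log10(g):6.1f} dB)")
```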
b) Instrumentation Amplifier: Biomedical signals recorded from the human body have small amplitudes (< 5 mV). To increase this value, an instrumentation amplifier with a gain of 20 was used. The AD620 instrumentation amplifier was chosen because it has a high differential input impedance and a high common-mode rejection ratio (CMRR). Figure 5 shows a schematic that includes both the highpass filter and the instrumentation amplifier.
Fig. 6 Optocoupling circuit
Fig. 5 High pass filter and instrumentation amplifier

c) Envelope Detector (Demodulator): The output of the instrumentation amplifier is then passed through an envelope detector. This converts the high-frequency AC signal into a regulated, stable DC output that changes according to the body impedance.
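To make the demodulation step concrete, the following simulation sketch models an envelope detector as an ideal rectifier followed by a low-pass stage and recovers a slow impedance variation from the 91 kHz carrier; the sampling rate, modulation depth, and 2 Hz variation are illustrative values, not the authors' circuit parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2e6                                   # sampling rate (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)              # 0.5 s of signal
carrier = np.sin(2 * np.pi * 91e3 * t)     # 91 kHz excitation
impedance = 1.0 + 0.1 * np.sin(2 * np.pi * 2 * t)   # slow 2 Hz variation
v = impedance * carrier                    # amplitude-modulated pickup

rectified = np.abs(v)                      # ideal full-wave rectifier
b, a = butter(2, 20 / (fs / 2))            # low-pass, ~20 Hz cutoff
envelope = filtfilt(b, a, rectified)       # recovered envelope (scaled by 2/pi)
```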
d) Gain (Buffer): An operational amplifier is used to buffer (isolate) the output of the envelope detector from the next stage to ensure that the output does not change with load.

e) Optocoupling Circuit: An optocoupling circuit consisting of an AlGaAs LED optically coupled to a high-speed photodetector was used. The optocoupling circuit transfers the signal from the exciting circuit to the sensing circuit safely. Before the optocoupler there is an op-amp with a 10 kΩ potentiometer on the non-inverting terminal, which is adjusted to ensure that the LED operates in its linear region. After the optocoupler there is an optoreceiver amplifier with a 10 kΩ potentiometer, which is used to adjust the zero offset of its output. Figure 6 shows a schematic of the optocoupling circuit.

f) Current-to-Voltage Converter (Signal Conditioner): The current-to-voltage converter is used to convert the current signal (which comes from the transmitter optocoupler) to a voltage signal in order to pick up the output signal: Vo = IR · Rf, where IR is the received current and Rf is the feedback resistance.
Fig. 7 Current to voltage converter
III. RESULTS

The output of the oscillator was a sinusoidal wave (amplitude = 17 V, frequency = 91 kHz), while the output
of the envelope detector was a DC signal with a value of 80 mV. When the patient's arm moves, the signal is affected according to this movement; this signal is the muscle impedance, which is cleaner and less noisy than the EMG signal, as can be seen in Figure 8.
Fig. 8 Impedance of hand muscles at the end of contraction and the beginning of relaxation

When the patient begins to relax his muscles, the signal falls and decreases until maximum relaxation is reached; the signal then appears again as a line (DC) (Figure 8).

Fig. 9 Impedance of muscles at relaxation

When the patient contracts his arm muscles, the signal increases up to maximum contraction, and if he remains in that position, the impedance signal appears as a line (DC). Any small movement of the muscles thus appears in the signal as an impedance increase or decrease (Figure 9).

IV. CONCLUSIONS

Muscle impedance is an important quantity that can be used for diagnostic purposes. It has less noise than EMG signals. We have described herein a hardware design that can accurately evaluate the impedance of muscles.

REFERENCES

[1] Shiffman C A, Aaron R, Rutkove S B (2003) Electrical impedance of muscle during isometric contraction. Physiol Meas 24:213-234
[2] Bartok C, Schoeller D A (2004) Estimation of segmental muscle volume by bioelectrical impedance spectroscopy. J Appl Physiol 96:161-166
[3] Macdonald J H, Marcora S M, Jibani M, Roberts G, Kumwenda M J, Glover R, Barron J, Lemmey A B (2006) Bioelectrical impedance can be used to predict muscle mass and hence improve estimation of glomerular filtration rate in non-diabetic patients with chronic kidney disease. Nephrol Dial Transplant 21:3481-3487
[4] Blady B, Baldetorp B (1996) Impedance spectra of tumour tissue in comparison with normal tissue; a possible clinical application for electrical impedance tomography. Physiol Meas 17:A105-A115
[5] Zagar T, Krizaj D (2008) Multivariate analysis of electrical impedance spectra for relaxed and contracted skeletal muscle. Physiol Meas 29:S365-S372
[6] Shiffman C A, Kashuri H, Aaron R (2008) Electrical impedance myography at frequencies up to 2 MHz. Physiol Meas 29:S345-S363
[7] Shiffman C, Aaron R (1998) Angular dependence of resistance in non-invasive electrical measurements of human muscle: the tensor model. Phys Med Biol 43:1317-1322
[8] Aaron R, Shiffman C. Using localized impedance measurements to study muscle changes in injury and disease.

Author: Khaldon Lweesy
Institute: Jordan University of Science and Technology
Street: P.O. Box 3030
City: Irbid
Country: Jordan
Email: [email protected]
The Role of Imaging Tools in Biomedical Research: Preclinical Stent Implant Study W.F. Pritchard, M. Kreitz, O. Lopez, D. Rad, B. McDowell, S. Nagaraja, M.L. Dreher, J. Esparza, J. Vossoughi, O.A. Chiesa, and J.W. Karanian Laboratory of Cardiovascular and Interventional Therapeutics, Division of Biology, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, FDA, 8401 Muirkirk Road, Laurel, MD 20708
Abstract— Imaging tools are used at the bedside for diagnostic and interventional procedures. The use of more than one modality (including fused modalities) can provide information from diverse sources during different stages of image-guided interventional procedures, such as diagnosis, delivery of a stent to treat vascular disease, and subsequent evaluation. Multi-modality interventions use an array of tools including ultrasound, fluoroscopy, angiography, endoscopy, electromagnetic tracking, robotics and computed tomographic (CT) imaging, alone or in combination, together with analytical tools during preclinical vascular and therapeutic procedures. Explant analysis includes specimen radiography (Faxitron) and scanning electron microscopy (SEM). In the case presented, swine underwent implantation of peripheral and coronary stents. Image acquisition tools included CT imaging and angiography (CTA), diagnostic angiography, fluoroscopy and specimen radiography. Image management and three-dimensional reconstructions were accomplished using OsiriX. Following delivery and deployment of stainless steel stents (Guidant Multi-Link) and nitinol stents (Boston Scientific Radius), fluoroscopic and CT images were obtained at 30 day intervals for up to six months to define stent integrity. 3D reconstructions of CT scans using OsiriX were compared to explant specimen radiographs of each stent. Explanted stents were processed for SEM analysis. Stent integrity was evaluated in vivo and ex vivo, demonstrating the development of stent fracture. Use of multi-modality imaging tools enhances the performance of pre-clinical device trials and the information generated by those studies. This may lead to improved safety evaluations and produce more reliable data for quantitative analysis. These imaging tools are available in clinical practice and are used to determine procedural success and the short- and long-term effectiveness of cardiovascular implants such as stents. The application of these clinical tools in pre-clinical trials, together with imaging procedures and analytic techniques that cannot be used clinically, represents a dynamic and rapidly emerging component of the device development and evaluation process. Keywords— medical imaging, implant safety, predictive models.
I. INTRODUCTION

Recent reports on the prevalence of stent fractures in humans have made the need to model failure modes increasingly important [1,2]. These failure modes may be related to the implant material and environment [3]. In vivo and ex vivo characterization of implanted device integrity, in both humans and pre-clinical animal models, is critical to the evaluation of the design and safety of devices such as stents. The use of different imaging modalities for endovascular interventions, such as stent placement, and the combination of these tools has been a fertile area of research and application over the past decade. Certainly the use of more than one modality (or fused modalities), either simultaneously or in sequence, has tremendous appeal, providing key information during different stages of endovascular procedures (Fig. 1). The combined use of intravascular ultrasound (IVUS), optical coherence tomography (OCT), CT and MRI with fluoroscopy and angiography is being studied for vascular therapies. The recent optimization of rotational angiography adds a new twist to this paradigm as well, offering post-processed CT from rotational angiography source data in rapid, clinically relevant timeframes, with clinically relevant fields of view.
Fig. 1 Multi-modality interventional translational suite during pre-operative procedure
The study objective was to serially image stents in swine from deployment to explant. Explanted stents were evaluated radiographically and with scanning electron microscopy (SEM). The combination of these imaging techniques allowed for an evaluation of stent integrity, including progression of stent fracture rates across implant sites (coronary vs. femoral) and materials (stainless steel vs. nitinol).
II. MATERIALS AND METHODS

The study was performed under an Institutional Animal Care and Use Committee-approved research protocol. Anesthetized domestic swine underwent femoral and coronary stent implantation under fluoroscopic guidance. Image acquisition tools included CT imaging and angiography (CTA), diagnostic angiography, fluoroscopy and, following explant, specimen radiography and SEM. Image management and three-dimensional reconstructions were accomplished using OsiriX. Stainless steel (SS) (Guidant Multi-Link) and nickel titanium (NiTi) (Boston Scientific Radius) stents were implanted in both coronary and femoral arteries of four swine for up to 180 days. Two overlapping stents were placed in the right circumflex femoral artery and in the left anterior descending (LAD) coronary artery (distal stent: NiTi; proximal stent: SS). A single SS or NiTi stent was placed in the left circumflex femoral artery and left circumflex (LCx) coronary artery. Serial evaluation was conducted with fluoroscopy and CT at 30 day intervals up to six months to define stent integrity. CT images, including 3D reconstructions using OsiriX, were compared to explant specimen radiographs of each stent.
Fig. 3 3D-reconstruction of CT (A) and fluoroscopic image (B) following a single stent implantation
Fig. 4 CT at 180 days (A) and specimen radiograph (B) show stent fracture in vivo and at explant, respectively

All computed tomography (CT) and CT angiography (CTA) to image the pelvic and coronary vasculature were performed using a Philips Mx8000 Quad/IDT CT scanner (Philips Medical Systems) with Isovue-370 (Bracco Diagnostics) as the contrast agent. Fluoroscopy and diagnostic angiography were performed with a Philips BV Pulsera C-arm. Following explant, specimen radiography with a Faxitron X-Ray system (Faxitron X-Ray Corp.) and SEM of the stents were performed.
III. RESULTS
Fig. 2 Diagnostic angiography (A) and 3D-reconstruction of CT angiogram (B) of femoral implant sites
Representative diagnostic angiography and CTA of the femoral implant sites are shown (Fig. 2). Fluoroscopic and CT images of a single femoral stent are shown at the time of implantation (Fig. 3). CT of the implanted stent and a radiograph of the explanted stent show stent fracture at 180 days (Fig. 4). Fluoroscopic and CT images of overlapping femoral stents were acquired at the time of implant and at 30 day intervals up to 180 days. Fig. 5 illustrates representative
fluoroscopic and CT images of overlapping NiTi-SS stents in the right circumflex femoral implant site at implant, 90 and 180 days. A representative radiograph (Fig. 6A) of an explanted overlapping femoral stent illustrates fracture of the SS stent at 180 days. An exemplar SEM of a SS strut fracture surface is shown in Fig. 6B. CT analysis showed that 6/6 of the femoral SS stents fractured by 60 days, with additional fractures and migration noted prior to explant (e.g., Fig. 5, Fig. 7). Explant radiographs showed that 3/4 of the coronary SS stents fractured by 180 days, but only when overlapped with a NiTi stent, as demonstrated in Fig. 7. None of the NiTi stents fractured. Overlapped stents (SS and NiTi) exhibited localized evidence of corrosion while single stents were relatively corrosion free (SEMs not shown).
Fig. 5 Fluoroscopic image of overlapping stents (NiTi-SS) at the time of implantation in the left circumflex femoral artery (A). 3D reconstructions of the overlapping stents at 0 days (B), 90 days (C) and 180 days (D) after implantation

Fig. 6 Example radiograph of overlapping stents (NiTi-SS) (A) and SEM of a stent strut fracture (B)

Fig. 7 Stainless steel stent fracture rates in the coronary (overlapping: 3/4) and femoral (overlapping & single: 6/6) implant sites. Explant radiographs of overlapping NiTi-SS stents in the LAD and a single SS stent in the LCx (A) and a single SS stent in the femoral (B) show fracture of the single femoral and the overlapped SS stents. Explanted overlapping stents from the LAD with adherent tissue removed by digestion show a SS fracture site at the edge of the overlap (C). SEMs of the fracture surfaces (D, E)

IV. CONCLUSION

Multi-modality imaging guided the implantation of vascular stents and defined the long-term in vivo stent performance, including the time course of fracture development and distraction of the stent fractures. Following explant, the application of ex vivo evaluative tools such as specimen radiography and SEM provided additional performance details.
We have shown that stainless steel stents fractured in both the coronary and femoral vessels, while NiTi stents had no fractures. Overlapping stents in the coronary artery may introduce additional forces applied to the SS stents, leading to
fractures that were not observed in single SS stents implanted in the coronary artery. Fracture modes were considered to be bending and mixed mode in the femoral position based on preliminary analysis of the SEMs. Non-radial deformations, such as changes in curvature, can be similar in the swine [4] to those reported in the human superficial femoral artery and may contribute to observed femoral stent fractures [5,6]. We hypothesize that reported device fractures in patients may be due in part to the wide range of physical forces acting on these vessels [7] attributed to deformations occurring from normal motions. Use of multi-modality imaging tools enhances the performance of pre-clinical device evaluations and the information generated by those studies. This may lead to improved safety studies and produce more reliable data for quantitative analysis. These imaging tools are available in clinical practice and are used to determine procedural success and short term and long term effectiveness of cardiovascular implants such as stents. The fusion of imaging and analytical tools may improve clinical outcomes, particularly with continued improvement in diagnosis, visualization and delivery or placement accuracy. The application of these clinical tools in pre-clinical trials, together with imaging procedures and analytic techniques that cannot be used clinically, represents a dynamic and rapidly emerging component of the device development and evaluation process.
REFERENCES

1. Umeda H, Gochi T, Iwase M et al. (2009) Frequency, predictors and outcome of stent fracture after sirolimus-eluting stent implantation. Int J Cardiol 133:321-326
2. Popma JJ, Tiroch K, Almonacid A et al. (2009) A qualitative and quantitative angiographic analysis of stent fracture late following sirolimus-eluting stent implantation. Am J Cardiol 103:923-929
3. Choi G, Shin LK, Taylor CA, Cheng CP (2009) In vivo deformation of the human abdominal aorta and common iliac arteries with hip and knee flexion: implications for the design of stent-grafts. J Endovasc Ther 16:531-538
4. Karanian JW, Nagaraja S, Dreher ML et al. (2010) Predictors of stent failure in a swine model: Role of implant site, implant material and musculoskeletal motion. J Cardiovasc Revasc Med (in press)
5. Choi G, Cheng CP, Wilson NM, Taylor CA (2009) Methods for quantifying three-dimensional deformation of arteries due to pulsatile and nonpulsatile forces: Implications for the design of stents and stent grafts. Ann Biomed Eng 37:14-33
6. Scheinert D, Scheinert S, Sax J et al. (2005) Prevalence and clinical impact of stent fractures after femoropopliteal stenting. J Am Coll Cardiol 45:312-315
7. Cheng CP, Wilson NM, Hallett RL et al. (2006) In vivo MR angiographic quantification of axial and twisting deformations of the superficial femoral artery resulting from maximum hip and knee flexion. J Vasc Interv Radiol 17:979-987
Optimization of Screw Positioning in Mandible during Bilateral Sagittal Split Osteotomy Using Finite Element Method A. Raeisi Najafi1, A. Pashaei2, S. Majd3, I. Zoljanahi Oskui4, and B. Bohluli5
1 Department of Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign, IL
2 CISTIB, Department of Technology, University of Pompeu Fabra, Barcelona, Spain
3 Engineering and Scientific Research Associates, 3616 Martins Dairy Circle, 20832, Olney, MD
4 Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
5 Department of Oral and Maxillofacial Surgery, Azad School of Dentistry, Tehran, Iran
Abstract— Stress analyses of five fixation methods used in Bilateral Sagittal Split Osteotomy are performed using the finite element method. The geometrical model is built using CT imaging. The mandible is considered isotropic, and static stress analysis is performed using linear elasticity. Results are compared to identify the configuration that minimizes the von Mises stress values in the entire model, the screws, and the bone-screw interface, in order to reduce the load transferred to the different elements of the bone-screw system. Results suggest that, among the studied configurations, Model 3 (horizontal T), with maximum stress values of 64.59 MPa, 5.29 MPa and 2.67 MPa in the entire model, screws and bone-screw interface, respectively, is the optimum positioning configuration to fix the two bone segments.
Keywords— Dentistry, Bilateral Sagittal Split Osteotomy, Finite Element Method, Surgery.

I. INTRODUCTION

As the only moving joint in the skull, the mandible plays a significant role in mastication as well as facial aesthetics. Currently, Bilateral Sagittal Split Osteotomy (BSSO) is the most popular surgical method to correct the occlusion pattern [1-6]. This method allows for forward or backward repositioning of the mandible when the patient has a deficient or large lower jaw, respectively. Alternative methods are available to fix the bone after correcting the mandible position, using different types of internal fixation screws and different choices of screw placement [4,5]. These two factors have to be optimized to facilitate healthy bone union. The bone-screw unit acts as a composite structure to perform the load bearing [6]. The strength of this unit depends highly on the physical and mechanical properties of the screw, the screw placement, and the bone quality. While extensive research has been performed to assess the effect of the screw types used in this surgery [1], limited discussion has been offered regarding screw placement, mostly based on biomechanical testing. A limited number of numerical studies have been performed to analyze and compare the alternative fixation methods [1,2]; most of them rely on geometrical estimation of the lower jaw using primitive solids [1], which ignores the effect of jaw curvatures when calculating the applied stress values. In this work, we use the finite element method to assess the screw-bone stability and strength relative to screw placement in the bone. Stress distributions of five models with five different screw placements are compared under similar pressure boundary conditions caused by mastication. Finally, a discussion is offered to evaluate the advantages and disadvantages of each approach and recommend a case-specific treatment to optimize healthy bone union.

II. METHODS

For this study, a computer model of an adult human mandible was produced using CT imaging of a healthy, 35-year-old female. A sample CT scan of this patient is shown in Fig. 1. The model was assigned an orthogonal XYZ coordinate system.
Fig. 1 A frame of the CT image that is used for extraction of the mandible model
To define the position of the interface layer, the computed tomography scans of the patient were segmented using Mimics software (Materialise, Columbia Scientific Inc, Glen Burnie, MD) to distinguish the hard tissue of bone from
the surrounding soft tissues around the major arc of the mandible. The cortical layer was then used to create graphical curves and surface meshes within the original surface. The surface information was converted to the volumetric domain to construct the geometry and computational mesh of the model. The tool used for this format manipulation and all of the fracture modeling was CATIA software (Dassault Systemes, v5) (Fig. 2).
Fig. 3 Boolean operations on the model volume to construct and simulate the post-surgery condition
These models essentially represent post-surgery day 1, since no callus has yet formed between the bone segments; this is the most critical loading condition for the screws and the bone segments. The average incisor bite force during the first 6 weeks after surgery, 62.8 N, was applied to the model as a point load on the incisor in the negative Z direction [2]. To accelerate solution time, symmetry was exploited and half of the mandible was modeled, assuming symmetry at the symphysis.

Fig. 2 Near perfect model of the mandible using CT imaging

Using this program we effectively simulated the surgery and divided the mandible model into the external cortical part and the internal layer based on volume Boolean operations (Fig. 3). The mandibular cortical bone was divided into two distinct parts (Fig. 3) and a complete angle fracture on the right side of the mandible was simulated. Friction was ignored at the interface of the two bone segments. An isotropic homogeneous material model was used for the mandible, assuming that cortical bone represents the mechanical behavior of the mandible, with an elastic modulus of 14 GPa and a Poisson's ratio of 0.3. Titanium screws were modeled as simple cylinders (2.4 mm diameter [2]) to fix the two bone segments together. FEA meshes were then developed on the screw models. The material properties of the titanium screws were defined as a Young's modulus of 110 GPa and a Poisson's ratio of 0.34 [3]. Five computer models were generated using five different configurations of screw positioning in four models and a combination of a plate and screws in the last model (Table 1). Perfect bonding (no slip and no clearance) between the screws and the surrounding bone was assumed at the bone-screw interface.
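For readers reproducing the material model, the isotropic linear-elastic constitutive matrix implied by the stated constants can be assembled as follows (a sketch for reference only; the FE package builds this internally):

```python
import numpy as np

def isotropic_stiffness(E, nu):
    """6x6 isotropic elasticity matrix in Voigt notation (engineering shear)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    C = np.zeros((6, 6))
    C[:3, :3] = lam                            # lambda couples normal strains
    C[np.diag_indices(3)] = lam + 2 * mu       # normal stiffness terms
    C[3:, 3:] = np.eye(3) * mu                 # shear terms
    return C

C_bone = isotropic_stiffness(14e9, 0.30)       # cortical bone: E = 14 GPa
C_screw = isotropic_stiffness(110e9, 0.34)     # titanium screw: E = 110 GPa
```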
Table 1 Five alternative fixation models to secure the two bone segments after Bilateral Sagittal Split Osteotomy (the first four models utilize three titanium screws, whereas the fifth model utilizes a plate and two titanium screws). The screw configurations of Model 1 through Model 5 are shown schematically.
With a well-defined geometry and mesh, material properties, and appropriate bite force, the ABAQUS finite element solver software (SIMULIA brand, Dassault Systemes S.A) was used to compute displacements and stresses in each mandible model. The mandibular analysis was performed as static loading with linearly elastic material properties.
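The von Mises measure compared throughout the following results is derived from the six stress tensor components; a minimal reference implementation (ABAQUS reports this quantity directly, so the function below is only a cross-check):

```python
import math

def von_mises(s11, s22, s33, s12, s23, s13):
    """Von Mises equivalent stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((s11 - s22) ** 2 +
                            (s22 - s33) ** 2 +
                            (s33 - s11) ** 2) +
                     3.0 * (s12 ** 2 + s23 ** 2 + s13 ** 2))

# Uniaxial check: a pure 64.59 MPa normal stress returns 64.59 MPa
print(von_mises(64.59e6, 0, 0, 0, 0, 0) / 1e6)
```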
III. RESULTS

Model 2 and Model 3 showed the highest and lowest von Mises stress values, respectively. Fig. 4 and Fig. 5 show the stress contours of these two models at three levels: the entire model, the screws, and the bone-screw interface. As seen in Fig. 4, the screws are arranged in an inverted L form. The maximum von Mises stress value in the entire model is 96.4 MPa, whereas the maximum stresses in the screws and at the interface are 67.7 and 35.9 MPa,
respectively. The maximum stress in the entire model occurs in the area near the condyle, whereas in the screws it occurs in screw 3, and at the interface it occurs at the posterior bone segment, where it is significantly different from the values at the anterior bone segment. In addition, a stress concentration can be seen around the bone-screw interface (posterior bone segment), where the stress distribution changes significantly.
Fig. 5 Model 3, in which the lowest von Mises stress values were observed. Values (MPa) are shown for the entire model, the screws, and the bone-screw interface at the posterior bone segment
Fig. 4 Model 2, in which the highest von Mises stress values were observed. Values (MPa) are shown for the entire model, the screws, and the bone-screw interface at the posterior bone segment

As seen in Fig. 5, the screws are arranged in a horizontal T form. The maximum von Mises stress value in the entire model is 65 MPa, whereas the maximum stresses in the screws and at the interface are 5.3 and 2.7 MPa, respectively. The maximum stress in the entire model occurs in the area near the condyle, whereas in the screws it occurs in screw 2, and at the interface it occurs similarly in the anterior and posterior bone segments where screw 3 is inserted. No stress concentration is seen in the area where the screws are inserted.
Table 2 compares the maximum von Mises stress values in all five models at three levels: the entire model, the screws, and the bone-screw interface. Model 2 shows the highest maximum von Mises stress at all three levels: the entire model (at the condyle), the screws (screw 3), and the interface (posterior bone segment). Models 1, 3 and 4 show similar stress values at the screws and the interface; however, since the maximum stress in the entire model is lowest in Model 3, this model is selected in this study as the best post-surgery fixation configuration. In addition, it is seen that using the plate does not reduce the stress values, although it is less invasive than the other methods as a result of using only two screws.
Table 2 Comparison of the maximum von Mises stress values and their locations in the five models at three levels (entire model, screw, and bone at the bone-screw interface)

Model no.   Model max stress (MPa)   Screw max stress (MPa)   Interface max stress (MPa)
Model 1     95.59                    5.043                    2.69
Model 2     96.41                    67.73                    35.89
Model 3     64.59                    5.287                    2.67
Model 4     75.04                    5.528                    2.72
Model 5     78.24                    25.64                    4.01
IV. DISCUSSION

Five different fixation methods for the two bone segments after Bilateral Sagittal Split Osteotomy are compared in this study. A complete angle fracture is simulated to assess the effect of screw positioning in optimizing the load applied to the bone, the screws, and the bone-screw interface. Geometrical details of the mandible are captured since the model geometry is built using CT imaging. This is of great importance and a major improvement in FEA modeling for obtaining more accurate results, since most previous studies used primitive geometrical 3D objects to build their models [1-2]. For example, Maurer et al. considered one section for each bone segment connected by screws and simulated the two sections using primitive 3D geometrical objects [1]. The model is considered isotropic; therefore, trabecular mechanical properties were ignored and replaced by cortical bone mechanical properties. Further modification of this model, defining it as an orthotropic model, is expected to enhance the accuracy of the results, similar to the approach Cox et al. selected [2]. They included the orthotropic behavior of the mandible using an estimated thickness of cortical bone on
trabecular bone and assigning the corresponding mechanical properties [2]. Numerous other fixation configurations exist that need to be analyzed. This study suggests that screw positioning is a key factor in reducing the force applied to the bone, preventing the loosening of the screws, and enhancing bone fracture recovery. Cox et al. compared the stress values of two identical methods that differ only in the screw material: one model was built using titanium screws and another using resorbable screws. They found similar mandible stress values in the two models [2]. This observation potentially emphasizes the importance of the fixation configuration even compared to the screw material properties.
V. CONCLUSION

Results of this study suggest that Model 3 (horizontal T) is the configuration of choice among the five methods analyzed, since it minimizes the von Mises stress values in the entire model, the screws, and the bone-screw interface.
REFERENCES

1. Maurer P, Holweg S, Schubert J (1999) Finite-element-analysis of different screw-diameters in the sagittal split osteotomy of the mandible. J Cranio-Maxillofacial Surgery 27:365-372
2. Cox T, Kohn M, Impellluso T (2003) Computerized analysis of resorbable polymer plates and screws for the rigid fixation of mandibular fractures. J Oral Maxillofac Surg 61:481-487
3. Gere JM, Timoshenko SP (1984) Mechanics of Materials (ed 2). Belmont MA, McGraw Hill, p 745
4. Lee M, Lin C, Tsai W, Lo L (2010) Biomechanical stability analysis of rigid intraoral fixation of bilateral sagittal split osteotomy. Journal of Plastic, Reconstructive & Aesthetic Surgery 63:451-455
5. Sato F, Asprino L, Morae C (2010) Comparative biomechanical and photoelastic evaluation of different fixation techniques of sagittal split ramus osteotomy in mandibular advancement. J Oral Maxillofac Surg 68:160-166
6. Suuronen R, Laine P, Pohjonen T, Lindqvist C (1994) Sagittal split osteotomy fixed with biodegradable self-reinforced poly-L-lactide screws. Int J Oral Maxillofac Surg 21:303-308
Extraction and Characterization of a Soluble Chicken Bone Collagen Tiffany Omokanwaye, Otto Wilson Jr., Hoda Iravani, and Pramodh Kariyawasam Catholic University of America/Biomedical Engineering Department, Washington, D.C., USA

Abstract— Our understanding of collagen has increased considerably. Collagen is a fibrous protein that plays a key role in the framework and development of hard and connective tissue. Collagen offers vast structural possibilities for modifications to generate novel properties, functions, and applications, especially in the bone implant arena. Collagen is found throughout nature in skin, tendon, bone, cartilage, etc., and in numerous different species. Of the 28 identified types, collagen I is the most abundant in nature and the main component of bone. Despite its abundance, its utilization and characterization have been restricted by its insolubility. Collagen solutions are usually viscous and difficult to study using techniques such as zeta potential analysis, chromatography and electrophoresis. The specific aim of this study was to develop a protocol to extract and solubilize collagen from chicken femur bone. In the thermogram of our chicken femur bone collagen (CBC), there was only a 35% reduction in weight. This signifies that the CBC collagen sample contained 65% inorganic material. Elemental analysis revealed that CBC consisted of 11% carbon (C), 2% hydrogen (H), and 3% nitrogen (N), totaling 16%. This is substantially lower than 77%, the calculated theoretical value of CHN% in collagen. Our CBC shared the theoretical carbon/nitrogen value of 3.4. Elemental analysis confirms that the CBC contained elements other than carbon, nitrogen, and hydrogen. This suggests that the CBC sample was not fully demineralized and that adjustments need to be made to the existing collagen extraction protocol. Further studies are needed to confirm the chemical composition of the CBC in solution. This work is part of a larger study to compare collagen I, derived from bone, and α-chitin, derived from crab exoskeleton. Keywords— Collagen, Bone, Solubilization.
I. INTRODUCTION

For a thorough and systematic design of a bone-inspired bio-implant, it is fundamental to study the individual components [1]. Bone, one of nature's masterpieces, is a remarkable, living, highly vascular, dynamic, mineralized connective tissue, characterized by its hardness, resilience, growth mechanisms, and ability to remodel and repair itself [2]. Bone is a complex tissue composed of inorganic and organic matrices that provide support and mechanical strength. The inorganic matrix, primarily hydroxyapatite, provides compressional strength, and the organic matrix, which is predominantly collagen, provides tensile strength and a structural scaffold for the
inorganic matrix [3] (Fig. 1). In order to study the organic component of bone, collagen must be extracted from the composite matrix.
Fig. 1 Bone building blocks: mineral hydroxyapatite (calcium phosphate, Ca10(PO4)6(OH)2) and organic collagen I (C12H24N3O4)

Collagen, a fibrous protein, appears to have been first reported by Zachariades in 1900 [4]. More than 28 different types of collagen have been identified [5] and classified primarily according to their physiological structure. Collagen type I, the main constituent of bone, tendon, and skin, is synthesized by fibroblasts, smooth-muscle cells, and osteoblasts [6]. Collagen I consists of relatively few constituent elements: H, C, O, and N [7]. The empirical chemical formula for collagen is C12H24N3O4. The basic unit of collagens is a polypeptide consisting of the repeating sequence (G-X-Y)n. Glycine (G) is small, and is the only amino acid that fits in the crowded interior of the triple helix. X is usually proline and Y is usually hydroxyproline [8]. Collagen type I usually consists of three coiled subunits: two α1(I) and one α2(I) chains forming fibrils of 50 nm in diameter [9]. The conventional procedures used to extract bone collagen using hypertonic salt solutions or dilute organic acids do not yield any significant amount of material, and more drastic conditions yield degraded or denatured products [10]. Once a product is extracted, collagen is nearly insoluble because of cross-linking. The large size (300,000 mol wt) and asymmetry (3,000 by 15 Å) of native collagen hamper its chemical characterization. Solutions of collagen are usually too viscous for study by the usual techniques such as zeta potential analysis, chromatography and electrophoresis [11]. This communication outlines an
attempt to extract and solubilize collagen from chicken femur bone.
II. MATERIALS AND METHODS

Collagen samples derived from bovine Achilles tendon (BAT) were obtained from Worthington. Chicken femur bones were obtained from a local grocery store. The meat and cartilaginous ends of the chicken femur were removed to ensure that no cartilage was included in the bone specimens. The marrow was removed. The bone was washed and dried overnight. The bones were then ground in a coffee grinder for 10 min. The mass of the bone chips was 10.0856 g.

Extraction and Solubilization--- Several techniques have been developed to extract collagen [12, 13, 14]. Isolation of collagen from chicken bone (CBC) involved demineralization and purification steps. Bone samples, prepared as above, were suspended in 40 ml of 1 M HCl and boiled for approximately 30 minutes with constant stirring to rid the bone chips of the mineral, calcium phosphate. This was followed by rinsing with de-ionized water until a neutral pH was reached. The weight of the bone sample after demineralization and rinsing was 4.7419 g. A 2.5419 g portion of the 4.7419 g sample was purified using 0.1 M NaOH for 30 minutes with constant stirring. Again, the extract was rinsed to neutral pH with de-ionized water and dried at room temperature for 24 h under the hood. Quantitative analyses of the collagen extracted from chicken bone were carried out to determine the effectiveness of the procedure.

Elemental Analysis--- Elemental analysis to determine the carbon (C), hydrogen (H), and nitrogen (N) composition was performed by Prevalere Life Sciences, LLC, Whitesboro, NY. Theoretical values of CHN were calculated for collagen using its empirical formula and molecular weight (mol wt). For example, collagen has the empirical formula C12H24N3O4 (mol wt = 274); therefore, the molecular weight ratio of N to the collagen molecule is 42/274, or 0.1533 (15.33%). The same ratios can be calculated for C and H. Additionally, other ratios such as C/N and CHN% can be calculated. These values are provided for collagen from bovine Achilles tendon and from chicken femur bone in Table 1.

Thermal Analysis (TGA)--- Thermogravimetric analysis was carried out on dried collagen samples using a Shimadzu TGA-50H Thermogravimetric Analyzer (Kyoto, Japan). Heating was performed in an alumina pan in an air flow (20 ml/min) at a rate of 10 °C/min up to 800 °C. The percentage of weight loss was calculated using the formula (Wt/W0) × 100, where W0 is the initial sample weight and Wt is the weight at temperature t, and was plotted as a function of temperature in Fig. 2.

Surface Morphology (SEM)--- Collagen samples were sputtered with an ultrathin layer of gold and studied with a Hitachi SU-70 Schottky field emission gun scanning electron microscope at the Nanoscale Imaging, Spectroscopy, and Properties (NISP) Laboratory in the Kim Engineering Building at the University of Maryland at College Park.
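The theoretical ratios described under Elemental Analysis can be reproduced directly from the empirical formula and standard atomic weights; a short sketch (the paper uses integer atomic weights, so the figures below may differ in the second decimal):

```python
# Standard atomic weights (g/mol)
W = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Empirical formula of collagen used in the paper: C12H24N3O4
formula = {"C": 12, "H": 24, "N": 3, "O": 4}
mol_wt = sum(W[el] * n for el, n in formula.items())   # ~274 g/mol

for el in ("C", "H", "N"):
    pct = 100 * W[el] * formula[el] / mol_wt
    print(f"%{el} = {pct:.2f}")          # ~52.5, ~8.8, ~15.3 (theory row of Table 1)

c_over_n = (W["C"] * formula["C"]) / (W["N"] * formula["N"])
print(f"C/N = {c_over_n:.2f}")           # ~3.43
```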
III. RESULTS

Elemental Analysis--- Results of the elemental composition measurements and theoretical values for collagen from chicken femur bone and from bovine Achilles tendon are shown in Table 1. The experimental value of %N for CBC was 3.20%, approximately a third of the theoretical value and approximately half the value for BAT collagen. The experimental CHN% for CBC was 16.40%, whereas a value close to the theoretical 76.64% was expected. Since the percentage of CHN is low, it can be assumed that other elements, such as calcium and phosphorus, the components of the mineral matrix of bone, are present. This should be confirmed by more detailed elemental analysis.

Table 1 Elemental composition of collagen

Sample         %C     %H    %N     %CHN   C/N
Collagen (a)   52.55  8.76  15.33  76.64  3.43
Collagen (b)   41.48  5.76  7.03   54.27  5.90
Collagen (c)   11.02  2.18  3.20   16.40  3.44

(a) Theoretical
(b) Bovine Achilles Tendon (BAT), Worthington
(c) Chicken Bone (Femur), CBC
Thermal Analysis (TGA)--- In the thermogram of BAT collagen (Fig. 2), three thermal events can be observed. The first occurs in the range 50-100 °C and is attributed to water loss. The second occurs in the range of 250-350 °C and could be attributed to the breakdown of the collagen triple helix. The third occurs in the range of 450-600 °C and could be attributed to the further denaturation of collagen to gelatin. Similar behavior was observed in collagen from frog skin, amniotic membrane, and calfskin [15]. In the thermogram of CBC (Fig. 2), two thermal events can be observed. The first occurs in the range of 50-100 °C and is attributed to water loss. The second occurs in the range of 200-300 °C. However, there was only a 35% reduction in weight. This signifies that the CBC collagen
sample contained 65% inorganic material. Thus, our CBC sample was not fully demineralized.
Fig. 2 Thermogram of CBC and BAT collagen

Fig. 4 Experimental chicken bone collagen SEM micrograph
Surface Morphology (SEM)--- SEM micrographs of collagen from bovine Achilles tendon (Fig. 3) display lattice-like structures. The fibers in the BAT collagen micrographs were oriented parallel, perpendicular, or at intermediate angles to the surface on successive planes. The SEM micrograph of the collagen from bovine Achilles tendon demonstrates the characteristic 64 nm banding pattern of a collagen fiber [16].

Fig. 3 Bovine Achilles tendon SEM micrographs

SEM micrographs of collagen from chicken femur bone (Fig. 4) display crystal-like structures. This micrograph confirms that our sample was not fully demineralized.

IV. DISCUSSION

To complete the study comparing collagen I, derived from bone, with α-chitin, derived from crab exoskeleton, bone collagen was preferred. However, collagen derived from bone was difficult to obtain. The closest connective tissue to bone is tendon. In tendon, filamentous collagen fibrils, which make up collagen fibers, are the main structural elements. In bone, the basic building blocks are collagen fibrils and crystals of carbonate apatite [17]. Collagen may or may not swell in water depending on the tissue from which it is extracted. This highlights one of the differences between bone collagen and tendon collagen: bone collagen (which normally calcifies) does not swell in water, whereas tendon collagen (which normally does not calcify) does [18]. Although the molecular structure, macromolecular aggregation state, and amino acid composition of bone collagens are qualitatively similar to those of the collagens of unmineralized soft tissue, collagen as it is organized in bone possesses certain characteristic physicochemical properties which distinguish it from most soft tissue collagens. For example, in contrast to the collagen of most other tissues, very little of the collagen of demineralized bone, even in young, rapidly growing animals, can be extracted in the solutions normally used to extract the undenatured protein from unmineralized soft tissue. On the other hand, unlike the insoluble collagen of bovine Achilles tendon, a large fraction of chicken bone collagen can be extracted as gelatin at neutral pH [19]. Because of the difficulty in studying the native molecules, most of the information on collagen structure has been obtained by studying the denatured protein. To illustrate this fact, chick bone collagen was found to be composed of α1 chains. Reasons for collagen differences in various tissues can be assumed to be related to their specific functions. The solubility (and presumably the degree of cross-linking) of collagen varies in different tissues [11].
V. CONCLUSION

Collagen has been widely applied in food, cosmetics and medicine because of its good biological and functional properties. However, collagen has defied attempts at investigation primarily because of difficulties in isolation. Disruptive and dissociative techniques or extraction with denaturing solvents yield very low amounts of collagen, and in a partially degraded form [20]. Although a chicken bone gelatin was expected, based on our results a sample that was not fully demineralized was produced. Adjustments are needed to the existing collagen extraction protocol to ensure complete demineralization of the bone samples.
ACKNOWLEDGMENT

The authors would like to acknowledge support from the NSF Biomaterials Program (grant number DMR-0645675). The authors also would like to thank the Nanoscale Imaging, Spectroscopy, and Properties (NISP) Laboratory in the Kim Engineering Building at the University of Maryland at College Park, and Dr. Lloyd for the use of the Shimadzu TGA-50H.
REFERENCES

1. Daamen W F et al. (2003) Preparation and evaluation of molecularly-defined collagen-elastin-glycosaminoglycan scaffolds for tissue engineering. Biomaterials 24:4001-4009
2. Hing K A (2004) Bone repair in the twenty-first century: biology, chemistry or engineering? Philos Trans R Soc Lond A, 2821-2850
3. Rath N C et al. (1999) Comparative Differences in the Composition and Biomechanical Properties of Tibiae of Seven- and Seventy-Two-Week-Old Male and Female Broiler Breeder Chickens. Poult Sci 78:1232-1239
4. Slack H G B (1955) A Short Review of Connective Tissue Metabolism. Ann Rheum Dis 14:238-242
5. Gobeaux F et al. (2008) Fibrillogenesis in Dense Collagen Solutions: A Physicochemical Study. J Mol Biol 376:1509-1522
6. Kuhn K, Glanville R W (1980) Molecular Structure and Higher Organization of Different Collagen Types. Academic Press, London
7. Meyers M A et al. (2008) Biological materials: Structure and mechanical properties. Progr Mater Sci 53:1-206
8. Smith C A, Wood E J (1991) Biological Molecules: Molecular and Cell Biochemistry. Chapman & Hall, New York
9. Kolacna L et al. (2007) Biochemical and Biophysical Aspects of Collagen Nanostructure in the Extracellular Matrix. Physiol Res 56:S51-S60
10. Strawich E, Nimni M E (1971) Properties of a Collagen Molecule Containing Three Identical Components Extracted from Bovine Articular Cartilage. Biochemistry 10:3905-3911
11. Martin G R (1971) Recent Progress in Collagen Research. J Dent Res 50:268-274
12. Gurfinkel D M (1987) Comparative Study of the Radiocarbon Dating of Different Bone Collagen Preparations. Radiocarbon 29:45-52
13. Brown T A et al. (1988) Improved Collagen Extraction by Modified Longin Method. Radiocarbon 30:171-177
14. Gerstenfeld L C et al. (1994) Selective Extractability of Noncollagenous Proteins from Chicken Bone. Calcif Tissue Int 55:230-235
15. Shanmugasundaram N, Ravikumar T, Babu M (2004) Comparative Physico-chemical and in Vitro Properties of Fibrillated Collagen Scaffolds from Different Sources. J Biomater Appl 18:247-264
16. Franchi M et al. (2008) Different Crimp Patterns in Collagen Fibrils Relate to the Subfibrillar Arrangement. Connect Tissue Res 49:85-91
17. Vanderby Jr R, Provenzano P P (2003) Collagen in connective tissue: from tendon to bone. J Biomech 36:1523-1527
18. Bonucci E (1992) Calcification in Biological Systems. CRC Press, Boca Raton
19. Katz E P, Francois C J, Glimcher M J (1969) The Molecular Weights of the Alpha Chains of Chicken Bone Collagen by High-Speed Sedimentation Equilibrium. Biochemistry 8:2609-2615
20. Deshmukh K, Nimni M E (1973) Isolation and Characterization of Cyanogen Bromide Peptides from the Collagen of Bovine Articular Cartilage. Biochem J 133:615-622
The corresponding author:
Author: Otto Wilson, Jr., PhD
Institute: Catholic University of America
Street: 620 Michigan Ave., NE
City: Washington, DC
Country: USA
Email: [email protected]
A Model for Human Postural Regulation Yao Li and William S. Levine Department of Electrical and Computer Engineering, University of Maryland, College Park, USA Abstract— We present a computational model of a quietly standing human. The model is composed of a system of 3 rigid links. There are two joints, one at the ankle and one at the hip. This is consistent with considerable experimental data which indicates that humans keep their knee angle nearly constant when there are small perturbations to their posture. Our model also includes the delays associated with sensing and actuation by the human neuromuscular system. An optimal control problem is designed with a cost function that contains a cost for state errors that is quartic or some higher even power and a cost for control that is quadratic. This performance measure tolerates larger state errors than the usual quadratic criterion and reduces the energy expended by the controller, a very plausible goal for a neuromuscular controller. We have solved the resulting constrained nonlinear optimal control problem. Simulations of the optimal control demonstrate better agreement with experimental studies of posture than other analytical models. Furthermore, we propose an implementation of the optimal control using only neurons and muscles. Straightforward learning/adaptation schemes would then make the proposed optimal controller a feasible solution to the real posture regulation problem.
Keywords— Human Balance, Optimal Control, Convex Programming, Coordination.

I. INTRODUCTION

Motor control, including control of balance, body movement and planning of movements, is a complex task that involves the central nervous system (CNS) and a variety of other systems including the musculoskeletal, visual, vestibular and proprioceptive systems [1][2][3][4]. In experimental studies, the position of human body segments can be tracked and measured with cameras and other tracking equipment. However, we are not able to measure the internal forces and torques in a living human directly. The human sensorimotor system is a system with the capability of learning, developing, and adapting to improve performance. Many biological behaviors are likely to be optimal with respect to some performance measure that involves energy. It is reasonable to believe that the human is (unconsciously) optimizing some performance measure as he regulates his posture. In engineering and mathematics, optimal control methods require a performance criterion that describes the goal and then fill in all the control details automatically by searching for the control strategy that achieves the best possible performance. Ideally, the cost assumed in a human optimal control model should involve cost terms for states and controls and correspond to what the sensorimotor system is trying to achieve. A properly designed biomechanics model and its computer implementation can help us understand how humans control their muscles to generate movements and how their muscles, joints, and ligaments interact during motor activity [6][7][8]. Such a model could also be used to compute the joint forces and torques from knowledge of the external forces and limb trajectories. This inverse solution could provide insight into the joint reaction forces and the muscle activities at each joint [9]. It would be particularly useful for clinical tests on humans as they perform motor tasks. This model could aid in the clinical diagnosis and treatment of motor control disorders, and the development of functional electrical stimulation for recovery of lost motor function.

II. DYNAMICAL MODEL

A. Quiet Standing Model

In this work, we present a computational model of a quietly standing human which uses three rigid, connected segments to represent the foot, leg (locked knee), and torso. This two-joint, three-segment model is controlled by torques at the ankle and hip joints, as depicted in Fig. 1.
Fig. 1 A quiet standing model is composed of 3 rigid links, with control torques at the ankle and hip. The knee is locked

We first derived the basic equations of motion using the Euler-Lagrange method:
(1)
J (q )q + G (q, q ) = U q
Since the body dynamics during standing have been demonstrated to be well-approximated by a linear model for small perturbations, we linearize the double inverted pendulum model around the unstable equilibrium point and also define the small angular deviations from the equilibrium. ⎛ φ1 ⎞ ⎛ φ1* ⎞ ⎛ +φ1 ⎞ ⎜ ⎟ ⎜ * ⎟ ⎜ ⎟ ⎜ φ2 ⎟ = ⎜ φ2 ⎟ + ⎜ +φ2 ⎟ * ⎜ uankle ⎟ ⎜ u ⎟ ⎜ +uankle ⎟ ⎜ ⎟ ⎜ ankle ⎟ ⎜ ⎟ ⎜ uhip ⎟ ⎜ u * ⎟ ⎜ +uhip ⎟ ⎝ ⎠ ⎝ hip ⎠ ⎝ ⎠
(2)
It is useful to convert this to a dimensionless model. This makes it easier to apply the model to a variety of humans having different height, weight, moments of inertia. We then introduce the state space variables and develop dynamics in the form x = Ax + Bu , where ⎡ +φ1 ⎤ ⎡ x1 ⎤ ⎢ +φ ⎥ 2 ⎥ ⎢ ⎥ ⎢ x ⎢ ⎥ x = ⎢ 2 ⎥ = ⎢ d (+φ1 ) ⎥ ⎢ x3 ⎥ dt ⎥ ⎢ ⎥ ⎢ ⎣⎢ x4 ⎦⎥ ⎢ d (+φ ) ⎥ ⎢ 2 ⎥ ⎣ dt ⎦
,
⎡ u ⎤ ⎡+uankle ⎤ u = ⎢ 1⎥ = ⎢ ⎥ ⎣u2 ⎦ ⎢⎣ +uhip ⎥⎦
B. Modeling Delay It is known that there is significant delay in the feedback due to neural transmission, muscle activation, and possibly neural processing. There are actually two delays in the postural control system. There is a delay in the application of the control and there is a delay in the CNS receiving the sensors’ data. Delay in the discretized model simply adds states to the linear model. In order to easily consider the delay effects both on the observations and control variable, we discretize the time by letting, z4 nd +1 = x1 (k )
z4( nd +1) = u1 (k − nd )
z4 nd +2 = x2 (k ) z4 n +5 = u1 (k − nd + 1) d z4 nd +3 = x3 ( k ) z4 nd +4 = x4 ( k )
z5 nd +4
# = u1 ( k − 1)
III. NONLINEAR OPTIMAL CCONTROL It is reasonable to believe that the human is (unconsciously) optimizing some performance measure as he regulates his posture. We propose a Higher Order Control (HOC) performance measure being minimized in human posture regulation has the form J=
z5( nd +1) = u2 (k − nd )
y (k ) = Cz z (k )
Inclusion of the delay in the observations changes the optimal control problem substantially. This system stores multiple units of each of the previous states and controls, and
8
N
i =1
n =0
di zi2 p [k ] + ∑ n=0 d9u12 q [k ] +d10u22 q [k ] N
(5)
Then, the discrete-time optimal control problem is to minimize Equation (5) subject to the constraints defined in Equation (3). To solve the optimal control problem computationally, introduce a new overall optimization variable which contains all the state and control variables:

$$s = [z(0), u(0), z(1), u(1), \ldots, z(N), u(N)]^T$$

Then rewrite the performance measure and dynamics in terms of the new overall variable as

$$\min\ \sum_{n=0}^{N} D_s s^{\Lambda}[k] \quad \text{s.t.} \quad A_s s = b$$

where $\Lambda \in \{p, q\}$, $D_s = \mathrm{diag}(d_1, d_2, \ldots, d_{10})$, and

$$A_s = \begin{bmatrix} I & 0 & \cdots & \cdots & 0 \\ -A_z & -B_z & I & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & -A_z & -B_z & I \end{bmatrix}, \qquad b = \begin{bmatrix} z(0) \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
The Newton-KKT method is a good choice for solving this convex programming problem computationally. The key step is the repeated solution of the following system of linear equations involving the gradient and the Hessian.
$$\begin{bmatrix} \nabla^2 J(s^{(i)}) & A_s^T \\ A_s & 0 \end{bmatrix}\begin{bmatrix} \Delta s_{nt}^{(i)} \\ w \end{bmatrix} = \begin{bmatrix} -\nabla J(s^{(i)}) \\ 0 \end{bmatrix} \qquad (6)$$
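A sketch (ours, with illustrative names) of the two computational pieces: assembling the stacked constraints A_s s = b and taking one Newton-KKT step. grad_J and hess_J stand for user-supplied derivatives of (5); in practice a line search would scale the step.

    import numpy as np

    def stacked_constraints(Az, Bz, z0, N):
        nz, nu = Bz.shape
        blk = nz + nu                          # width of one (z(k), u(k)) block
        As = np.zeros((nz * (N + 1), blk * (N + 1)))
        b = np.zeros(nz * (N + 1))
        As[:nz, :nz] = np.eye(nz)              # first block row pins z(0)
        b[:nz] = z0
        for k in range(N):                     # z(k+1) - Az z(k) - Bz u(k) = 0
            r, c = nz * (k + 1), blk * k
            As[r:r + nz, c:c + nz] = -Az
            As[r:r + nz, c + nz:c + blk] = -Bz
            As[r:r + nz, c + blk:c + blk + nz] = np.eye(nz)
        return As, b

    def newton_kkt_step(s, grad_J, hess_J, As):
        """One solve of the KKT system (6); returns updated s and multipliers."""
        n, p = As.shape[1], As.shape[0]
        KKT = np.block([[hess_J(s), As.T],
                        [As, np.zeros((p, p))]])
        rhs = np.concatenate([-grad_J(s), np.zeros(p)])
        sol = np.linalg.solve(KKT, rhs)
        return s + sol[:n], sol[n:]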
By repeatedly solving this problem for different initial conditions and saving the first step of the solution we can obtain an approximately optimal feedback solution, and the approximation can be made as accurate as desired. There are sensors in the human body that measure every state of the system. Thus, in the absence of delay, the optimal control would be a collection of nonlinear gains mapping the states into the controls, of the form $u(t) = f(z(t))$. The sensor delay means that, although all the state components are observed, their values are not known by the controller until some time in the future. A truly optimal control would, in that case, be a function of the entire collection of past states and controls. However, a good approximation to that optimum is, in many cases, the controller shown in Fig. 2. The nonlinear gain is exactly the same as in the full state observation case except that the input to that gain is an estimate of the state, $u(t) = f(\hat{z}(t))$, where $\hat{z}(t)$ is the output of a state predictor, a subsystem that uses the observations and the known prior controls to predict the delayed measurements. The predictor consists of two elements. The first is a model of the human that includes the delays and the nonlinear gain; the second is a comparator. The output of the model goes both to the nonlinear gain and to the comparator within the predictor. The comparator measures the difference between the prediction and the observation and uses that difference to update both the prediction and the predictor. A nonlinear gain and a predictor of this type would be the usual engineering solution to a control problem of this type and, for reasons that will be elaborated in the following section, is the likely solution in humans and other mammals.
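A rough sketch of the predictor idea (ours, not the authors' implementation): form a crude state estimate from the delayed observation, roll it forward through the model using the controls applied during the delay interval, and feed the prediction to the nonlinear gain f.

    import numpy as np

    def predict_and_control(y_delayed, past_u, Az, Bz, Cz, f):
        """past_u: controls applied during the delay interval, oldest first."""
        z_hat = np.linalg.pinv(Cz) @ y_delayed   # least-squares state estimate
        for u in past_u:                          # propagate across the delay
            z_hat = Az @ z_hat + Bz @ u
        return f(z_hat)                           # nonlinear gain on prediction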
IV. NEURONAL CONTROL MODEL
Consider the motor unit. It is a single neuron and the collection of muscle fibers excited by that neuron. There are at least hundreds of motor units in each mammalian muscle. Because neurons are threshold devices, it is easy to implement a nonlinear gain by simply adjusting the thresholds of the motor units. Thus, if the input to the motor units came from a state predictor, the nonlinear gain portion of Fig. 2 would be the natural way to implement the control. If one takes into account the size principle, then a nonlinear controller of exactly the form given by our optimal control model would be virtually automatic.
Fig. 2 A simple human balance control system with delay
The remaining question is how a predictor can be constructed using only neurons. This is a harder problem which we have not yet fully solved. However, it is well accepted that cats, at least, have a central pattern generator (CPG) that produces the signals to the motor units that then produce the forces that generate repetitive movements. Surprisingly, the CPG seems to require sensor feedback to function. Why? One plausible answer is that the CPG consists of a nominal (open loop—no feedback) signal and a predictor. It is the predictor that needs the feedback. Furthermore, one might ask why people have not observed a CPG for posture. A possible answer is that the predictor used by the posture regulator needs feedback from vision and the otolith organs. These come from higher in the brain than the sensory feedback needed for pure locomotion. Thus, when one performs the surgery that facilitates the CPG experiments in walking, one destroys the feedback that is essential to the predictor in the posture regulator. At this point this is just speculation. However, it does offer a consistent explanation for the experimental observations and it is very much the way an engineer would design the system.
V. RESULTS
We have successfully solved a series of constrained nonlinear optimal control problems. The method has been tested on standing models that include the delays in the human sensory and muscle activation systems. This means that the solution to the optimal control problem, in the case of full state feedback, would be a nonlinear but time-invariant feedback gain function with multiple inputs and multiple outputs. We present, for simplicity of exposition, the results assuming the hip is also not moving. This reduces the problem to two segments and one control. The simulations are based on the simplified sway model defined in Equation (3) using Peterka's body parameters [4], as shown in Table 1. The parameters for the steady-state (balance maintenance) simulation are listed in Table 2. The simulation results, compared with Linear Quadratic Gaussian Control (LQG), are depicted in Figs. 3 and 4.

Table 1 Body characteristics and dimensionless model parameters

Symbol | Quantity                    | Value
M      | Body mass                   | 76 kg
Io     | Body moment of inertia      | 66 kg·m²
h      | CM height over ankle joint  | 0.87 m
g      | Acceleration of gravity     | 9.8 m/s²
τ      | Sample interval             | 0.25 s
nd     | Number of delay states      | 2
N      | Total samples               | 80
Table 2 Steady-state simulation parameters

Symbol | Quantity                  | Value
z1(0)  | Angular displacement      | +0.01°
z2(0)  | Angular velocity          | +0.01°
Ξ, Θ   | Noise standard deviation  | 0.1
T      | Simulation duration       | 20 s
Fig. 3 Trajectory of steady-state response
Fig. 4 Control input of steady-state response

As we can see from the figures, the optimal controls for the HOC performance criterion (blue line) are much more aggressive in reducing large deviations than the LQG control (red line). On the other hand, HOC relaxes for small errors and therefore generates more sway movement than LQG. Notice that more complex models and performance criteria of the form of Equation (5) can be solved by the same techniques.

VI. CONCLUSIONS AND FUTURE WORK
We have presented an optimal control model for a quietly standing human. This model includes a three-segment inverted pendulum controlled by joint torques at the ankle and at the hip. It also includes an optimization criterion that is quadratic in the controls but quartic in the states. The solution to the nonlinear optimal control problem was obtained by first approximating the infinite-time performance measure by a finite-time one. The entire problem was then discretized in time. The result is a convex programming problem which can be conveniently and reliably solved. By repeatedly solving this problem for different initial conditions and saving the first step of the solution we obtain an approximately optimal feedback solution.

Nonlinear state estimator/predictor - In reality, all the states are measured, but with significant delays. Our work so far has led to the hypothesis that the CNS control of posture must incorporate a prediction of the state, i.e., of those variables that describe state information that has been sensed but is still making its way via the neuronal paths to the CNS. For example, the leg muscles must react to foot strike in running and walking well before the impact could be sensed and processed. The existence of a Central Pattern Generator (CPG) provides evidence that such an estimator could be implemented using neuronal circuits. We have started to work on neuronal implementations of Kalman filters and Luenberger observers using realistic models for neurons.

REFERENCES
1. Accornero N, Capozza M, Manfredi GW. Clinical multisegmental posturography: age-related changes in stance control. Electroenceph Clin Neurophysiol 105: 213-219, 1997.
2. Aramaki Y, Nozaki D, Masani K, Sato T, Nakazawa K, Yano H. Reciprocal angular acceleration of the ankle and hip joints during quiet standing in humans. Exp Brain Res 136: 463-473, 2001.
3. Collins JJ, DeLuca CJ. The effects of visual input on open-loop and closed-loop postural control mechanisms. Exp Brain Res 103: 151-163, 1995.
4. Peterka RJ. Postural control model interpretation of stabilogram diffusion analysis. Biol Cybern 82: 335-343, 2000.
5. Boyd SP, Vandenberghe L. Convex Optimization. Cambridge University Press, 2003. Material available at http://www.stanford.edu/~boyd/cvxbook.html
6. Bennett SE, Karnes JL. Neurological disabilities: assessment and treatment. Lippincott Williams & Wilkins, 1998.
7. Winter DA, Patla AE, Rietdyk S, Ishac MG. Ankle muscle stiffness in the control of balance during quiet standing. J Neurophysiol 85: 2630-2633, 2001.
8. Li Y, Levine WS. An optimal control model for human postural regulation. 2009 American Control Conference, St. Louis, Missouri, June 10-12, 2009, pp 4705-4710.
9. Li Y, Levine WS. An optimal model predictive control model for human postural regulation. 17th Mediterranean Conference on Control and Automation, June 24-26, 2009, pp 1143-1148.
10. Li Y, Levine WS. Models for human postural regulation that include realistic delays and partial observations. Proc. of 48th IEEE Conference on Decision and Control, Shanghai, China, December 16-18, 2009, pp 4590-4595.
Author: William S. Levine
Institute: University of Maryland, Department of Electrical and Computer Engineering
City: College Park
Country: USA
Email: [email protected]
Development of an Average Chest Shape for Objective Evaluation of the Aesthetic Outcome in the Nuss Procedure Planning Process
K.J. Rechowicz1, R. Kelly2, M. Goretsky2, F. Frantz2, S. Knisley3, D. Nuss2, and F.D. McKenzie1
1 Department of Modeling and Simulation, Old Dominion University, Norfolk, VA 23529, USA
2 Department of Surgery, Eastern Virginia Medical School, Norfolk, VA 23507, USA
3 Department of Mechanical Engineering, Old Dominion University, Norfolk, VA 23529, USA
Abstract— The Nuss procedure is a minimally invasive surgery for correcting pectus excavatum. Pectus excavatum (PE), also called sunken or funnel chest, is a congenital chest wall deformity which is characterized by a deep depression of the sternum. This condition affects primarily children and young adults and is responsible for about 90% of congenital chest wall abnormalities. Among the various PE treatment options, the Nuss procedure has been proven to have a high success rate and satisfactory aesthetic outcome. Although the Nuss procedure is routinely performed, the outcome depends mostly on the correct placement of the bar. Therefore, a Nuss procedure surgical planner would be an invaluable tool for ensuring the optimal aesthetic outcome. Unlike what is done today, the planner will require a means to evaluate the aesthetic results objectively. Therefore, we have developed a methodology to generate an average shape of the chest for comparison with results of the Nuss procedure planning process. Since the method is based on a sample of normal chests obtained from the male population using laser surface scanning, a challenging aspect is data processing of the sample. This includes hole filling, scan registration, ensuring consistency among scans, and shape vector creation. In our case, the first problem is solved using radial basis function approximation of the surface, optionally implemented in the scanning software. After registration, all scans are processed in order to meet the requirements of statistical shape analysis, which includes creation of a surface interpolant for each dataset that allows controlling consistency among scans and their resolution. If hole filling cannot be initially performed, we propose cross-sectioning the scans with parallel planes and then sampling points from fitted polynomials. As a result, an average shape of the chest is obtained and statistical shape analysis performed, disclosing the directions in which the largest variability occurs.
Keywords— shape analysis, 3D scanning, pectus excavatum, Nuss procedure, surgical planning.

I. INTRODUCTION
Pectus excavatum (PE), also called sunken or funnel chest, is a congenital chest wall deformity which is characterized, in most cases, by a deep depression of the sternum. This condition affects primarily children and young adults and is responsible for about 90% of congenital chest wall abnormalities [1]. Typically, this deformity can be found in approximately one in every 400 births and is inherited in many instances [2]. Among the various PE treatment options, the minimally invasive technique for the repair of PE, often referred to as the Nuss procedure, has been proven to have a high success rate, satisfactory aesthetic outcome and low interference with skeletal growth [2]. PE patients that undergo minimally invasive surgery report an improved ability to exercise and long-term improvements in measures of cardiac and pulmonary function [3,4]. The Nuss procedure involves placing a metal bar(s) underneath the sternum, forcibly changing the geometry of the ribcage. Apart from the physical improvement, positive psychological results are attributed to surgical correction [5,6] because a normal shape of the chest is restored, reducing the embarrassment, social anxiety, and depression present in PE [7].

Fig. 1 Use of the Nuss procedure planner
Although the Nuss procedure is routinely performed, the outcome depends mostly on the correct placement of the bar. It would be beneficial if a surgeon had a chance to review possible strategies for placement of the bar and the
associated appearance of the chest. Therefore, we propose the development of a Nuss procedure surgical planner, taking into account the biomechanical properties of the PE ribcage, emerging trends in surgical planners, deformable models, and visualization techniques. Figure 1 presents the Nuss procedure planning process and evaluation of the outcome. Individual patient parameters are used to recreate a geometrical model of the chest. Based on the placement of the bar, the incision points and the insertion points, a new shape is obtained. The model is deformed according to a finite element model approximation in order to achieve real-time performance. On two occasions, an average shape of the chest is employed: first, to evaluate the plan developed using the planner and, second, to evaluate the actual outcome in terms of aesthetics. In this paper, we will focus on developing a method to generate an average shape of the chest from 3D surface scans and, as an intermediate step, on identifying the main issues in appropriate 3D scan processing for use in statistical shape analysis.
II. RELATED WORK
Allen et al. [8] used 250 full body 3D scans from the CAESAR database [9] representing different body types for characterization of the range of human body shape variation. However, their main purpose was to find a way to bring those scans into correspondence and then find the main components responsible for variations in body shape. An idealized and hole-free model generated by an artist was matched to one of the scans using a set of manually selected markers. This resulted in a hole-free template mesh, which was used to register the remaining scans. Then, principal component analysis (PCA) was performed in order to identify the components with the highest variance. Another approach using the same dataset can be found in [10]. In contrast to [8], a markerless approach was proposed. After converting all scans to a volumetric representation, a shape vector was created where each element of that vector is the signed distance from a voxel to the surface. PCA was applied to 300 volumetric models of male subjects. In [11] the focus of the research was a female breast modeler. In the preprocessing phase, as in [8] and [12], a generic template mesh of the breast is fitted to each scan surface, making the finding of corresponding features among scans easier. The template mesh is deformed to match identifiable markers on the 3D scan. Because it contains only a few triangles, they are subdivided into smaller ones to allow for finer matching and more realistic results. Once data is parameterized, it was possible to perform PCA on a set of shape vectors corresponding to each subject. Due to strict preprocessing, it was assumed that each mesh has the same number of vertices and triangles. From this comparison, a common framework emerges that includes hole filling, scan registration, shape vector creation, PCA application, and component analysis. Our methodology will follow the same framework, utilizing marker-based registration.

III. METHODOLOGY
A. Data Acquisition
Eleven 3D surface scans of chests without PE were obtained for the purpose of another study [13] using a FastScan laser scanner (Polhemus, VT, USA). Each scan includes eleven markers related to features on the cartilage and sternum that can be considered useful in this study. Those landmarks are located on the sternal notch, center of the manubrium and xiphoid process.

B. Scan Processing
The nature of 3D scanning using a handheld device assumes the existence of small errors in the scanner measurements, caused by metal objects interfering with the magnetic locator or by movement of the object or reference. Therefore, a rigid body transformation is applied to each sweep within a scan to minimize those errors (Fig. 2, left).
Fig. 2 A point cloud consisting of registered sweeps (left), surface built from merged sweeps (middle), and surface RBF interpolation (right)
Next, the skin surface model is built directly from the point cloud represented by the scan sweeps, which are merged (Fig. 2, middle). A surface created by simply joining points belonging to a cloud may still contain holes and defects, which would require a significant amount of manual work. Therefore, we utilized a fast RBF interpolation technique, incorporated into the scanning software, to fill holes and smooth scans if necessary (Fig. 2, right).
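As a rough stand-in for that step (the actual hole filling is done inside the scanning software), a smooth height field can be fitted to the scattered points with SciPy's RBFInterpolator and evaluated over a regular grid; the function and parameter choices below are ours.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def fill_holes_rbf(points_xy, heights, grid_xy, smoothing=0.0):
        """points_xy: (n, 2) scan coordinates; heights: (n,) z values;
        grid_xy: (m, 2) query points covering the chest region, holes included."""
        rbf = RBFInterpolator(points_xy, heights,
                              kernel='thin_plate_spline', smoothing=smoothing)
        return rbf(grid_xy)   # smooth surface heights, with gaps filled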
C. Scan Registration
All scans have to be brought into correspondence in terms of orientation. A two-step registration process is employed using Delta (ARANZ, New Zealand). First, the landmarks are used to produce a coarse registration; then, the surface data itself is used to complete a fine registration. The surface registration uses circular surface patches around each landmark on the reference and subject scans to closely align the objects.

Fig. 3 Surface scan with landmarks for registration

D. Consistency among Scans
Statistical shape analysis cannot be performed unless consistency among scans is ensured. In other words, the same point across the whole dataset has to represent the same feature. In order to achieve this, an interpolant that fits a surface to the scattered data is created. In the next step, an equally spaced 2D mesh is built for each scan, with a user-chosen resolution. In our case, the mesh has 50 units in width by 100 units in height; however, this parameter can be adjusted to one's needs. Because all scans must cover the same area of the chest, the sternal notch landmark and xiphoid process landmark limit a scan horizontally, whereas the "yellow" landmark (the first left cartilage) and its projection in the x direction around the sternal notch (red) limit a scan vertically (Fig. 3). Then, for each vertex of the mesh, the interpolant is evaluated to obtain the third coordinate (Fig. 4), which is a linear interpolation of the value at that location.

Fig. 4 Mesh with the original vertex coordinates (blue), original data points (red), and interpolated coordinate (green)

E. Hole Filling
A linear interpolant is able to fill holes within the convex hull defined by the dataset. However, if a hole is too big, that approximation is not very accurate. In the case of missing data outside the convex hull and inside the area defined by the re-meshing process, a linear interpolant cannot be used and the nearest-neighbor method is not correct. Therefore, we propose multiple parallel cross-sections through the surface that result in a set of profiles. Points belonging to those profiles are used to fit curves that are later used for sampling. Based on those data points, a new interpolant is created. The cross-sections method can not only extrapolate points outside the convex hull but also interpolate data points that are missing within it.

F. Average Shape and PCA
Consistency among scans allows creating a shape vector that describes each geometry. Since the spacing of the 2D mesh was 50 by 100 units, each shape vector contains 15453 elements (m = 5151). Once the creation of the shape vectors is complete, we are ready to apply the mean and multivariate analysis in the form of PCA.

IV. RESULTS
A. Hole Filling
The cross-section method was used to fill manually created holes in a scan for validation. On the left of Figure 5 is the original scan with holes, and on the right is an error map comparing the original scan with our fitted and filled result. Only the rectangular strip in the center was used for the comparison. Within this rectangular area (showing mostly blue and some yellow colors), the maximal difference does not exceed 1 mm.

Fig. 5 Original scan with artificially removed data points (left), and original scan compared with the recreated area of interest (right)

B. Average Shape
Figure 6 shows the average shape created from calculating a mean over the sample of 11 surface scans. It can be seen that the average shape follows a normal shape of a chest with a shallow depression in the middle.
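Assuming the processing above yields one shape vector per subject in a consistent vertex order, the average shape and the principal components reported in the next subsection can be computed with a few lines of NumPy (a sketch, not the authors' code):

    import numpy as np

    def average_shape_and_pca(shape_vectors, n_components=4):
        """shape_vectors: (n_subjects, 3m) array, same vertex order everywhere."""
        X = np.asarray(shape_vectors, dtype=float)
        mean_shape = X.mean(axis=0)
        # PCA via SVD of the centered data; rows of Vt are the components
        U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        explained = S**2 / np.sum(S**2)    # fraction of variance per component
        return mean_shape, Vt[:n_components], explained[:n_components]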
Fig. 6 Average shape of the central chest

Since the landmarks undergo the same procedure, they can be used for evaluation of the surgical planning outcome. Because the Nuss procedure planning tool utilizes a deformable model of the ribcage, the landmarks for matching an average and a predicted shape will actually be projections of internal features, for instance, the sternal notch, center of the manubrium, and xiphoid process. These correspond to the set of landmarks chosen for this study.

C. Shape Analysis
Performing the PCA allows studying variations in the shape of the chest. Results show that the first four components explain almost 100% of the variability in the dataset, whereas the first two explain almost 90%. Therefore, each shape can be expressed as the sum of the average shape and a weighted sum of two to four components. This significantly decreases the amount of data needed to represent the chest models. Figure 7 presents the first two principal components in the chest shape space. The first component depicts the overall change in size of the chest, whereas the second component depicts width and curvature.

Fig. 7 The first two principal components in the chest shape space

V. CONCLUSIONS AND FUTURE WORK
The results obtained show that the implementation of the common framework for processing surface scans that we describe here is sufficient to obtain an average shape and perform shape analysis. For the next step, this approach and the results obtained will provide the basis of the assessment of shape that will be used in the Nuss procedure planner.

REFERENCES

1. Pretorius SE, Haller AJ, Fishman EK (1998) Spiral CT with 3D Reconstruction in Children Requiring Reoperation for Failure of Chest Wall Growth after Pectus Excavatum Surgery. Clin Imag 22: 108-116
2. Protopapas AD, Athanasiou T (2008) Peri-operative data on the Nuss procedure in children with pectus excavatum: independent survey of the first 20 years' data. J Cardiothorac Surg 3: 40
3. Sigalet DL, Montgomery M, Harder J (2003) Cardiopulmonary effects of closed repair of pectus excavatum. J Ped Surg 38: 380-385
4. Bawazir OA, Montgomery M, Harder J, Sigalet DL (2005) Midterm evaluation of cardiopulmonary effects of closed repair for pectus excavatum. J Ped Surg 40: 863-867
5. Krasopoulos G, Dusmet M, Ladas G, Goldstraw P (2006) Nuss procedure improved the quality of life in young male adults with pectus excavatum deformity. Eur J Cardiothorac Surg 29: 1-5
6. Lawson ML, Cash TF, Akers R, Vasser E, Burke B, et al. (2003) A pilot study of the impact of surgical repair on disease-specific quality of life among patients with pectus excavatum. J Ped Surg 38: 916-918
7. Einsiedel E, Clausner A (1999) Funnel chest. Psychological and psychosomatic aspects in children, youngsters, and young adults. J Cardiovasc Surg 40: 733-736
8. Allen B, Curless B, Popovic Z (2003) The space of human body shapes: reconstruction and parameterization from range scans. ACM SIGGRAPH Proc. vol. 22(3), Symposium on Comp. Animat., San Diego, California, USA, pp 587-594
9. Civilian American and European Surface Anthropometry Resource Project (CAESAR) at http://www.sae.org/standardsdev/tsb/cooperative/caescans.htm
10. Azouz ZB, Rioux M, Shu C, Lepage R (2005) Characterizing Human Shape Variation Using 3-D Anthropometric Data. The Visual Computer, Int J Comp Graph 22: 302-314
11. Seo H, Cordier F, Hong K (2007) A breast modeler based on analysis of breast scans. Comp Anim Virtual Worlds 18: 141-151
12. Seo H, Magnenat-Thalmann N (2003) An automatic modeling of human bodies from sizing parameters. SIGGRAPH Proc., Symposium on Interactive 3D Graphics, Monterey, California, USA, pp 19-26
13. Rechowicz K, McKenzie F, Yan Z, Bawab S, Ringleb S (2009) Investigating an approach to identifying the biomechanical differences between intercostal cartilage in subjects with pectus excavatum and normals in vivo: preliminary assessment of normal subjects. SPIE Proc. vol. 7261, Med Imag 2009: Vis., Image-Guided Procedures, and Model, Orlando, Florida, USA. DOI: 10.1117/12.813532
Sickle Hemoglobin Fiber Growth Rates Revealed by Optical Pattern Generation Z. Liu, A. Aprelev, M. Zakharov, and F.A. Ferrone Drexel University, Department of Physics, Philadelphia, PA, USA Abstract— Sickle hemoglobin (HbS), a mutant of normal adult hemoglobin (HbA), will polymerize at concentrations above a well-defined solubility. HbS polymerization occurs by a double nucleation mechanism. A fundamental element of the mechanism is the growth of individual fibers, whose diameter (21 nm) precludes direct optical visualization. We have developed a photolytic method to measure the HbS fiber growth speed in HbS carbon monoxide derivative (COHbS) solutions. The idea of this method is that a single fiber entering a region of concentrated deoxyHbS will generate large numbers of additional fibers by heterogeneous nucleation, allowing the presence of the first fiber to be inferred even if it is not directly observed optically. We implement this method by projecting an optical pattern consisting of three parts: a large incubation circle, a small detection area, and a thin channel connecting the two. The connecting channel is turned on for just a short time; only if fiber growth is fast enough will the detection circle polymerize. Our fiber growth rates obtained from pure HbS, HbS/HbA mixtures, and partial photolysis of HbS validate a simple growth rate equation including any non-polymerizing species in the activity coefficient calculation. For 25°C we have determined the monomer on-rate to be 82 ± 2 mM-1s-1. The monomer off-rate is 751 ± 79 molecules/sec in agreement with earlier DIC observations of 850 ± 170 molecules/sec. Combining the above, the method predicts a solubility of 16.0 ± 1.1 g/dl in good agreement with 17.2 g/dl measured from sedimentation methods.
Keywords— Sickle cell hemoglobin, photolysis, polymer elongation.

I. INTRODUCTION
Sickle hemoglobin (HbS) is a natural point mutation of hemoglobin A (HbA) in which a charged amino acid (Glu) is replaced by a hydrophobic one (Val) at the sixth position of each β-chain. The result of this structural change is that when HbS releases O2, it can form long, rigid, 21 nm diameter polymers, which make the red blood cells stiff, causing obstructions in the micro-circulation. The HbS polymerization process is well modeled by a double nucleation mechanism [1], in which homogeneous nuclei form in solution following a stochastic delay, and heterogeneous nuclei form on other polymers nucleated by either pathway. The nucleation rates depend sensitively on the HbS solution concentration and temperature. Because of the heterogeneous pathway, sickle hemoglobin naturally forms arrays of attached polymers, called polymer domains, with polymers that can easily assume a quasi-radial geometry because of polymer flexibility.

The HbS polymer growth rate, another fundamental pathological aspect of sickle cell disease, has been measured using DIC microscopy [2], in which the diffraction shadow of the growing polymer is observed (the polymer itself being too small for direct observation). Such isolated fibers are difficult to create, and have only been fashioned in a process of growth and dissolution of fibers that requires great skill and patience. Furthermore, such a process of growth and dissolution can easily generate fiber bundles, which would be indistinguishable from isolated fibers. Thus we sought a more automated, reliable and rapid method to follow fiber growth. Such a method could provide the concentration dependence of the growth rate, for example, which has not been previously measured. In this paper, we describe the implementation of a novel photolytic method that can be used to measure the individual polymer growth speed. The method relies on the fact that illumination of COHbS will reversibly remove the CO from the Hb so long as the light is kept on, allowing polymerization, but only in illuminated regions [3].

II. MATERIALS AND METHODS
A. Sample Preparation
HbS and HbA were purified by standard methods [3], exchanged into 0.15 M phosphate buffer, pH 7.35, then stored in liquid nitrogen as the oxy derivative until use. Hemoglobin concentrations were obtained by measuring hemoglobin carbon monoxide derivative (COHbS) optical absorption spectra in the Soret band (400-450 nm) and fitting the entire spectrum to known standards. Concentrated sodium dithionite (Na2S2O4) was added to give a final concentration of 50 mM. Around 2 µL of sample was sealed between glass cover slips with Kerr sticky wax in the CO box. The sample thickness was controlled to be around 10 µm, so that the optical density of the sample was kept around 1.3.
Fig. 1 Schematic of the apparatus. The sample is placed on a temperature-controlled stage, between 100× oil-immersion objectives, which are also temperature controlled. The optical pattern is generated after the laser emerges from the beam expander. The translator above then moves the mask through the field of view, as shown in Figure 2 below. The optical pattern is then imaged on the sample by a dichroic mirror, which also permits absorption spectra to be measured to characterize the sample, and further allows the increase in absorption that accompanies polymer formation to be monitored
Fig. 2 The images projected to the sample plane from the mask and the slit. A two-millimeter diameter hole is drilled in an optical slit, which together with a mask serves as our channel. Both are imaged on the sample plane and resemble a conventional field diaphragm. The mask is connected to a one-dimensional translator whose position is controlled by the computer program in the following three steps. Step 1: only the hole is unmasked, as shown in the top panel; the laser image of this hole is the incubation circle. After a time, dense polymers fill up the incubation circle, which triggers the translator to move the mask to the step 2 position. Step 2: both the hole and the slit are unmasked, as shown in the second panel; the laser image of the slit serves as the "light channel". After a designated "channel on" time, the mask is moved to the step 3 position by the translator. Step 3: only the very end of the slit is unmasked, as shown in the third panel, and the image of this part of the slit serves as the detection area. After about 3 minutes, the mask is shifted back to block the whole slit for about 10 seconds to allow all polymers in the sample to melt, and then the experiment restarts from the step 1 position.
B. Instrumentation
The apparatus (Figure 1) is based on a horizontal microspectrophotometer. A pair of oil immersion objectives (100×, Leica) are used as the condenser and the objective. For the absorption measurements, the light from a 150 W xenon arc lamp is passed through a monochromator (SpectraPro 2150i, Acton) and is imaged on the back aperture of the condenser objective in a Koehler configuration. The photolysis beam is provided by an argon-ion laser (488 nm/5 W, Spectra Physics). Both absorption and photolysis data are recorded by a CCD camera (12 bit, Photometrics Quantix). An optical slit of adjustable width is drilled with a 2 mm diameter hole, whose image serves as the area where polymerization begins, labeled as the incubation circle. A U-shaped mask, connected to a computer-controlled 1-D translator, blocks different parts of the slit in different stages of the experiment, as shown in Figure 2. Since the slit and the mask are both imaged at the sample plane, only the unblocked part of the slit image is projected to the sample plane, as designed.

C. Laser Photolysis Sequence
Our method of measuring the growth speed of HbS fibers is based on the known fact that laser illumination of the thin COHbS solution sample results in photo-dissociation of the hemoglobin carbon monoxide complex, so polymer can grow only in the laser-illuminated area [3]. Using reversible laser photolysis of COHbS and creating illuminated patterns in a designed sequence, we can measure the time it takes for polymers to grow from one spot (the incubation disk) to the other spot (the detection area) through
a narrow illuminated line (the light channel) that connects these two spots. The polymers melt quickly outside the line boundaries, where CO is rich, so the line works as a channel with no physical walls. The laser photolysis sequence includes three steps, as shown in Figure 2. First, the incubation circle is kept on long enough to get dense polymers in this area. Once that happens, the light channel (image of the slit) is turned on for successively longer and longer times (typically increasing from 5 seconds to 30 seconds in 1 second intervals), and then only the very end of the channel is left illuminated, as shown in Figure 2c. The channel end serves as a detector for any incoming polymer. If one polymer fiber reaches the end of the channel during the "channel on" step of the experiment, then by heterogeneous nucleation it will generate other polymers in an exponentially explosive reaction in the detection area. If after a short delay (typically 5-30 seconds) we detect exponentially increasing polymer mass (by monitoring the light intensity transmitted through the detection spot), we conclude that a fiber had reached the detection area before channel extinction. If, on the contrary, the chosen time duration is not long enough, polymerization of the detection area will eventually occur by homogeneous nucleation, but in a much longer time (typically above 30 minutes), because homogeneous nucleation depends strongly on the volume and the detection area is much smaller than the incubation circle. By varying the "channel on" time in a loop of the above three steps, we can find the critical time that is just long enough for an individual fiber to grow into the detection area. By making the volume of the detection area much smaller than the incubation circle, the homogeneous nucleation time in the detection area is much longer than step 1 (in the range of a half-hour to hours). In contrast, if a polymer grows into the detection area, because of the quick heterogeneous nucleation it takes only 5 to 30 seconds to fill this area with easily observed dense polymers.
D. A Typical Measurement
The transmitted light intensities of the detection area in a typical experiment are shown in Figure 3 (top). In Figure 3 (bottom) the signal intensities measured in the last second are plotted as a function of the corresponding "channel on" time. Clearly there is a transition from a state in which no polymer has arrived to a state in which polymer has arrived at the detection area. The time at which this transition happens, i.e., the time just long enough to let the first individual fiber grow into the detection area, is the critical time.

E. Control Experiments
Laser power is increased to just enough to achieve full photolysis in each experiment, as determined by optical absorption spectra measurements. In such situations it is established that the laser heating of the sample will be less than 1 degree [3]. Experiments with channel widths of 0.8 µm and 1.6 µm yield the same growth speeds (<2% difference). Experiments with different channel lengths (30 µm and 18 µm) produce almost the same growth speeds (<3% difference). Experiments with increasing or decreasing "channel on" times produce the same critical time. Near the slit another small hole is drilled (of size comparable to the detection area), whose image serves as a "veto counter". If polymers start to grow spontaneously in the veto counter during the experiment, the experiment is stopped, because homogeneous nucleation will then happen in the detection area too.
Fig. 3 (TOP) The detection signals (transmitted light intensities) versus the experiment time in a typical experiment. If the channel on-time is long enough to let polymer grow into the detection area, the transmitted light intensity of the detection area will drop significantly because polymerization consumes a lot of HbS molecules, which are replaced from the neighboring solution, leading to net decrease in transmission. (BOTTOM) Intensity at the last measured time point shows a clear change once the channel is on long enough for polymers to arrive at the detector
III. RESULTS AND ANALYSIS
A. HbS Polymer Growth Speed Data Analysis
Data were collected on samples with concentrations from 3.76 to 4.17 mM at experiment temperatures ranging from 22 to 29 °C. The observed HbS polymer elongation rate, denoted by j, is related to the polymer's molecular growth rate J by j = 0.46 nm × J. Polymer growth rate data from pure HbS, pH = 7.35, are plotted as the solid red circles in Figure 4 and fitted with the growth rate equation

$$J = k_+ \gamma c - k_- \qquad (1)$$

in which $k_+$ and $k_-$ are the molecular addition and loss rates of molecules from the end of the polymer. $\gamma$ is an activity coefficient, which is a function of the total concentration of Hb in the solution, and is required by the high concentrations used. $\gamma$ physically accounts for the fact that such crowded solutions behave as though they were even more concentrated, due to the proximity of the molecules. For the concentrations used here, for example, $\gamma$ is around 10. The activity coefficient has been shown to be related to the monomer concentration c by the following equation [4]:

$$\ln \gamma = \frac{8vc}{(1-vc)^2}$$

in which v = 0.0484 mM⁻¹ is the hemoglobin specific volume. As a means of testing the usage of $\gamma$ in equation (1), we polymerized a mixture of sickle hemoglobin with normal hemoglobin, which cannot polymerize but which does crowd the solution and add to $\gamma$. The HbS and HbA mixture (50:50) sample was studied in the same way as the pure HbS samples.
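For illustration, equation (1) with this activity coefficient is easy to evaluate numerically; using the fitted rate constants quoted below, the predicted solubility (the root of J) comes out near 2.5 mM, i.e., about 16 g/dl, consistent with the value quoted in the abstract. This sketch is ours, not part of the original analysis.

    import numpy as np
    from scipy.optimize import brentq

    V = 0.0484            # hemoglobin specific volume, mM^-1
    K_ON = 81.0           # fitted monomer on-rate, mM^-1 s^-1
    K_OFF = 705.0         # fitted monomer off-rate, molecules/s

    def gamma(c):
        """Activity coefficient: ln(gamma) = 8 v c / (1 - v c)^2."""
        return np.exp(8.0 * V * c / (1.0 - V * c)**2)

    def growth_rate(c):
        """Molecular growth rate J (molecules/s) at concentration c (mM)."""
        return K_ON * gamma(c) * c - K_OFF

    c_s = brentq(growth_rate, 0.5, 5.0)   # solubility: concentration where J = 0
    print(f"predicted solubility ~ {c_s:.2f} mM")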
Since the HbS tetramers and HbA tetramers will both split and recombine randomly, there will be HbAS tetramers in the sample besides HbS tetramers and HbA tetramers. The three species' percentages follow the binomial distribution. Unlike HbS, HbA tetramers are not able to join the polymer, and the probability of an HbAS tetramer entering the polymer is 0.357 [5]. As shown in Figure 4, the HbA/S data are also well described by the same equation. Finally, we also employed partially photolyzed samples, which agree similarly with the simple theory of equation (1). The values obtained are in good agreement with the values of the constants measured in other experiments. From the fitting, the monomer on-rate k₊ is determined from the slope to be 81 mM⁻¹s⁻¹. The monomer off-rate k₋ is determined to be 705 molecules per second. From earlier DIC observations, k₋ was determined to be 850 molecules per second per fiber end [6], which is close to our value. By definition, the solubility cₛ is the concentration at which polymerization and depolymerization are in equilibrium, so that J = 0. The fitted line of the J data intercepts the x-axis very close to the solubility value measured by sedimentation [7], as shown in Figure 4.
Fig. 4 Growth rates j and J as a function of hemoglobin activity γc. γ is an activity coefficient, and c is concentration. Filled circles are growth rate data obtained from pure HbS samples. Triangle and diamond are the data obtained from the HbS/HbA mixture (1:1) and from partial photolysis. The filled triangle on the x-axis is the solubility determined in sedimentation experiments

IV. CONCLUSIONS
In summary, we have developed a method that can reliably and accurately determine the growth rate of fibers of sickle hemoglobin that are below the level of optical resolution. Such a tool will be useful in the study of mutant or modified hemoglobins, as well as for the temperature dependence of the rate constants.

ACKNOWLEDGEMENT
This work was supported by NIH grant R01-HL057549.

REFERENCES
1. Ferrone, F. A., J. Hofrichter, and W. A. Eaton. 1985. Kinetics of sickle hemoglobin polymerization II: a double nucleation mechanism. J. Mol. Biol. 183:611-631. 2. Samuel, R. E., E. D. Salmon, and R. W. Briehl. 1990. Nucleation and growth of fibres and gel formation in sickle cell haemoglobin. Nature 345:833-835. 3. Ferrone, F. A., J. Hofrichter, and W. A. Eaton. 1985. Kinetics of sickle hemoglobin polymerization I: studies using temperature-jump and laser photolysis techniques. J. Mol. Biol. 183:591-610. 4. Ferrone, F. A., and M. A. Rotter. 2004. Crowding and the polymerization of sickle hemoglobin. J Mol Recognition 17:497-504. 5. Roufberg, A., and F. A. Ferrone. 2000. A model for the sickle hemoglobin fiber using both mutation sites. Protein Sci. 9:1031-1034. 6. Agarwal, G., J. C. Wang, S. Kwong, S. M. Cohen, F. A. Ferrone, R. Josephs, and R. W. Briehl. 2002. Sickle Hemoglobin Fibers: Mechanisms of Depolymerization. J. Mol. Biol. 322:395-412. 7. Ross, P. D., J. Hofrichter, and W. A. Eaton. 1977. Thermodynamics of gelation of sickle cell deoxyhemoglobin. J. Mol. Biol. 115:111-134.
Sickle Cell Occlusion in Microchannels
A. Aprelev1, W. Stephenson1, H. Noh2, M. Meier3, M. MacDermott3, N. Lerner3, and F.A. Ferrone1
1 Drexel University, Department of Physics, Philadelphia, PA, USA
2 Drexel University, Department of Mechanical Engineering and Mechanics, Philadelphia, PA, USA
3 Marian Anderson Sickle Cell Center, St. Christopher's Hospital for Children, Philadelphia, PA, USA
Abstract— We describe the development of a method to study the mechanical aspects of occlusion of small vessels by sickled cells, which is an essential feature of the pathophysiology of sickle cell disease. The method involves microfluidic channels of diameters smaller than red cells, with thickness that can permit measurement of absorption spectra to ascertain intracellular concentration. Laser photolysis is used to convert carboxyhemoglobin into deoxyhemoglobin which creates the rigid polymers that lead to occlusion. Keywords— Sickle cell hemoglobin, photolysis, vaso-occlusion.
I. INTRODUCTION
Sickle cell disease results from a genetic point mutation of hemoglobin molecules which permits their polymerization when in the deoxy structure. The polymers rigidify the cells, which in turn may occlude the microcirculation, causing oxygen deprivation downstream. While polymer rigidity is intuitively apparent as the origin of the pathophysiology of this disease, the exact details have been difficult to pin down. Eaton et al. formulated the "kinetic hypothesis" in 1976, in which the critical determinant was hypothesized to be the ratio of the delay time to the capillary transit time [1]. So long as cells could escape the narrow capillaries, occlusion would be avoided. This viewpoint was challenged by Schechter and others, who postulated that oxygen exchange in arterioles, coupled with polymers that had not been fully depolymerized in the lungs, would prevent cells from entering the capillaries at all [2, 3]. Subsequently, Kaul and Fabry formulated a two-step model based on intravital microscopy, in which they proposed that the critical locus was the low-flow post-capillary venules, where adhesion of deformable red cells would narrow the lumen, which could then be obstructed by a group of rigidified cells [4]. Kaul and Fabry did not observe either capillary or precapillary occlusion, but the difficulty of intravital microscopy cannot be overstated. Moreover, the events that create occlusion must by their nature be very rare, simply to ensure survival. Eaton and Hofrichter have placed this in the range of 1 obstructed cell in every 10² to 10⁴ transits [5]. Thus, although postcapillary obstruction clearly can occur, intracapillary occlusion cannot be ruled out. To try to elaborate these processes further, a microfluidic model of sickling has recently been employed [6]. Although the
results were of some interest, the method employed used large diameter channels and slow oxygen exchange. The former prevents any studies relevant to capillary flow, while the latter feature diminishes the importance of the kinetics, which are an essential feature of the disease. [7] Thus, there is a clear need for a microfluidic system of small diameter vessels, with the ability to control and measure with precision the cells that are pertinent to any observed occlusion. To address this need, we have developed the system described here.
II. MATERIALS AND METHODS
To be effective, the system requires a microfluidic manifold, the ability to measure spectra with accuracy, and photolysis capability. These are examined in turn.

A. Microfluidic Manifold
The microfluidic manifold is shown in Figure 1. The core of the system is a pair of narrow channels, 5 µm wide and 3 µm deep. Two channels are used so that one can be photolyzed with the other acting as a reference. In addition there is a large bypass arm, which is useful for priming and flushing the fluidic chip and which can be shut off. We have also added a set of posts which act as a kind of filter for debris that may have been mixed with the cells. Both bypass and filter are useful to protect the device from becoming clogged. The device itself is fabricated from PDMS, oxygen-plasma bonded to a glass substrate, drilled to accommodate tubing. The cells are driven through the system by a combination of syringe-pump and gravity flow. Gravity flow, implemented by simply maintaining a difference in input and output height, is more sensitive for stopping or reversing flow when a cell of interest has been identified, while the syringe pump is useful for priming or flushing purposes. The microfluidic chip itself is enclosed in a gas-tight chamber (Figure 2). This is necessary to keep the PDMS in an oxygen-free atmosphere, as described below.
Fig. 1 A schematic of the microfluidic manifold. The circles on left and right represent the input and output. A large bypass can be shut off, but permits easy charging of the system. The squares provide obstacles to help filter residual debris. The apparent gap at the center is actually a pair of final capillaries, which are 5 µm wide channels, as shown in greater magnification in the lower center. An actual image of the system with cells in transit is shown in the lowest image
Fig. 2 Sample enclosure. (LEFT) A cross section. Lightly (blue) shaded areas designate the input/output windows. Gas flows in and out of the enclosure from the top, providing an oxygen-free atmosphere surrounding the PDMS chip. Flow of cells into and out of the chip comes from the bottom in this drawing. The dark gray cantilever, adjusted by a screw, provides the cutoff for the bypass channel (recall Figure 1). (RIGHT) Photograph of the sample enclosure. This picture is inverted relative to the left. The adjustment screw for the bypass channel is visible in the upper right.
Fig. 3 Schematic of the apparatus. The microfluidic assembly, as shown in Figure 2, is designated M. Drawing is not to scale. The CMOS camera permits imaging at high speed, while the CCD camera can take spectra with high resolution

Fig. 4 Spectrum of a single cell, taken on a 3 µm × 3 µm area. The measured spectrum, taken at 2 nm intervals, has been fit by a standard COHb spectrum and, as is evident, the fit is excellent

B. Spectra
It is essential to be able to measure precise absorption spectra, since they allow the concentration and fraction of deoxygenated hemoglobin to be determined. The optical interrogation of the system is done on an optical table on which a customized microscope has been constructed, as shown in Figure 3. Light is sent from a 150 W xenon arc lamp through an Acton Research Corporation monochromator. Spectra in a selected spatial region of interest in the wavelength range 400-450 nm are collected with a Photometrics Series 300 camera. A pair of Leitz strain-free long-working-distance (32×) objectives are used, with one acting as a condenser. In order to flatten the baseline, the objectives are mounted on translation stages driven by a small computer-controlled motor programmed to move the objectives as the wavelength is changed, retaining optimal focus. A reference region of interest is also selected near the area within the red cell, and optical density is computed using the totaled intensity of each of the paired regions. The use of this approach insures that sample and reference are taken at the same objective position within the same image.
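As a sketch of this computation (illustrative names, not the original analysis code): the OD of a region follows from the paired sample and reference intensities, and the CO/deoxy fractions then follow from a least-squares fit against standard basis spectra.

    import numpy as np

    def optical_density(sample_px, reference_px):
        """Mean-intensity OD at one wavelength; *_px are pixel arrays."""
        return np.log10(reference_px.mean() / sample_px.mean())

    def hb_fractions(od_spectrum, cohb_std, deoxyhb_std):
        """Fit OD(lambda) = a*COHb(lambda) + b*deoxyHb(lambda); return
        normalized fractions (e.g. ~0.06 CO, ~0.94 deoxy for Fig. 5)."""
        basis = np.column_stack([cohb_std, deoxyhb_std])
        coeffs, *_ = np.linalg.lstsq(basis, od_spectrum, rcond=None)
        return coeffs / coeffs.sum()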
The intensities of the pixels in each region of interest are averaged together. This permits collecting high quality absorbance spectra, as shown in Figure 4. In the figure the spectrum, collected every 2 nm and designated by the points, has been fit by standard spectra, which provide the continuous curve that is shown.

C. Photolysis
Photolysis is provided by a 5 W argon ion laser (Spectra Physics 2020). It has been previously established in other experiments [8] that 4 kW/cm² is sufficient to photolyze HbS rapidly while generating minimal heating, and this is the power density used here. The photolysis beam enters the optical path before the condensing objective via a dichroic mirror. This beam is blocked from the camera by a notch filter as well as one or more color glass filters (BG12). The photolysis of oxygen is much less efficient than that of CO, which binds preferentially to Hb. Thus, upon photolysis any stray oxygen will rapidly take the place of CO, removing the desired deoxyhemoglobin. Therefore it is of paramount importance to keep oxygen from entering the system. This was done by providing a protective atmosphere around the PDMS chip. (Long term storage in a CO atmosphere was observed to degrade PDMS.) In addition, sodium dithionite was added to the samples as an oxygen scavenger.
Fig. 5 Spectrum of a cell during laser photolysis. The measured spectrum, taken at 2 nm intervals, has been fit by a standard COHb spectrum plus a deoxyHb spectrum. The deoxyHb spectrum accounts for 94% of the concentration (the remainder COHb), with no evidence of oxygen contamination. The small total absorbance arises because the sample does not fill the channel, top-to-bottom

Figure 5 shows the spectrum of a cell that was originally saturated with CO and has now been photolyzed. Note the conversion to deoxyHb (with 6% residual CO, by fitting to the spectrum), further illustrating the absence of residual oxygen. The total OD is low because the cell does not fill the entire depth of the channel in this particular realization of the microfluidic channel system. (Future channels will be more shallow.) Deviations at the peak arise from scattering of the light by the PDMS.

D. Sample Preparation
Sickle blood samples obtained from St. Christopher's Hospital for Children were washed by centrifugation and resuspension in a 0.3 Osm hypertonic cell buffer containing citrate to minimize any aggregation of the cells to one another.

III. RESULTS
Fig. 6 Cell obstructed at entrance to a microfluidic channel. The bottom channel shows a cell that is blurred because of its velocity through the channel, illustrating the pressure that is present across the channels. Flow is from left to right. Cells on the left are in a larger volume, as illustrated in Figure 1

Figure 6 shows occlusion of one of the channel pairs. In this experiment, the top cell was positioned in the inlet to the channel and then photolyzed while held stationary. The photolysis beam was centered on the entrance of the top channel but did not include any of the bottom channel. When pressure was reapplied so as to resume flow, the channel was obstructed, as can be seen. In contrast, a blurred cell at the bottom shows the rapidity of the flow in the absence of constraint. While the static images do not show the result clearly, a view of the complete sequence of images shows, over time, an extrusion of the sickled cell into the capillary. Such behavior is an important and unexpected finding, and may account for a natural means of clearing occlusions, so that only the most severe last. This experiment also shows the importance of maintaining simultaneous control over flow and photolysis.
While the occlusion seen here is at an entrance to a channel, the same phenomenon would also occur in a multi-cellular obstruction, such as seen by Kaul and Fabry.
IV. CONCLUSIONS
We have developed a method that can manipulate sickle cells so as to study their occlusion in small channels. Photolysis coupled with quantitative spectroscopy in a microfluidic manifold should provide insights into the fundamental events that underlie sickle cell disease. We have already seen that occluded cells possess a degree of deformability that must be taken into account in models of the pathophysiology of this disease.
ACKNOWLEDGEMENTS The authors gratefully acknowledge Drs. Carlton Dampier and Lewis Hsu who were of much help in initiating these studies. They are also pleased to acknowledge advice from Stacy Jones on cell handling. This work was supported by NIH grant R01-HL057549.
REFERENCES

1. Eaton, W. A., J. Hofrichter, and P. D. Ross. 1976. Delay time of gelation: a possible determinant of clinical severity in sickle cell disease. Blood 47:621-627.
2. Noguchi, C. T., and A. Schechter. 1981. The Intracellular Polymerization of Sickle Hemoglobin and Its Relevance to Sickle Cell Disease. Blood 58:1057-1068.
3. Noguchi, C. T., D. A. Torchia, and A. N. Schechter. 1983. Intracellular polymerization of sickle hemoglobin. Effects of cell heterogeneity. J Clin Invest 72:846-852.
4. Kaul, D. K., M. E. Fabry, and R. L. Nagel. 1989. Microvascular sites and characteristics of sickle cell adhesion to vascular endothelium in shear flow conditions: pathophysiological implications. Proc Natl Acad Sci U S A 86:3356-3360.
5. Eaton, W. A., and J. Hofrichter. 1990. Sickle Cell Hemoglobin Polymerization. Adv. Protein Chem. 40:63-280.
6. Higgins, J. M., D. T. Eddington, S. N. Bhatia, and L. Mahadevan. 2007. Sickle cell vasoocclusion and rescue in a microfluidic device. Proc Natl Acad Sci U S A 104:20496-20500.
7. Mozzarelli, A., J. Hofrichter, and W. A. Eaton. 1987. Delay time of hemoglobin S polymerization prevents most cells from sickling. Science 237:500-506.
8. Ferrone, F. A., J. Hofrichter, and W. A. Eaton. 1985. Kinetics of sickle hemoglobin polymerization I: studies using temperature-jump and laser photolysis techniques. J. Mol. Biol. 183:591-610.
Engineering Microfluidics Based Technologies for Rapid Sorting of White Blood Cells

Vinay Raj1, Kranthi Kumar Bhavanam2, Vahidreza Parichehreh2, and Palaniappan Sethu2

1 DuPont Manual High School, Louisville, KY 40208, USA
2 Department of Bioengineering, University of Louisville, Louisville, KY 40208, USA
Abstract— Blood is a valuable resource rich in cellular populations reflective of the immediate immune and inflammatory status of the body. Much of this information is contained in the nucleated White Blood Cells (WBCs). Current WBC isolation techniques are time consuming and result in artifactual changes in WBC gene expression. Using engineering approaches and basic fluid flow phenomena at the microscale, we have developed a microfluidic sorting device that can rapidly isolate WBC sub-populations based on size. This technique relies on osmosis to amplify size differences between different cell populations to ensure clear separation of the different sub-populations. The dynamics of cell size increase are determined in a microfluidic cell docking device, which can be used to study the instantaneous and time-dependent size increase of cells in response to changing extracellular tonicity. Sorting is then accomplished in a microfluidic spiral sorter that exploits the balance between the inertial lift forces and Dean's forces that develop in the microchannels. This paper demonstrates proof-of-concept of the ability to measure osmosis-dependent cell size increase using MOLT-3 cells and sorting using 15 and 24 µm beads, and confirms that this method holds promise for WBC sorting.

Keywords— Microfluidics, Cell Sorting, White Blood Cells, Osmosis.
I. INTRODUCTION

Blood is a living tissue containing cellular populations rich in information regarding the immediate and historic immune and inflammatory status of the body. This information can be harnessed using high-throughput genomic and proteomic technologies to gain a better understanding of the pathophysiological basis of human diseases [1, 2]. Much of this information is contained in the NCs (WBCs + other rare cells). Previous reports indicate that elimination of RBCs significantly improves the quality of information attainable from blood samples [3]. Further, sorting of NCs into subpopulations allows for identification of greater changes in gene and protein expression [4]. Commonly used nucleated cell isolation techniques include selective RBC lysis [5] and density gradient techniques using Ficoll, Percoll, sucrose or dextran, alone or in
conjunction with antibody based selection systems using red cell rosetting (RosetteSep™) [6]. A primary concern with current technologies is the effect of sample processing on NC activation, which affects the fidelity of gene and protein expression studies. Antibody based techniques provide high specificity at the cost of unwanted signaling due to the antibody binding event, whereas techniques that exploit density differences rely on centrifugation at ~450-500 g for more than 20 minutes and also result in altered gene expression [6, 7]. Other problems associated with these techniques include varying degrees of cell loss, long processing times, poor repeatability, the need for skilled personnel and poor suitability for point of care applications. Therefore there is a critical need for technologies in the clinical setting to serve as a sample preparation tool for high throughput technologies that can overcome the aforementioned limitations.

Microfluidics offers the creation of inexpensive, fast, reliable technologies suitable for point-of-care use. Macroscale processes introduce a significant amount of variability as a consequence of the non-uniform conditions that exist due to the longer time required for complete diffusion of molecules and ions in large volumes. Microfluidic systems, on the other hand, can be used to establish control over the operating conditions such that every cell experiences uniform conditions during processing. Further, scaling effects and unique flow phenomena that arise in microfluidic channels can be exploited to accomplish unique sorting alternatives. Microfluidic cell isolation techniques have exploited differences in cell density, size and shape to sort cells. Huh et al. [8] have used gravity driven sedimentation to sort platelets and blood cells based on differences in settling velocity; Davis et al. [9] have used deterministic hydrodynamics via arrangement of obstacles along the direction of flow to separate blood into RBCs, WBCs and plasma. Russom et al. [10], DiCarlo et al. [11] and Bhagat et al. [12] have used inertial focusing in asymmetrically structured or spiral microchannels to accomplish size based ordering. Our research group has also designed several microfluidic devices to sort blood cells based on size [13, 14], shape [13, 14], chemical reactivity [15] and osmotic resistance [16]. Despite unique advantages, microfluidics has found limited
application in blood NC sorting. This can be attributed to the ratio of RBCs to NCs (5000:1) and the size and density overlap between different NC sub-populations (Table 1). Our solution is based on exploiting the differences in osmotic properties of different blood cells. By exposing NCs to hypotonic solutions, swelling of the cells due to osmosis can be accomplished. Preliminary data suggest a significant difference in the dynamics of cell size increase due to swelling. Therefore, osmotic exposure to increase the size difference between different NC populations, followed by inertial separation in spiral microfluidic channels, will result in filter-less rapid sorting of NCs into different subpopulations. To establish proof-of-concept we have accomplished the following: (a) demonstrated the ability to measure the dynamics of osmosis-based size increase using MOLT-3 cells, a T-lymphoblast cell line, and (b) accomplished size-based separation of 15 and 24 µm beads using the microfluidic spiral sorter.

Table 1 Physical properties of blood cells. Represented are RBCs (erythrocytes), WBCs (leukocytes) and platelets (thrombocytes). Not represented are small NCs, which have not been characterized
II. MATERIALS AND METHODS

A. Device Fabrication

Two different devices were fabricated using standard soft-lithography techniques out of poly(dimethylsiloxane) (PDMS), a biocompatible silicone. First, masks were designed using AutoCAD layout software. The masks were then translated using photoresist onto a silicon wafer. The silicon wafer was then used as a mold to fabricate PDMS channel structures. The PDMS was bonded irreversibly to glass following oxygen plasma treatment. The cell docking device consisted of trenches 500 µm long and 200 µm wide, whereas the spiral sorting device consisted of a spiral 120 µm tall and 500 µm wide at the inlet, widening to 2 mm at the outlet. The radius of curvature expanded as the channels spiraled outward. Access holes were punched using a syringe needle, and tubing was press fitted into the holes for access to the channels.

B. Testing

The cell docking device was first prepped using 1X saline solution. Care was taken to eliminate air bubbles. Once the device was primed, MOLT-3 cells were introduced into the device and allowed to sediment into the trenches. Following settling, the remaining cells were washed away via infusion of 1X saline solution. DI water was then introduced into the device at 50 µL/min. Cells began to increase in size due to osmosis almost instantaneously. Images were captured every 6 seconds using an inverted microscope and analyzed using Metamorph software. The spiral cell sorter was primed similarly to the docking device. 15 and 24 µm polystyrene beads were suspended in 1X saline solution and introduced into the spiral sorter using a syringe. The syringe was set on a syringe pump and the flow rate was set to 2500 µL/min. This flow rate ensured generation of sufficient inertial lift and Dean's forces to cause separation of the 24 µm particles from the 15 µm particles. Images were acquired using a high speed camera and analyzed using Metamorph.
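Image analysis here used the commercial Metamorph package. Purely to make the measurement concrete, the sketch below shows one way an equivalent size-versus-time curve could be extracted with open-source tools; the thresholding choice, minimum object area and pixel calibration are assumptions, not the procedure actually used.

```python
import numpy as np
from skimage import filters, measure

def mean_cell_diameter(frame_gray):
    """Mean equivalent diameter (pixels) of cells in one time-lapse frame."""
    # Otsu threshold assumes cells are brighter than background; invert if not.
    mask = frame_gray > filters.threshold_otsu(frame_gray)
    labels = measure.label(mask)
    diameters = [r.equivalent_diameter for r in measure.regionprops(labels)
                 if r.area > 50]                 # discard small debris
    return float(np.mean(diameters))

# With frames captured every 6 s:
#   diameters = [mean_cell_diameter(f) for f in frames]
#   time_s = 6.0 * np.arange(len(frames))
```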
III. RESULTS

A. Characterization of Cell Size Increase

MOLT-3 cells average 15.3 µm in size under isotonic conditions. On introduction of DI water and the rapid change from isotonic to hypotonic conditions, the cell size begins to increase, as shown in Figure 1. The cells double in size after ~80 seconds, following which they undergo lysis.
Fig. 1 The dynamics of cell size increase in MOLT-3 cells following exposure to deionized water
B. Sorting of Beads
15 and 24 µm beads were sorted along unique streamlines using the spiral sorter (Figure 2). The flow rate at which sorting occurred was 2500 µL/min, which resulted in a Reynolds number (Re) of ~1 and a Dean number of ~1.5, which in turn generated sufficient lift and Dean's forces to cause focusing of particles based on size. The 24 µm particles focus close to the inner side wall at their equilibrium position, whereas the 15 µm beads occupy locations further away from the wall.
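As a rough illustration of the dimensionless groups quoted above, the sketch below evaluates Re = ρUDh/µ and the Dean number De = Re·sqrt(Dh/(2Rc)) for a rectangular channel. The fluid properties, the radius of curvature and the use of the inlet cross-section are assumptions; the values reported in the paper depend on the characteristic scales chosen and on the location along the spiral, so the numbers printed here are illustrative only.

```python
import math

rho, mu = 1000.0, 1.0e-3        # water-like fluid: kg/m^3, Pa*s (assumed)
Q = 2500e-9 / 60.0              # 2500 uL/min converted to m^3/s
w, h = 500e-6, 120e-6           # inlet channel width and height, m
Rc = 5e-3                       # assumed local radius of curvature, m

Dh = 2.0 * w * h / (w + h)      # hydraulic diameter of a rectangular duct
U = Q / (w * h)                 # mean velocity at the inlet cross-section
Re = rho * U * Dh / mu          # channel Reynolds number
De = Re * math.sqrt(Dh / (2.0 * Rc))  # Dean number

print(f"Re = {Re:.1f}, De = {De:.2f}")
```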
Fig. 2 Microscope images showing 24 µm particles focusing closer to the side wall than the smaller 15 µm beads

IV. DISCUSSION

WBCs are a heterogeneous population of cells with a slight overlap in size, which precludes immediate sorting based on size. However, the membranes and cytoplasm of different WBCs are sufficiently different that their swelling behavior following exposure to hypotonic conditions differs significantly and can enhance the size differences between the cells. This article describes the first steps in the development of technology to assess the osmotic behavior of WBCs following exposure to hypotonic solution. The cell docking device provides a microscope-compatible device that can be used to accurately monitor the changes in cell size following hypotonic exposure. Proof-of-concept was accomplished using MOLT-3 cells, a T-lymphoblastic cell line. Once a sufficient size difference (>7 µm) is achieved, the cells can be reliably sorted using a spiral sorter. Again, proof-of-concept was demonstrated using 15 and 24 µm particles to accomplish the sorting.

V. CONCLUSIONS

In summary, two devices were developed. The first device can be used to study the response of different WBCs to hypotonic exposure and the second device can be used to sort cells based on size. This sets the stage for subsequent experiments using actual human blood cells.

ACKNOWLEDGMENT

The authors would like to thank the staff of the Micro/Nanofabrication facility for their help and useful discussions.

REFERENCES
1. Cobb, J.P., et al., Application of genome-wide expression analysis to human health and disease. Proceedings of the National Academy of Sciences of the United States of America, 2005. 102(13): p. 4801-4806.
2. Polpitiya, A.D., et al., Using systems biology to simplify complex disease: Immune cartography. Critical Care Medicine, 2009. 37(1): p. S16-S21. doi:10.1097/CCM.0b013e3181920cb0.
3. Feezor, R.J., et al., Whole blood and leukocyte RNA isolation for gene expression analyses. Physiol. Genomics, 2004. 19(3): p. 247-254.
4. Laudanski, K., et al., Cell-specific expression and pathway analyses reveal alterations in trauma-related human T cell and monocyte pathways. Proceedings of the National Academy of Sciences, 2006. 103(42): p. 15564-15569.
5. Maren, T.H. and C.E. Wiley, Kinetics of Carbonic Anhydrase in Whole Red Cells as Measured by Transfer of Carbon Dioxide and Ammonia. Mol Pharmacol, 1970. 6(4): p. 430-440.
6. Pelegrí, C., et al., Comparison of four lymphocyte isolation methods applied to rodent T cell subpopulations and B cells. Journal of Immunological Methods, 1995. 187(2): p. 265-271.
7. Makino, A., et al., Mechanotransduction in leukocyte activation: A review. Biorheology, 2007. 44(4): p. 221-249.
8. Huh, D., et al., Gravity-Driven Microfluidic Particle Sorting Device with Hydrodynamic Separation Amplification. Analytical Chemistry, 2007. 79(4): p. 1369-1376.
9. Davis, J.A., et al., Deterministic hydrodynamics: Taking blood apart. Proceedings of the National Academy of Sciences, 2006. 103(40): p. 14779-14784.
10. Russom, A., Nagarath, S., Gupta, A.K., DiCarlo, D., Edd, J.K. and Toner, M., Differential Inertial Focusing in Curved High-Aspect Channels for Continuous High Throughput Particle Separation, in 12th International Conference on Micro Total Analysis Systems (MicroTAS), 2008, San Diego, CA: Chemical and Biological Microsystems Society.
11. Di Carlo, D., et al., Continuous inertial focusing, ordering, and separation of particles in microchannels. Proceedings of the National Academy of Sciences, 2007. 104(48): p. 18892-18897.
12. Bhagat, A.A., Kuntaegowdanahalli, S.S. and Papautsky, I., Continuous particle separation in spiral microchannels using Dean flows and differential migration. Lab Chip, 2008. 8(11): p. 1906-14.
13. Sethu, P., A. Sin, and M. Toner, Microfluidic diffusive filter for apheresis (leukapheresis). Lab on a Chip, 2006. 6(1): p. 83-89.
14. Murthy, S., et al., Size-based microfluidic enrichment of neonatal rat cardiac cell populations. Biomedical Microdevices, 2006. 8(3): p. 231-237.
15. Sethu, P., et al., Continuous Flow Microfluidic Device for Rapid Erythrocyte Lysis. Analytical Chemistry, 2004. 76(21): p. 6247-6253.
16. Parekkadan, B., et al., Osmotic Selection of Human Mesenchymal Stem/Progenitor Cells from Umbilical Cord Blood. Tissue Engineering, 2007. 13(10): p. 2465-2473.
Corresponding Author: Palaniappan Sethu
Institute: University of Louisville
Street: 2210 S. Brook St., 357 SRB
City: Louisville
Country: USA
Email: [email protected]
Peripheral Arterial Tonometry in Assessing Endothelial Dysfunction in Pediatric Sickle Cell Disease

K.M. Sivamurthy1, C. Dampier2, M. MacDermott1, M. Meier1, M. Cahill1, and L.L. Hsu3

1 Hematology, St. Christopher's Hospital for Children, Philadelphia, PA, USA
2 Emory University, Atlanta, GA, USA
3 Hematology, Children's National Medical Center, Washington, DC, USA
Abstract— Sickle cell disease (SCD) is characterized by hemolysis and oxidative stress, resulting in endothelial dysfunction (EDF). Peripheral arterial tonometry (PAT), a non-invasive technology for measuring EDF, utilizes reactive hyperemia following mini-ischemic stress (reactive hyperemia index or RHI). RHI greater than 1.67 indicates normal endothelial function. Methods: We studied 54 SCD children to determine the influence of hemoglobin genotype and treatment on RHI. Results: Blunted RHI was seen with increased symptomatology (1.44 ± 0.18; p = 0.047). RHI was not normal in children on chronic transfusion or hydroxyurea (1.49 ± 0.56, 1.48 ± 0.51). RHI correlated inversely with reticulocyte fraction (Spearman r = -0.47, p = 0.037). PAT merits further exploration as a measure of EDF in SCD.

Keywords— sickle cell anemia, child, vasculopathy, noninvasive.
I. INTRODUCTION

A. Sickle Cell Disease

Sickle cell disease, an inherited disorder of red blood cells, is now recognized to include chronic vascular damage. This vascular pathophysiologic process augments the effects of polymerization of sickle hemoglobin: rigid red cells, hemolytic anemia, microvascular obstruction, inflammation and ischemic injury. The most common type of sickle cell disease (SCD), type SS, has the most sickling and hemolysis. SCD-SS generally has more frequent complications of both acute vaso-occlusion and chronic vascular damage, such as stroke, compared to other types of SCD like SC and Sβ+-thalassemia.

B. Endothelial Dysfunction and Flow-Mediated Dilatation

Flow-mediated dilatation (FMD) describes the process of vasorelaxation following a stimulus of increased shear stress, mediated significantly by nitric oxide. Nitric oxide is released by the endothelial cells and diffuses into the surrounding smooth muscle to induce cGMP. This cGMP is a second messenger that relaxes vascular smooth muscle to result in vasodilatation. Endothelial dysfunction is the cardiovascular research terminology for abnormally diminished nitric
oxide-mediated vasodilatation [1-3]. In conditions with severe intravascular hemolysis, such as malaria, sickle cell disease, and paroxysmal nocturnal hemoglobinuria, EDF is attributable to plasma hemoglobin scavenging nitric oxide [3, 4]. Endothelial dysfunction (EDF) is common in coronary artery disease, diabetes mellitus and renal failure, in which the vasodilatory response to nitric oxide (NO) or acetylcholine is blunted [5, 6], attributable to decreased synthesis of NO. Although these are commonly considered adult diseases, EDF can occur in children as young as 6 years old [7].

Classic methods for measuring endothelial dysfunction by forearm blood flow and flow-mediated dilatation or brachial artery ultrasonography have either been cumbersome or required invasive vascular catheters [6, 10, 11]. Peripheral arterial tonometry (PAT) is a non-invasive technology for measuring endothelial dysfunction in peripheral arterial beds through a finger plethysmographic probe following a mini-ischemic stress [6]. The ratio of the post-occlusion dilatation compared to the baseline plethysmographic measurement is the Reactive Hyperemia Index (RHI). RHI >1.67 indicates normal endothelial function [12-14]. The PAT technique has been used to screen for endothelial dysfunction in people with coronary artery disease and other cardiovascular disorders [12-15]. However, PAT is only starting to be used in SCD [16-17] and our group is the first to focus exclusively on pediatric SCD [16].

C. Hypotheses

We hypothesized that SCD patients with more severe manifestations of SCD would have greater endothelial dysfunction and lower RHI as measured by the PAT technique. We predicted that RHI would be lower in children with SCD-SS and especially in SCD-SS with more frequent symptoms. We predicted that SCD patients with greater hemolysis would have lower RHI due to lower NO bioavailability [4, 8-9, 18]. Conversely, we reasoned that reducing hemolysis with interventions like hydroxyurea or chronic blood transfusion might result in higher RHI than in patients without anti-sickling interventions. This paper adds data to our earlier report in children with sickle cell disease [16].
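As described above, the RHI is essentially a ratio of post-occlusion to baseline pulse amplitude, corrected by the control arm. The sketch below is a minimal illustration of that ratio-of-ratios; the EndoPAT2000 applies its own proprietary baseline corrections, so this is not the instrument's algorithm, and the function name and inputs are hypothetical.

```python
import numpy as np

def reactive_hyperemia_index(test_base, test_post, ctrl_base, ctrl_post):
    """Simplified RHI: occluded-arm amplitude gain normalized by control arm.

    Each argument is an array of plethysmographic pulse amplitudes
    (arbitrary units) recorded before or after cuff release.
    """
    test_ratio = np.mean(test_post) / np.mean(test_base)
    ctrl_ratio = np.mean(ctrl_post) / np.mean(ctrl_base)  # sympathetic drift
    return test_ratio / ctrl_ratio

# rhi = reactive_hyperemia_index(tb, tp, cb, cp)
# rhi > 1.67 would be read as normal endothelial function.
```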
II. METHODS

A. Study Design

A convenience sample of 54 SCD patients was examined at the Marian Anderson Comprehensive Sickle Cell Center, which follows approximately 450 patients. Patients ranged from 7 to 20 years of age. Normal controls were healthy children on no medications. The study was approved by the IRB of Drexel University College of Medicine.

B. PAT Technique

The PAT device used was the FDA-approved EndoPAT2000 (Itamar Medical). RHI was measured as recommended by the manufacturer: quiet, warm, darkened room; no recent heavy meal; not acutely ill. Plethysmographic probes were placed on corresponding fingers of both hands, and baseline arterial wave-forms were recorded for five minutes. The dominant arm served as a control to correct for any baseline sympathetic overlay. The non-dominant arm was occluded with a blood pressure cuff for a period of five minutes and then released. A ratio of the post-occlusion dilatation compared to the baseline, the Reactive Hyperemia Index (RHI), was automatically generated by the instrument (RHI >1.67 signifying normal endothelial function). The PAT device sometimes generated RHI with a warning that the signal was very noisy or otherwise difficult; these data were omitted from analysis.

C. Chart Review

Medical records of the subjects were reviewed for frequency of symptoms (two or more hospitalizations per year for SCD pain or acute chest syndrome considered as "frequent symptoms"). Baseline hemoglobin and reticulocyte count were recorded, as well as chronic anti-sickling therapy (hydroxyurea or chronic transfusions).

D. Statistical Analysis

Due to the skewed distribution of RHI data, we used nonparametric statistical analysis: Mann-Whitney test, Spearman correlation, and Wilcoxon rank-sum test.

III. RESULTS

A. Sickle Cell Disease Compared to Normal

This study enrolled 54 children with SCD. The majority of SCD patients (72.2%) had RHI < 1.67, which connotes endothelial dysfunction. Of these, 76% had a diagnosis of SCD-SS, 19% were SCD-SC, and 6% were SCD-Sβ+ thalassemia (Table 1).

Table 1 Sickle cell type and Reactive Hyperemia Index

Hemoglobin type | SS | SC | Sβ+ | Normal
Number | 41 | 10 | 3 | 4
Males | 20 | 6 | 2 | 1
Age 13+ years | 27 | 8 | 3 | 3
RHI below 1.67 | 71% | 80% | 67% | 50%
RHI mean | 1.59 | 1.44 | 2.04 | 2.09
RHI standard dev | 0.54 | 0.39 | 0.78 | 0.67

B. Disease Severity and RHI

Disease severity in SCD can be classified by SCD type or frequency of symptoms. The 25 SCD-SS patients not on transfusions or hydroxyurea had similar RHI to the combined group of 13 patients with SCD-SC or SCD-Sβ+ thalassemia (mean RHI 1.62 and 1.58, respectively, Figure 1). The mean RHI for children with SCD-SS and frequent pain or acute chest syndrome was significantly lower than for those with SCD-SS who were less symptomatic (Figure 1, mean RHI 1.44 and 1.68, respectively).

Fig. 1 RHI (mean, SEM) for children without transfusion or hydroxyurea treatment: SS with frequent VOC (n=7), SS less symptomatic (n=18), SC or Sβ+ (n=13), and controls (n=4); the 1.67 normal threshold is marked; * p < .05 by Wilcoxon signed-rank test compared to 1.67
C. Hemolysis

Baseline hemoglobin values and reticulocyte counts of subjects not receiving regular transfusions were analyzed in relation to RHI as indirect indicators of hemolysis, because the pathophysiology of EDF in SCD is linked to hemolysis [18]. A significant inverse correlation was found between RHI and reticulocyte fraction (Spearman r = -0.47, p = 0.037), but not hemoglobin level (Spearman r = 0.29, p = 0.14). However, there was wide variability in the RHI scores noted within each group.

D. Safety of PAT Technique in SCD Patients

We had worried that occlusion of the arterial flow for 5 minutes might trigger vaso-occlusive pain in patients with SCD. However, no pain was reported by any of the 54 subjects with SCD, either immediately after the procedure or by telephone follow-up the next day.
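Returning to the correlation in subsection C, the analysis is a standard nonparametric one; a minimal sketch with synthetic placeholder data (the study's individual measurements are not reproduced here) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
retic = rng.uniform(0.02, 0.20, 20)                 # reticulocyte fractions
rhi = 2.0 - 3.0 * retic + rng.normal(0, 0.3, 20)    # synthetic inverse trend

rho, p = stats.spearmanr(retic, rhi)                # rank-based correlation
print(f"Spearman r = {rho:.2f}, p = {p:.3f}")
```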
IV. DISCUSSION

In this study, we assessed the utility of PAT in children with SCD. The incidence of abnormal RHI was 72% in our study group. The cohort was analyzed for differences between hemoglobin genotypes, symptomatology and degree of hemolysis. The group of patients who had more frequent symptoms did have a significantly lower mean RHI than the other groups. This difference might reflect increased endothelial damage caused by repeated acute insults. RHI correlated with reticulocyte fraction, consistent with a relationship between chronic hemolysis and EDF. The PAT technique was well-tolerated, with no subject reporting that it triggered vaso-occlusive pain.

This study was limited by sample size, with only a few patients studied in each subgroup such as hydroxyurea treatment or chronic transfusion, even though this paper has added to the number we previously reported. The variability of RHI within each group probably reflects the multiple other factors that affect vasoregulation: medications, genetics, and environment. Longitudinal measurements of RHI for the same subject over time at several baseline visits could determine the possible contributors to RHI variability. In addition, RHI measurements at acute episodes of vaso-occlusive pain or other sickle cell complications could provide insights regarding the vascular pathophysiology of SCD. Finally, direct comparison of PAT with classic techniques for measurement of flow-mediated dilatation would be helpful for distinguishing whether RHI indicates dysfunction at the microvascular or arterial level. We are aware of only one such direct comparison of PAT and forearm
blood flow in SCD, which has been published only in abstract form (Krajewski, M.L., NIH Clinical Research Training Program student poster presentation, May 8, 2007, Bethesda, MD).
V. CONCLUSION

The RHI results are partially consistent with our hypothesis that severity of SCD correlates with RHI. However, the variability in RHI from one individual to another indicates that SCD severity may not be the dominant factor controlling RHI. One pilot study demonstrated improvement of RHI in SCD patients with daily administration of tetrahydrobiopterin [17], suggesting that interventions to improve nitric oxide bioavailability might improve endothelial function in SCD. Although this cross-sectional study does not provide clear evidence that hydroxyurea or chronic transfusion corrects the RHI, longitudinal studies could provide better evidence. PAT measurement of RHI has potential to advance understanding of endothelial dysfunction in sickle cell disease.
REFERENCES

1. Munzel, T., Sinning C, Post F, Warnholtz A, Schulz E. Pathophysiology, diagnosis and prognostic implications of endothelial dysfunction. Ann Med, 2008. 40(3): p. 180-96.
2. Deanfield, J.E., J.P. Halcox, and T.J. Rabelink, Endothelial function and dysfunction: testing and clinical relevance. Circulation, 2007. 115(10): p. 1285-95.
3. Yeo, T.W., Lampah DA, Gitawati R, et al., Impaired nitric oxide bioavailability and L-arginine reversible endothelial dysfunction in adults with falciparum malaria. J Exp Med, 2007. 204(11): p. 2693-704.
4. Reiter, C.D., Wang X, Tanus-Santos JE, et al., Cell-free hemoglobin limits nitric oxide bioavailability in sickle-cell disease. Nat Med, 2002. 8(12): p. 1383-9.
5. Drexler, H., Endothelial dysfunction: clinical implications. Prog Cardiovasc Dis, 1997. 39(4): p. 287-324.
6. Donald, A.E., Charakida M, Cole TJ, et al., Non-invasive assessment of endothelial function: which technique? J Am Coll Cardiol, 2006. 48(9): p. 1846-50.
7. Kavazarakis, E., Moustaki M, Gourgiotis D, et al., The impact of serum lipid levels on circulating soluble adhesion molecules in childhood. Pediatr Res, 2002. 52(3): p. 454-8.
8. Aslan, M. and B.A. Freeman, Oxidant-mediated impairment of nitric oxide signaling in sickle cell disease--mechanisms and consequences. Cell Mol Biol (Noisy-le-grand), 2004. 50(1): p. 95-105.
9. Gladwin, M.T., Schechter AN, Ognibene FP, et al., Divergent nitric oxide bioavailability in men and women with sickle cell disease. Circulation, 2003. 107(2): p. 271-8.
10. Otto, M.E., Svatikova A, Barretto RB, et al., Early morning attenuation of endothelial function in healthy humans. Circulation, 2004. 109(21): p. 2507-10.
11. Zawar, S.D., Vyawahare MA, Nerkar M, Jawahirani AR. Noninvasive detection of endothelial dysfunction in sickle cell disease by Doppler ultrasonography. J Assoc Physicians India, 2005. 53: p. 677-80.
12. Bonetti PO, Pumper GM, Higano ST, Holmes DR Jr, et al. Noninvasive identification of patients with early coronary atherosclerosis by assessment of digital reactive hyperemia. J Am Coll Cardiol, 2004. 44(11): p. 2137-41.
13. Hamburg NM, Keyes MJ, Larson MG, et al. Cross-sectional relations of digital vascular function to cardiovascular risk factors in the Framingham Heart Study. Circulation, 2008. 117(19): p. 2467-74.
14. Kuvin JT, Mammen A, Mooney P, et al. Assessment of peripheral vascular endothelial function in the ambulatory setting. Vasc Med, 2007. 12(1): p. 13-6.
15. Endemann, D.H. and E.L. Schiffrin, Endothelial dysfunction. J Am Soc Nephrol, 2004. 15(8): p. 1983-92.
16. Sivamurthy K, Dampier C, MacDermott ML, et al. Peripheral Arterial Tonometry In Assessing Endothelial Dysfunction In Pediatric Sickle Cell Disease. Ped Hem Onc, 2009. 26(8): p. 589-96.
17. Hsu, L.L., Ataga KI, Gordeuk VR, et al., Tetrahydrobiopterin (6R-BH4): Novel Therapy for Endothelial Dysfunction in Sickle Cell Disease. Blood, 2008. 112(Nov): p. LBA-5.
18. Krajewski, M.L., L.L. Hsu, and M.T. Gladwin, The proverbial chicken or the egg? Dissection of the role of cell-free hemoglobin versus reactive oxygen species in sickle cell pathophysiology. Am J Physiol Heart Circ Physiol, 2008. 295(1): p. H4-7.
Comparison of Shear Stress, Residence Time and Lagrangian Estimates of Hemolysis in Different Ventricular Assist Devices K.H. Fraser, M.E. Taskin, T. Zhang, B.P. Griffith, and Z.J. Wu Artificial Organs Laboratory, Dept. of Surgery, University of Maryland Medical School, Baltimore, USA Abstract— Millions of people are diagnosed with heart failure each year and thousands would benefit from a heart transplant if there were enough donor hearts. Ventricular Assist Devices (VADs) are blood pumps which augment the failing heart's pumping capacity. They are already in clinical use as bridge-totransplant but could benefit more patients as end-stage therapy. While current devices are more biocompatible than their forerunners they still have problems; device-induced blood damage, including hemolysis, platelet activation, thrombosis and embolization, may still cause serious clinical events such as strokes and renal damage. Reliable computational methods for predicting blood damage, including hemolysis, are desirable since they will aid in selecting devices to be used and the design of new devices. Flow-induced hemolysis is a function of the shear stress on the erythrocytes and the exposure to this shear stress. Computational fluid dynamics (CFD) was used to analyze the flow field in a range of VADs, for a range of operating conditions, and calculate shear stress and residence time parameters. These parameters may give some indication of the hemolysis potential for these devices, however since hemolysis is a function of shear stress and exposure time a model is required. Lagrangian models of hemolysis use flow streamlines as an indication of the paths taken through the device by erythrocytes. The hemolysis at the outlet is then calculated as an incremental function of the shear stress and exposure time along these lines. Various Lagrangian models were used to compute hemolysis indices for the VADs and the results were compared with experimental measurements. Best agreement was found with a model that integrates the temporal derivative of the power law equation along the streamlines. In a centrifugal VAD with a rotational speed of 4000 rpm and flow rate of 5 l/min, the model predicted the HI was 5.0x10-4 % as compared with the experimental result 5.4x10-4 %. Keywords— shear stress, residence time, hemolysis, ventricular assist device, mechanical circulatory support.
I. INTRODUCTION

Cardiovascular disease is the leading cause of mortality globally. Among various forms of cardiovascular disease, heart failure (HF) affects 5.7 million patients in the United States [1]. The fatality rate for HF is high, with one in five people dying within 1 year [1]. The number of deaths has increased [1] despite advances in surgical treatment and new pharmaceutical therapies. Despite optimal medical and surgical therapies, some patients still do not improve and the available therapies fail to control their symptoms; for them, cardiac transplantation may be the only treatment option. However, only approximately 2300 donor hearts become available each year, resulting in around 2200 transplants [1]; only about 6% of the estimated 35,000 US patients who would benefit from a heart transplant actually receive one. To address the need to support the circulation in patients with end-stage HF, a wide variety of mechanical circulatory support devices (MCSDs) have been developed over the past four decades. The two main types of blood pumps which have been developed are rotary continuous flow pumps and positive displacement pulsatile pumps. Whilst displacement pumps maintain the physiological pulsatility of the flow, they typically experience problems with mechanical failure of diaphragms and valves. The advantages of continuous flow pumps are their simpler designs, involving fewer moving parts, smaller size and lower power consumption. While MCSDs have already benefitted many patients in the form of bridge-to-transplant, they have the potential to help many more if blood damage problems can be eliminated. Blood damage consists of hemolysis, platelet activation, thrombosis and emboli. The development of computational models of blood damage will assist in the design of MCSDs. This work concentrates on the hemolysis part of blood damage. Hemolysis is a function of shear stress and the exposure time to this shear stress. The aim of this work was to investigate the shear stress and residence time in three different rotary MCSDs. Mean shear stresses and residence times were found using computational fluid dynamics (CFD), and initial estimates of hemolysis were calculated from post-processed flow streamlines.
II. METHODS

A. Devices

Three rotary ventricular assist devices were analyzed:

Device 1: A magnetically levitated centrifugal pump for adult circulatory support with an optimum operating condition around 4000 rpm and 5 l/min. The internal priming volume of the pump is 32 ml and the radius of the impeller is 21.2 mm. The pump is fitted extracorporeally using flexible cannulae.
Device 2: An axial flow pump for pediatric circulatory support with an optimum operating condition around 12,000 rpm and 2 l/min. The impeller is suspended hydrodynamically. The priming volume is 10 ml and the impeller radius is 8 mm. The device weighs just 35 g and is fitted directly into the heart and the outflow connected to the aorta through a cannula. Device 3: An axial flow pump for adult circulatory support with optimum operating conditions around 10,000 rpm and 5 l/min. The impeller uses hydrodynamic bearings and has a radius of 6 mm. The priming volume is 124 ml. The device weighs 375 g and is implanted with a rigid connection to the ventricle and a cannula to the aorta.
Fig. 1 Geometry of flow domain of the three devices. Note the images are not to scale.

B. Calculations

Geometry and meshing. The geometries of the three devices were obtained from a combination of their computer aided drawing (CAD) files and measurements taken from the devices using calipers. Commercial software (Ansys) was used to build up models of the fluid flow domains. The flow domain was extracted using Ansys DesignModeler. For the two axial devices (Devices 2 and 3) the blades were constructed with BladeModeler and the interblade passages were meshed using TurboGrid. The remainder of the axial flow devices (inlet and outlet sections), as well as the whole of Device 1, were meshed using Ansys Meshing. Hybrid meshes were constructed, with any suitable regions meshed with hexahedral cells and the most complicated structures meshed with tetrahedral cells.

Calculations. The CFD solutions were calculated using the commercial, finite volume software Fluent 12.0 (Ansys Inc). Motion of the impellers was incorporated using the multiple reference frame (MRF) approach, a steady state approximation of the actual flow. The flow in the pediatric device (Device 2) was assumed to be laminar, as the low flow rates gave Reynolds numbers (Re), based on the inlet diameter, between 180 and 1200. The SST k-ω turbulence model was used in the two adult support devices (Devices 1 and 3), which had higher flow rates (Device 1: 2100 < Re < 5000, Device 3: 1500 < Re < 3700). Second order discretization was used for all the equations and the SIMPLE scheme was used for pressure-velocity coupling. Convergence was assessed using the residuals, with the criterion that the residuals for all equations should be < 1 × 10⁻³. Blood was treated as an incompressible Newtonian fluid with a viscosity of 3.5 × 10⁻³ kg/m·s and density 1050 kg/m³. It is well known that blood is a shear-thinning fluid, but because the shear rates found in VADs are high it was treated as Newtonian. For each device, calculations were performed at the optimum operating condition and at 8 other conditions spanning the range of typical usage. The inlet boundary condition was plug flow for Device 2, which fits directly into the heart, and parabolic flow for Devices 1 and 3, which have tubular connections to the heart. The outlet boundary condition was uniform pressure.

Post processing. The scalar shear stress, σ_scalar, was calculated from the shear stress components, σ_ij, using the following equation [2]:

σ_scalar = [ (1/6) Σ (σ_ii − σ_jj)(σ_ii − σ_jj) + Σ σ_ij σ_ij ]^(1/2)    (1)
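To make Eq. (1) concrete, the sketch below evaluates the scalar shear stress from a symmetric 3×3 stress tensor, using the expanded (Bludszuweit-type) form with the normal-stress differences weighted by 1/6; the example tensor is a placeholder rather than CFD output.

```python
import numpy as np

def scalar_shear_stress(s):
    """Scalar shear stress of Eq. (1) from a symmetric 3x3 stress tensor (Pa)."""
    s = np.asarray(s, dtype=float)
    normal = ((s[0, 0] - s[1, 1]) ** 2          # normal-stress differences
              + (s[1, 1] - s[2, 2]) ** 2
              + (s[2, 2] - s[0, 0]) ** 2) / 6.0
    shear = s[0, 1] ** 2 + s[1, 2] ** 2 + s[2, 0] ** 2  # off-diagonal terms
    return np.sqrt(normal + shear)

# Pure shear of 150 Pa in the xy-plane returns exactly 150 Pa, i.e. the
# threshold used below to define the "high stress" volume.
print(scalar_shear_stress([[0, 150, 0], [150, 0, 0], [0, 0, 0]]))
```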
The mean residence time was found by tracing streamlines from the inlet plane through the device to the outlet plane and finding the mean of the lengths of time taken by all the streamlines to reach the outlet.

The hemolysis index (H) can be calculated from measurements of plasma free hemoglobin and is defined as the change in plasma free hemoglobin (ΔpfHb) as a percentage of the total hemoglobin (Hb):

H = (ΔpfHb / Hb) × 100    (2)

Under constant, uniform shear stress, as found for example in a Couette device, a power-law function has been proposed to represent the relationship of hemolysis to the shear stress (σ) and exposure time (t):

H = C t^α σ^β    (3)

where C, α and β are constants. Previously published constants were used in the present study (C = 1.21 × 10⁻⁵, α = 0.747 and β = 2.004) [3, 4]. However, the shear stress encountered by blood in a VAD is far from uniform and therefore this equation needs to be extrapolated. Several different methods for estimating hemolysis in devices from CFD have been developed, including: Eulerian methods [3], which integrate shear stress and residence time over every mesh element; scalar transport equations [4], which calculate the production and flow of plasma free hemoglobin; and Lagrangian methods [5], which estimate hemolysis on a small section of a streamline and integrate this along the whole line. We chose Lagrangian methods for this work as they account for residence times in different areas of the device, and it is straightforward to incorporate the damage history of the erythrocytes. Two equations for the infinitesimal damage were compared. The first is the temporal derivative [7] of the power-law equation (3):

dH = C α σ^β t^(α−1) dt    (4)

The second attempts to account for the damage history of the cells [7]:

dH = C α [ ∫ from t₀ to t of σ(ε)^(β/α) dε + D(t₀) ]^(α−1) σ(t)^(β/α) dt    (5)
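A minimal sketch of the two accumulation schemes, Eqs. (4) and (5), applied to a single streamline sampled as (t_i, σ_i) pairs is shown below. The constants are the published power-law values quoted above; the stress history itself is an invented placeholder, and D(t₀) = 0 is assumed for an initially undamaged cell.

```python
import numpy as np

C, alpha, beta = 1.21e-5, 0.747, 2.004    # power-law constants [3, 4]
t = np.linspace(0.0, 0.1, 1000)           # residence time along path, s
sigma = 50.0 + 100.0 * np.exp(-t / 0.02)  # illustrative stress history, Pa

dt = np.diff(t)
tm = 0.5 * (t[:-1] + t[1:])               # midpoint times
sm = 0.5 * (sigma[:-1] + sigma[1:])       # midpoint stresses

# Eq. (4): integrate the temporal derivative of H = C t^alpha sigma^beta.
H4 = np.sum(C * alpha * sm**beta * tm**(alpha - 1.0) * dt)

# Eq. (5): damage-history form with D(t0) = 0.
inner = np.cumsum(sm**(beta / alpha) * dt)        # running integral term
H5 = np.sum(C * alpha * inner**(alpha - 1.0) * sm**(beta / alpha) * dt)

print(f"H (Eq. 4) = {H4:.3e} %, H (Eq. 5) = {H5:.3e} %")
```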
III. RESULTS

A. Meshing

To assess the errors involved in discretizing the equations, a mesh study was performed on Device 1 at the optimum operating condition. Three meshes were used: a coarse mesh with element spacing of 1.25 times that of the standard mesh, the standard mesh, and a fine mesh with element spacing of 0.8 times that of the standard mesh. The number of elements in each of these meshes was: coarse = 2.3 M, standard = 3.1 M and fine = 4.3 M. The ordered error estimates were calculated based on Richardson extrapolation [8] and are shown as percentage errors in Fig. 2.

Fig. 2 Convergence of volumetric mean σ_scalar and decrease in standardized mesh error with reduced mesh spacing
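For reference, the sketch below shows a Richardson-extrapolation error estimate of the kind used for this mesh study; the three solution values are invented placeholders, while the refinement ratio of 1.25 matches the element-spacing ratios stated above.

```python
import math

f1, f2, f3 = 41.8, 42.5, 44.1   # fine, standard, coarse results (placeholders)
r = 1.25                        # element-spacing ratio between mesh levels

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
err = abs(f2 - f1) / (r**p - 1.0)                   # fine-grid error estimate

print(f"observed order p = {p:.2f}, "
      f"error ~ {err:.2f} ({100.0 * err / f1:.1f} % of the fine value)")
```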
B. Validation

Calculations of pressure head were compared with experimental measurements (Fig. 3) and the percentage errors in the calculated pressure heads were found. The mean error, over all 9 operating conditions, for each device was: Device 1 = 4%, Device 2 = 22% and Device 3 = 8%.

Fig. 3 Comparison of calculated and experimental pressures: A) Device 1, B) Device 2, C) Device 3

C. Shear Stress

The volumetric mean scalar shear stress (mean σ_scalar) and the volume of the device with a scalar shear stress above the threshold value of 150 Pa (high σ_scalar volume) were calculated for each device and operating condition (Figs. 4 and 5).

Fig. 4 Mean σ_scalar for each device and operating condition and comparison of the mean σ_scalar at the optimum operating condition for each device

Fig. 5 High σ_scalar volume for each device and operating condition and comparison of the high σ_scalar volume at the optimum operating condition for each device
D. Residence Times

The mean residence time was calculated for each device and operating condition (Fig. 6).
Fig. 6 Residence time for each device and operating condition and comparison of the residence time at the optimum operating condition for each device

E. Hemolysis

Values of H at the optimum operating condition for each device were calculated from σ_scalar and residence time along the streamlines, using the temporal derivative and damage history equations (4) and (5). These are shown in Fig. 7. The comparable experimental value for H in Device 1 was 5.4x10-4 %, which is in good agreement.

Fig. 7 Estimates of hemolysis at the optimum operating condition for each device using the temporal derivative (left) and damage history (right)

IV. DISCUSSION

Both shear stress and residence time are important factors in calculating hemolysis. Device 1 appears to have the lowest mean σ_scalar and the lowest percentage volume with a high σ_scalar. Device 2 has the shortest residence time. From the two Lagrangian methods used, Device 1 appears to be the most hemolysing. It is surprising that while Device 1 had the lowest shear stresses it still produced the most hemolysis. Possible reasons for this: hemolysis is a function of the local shear stresses and residence times, which may not be reflected in the averaged shear stress and residence time results presented; and the Lagrangian method relies on post-processed pathlines to calculate the damage and therefore does not account for flow through the entire device.

V. CONCLUSION

Flow through three different VADs was calculated and analysis of shear stress and residence time was performed. In future, models for hemolysis will be further developed and results compared to experiments.

ACKNOWLEDGMENT

This work was funded by the National Institutes of Health (grant number: R01HL088100).
REFERENCES

1. Lloyd-Jones, D., et al. (2009) Heart Disease and Stroke Statistics 2009 Update. Circulation 119:e21-e181.
2. Bludszuweit, C. (1995) Three-dimensional Numerical Prediction of Stress Loading of Blood Particles in a Centrifugal Pump. Artif Organs 19:590-596.
3. Zhang, T. et al. (2008) Study of blood damage using a newly developed shearing device. ASAIO J 56:5A.
4. Taskin, M.E. et al. (in press) Computational Characterization of Flow and Hemolytic Performance of the UltraMag Blood Pump for Circulatory Support. Artif Organs.
5. Garon, A. and Farinas, M-I. (2004) Fast Three-dimensional Numerical Hemolysis Approximation. Artif Organs 28:1016-1025.
6. Goubergrits, L. (2006) Numerical modeling of blood damage: current status, challenges and future prospects. Exp Rev Med Dev 3:527-531.
7. Grigioni, M. et al. (2004) The Power-law Mathematical Model for Blood Damage Prediction: Analytical Developments and Physical Inconsistencies. Artif Organs 28:467-475.
8. Roache, P.J. (1997) Quantification of Uncertainty in Computational Fluid Dynamics. Ann Rev Fluid Mechanics 29:123-160.
Corresponding Author: Zhongjun J. Wu
Institute: University of Maryland School of Medicine
Street: MSTF rm 436, 10 S Pine St
City: Baltimore
Country: USA
Email: [email protected]
Drug Resistance always Depends on the Turnover Rate C. Tomasetti and D. Levy Department of Mathematics and Center for Scientific Computation and Mathematical Modeling (CSCAMM), University of Maryland, College Park, MD, USA Abstract— Resistance to drugs is a fundamental problem in the treatment of many diseases. In this work we consider the problem of drug resistance in cancer, focusing on random genetic point mutations. A recent result obtained by Komarova is that for the case of a single drug treatment, the probability to have resistant mutants generated before the beginning of the treatment (and present, including their progeny, at some given time afterward) does not depend on the cancer turnover rate. This implies that the treatment success will not depend on such rate. In this paper we show that the number of such resistant mutants must depend on the turnover rate, which will also be the case for the success of the treatment. Keywords— Drug resistance, Cancer, Ordinary differential equations.
I. INTRODUCTION

One of the main reasons for the failure of cancer treatment is the development of drug resistance. There are multiple mechanisms by which drug resistance may develop. One type of resistance is ''kinetic resistance''. Many drugs are indeed effective only during one specific phase of the cell cycle, e.g., the S phase, when the DNA is synthesized. Thus, in the case of a short exposure to the drug, the cell will not be affected if during that time it is in a different phase. Even more importantly, the cell will be substantially invulnerable if it is out of the cell division cycle, i.e., in a ''resting state''. Such resistance is generally only temporary. Resistance to drugs may instead develop as a consequence of genetic events such as mutations. This category includes both ''point mutations'' and ''chromosomal mutations'', also known as ''gene amplifications''. Point mutations are random genetic changes that occur during cell division. These mutations cause the replacement of a single base nucleotide or pair with another nucleotide or pair in the DNA or RNA. This is a random event with a very small probability that modifies the cellular phenotype, making any of its daughter cells resistant to the drug. Gene amplification is the consequence of an overproduction of a particular gene or genes. This means that a limited portion of the genome is reproduced to a much greater extent than normal, essentially providing the cell with more copies of a particular
gene than the drug is able to cope with. For a more comprehensive picture we refer to the book by Teicher [1] and to the references therein. In the following we will focus only on random point mutations, given the main role they have in causing drug resistance (see Luria and Delbrück [2]). Thus we will consider a growing cancer cell population for which, at each division, a random point mutation may occur, conferring drug resistance to a daughter cell. The first models of resistance caused by point mutations in cancer are due to Goldie and Coldman [3-6]. Using stochastic processes, Goldie and Coldman show, for example, how the probability of having no drug resistance is larger in smaller tumors. A recent work on point mutations is by Iwasa et al. [7], in which continuous-time branching processes are used to calculate the probability of having resistance at the time of detection of the cancer. We will focus on another recent work on point mutations due to Komarova [8-9]. There, probabilistic methods and a hyperbolic PDE are used to show, for example, how the pre-treatment phase is more significant in the development of resistance than the treatment phase. This is a very natural, intuitive result, given that during treatment the cancer population is not able to divide nearly as frequently, due to the presence of the drug. However, the main result obtained by Komarova is the following. In the case of a single drug treatment, the probability to have resistant mutants generated before the beginning of the treatment and present, including their progeny, at some given time afterward, does not depend on the cancer turnover rate. A consequence of this result is that the probability of treatment success also does not depend on this rate. For the case of a multi-drug treatment, instead, Komarova shows that there appears to be a strong dependence of the probability to have resistant mutants on the turnover rate (see [9]), and therefore the probability of treatment success will also strongly depend on this rate. Our goal is to understand the reason for such a difference between the single and multi-drug cases. This is accomplished by using a different, much simpler approach, based on an elementary compartmental system of ordinary differential equations rather than on stochastic processes. In particular we would like to understand if it is true that in the case of single drug treatment, drug resistance (and therefore
treatment success) is independent of the cancer's turnover rate.
II. AN ELEMENTARY MODEL

Consider the case of resistance to a single drug. Accordingly, we have two populations. The first group is composed of wild-type cancer cells (cells that are sensitive to the drug). We denote the number of wild-type cancer cells at time t by N(t). The second group is composed of cells that have undergone a mutation and are therefore resistant to the drug. The number of mutated cells at time t is denoted by R(t). We assume that cancer grows exponentially and also that the drug therapy starts at time t*. Our model can then be written as:
N′(t) = (L − D) N(t),
R′(t) = (L − D) R(t) + u N(t),        t ≤ t*,    (1)

and

N′(t) = (L − D − H) N(t),
R′(t) = (L − D) R(t) + u N(t),        t > t*.    (2)
System (1) describes the pre-treatment phase, while system (2) follows the dynamics after the treatment starts. The difference between the two systems is the introduction of H, the drug-induced death rate. In both systems, L, D, and u denote the birth, death, and mutation rates, respectively. We assume that 0 ≤ D < L and 0 < u ≪ 1. The initial conditions for the pre-treatment system (1) are given as constants N(0) = N₀ ≠ 0 and R(0) = 0. The initial conditions for the system (2) are N(t*) and R(t*), which are the solutions of (1) at t = t*. In this model we assume that both the wild-type and the resistant (mutated) cells have the same birth and death rates, as assumed in Komarova [9]. The time of the beginning of the treatment, t*, is related to the size of the tumor at that time. If we assume that the total number of cancer cells at time t* is M, we can use the exponential growth of cancer and the fact that the mutation rate u is relatively small to estimate t* as

t* ≈ (1/(L − D)) ln(M/N₀).    (3)
III. ANALYSIS AND RESULTS

By solving the linear system (1), and using (3), we find that the solution for R(t) at t = t* is given by:

R(t*) = N₀ u t* e^((L−D)t*) ≈ M u ln(M/N₀) / (L (1 − D/L)).    (4)

Here M is the total number of cancer cells when the therapy begins. The expression for R(t*) contains the turnover ratio D/L. Therefore we see that the amount of resistant mutants generated before the beginning of the treatment clearly depends on the turnover rate. The slower the growth of the cancer (i.e., the closer the turnover rate D/L is to 1), the larger the amount of pre-treatment drug resistance. Conversely, the faster the tumor grows (i.e., the closer the turnover rate is to zero), the smaller the resistance that develops prior to the beginning of the treatment. The result is natural, since a tumor having a lower death rate will reach detection size with fewer divisions (and therefore fewer mutations) than a tumor with a higher death rate.

Now, assume that mutations could be terminated after time t*, the time at which the therapy starts, so that the only drug resistance present after t* would be the ''progeny'' of the resistance generated before therapy started. We refer to such resistance as the ''pre-treatment resistance at time t'', where t is the time from the start of the treatment, and denote it by R_p(t). Note that R_p(t) is simply the solution of system (1) at time t*, multiplied by an exponential term e^((L−D)t) that accounts for the growth of this resistance during treatment, that is

R_p(t) = [M u ln(M/N₀) / (L (1 − D/L))] e^((L−D)t).    (5)

Equation (5) clearly shows how the amount of resistance generated before the beginning of the treatment and present, including its progeny, at any given time afterward depends on the turnover rate. Using the same methods, such dependence can be shown to be present also in the case of a multi-drug therapy. We would like to note the simplicity of our mathematical approach with respect to the much more sophisticated one taken by Komarova. Of course, our result concerns only the average behavior of the drug-resistant population, given our deterministic approach.
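As a consistency check on Eqs. (3)-(4), a short numerical sketch can integrate system (1) up to t* and compare R(t*) with the closed form; the parameter values below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, D, u = 1.0, 0.5, 1e-7            # birth, death, mutation rates (assumed)
N0, M = 1.0, 1e9                    # initial size and size at detection
t_star = np.log(M / N0) / (L - D)   # Eq. (3)

def pre_treatment(t, y):
    N, R = y
    return [(L - D) * N, (L - D) * R + u * N]   # system (1)

sol = solve_ivp(pre_treatment, (0.0, t_star), [N0, 0.0],
                rtol=1e-9, atol=1e-12)
R_numeric = sol.y[1, -1]
R_closed = M * u * np.log(M / N0) / (L * (1.0 - D / L))   # Eq. (4)
print(f"R(t*): numeric = {R_numeric:.4e}, Eq. (4) = {R_closed:.4e}")
```

Re-running this with D/L swept from 0 toward 1 (at fixed detection size M) reproduces the dependence discussed above: the closer the turnover ratio is to 1, the larger the pre-treatment resistant population.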
IV. DISCUSSION

A puzzling issue is the source of the apparent contradiction between our result and the result of Komarova [9]. A possible cause could be found in the different mathematical techniques used: while in this work we use a deterministic approach that deals with numbers of cells, in [9] the quantities of interest are probabilities. Can this be the source of the contradicting results? Clearly, the answer must be negative. The reason for this difference is due to the fact that Komarova studies the probability to have such resistance in the limit, as t → ∞. It is actually only at t = ∞ that the results of [9] show a lack of dependence of the resistance on the turnover rate (see page 365, equation (49), and the following discussion in [9]). Therefore these results do not hold at any finite time. This result can be further understood by the following argument. Using techniques of branching processes we were able to calculate the probability to have resistant mutants generated before the beginning of the treatment and present, including their progeny, at some given time afterward. This probability is given by the following formula:

P_R(t) = 1 − exp( −uM [L / (D e^(−(L−D)t))] ln( 1 / (1 − (D/L) e^(−(L−D)t)) ) ).    (6)

Here the time t is measured from the start of the treatment. Once again it is clear that the probability given by (6) does depend on the cancer turnover rate for any finite time t. It is only asymptotically that such dependence disappears. The strength of the dependence will depend on the actual values of the parameters. Furthermore, the conclusion in [9] that, in the single drug case, the probability of treatment success does not depend on the turnover rate (see page 352, [9]) is related to the definition of a successful treatment as a complete extinction of the tumor as time becomes infinite. Different definitions of a successful treatment (such as allowing tumors not to exceed a certain size, or simply considering finite times) will lead to a dependence on the turnover rate also in the single drug case. While from a mathematical point of view it is common practice to compute asymptotics as t → ∞, in our opinion it is more desirable in the problem of drug resistance (and the related concept of treatment success) to study the dynamics for finite time, a time that is at most of the order of several years.

V. CONCLUSIONS

Our goal was to understand the reasons behind the difference in the results of Komarova [9] for the single and multi-drug cases. In order to accomplish this goal we have used a different, much simpler approach, based on an elementary compartmental system of linear ordinary differential equations, rather than on stochastic processes. In particular we wanted to understand if it is true that in the case of a single drug treatment, drug resistance (and therefore treatment success) is independent of the cancer's turnover rate. We have shown that for the single drug case, Komarova's results do not hold at any finite time. This is due to the fact that all quantities of interest are defined only as t → ∞ in [9]. The dependence on the turnover rate in the single drug case is simply weaker than the dependence in the multi-drug case. The asymptotic analysis in [9] loses this information.
V. CONCLUSIONS

Our goal was to understand the reasons behind the difference in the results of Komarova [9] for the single and multidrug cases. To accomplish this goal we used a different, much simpler approach, based on an elementary compartmental system of linear ordinary differential equations rather than on stochastic processes. In particular, we wanted to understand whether it is true that, in the case of a single drug treatment, drug resistance (and therefore treatment success) is independent of the cancer's turnover rate. We have shown that, for the single drug case, Komarova's results do not hold at any finite time. This is because all quantities of interest in [9] are defined only in the limit t → ∞. The dependence on the turnover rate in the single drug case is simply weaker than the dependence in the multi-drug case, and the asymptotic analysis in [9] loses this information.

ACKNOWLEDGMENT

The authors wish to thank Prof. Dmitry Dolgopyat for helpful discussions and suggestions. Cristian Tomasetti would like to thank Professor Doron Levy for his advice and financial support. This work was supported in part by the joint NSF/NIGMS program under Grant Number DMS-0758374, and by the National Cancer Institute under Grant Number R01CA130817.
REFERENCES

1. Teicher B A (2006) Cancer drug resistance. Humana Press, Totowa, New Jersey
2. Luria S E, Delbrück M (1943) Mutation of bacteria from virus sensitivity to virus resistance. Genetics 28:491–511
3. Goldie J H, Coldman A J (1979) A mathematical model for relating the drug sensitivity of tumors to their spontaneous mutation rate. Cancer Treat. Rep. 63:1727–1733
4. Goldie J H, Coldman A J, Gudaskas G A (1982) Rationale for the use of alternating non-cross resistant chemotherapy. Cancer Treat. Rep. 66:439–449
5. Goldie J H, Coldman A J (1983) A model for resistance of tumor cells to cancer chemotherapeutic agents. Math. Biosci. 65:291–307
6. Goldie J H, Coldman A J (1998) Drug Resistance in Cancer: Mechanisms and Models. Cambridge University Press, Cambridge
7. Iwasa Y, Nowak M A, Michor F (2006) Evolution of resistance during clonal expansion. Genetics 172:2557–2566
8. Komarova N, Wodarz D (2005) Drug resistance in cancer: principles of emergence and prevention. Proc. Natl. Acad. Sci. USA 102:9714–9719
9. Komarova N (2006) Stochastic modeling of drug resistance in cancer. J. Theor. Biol. 239:351–366
Author: Cristian Tomasetti
Institute: Mathematics Department, University of Maryland
Street: Paint Branch Drive
City: College Park, MD 20742-3289
Country: USA
Email: [email protected]
Design and Ex Vivo Evaluation of a 3D High Intensity Focused Ultrasound System for Tumor Treatment with Tissue Ablation

K. Lweesy, L. Fraiwan, M. Al-Shalabi, L. Mohammad, and R. Al-Oglah

Jordan University of Science and Technology, Faculty of Engineering, Biomedical Engineering Department, Irbid 22110, Jordan

Abstract— This paper describes the design, construction, and evaluation of a three dimensional (3D) ultrasound system to be used to treat different kinds of tumors using high intensity focused ultrasound (HIFU). The system consists of two major parts: an ultrasonic therapy part and a treatment planning part. The ultrasonic therapy part consists of a bowl-shaped ultrasound transducer (made from lead zirconate titanate (PZT), with a resonance frequency of 0.5 MHz), a lossless electrical impedance matching circuit built to ensure maximum electrical power delivery to the transducer, a function generator, and a high power amplifier. The ultrasonic therapy part is responsible for generating a high-power focus at the location of the geometric focus of the bowl-shaped ultrasound transducer. The treatment planning part consists of three stepper motors (responsible for moving the setup in the x-, y-, and z-directions), three high-voltage high-current Darlington arrays (to supply the stepper motors with the required voltages and currents), and C# software to perform the treatment planning. To assess the movement of the treatment planner, each of the three stepper motors was moved forward and backward from end to end. The treatment planner was then successfully driven to cover cubes of dimensions 1 × 1 × 1 cm³, 2 × 2 × 2 cm³, 4 × 4 × 4 cm³, and 8 × 8 × 8 cm³, with step sizes of 0.5, 1, 2, and 4 mm, respectively. Ex vivo experiments using fresh bovine liver were performed and demonstrated the capability of the system to generate lesions both on- and off-axis. Lesions at different depths were successfully generated at the intended locations. Temperature distributions were recorded both inside and outside the lesion and indicated that the temperature reached about 60 °C inside the lesion and remained below 39 °C outside it.

Keywords— Geometrically focused transducer, high intensity focused ultrasound, lesion, sonication, treatment planning.
I. INTRODUCTION

Cancer is a disease that can affect people of all ages, although the risk of developing cancer increases with age. Cancer is responsible for more than 13% of all human deaths; according to the American Cancer Society, about 7.6 million people died of cancer worldwide in 2007 [1]. Different techniques for treating cancer exist, such as surgery [2], chemotherapy [3], radiotherapy [3], microwave therapy [4], and high intensity focused ultrasound (HIFU)
therapy [5]. Surgery, chemotherapy, radiotherapy, and microwave therapy suffer from many drawbacks. As a result, HIFU represents a good choice that can non-invasively target different kinds of tumors. Over the past two decades, HIFU has received growing attention from research groups and companies as a noninvasive procedure to treat cancers in different organs, such as the kidney, liver, brain, prostate, and breast. Many HIFU devices have been tested under the guidance of magnetic resonance imaging (MRI). These devices either were unable to cover the whole cancerous volume, due to limitations on the steering angle and the maximum depth of penetration (DOP), or relied on manual movement of single element ultrasound transducers, which resulted in inaccurate positioning. The purpose of this study was to build a complete and accurate ultrasound system for the treatment of different tumors without any manual movement of the ultrasound transducer.
II. MATERIALS AND METHODS

The overall system proposed herein is shown as a block diagram in Figure 1. The system consists of two parts, ultrasonic therapy and treatment planning. The ultrasonic therapy part consists of a single element geometrically focused ultrasound transducer that is driven by a function generator and a power amplifier and connects to a personal computer (PC). The treatment planning part includes three stepper motors, three Darlington arrays that connect to the PC through its parallel port, and C# software to perform the planning.

A. Ultrasonic Therapy Part

a) Ultrasound Transducer Simulations

The pressure and intensity beam profiles of a single element geometrically focused ultrasound transducer were simulated using Huygens' principle [6], which evaluates the overall generated pressure P(r, θ) or intensity I(r, θ) at a certain point in the medium by dividing the ultrasound transducer into small point sources (known as simple sources) and then adding the contributions of these sources to calculate the overall pressure or intensity.
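The point-source summation can be sketched in a few lines. The following Python fragment (the authors used Matlab) sums spherical-wave contributions from simple sources placed on the bowl's surface; the aperture radius and the uniform angular sampling are illustrative assumptions, not the authors' geometry.

```python
import numpy as np

f, c = 0.5e6, 1500.0                 # frequency (Hz) and sound speed (m/s)
k = 2 * np.pi * f / c                # wavenumber
R, a = 0.10, 0.05                    # radius of curvature (m) and assumed aperture radius (m)

# Discretize the bowl (a spherical cap centered at (0, 0, R)) into simple
# sources; the sampling here is uniform in angle, not area-weighted.
theta = np.linspace(0.0, np.arcsin(a / R), 60)
phi = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
T, P = np.meshgrid(theta, phi)
src = np.stack([R * np.sin(T) * np.cos(P),
                R * np.sin(T) * np.sin(P),
                R - R * np.cos(T)], axis=-1).reshape(-1, 3)

# Sum the spherical-wave contributions exp(-jkd)/d at a field point.
field = np.array([0.0, 0.0, 0.10])   # the geometric focus, 10 cm away
d = np.linalg.norm(src - field, axis=1)
pressure = np.sum(np.exp(-1j * k * d) / d)
print(abs(pressure) ** 2)            # relative (unnormalized) intensity
```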
Matlab (MathWorks, Inc., USA) simulations were used to calculate both the pressure and intensity distributions. Figure 2(a) shows the normalized intensity distribution calculated in an x-z plane (y = 0); a focal point at (x, y, z) = (0, 0, 10) cm is observed. Using the simulated intensity field, the temperature distribution was calculated using the Pennes bioheat transfer equation (BHTE) [7]:

$$\rho C_t \frac{\partial T}{\partial t} = K\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}\right) - w C_b (T - T_a) + q(x, y, z)$$
where C_t is the specific heat of the tissue (3770 J·kg⁻¹·°C⁻¹), K is the thermal conductivity (0.5 W·m⁻¹·°C⁻¹), T is the temperature at time t at the point (x, y, z) in °C, T_a is the arterial blood temperature (37 °C), w is the blood perfusion in the tissue (5 kg·m⁻³·s⁻¹), C_b is the specific heat of the blood (3770 J·kg⁻¹·°C⁻¹), and q(x, y, z) is the power deposited at the point (x, y, z).

Fig. 1 Overall system block diagram

Fig. 2 Simulated normalized intensity distribution (a) and temperature distribution (b) for a geometric focus at (0, 0, 10) cm

The power was calculated from the intensity field distribution of the ultrasound transducer, while the BHTE was solved using a numerical finite difference method with the boundary temperatures set at 37 °C. Figure 2(b) shows the temperature distribution generated by the intensity field shown in Figure 2(a). The temperature at the focal point was found to reach around 60 °C, while outside the focus it remained below 40 °C (safe).
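A minimal sketch of the finite-difference solution of the BHTE is given below, using the parameter values quoted in the text; the tissue density, grid, time step, and deposited-power field are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rho = 1000.0                          # tissue density (kg/m^3), assumed
Ct, K, w, Cb, Ta = 3770.0, 0.5, 5.0, 3770.0, 37.0   # values from the text

n, dx, dt = 64, 1e-3, 0.01            # grid size, spacing (m), time step (s)
T = np.full((n, n, n), 37.0)          # initial and boundary temperature (deg C)
q = np.zeros((n, n, n))
q[n // 2, n // 2, n // 2] = 6e6       # assumed focal power deposition (W/m^3)

def step(T):
    # Explicit update: rho*Ct*dT/dt = K*laplacian(T) - w*Cb*(T - Ta) + q.
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx**2
    Tn = T + dt / (rho * Ct) * (K * lap - w * Cb * (T - Ta) + q)
    # Dirichlet boundary condition: all faces held at body temperature.
    Tn[[0, -1], :, :] = 37.0
    Tn[:, [0, -1], :] = 37.0
    Tn[:, :, [0, -1]] = 37.0
    return Tn

for _ in range(200):                  # 2 s of simulated sonication
    T = step(T)
print(T.max())
```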
b) Ultrasound Transducer Construction

Several parameters govern the selection of the ultrasound transducer to be used for HIFU, such as material type, geometry, and resonance frequency. The choice of the transducer's material is crucial, since it has a direct impact on both the electrical and acoustical properties of the transducer. Among the different PZT materials available on the market, PZT 8 and PZT 4 are the best candidates for handling the high driving electrical powers needed for HIFU. PZT 8 has a lower loss factor and a higher quality factor than PZT 4; as a result, PZT 8 was chosen as the transducer material. Based on the simulation results mentioned earlier, a geometrically focused ultrasound transducer with a resonance frequency of 0.5 MHz was chosen in order to allow deep penetration of the ultrasound wave into tissue, since the DOP is inversely proportional to the resonance frequency. The geometric focus of the transducer was chosen to be 10 cm to allow the treatment of deep cancerous tissue. The electrical impedance of the PZT-8 material alone was measured to be 1.3 kΩ∠25°. This high impedance requires a low capacitance coaxial cable in order to keep the cable electrical impedance and the PZT-8 electrical impedance in the same range. A two meter coaxial cable with a characteristic impedance of 75 Ω was found to be suitable. The soldering between the coaxial cable and the geometrically focused ultrasound transducer used a low temperature soldering material (Indalloy #1E, Indium Corporation of America, USA) to ensure that the temperature during soldering did not exceed the Curie temperature of the PZT-8 material, which is about 310 °C.

c) Ultrasound Driving Source

A sinusoidal signal (0.5 MHz) generated by a function generator was fed into a 25 W power amplifier (Model 25A250, Amplifier Research, USA) to produce the high power required for HIFU treatments. Usually, at normal blood perfusion rates, a power of 6 W is enough to raise the temperature at the focal point to 60 °C if the sonication time is set to 2 seconds.

d) Electrical Matching Circuit

The electrical impedance of the transducer, along with the coaxial cable connected to it, was measured to be 46.32 + j13.07 Ω. Since this value is far from the optimal value of 50 + j0 Ω required for maximum power delivery to the load, an LC (L = inductor, C = capacitor) matching circuit with L = 0.21 µH and C = 1.88 nF was designed and built.
B. Treatment Planning Part

The treatment planning part consists mainly of a three dimensional (3D) translating system comprising three stepper motors, named X, Y, and Z, and three Darlington arrays that connect to the PC through the parallel port, which is divided into three sub-ports: data, control, and status. Two six-wire stepper motors (X and Y) and one eight-wire stepper motor (Z) were used to move the ultrasound transducer. Since the Z stepper motor is responsible for moving the whole setup, it was chosen to be larger so that it could generate the required torque. All three stepper motors rotate with a step angle of 1.8°; one revolution (about 1 mm of horizontal travel) therefore requires 360/1.8 = 200 steps, so the distance resolution (the minimum horizontal distance any of the three stepper motors can move) is 1 mm/200 = 5 µm. Three high-voltage high-current Darlington arrays (ULN2003A, Allegro MicroSystems, Inc., USA) were used because of their ability to supply the stepper motors with high voltages (up to 50 V) and high currents (up to 500 mA). Each Darlington array was driven with a 5 V transistor-transistor-logic (TTL) signal. The three Darlington arrays were connected to the X, Y, and Z motors on one side and to the PC through its parallel port interface on the other. Figure 3 shows a front view of the built translating system.

Fig. 3 Front view of the translating system

C# code was written to move any of the three stepper motors either forward or backward. After each movement, a command instructs the moved stepper motor to stop for a pre-determined period of time, which represents the time delay required to cool down the tissue that lies in front of the transducer after each sonication.
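The scan-and-cool loop of the planner can be rendered as the following sketch (the actual planner was written in C#; move_axis and sonicate are hypothetical stand-ins for the parallel-port motor commands and the amplifier gating):

```python
import itertools
import time

STEP_MM = 0.5        # step size used for the 1 x 1 x 1 cm^3 cube
ON_S = 2.0           # sonication (on) time per point
COOL_S = 10.0        # cooling delay after each sonication

def move_axis(axis, position_mm):
    # Hypothetical stand-in for stepping the X, Y, or Z motor through the
    # Darlington arrays on the parallel port (200 steps per revolution).
    pass

def sonicate(seconds):
    # Hypothetical stand-in for gating the function generator / amplifier.
    time.sleep(seconds)

n = int(10.0 / STEP_MM)              # 10 mm per side of the target cube
for ix, iy, iz in itertools.product(range(n), repeat=3):
    move_axis('x', ix * STEP_MM)
    move_axis('y', iy * STEP_MM)
    move_axis('z', iz * STEP_MM)
    sonicate(ON_S)
    time.sleep(COOL_S)               # let the intervening tissue cool down
```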
C. Ex Vivo Experiments

To verify the capability of the system to generate on- and off-axis lesions ex vivo, a fresh bovine liver (thickness about 4 cm) was obtained and submerged in a 40 × 40 × 60 cm³ water tank. The bovine liver was placed such that its proximal surface was 6 cm away from the transducer and its distal surface about 10 cm away. The coordinate (0, 0, 0) was set at the center of the ultrasound transducer. The transducer was first aimed at a point lying exactly on the distal surface of the liver (to produce a visible lesion) and turned on for 2 seconds. The transducer was then moved off-axis to the locations (1, 1, 10) cm and (-1, -1, 10) cm, and was turned on at each location for 2 seconds.
III. RESULTS

The electroacoustic efficiency, which is defined as the output acoustic power divided by the input electric power, was first measured to ensure that the therapeutic ultrasound transducer is capable of delivering enough power to the
tissue. The radiation force technique was used to measure the electroacoustic efficiency, which was found to be 52%. This efficiency could be increased by adding a matching layer to the design. The movement of the 3D translating system was tested first by moving each of the three stepper motors forward and backward from end to end. Cubes of dimensions 1 × 1 × 1 cm³, 2 × 2 × 2 cm³, 4 × 4 × 4 cm³, and 8 × 8 × 8 cm³ were then scanned using step sizes of 0.5, 1, 2, and 4 mm, respectively. For a cube of dimensions 1 × 1 × 1 cm³ (i.e., xc = yc = zc = 1 cm), the ultrasound transducer was moved with a step size of 0.5 mm to cover the whole volume. After each step movement of the ultrasound transducer, a fixed hydrophone recorded the voltage, and thus the intensity, generated by the ultrasound transducer. Ex vivo experiments were done to prove the capability of the overall system to generate lesions both on- and off-axis. Three sonications were aimed at (0, 0, 10) cm, (1, 1, 10) cm, and (-1, -1, 10) cm, with the on time of each sonication set to 2 seconds and the time between two consecutive sonications (off time) set to 10 seconds. The result is shown in Figure 4, which indicates the generation of three different lesions. The two off-axis lesions (at (1, 1, 10) cm and (-1, -1, 10) cm) were located exactly at the intended positions, while the on-axis lesion ((0, 0, 10) cm) was slightly shifted from its intended location, which might be due to the curvature of the distal surface of the liver.
IV. CONCLUSIONS

HIFU is gaining attention as a noninvasive/minimally invasive approach to treat cancer in different organs such as the liver, kidney, brain, prostate, and breast. Most previously proposed HIFU devices for noninvasive treatment of breast cancer either used complex and expensive arrays, yet with limited steering angles and DOPs, or used single element transducers that had to be moved manually (inaccurately) to generate different lesions. Although the design described herein uses a single element transducer, it has a 3D translating system that can move the focal point accurately and repeatably with a variable step size as small as 5 µm. Mechanical movements of the 3D translating system and ex vivo experiments were used to prove the capability of the system to generate lesions both on- and off-axis. In conclusion, a 3D HIFU therapeutic system that can be used to treat breast cancer, as well as other tumors, has been designed, built, and tested. The device has been shown to provide
good movement and focusing capabilities: two important parameters that must be considered when designing a HIFU device. Further improvement to the system could be achieved by integrating it with MRI guidance.
Fig. 4 Three generated lesions, one on-axis and two off-axis
REFERENCES

[1] American Cancer Society, Cancer Statistics, 2007.
[2] Early Breast Cancer Trialists' Collaborative Group (EBCTCG), "Effects of radiotherapy and of differences in the extent of surgery for early breast cancer on local recurrence and 15-year survival: an overview of the randomised trials," Lancet, 366, 2087–2106, 2005.
[3] M. Overgaard, P. Hansen, J. Overgaard, C. Rose, M. Andersson, F. Bach, M. Kjaer, C. Gadeberg, H. Mouridsen, M. Jensen, K. Zedeler, "Postoperative radiotherapy in high-risk premenopausal women with breast cancer who receive adjuvant chemotherapy. Danish Breast Cancer Cooperative Group 82b Trial," New Engl J Med, 337(14):949–955, 1997.
[4] G. Vlastos and H. Verkooijen, "Minimally Invasive Approaches for Diagnosis and Treatment of Early-Stage Breast Cancer," The Oncologist, 12:1–10, 2007.
[5] P. Huber, J. Jenne, R. Rastert, I. Simiantonakis, H. Sinn, H. Strittmatter, D. Fournier, M. Wannenmacher, J. Debus, "A New Noninvasive Approach in Breast Cancer Therapy Using Magnetic Resonance Imaging-guided Focused Ultrasound Surgery," Cancer Res, 61:8441–8447, 2001.
[6] J. Zemanek, "Beam behavior within the nearfield of a vibrating piston," J Acoust Soc Am, 49:181–191, 1971.
[7] H. Pennes, "Analysis of tissue and arterial blood temperatures in the resting human forearm," J Appl Physiol, 1:93–122, 1948.

Author: Khaldon Lweesy
Institute: Jordan University of Science and Technology
Street: P.O. Box 3030
City: Irbid
Country: Jordan
Email: [email protected]
Clinical Applications of Multispectral Imaging Flow Cytometry

H. Minderman¹, T.C. George², K.L. O'Loughlin¹, and P.K. Wallace¹
¹ Roswell Park Cancer Institute, Flow and Image Cytometry Facility, Buffalo, USA
² Amnis Corporation, Seattle, USA
Abstract–– The ImageStream is a flow cytometry-based image analysis platform that acquires up to 12 spatially correlated, spectrally separated images of cells in suspension at rates of up to 1000 cells/sec. By combining the high throughput and multiparameter capability of flow cytometry with the high image content of microscopy, it allows quantitative image analysis of immunophenotypically defined cell populations in statistically robust cell numbers. One area of clinical application is the study of cell signal transduction pathways in which the intracellular localization of signaling intermediaries correlates with activity. For example, activation of the nuclear factor-kappaB (NF-κB) transcription factor complex is associated with the cytoplasmic-to-nuclear translocation of p65. To demonstrate this application, the nuclear translocation of p65 following receptor-mediated and drug-induced activation of NF-κB was studied in human myeloid leukemia cells. TNFα-induced nuclear translocation of p65 was rapid and concentration-dependent, peaking at 30 min of exposure, with maximum translocation achieved at concentrations above 5 ng/ml. Daunorubicin (DNR)-induced p65 translocation was concentration-dependent and correlated with DNR-induced apoptosis. The clinical context, analysis approaches, and results are presented.
Keywords–– Quantitative Imaging, Flow Cytometry, NFκB, Signal Transduction.
I. INTRODUCTION
A. ImageStream Technology

The ImageStream platform is operationally similar to a flow cytometer, but it generates 12 simultaneous images of each cell analyzed, with resolution comparable to that of a standard fluorescence microscope at 60× magnification. Each cell is represented by a dark field image, two bright field images, and up to nine spectrally separated fluorescence images. The novelty of this technology is that it provides quantitative information not only on the prevalence of molecular targets in a heterogeneous cell population, but also on their localization within the cell, in statistically meaningful numbers. The combination of these capabilities brings statistical robustness to image-based assays.

B. NF-κB Pathway

Many signal transduction pathways that control the activity of oncogenes and tumor suppressor genes implicated in oncogenesis and drug resistance have been characterized in recent years. Signal transduction through these pathways occurs through an intricate interplay between post-translational protein modifications, intracellular co-localizations, and transport of pathway intermediaries between cytoplasm and nucleus. The nuclear factor-kappaB (NF-κB) transcription factor complex regulates genes important in cell proliferation, survival, and drug resistance. It is held in an inactive state in the cytoplasm by binding to the inhibitor of nuclear factor κB (IκB) and is activated by phosphorylation of IκB by the IκB kinase (IKK) complex, which leads to ubiquitin-proteasome-mediated degradation of IκB and release of NF-κB for translocation to the nucleus [1-11]. Aberrant constitutive activation of this transcription factor has been implicated in many diseases, making it an important therapeutic target. The ability to measure the activity of this pathway by determining the intracellular localization of its intermediaries in target cells would be an important parameter of response to targeted therapies.

II. MATERIAL AND METHODS
A. Cell Line Models

To demonstrate concentration-dependent effects of receptor-mediated activation of NF-κB, ML-1 cells were exposed in vitro for 30 min to a range of TNFα concentrations as detailed in the results section. To demonstrate drug-induced activation of NF-κB and its correlation with drug-induced apoptosis, HL60 cells were exposed in vitro for 4 h to a concentration range of daunorubicin (DNR), which has previously been demonstrated to activate NF-κB in this model [12].

B. Immunostaining

For both cell line models, following drug treatment the cells were washed with PBS, fixed (10 min, 4% paraformaldehyde), permeabilized (0.1% v/v Triton X-100 in PBS), and stained
(primary: polyclonal rabbit anti-human p65 antibody (SC-372, Santa Cruz Biotechnology, Santa Cruz, CA); secondary: FITC-conjugated donkey anti-rabbit (Jackson Immunoresearch, West Grove, PA)). Immediately before acquisition with the ImageStream, cells were counterstained with the DRAQ5 nuclear stain (Axxora, San Diego, CA).

C. ImageStream Analysis

For each sample, the bright field, FITC, and DRAQ5 images of 10,000 events were collected with the ImageStream. For each cell, a so-called 'Similarity score' for the FITC (p65) image and the corresponding DRAQ5 (nucleus) image was calculated as a measure of nuclear p65 translocation. The ImageStream analysis software applies features (algorithms) and masking operations (region-finders) to perform image-based analysis [20]. The Similarity (S) score (Fig. 1) is a log-transformed Pearson's correlation coefficient (ρ) of the pixel values of the NF-κB and DRAQ5 image pair within the nuclear mask. If NF-κB is nuclear localized, its image will be similar to the DRAQ5 image and the S-score will have large positive values. If NF-κB is cytoplasmic, its image will be anti-similar to the DRAQ5 image and the S-score will have large negative values. In Fig. 1, examples of the S-score calculation are shown for a typical untranslocated and a translocated cell. A frequency distribution of the S-score can then be plotted for a population, and relative shifts of these distributions between two populations (e.g., treated 't' versus control 'c') can be quantified using the Fisher's Discriminant ratio (Rd).
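In outline, the per-cell computation can be sketched as follows; the Fisher z-transform is used here as one common log-transform of ρ, and the Rd expression follows the treated-versus-control comparison described above (both are simplifications, not the commercial software's actual implementation).

```python
import numpy as np

def similarity_score(p65_img, draq5_img, nuclear_mask):
    # Pearson correlation of the two images' pixel values within the
    # nuclear mask, followed by a log transform (Fisher z is used here
    # as one common choice; the vendor's transform may differ in detail).
    x = p65_img[nuclear_mask].astype(float)
    y = draq5_img[nuclear_mask].astype(float)
    rho = np.corrcoef(x, y)[0, 1]
    return np.arctanh(rho)           # 0.5 * ln((1 + rho) / (1 - rho))

def rd(scores_treated, scores_control):
    # Fisher's discriminant ratio between two S-score distributions
    # (treated 't' versus control 'c').
    return ((scores_treated.mean() - scores_control.mean()) /
            (scores_treated.std() + scores_control.std()))
```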
The percentage of apoptotic HL60 cells following DNR exposure was also determined with the ImageStream, by quantifying the number of cells with condensed, fragmented nuclear images after extending the cultures for an additional 48 h. Fig. 2 shows representative bright field and corresponding nuclear images for a healthy and an apoptotic HL60 cell. Compared to healthy cells, apoptotic cells have a higher image contrast in the bright field image, and because the nuclear fluorescence is more condensed, the area of the 50% brightest fluorescence signal, as quantified by an 'area threshold' feature, is smaller in apoptotic cells than in healthy cells. These two parameters are plotted against each other, and a gate can then be set for apoptotic cells. Note the shift in these distributions for apoptotic cells compared to the healthy control cells.
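A minimal version of this two-parameter gate might look as follows; the gate limits are placeholders to be set from control data, not values from the paper.

```python
import numpy as np

def apoptotic_fraction(bright_field_contrast, nuclear_area_threshold,
                       contrast_min=30.0, area_max=20.0):
    # Apoptotic cells: high bright field contrast AND a small area of the
    # 50% brightest nuclear pixels. Both cutoffs are illustrative only.
    gate = (np.asarray(bright_field_contrast) > contrast_min) & \
           (np.asarray(nuclear_area_threshold) < area_max)
    return gate.mean()               # fraction of events falling in the gate
```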
Fig. 2 ImageStream analysis of apoptosis: nuclear area threshold versus bright field contrast for a healthy and an apoptotic cell population, with gate R4 identifying apoptotic events
III. RESULTS

A. Receptor-Mediated Time- and Concentration-Dependent Translocation of NF-κB
Fig. 3 summarizes the effects of TNFα exposure on p65 nuclear translocation in ML-1 cells, as determined with the ImageStream analysis approach outlined in Fig. 1. The data in both graphs represent average values from four independent replicates of the same experiment. First, the time-dependent effect of TNFα exposure was studied by fixing cells at different time points after the initiation of exposure to 10 ng/ml TNFα. The graph on the left demonstrates that the effect of TNFα on nuclear translocation of p65 is rapid and peaks at 30 min after the start of exposure. Note that in this model system, prolonging the exposure beyond 30 min resulted in decreased nuclear translocation of p65. Next, the concentration-dependent effect of TNFα exposure was studied in the same cell line model for a fixed exposure duration of 30 min. The graph on the right demonstrates that maximum translocation of p65 is achieved at concentrations of 5 ng/ml or higher.
Fig. 1 ImageStream Similarity Score and Rd value; the example cells shown have similarity scores of -2.07 (untranslocated) and +2.82 (translocated)
Fig. 3 Time- (left) and concentration- (right) dependent translocation of p65 (Rd, treated versus control) in ML-1 cells following exposure to TNFα
B. DNR-Induced Translocation of NF-κB
The effect of DNR on p65 nuclear translocation in HL60 cells assessed by western blot analysis has been previously described [12]. The DNR concentrations (0.1, 0.25, 0.5, and 1.0 µM) and exposure duration (4 h) used in the present study were chosen to replicate the conditions used in that reference. Figure 4 summarizes the data from 5 replicate experiments in which nuclear p65 translocation in HL60 cells was quantified by the ImageStream approach outlined in Fig. 1. The ImageStream analysis revealed that, as previously described on the basis of western blot analysis [12], exposure to DNR resulted in a concentration-dependent increase of nuclear p65.
Fig. 5 Top: Correlation between % DNR-induced apoptotic cells at 48 h and the ImageStream analysis of nuclear NF-κB translocation (similarity score) induced at 4 h in the same cultures of 5 replicate experiments. Bottom: linear regression analysis of the cumulative data. R = 0.9
Fig. 4 DNR-induced nuclear p65 translocation in HL60 cells (Rd relative to the untreated control versus DNR concentration)

C. DNR-Induced Apoptosis and Correlation with Nuclear p65 Translocation

Next, the percentage of apoptotic cells following 48 h exposure to DNR was evaluated using the ImageStream analysis approach outlined in Fig. 2. In Figure 5, the data from 5 replicate experiments are summarized by plotting the mean similarity scores (of p65 and nucleus) of the same data set shown in Fig. 4 versus the percentage of apoptotic cells. The 5 different colors in Figs. 4 and 5 are associated with corresponding data sets.

IV. DISCUSSION
Gene transcription is regulated by the activity of proteins that are part of a signal transduction cascade. In eukaryotes, the nuclear-cytoplasmic transport of these signaling proteins is one mechanism by which transcription is regulated [13, 14]. In cancer cells, the sub-cellular distribution of oncogenes and tumor suppressors is frequently perturbed due to modification of the proteins, defective nuclear-cytoplasmic transport mechanisms, or alterations in the nuclear pore complexes through which transport takes place. The sub-cellular localization of specific factors can thus be a determinant of the activity of a given pathway, and also of the efficacy of therapies directed at restoring normal activity. The presented studies focused on the NF-κB pathway, but NF-κB is only one of an increasing number of well-characterized pathways that can be aberrantly activated in cancer cells, including but not limited to p53 [15], p27 [16], FOXO-family transcription factors [17], INI1 [18], and β-catenin [19]. Conventional methods used to measure nuclear translocation have significant limitations. Biochemical/molecular techniques are both time-consuming and semi-quantitative in nature, and do not provide information
regarding heterogeneity within a sample. Microscopic visualization remains the most direct method of measurement but suffers from operator bias and small sample size. Recent advancements in imaging instrumentation and image processing have allowed numerical scoring of large populations of cells, bringing statistical robustness to the quantitation of nuclear translocation. In this field, the ImageStream platform is unique in that it is a flow cytometry-based technology and thus allows quantitative image analysis of cells in suspension. The analytical approach used in the present studies was developed to allow nuclear translocation measurements in immunologically relevant cell populations, using cross-correlation analysis of fluorescent nuclear and transcription factor images from each object [20]. This approach allows accurate measurement of translocation in cells with small cytoplasmic areas in a dose- and time-dependent manner, as well as in subsets of cells within a mixed cell population. The present data demonstrate the accuracy of these measurements with regard to TNFα- and DNR-induced NF-κB translocation and its correlation with a biological endpoint (induction of apoptosis). We are currently applying this approach to study the effects of the proteasome inhibitor bortezomib (Velcade) on the activation of NF-κB in the treatment of acute myeloid leukemia.
V. CONCLUSIONS

The ImageStream technology enables the quantitative study of the intracellular (co-)localization of fluorescently labeled molecular targets. The ability to perform such analysis in immunophenotypically defined target cells makes it a powerful tool for studying these parameters as determinants of response to targeted therapies.
ACKNOWLEDGEMENT

Supported by NIH 1R21-CA12667, 1S10RR022335 and the NCI Cancer Center Support Grant to the Roswell Park Cancer Institute (CA016056).
REFERENCES

1. Baldwin AS (1996) The NF-κB and IκB proteins: new discoveries and insights. Annu Rev Immunol 14:649–681
2. Ghosh S, May MJ, Kopp EB (1998) NF-κB and Rel proteins: evolutionarily conserved mediators of immune responses. Annu Rev Immunol 16:225–260
3. Miyamoto S, Verma IM (1995) Rel/NF-κB/IκB story. Adv Cancer Res 66:255–292
4. Siebenlist U, Franzoso G, Brown K (1994) Structure, regulation and function of NF-κB. Annu Rev Cell Biol 10:405–455
5. Karin M, Ben-Neriah Y (2000) Phosphorylation meets ubiquitination: the control of NF-κB activity. Annu Rev Immunol 18:621–663
6. Foo S, Nolan G (1999) NF-κB to the rescue. Trends Genet 15:229–235
7. Baichwal VR, Baeuerle PA (1997) Activate NF-κB or die? Curr Biol 7:R94–96
8. Sonenshein GE (1997) Rel/NF-κB transcription factors and the control of apoptosis. Semin Cancer Biol 8:113–119
9. Karin M, Cao Y, Greten F et al (2002) NF-κB in cancer: from innocent bystander to major culprit. Nat Rev Cancer 2:301–310
10. Gilmore TD, Koedood M et al (1996) Rel/NF-κB/IκB proteins and cancer. Oncogene 13:1367–1378
11. Luque I, Gelinas C (1997) Rel/NF-κB and IκB factors in oncogenesis. Semin Cancer Biol 8:103–111
12. Boland MP et al (1997) Daunorubicin activates NF-κB and induces κB-dependent gene expression in HL60 promyelocytic and Jurkat T lymphoma cells. J Biol Chem 272(20):12952–12960
13. Fujihara SM, Nadler SG (1998) Modulation of nuclear protein import: a novel means of regulating gene expression. Biochem Pharmacol 56:157–161
14. Nigg EA (1997) Nucleocytoplasmic transport: signals, mechanisms and regulation. Nature 386:779–787
15. O'Brate A, Giannakakou P (2003) The importance of p53 location: nuclear or cytoplasmic zip code? Drug Resist Updat 6:313–322
16. Blagosklonny MV (2001) Are p27 and p21 cytoplasmic oncoproteins? Cell Cycle 1:391–393
17. Jacobs FM, Van der Heide LP, Wijchers PJ et al (2003) FoxO6, a novel member of the FoxO class of transcription factors with distinct shuttling dynamics. J Biol Chem 278:35959–35967
18. Craig E, Zhang ZK, Davies KP et al (2002) A masked NES in INI1/hSNF5 mediates hCRM1-dependent nuclear export: implications for tumorigenesis. EMBO J 21:31–42
19. Henderson BR, Fagotto F (2002) The ins and outs of APC and beta-catenin nuclear transport. EMBO Rep 3:834–839
20. George TC, Fanning SL et al (2006) Quantitative measurement of nuclear translocation events using similarity analysis of multispectral cellular images obtained in flow. J Immunol Methods 311:117–129
Multispectral Imaging, Image Analysis, and Pathology

Richard M. Levenson

Brighton Consulting Group, Principal, Brighton, MA, USA

Abstract— Biological systems are complex; multiparameter detection methods such as expression arrays and flow cytometry make this apparent. However, it is increasingly important to measure not just the overall expression of specific molecules, but also their spatial distribution, at various scales and while preserving cellular and tissue architectural features. Such high-resolution molecular imaging is technically challenging, especially when signals of interest are co-localized. Moreover, in fluorescence-based methods, sensitivity and quantitative reliability can be compromised by spectral cross-talk between specific labels and by the autofluorescence commonly present, for example, in formalin-fixed tissues. In brightfield microscopy, overlapping chromogenic signals pose similar imaging difficulties. These challenges can be addressed using commercially available multispectral imaging technologies attached to standard microscope platforms or, alternatively, integrated into whole-slide scanning instruments. However, image analysis is a central and still incompletely solved piece of the entire imaging process. New and evolving machine-learning technologies, as well as other image-understanding approaches, can create tools that readily separate image regions into appropriate classes ("cancer", "stroma", "inflammation", e.g.) with (near) clinically acceptable accuracy. By itself this is useful, but it can also be combined with specific segmentation and quantitation tools to extract molecular data automatically from appropriate cellular and tissue compartments, information necessary for designing and testing targeted diagnostic and therapeutic reagents. Having such tools available will allow pathologists to deliver appropriate quantitative and multiplexed analyses in a reproducible and timely manner.

Keywords— image analysis; immunofluorescence; immunohistochemistry; segmentation; multispectral.
I. INTRODUCTION

A. New Roles for and Demands on Pathology

Demands on pathology as a discipline, and on pathologists personally, have multiplied, extending far beyond the simple post-facto correlations that marked the field's early years. The pathologist is called upon, of course, to arrive at a correct diagnosis or label for whatever process is manifested in a patient. Beyond that, prognostic information is sought: what, to a high level of precision, will be the clinical outcome? And predictive guidance is desired as well: which drugs should, or in many cases should not, be given to an individual patient?
II. QUANTITATIVE MOLECULAR IMAGING

To help answer these questions, new molecular targets have been identified for probe development, new labeling reagents have been commercialized, and these developments have been accompanied by advances in imaging technology. Just as importantly, the biological complexity of the sample has been acknowledged, and the conventional one-marker-at-a-time approach is recognized as inadequate. As a consequence, it is likely that fluorescence-enabled techniques will become an increasingly standard part of the pathology armamentarium. The number and types of addressable molecular imaging targets continue to expand. Immunofluorescence (IF) and immunohistochemistry (IHC) began to have an impact on surgical pathology in the 1970s [1], ushering in the era of true molecular pathology, which has now expanded to include detection of DNA and a variety of RNA species. These tissue-based methods can yield exquisite spatial resolution, giving molecular information down to the subcellular level while preserving spatial context all the way up to the centimeter scale. They also provide the ability to look at different cell populations simultaneously, providing assurance that a molecular signature being studied really arises in the cells of interest, while permitting appreciation of "field effects" in which anatomically normal tissues adjacent to abnormal regions exhibit molecular abnormalities. Other, non-imaging-based multiplex assays (such as cDNA or proteomics arrays) almost always examine a mélange of tumor and non-tumor tissues, or at best look at the average molecular state of many tumor cells mixed together. Even if an apparently pure tumor cell population is analyzed, perhaps via laser-capture, the fact that it is examined in the aggregate means that subpopulation signatures, if present, will be blurred into the bulk signal [2]. Thus, methods that can work at the single-cell level help ensure that the molecular repertoires of all of a tumor's heterogeneous populations are properly evaluated. There are at least three drivers of the adoption of multiplexed methods in pathology. The first is a practical one: to the extent that antibody panels assayed on serial sections
could be multiplexed such that several probes are applied on each slide, this would decrease sample handling demands and simplify the work-flow. It would also decrease the demands placed on scarce samples when multiple stains are required. The second driver is to bring current multi-molecular phenotyping, especially with cell-surface markers, as is done every day in flow cytometry, to a slide-based approach. Finally, we can anticipate that ongoing research in cancer and molecular biology, particularly employing intracellular systems approaches, will create a need to characterize multiple (signaling) molecules on a per-cell basis. In particular, antibody-based methods, in contrast to RNA- or DNA-focused techniques, can determine post-translational modifications, such as phosphorylation and de-phosphorylation, that play integral roles in mediating the activity of signaling networks; such signaling pathways rarely operate independently of one another. For example, signaling by cell surface receptors when they bind cognate growth factor ligands often activates both the RAS/Raf-MEK-ERK and PI3K-AKT pathways [3]. Activation of both is necessary for many growth factors, such as EGF, to produce their pleiotropic effects (e.g., cell proliferation, apoptosis resistance, etc.). While such analyses are not yet routine, we can anticipate that relevant molecular assays may eventually become part of the practice of clinical anatomic pathology, especially as they tie in with individualized patient profiling and drug selection. However, despite the large investment made in molecularly targeted therapies in recent years, identification of robust predictors of therapeutic response for individual patients (vs. patient populations) has remained largely elusive.

A. Labeling Strategies

Molecular imaging typically requires some kind of label to be attached to a specific probe. Typical methods for protein detection include IF and IHC. Fluorescence, in which excited dyes emit signals at characteristic wavelengths, has its proponents, who point to qualities such as increased sensitivity, improved dynamic range, suitability for high levels of multiplexing even when signals are overlapped, potential for single-cocktail labeling approaches, and freedom from enzymatic amplification (and therefore improved linearity). On the other hand, fluorescence still suffers from interference from various sources of autofluorescence, especially with formalin-fixed tissue [4, 5], more complex and expensive instrumentation requirements, difficulties with inter-instrument calibration and quantitation, interference with pathology work-flow, an unfamiliar appearance of the sample, which no longer resembles a brightfield, H&E-stained specimen, and so on.
Brightfield chromogenic (colored) stains that absorb light at certain wavelengths have their own set of advantages and drawbacks. Most notably, especially when counterstained with hematoxylin, the tissues maintain a familiar appearance, allowing the microscopist to easily determine the tissue context of a positive molecular signal. The stains are relatively stable and do not require storage in the dark and cold; they can be viewed on any microscope; and quantitation of the stain can be instrument-independent when done properly, so inter-institutional comparisons can be feasible (this assumes, and it is a big assumption, that the staining procedures and other variables are properly quality-controlled [6]). The disadvantages include a major one: absent spectral imaging, it is difficult at best to resolve multiple overlapping colors and recover even qualitative data from multiplexed chromogenically stained samples. However, with spectral imaging techniques, multiple chromogens can be successfully unmixed [7].

B. Spectral Imaging and Unmixing Techniques

Spectral imaging techniques offer to enhance the value of tissue examination, and to do so in ways that are both convenient and robust. In essence, they simply generate a series of images at a number of relatively narrow wavelength bands, typically 10- to 30-nm wide. By slicing the incoming light into these distinct ranges, a user can resolve and quantitate signals that overlap both spatially and spectrally [8, 9], providing data that cannot typically be extracted from conventional color (RGB) images. The mathematics involved is similar to that used in traditional spectroscopy, with the distinction that here the spectroscopic information is linked, pixel by pixel, with high quality images. As will be discussed below, this combination can be used to help automate quantitative analyses. There are a number of ways to acquire spectral image data, reviewed in [9, 10]. After acquisition, the key task is to partition the overall optical signal at a given pixel correctly into its component species. Linear unmixing algorithms can unmix the data quickly and accurately, generating individual abundance images for each of the unmixed components, as well as a "component" image containing and combining all the unmixed species in one multiplane display. Fluorescence-based data are used directly in the unmixing procedure. Brightfield images, which rely on light-absorbing chromogens rather than light-emitting fluorophores, must first be mathematically converted to optical density. Since chromogen-based quantitation relies on Beer's law to work properly, any deviation from pure absorption behavior can affect the results. Some chromogens, unfortunately including the popular brown DAB stain, scatter as
well as absorb light. However, in practice, this does not seem to pose insuperable problems, since linearity and reasonable dynamic range can be achieved using DAB staining [11]. Other chromogens, such as Vector Red, have been shown to display good linearity and dynamic range [12].
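As a simplified illustration of the unmixing procedure just described, the sketch below converts a brightfield stack to optical density (so that, per Beer's law, chromogen contributions add linearly) and solves the per-pixel least-squares problem against reference spectra. The function names are placeholders, and the lack of a non-negativity constraint is a simplification relative to production implementations.

```python
import numpy as np

def to_optical_density(stack, white_reference):
    # Brightfield images must be converted to optical density before
    # unmixing; fluorescence data would be used directly instead.
    return -np.log10(np.clip(stack / white_reference, 1e-6, None))

def linear_unmix(pixels, endmembers):
    # pixels: (n_pixels, n_bands); endmembers: (n_species, n_bands),
    # one measured reference spectrum per stain (e.g., DAB, a red
    # chromogen, hematoxylin). Returns per-species abundance values.
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)
    return coeffs.T                  # (n_pixels, n_species) abundances
```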
III. AN EXAMPLE FROM HEMATOPATHOLOGY

Acute leukemias, myeloproliferative diseases, and myelodysplastic conditions are three sets of related clinical settings in which accurate assessment of disease activity is essential, part of the standard of care, and used to determine not only prognosis but therapy [13]. Detection of malignant blasts in the marrow can be accomplished using bone marrow aspirates and/or bone marrow biopsies, but these methods are presently either difficult, not completely reliable, or both. For aspirates, the problems are connected to issues of sampling; for marrow biopsies, detection and counting of blasts is currently a subjective and imprecise procedure. Bone marrow biopsies can be preferable to aspirate-based methods since they retain the architecture of the bone marrow environment, preserve the presence and distribution of focal blast accumulations, and do not suffer from sampling issues (dry taps or hemodilution). They also allow estimation of marrow cellularity and of the presence and degree of fibrosis. With simple H&E staining, identification of blasts by their morphology is intrinsically hard, especially due to the destruction of some morphological detail caused by decalcification. Moreover, blast levels are frequently estimated without actual counting, via a gestalt impression ("looks like about 5%"). Immunophenotyping with chromogenic labels could simplify the task, but no single antigen currently identified is pathognomonic. However, double-labeling could detect blast populations that co-express antigens typically seen only singly in non-blast populations. Figure 1, panels A and B, shows an application of multispectral imaging to an immunostained decalcified bone marrow specimen from a patient biopsied after chemotherapy, with the goal of identifying double-labeled blasts. The sample was stained for two markers often expressed in blasts [14]: CD34 (with a red chromogen) and c-Kit (with DAB, the commonly used brown chromogen). Both of these markers are also expressed (singly) in normal marrow elements: CD34 on endothelial cells, and c-Kit in mast cells and hematopoietic stem cells [15]. The sample shown was counterstained with hematoxylin, and single-red, single-brown, and double (red+brown) labeled cells were identified using brightfield spectral imaging.
Fig. 1 Multispectral detection of blasts in bone marrow biopsy (A and B), and automated segmentation of bone marrow elements (C and D). See text for further explanation

In this example, a spectral dataset was created by collecting images from 440 to 700 nm, and unmixing was used to separate the chromogens from each other and from the hematoxylin counterstain using the spectral curves shown in the inset in panel A. CD34 signals were unmixed into a red pseudo-color simulated fluorescence channel, and the c-Kit signals into green; the hematoxylin signal was concurrently suppressed by unmixing it into black. Note how the unmixed image resembles fluorescence: changing display modes can often be helpful for increasing legibility. Double-stained blasts are indicated by the presence of a yellow (green plus red) signal. The prominent vessel in the center is red, as would be expected for CD34-only staining behavior, and numerous green-only signals are visible, indicating the presence of cells of the mast-cell or granulocyte lineages (or the existence of spectrally similar hemosiderin).

A. Regions of Interest (ROIs)

Estimation of blast levels requires some "denominator": it is important to assess the extent of the relevant bone-marrow compartment in which blasts could be present. Regions consisting of bone, clot, and fat are not relevant to this estimation. What would be useful is a way of automatically detecting the extent of cellular marrow, and then figuring out how many blasts are present within this compartment. Existing commercial products for quantitative analysis use several approaches for detecting ROIs. The
first is to have the operator manually outline regions of interest (for example, cancer) to restrict quantitation to the appropriate tissue compartment; the second is to employ one immunostain to identify appropriate regions and then evaluate the expression of another analyte in the defined areas [16]; finally, another possible approach is to use imaging algorithms to define various compartments. One useful version of this relies on machine-vision techniques [17]. This approach can be used to create a classifier that distinguishes between cancer, normal tissue, stroma, and inflammatory infiltrates, to use one reasonable palette. Training can be extended over multiple examples in order to encompass the variability in the sample set. As shown in Fig. 1 (C and D), an H&E-stained section of bone marrow (C) can be separated (D) via machine-learning-based algorithms into bone, fat, and clot (pink) and cellular marrow elements (green). The final step in the analysis would then be to measure the area occupied by the blast population and divide that number by the area of true marrow elements to arrive at a normalized estimate of the percentage of marrow occupied by neoplastic cells (in this case, about 15%). The ability to perform such quantitative analysis could provide accurate, objective, and reliable assessments of patients' clinical status.
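A schematic version of this pipeline, under the assumption that operator-labeled training patches and per-patch features are already available, could look like the following; the feature choice and class palette are placeholders, not the commercial classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_roi_classifier(train_features, train_labels):
    # train_labels, e.g., "marrow", "bone", "fat", "clot", assigned by
    # the operator on example regions; features might be per-patch
    # color/texture statistics or unmixed stain abundances.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train_features, train_labels)
    return clf

def blast_percentage(clf, patch_features, blast_fraction_per_patch):
    # Classify every patch, keep only cellular marrow as the
    # denominator compartment, and average the blast-positive area
    # fraction (from the unmixing step) over those marrow patches.
    labels = clf.predict(patch_features)
    marrow = labels == "marrow"
    return 100.0 * np.asarray(blast_fraction_per_patch)[marrow].mean()
```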
IV. CONCLUSIONS

Novel imaging and analysis capabilities can provide pathology with many of the tools it needs to generate the information now being requested. Prognosis, therapy selection, and therapeutic monitoring will in many instances involve determinations of multiple analytes in unhomogenized, spatially intact tissue specimens, with resolution sufficient to measure expression in individual subcellular compartments. None of today's competing phenotyping technologies (expression arrays, serum proteomics, in-vivo imaging, e.g.) can provide comparable spatial and molecular precision.
ACKNOWLEDGMENT

I would like to acknowledge my former colleagues at Cambridge Research and Instrumentation, and Drs. Massimo Loda and Alessandro Fornari for assistance in the preparation of this manuscript. Samples were kindly provided by Dr. Raul Braylan, University of Florida. The work was funded in part through support from the NIH via grants BRP 5R01CA108468 and SBIR 2R43CA088684.
REFERENCES

1. Taylor C R, Cote R J. 1997. Immunohistochemical markers of prognostic value in surgical pathology. Histol Histopathol 12: 1039-55
2. Banks R E, Dunn M J, Forbes M A, et al. 1999. The potential use of laser capture microdissection to selectively obtain distinct populations of cells for proteomic analysis: preliminary findings. Electrophoresis 20: 689-700
3. Lugli A, Zlobec I, Minoo P, et al. 2006. Role of the mitogen-activated protein kinase and phosphoinositide 3-kinase/akt pathways downstream molecules, phosphorylated extracellular signal-regulated kinase, and phosphorylated akt in colorectal cancer: a tissue microarray-based approach. Hum Pathol 37: 1022-31
4. Mansfield J R, Gossage K W, Hoyt C, et al. 2005. Autofluorescence removal, multiplexing, and automated analysis methods for in-vivo fluorescence imaging. J Biomed Opt 10: 41207
5. Levenson R M, Mansfield J R. 2006. Multispectral imaging in biology and medicine: slices of life. Cytometry A 69: 748-58
6. Taylor C R, Levenson R M. 2006. Quantification of immunohistochemistry: issues concerning methods, utility and semiquantitative assessment II. Histopathology 49: 411-24
7. Levenson R M. 2006. Spectral imaging perspective on cytomics. Cytometry A 69: 592-600
8. Farkas D L, Du C, Fisher G W, et al. 1998. Non-invasive image acquisition and advanced processing in optical bioimaging. Comput Med Imaging Graph 22: 89-102
9. Garini Y, Young I T, McNamara G. 2006. Spectral imaging: principles and applications. Cytometry A 69: 735-47
10. Bearman G, Levenson R. 2003. Biological imaging spectroscopy. In Biomedical photonics handbook, ed. T Vo-Dinh, pp. 8_1-8_26. Boca Raton: CRC Press
11. Matkowskyj K A, Cox R, Jensen R T, et al. 2003. Quantitative immunohistochemistry by measuring cumulative signal strength accurately measures receptor number. J Histochem Cytochem 51: 205-14
12. Ermert L, Hocke A C, Duncker H R, et al. 2001. Comparison of different detection methods in quantitative microdensitometry. Am J Pathol 158: 407-17
13. Sebban C, Browman G P, Lepage E, et al. 1995. Prognostic value of early response to chemotherapy assessed by the day 15 bone marrow aspiration in adult acute lymphoblastic leukemia: a prospective analysis of 437 cases and its application for designing induction chemotherapy trials. Leuk Res 19: 861-8
14. Oertel J, Oertel B, Schleicher J, et al. 1996. Immunotyping of blasts in human bone marrow. Ann Hematol 72: 125-9
15. Miettinen M, Lasota J. 2005. Kit (cd117): a review on expression in normal and neoplastic tissues, and mutations and their clinicopathologic correlation. Appl Immunohistochem Mol Morphol 13: 205-20
16. Camp R L, Chung G G, Rimm D L. 2002. Automated subcellular localization and quantification of protein expression in tissue microarrays. Nat Med 8: 1323-7
17. Levenson R. 2008. Putting the "more" back in morphology: spectral imaging and image analysis in the service of pathology. Arch Pathol Lab Med 132: 748-57

Author: Richard Levenson
Institute: Brighton Consulting Group
Street: 52 Greycliff Rd.
City: Brighton, MA 02135
Country: US
Email: [email protected]
Sensitive Characterization of Circulating Tumor Cells for Improving Therapy Selection

H. Ben Hsieh¹, George Somlo², Robyn Bennis¹, Paul Frankel², Robert T. Krivacic¹, Sean Lau², Janey Ly¹, Erich Schwartz³, and Richard H. Bruce¹
¹ Palo Alto Research Center/Biomedical Engineering, Palo Alto, CA
² City of Hope Cancer Center/Medical Oncology, Duarte, CA
³ Stanford University/Department of Medicine, Stanford, CA
Abstract— For metastatic disease, biomarker profiling of distant metastases is done only when feasible, because biopsy of metastases is invasive and associated with potential morbidity without proven benefit. Thus, although biomarker expression may differ in distant metastases, treatment with targeted therapies is almost always based on biomarker targets derived from a patient's primary breast tumor, usually excised years before the development of metastatic disease. This work addresses measurement of biomarker expression on circulating tumor cells (CTCs) as a source of current biomarker expression. CTCs are rapidly located on a planar substrate with a sensitive detection instrument using Fiber Array Scanning Technology. The instrument targets abundant cytokeratins rather than EpCAM. The assay includes quantitative measurement of expression levels of 3 breast cancer markers (HER2, ER and ERCC1) that predict efficacy of treatment. We have observed high discordance rates in cancer markers between CTCs and tissue. Multiplex testing may allow for personalized therapy for patients.

Keywords— circulating tumor cells.
I. INTRODUCTION In recent clinical trials, detection of CTCs provided prognostically useful information regarding progressionfree and overall survival [1] and treatment efficacy [2] in a subset of patients. However, enumeration does not provide information for choosing the optimal therapy. The biological characteristics of CTCs differ from the primary tumor and change during disease progression. The level of this discordance has been reported to be substantial with HER2-positive CTCs observed in up to 50% of patients with breast cancer, whose primary tumor was HER2 negative [3]. Because CTCs can provide a different biological characterization of the disease, their phenotype could be important for prediction of therapeutic response. The estimated frequency of CTCs in blood is in the range of one tumor cell per 10 6–7 WBCs (1-10 CTCs/ml). At such low concentrations, reliable identification of these cells is a huge technical challenge. Solutions developed to overcome
this problem focus on enrichment of CTCs [4-6] to reduce sample size. While enrichment protocols are extremely effective at increasing the proportion of analyzable cells (both nucleated hematopoietic cells and rare CTCs), these methods can cause considerable cell loss or cell damage [7]. On the other hand, an automated digital microscope (ADM) can provide high sensitivity and minimal cell damage, but the analysis of a meaningful sample size remains prohibitively long for a clinical assay.

Another major barrier to reliable identification of CTCs stems from their extreme biological heterogeneity, which is exhibited in a wide range of genetic, biochemical, immunological, and biological characteristics, such as cell surface receptors, enzymes, karyotypes, cell morphologies, growth properties, sensitivities to various agents, and the ability to invade and produce metastases. Sample preparation protocols and detection methods therefore need to accommodate this heterogeneity.

We have previously described a novel approach that uses fiber-optic array scanning technology (FAST) to address the rare-cell detection problem [8]. With FAST cytometry, laser-printing optics are used to excite 300,000 cells/sec, and fluorescence emission is collected in an array of optical fibers that forms a wide collection aperture. We demonstrated that, with its extremely wide field-of-view (FOV), the FAST cytometer can locate CTCs at a rate 500 times faster than an ADM, the current gold-standard method of automated CTC detection. We provided experimental evidence that the FAST cytometer achieves this detection speed with comparable sensitivity and improved specificity. Because of the high scan rate, no additional processing or enrichment of CTCs, which could reduce sensitivity through cell loss, is required. In addition, unlike alternative techniques for CTC detection such as PCR or flow cytometry, FAST cytometry enables the cytomorphology of prospective rare cells to be readily examined. The processing and staining protocols used in the FAST assay were designed to preserve morphology and enable multi-marker characterization of target cells [9].
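To put these rates in perspective, a back-of-the-envelope calculation (a sketch only; the 50-million-cell sample size is the figure used in the Discussion below) contrasts the time needed to screen an unenriched sample on the two platforms:

```python
# Rough scan-time comparison for FAST vs. ADM (illustrative arithmetic only;
# the 50-million-cell sample size is taken from the Discussion section).
fast_rate = 300_000        # cells/sec excited by the FAST cytometer (from text)
adm_speedup = 500          # FAST locates CTCs ~500x faster than an ADM (from text)
n_cells = 50_000_000       # nucleated cells in a typical unenriched sample

fast_minutes = n_cells / fast_rate / 60          # ~2.8 minutes
adm_hours = fast_minutes * adm_speedup / 60      # ~23 hours
print(f"FAST: {fast_minutes:.1f} min; ADM: {adm_hours:.0f} h")
```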
II. MATERIALS AND METHODS

A. Cell Attachment and CTC Identification

Blood samples were processed using a previously described method [10] with some minor modifications. Briefly, 10 mL blood samples were drawn into Cyto-Chex BCT tubes (Cat. #: 218962, Streck Inc., Omaha, NE), shipped overnight, and processed within 24 hr. Samples were subjected to erythrocyte lysis according to our previously described protocol. The remaining cell pellet was then washed, re-suspended in phosphate buffered saline (PBS), and plated on custom-designed adhesive glass substrates (12.7 × 7.6 cm slide with an active area of 64 cm²; Paul Marienfeld GmbH & Co. KG, Bad Mergentheim, Germany). The cells were incubated for 40 minutes at 37 °C. After incubation, excess liquid was decanted and slides were fixed with 2 mL of 2% paraformaldehyde at room temperature for 10 min. Slides were then rinsed twice in PBS, submerged in cold acetone at -20 °C, and rinsed in PBS again. Slides were blocked with a buffer containing 20% human AB serum (Sigma, H4522) in PBS at 37 °C for 20 minutes.

Primary antibodies used in this study were mouse anti-human CD45 IgG2a (MCA87, AbD Serotec, Raleigh, NC) directly conjugated with Qdot 705 (Invitrogen custom conjugation), a cocktail of mouse monoclonal anti-cytokeratin antibodies for cytokeratin classes 1, 4, 5, 6, 8, 10, 13, 18, and 19 (C2562, Sigma), and a mouse monoclonal anti-cytokeratin 19 antibody (RCK108, DAKO). To detect cells with very weak CK expression while minimizing nonspecific binding, we performed tertiary antibody amplification: the secondary antibody for CK, biotin-XX goat anti-mouse IgG1 (A10519, Invitrogen), was followed by a streptavidin-Alexa 555 tertiary reagent (S-32355, Invitrogen). On average, signals were two to four times stronger with tertiary amplification, with no impact on noise. The cell nucleus was counterstained with DAPI (0.5 μg/mL 4',6-diamidino-2-phenylindole, D-21490, Invitrogen), and a coverslip was mounted with Live Cell mounting medium (0.25 g n-propyl gallate and 0.13 g Tris-HCl in 4 mL ddH2O; add 36 mL glycerol and heat to dissolve). CTCs were identified by their morphology and immunophenotype (CK+, CD45-, and DAPI+) as described previously [10].

For cancer marker labeling, secondary antibodies were matched to the primary antibody probes to ensure sufficient signal for quantification. Antibodies derived from different immunoglobulin G isotype subclasses or different species were used for simultaneous staining without cross-reactivity; these antibodies are pre-absorbed against the other IgG subclasses, the other immunoglobulin classes, or the serum of other species to minimize cross-reactivity.
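As a minimal illustration of the CK+/CD45-/DAPI+ gate described above, the following sketch applies the three-marker criterion to per-object intensities; the threshold values are hypothetical placeholders, not parameters from the assay:

```python
# Minimal sketch of the CTC immunophenotype gate (CK+, CD45-, DAPI+).
# Thresholds are hypothetical placeholders; in practice they would be set
# against cell-line controls and the leukocyte background.
def is_candidate_ctc(ck, cd45, dapi,
                     ck_min=100.0, cd45_max=20.0, dapi_min=50.0):
    """Return True if an object's marker intensities fit the CTC phenotype."""
    return ck >= ck_min and cd45 <= cd45_max and dapi >= dapi_min

# A CK-bright, CD45-negative, nucleated object passes the gate.
print(is_candidate_ctc(ck=240.0, cd45=5.0, dapi=120.0))   # True
```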
The multiplex assay for characterization of breast CTCs includes three additional markers to measure expression levels of HER2 (membrane receptor), ER (nuclear staining), and excision repair cross-complementation group 1 (ERCC1), a marker of DNA repair (nuclear staining). The primary antibody against HER2 (erbB2, chicken anti-human, ProSci Inc. cat. #: ab14027) was followed by a Qdot 655 conjugated goat anti-chicken secondary antibody (Q14421MP, Invitrogen). The primary antibody against the estrogen receptor (ER-α, monoclonal rabbit anti-human, LabVision cat. #: RM-9101) was followed by an Alexa 750 tagged goat anti-rabbit secondary antibody (A-21039, Invitrogen). The primary antibody against ERCC1 (mouse anti-human IgG2b, sc-17809, Santa Cruz Biotech) was followed by an Alexa 647 tagged goat anti-mouse IgG2b secondary antibody (A-21242, Invitrogen).

B. FAST Optical System

The FAST scanner scans samples at a rate of 25 million cells min^-1 [8]. A laser raster enables the fast scan rate (100 lines sec^-1). An argon-ion laser with 4 mW output excites fluorescence in labeled cells, and the emission is collected in optics with a large (76 mm) field-of-view. This field-of-view is enabled by an optical fiber bundle with asymmetric ends. The numerical aperture (NA) of the FAST scanner is 0.65 and is determined by the index of refraction of the borosilicate fibers (1.51) used for fluorescence collection. The resolution of the scanning system (10 μm) is determined by the spot size of the scanning laser. The emission from the fluorescent probes is filtered with standard dichroic filters before detection in a photomultiplier. A polygon laser scanner produces a laser scan speed of 10 m/sec, and the sample is moved orthogonally across the laser scan path on a microscope stage at 3 mm sec^-1. Fluorescent objects are located with an accuracy of 40 μm relative to alignment marks on the substrate.

C. Sensitivity Testing

To test the inherent sensitivity, we prepared samples by spiking 2 to 50 HT29 cells into 1 mL of whole blood. Samples were first scanned by the FAST cytometer, which located objects labeled with Alexa 555; these were imaged at 20x resolution by the ADM for identification. The samples were subsequently scanned in the ADM using a 4x resolution objective with a low numerical aperture (NA = 0.2) by stepping the effective field-of-view (3.4 mm²) across the sample. The HT29 cells have sufficient intensity to be easily detected by the 4x objective. Image analysis located areas of fluorescence above background, and these were subsequently scanned with the same 20x
objective used to image the objects located by the FAST cytometer. The images were analyzed by trained personnel.

D. Sample Scoring

To score cancer marker expression in patient samples, we adopted a methodology from tissue analysis that combines the expression level with the percentage of expressing cells in the sample. The expression level is scored relative to a moderately expressing cell line for each marker that is processed with the sample. A CTC with an expression level within the 34th quantile of the median of the cell line control is scored a 2, CTCs expressing higher levels are scored a 3, and CTC expression levels lower than the cell line but higher than background are scored a 1. For the breast cancer markers described here (HER2, ER, ERCC1), leukocyte expression is used as the background. The cell line controls used are MDA-MB-453 for HER2, T-47D for ER, and A-549 for ERCC1. The percentage of expressing cells is scored on a 0-to-10 scale: 0 for less than 10% expressing CTCs, 1 for 10% to 20%, and so on up to 10 for populations between 90% and 100%. The sample score is the product of the average expression score and the population score.
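A compact sketch of this scoring rule follows. It is illustrative only: reading "within the 34th quantile of the median" as a band of plus or minus 34% around the control-line median is our assumption, as is averaging over expressing cells.

```python
# Sketch of the Section II.D scoring rule (illustrative, not the assay's code).
# Assumption: "within the 34th quantile of the median" is read as +/-34% of the
# control cell line's median intensity; the average is taken over expressing CTCs.
def expression_score(level, control_median, background):
    """Score one CTC's marker intensity on the 0-3 scale described above."""
    if level <= background:
        return 0                           # no expression above leukocyte background
    if level > 1.34 * control_median:
        return 3                           # higher than the control band
    if level >= 0.66 * control_median:
        return 2                           # within the band around the control median
    return 1                               # above background, below the control band

def sample_score(levels, control_median, background):
    """Average expression score times the 0-10 expressing-population score."""
    scores = [expression_score(v, control_median, background) for v in levels]
    expressing = [s for s in scores if s > 0]
    if not expressing:
        return 0.0
    pct = 100.0 * len(expressing) / len(scores)
    population = min(int(pct // 10), 10)   # 0 for <10%, 1 for 10-20%, ..., 10 for 90-100%
    return (sum(expressing) / len(expressing)) * population

# Example: 8 of 10 CTCs express near or above the control median.
print(sample_score([120, 95, 80, 150, 60, 55, 110, 90, 5, 8],
                   control_median=100, background=10))   # 15.0
```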
III. RESULTS

A. Sensitivity

A comparison of FAST detection sensitivity with that of an ADM showed identical sensitivities. In the test, we varied the number of HT29 cells spiked into 1 mL of blood by nearly two orders of magnitude and detected a total of 238 cells in 13 samples. Using CK as a target, each scan by the FAST cytometer located exactly the same cells as the ADM.

B. Specificity

In CTC detection, the three main sources of false positives located by FAST are autofluorescent particles, labeled cellular debris, and dye aggregates. Based on our observations, the vast majority of false positive detections originate from autofluorescing particles. These generally fluoresce broadly, and their fluorescence intensity diminishes as the magnitude of the Stokes shift, the wavelength shift of the emission from the excitation, increases.

We use a wavelength comparison technique to filter out a substantial number of autofluorescing particles. For this, we measure emissions at two different wavelengths. The CK probe is selected to have an emission wavelength (580 nm) with a relatively large separation in
wavelength (95 nm) from the excitation wavelength (488 nm). By comparing the emissions at an intermediate wavelength (525 nm), false positives can be identified as having relatively higher intensity at 525 nm than at 580 nm, while objects carrying probes have relatively higher intensity at 580 nm. The ratio of the two wavelengths is used to eliminate the autofluorescing particles (see the sketch at the end of this section). False positives originating from dye aggregates and cell fragments are successfully eliminated with appropriate filtering for object size and brightness. With the current filtering algorithms, 99.8% of the false positives are eliminated without loss of sensitivity. The typical specificity for a FAST scan is around 3 × 10^-6, meaning that only 150 false positives are found in a sample containing 50 million WBCs.

C. Patient Results

The assay includes quantitative measurement of expression of three breast cancer markers (HER2, ER, and ERCC1) that may predict efficacy for specific therapies, in addition to the markers needed for CTC identification (CK, DAPI, and CD45). We observed high discordance rates in cancer markers between CTC and tissue characterization. For determining marker status in primary tissue, conventional clinical scoring was used for HER2 and ER, and the median score was used for ERCC1 [11]. For CTCs, the sample status for each marker was determined from the score using a cutoff that minimizes discordance. Only patients with 5 or more CTCs were used for biomarker analysis. For HER2 expression, 18 patients with metastatic breast cancer (MBC) were analyzed, and the observed discordance was 28%. The discordance rate for ER expression in 14 patients was 36%, and for ERCC1 expression in 13 patients it was 38%.
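Returning to the wavelength-ratio filter of Section III.B, the two-wavelength test reduces to a simple per-object ratio check. The sketch below is illustrative, and the cutoff ratio of 1.0 is a hypothetical value rather than the instrument's calibrated setting.

```python
# Sketch of the two-wavelength autofluorescence filter (Section III.B).
# An object is kept only if its 580 nm (probe-band) emission dominates its
# 525 nm (intermediate-band) emission; the cutoff ratio is hypothetical.
def passes_ratio_filter(i_580, i_525, cutoff=1.0):
    """True when emission favors the CK probe band over broadband autofluorescence."""
    return i_525 > 0 and (i_580 / i_525) > cutoff

detections = [(850.0, 300.0), (400.0, 900.0), (120.0, 115.0)]
kept = [d for d in detections if passes_ratio_filter(*d)]
print(kept)   # the 525-dominant (autofluorescent) object is rejected
```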
IV. DISCUSSION

We propose that improvements in CTC measurement can be achieved when sample preparation and detection technique are considered together. The sample preparation process described here was developed to preserve both the maximum number and the variety of forms of available CTCs. This was accomplished using only minimal blood preparation and omitting antigen- or density-dependent enrichment methods. With this minimal pre-analytical processing approach, the burden is placed on the detection instrument to scan the remaining large number of prospective cells with high sensitivity and specificity. While the FAST cytometer is capable of scanning fast enough to screen 50 million nucleated cells without enrichment, the subsequent ADM imaging becomes a
limitation when the false positive rate exceeds a few hundred. We expect that automation, with further process optimization, will reduce false positive levels.
While several other studies address CTC characterization, our approach preserves cellular morphology together with a number of simultaneously detected markers. The detailed cytomorphological and immunophenotypic characterization enabled by imaging undistorted cells on a planar surface is relevant beyond CTC identification and characterization: high-fidelity images allow marker localization to be incorporated when assessing expression levels. For example, HER2 localizes to the membrane while ERCC1 and ER localize to the nucleus, as shown in Fig. 1. Using localization improves the specificity of CTC identification and of cancer marker expression assessment by reducing the inclusion of nonspecific binding.

Fig. 1 Images of cancer marker labeling. Original cells in the top row with CK (red) and nucleus (blue). The bottom row shows markers (green), from left to right: ERCC1, HER2, and ER

While the discordance between cancer marker status in CTCs and tissue is comparable to early results of others, the reported levels of discordance vary over a considerable range. Although some of this variation could be due to small sample sizes, it is likely that the variation also derives from the CTC detection methodology as well as from the approach to quantifying expression level. In addition, the cutoff score that differentiates positive and negative marker status on CTCs could well differ from that empirically determined for tissue; the cutoff score for CTCs will need to be determined from patient outcomes.

V. CONCLUSIONS

Detecting multiple markers in CTCs from patients with MBC is feasible, and significant discordance between expression patterns on CTCs and the primary tumor is observed. Multiplex testing may allow for personalized therapy for patients with metastatic breast cancer.

ACKNOWLEDGMENT

The work was supported by funding from the National Cancer Institute.

REFERENCES

1. Cristofanilli M, Budd GT, Ellis MJ, et al. (2004) Circulating tumor cells, disease progression, and survival in metastatic breast cancer. N Engl J Med 351:781-91
2. Budd GT, Cristofanilli M, Ellis MJ, et al. (2006) Circulating tumor cells versus imaging--predicting overall survival in metastatic breast cancer. Clin Cancer Res 12:6403-9
3. Wülfing P, Borchard J, Buerger H, et al. (2006) HER2-positive circulating tumor cells indicate poor clinical outcome in stage I to III breast cancer patients. Clin Cancer Res 12(6):1715-20
4. Vona G, Sabile A, Louha M, et al. (2000) Isolation by size of epithelial tumor cells: a new method for the immunomorphological and molecular characterization of circulating tumor cells. Am J Pathol 156:57-63
5. Martin VM, Siewert C, Scharl A, et al. (1998) Immunomagnetic enrichment of disseminated epithelial tumor cells from peripheral blood by MACS. Exp Hematol 26:252-64
6. Maheswaran S, Sequist LV, Nagrath S, et al. (2008) Detection of mutations in EGFR in circulating lung-cancer cells. N Engl J Med 359(4):366-77
7. Goeminne JC, Guillaume T, Symann M (2000) Pitfalls in the detection of disseminated non-hematological tumor cells. Ann Oncol 11:785-92
8. Krivacic RT, Ladanyi A, Curry DN, et al. (2004) A rare-cell detector for cancer. PNAS 101:10501-4
9. Marrinucci D, Bethel K, Bruce RH, et al. (2007) Case study of the morphologic variation of circulating tumor cells. Hum Pathol 38:514-9
10. Hsieh HB, Marrinucci D, Bethel K, et al. (2006) High speed detection of circulating tumor cells. Biosens Bioelectron 21:1893-9
11. Olaussen KA, Dunant A, Fouret P, et al. (2006) DNA repair by ERCC1 in non-small-cell lung cancer and cisplatin-based adjuvant chemotherapy. N Engl J Med 355:983-91

Author: Richard Bruce
Institute: Palo Alto Research Center
Street: 3333 Coyote Hill Rd
City: Palo Alto, CA
Country: United States
Email: [email protected]
Nanohole Array Sensor Technology: Multiplexed Label-Free Protein Binding Assays

J. Cuiffi1, R. Soong2, S. Manolakos1, S. Mohapatra3, and D. Larson2

1 Draper Laboratory – Bioengineering Center at USF, Tampa, USA
2 Draper Laboratory, Cambridge, USA
3 University of South Florida, Department of Molecular Medicine, Tampa, USA

Abstract— We present a review of current implementations of nanohole array sensor technology and discuss future trends for this technique as applied to multiplexed, label-free protein binding assays. Nanohole array techniques are similar to surface plasmon resonance (SPR) techniques in that local refractive index changes at the sensor surface, correlated to protein binding events, are probed and detected optically. Nanohole array sensing differs in its use of a transmission-based mode of optical detection, extraordinary optical transmission (EOT), which eliminates the need for prism coupling to the surface and provides high spatial and temporal resolution for chip-based assays. This enables nanohole array sensor technology to combine the real-time label-free analysis of SPR with the multiplexed assay format of protein microarrays. Various implementations and configurations of nanohole array sensing have been demonstrated, but the use of this technology for specific research or commercial applications has yet to be realized. In this review, we discuss the potential applications of nanohole array sensor technology and the impact that each application has on sensor, instrument, and assay design. A specific example presented is a multiplexed biomarker assay for metastatic melanoma, which focuses on biomarker specificity in human serum and ultimate levels of detection. This example demonstrates strategies for chip layout and the integration of microfluidic channels to take advantage of the high spatial resolution achievable with this technique. Finally, we evaluate the potential of nanohole array sensor technology against current trends in SPR and protein microarrays, providing direction for developing this tool to fill unmet needs in protein analysis.

Keywords— SPR, extraordinary optical transmission, nanohole array sensor, label-free detection, protein microarray.
I. INTRODUCTION

Nanohole array sensor technology is a promising approach for highly multiplexed, label-free protein binding assays. High-throughput protein interaction analysis has proven difficult to implement in comparison to the highly successful DNA microarray technology [1-5], and many factors contribute to this difficulty. Proteins are less stable than nucleic acids, in both chemistry and conformation, and require specific orientation when attached to a surface. Proteins interact with a variety of molecular species, including small molecules,
nucleic acids, and other proteins. The capture species, especially other proteins such as antibodies, are more difficult to synthesize than DNA capture probes. Typical optical microarray imaging techniques require labels, which may interfere with the interactions of interest. Finally, not only are absolute and relative protein concentrations often desired, but also protein interaction kinetics. Although the nature of proteins and their interactions cannot be changed, label-free binding assays offer an approach to determining protein concentrations and kinetics without interference from molecular tags [6-8].

Surface plasmon resonance (SPR) techniques have proven to be the modern label-free standard for protein kinetics assays [9-14]. SPR does not, however, offer easy integration with highly multiplexed techniques such as protein microarrays, nor a limit of detection (LoD) comparable to labeled techniques such as enzyme-linked immunosorbent assays (ELISA) [6,8,13,15]. Recent advances in nanohole array sensor technology have demonstrated promise for achieving high-density label-free kinetic measurements coupled with the potential for improved LoD over SPR [16-21].
II. TECHNOLOGY REVIEW

A. Nanohole Array Sensors vs. SPR

SPR techniques operate by measuring local index of refraction changes of a liquid (or gas) solution on a metal surface. The principle of operation is shown schematically in Fig. 1a. Light is coupled to surface plasmons in the metal with a prism or grating, and the reflected light is analyzed. The surface plasmons are sensitive to the local index of refraction and alter the coupled/reflected wave, offering a detection mechanism through changes in coupling angle, coupling wavelength, reflected light intensity, or reflected light phase [13]. In a typical protein binding experiment, a detection molecule (e.g., an antibody) is fixed to the surface. The interacting molecule of choice (e.g., an antigen) is then perfused across the surface, and real-time interaction assessments are made as molecules bind near (within ~200 nm of) the surface [22].
Fig. 1 a) Schematic of SPR operation showing an example of a protein antibody capture system; b) schematic of nanohole array sensor operation showing 4 nanohole array sensors. Drawings are not to scale

Nanohole array sensor technology is similar in that local index of refraction changes on a metal surface correlate to optical changes. In this technique, however, a transmission (rather than reflection) mode of optical coupling is used, as shown in Fig. 1b. The transmitted light passes through arrays of holes in the metal film, where the holes are substantially smaller than the incident wavelength of light, by coupling with local surface plasmons [23]. This transmission mode, called extraordinary optical transmission (EOT), is an unexpected phenomenon [24] and has only recently been applied to monitoring molecular binding events [25,26]. As detailed below, nanohole array sensor technology offers unique advantages over SPR, combining real-time temporal resolution with a spatial resolution beyond that of modern microarrays.

B. Instrumentation and Sensor Chip Design

Nanohole array technology, making use of EOT, eliminates the need for prisms or optical gratings as in SPR. This simplifies the optical instrumentation, allowing for improved multiplexing. Traditional prism-coupled SPR has been limited to small numbers of parallel sensors (<100) [7,10,11,14]. Recent improvements in SPR imaging techniques have achieved parallel sensing on the order of 1000 sensor areas at once [7,9,12,27,28]. EOT does not require external coupling to the sensor surface, and even enables perpendicular (normal) incident light
excitation. As in SPR, the angle of the incident light along with its wavelength may be varied, and spectroscopic readout techniques may be used [16,17,19,20]. In a simplified approach that offers fast (<1 s) monitoring of the entire sensor area, monochromatic light is used to illuminate the sensing surface and a charge-coupled device (CCD) camera is placed in focus behind the sensor [18]. The CCD measures the transmission intensity of each nanohole array sensor simultaneously and has been shown to enable monitoring of up to 20,164 sensors [18] (a readout sketch follows at the end of this subsection). If a microscope objective is placed between the sensor chip and the camera, an ultra-high density of sensors (>10^7/cm²) can be probed by scanning the chip [19].

Key to the achievable sensor density is the nanohole array chip design. Each nanohole array sensor on the chip comprises a periodic array of sub-wavelength holes in the metal film, as shown schematically in Fig. 1b. The number of holes and their pitch, size, and depth in the array all influence the EOT effect and its dependence on the local index of refraction. A key point when implementing this technology is that the nanohole array design, in conjunction with the associated incident light characteristics, must be optimized for each species to be detected. For example, a given hole pitch and size may be optimal for detecting changes in salt concentration, but not for protein interactions [16]. Studies have even shown inverse relationships between protein adsorption and transmission intensity upon simply changing the pitch of the nanohole sensor array [20]. These variables may seem difficult to optimize for technology commercialization, but they do offer a variety of options for sensing on a single chip, such as various ranges of sensitivity and specificity to different species in a sample. To achieve an ultra-high density of sensors, designs that include Bragg mirrors and other optical confinement techniques can be used [19]. Nanohole array chips require advanced patterning techniques, because the fundamental feature size is below the wavelength used in typical optical patterning. Focused ion beam (FIB) milling is a popular choice for prototype chips, but other techniques, such as e-beam and phase-shifting photolithography, have been explored [29].

Real-time assays are designed with low-volume flow systems (microfluidics) to deliver the sample. Microfluidics can be used to address different regions on a single printed chip and is used in modern SPR instruments [10,11]. This allows for reference channels and channels for more than one sample to be run simultaneously, and has been demonstrated with nanohole array technology [20]. For label-free assays, the most important consideration is the capture molecule and its orientation and attachment to the sensor surface. The capture molecules are commonly antibodies, but may also include small molecules (with a tether), nucleic acids, or other proteins and peptides.
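As an illustration of the intensity-interrogation readout described above, a minimal sketch (with hypothetical frame sizes and sensor locations, since the actual chip geometry varies by design) might extract one transmission time series per sensor from a stack of CCD frames:

```python
# Sketch of per-sensor intensity readout from CCD frames (illustrative).
# Assumes each nanohole-array sensor maps to a known pixel region (ROI)
# of the camera; the frame shape and ROIs here are hypothetical.
import numpy as np

def sensor_traces(frames, rois):
    """Mean transmitted intensity per sensor, per frame.

    frames: array (n_frames, height, width) of CCD images
    rois:   list of (row_slice, col_slice) regions, one per sensor
    returns: array (n_sensors, n_frames) of intensity time series
    """
    return np.array([[f[r, c].mean() for f in frames] for (r, c) in rois])

frames = np.random.rand(100, 64, 64)                 # stand-in for CCD data
rois = [(slice(0, 8), slice(0, 8)), (slice(8, 16), slice(0, 8))]
traces = sensor_traces(frames, rois)                 # binding curves per sensor
print(traces.shape)                                  # (2, 100)
```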
Specificity is always a consideration in protein assays. One advantage of multiplexing is the ability to print more than one capture molecule for the same protein; this can even be done by spotting the antibodies used in sandwich ELISA on different sensors. For increased dynamic range, capture molecules with different binding affinities for the same species can also be spotted on the same chip. In fact, the use of redundancy, multiple capture agents for each species, and individual sensors with different measurement characteristics to improve specificity and sensitivity may prove more valuable than increasing the number of analytes that a chip can detect.

C. Performance Comparison

Performance of SPR and nanohole array sensor techniques is commonly reported as refractive index unit (RIU) resolution and as the ultimate limit of detection (LoD). The resolution, in terms of RIU, is the minimum detectable change in index of refraction. SPR resolution varies by specific implementation, but in general prism-based SPR can achieve a resolution of ~10^-8 RIU, while multiplexed SPR imaging can achieve ~10^-7 RIU [8]. Nanohole array sensor technology has been shown to achieve a resolution of 9.4 × 10^-8 RIU [25]. This is promising given the immature status of the technology. For SPR and nanohole array sensing, comparisons are made using LoD units of weight/mL (e.g., ng/mL) rather than molar values, because the weight (and corresponding influence on the index of refraction) can vary at a given molar concentration. That said, it is convenient to report molar LoD, and reports have shown an LoD of 13 nM for anti-glutathione S-transferase (GST) in a highly multiplexed nanohole array sensor format [18]. This is again promising for a technology that has not yet benefited from specific parameter and instrumentation optimization.
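To relate the two conventions, a one-line conversion suffices; the ~150 kDa IgG molecular weight assumed below for the anti-GST antibody is our illustrative assumption, not a value from the cited report:

```python
# Molar-to-mass LoD conversion (illustrative; the ~150 kDa IgG mass
# assumed for the anti-GST antibody is ours, not from the source).
lod_molar = 13e-9          # mol/L (13 nM, from the text)
mw_igg = 150_000           # g/mol, typical IgG (assumed)
lod_mass = lod_molar * mw_igg * 1e9 / 1e3   # g/L -> ng/mL
print(f"{lod_mass:.0f} ng/mL")              # ~1950 ng/mL (~2 ug/mL)
```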
III. APPLICATIONS

A. Multiplexed Protein Binding Assays

Proteins are responsible for a wide variety of reactions in biological systems, and protein assays are therefore varied in their implementation, from abundance-based to function-based. Multiplexed protein analysis assays using SPR have been used for monoclonal antibody screening [11], antibiotic drug impact on serum protein binding [10], serum protein quantification [12], detection of serum biomarkers [9], and many other applications [13,28]. Nanohole array sensor techniques have not yet been used in a commercial setting, but efforts are underway to optimize the technology for specific protein binding assays.
Fig. 2 Top-down schematic of the microfluidic, multiplexed nanohole array sensor chip for biomarker detection in serum. The chip carries microfluidic channels for 5 samples with associated reference channels; spotted antibodies include a positive control (c+), a negative control (c-), and capture antibodies (a1, a2); each nanohole array sensor is a 10 × 10 array of 130 nm holes (figure scale markers: ~3 mm, 100 μm, 3 μm)

B. Metastatic Melanoma Biomarker Assay

As an example, we present a scheme for melanoma biomarker detection in serum using a nanohole array sensor chip. An early blood-based biomarker for cutaneous metastatic melanoma has not been found, and early detection is key for this devastating disease [30]. With a sensitive multiplexed assay, several biomarkers may be probed at once, potentially revealing correlations with more than one biomarker. For this application, the nanohole array sensor design must first be optimized by varying nanohole array and instrumentation parameters for the specific antibody system. For specificity and reduction of false positives and negatives, multiple control samples must be used and nonspecific binding must be reduced through passivating surface chemistry. A scheme for a sensor layout that achieves this on a single chip is shown in Fig. 2. In this design, microfluidic channels are used to create reference channels (for temperature, optical source intensity, and buffer reference) and to deliver multiple samples on a single chip. The chip interfaces to an instrumentation setup as previously reported [18]. Briefly, a 635 nm diode laser illuminates a 1 cm² area of the chip, and a CCD camera focused on the back of the chip monitors the intensity from each nanohole array sensor in real time. Optimization of the system will include incident laser angle, noise control systems, and data processing to benchmark LoD and analyze protein kinetics.
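A sketch of how such reference channels might enter the data processing step follows; the drift and binding models are toy stand-ins of our own, not the reported processing pipeline:

```python
# Sketch of reference-channel normalization for a layout like Fig. 2
# (illustrative; the drift model and parameter values are hypothetical).
import numpy as np

def normalize(sample_trace, reference_trace):
    """Divide out common-mode drift (source intensity, temperature, buffer)."""
    return np.asarray(sample_trace) / np.asarray(reference_trace)

t = np.linspace(0, 600, 601)                      # seconds
drift = 1.0 + 0.02 * np.sin(t / 120)              # shared source/thermal drift
binding = 1.0 + 0.05 * (1 - np.exp(-t / 200))     # toy Langmuir-like uptake
sample = binding * drift                          # what the sample sensor sees
reference = drift                                 # what the reference sensor sees
print(normalize(sample, reference)[-1])           # ~1.047: binding, drift removed
```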
IV. CONCLUSIONS AND FUTURE TRENDS

Nanohole array sensor technology, however promising, does require further technical development. The correlations between sample type and the optimal detection schemes continue to be studied, and the nanohole array sensor parameters and their influence on performance are not yet fully understood. Therefore, the ultimate resolution and LoD of the technique have yet to be defined. Even with these hurdles to overcome, the technology has shown potential to become the next-generation label-free detection scheme of choice. As momentum builds for multiplexed SPR, nanohole array sensor development will continue to mature, ultimately providing improved assay techniques.
ACKNOWLEDGMENT The authors are grateful for the support from the Bankhead-Coley Research Program (grant #09BW-08) and NHGRI (grant #7R01HG003828-04). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NHGRI or the NIH.
REFERENCES

1. Zhu H, Snyder M. (2003) Protein chip technology. Curr Opin Chem Biol. 7(1): 55-63.
2. LaBaer J, Ramachandran N. (2005) Protein microarrays as tools for functional proteomics. Curr Opin Chem Biol. 9(1): 14-19.
3. Mitchell P. (2002) A perspective on protein microarrays. Nat Biotechnol. 20: 225-229.
4. MacBeath G, Schreiber SL. (2000) Printing proteins as microarrays for high-throughput function determination. Science. 289(5485): 1760-1763.
5. Ramachandran N, Hainsworth E, Bhullar B, et al. (2004) Self-assembling protein microarrays. Science. 305(5680): 86-90.
6. Ramachandran N, Larson DN, Stark PR, et al. (2005) Emerging tools for real-time label-free detection of interactions on functional protein microarrays. FEBS J. 272(21): 5412-5425.
7. Rich RL, Myszka DG. (2007) Higher-throughput, label-free, real-time molecular interaction analysis. Anal Biochem. 361(1): 1-6.
8. Fan X, White IM, Shopova SI, et al. (2008) Sensitive optical biosensors for unlabeled targets: a review. Anal Chim Acta. 620(1-2): 8-26.
9. Ladd J, Taylor AD, Piliarik M, et al. (2009) Label-free detection of cancer biomarker candidates using surface plasmon resonance imaging. Anal Bioanal Chem. 393(4): 1157-1163.
10. Baneres-Roquet F, Gualtieri M, Villain-Guillot P, et al. (2009) Use of a surface plasmon resonance method to investigate antibiotic and plasma protein interactions. Antimicrob Agents Chemother. 53(4): 1528-1531.
11. Nahshol O, Bronner V, Notcovich A, et al. (2008) Parallel kinetic analysis and affinity determination of hundreds of monoclonal antibodies using the ProteOn XPR36. Anal Biochem. 383(1): 52-60.
12. Lausted C, Hu Z, Hood L. (2008) Quantitative serum proteomics from surface plasmon resonance imaging. Mol Cell Proteomics. 7(12): 2464-2474.
13. Homola J. (2008) Surface plasmon resonance sensors for detection of chemical and biological species. Chem Rev. 108(2): 462-493.
14. Safsten P, Klakamp SL, Drake AW, et al. (2006) Screening antibody-antigen interactions in parallel using Biacore A100. Anal Biochem. 353(2): 181-190.
15. Homola J, Koudela I, Yee SS. (1999) Surface plasmon resonance sensors based on diffraction gratings and prism couplers: sensitivity comparison. Sensors and Actuators B. 54: 16-24.
16. Yang J, Ji J, Hogle JM, et al. (2009) Multiplexed plasmonic sensing based on small-dimension nanohole arrays and intensity interrogation. Biosensors and Bioelectronics. 24(8): 2334-2338.
17. Yang JC, Ji J, Hogle JM, et al. (2008) Metallic nanohole arrays on fluoropolymer substrates as small label-free real-time bioprobes. Nano Lett. 8(9): 2718-2724.
18. Ji J, O'Connell JG, Carter DJ, et al. (2008) High-throughput nanohole array based system to monitor multiple binding events in real time. Anal Chem. 80(7): 2491-2498.
19. Lindquist NC, Lesuffleur A, Im H, et al. (2009) Sub-micron resolution surface plasmon resonance imaging enabled by nanohole arrays with surrounding Bragg mirrors for enhanced sensitivity and isolation. Lab Chip. 9(3): 382-387.
20. Im H, Lesuffleur A, Lindquist NC, et al. (2009) Plasmonic nanoholes in a multichannel microarray format for parallel kinetic assays and differential sensing. Anal Chem. 81(8): 2854-2859.
21. Lesuffleur A, Im H, Lindquist NC, et al. (2007) Periodic nanohole arrays with shape-enhanced plasmon resonance as real-time biosensors. Applied Physics Letters. 90: 243110.
22. Homola J, Yee S, Gauglitz G. (1999) Surface plasmon resonance sensors: review. Sensors and Actuators B. 54(1): 3-15.
23. Ghaemi HF, Thio T, Grupp DE, et al. (1998) Surface plasmons enhance optical transmission through subwavelength holes. Phys Rev B. 58(11): 6779-6782.
24. Ebbesen TW, Lezec HJ, Ghaemi HF, et al. (1998) Extraordinary optical transmission through sub-wavelength hole arrays. Nature. 391: 667-669.
25. Stark P, Halleck AE, Larson DN. (2005) Short order nanohole arrays in metals for highly sensitive probing of local indices of refraction as the basis for a highly multiplexed biosensor technology. Methods: Companion to Methods in Enzymology. 37: 37-47.
26. Brolo AG, Gordon R, Leathem B, et al. (2004) Surface plasmon sensor based on the enhanced light transmission through arrays of nanoholes in gold films. Langmuir. 20: 4813-4815.
27. Boozer C, Kim G, Cong S, et al. (2006) Looking towards label-free biomolecular interaction analysis in a high-throughput format: a review of new surface plasmon resonance technologies. Curr Opin Biotechnol. 17(4): 400-405.
28. Campbell CT, Kim G. (2007) SPR microscopy and its applications to high-throughput analyses of biomolecular binding events and their kinetics. Biomaterials. 28(15): 2380-2392.
29. Henzie J, Lee J, Lee MH, et al. (2009) Nanofabrication of plasmonic structures. Annu Rev Phys Chem. 60: 147-165.
30. Larson AR, Konat E, Alani RM. (2009) Melanoma biomarkers: current status and vision for the future. Nat Clin Pract Oncol. 6(2): 105-117.

Corresponding Author: Dale N. Larson
Institute: Draper Laboratory
Address: 555 Technology Square, Cambridge, MA, USA
Email: [email protected]
Author Index
A
Aamair, M. 278 Abney, Teresa M. 9 Abshire, P. 282 Acosta, Miguel A. 332 Adhikari, S. 434 Agubuzo, O. 404 Aguel, F. 254 Al-Oglah, R. 556 Al-Shalabi, M. 556 Andrews, P.M. 477 Aprelev, A. 532, 536 Aronova, M.A. 357 Ashtiani, M.N. 376 Asman, E. 142 Assari, S. 380, 437 Atkins, J.L. 26 Avolio, Alberto 81 Azarnoosh, M. 258 Azhar, F. 278
B
Baghmanli, Z. 430 Bailey, Ann 372 Balachandran, B. 13 Balajoo, S. Maleki 393 Banton, Sophia 196 Baranova, A. 130 Baras, J.S. 204 Barreto, Jose 286 Barth, E.D. 466 Bartolini, P. 254 Bashirullah, Rizwan 414 Basiri, Ali 329, 504 Bastidos, A.M. 154 Baum, Brian S. 365, 368 Bauman, R.A. 26 Bayly, P.V. 31 Bayly, Philip V. 9 Beatty, Michelle 372 Bekdash, Omar 301, 313, 401 Beldie, L. 456 Belkoff, S.M. 274 Benchekroun, Z.H. 97 Benedetto, John J. 496 Ben Hsieh, H. 568 Bennis, Robyn 568 Bentley, T.B. 26 Bentley, W.E. 426, 430 Betz, Jordan 301, 313, 401 Betz, Joshua F. 38 Bhagavatula, R. 348 Bhatia, Sanjeev 105 Bhavanam, Kranthi Kumar 540 Biermann, P.J. 274 Blake-Greenberg, Sherry 134, 138 Bohluli, B. 516 Bonner, R. 361 Bonner, Robert F. 344 Borin, J. 477 Borjian, Roozbeh 365 Borrelli, J.R. 122 Boyden, E.S. 61 Branco, Isa 85 Brokaw, E.B. 113 Bruce, Richard H. 568 Buckhout-White, Susan 313 Burks, S.R. 466
C
Cahill, M. 544 Calcagnini, G. 254 Camacho, N. 165 Cargal, T.E. 240 Carvajal, D.A. 126 Cederna, P.S. 430 Censi, F. 254 Chae, S.J. 221 Chandhoke, V. 130 Chapain, P. 247 Chatterjee, Monita 49 Chemlal, Salim 67 Chen, C. 353 Chen, C.-f. 77 Chen, C.W. 477 Chen, P. 336 Chen, Y. 477, 485 Cheng, Yi 301, 313, 401 Chew, Emily 344 Chiesa, O.A. 150, 512 Cho, I. 463 Cho, Ilsung 71 Choi, S. 221, 463 Choi, S.K. 221 Choi, Samjin 71 Choi, Samjinn 293 Choi, Y.-S. 289 Choi, Y.S. 463 Choi, Young-Seok 101 Clayton, E.H. 31 Cohen, E. 385 Colberg, Sheri 67 Conner, H.A. 274 Cotton, R.T. 456, 470 Crandall, J.R. 448 Cuiffi, J. 572 Cunningham, Denise 344 Czaja, W. 361 Czaja, Wojciech 344, 496
D
Dagenais, Mario 317 Damon, A.M. 448 Dampier, C. 544 Darvish, K. 380, 437, 448 Das, S. Sarkar 240 Das, Srilekha Sarkar 89 Dehghani, Fariba 188 Depireux, D.A. 270 Desai, J. 485 DeShong, Philip 317 Ding, N. 45 Diong, Bill 201, 251 Dobrosotskaya, J. 361 Dobrosotskaya, Julia 344 Domingues, José P. 85 Dreher, M.L. 512 Duchowny, Michael 105 Duncan, D. 321
E
Egeland, B.M. 430 Ehler, M. 361 Ehler, Martin 344, 496 Eisenman, D.J. 270 Elliott, R.L. 489 English, R. 409 Esparza, J. 150, 512 Ettienne-Modeste, Geriel 161 Euliano, Neil 414
578 F Farahi, Farnoosh 305 Feng, Y. Aaron 9 Fernandez-Fernandez, A. 126 Fernandez-Fernandez, Alicia 224, 228 Ferrone, F.A. 532, 536 Fickus, M. 348 Fisher, A. 130 Fiskum, G. 1, 5 Fitzsimmons, Jeffery R. 414 Forry, Samuel P. 325 Fourney, W.L. 1 Fraiwan, L. 508, 556 Frankel, Paul 568 Frantz, F. 528 Fraser, K.H. 548 G Galante, A.R. 213 Genin, G.M. 31 Genin, Guy M. 9 George, T.C. 560 Germanier, Catherine 440 Ghandehari, H. 236 Gharavi, R. 26 Ghodssi, R. 426, 430 Godfrey, S.B. 266 Goeller, Jack 34 Goldberg, D.S. 236 Goldman, Michael D. 201, 251 Golzan, S. Mojtaba 81 Gonzales, Pedro 217 Goretsky, M. 528 Graham, Stuart L. 81 Greenspan, J. 481 Griffith, B.P. 548 Grill, W.M. 389 Guha, S. 146 Guha, Suvajyoti 232 Gullapalli, R. 1, 5, 481, 485 Gullapalli, Rao P. 38 Gyuricsko, Eric 67 H Haase, E.B. 56 Hadarees, D. 508 Haidegger, Tam´ as 92 Halpern, H.J. 466
Author Index Hameed, K. 278 Hardy, W.N. 452 Harrigan, T.P. 18 Harris-Love, M.L. 266 Haslach Jr., H.W. 122 Hazelton, J. 1 Hoppmann, E. 297 Hossein-Zadeh, G.A. 393 Hsu, L.L. 544 Huang, J. 109 Hubbard, Tom 67 Hurley, Matthew T. 317 I Imani, R. 376 Iravani, Hoda 520 J Jamil, A. 508 Januszkiewicz, A.J. 26 Jayakar, Prasanna 105 Ji, Chengdong 188 Jia, Xiaofeng 101 Jin, A.J. 500 Jo, Il Sung 293 Johnson, A. 247, 397 Johnson, Arthur T. 65 Jordan, M.H. 504 K Kang, S.W. 221 Kao, J.P.Y. 466 Karanian, J.W. 150, 512 Kariyawasam, Pramodh 520 Kavallappa, T. 481 Kazarian, Sergei G. 188 Kelley, Matthew W. 42 Kelly, R. 528 Kim, H.-N. 289 Kim, K.S. 463 Kim, Kyung Sook 71, 293 Kim, Y.C. 357 Kim, Yoon Hyuk 368 Kim, You-Sin 365, 368 King, E. 361 King, Emily 344 Kipke, D.R. 430 Knisley, S. 528 Kostov, Y. 309
Kothare, M.V. 289 Kovaˇcevi´c, J. 348 Krauthamer, V. 254 Kreitz, M. 150, 512 Krivacic, Robert T. 568 Kruecker, Jochen 473 Kyrtsos, C.R. 204 L Laksari, K. 437 Langhals, N.B. 430 Larson, D. 572 Lau, Sean 568 Leach, J.B. 142, 422 Leach, Jennie B. 332 Leapman, R.D. 357 Lee, A.B. 353 Lee, G.J. 221, 463 Lee, Gi Ja 71, 293 Lee, P. 209 Lee, Sang Ho 71 ´ Lehotsky, Akos 92 Lei, Peng 473 Lei, Tingjun 224, 228 Lemaillet, P. 321 Lerner, N. 536 Levenson, Richard M. 564 Levine, William S. 524 Levy, D. 209, 213, 552 Li, Yao 524 Lin, W. 336 Lin, Wei-Chiang 105 Linberg, Alison 365 Liu, Z. 532 Lompado, A. 321 Long, J.B. 26 Lopez, O. 150, 512 Lucas, A.D. 240 Lum, P.S. 113, 266 Luo, Tianzhi 74 Luo, Xiaolong 401 Lweesy, K. 508, 556 Ly, Janey 568 M MacDermott, M. 536, 544 Madhok, Jai 101 Majd, S. 247, 397, 516 Majumdar, Zigurts 344
Author Index Malekmohammadi, M. 243, 459 Manchanda, Romila 224, 228 Mangum, Michael 251 Manolakos, S. 572 Martin, D.C. 430 Martin, S.S. 466 Massar, M.L. 348 Mattei, E. 254 McDermott, M.K. 240 McDowell, B. 150, 512 McGoron, A.J. 126 McGoron, Anthony J. 228 Mcgoron, Anthony J. 224 McKenzie, F.D. 528 McKenzie, Frederic D. 67 McMillan, A. 481, 485 Mehl, P. 171, 404 Meier, M. 536, 544 Merkle, A.C. 18, 22 Meyer, M.T. 426 Minderman, H. 560 Minnikanti, S. 97, 385 Miriani, R.M. 430 Moein, A. 243, 459 Mohammad, L. 556 Mohapatra, S. 572 Morgado, Ant´ onio M. 85 Morsi, Y.S. 180 Mujeeb, D. 278 Murray, T.M. 113 Musib, Mrinal K. 158 Muzammil, M. 278 N Nagaraja, S. 512 Nagy, Melinda 92 Najafi, A. Raeisi 516 Nazeran, Homer 134, 138, 201, 251 Nef, T. 113 Newman, G.I. 289 Nguyen, Q.D. 321 Nguyen, Thu T.A. 504 Noh, H. 536 Nuss, D. 528 O O’Brien, E.M. 489 O’Loughlin, K.L. 560 Oberzut, Cherish 49 Oh, Sanghoon 105 Okamoto, Ruth J. 9
579 Omokanwaye, Tiffany 183, 418, 520 Onozato, M.L. 477 Oskui, I. Zoljanahi 516 Owens, Donae 418 Ozolek, J.A. 353 P Pancrazio, J. 97 Parichehreh, Vahidreza 540 Park, D.H. 463 Park, Dong Hyun 293 Park, E.K. 221 Park, H.K. 221, 463 Park, Hun-Kuk 293 Park, Hun Kuk 71 Park, J.H. 463 Park, Jaebum 368 Park, Jeong Hoon 71, 293 Parks, S. 26 Pashaei, A. 516 Paskoff, G. 448 Patanarut, A. 130 Patel, L. 240 Patel, S.S. 180 Pattekari, P. 175 Patwardhan, D.V. 240 Pavlovic, Mirjana 196 Pavlovich, A.R. 504 Pearce, C.W. 456 Pease III, L.F. 146 Peixoto, N. 97, 385 Peng, Shu-Chen 49 Peterson, David M. 414 Pham, Kevin 414 Phelan, M. 477 Pierrakos, Olga 372 Pless, Robert 9 Plishker, William 473 Powell, E.M. 422 Presacco, A. 289 Pritchard, W.F. 150, 512 R Racz, J. 5 Rad, D. 150, 512 Raghavan, Srinivasa R. 325 Ragheb, John 105 Raj, Vinay 540 Ramadan, E. 508 Ramella-Roman, J.C. 321, 504 Ramella-Roman, Jessica C. 329
Rao, G. 309 Raymont, D. 470 Rechowicz, K.J. 528 Reis, L.G. 175 Ribeiro, A.S. 422 Riccio, C. 26 Richardson, D. Coleman 89 Ritzel, D.V. 26 Roberts, J.C. 18, 22 Robinson, Douglas N. 74 Rohde, G.K. 353 Romanov, V.V. 380 Rosen, G.M. 466 Roth, Zvi 196 Rothwell, G. 409 Roy, A. 481 Roy, Anindya 38 Roy, V. 426, 430 Roys, S. 5, 481 Rubloff, Gary W. 301, 313, 401 Ruihua, Ye 118 Ruys, Andrew 188 Ryu, Geunmin 317 S Saboori, P. 444 Saboori, Parisa 440 Sadegh, A. 444 Sadegh, Ali 440 Saha, Subrata 158 Sanford, Z. 504 Sarje, A. 282 Sathish, R. Sai 309 Satin-Smith, Marta 67 Saylor, D.M. 240 Scerbo, Mark W. 67 Schabowsky, C.N. 266 Schroeder, K. 430 Schuck, P. 340 Schwartz, Erich 568 Schwerin, Matthew 89 Semework, Mulugeta 493 Sethu, Palaniappan 540 Shah, C.S. 452 Shahrukh, I. 278 Shanmuganathan, Kathirkamanthan 38 Shariatmadari, M.R. 409 Shekhar, Raj 473
580 Shender, B.S. 448 Shi, D. 5 Shim, B.S. 430 Shim, J.K. 109 Shim, Jae Kun 365, 368 Shoge, R. 26 Shourabi, N. Bazyar 192 Shupp, J.W. 504 Silva, R. 404 Silva, R.A. 171 Simon, J.Z. 45 Sit, P.S. 175 Sivamurthy, K.M. 544 Skoblov, M. 130 Slepcev, D. 353 Smith, C. 130 Smith, P.D. 500 Soltanian-Zadeh, H. 393 Somlo, George 568 Song, Wang 118 Soong, R. 572 Sousa, A.A. 357 Srinivasan, Supriya 224 Stafford, S.W. 154, 165 Stephenson, W. 536 Stohlman, J. 254 Swaan, P.W. 236 Sweeney, James 286 T Tack, Charles 89 Tan, Z.B. 262 Tang, Yuan 224, 228 Tarlov, M.J. 146 Tarlov, Michael J. 232 Taskin, M.E. 548 Thakor, N.V. 289 Thakor, Nitish V. 101 Thomas, Peter C. 325 Tomasetti, C. 213, 552 Tong, L. 26 Topoleski, Timmie 161 Torres, Jorge 286
Author Index Tran, Binh Q. 217 Triventi, M. 254 Trueba Jr., L. 165 Truong, X. 500 Tsai, D.-H. 146 Turner, Walker 414
Wittenberg, G.F. 53 Wong, W.C. 56 Wong, Wai 344 Wood, Bradford 473 Wu, Dan 101 Wu, Z.J. 548
U
X
Untaroiu, C.D. 448 Urbanchek, M.G. 430
Xu, S. 5 Xu, Sheng 473 Xuan, V. Bui 470
V Y Valdez, M.F. 13 Van Druff, J.L. 142 Venkatesan, Aradhana 473 Vossoughi, J. 150, 247, 397, 512 Vu, Q. 500 W WaiPan, Yau 118 Walker, B. 456 Wallace, P.K. 560 Walsh, Donna 89 Wang, H. 262 Wang, L.Y. 262 Wang, W. 353 Ward, E.E. 18 Wardlaw, Andrew 34 Wasserman, S.C. 61 Wawroski, Lauren 49 Wayment, Joshua 232 Wei, B. 430 Wellner, E. 500 White, I.M. 297 White, Ian 305 White, N.A. 452 Wierwille, J. 477, 485 Wilson, O.C. 171, 404 Wilson, S.N. 209 Wilson Jr., Otto 183, 418, 520 Wing, I.D. 22, 274
Yadav, Nitin 105 Ying, S.H. 289 Yonghua, Chen 118 Young, P.G. 456, 470 Youssefi, K. 459 Yu, Hong 414 Z Zachariah, M.R. 146 Zachariah, Michael R. 232 Zaidi, K.F. 97 Zakharov, M. 532 Zangmeister, R.A. 146 Zhang, G. 357 Zhang, J.S. 262 Zhang, J.Y. 389 Zhang, Q. 448 Zhang, T. 548 Zhang, X.G. 262 Zhao, M.H. 500 Zhong, Xia 188 Zhou, W. 142 Zhuo, J. 5 Zhuo, Jiachen 38 Zimmerman, C.L. 61 Zirzow, A.C. 130 Zoghi-Moghadam, M. 444 Zoshk, M.R. Yousefi 258
Keyword Index
3D culture 422 3-D matrix 404 3D scanning 528 4-aminobenzenethiol 313 6-link three-dimensional gait model 122 β1 - integrin signaling 422 A acceleration 1 acceleration planning 118 acetabular shell 154 Actin 74 Actin cross-linking protein 74 Acute kidney injury 477 Adaptive Noise Cancellation (ANC) 243 Adaptive Regulatory T cells 209 adenine 301 affinity distribution 340 AFM 71 age related macular degeneration 344 aggregate 146 aggregates 232 Aging 138 Airflow Perturbation Device 247, 397 Alzheimer’s disease 204 Amputation 365 Amputeev analytical modeling 440 analytical ultracentrifugation 340 Anesthesia 262 Aneurysm 444 ANSYS 397 antibody 232 aorta 452 apoE 204 ARMA 262 Arterial 329 artificial joint fluid 161 ARX 262 Ataxia 289 Atherosclerosis 434 Atomic Force Microscope (AFM) 500 Auditory system 53, 262 autofluorescence imaging 344 average frequency 489
B bacterial biofilms 426 bandwidth reduction 282 BCI 289 Bilateral Sagittal Split Osteotomy Finite Element Method and surgery 516 biochip application 293 BioEngineering 270 biofilm optical density 426 biological heart valve 372 Biomaterial 418 biomaterials 332 biomechanical testing 274 biomedical 56, 414 Biomedical Signal Processing 243 bioMEMS 426 Biomimetics 183 biopolymer 401 biosensor 305 biosimulant 274 Blast 22, 26 blast loading 31 blast overpressure 26 blood flow 321 blood glucose 67 Blood oxygen saturation 336 blood volume fraction 336 Bone 520 bovine calf serum (BCS) 161 Brain 22 Brain fibers 13 brain response 31 brain stimulation 97 brain structure mapping 493 Breast cancer 466 C Cancer 142, 213, 452, 552 Cancer Resistance 213 Carbohydrates 317 carbon nanotubes 97 Carbon Nanotubes (CNTs) 500 cardiac arrest 101 cardiac electrophysiology 254 cardiotoxicity 126 cavitation 34
CCD multi-element sensors 85 CD8+ T-cells 196 cell lines 228 cell proliferation 188 Cell Sorting 540 Cellular physiologic function measurements 134, 138 Cellular uptake 224 CFD 397 CFX 397 charge storage capacity 97 chemotherapy 126 child 544 children 49 Chitin 183, 404 Chitosan 171, 401, 404 cholesterol 204 Cholesterol Efflux 434 chromatin 353 circulating tumor cells 568 classification 353 clustering 92 CMOS 282 coating 146 cochlear implants 49 Collagen 183, 520 colloid 146 composite hydrogel 188 compressive modulus 274 Computation Modeling 389 Concanavalin A (ConA) 317 Control 192 Convex Programming 524 Coordination 524 COPD 201, 251 cornea epithelia 175 Correlation 105 cortex 45 cortical physiology 53 Cotton 404 Crustacean 418 Crutch gait for weak hip abductor 122 CSF flow 440 curriculum content 65 Cycle-by-cycle Fourier Series Analysis (CFSA) 243 Cytotoxicity 224
582 D Darcy permeability 440 decellular nerve 430 defibrillation 254 Delay Differential Equations 209 delirium 489 delivery vehicle 130 dementia 489 dendritic silver 313 dental caries 463 Dental prosthesis 376 Dentistry 516 diabetes 67 Diabetic Mellis (DM) 409 differentiation 422 Diffuse Reflectance 105 diffuse reflectance spectroscopy 336 diffusion tensor imaging 5, 38 Digital Imaging and Communication in Medicine (DICOM) 459 Discrete Wavelet Transform (DWT) 243 DNA baskets 130 DNA extraction 297 Doxorubicin 224 Doxorubicin 228 Drug delivery 270 drug elution 240 Drug resistance 552 dynamic aorta test 380 Ebola Virus 196 edge detection 361 education 56, 65 E EEG 289, 489 EEG Signals 258 Effective Connectivity 481 Electrical burns 504 electrical conductance 286 Electrical Stimulation 270, 389 electrically 414 Electro interstitial scan (EIS) system 134, 138 electrodeposition 401 electroless deposition 313 Electromyogram 278 Electron paramagnetic resonance 466 Electron tomography 357 electrospray-differential mobility analyzer 232 Electrospun polyurethane 180 Element 18 elemental mapping 357 eleven vessel occlusion ischemia model 221 emulsion lyophilization 188 endocytosis 236
Keyword Index energy-filtered imaging 357 engineering 56 Epilepsy 101, 105 Epileptic 278 equivalent 414 ES-DMA 146 Ethanol vapor detection 309 exercise 67 Explosion 1 extraordinary optical transmission 572 F Failure Analysis 165 Fatigue 258 femoral head 154 femur fracture reduction 118 Fiber Bragg Grating (FBG) 317 Finite 18 Finite Element 397, 456 Finite Element Analysis 444, 470 Finite Element Method 376, 448 Finite Element Model 437 Finite Mixture Model (FMM) 393 Flow Cytometry 560 fluorescent indicator dilution 126 fMRI 481 Footwear Foams 409 force sharing 109 force-distance analysis 71 FPGA 473 frame potential 496 frames 496 freshmen 56 functional Magnetic Resonance Imaging (fMRI) 393 fuzzy logic 118 G galvanic displacement 313 gelation 401 Genetic Algorithms 389 Geometrically focused transducer 556 giant magnetoresistance sensor 293 Glutamate 221 Glutathione patch 134 gold 146 GRIN lens 477 gum arabic 171 H hand disinfection 92 HDL 434 Head 18, 22 Head Impact 440, 437, 456 Head Injury 444 Hearing 270 heart valve performance 372 Heart valves 180
heat transfer 376 heat treated 232 hemolysis 548 HER2 466 HER-2 antibody 224 HIF-1α 332 high intensity focused ultrasound 556 histology 348 HIV/AIDS 192 Human Balance 524 Human Brain 448 humidity 286 hydrogel 401 hydrogels 175 hyperelasticity 380 Hyperfoam 409 Hypoxia 332 I image analysis 564 image processing 470 image registration 473 image segmentation 92 image-based meshing 456, 470 Imaging 142, 329, 466, 504 imaging-based modeling 150 immunofluorescence 564 immunohistochemistry 564 implant safety 512 implantable electrodes 97 Implants 270, 385 impulse oscillometry 201, 251 in vitro implant 161 in vivo 5, 336 Indocyanine Green 228 Injury 22 interventional device safety 150 intonation 49 Intracranial Pressure 81 intraoral X-ray 463 inverse dynamics 122 Iran 258 iridium oxide 97 isolated heart 126 K kernel methods 496 kinematics 452 Kinetics 365 Knee Replacement 165 L label-free 305 label-free detection 572 Laparoscope 477 Latex condoms 89 Lectins 317 lesion 556
Keyword Index LifeWave 134, 138 Linear Viscoelastic 448 Lipid Metabolism 434 Liposomes 466 local histogram 348 long-latency 101 LRP-1 204 lubricants 89 M macular pigment 344 Magnetic bead 293 Magnetic Nanoparticles 142 magnetic resonance imaging 5, 38, 485 Magnetoencephalography 45 mass loss 171 Material properties 448 material properties of aorta 380 math modeling 204 Mathematical Modeling 196 MATLAB 282 matrix morphology 240 matrix structure 240 mechanical advantage 368 mechanical circulatory support 548 mechanical heart valve 372 mechanical strength 188 mechanism 452 Mechanosensing 74 medical imaging 512 Medical Imaging Informatics 459 Medicine 473 Mesangial cell 71 mesh generation 470 methacrylate 175 Microbiology 418 microdialysis 77 Microenvironmental Control 213 micro-flow imaging 232 Microfluidic 297 Microfluifdics 301, 305, 325, 426, 540 Micro-total analysis systems 297 miniaturization 77 Model 18 Modeling 56, 192, 262, 444 modern biology 65 modulation 49 modulations 45 Monotonous exercises 258 motor control 113 Motor Network 481 MR-Elastography 31 Multi-Aperture Camera 329 multi-digit grasp 109 Multi-species system 309 multi-spectral 344
583 multispectral 564 Multispectral analysis 496 Muscle Electrical Impedance 508 Muscle State 508 Myography 508 Myosin 74 N nanohole array sensor 572 nanoparticles 130 nanoparticles 158 Nanotechnology 134, 142 Nanotechnology, Carnosine patch 138 Near-field Optics 500 nerve conduction 430 neural monitoring 101 Neural Networks 192 Neural Stem Cells 422 neuron 1 neurosurgery 493 NFκB 560 nimodipine 221 Nitroxides 466 noise 282 non-invasive 544 Non-invasive measurement 31, 81 nonlinear visco-elastic material 13 nuclear structure 353 Nuss procedure 528 O objective evaluation of hand washing 92 occlusion 348 Ocular Fluorometry 85 Optical biosensor 317, 340 optical coherence tomography 477, 485 optical imaging 485 Optical Spectroscopy 105 Optimal Control 524 Optimal transportation 353 Optimization 389 oral drug delivery 236 Ordinary Differential Equations 213, 552 Osmosis 540 outcome 38 Oxygen 142 oxygen control 325 Oxygen saturation 321, 329 oxygen sensing 332 oxygen sensor 325 P PAMAM dendrimers 236 parameter estimation 201, 251
Parcellation 393 Path Coefficients 481 pathology 353 patient-specific modeling 456 PDMS 325 Peanut agglutinin (PNA) 317 pectus excavatum 528 pelvic rotation and obliquity 122 pelvic tilt 122 peripheral nerve 430 Permeability 126 phantoms 414, 532, 536 Photoplethysmography (PPG) 243 PHRs 217 Picture Archiving and Communication System (PACS) 459 PLGA 224, 240 PLGA nanoparticles 228 poly(3,4,-ethylenedioxythiophene) 430 poly(ethylene glycol) 175 Polyethylene Wear 165 poly-lysine 404 polymer elongation 532 portable 489 preclinical evaluation 150 Prediction 67 predictive models 512 Prehension 368 pressure response 456 Primary Immune Response 209 principal component analysis 9 prognosis 101 Progressive Wear Damage Assessment 165 Prosthesis 365 prosthetics 274 protein microarray 572 proton magnetic resonance spectroscopy 5 Pulse Oximetry 329 Q Quantitative Imaging 560 Quasi-Linear Viscoelastic 448 R RAAS 71 radiofrequency ablation 485 Radiology e-Learning 459 Raman Spectroscopy 500 Ratiometric sensor 309 Rational Vaccine Design 196 RCT 434 real-time 278 Real-time imaging 71 real-time monitoring 221 refractive index 305
584 regularization 340 rehabilitation robotics 113 Rehabilitation Robotics 266 release rate 240 residence time 548 Resistivity 385 Respiratory impedance 201, 251 Respiratory mechanics 247 Respiratory resistance 247 respiratory system model 201, 251 resultant force 109 Retina 344, 385 Retinal imaging 361, 496 Retinal Venous Pulsations 81 Rhodamine-6G aggregates 309 ring resonator 305 Road Accident 258 Robot-assisted surgery 118 S safety margin 109 SAH 301 sampling 77 scanning transmission electron microscopy 357 SDSR 463 segmentation 564 sensor 286 Sensors 142 separation systems 297 shape analysis 528 shear stress 548 shock tube 26 shock wave 26 sickle cell anemia 544 Sickle cell hemoglobin 532, 536 Signal Transduction 560 Simulation 26, 278 siRNA 130 size distribution 340 skin 274 skull-brain interactions 9 slippage 89 Slit-Lamp 85 Solubilization 520 somatosensory evoked potential 101 sonication 556 sparsity 496 spectral response 504 spectroscopy 321
Keyword Index Spike detection 282 SPR 572 stem cells 180 Stereotactic method 493 Sterilization 418 Stimulation, Damage 385 stray field 293 stress relaxation 89 Stroke 266 stroke recovery 113 Structural Equation Modeling 481 Structural MRI 493 Structural MRI image analysis 493 Subarachnoid Space 440 surface enhanced Raman spectroscopy (SERS) 313 Surface functionalization 317 Surface plasmon-coupled emission 309 Surface-enhanced Raman spectroscopy (SERS) 301 Surface-Enhanced Raman Spectroscopy (SERS) 500 surgical planning 528 Surrogate 22 swelling 89, 171 T Tagged MR imaging 9 TBI 9, 18, 31, 34 teaching for success 65 Teleradiology 459 Temperature 409 tetracycline 240 Ti, size-shape descriptors 158 tight junctions 236 Time dependent 105 tinnitus 53 Tip-Enhanced Raman Spectroscopy (TERS) 500 tissue 414 titanium oxide 286 torque production 368 Total hip arthroplasty (THA) 154 TRA 452 Transcranial Magnetic Stimulation 53, 266 transport 236 traumatic aorta rupture 380 Traumatic Brain Injury 1, 26, 38, 437
traumatic brain injury 5 treatment 53 treatment planning 556 U UHMWPE 158 ultra high molecular weight polyethylene (UHMWPE) 154 undergraduate curriculum 65 V Vagus nerve 278 variational approach 361 Vascular biomechanics 150 vascular characterization 150 vasculopathy 544 vaso-occlusion 536 velocity planning 118 vasculopathy 544 vaso-occlusion 536 velocity planning 118 ventricular assist device 548 ventricular fibrillation 254 viscoelasticity 380 viscosity 161 voice pitch 49 vasculopathy 544 vaso-occlusion 536 velocity planning 118 ventricular assist device 548 ventricular fibrillation 254 viscoelasticity 380 viscosity 161 voice pitch 49 vortex ring formation 372 vortex ring propagation speed 372 W water 286 wave propagation 13 wavelet 361 wavelet transformation 336 wear 161 Wear-debris particles 158 Web 2.0 459 Web design 217 Web usability 217 Web-based PACS 459 White Blood Cells 540