IFMBE Proceedings Series Editor: R. Magjarevic
Volume 35
The International Federation for Medical and Biological Engineering, IFMBE, is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life. Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational.

The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, healthcare technology and management. Through its 60 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Herbert Voigt; Vice-President: Ratko Magjarevic; Past-President: Makoto Kikuchi; Treasurer: Shankar M. Krishnan; Secretary-General: James Goh
http://www.ifmbe.org
Previous Editions:
IFMBE Proceedings BIOMED 2011, "5th Kuala Lumpur International Conference on Biomedical Engineering 2011", Vol. 35, 2011, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2011, "15th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 34, 2011, Aalborg, Denmark, CD
IFMBE Proceedings CLAIB 2011, "V Latin American Congress on Biomedical Engineering CLAIB 2011", Vol. 33, 2011, Habana, Cuba, CD
IFMBE Proceedings SBEC 2010, "26th Southern Biomedical Engineering Conference SBEC 2010, April 30 – May 2, 2010, College Park, Maryland, USA", Vol. 32, 2010, Maryland, USA, CD
IFMBE Proceedings WCB 2010, "6th World Congress of Biomechanics (WCB 2010)", Vol. 31, 2010, Singapore, CD
IFMBE Proceedings BIOMAG2010, "17th International Conference on Biomagnetism Advances in Biomagnetism – Biomag2010", Vol. 28, 2010, Dubrovnik, Croatia, CD
IFMBE Proceedings ICDBME 2010, "The Third International Conference on the Development of Biomedical Engineering in Vietnam", Vol. 27, 2010, Ho Chi Minh City, Vietnam, CD
IFMBE Proceedings MEDITECH 2009, "International Conference on Advancements of Medicine and Health Care through Technology", Vol. 26, 2009, Cluj-Napoca, Romania, CD
IFMBE Proceedings WC 2009, "World Congress on Medical Physics and Biomedical Engineering", Vol. 25, 2009, Munich, Germany, CD
IFMBE Proceedings SBEC 2009, "25th Southern Biomedical Engineering Conference 2009", Vol. 24, 2009, Miami, FL, USA, CD
IFMBE Proceedings ICBME 2008, "13th International Conference on Biomedical Engineering", Vol. 23, 2008, Singapore, CD
IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological Engineering", Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, "4th Kuala Lumpur International Conference on Biomedical Engineering", Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, "14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, "7th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, "IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solution for Latin America Health", Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings Vol. 35
Noor Azuan Abu Osman, Wan Abu Bakar Wan Abas, Ahmad Khairi Abdul Wahab, and Hua-Nong Ting (Eds.)
5th Kuala Lumpur International Conference on Biomedical Engineering 2011 (BIOMED 2011) 20–23 June 2011, Kuala Lumpur, Malaysia
Editors

Assoc. Prof. Dr. Noor Azuan Abu Osman
University of Malaya, Department of Biomedical Engineering, Faculty of Engineering, 50603 Kuala Lumpur, Malaysia
E-mail: [email protected]

Dr. Ahmad Khairi Abdul Wahab
University of Malaya, Department of Biomedical Engineering, Faculty of Engineering, 50603 Kuala Lumpur, Malaysia
E-mail: [email protected]

Prof. Dr. Ir. Wan Abu Bakar Wan Abas
University of Malaya, Department of Biomedical Engineering, Faculty of Engineering, 50603 Kuala Lumpur, Malaysia
E-mail: [email protected]

Dr. Hua-Nong Ting
University of Malaya, Department of Biomedical Engineering, Faculty of Engineering, 50603 Kuala Lumpur, Malaysia
E-mail: [email protected]
ISSN 1680-0737
ISBN 978-3-642-21728-9
e-ISBN 978-3-642-21729-6
DOI 10.1007/978-3-642-21729-6

Library of Congress Control Number: 2011930406

© International Federation for Medical and Biological Engineering 2011

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE)

Typesetting & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India

Printed on acid-free paper

springer.com
About IFMBE
The International Federation for Medical and Biological Engineering (IFMBE) was established in 1959 to provide medical and biological engineering with a vehicle for international collaboration in research and practice of the profession. The Federation has a long history of encouraging and promoting international cooperation and collaboration in the use of science and engineering for improving health and quality of life.

The IFMBE is an organization with membership of national and transnational societies and an International Academy. At present there are 52 national members and 5 transnational members representing a total membership in excess of 120,000 worldwide. An observer category is provided to groups or organizations considering formal affiliation. Personal membership is possible for individuals living in countries without a member society. The International Academy includes individuals who have been recognized by the IFMBE for their outstanding contributions to biomedical engineering.

Objectives

The objectives of the International Federation for Medical and Biological Engineering are scientific, technological, literary, and educational. Within the field of medical, clinical and biological engineering its aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. In pursuit of these aims the Federation engages in the following activities: sponsorship of national and international meetings, publication of official journals, cooperation with other societies and organizations, appointment of commissions on special problems, awarding of prizes and distinctions, establishment of professional standards and ethics within the field, as well as other activities which in the opinion of the General Assembly or the Administrative Council would further the cause of medical, clinical or biological engineering.
It promotes the formation of regional, national, international or specialized societies, groups or boards, the coordination of bibliographic or informational services and the improvement of standards in terminology, equipment, methods and safety practices, and the delivery of health care. The Federation works to promote improved communication and understanding in the world community of engineering, medicine and biology.

Activities

Publications of IFMBE include: the journal Medical and Biological Engineering and Computing, the electronic magazine IFMBE News, and the Book Series on Biomedical Engineering. In cooperation with its international and regional conferences, IFMBE also publishes the IFMBE Proceedings Series. All publications of the IFMBE are published by Springer Verlag. The Federation has two divisions: Clinical Engineering and Health Care Technology Assessment.

Every three years the IFMBE holds a World Congress on Medical Physics and Biomedical Engineering, organized in co-operation with the IOMP and the IUPESM. In addition, annual, milestone and regional conferences are organized in different regions of the world, such as Asia Pacific, Europe, the Nordic-Baltic and Mediterranean regions, Africa and Latin America.

The Administrative Council of the IFMBE meets once a year and is the steering body for the IFMBE. The council is subject to the rulings of the General Assembly, which meets every three years. Information on the activities of the IFMBE can be found on the web site at: http://www.ifmbe.org.
Foreword
It is with great pleasure that we present to you a collection of over 200 high-quality technical papers from more than 10 countries at the 5th Kuala Lumpur International Conference on Biomedical Engineering (BIOMED 2011), held in conjunction with the 8th Asian Pacific Conference on Medical and Biological Engineering (APCMBE 2011). This international conference is jointly organized by the Department of Biomedical Engineering, University of Malaya, Malaysia, and the Society of Medical and Biological Engineering, Malaysia (MSMBE).

The papers cover various topics of biomedical engineering, such as artificial organs, bioengineering education, bioinformatics, biomaterials, biomechatronics, biomechanics, bioinstrumentation, bionanotechnology, biomedical and physiological modelling, biosignal processing, clinical engineering, bioMEMS, medical imaging, prosthetics and orthotics, and tissue engineering. Together they represent current research being carried out across the various disciplines of biomedical engineering, including new and innovative work in emerging areas.

The conference program highlights five plenary talks by prominent researchers and academicians from different areas: Professor Dr. Michael R. Neuman (Michigan Technological University, Michigan, USA), Professor Dr. Walter Herzog (University of Calgary, Canada), Professor Dr. Xiao-Ping Li (National University of Singapore, Singapore), Professor Dr. Alberto Avolio (Macquarie University, Sydney, Australia), and Professor Dr. Arthur F.T. Mak (The Hong Kong Polytechnic University, Hong Kong). In addition, Prof. Dr. James Goh (National University of Singapore, Singapore), Prof. Dr. Ichiro Sakuma (University of Tokyo, Japan) and Prof. Dan Bader (University of Southampton, UK) have been invited to give invited talks.

We hope that you will find enlightening ideas in these papers for your research and study. Happy reading.

Assoc. Prof. Dr. Noor Azuan Abu Osman
Chairperson, Organising Committee, BIOMED 2011

Prof. Dr. Ir. Wan Abu Bakar Wan Abas
President, Society of Medical and Biological Engineering, Malaysia (MSMBE)
Conference Details
Name: 5th Kuala Lumpur International Conference on Biomedical Engineering; 8th Asian Pacific Conference on Medical and Biological Engineering

Short Name: BioMed 2011; APCMBE 2011

Venue: Kuala Lumpur, Malaysia, 20–23 June 2011

Proceedings Editors: Noor Azuan Abu Osman, Wan Abu Bakar Wan Abas, Ahmad Khairi Abdul Wahab, Hua-Nong Ting

International Advisory Board: Dan Bader (United Kingdom), Herbert F. Voigt (USA), Ichiro Sakuma (Japan), James Goh Cho Hong (Singapore), John Webster (USA), Jos A.E. Spaan (The Netherlands), Joseph D. Bronzino (USA), Makoto Kikuchi (Japan), Marc Madou (USA), Metin Akay (USA), Michael R. Neuman (USA), Nikola Kasabov (New Zealand), Ratko Magjarevic (Croatia), Shankar Krishnan (USA), Walter Herzog (Canada)

Treasurer: Siew-Cheok Ng, PhD
Organized by: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia

Co-Organized by: Society of Medical and Biological Engineering, Malaysia (MSMBE)

Supported by: University of Malaya; Ministry of Higher Education, Malaysia; Tourism Malaysia

IFMBE Asia-Pacific Working Group — Chair: Ichiro Sakuma; Vice Chair: Kang-Pin Lin; Secretary: Siew-Lok Toh
Scientific and Technical Programme Committee: Wan Abu Bakar Wan Abas, Hua-Nong Ting, Salmah Karman, Norazmira Md Noh, Mohd Shuhaibul Fadly Mansor

Protocol Committee: Fatimah Ibrahim, Nahrizul Adib Kadri
Organizing Committee

Chairperson: Noor Azuan Abu Osman

Vice-Chairperson: Ahmad Khairi Abdul Wahab, Belinda Murphy

Secretary: Norita Mohd Zain, Nahrizul Adib Kadri, Ummi Syahirah Md Ali

Special Task Committee: Norita Mohd Zain, Nahrizul Adib Kadri, Suraya Abdul Rahman, Siew Cheok Ng, Lim Einly

Logistic Committee: Ahmad Nazmi Ahmad Fuad, Suraya Abd Rahman
Social Program Committee: Kama Bistari Muhammad

Facilities Committee: Fadzli Abu Bakar

Student Committee: Ummi Syahirah Md Ali

Sponsors Committee: Belinda Murphy, Nur Azah Hamzaid, Raha Mat Ghazali, Elia Ameera Ali
Promotion Committee: Ahmad Khairi Abdul Wahab, Wan Azhar Wan Mohd Ibrahim, Nasrul Anuar Abd Razak, Mohd Faiz Mohamed Said

Tutorial/Workshop Committee: Hua-Nong Ting, Mohd Shuhaibul Fadly Mansor

Members: Herman Shah Abdul Rahman, Illida Mohd Nawi, Noranida Ariffin, Fairus Hanum Mohamad, Mohd Hanafi Zainal Abidin, Adhli Iskandar Putera Hamzah, Mohd Firdaus Mohd Jamil, Mohd Asni Mohamad, Hafizuddin Asman Razalee, Rahimi Abdul Manaf, Ahmad Firdaus Omar, Noor Aini Dochik, Norhazura Abdullah, Neamah Suhaimi, Mohd Faiz Mohd Mokhtar, Mohd Fahmi Rusli
Table of Contents
Plenary

Cardiovascular Modeling: Physiological Concepts and Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Avolio
1
Effect of Pain Perception on the Heartbeat Evoked Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X.P. Li
2
In-vivo Cartilage Mechano-Biology: How to Make Progress in Osteoarthritis Research . . . . . . . W. Herzog, T.R. Leonard, Z. Abusara, S.K. Han, A. Sawatsky
3
Sensors and Instrumentation to Meet Clinical Needs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.R. Neuman
7
Tissues Injuries from Epidermal Loadings in Prosthetics, Orthotics, and Wheeled Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.F.T. Mak
8
Invited Paper

Computer Aided Surgery for Minimally Invasive Therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ichiro Sakuma
9
Role of Mechanical Loading in the Aetiology of Deep Tissue Injury . . . . . . . . . . . . . . . . . . . . . . . . . . C. Oomens, S. Loerakker, K. Nicolay, D. Bader
10
Tissue Engineering Approaches to the Treatment of Spinal Disorders . . . . . . . . . . . . . . . . . . . . . . . . J.C.H. Goh, H.K. Wong, S. Abbah, E.Y.S. See, S.L. Toh
11
Artificial Organs

Study of the Optimal Circuit Using Simultaneous Apheresis with Hemodialysis . . . . . . . . . . . . . . A. Morisaki, M. Iwahashi, H. Nakayama, S. Yoshitake, S. Takezawa
12
Bioengineering Education

Biomedical Engineering Education under European Union Support . . . . . . . . . . . . . . . . . . . . . . . . . . M. Cerny, M. Penhaker, M. Gala, B. Babusiak

16

OBE Implementation and Design of Continual Quality Improvement (CQI) for Accreditation of Biomedical Engineering Program University of Malaya . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Karman, K. Hasikin, H.N. Ting, S.C. Ng, A.K. Abdul Wahab, E. Lim, N.A. Hamzaid, W.A.B. Wan Abas

20
Ocular Lens Microcirculation Model, a Web-Based Bioengineering Educational Tool . . . . . . . . . S.E. Vaghefi
25
Bioinformatics

Analysis of Skin Color of Malaysian People . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L.M. Abdulhadi, H.L. Mahmoud, H.A. Mohammed
29
Comparison of Spectrometer, Camera, and Scanner Reproduction of Skin Color . . . . . . . . . . . . . H.L. Mahmoud, L.M. Abdulhadi, A. Mahmoud, H.A. Mohammed
33
Context and Implications of Blood Angiogenin Level Findings in Healthy and Breast Cancer Females of Malaysia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Piven, Y.A. Manaf, M.A. Abdullah
37
Detection of Acute Leukaemia Cells Using Variety of Features and Neural Networks . . . . . . . . A.S. Abdul Nasir, M.Y. Mashor, H. Rosline
40
Biomaterials

A Novel Phantom for Accurate Performance Assessment of Bone Mineral Measurement Techniques: DEXA and QCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Emami, H. Ghadiri, M.R. Ay, S. Akhlagpour, A. Eslami, P. Ghafarian, S. Taghizadeh
47
Calcination Effects on the Sinterability of Hydroxyapatite Bioceramics . . . . . . . . . . . . . . . . . . . . . . C.Y. Tan, R. Tolouei, S. Ramesh, B.K. Yap, M. Amiriyan
51
Chitin Fiber Reinforced Silver Sulfate Doped Chitosan as Antimicrobial Coating . . . . . . . . . . . . C.K. Tang, A.K. Arof, N. Mohd Zain
55
Effects of Joule Heating on Electrophoretic Mobility of Titanium Dioxide (TiO2 ), Escherichia Coli and Staphylococcus Aureus (Live and Dead) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.F. Lee, M. Misran, W.A.T. Wan Abdullah
60
Electrochromic Property of Sol-Gel Derived TiO2 Thin Film for pH Sensor . . . . . . . . . . . . . . . . . . J.C. Chou, C.H. Liu, C.C. Chen
69
Failure Analysis of Retrieved UHMWPE Tibial Insert in Total Knee Replacement . . . . . . . . . . S. Liza, A.S.M.A. Haseeb, A.A. Abbas, H.H. Masjuki
73
Influence of Magnesium Doping in Hydroxyapatite Bioceramics Sintered by Short Holding Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Ramesh, R. Tolouei, C.Y. Tan, M. Amiriyan, B.K. Yap, J. Purbolaksono, M. Hamdi
80
In-vitro Biocompatibility of Folate-Decorated Star-Shaped Copolymeric Micelle for Targeted Drug Delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N.V. Cuong, Y.L. Li, M.F. Hsieh
84
Mercury (II) Removal Using CNTS Grown on GACs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K.A. Nassereldeen, S.E. Mirghami, N.W. Salleh
88
Optical Properties Effect of Cadmium Sulfide Quantum Dots towards Conjugation Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.A. Shamsudin, N.F. Omar, S. Radiman
92
Synthesis of Hydroxyapatite through Dry Mechanochemical Method and Its Conversion to Dense Bodies: Preliminary Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Adzila, I. Sopyan, M. Hamdi
97
The Effect of Ball Milling Hours in the Synthesizing Nano-crystalline Forsterite via Solid-State Reaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K.L. Samuel Lai, C.Y. Tan, S. Ramesh, R. Tolouei, B.K. Yap, M. Amiriyan

102

The Effect of Titanium Dioxide to the Bacterial Growth on Lysogeny Broth Agar . . . . . . . . . . . N.H. Sabtu, W.S. Wan Zaki, T.N. Tengku Ibrahim, M.M. Abdul Jamil
105
Thermal Analysis on Hydroxyapatite Synthesis through Mechanochemical Method . . . . . . . . . . A.S.F. Alqap, S. Adzila, I. Sopyan, M. Hamdi, S. Ramesh
108
Biomechatronics

Continuous Passive Ankle Motion Device for Patient Undergoing Tibial Distraction Osteogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.T. Ang, N.A. Hamzaid, Y.P. Chua, A. Saw
112
Musculoskeletal Model of Hip Fracture for Safety Assurance of Reduction Path in Robot-assisted Fracture Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Joung, S. Syed Shikh, E. Kobayashi, I. Ohnishi, Ichiro Sakuma
116
Speed Based Surface EMG Classification Using Fuzzy Logic for Prosthetic Hand Control . . . . S.A. Ahmad, A.J. Ishak, S.H. Ali
121
Biomechanics

Activity of Upper Body Muscles during Bowing and Prostration Tasks in Healthy Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.K.M. Safee, W.A.B. Wan Abas, N.A. Abu Osman, F. Ibrahim
125
Analysis of the Effect of Mechanical Properties on Stress Induced in Tibia . . . . . . . . . . . . . . . . . . B. Sepehri, A.R. Ashofteh-Yazdi, G.A. Rouhi, M. Bahari-Kashani
130
Comparative Studies of the Optimal Airflow Waveforms and Ventilation Settings under Respiratory Mechanical Loadings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.L. Lin, S.J. Yeh, H.W. Shia
134
Development of Inexpensive Motion Analysis System–Preliminary Findings . . . . . . . . . . . . . . . . . Y.Z. Chong, J. Yunus, K.M. Fong, Y.J. Khoo, J.H. Low
139
Diabetic Foot Syndrome-3-D Pressure Pattern Analysis as Compared with Normal Subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.S. Ranu, A. Almejrad
143
Effects of Arterial Longitudinal Tension on Pulsatile Axial Blood Flow . . . . . . . . . . . . . . . . . . . . . . Y.Y. Lin Wang, W.K. Sze, J.M. Chen, W.K. Wang
148
Effect of Extracellular Matrix on Smooth Muscle Cell Phenotype and Migration . . . . . . . . . . . . T. Ohashi, Y. Hagiwara
151
Effects of the Wrist Angle on the Performance and Perceived Discomfort in a Long Lasting Handwriting Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N.Y. Yu, S.H. Chang
153
Estimation of Muscle Force with EMG Signals Using Hammerstein-Wiener Model . . . . . . . . . . . R. Abbasi-Asl, R. Khorsandi, S. Farzampour, E. Zahedi
157
Hip 3D Joint Mechanics Analysis of Normal and Obese Individuals’ Gait . . . . . . . . . . . . . . . . . . . . M.H. Mazlan, N.A. Abu Osman, W.A.B. Wan Abas
161
Impact Load and Mechanical Respond of Tibiofemoral Joint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.A. Oshkour, N.A. Abu Osman, M.M. Davoodi, M. Bayat, Y.H. Yau, W.A.B. Wan Abas
167
Investigation of Lung Lethargy Deformation Using Finite Element Method . . . . . . . . . . . . . . . . . . M.K. Zamani, M. Yamanaka, T. Miyashita, R. Ramli
170
Knee Energy Absorption in Full Extension Landing Using Finite Element Analysis . . . . . . . . . . M.M. Davoodi, N.A. Abu Osman, A.A. Oshkour, M. Bayat
175
Knee Joint Stress Analysis in Standing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.A. Oshkour, N.A. Abu Osman, M.M. Davoodi, M. Bayat, Y.H. Yau, W.A.B. Wan Abas
179
Mechanical Behavior of in-situ Chondrocyte at Different Loading Rates: A Finite Element Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.K. Moo, N.A. Abu Osman, B. Pingguan-Murphy, S.K. Han, S. Federico, W. Herzog
182
Posture and EMG Evaluation of Assist Functions of Full-Body Suits . . . . . . . . . . . . . . . . . . . . . . . . . T. Kitawaki, Y. Inoue, S. Doi, A. Egawa, A. Shiga, T. Iizuka, M. Kawakami, T. Numata, H. Oka
187
Posture Control and Muscle Activation in Spinal Stabilization Exercise . . . . . . . . . . . . . . . . . . . . . . Y.T. Ting, L.Y. Guo, F.C. Su
190
Preliminary Findings on Anthropometric Data of 19-25 Year Old Malaysian University Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y.Z. Chong, X.J. Leong
193
Quantification of Patellar Tendon Reflex by Motion Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L.K. Tham, N.A. Abu Osman, K.S. Lim, B. Pingguan-Murphy, W.A.B. Wan Abas
197
Quantitative Analysis of the Human Ankle Viscoelastic Behavior at Different Gait Speeds . . . Z. Safaeepour, A. Esteki, M.E. Mousavi, F. Tabatabaei
200
Response of the Human Spinal Column to Loading and Its Time Dependent Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.S. Ranu, A.S. Bhullar, A. Zakaria
203
Shoulder’s Modeling via Kane’s Method: Determination of Torques in Smash Activity . . . . . . . F.H.M. Ariff, A.S. Rambely, N.A.A. Ghani
207
Simulation of Brittle Damage for Fracture Process of Endodontically Treated Tooth . . . . . . . . . S.S.R. Koloor, J. Kashani, M.R. Abdul Kadir
210
Stress Distribution Analysis on Semi Constrained Elbow Prosthesis during Flexion and Extension Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Heidari, M. Rafiq Bin Dato Abdul Kadir, A. Fallahiarezoodar, M. Alizadeh
215
Stress Distribution of Dental Posts by Finite Element Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.H. Liu, G.H. Jhong
219
Temporal Characteristics of the Final Delivery Phase and Its Relation to Tenpin Bowling Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Razman, W.A.B. Wan Abas, N.A. Abu Osman, J.P.G. Cheong
222
The Biomechanics Analysis for Dynamic Hip Screw on Osteoporotic and Unstable Femoral Fracture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.F. Wu, K.A. Lai, M.T. Huang, H.S. Chen, K.C. Chung, F.S. Yang
225
Hemodynamic Activities of Motor Cortex Related to Jaw and Arm Muscles Determined by Near Infrared Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L.M. Hoa, .N. Huan, N.V. Hoa, D.D. Thien, T.Q.D. Khoa, V.V. Toi
229
Time-Dependent EMG Power Spectrum Parameters of Biceps Brachii during Cyclic Dynamic Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Thongpanja, A. Phinyomark, P. Phukpattaranont, C. Limsakul

233

Transcutaneous Viscoelastic Properties of Brain through Cranial Defects . . . . . . . . . . . . . . . . . . . . H. Nagai, D. Takada, M. Daisu, K. Sugimoto, T. Miyazaki, Y. Akiyama
237
Two Practical Strategies for Developing Resultant Muscle Torque Production Using Elastic Resistance Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.J. Aboodarda, A. Yusof, N.A. Abu Osman, F. Ibrahim
241
Biomedical Instrumentation

A Novel Body Temperature Measuring and Data Transmitting System Using Bio-Sensors and Real-Time Transmission Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Manazir Hussain, Ijlal Shahrukh Ateeq, Kamran Hameed, Aisha Tahir, S.M. Omair, S. Imran Alam, Sana H. Khan

245

A Quantitative Study of Gastric Activity on Feeding Low and High Viscosity Meals . . . . . . . . . K. Takahashi, A. Kobayashi, H. Inoue
249
A Study of Extremely Low Frequency Electromagnetic Field (ELF EMF) Exposure Levels at Multi Storey Apartment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Tukimin, W.N.L. Mahadi
253
Accuracy Improvement for Low Perfusion Pulse Oximetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.M. Cho, N.H. Kim, H.S. Seong, Y.S. Kim
258
Application of a Manometric Technique to Verify Nasogastric Tube Placement in Intubated, Mechanically Ventilated Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.S. Chen, K.C. Chung, S.H. Yang, T.H. Li, H.F. Wu
262
Assessment of Diabetics with Various Degrees of Autonomic Neuropathy Based on Cross-Approximate Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.C. Chiu, S.J. Yeh, T.Y. Li
266
Automated Diagnosis of Melanoma Based on Nonlinear Complexity Features . . . . . . . . . . . . . . . . N. Karami, A. Esteki
270
Bowel Ischemia Monitoring Using Rapid Sampling Microdialysis Biosensor System . . . . . . . . . . E.P. Córcoles, S. Deeba, G.B. Hanna, P. Paraskeva, M.G. Boutelle, A. Darzi
275
Changes in Cortical Blood Oxygenation Responding to Arithmetical Tasks and Measured by Near-Infrared Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N.N.P. Trinh, N.H. Binh, D.D. Thien, T.Q.D. Khoa, V.V. Toi
279
Color Coded Heart Rate Monitoring System Using ANT+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.K. Che Harun, N. Uyop, M.F. Ab Aziz, N.H. Mahmood, M.F. Kamarudin, A. Linoby
283
Control Brain Machine Interface for a Power Wheelchair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.R. Hema, M.P. Paulraj
287
Design and Development of Microcontroller Based ECG Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . A.D. Paul, K.R. Urzoshi, R.S. Datta, A. Arsalan, A.M. Azad
292
Development of CW CO2 Laser Percussion Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Sano, Y. Hashishin, T. Nakayama
296
Electric Field Measurement for Biomedical Application Using GNU Radio . . . . . . . . . . . . . . . . . . I. Hieda, K.C. Nam
300
EZ430-Chronos Watch as a Wireless Health Monitoring Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I.N.A. Mohd Nordin, P.S. Chee, M. Mohd Addi, F.K. Che Harun
305
Face Detection for Drivers’ Drowsiness Using Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V.V. Dixit, A.V. Deshpande, D. Ganage
308
Histological Study to Estimate Risks of Radon Inhalation Dose on a Lung Cancer: In vivo . . . A.H. Ismail, M.S. Jaafar, F.H. Mustafa
312
Influence of Hair Color on Photodynamic Dose Activation in PDT for Scalp Diseases . . . . . . . . F.H. Mustafa, M.S. Jaafar, A.H. Ismail, A.F. Omar, H.A. Houssein, Z.A. Timimi
315
Measurement and Diagnosis Assessment of Plethysmographycal Record . . . . . . . . . . . . . . . . . . . . . M. Augustynek, M. Penhaker, J. Semkovic, P. Penhakerova, M. Cerny
320
Measurement of Available Chlorine in Electrolyzed Water Using Electrical Conductivity . . . . K. Umimoto, H. Kawanishi, Y. Shimamoto, M. Miyata, S. Nagata, J. Yanagida
324
Measuring the Depth of Sleep by Near Infrared Spectroscopy and Polysomnography . . . . . . . . N.T.M. Thanh, L.H. Duy, L.Q. Khai, T.Q.D. Khoa, V.V. Toi
328
Multi-frequency Microwave Radiometer System for Measuring Deep Brain Temperature in New Born Infants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Sugiura, H. Hirata, J.W. Hand, S. Mizushina
332
Table of Contents
XVII
Number of Pulses of rTMS Affects the Inter-Reversal Time of Perceptual Reversal . . . . . . . . . . K. Nojima, S. Ge, Y. Katayama, K. Iramina
336
Permittivity of Urine between Benign and Malignant Breast Tumour . . . . . . . . . . . . . . . . . . . . . . . . E.S. Arjmand, H.N. Ting, C.H. Yip, N.A. Mohd Taib
340
Problems and Solution When Developing Intermittent Pneumatic Compression . . . . . . . . . . . . . N.H. Kim, H.S. Seong, J.W. Moon, J.M. Cho
344
Pulse Oximetry Color Coded Heart Rate Monitoring System Using ZigBee . . . . . . . . . . . . . . . . . . F.K. Che Harun, N. Zulkarnain, M.F. Ab Aziz, N.H. Mahmood, M.F. Kamarudin, A. Linoby
348
Study of Electromagnetic Field Radiation on the Human Muscle Activity . . . . . . . . . . . . . . . . . . . M.S.F. Mansor, W.A.B. Wan Abas, W.N.L. Wan Mahadi
352
The pH Sensitivity of the Polarization Capacitance on Stainless-Steel Electrodes . . . . . . . . . . . . J.G. Bau, H.C. Chen
356
Ultrasound Dosimetery Using Microbubbles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Rezayat, E. Zahedi, J. Tavakkoli
359
Visualization of Measured Data for Wireless Devices BluesenseAD . . . . . . . . . . . . . . . . . . . . . . . . . . O. Krejcar, D. Janckulik, M. Kelnar
363
Voltammetric Approach for In-vivo Detecting Dopamine Level of Rat’s Brain . . . . . . . . . . . . . . . G.C. Chen, H.Z. Han, T.C. Tsai, C.C. Cheng, J.J. Jason Chen
367
Wearable ECG Recorder with Acceleration Sensors for Measuring Daily Stress . . . . . . . . . . . . . . Y. Okada, T.Y. Yoto, T.A. Suzuki, S. Sakuragawa, H. Mineta, T. Sugiura
371
Wireless Sensor Network for Flexible pH Array Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.C. Chou, C.C. Chen, M.S. Wu
375
Bionanotechnology
Application of Gold Nanoparticles for Enhanced Photo-Thermal Therapy of Urothelial Carcinoma . . . . . . . . . . Y.J. Wu, C.H. Chen, H.S.W. Chang, W.C. Chen, J.J. Jason Chen
380
Nanobiosensor for the Detection and Quantification of Specific DNA Sequences in Degraded Biological Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.E. Ali, U. Hashim, S. Mustafa, Y.B. Che Man, M.H.M. Yusop
384
Polysilicon Nanogap Formation Using Size Expansion Technique for Biosensor Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Nazwa, U. Hashim, T.S. Dhahi
388
Biomedical and Physiological Modelling
A Modified Beer-Lambert Model of Skin Diffuse Reflectance for the Determination of Melanin Pigments . . . . . . . . . . A.F.M. Hani, H. Nugroho, N. Mohd Noor, K.F. Rahim, R. Baba
393
A Review of ECG Peaks Detection and Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T.I. Amani, S.S.N. Alhady, U.K. Ngah, A.R.W. Abdullah
398
An Image Approach Model of RBC Flow in Microcirculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . W.C. Lin, H.H. Liu, R.S. Liu, K.P. Lin
403
An Image-Based Anatomical Network Model and Modelling of Circulation of Mouse Retinal Vasculature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Ganesan, S. He, H. Xu, Y.H. Yau
407
Analysis of Normal and Atherosclerotic Blood Vessels Using 2D Finite Element Models . . . . . K. Kamalanand, S. Srinivasan, S. Ramakrishnan
411
Comparative Analysis of Preprocessing Techniques for Quantification of Heart Rate Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . W.M.H. Wan Mahmud, M.B. Malarvili
415
Detection of Influence of Stimuli or Services on the Physical Condition and Satisfaction with Unconscious Response Reflecting Activities of Autonomic Nervous System . . . . . . . . . . H. Okawai, S. Ichisawa, K. Numata
420
Determination of Reflectance Optical Sensor Array Configuration Using 3-Layer Tissue Model and Monte Carlo Simulation . . . . . . . . . . N.A. Jumadi, K.B. Gan, M.A. Mohd Ali, E. Zahedi
424
Effects of ECM Degradation Rate, Adhesion, and Drag on Cell Migration in 3D . . . . . . . . . . H.C. Wong, W.C. Tang
428
Finite Element Analysis of Different Ferrule Heights of Endodontically Treated Tooth . . . . . . . J. Kashani, M.R. Abdul Kadir, Z. Arabshahi
432
How to Predict the Fractures Initiation Locus in Human Vertebrae Using Quantitative Computed Tomography (QCT) Based Finite Element Method? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Zeinali, B. Hashemi, A. Razmjoo
436
Influence of Cancellous Bone Existence in Human Lumbar Spine: A Finite Element Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Alizadeh, J. Kashani, M.R. Abdul Kadir, A. Fallahi
439
Laser Speckle Contrast Imaging for Perfusion Monitoring in Burn Tissue Phantoms . . . . . . . . . A.K. Jayanthy, N. Sujatha, M. Ramasubba Reddy
443
Microdosimetry Modeling Technique for Spherical Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Nazib Adon, M. Noh Dalimin, N. Mohd Kassim, M.M. Abdul Jamil
447
Recurrent Breast Cancer with Proportional Homogeneous Poisson Process . . . . . . . . . . . . . . . . . . C.C. Chang
450
Simulation of the Effects of Electric and Magnetic Loadings on Internal Bone Remodeling . . . A. Fathi Kazerooni, M. Rabbani, M.R. Yazdchi
458
Study of Hematocrit in Relation with Age and Gender Using Low Power Helium–Neon Laser Irradiation . . . . . . . . . . H.A.A. Houssein, M.S. Jaafar, Z. Ali, Z.A. Timimi, F.H. Mustafa
463
Three-Dimensional Fluid-Structure Interaction Modeling of Expiratory Flow in the Pharyngeal Airway . . . . . . . . . . M.R. Rasani, K. Inthavong, J.Y. Tu
467
Biosignal Processing
A Hybrid Trial to Trial Wavelet Coherence and Novelty Detection Scheme for a Fast and Clear Notification of Habituation: An Objective Uncomfortable Loudness Level Measure . . . . . . . . . . Mai Mariam
472
Application of Data Mining on Polynomial Based Approach for ECG Biometric . . . . . . . . . . . . . K.A. Sidek, I. Khalil
476
Brain Waves after Short Duration Exercise Induced by Wooden Tooth Brush as a Physical Agent – A Pilot Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F. Reza, H. Omar, A.L. Ahmed, T. Begum, M. Muzaimi, J.M. Abdullah
480
Change Point Detection of EEG Signals Based on Particle Swarm Optimization . . . . . . . . . . . . . M.F. Mohamed Saaid, W.A.B. Wan Abas, H. Arof, N. Mokhtar, R. Ramli, Z. Ibrahim
484
Comparative Analysis of the Optimal Performance Evaluation for Motor Imagery Based EEG-Brain Computer Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y.S. Ryu, Y.B. Lee, C.G. Lee, B.W. Lee, J.K. Kim, M.H. Lee
488
Comparison of Influences on P300 Latency in the Case of Stimulating Supramarginal Gyrus and Dorsolateral Prefrontal Cortex by rTMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Torii, K. Nojima, A. Matsunaga, M. Iwahashi, K. Iramina
492
Cortical Connectivity during Isometric Contraction with Concurrent Visual Processing by Partial Directed Coherence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.N. Ramli, N.M. Safri, R. Sudirman, N.H. Mahmood, M.A. Othman, J. Yunus
496
Cross Evaluation for Characteristics of Motor Imagery Using Neuro-feedback Based EEG-Brain Computer Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y.S. Ryu, Y.B. Lee, W.J. Jeong, S.J. Lee, D.H. Kang, M.H. Lee
500
EEG Artifact Signals Tracking and Filtering in Real Time for Command Control Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Moghavvemi, A. Attaran, M.H. Moshrefpour Esfahani
503
EEG Patterns for Driving Wireless Control Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H. Azmy, N. Mat Safri, F.K. Che Harun, M.A. Othman
507
Effects of Physical Fatigue onto Brain Rhythms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.C. Ng, P. Raveendran
511
Evaluation of Motor Imagery Using Combined Cue Based EEG-Brain Computer Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.H. Choi, Y.B. Lee, W.J. Jeong, S.J. Lee, D.H. Kang, M.H. Lee
516
Fitting and Eliminating to the TMS Induced Artifact on the Measured EEG by the Equivalent Circuit Simulation Improved Performance . . . . . . . . . . Y. Katayama, K. Iramina
519
Gender Identification by Using Fundamental and Formant Frequency for Malay Children . . . . H.N. Ting, A.R. Zourmand
523
Improving Low Pass Filtered Speech Intelligibility Using Nonlinear Frequency Compression with Cepstrum and Spectral Envelope Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.H. Mohd Zaman, M.M. Mustafa, A. Hussain
527
Long-Term Heart Rate Variability Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Penhaker, T. Stula, M. Augustynek
532
Neural Network Classifier for Hand Motion Detection from EMG Signal . . . . . . . . . . . . . . . . . . . . Md.R. Ahsan, M.I. Ibrahimy, O.O. Khalifa
536
Performance Comparison between Mutative and Constriction PSO in Optimizing MFCC for the Classification of Hypothyroid Infant Cry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Zabidi, W. Mansor, Y.K. Lee, I.M. Yassin, R. Sahak
542
Periodic Lateralized Epileptiform Discharges (PLEDs) in Post Traumatic Epileptic Patient—Magnetoencephalographic (MEG) Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Begum, F. Reza, H. Omar, A.L. Ahmed, S. Bhaskar, J.M. Abdullah, J.T.K.J. Tharakan
548
Premonitory Symptom of Septic Shock in Heart Rate Variability . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y. Yokota, Y. Kawamura, N. Matsumaru, K. Shirai
552
Review of Electromyographic Control Systems Based on Pattern Recognition . . . . . . . . . . . . . . . S.A. Ahmad, A.J. Ishak, S.H. Ali
556
Speaker Verification Using Gaussian Mixture Model (GMM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H. Hussain, S.H. Salleh, C.M. Ting, A.K. Ariff, I. Kamarulafizam, R.A. Suraya
560
Speaker-Independent Vowel Recognition for Malay Children Using Time-Delay Neural Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.F. Yong, H.N. Ting
565
Feasibility of Using the Wavelet-Phase Stability in the Objective Quantification of Neural Correlates of Auditory Selective Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y.F. Low, K.C. Lim, Y.G. Soo, D.J. Strauss
569
Clinical Engineering
Sharing the Medical Resource: The Feasibility and Benefit of Global Medical Instruments Support and Service . . . . . . . . . . M.J. Tzeng, C.Y. Lee, Y.Y. Huang
574
BioMEMS
Hybrid Capillary-Flap Valve for Vapor Control in Point-of-Care Microfluidic CD . . . . . . . . . . T. Thio, A.A. Nozari, N. Soin, M.K.B.A. Kahar, S.Z.M. Dawal, K.A. Samra, M. Madou, F. Ibrahim
578
Semi-automated Dielectrophoretic Cell Characterisation Module for Lab-on-Chip Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N.A. Kadri, H.O. Fatoyinbo, M.P. Hughes, F.H. Labeed
582
Medical Imaging
A Preliminary Study of Compression Efficiency and Noise Robustness of Orthogonal Moments on Medical X-Ray Images . . . . . . . . . . K.H. Thung, S.C. Ng, C.L. Lim, P. Raveendran
587
Activities of Oxy-Hb and DeOxy-Hb on Motor Imaging and Motor Execution by Near-Infrared Spectroscopy . . . . . . . . . . D.H.T. Nguyen, N.V.D. Hau, T.Q.D. Khoa, V.V. Toi
591
An Overview: Segmentation Method for Blood Cell Disorders . . . . . . . . . . M.F. Miswan, J.M. Sharif, M.A. Ngadi, D. Mohamad, M.M. Abdul Jamil
596
Assessment of Ischemic Stroke Rat Using Near Infrared Spectroscopy . . . . . . . . . . . . . . . . . . . . . . . F.M. Yang, C.W. Wu, C.Y. Lu, J.J. Jason Chen
600
Brain Lesion Segmentation of Diffusion-Weighted MRI Using Thresholding Technique . . . . . . . N. Mohd Saad, L. Salahuddin, S.A.R. Abu-Bakar, S. Muda, M.M. Mokji
604
Characterization of Renal Stones from Ultrasound Images Using Nonseparable Quincunx Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.R. Shah, M.D. Desai, L. Panchal, M.R. Desai
611
Cluster Approach for Auto Segmentation of Blast in Acute Leukimia Blood Slide Images . . . . N.H. Harun, M.Y. Mashor, H. Rosline
617
Comparison of the Basal Ganglia Volumetry between SWI and T1WI in Normal Human Brain . . . . . . . . . . W.B. Jung, Y.H. Han, J.H. Lee, C.W. Mun
623
Computer-Aided Diagnosis System for Pancreatic Tumor Detection in Ultrasound Images . . . C. Wu, M.H. Lin, J.L. Su
627
Detection of Gastrointestinal Disease in Endoscope Imaging System . . . . . . . . . . . . . . . . . . . . . . . . . C.S. Low, K.S. Sim, A.L. Lee, H.Y. Ting, C.P. Tso, S.Y. Chuah
631
Digital Image Analysis of Chronic Ulcers Tissues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.F.M. Hani, L. Arshad, A.S. Malik, A. Jamil, B.B. Felix Yap
635
Early Ischemic Stroke Detection through Image Colorization Graphical User Interface . . . . . . . K.S. Sim, M.K. Ong, C.K. Tan, C.P. Tso, A.H. Rozalina
639
Evaluation of the Effects of Chang’s Attenuation Correction Technique on Similar Transverse Views of Cold and Hot Regions in Tc-99m SPECT: A Phantom Study . . . . . . . . . . I.S. Sayed, A. Harfiza
643
Efficiency of Enhanced Distance Active Contour (EDAC) for Microcalcifications Segmentation . . . . . . . . . . S.S. Yasiran, A. Ibrahim, W.E.Z.W.A. Rahman, R. Mahmud
650
Estimating Retinal Vessel Diameter Change from the Vessel Cross-Section . . . . . . . . . . . . . . . . . . M.Z. Che Azemin, D.K. Kumar
655
Face-Central Incisor Morphometric Relation in Malays and Chinese . . . . . . . . . . . . . . . . . . . . . . . . . L.M. Abdulhadi, H. Abass
659
Fingertip Synchrotron Radiation Angiography for Prediction of Diabetic Microangiopathy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Fujii, N. Fukuyama, Y. Ikeya, Y. Shinozaki, T. Tanabe, K. Umetani, H. Mori
663
Hybrid Multilayered Perceptron Network Trained by Modified Recursive Prediction Error-Extreme Learning Machine for Tuberculosis Bacilli Detection . . . . . . . . . . . . . . . . . . . . . . . . . M.K. Osman, M.Y. Mashor, H. Jaafar
667
Intelligent Spatial Based Breast Cancer Recognition and Signal Enhancement System in Magnetic Resonance Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F.K. Chia, K.S. Sim, S.S. Chong, S.T. Tan, H.Y. Ting, Siti Fathimah Abbas, Sarimah Omar
674
Investigating the Mozart Effect on Brain Function by Using Near Infrared Spectroscopy . . . . H.Q.M. Huy, T.Q.D. Khoa, V.V. Toi
678
Latex Glove Protein Estimation Using Maximum Minimum Area Variation . . . . . . . . . . . . . . . . . . K.P. Yong, K.S. Sim, H.Y. Ting, W.K. Lim, K.L. Mok, A.H.M. Yatim
682
Measurement of the Area and Diameter of Human Pupil Using Matlab . . . . . . . . . . . . . . . . . . . . . . N.H. Mahmood, N. Uyop, M.M. Mansor, A.M. Jumadi
686
Medical Image Pixel Extraction via Block Positioning Subtraction Technique for Motion Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.S.D.S. Jitvinder, S.S.S. Ranjit, S.A. Anas, K.C. Lim, A.J. Salim
690
Monte Carlo Characterization of Scattered Radiation Profile in Volumetric 64 Slice CT Using GATE . . . . . . . . . . A. Najafi Darmian, M.R. Ay, M. Pouladian, A. Shirazi, H. Ghadiri, A. Akbarzadeh
694
Multi-Modality Medical Images Feature Analysis . . . . . . . . . . H. Madzin, R. Zainuddin, N.S. Mohamed
698
Parametric Dictionary Design Using Genetic Algorithm for Biomedical Image De-noising Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H. Nozari, G.A. Rezai Rad, M. Pourmajidian, A.K. Abdul Wahab
704
Quantification of Inter-crystal Scattering and Parallax Effect in Pixelated High Resolution Small Animal Gamma Camera: A Monte Carlo Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F. Adibpour, M.R. Ay, S. Sarkar, G. Loudos
708
Quantitative Assessment of the Influence of Crystal Material and Size on the Inter Crystal Scattering and Penetration Effect in Pixilated Dual Head Small Animal PET Scanner . . . . . . . N. Ghazanfari, M.R. Ay, N. Zeraatkar, S. Sarkar, G. Loudos
712
Rapid Calibration of 3D Freehand Ultrasound for Vessel Intervention . . . . . . . . . . . . . . . . . . . . . . . K. Luan, H. Liao, T. Ohya, J. Wang, Ichiro Sakuma
716
Segmentation of Tumor in Digital Mammograms Using Wavelet Transform Modulus Maxima on a Low Cost Parallel Computing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hanifah Sulaiman, Arsmah Ibrahim, Norma Alias
720
The Correlation Analyses between fMRI and Psychophysical Results Contribute to Certify the Activated Area for Motion-Defined Pattern Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Kamiya, A. Kodabashi, Y. Higashi, M. Sekine, T. Fujimoto, T. Tamura
724
Prosthetics and Orthotics
A New Method for Measuring Pistoning in Lower Limb Prosthetic . . . . . . . . . . H. Gholizadeh, N.A. Abu Osman, A.G. Lúvíksdóttir, M. Kamyab, A. Eshraghi, S. Ali, W.A.B. Wan Abas
728
Ambulatory Function Monitor for Amputees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.N. Ooi, N.A. Abu Osman
732
Anthromorphic Design Methodology for Multifingered Arm Prosthesis . . . . . . . . . . . . . . . . . . . . . . U.S. Md Ali, N.A. Abu Osman, N. Yusoff, N.A. Hamzaid, H. Md Zin
735
Approximation Technique for Prosthetic Design Using Numerical Foot Profiling . . . . . . . . . . . . . A.Y. Bani Hashim, N.A. Abu Osman, W.A.B. Wan Abas, L. Abdul Latif
739
Comparison Study of the Transradial Prosthetics and Body Powered Prosthetics Using Pressure Distribution Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N.A. Abd Razak, N.A. Abu Osman
743
Effect of Position in Fixed Screw on Prosthetic Temporomandibular Joint . . . . . . . . . . . . . . . . . . . P.H. Liu, T.H. Huang, J.S. Huang
747
Evaluation of EMG Feature Extraction for Movement Control of Upper Limb Prostheses Based on Class Separation Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Phinyomark, S. Hirunviriya, A. Nuidod, P. Phukpattaranont, C. Limsakul
750
Modeling and Fabrication of Articulate Patellar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.H. Rifa’t, Y. Nukman, N.A. Abu Osman, L.K. Gym, M.Z. Harizam
755
Pistoning Measurement in Lower Limb Prostheses – A Literature Review . . . . . . . . . . . . . . . . . . . A. Eshraghi, N.A. Abu Osman, M.T. Karimi, H. Gholizadeh, S. Ali
758
Prosthetics and Orthotics Services in the Rehabilitation Clinics of University Malaya Medical Centre . . . . . . . . . . S. Ali, N.A. Abu Osman, H. Gholizadeh, A. Eshraghi, P.M. Verdan, L. Abdul Latif
762
Prosthetic Foot Design: The Significance of the Normalized Ground Reaction Force . . . . . . . . . . A.Y. Bani Hashim, N.A. Abu Osman, W.A.B. Wan Abas, L. Abdul Latif
765
Rehabilitation Engineering
A Survey on Welfare Equipment Using Information Technologies in Korea and Japan . . . . . . . . . . H.S. Seong, N.H. Kim, Y.A. Yang, E.J. Chung, S.H. Park, J.M. Cho
769
Biomechanical Analysis on the Effect of Bone Graft of the Wrist after Arthroplasty . . . . . . . . . M.N. Bajuri, M.R. Abdul Kadir, M.Y. Yahya
773
Can Walking with Orthosis Decrease Bone Osteoporosis in Paraplegic Subjects? . . . . . . . . . . . . . M.T. Karimi, S. Solomonidis, A. Eshraghi
778
Design and Development of Arm Rehabilitation Monitoring Device . . . . . . . . . . . . . . . . . . . . . . . . . . R. Ambar, M.S. Ahmad, M.M. Abdul Jamil
781
Development of Artificial Hand Gripper for Rehabilitation Process . . . . . . . . . . . . . . . . . . . . . . . . . . A.M. Mohd Ali, M.Y. Ismail, M.M. Abdul Jamil
785
Motor Control in Children with Developmental Coordination Disorder–Fitts’ Paradigm Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.H. Chang, N.Y. Yu
789
Quantitative Analysis of Conductive Fabric Sensor Used for a Method Evaluating Rehabilitation Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.W. Lee, C.K. Lee, Y.S. Ryu, D.H. Choi, M.H. Lee
793
Study on Posture Homeostasis One Hour Pilot Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G.W. Lin, T.C. Hsiao, C.W. Lin
797
The Development of Muscle Training System Using the Electromyogram and Interactive Game for Physical Rehabilitation . . . . . . . . . . K.S. Kim, J.H. Kang, Y.H. Lee, C.S. Moon, H.H. Choi, C.W. Mun
801
Tissue Engineering
A Preliminary Study on Magnetic Fields Effects on Stem Cell Differentiation . . . . . . . . . . Azizi Miskon, Jatendra Uslama
805
A Preliminary Study on Possibility of Improving Animal Cell Growth . . . . . . . . . . . . . . . . . . . . . . . M.Y. Jang, C.W. Mun
811
Dynamic Behaviour of Human Bone Marrow Derived-Mesenchymal Stem Cells on Uniaxial Cyclical Stretched Substrate – A Preliminary Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.Y. Nam, B. Pingguan-Murphy, A.A. Abbas, A.M. Merican, T. Kamarul
815
Effect of Herbal Extracts on Rat Bone Marrow Stromal Cells (BMSCs) Derived Osteoblast–Preliminary Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.T. Poon, W.A.B. Wan Abas, K.H. Kim, B. Pingguan-Murphy
819
Fabrication and In-vivo Evaluation of BCP Scaffolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Behnamghader, R. Tolouei, R. Nemati, D. Sharifi, M. Farahpour, T. Forati, A. Rezaei, A. Gozalian, R. Neghabat
823
Fabrication of Porous Ceramic Scaffolds via Polymeric Sponge Method Using Sol-Gel Derived Strontium Doped Hydroxyapatite Powder . . . . . . . . . . I. Sopyan, M. Mardziah, Z. Ahmad
827
Hydrogel Scaffolds: Advanced Materials for Soft Tissue Re-growth . . . . . . . . . . Z.A. Abdul Hamid, A. Blencowe, J. Palmer, K.M. Abberton, W.A. Morrison, A.J. Penington, G.G. Qiao, G. Stevens
831
Process Optimization to Improve the Processing of Poly (DL-lactide-co-glycolide) into 3D Tissue Engineering Scaffolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.E. Hoque, Y.L. Chuan, I. Pashby, A.M.H. Ng, R. Idrus
836
The Fabrication of Human Amniotic Membrane Based Hydrogel for Cartilage Tissue Engineering Applications: A Preliminary Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I.H. Hussin, B. Pingguan-Murphy, S.Z. Osman
841
General Papers
Artificial Oxygen Carriers (Hemoglobin-Vesicles) as a Transfusion Alternative and for Oxygen Therapeutics . . . . . . . . . . H. Sakai
845
Enzymatic Synthesis of Soybean Oil Based-Fatty Amides . . . . . . . . . . E.A. Jaffar Al-Mulla
849
Rat Model of Healing the Skin Wounds and Joint Inflammations by Recombinant Human Angiogenin, Erythropoietin and Tumor Necrosis Factor-α . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Gulyaev, V. Piven
854
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
859
Keyword Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
865
Cardiovascular Modeling: Physiological Concepts and Simulation
A. Avolio
Australian School of Advanced Medicine, Macquarie University, Sydney, Australia
Abstract— Modeling concepts are utilized whenever relationships are described between two or more quantities. One of the most elementary cardiovascular models is the relationship between mean arterial blood pressure and mean arterial blood flow in a vascular bed, described as vascular resistance. For simulation purposes, the electrical analogy is conventionally used, where voltage (V) is analogous to pressure (P) and current (I) is analogous to flow (Q), such that the equivalent ohmic resistance (R) is obtained from the relationship P = Q·R. This analogy to Ohm’s Law (V = I·R) implies a linear relationship, such that R can be determined for any value of P and Q. However, the physiological constituents of R, that is, vascular geometry and blood viscosity, are themselves functions of P and Q. Hence the relationship between P and Q is inherently nonlinear, with the degree of nonlinearity depending on the anatomical location along the arterial tree and thus on the velocity of blood flow. This concept, which applies to steady values of P and Q, is extended to time-varying signals P(t) and Q(t), where steady-state oscillations are described in the frequency (ω) domain, such that the relationship between oscillatory pressure P(ω) and flow Q(ω) is the vascular impedance Z(ω), and R is the zero-frequency value of Z(ω).
The non-linearities are also inherent in the impedance model, since the elastic properties of the arterial wall make constitutive parameters such as wave speed and vessel diameter pressure dependent. However, over the range of pressure excursions during the cardiac cycle, the effects of non-linearity are relatively small, allowing closed-form expressions of impedance based on vascular and blood properties, such that the input impedance spectrum can describe the complete hemodynamics of the vascular bed. When applied to the ascending aorta or the pulmonary artery, it describes the dynamic load on the left and right ventricles respectively. When the impedance model is applied to distributed structures such as branching vascular trees, it can be used to investigate underlying concepts related to optimal function determined by allometric relationships between cardiovascular parameters (e.g. heart rate) and body size. Simulation of wave propagation in arterial models can be used to determine factors that contribute to the change in pulse waveform throughout the arterial tree, and inclusion of non-linear properties, such as pressure-dependent elasticity, can simulate changes in arterial hemodynamics due to gravitational effects on arteries, as occurs with changes from supine to upright posture.
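The relationship between the ohmic resistance R and the impedance spectrum Z(ω) described above can be sketched numerically. The fragment below uses a two-element Windkessel (resistance in parallel with compliance) purely for illustration; the parameter values and the Windkessel choice are assumptions of this sketch, not models or data from the abstract.

```python
import numpy as np

def windkessel_impedance(omega, R=1.0, C=1.5):
    """Input impedance Z(w) of a two-element Windkessel:
    resistance R in parallel with compliance C (illustrative values)."""
    return R / (1 + 1j * omega * R * C)

# At zero frequency the impedance reduces to the ohmic resistance R,
# consistent with the steady-state relation P = Q * R; at higher
# frequencies the compliance shunts flow and |Z| falls below R.
freqs = np.linspace(0, 10, 6)            # angular frequencies, rad/s
Z = windkessel_impedance(freqs)
print(abs(Z[0]))                          # -> 1.0, i.e. equals R
```

The same pattern extends to more realistic impedance models: R always appears as the zero-frequency limit of Z(ω).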
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, p. 1, 2011. www.springerlink.com
Effect of Pain Perception on the Heartbeat Evoked Potential
X.P. Li
Department of Mechanical Engineering & Division of Bioengineering, National University of Singapore
Abstract— Pain, as an unpleasant sensory and emotional experience, can seriously impair the quality of life if uncontrolled or undertreated. In many cases, the failure to adequately treat pain is due to the lack of accurate pain assessment tools, especially when subjective self-report methods are not applicable because patients are unable to describe their pain experience (e.g. young children, incapacitating brain conditions). Therefore, there is a need for measures of pain which do not rely on patients’ ability to self-report. In this study, the relationship between the Heartbeat Evoked Potential (HEP) and acute pain perception was investigated. The aim was to examine the effect of acute tonic cold pain on the HEP and to test whether or not pain perception can be reflected by the HEP. Simultaneous electroencephalogram (EEG) and electrocardiogram (ECG) were recorded from 21 healthy young adults in three conditions: passive no-task control, no-pain control, and cold pain
induced by cold pressor test (CPT). The HEP was obtained by using ECG R-peaks as event triggers. Prominent HEP deflection was observed in both control conditions mainly over the frontal and central locations, while it was significantly suppressed in the cold pain condition over the right-frontal, rightcentral and midline locations. A comparison of the data in the first and last 5 minutes of cold pain condition showed that lower subjective pain ratings were accompanied by higher HEP magnitudes. A correlation analysis showed that the mean HEP magnitude over the midline locations was significantly negatively correlated with subjective pain ratings. In conclusions, cold pain induces significant suppression of the HEP across a number of scalp locations, and the suppression is correlated with self-report of pain, indicating the potential of the HEP to serve as an alternative pain measure.
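The R-peak-triggered averaging step can be sketched as follows. This is a minimal single-channel NumPy sketch with synthetic data, not the authors' pipeline; the function name, window lengths, and sampling rate are assumed for illustration:

```python
import numpy as np

def heartbeat_evoked_potential(eeg, r_peaks, fs, pre=0.2, post=0.6):
    """Average EEG epochs time-locked to ECG R-peaks (one channel).

    eeg     -- 1-D EEG signal
    r_peaks -- sample indices of detected ECG R-peaks
    fs      -- sampling rate [Hz]
    pre/post -- epoch window before/after each R-peak [s]
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    # Extract one epoch per heartbeat, discarding epochs that run off the record.
    epochs = np.asarray([eeg[p - n_pre:p + n_post]
                         for p in r_peaks
                         if p - n_pre >= 0 and p + n_post <= len(eeg)])
    # Baseline-correct each epoch against its pre-R-peak interval,
    # then average across heartbeats to obtain the HEP waveform.
    epochs -= epochs[:, :n_pre].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)

# Synthetic demo: 10 s of noise with a fixed deflection after each "R-peak".
fs = 250
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, 10 * fs)
peaks = np.arange(fs, 9 * fs, fs)      # one simulated heartbeat per second
for p in peaks:
    eeg[p + 50:p + 80] += 5.0          # evoked deflection ~200-320 ms post-R
hep = heartbeat_evoked_potential(eeg, peaks, fs)
```

Averaging over many heartbeats cancels EEG activity that is not time-locked to the cardiac cycle, which is what isolates the HEP from background noise.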
In-vivo Cartilage Mechano-Biology: How to Make Progress in Osteoarthritis Research W. Herzog, T.R. Leonard, Z. Abusara, S.K. Han, and A. Sawatsky University of Calgary/Faculty of Kinesiology, Human Performance Laboratory, Calgary, Canada Abstract— Cartilage mechano-biology has typically been studied in isolated tissue explants exposed to hydrostatic pressure, or subjected to confined or unconfined loading conditions. Although these approaches offer great control over the experiments, they do not reflect the physiological loading and boundary conditions of cartilage in the intact joint. Here, we will describe recent approaches that allow for evaluation of cartilage and chondrocyte biomechanics and signaling in the intact cartilage and intact joint of live animals. Although not as well controlled as experiments performed on tissue explants, the in-vivo work offers the opportunity to study chondrocyte mechanics and signaling, as well as tissue biomechanics, for physiologically relevant loading situations and with natural boundary conditions.
Below, we will describe some of our recent work aimed at understanding the mechano-biology of joints, articular cartilage tissue, and chondrocytes (articular cartilage cells) in the intact joint, subjected to normal or near physiological loading patterns. The disadvantage of this approach is that experimental conditions are not as well controlled as in the in situ and in vitro approaches common to this field, but the overwhelming advantage is that observations made in vivo are likely to reflect much better what might cause the onset and progression of OA. The studies selected for presentation here are focused on (i) muscle weakness as an independent risk factor for OA, (ii) muscle imbalance as a cause for knee pain, and (iii) the mechano-biology of chondrocytes in the intact joint.
Keywords— Osteoarthritis, knee biomechanics, chondrocyte signaling, muscle weakness, live cell imaging.
I. INTRODUCTION

Osteoarthritis (OA) is a disease of the joint that affects approximately 10% of all Canadians and about 50% of all Canadians aged 65 years or older. With the increasing average age of the population, an ever-increasing number of people will be affected by OA. In Canada, the estimated cumulative costs of OA by the year 2040 are predicted to reach $CDN 1.4 trillion. Osteoarthritis is associated with pain, swelling of the joint, functional impairment, and a loss of joint range of motion (e.g. [1]). These clinical signs are caused by a loss of cartilage from the articulating surfaces of bones, osteophyte formation, loss of subchondral bone, and damage to menisci and other internal structures (e.g. [2]). Osteoarthritis research has primarily focused on articular cartilage, and most of it has been done on isolated tissues or cells (e.g. [3]). Such approaches have the advantage that tissues and cells are well controlled and the experimental conditions are highly repeatable. However, isolating pieces of tissue or cells and subjecting them to loading through hydrostatic pressure, pressure plates, or other instruments is highly non-physiological; although it might help define mechanical and biological properties, it is debatable whether these properties remain the same in a joint loaded by inertial forces and muscular contractions.
II. BACKGROUND AND PURPOSE

A. Muscle Weakness

Osteoarthritis is associated with muscle weakness. In fact, muscle weakness has been said to be a better predictor of OA than either joint space narrowing or pain [4;5]. However, it is not clear whether muscle weakness is a condition arising after the onset of OA, or whether it is an independent risk factor for the onset and progression of OA [6]. Purpose: The purpose of this study was to develop a model of muscle weakness and test the hypothesis that muscle weakness is an independent risk factor for the onset and progression of OA. Methods: Muscle weakness was produced in one-year-old, female New Zealand white rabbits by injecting botulinum toxin type-A (BTXA) into the quadriceps musculature for periods ranging from 1-6 months [7]. Contralateral limbs served as saline-injected controls, while other animals served as normal controls. Strength, muscle mass, muscle structure and knee histology (Mankin score) were evaluated to determine whether muscle weakness caused joint degeneration. Results: Following BTXA injections for 1-6 months into the quadriceps musculature of rabbits, muscle strength was decreased to about 20% of its original value, muscle mass was reduced by approximately half, fat invasion into the muscle
replaced contractile material by about 50%, and the Mankin scores on the retropatellar surface were increased, indicating that knee joint degeneration was increased following BTXA-induced muscle weakness [Figure 1] [8;9].
Fig. 1 Cross-sectional histological view of the vastus lateralis from a control animal (right) and a BTXA-treated animal receiving six injections over a six-month treatment period (left). Note the loss of contractile tissue (dark) and the increase in fat (light) in the experimental compared to the control muscle

Discussion: The results of this study indicate that muscle weakness is an independent risk factor for OA onset and progression. Needless to say, in an in-vivo study with freely ambulating animals, muscle weakness may cause a series of confounding conditions: for example, altered gait patterns and changed behavior. Nevertheless, whatever the detailed reasons for the increased knee joint degeneration following quadriceps weakness, weakness was the catalyst for these events to occur.

Conclusion: We concluded from the results of this and other similar studies on BTXA-induced muscle weakness that quadriceps weakness is an independent risk factor for the onset, and possibly also the progression, of OA [10].

B. Muscle Imbalance

Imbalance of muscle forces around joints has been associated with joint pain and functional limitations. Specifically, vastus medialis (VM) weakness in the knee extensor group has been related to mal-tracking of the patella, causing pain and leading to joint degeneration [11]. Purpose: The purpose of this study was to test whether VM weakness causes mal-tracking of the patella, thereby creating conditions for knee pain and degeneration. Methods: Weakness of the VM was introduced in the knee extensor muscles of one-year-old, female New Zealand white rabbits by VM ablation. Patellofemoral contact pressures were measured using Fuji Presensor film, and patellar tracking relative to the femur was quantified using markers attached to bone pins on the patella and femur and high-speed video filming (200 Hz). Results: Ablation of the VM did not cause a change in isometric patellofemoral contact pressure distributions across the physiological range of joint movement [Figure 2], nor did it cause systematic changes in patellar tracking along the femur. However, patellar tracking was systematically shifted towards the medial aspect of the femur during active compared to passive knee extension/flexion trials. Also, patellar tracking was systematically different for active flexion compared to active extension of the knee.
Fig. 2 Selected patellofemoral contact pressure distributions at knee angles of 30, 60, and 90° with the VM intact (left column) and the VM transected (middle column). Overlaying the contact pressures measured before and after VM ablation showed no systematic shifts in patellofemoral pressures, suggesting that VM strength was not important in patellar tracking (right column)
Discussion: In contrast to clinical belief, anecdotal evidence, and conservative treatment protocols, VM weakness (achieved by complete ablation of the VM) did not change patellar tracking in the femoral groove, nor did it change the contact pressures in the patellofemoral joint. The rabbit knee is similar in its musculoskeletal structure to the human knee in that it has a VM with a fibre alignment of about 45° to the medial side of the femur axis [Figure 3]. Therefore, one would have expected to see a medial shift, as has been proposed in humans. However, this was not observed, likely because the VM line of action is along its aponeurosis (rather than along the fibre direction, as has been tacitly assumed), thereby creating this surprising result.
Fig. 3 Knee joint and quadriceps anatomy of the human (left) and rabbit (right) knee. Note that the arrangement of the knee extensor muscles and their insertion into the patella is similar in human and rabbit knees

Conclusion: Ablation of the VM did not cause a lateral shift in patellar tracking, or an increase in lateral pressure distribution in the patellofemoral joint, as has been suggested clinically and anecdotally. It is not known whether this result would also hold for the human knee, but it does raise the question of why clinical treatments of patellofemoral joint pain are concerned primarily with strengthening the VM, despite a lack of direct evidence that VM strengthening will alter patellar tracking and/or patellofemoral contact pressure distributions.

C. Mechano-Biology of Chondrocytes

Chondrocytes are the cells of articular cartilage. They maintain the matrix by producing the appropriate matrix molecules. The biosynthetic activities of chondrocytes are known to depend, among many other factors, on the mechanical loading environment of the joint. However, chondrocyte deformations as a function of joint loading, and the associated biological responses, have never been measured. Rather, chondrocytes have been studied either in isolation, embedded in gel-like substances, or in tissue explants (e.g. [12-14]). However, these experimental situations represent artificial loading and boundary conditions, and thus the mechano-biological responses might be artificial as well.

Purpose: The purpose of this study was to design a system for the measurement and quantification of chondrocyte mechanics in situ and in vivo in the intact joint loaded by muscular contractions. Methods: Chondrocyte biomechanics and biosynthesis have been measured in rabbit knee cartilage (in situ) and in the intact mouse knee (in vivo). Imaging is performed using two-photon excitation microscopy. Deformations of cells and cell shapes are obtained by reconstructing chondrocytes from stacks of images separated by 0.5 µm. Biological signaling is measured from intra-cellular calcium fluxes using fluorescence microscopy. Mechanical loading is controlled either by indentation (in situ) or by muscular contractions [15]. Results: Chondrocytes deform quickly (<1 s) upon muscular loading but take minutes to recover fully to their pre-loading shape and volume [Figure 4]. Chondrocytes show pronounced calcium sparks during the loading ramp, calm down during steady-state load application, and show some increase in calcium signaling upon unloading of the tissue [Figure 5].

Fig. 4 Cell shapes (a), cell volume (b) and cell height, width and depth (c) before muscular loading and at different instances during and after loading. Observe the quick volume loss of cells upon loading (b) and the slow recovery of cell shape (a) and volume (b) after unloading

Fig. 5 Mechanical loading (top), Ca2+ signaling (middle) and Ca2+ signaling in selected cells (bottom) during mechanical indentation testing. Note the increase in Ca2+ signaling upon load application
Discussion: Upon muscular loading, cells lose up to 25-30% of their volume in less than a second, attesting to their capacity for quick fluid loss. The fact that cell shapes and volumes recover much more slowly (minutes) suggests that during cyclic loading of a joint, cells are deformed greatly during the first loading cycle, but very little during all subsequent cycles. This slow recovery of cell shape has the advantage that cells do not need to undergo large changes in volume for each loading cycle, but only for the first cycle. The fact that calcium signaling is associated with loading and unloading ramps suggests that static loading has little biosynthetic effect, whereas dynamic loading activates cell signaling pathways. Conclusion: Chondrocytes deform substantially upon loading, but recover only slowly during unloading. Therefore, calcium signaling during load application might be associated with cell deformations, while signaling during unloading is likely associated with the release of force/pressure.
III. CONCLUSIONS Experiments on intact knees of live animals demonstrated that muscle weakness is an independent risk factor for OA. However, VM weakness does not affect patellar tracking, but activation of muscles changes patellar tracking patterns. Finally, chondrocytes in the intact knee deform quickly and chondrocytes exposed to ramp loading show calcium signals upon loading, but recover only slowly and presumably with little signaling during steady-state loading or unloading.
ACKNOWLEDGMENT The AHFMR Team grant on Osteoarthritis, The Canada Research Chair Programme, The Canadian Institutes of Health Research, and Alberta Innovates Health Solutions.
REFERENCES
1. Kettlekamp DB, Coyler RA (1984) Osteoarthritis of the knee. In: Osteoarthritis: Diagnosis and Management. RW Moskowitz, DS Howell, eds. W.B. Saunders, Philadelphia, pp 403-421
2. Moskowitz RW (1984) Experimental models of osteoarthritis. In: Osteoarthritis: Diagnosis and Management. RW Moskowitz, DS Howell, VM Goldberg, HJ Mankin, eds. W.B. Saunders, Philadelphia, pp 213-232
3. Guilak F, Ratcliffe A, Mow VC (1995) Chondrocyte deformation and local tissue strain in articular cartilage: a confocal microscopy study. J Orthop Res 13:410-421
4. Thomas AC, Sowers M, Karvonen-Gutierrez C et al. (2010) Lack of quadriceps dysfunction in women with early knee osteoarthritis. J Orthop Res 28:595-599
5. McAlindon TE, Cooper C, Kirwan JR et al. (1993) Determinants of disability in osteoarthritis of the knee. Ann Rheum Dis 52:258-262
6. Longino D (2003) Botulinum toxin and a new animal model of muscle weakness. MSc Thesis, University of Calgary, Calgary, AB, Canada
7. Longino D, Frank CB, Leonard TR et al. (2005) Proposed model of botulinum toxin-induced muscle weakness in the rabbit. J Orthop Res 23:1411-1418
8. Rehan Youssef A, Longino D, Seerattan R et al. (2009) Muscle weakness causes joint degeneration in rabbits. Osteoarthritis Cartilage 17:1228-1235
9. Fortuna R, Aurelio Vaz M, Rehan Youssef A et al. (2011) Changes in contractile properties of muscles receiving repeat injections of botulinum toxin (Botox). J Biomech 44(1):39-44
10. Roos EM, Herzog W, Block JA et al. (2010) Muscle weakness, afferent sensory dysfunction and exercise in knee osteoarthritis. Nat Rev Rheumatol 7:57-63
11. Lewek MD, Rudolph KS, Snyder-Mackler L (2004) Quadriceps femoris muscle weakness and activation failure in patients with symptomatic knee osteoarthritis. J Orthop Res 22:110-115
12. Baaijens FP, Trickey WR, Laursen TA et al. (2005) Large deformation finite element analysis of micropipette aspiration to determine the mechanical properties of the chondrocyte. Ann Biomed Eng 33:494-501
13. Guilak F (1995) Compression-induced changes in the shape and volume of the chondrocyte nucleus. J Biomech 28:1529-1541
14. Bader DL, Knight MM (2008) Biomechanical analysis of structural deformation in living cells. Med Biol Eng Comput 46:951-963
15. Abusara Z, Seerattan R, Leumann A et al. (2011) A novel method for determining articular cartilage chondrocyte mechanics in vivo. J Biomech 44:930-934
Dr. Walter Herzog University of Calgary Faculty of Kinesiology KNB404, 2500 University Drive N.W. Calgary, Canada, T2N 1N4
[email protected]
Sensors and Instrumentation to Meet Clinical Needs M.R. Neuman Department of Biomedical Engineering, Michigan Technological University Houghton, Michigan, U.S.A.
Abstract— Many challenges still plague the development of biomedical sensors and instruments, ranging from basic science through the practical application and clinical evaluation of developed devices. In this presentation we shall examine some special applications of sensors and instrumentation to address clinical problems. In too many cases, technology is developed without a specific application in mind, and an application has to be found later, if at all. A different approach is to start with the clinical problem and develop technology that can address it. One example is work done more than 30 years ago, but it illustrates how technology can address a clinical problem. In the early days of neonatal monitoring, miniaturized versions of adult electrodes were used to pick up cardiac and respiratory signals from an infant. These electrodes and their method of attachment caused irritation to the infant's skin and also created significant shadows when they were left in place while x-ray images were obtained. Both of these problems were overcome by the development of thin-film flexible electrodes. Another issue was the reliability of infant breathing and apnea sensors. A clinical study showed that the dominant technology for sensing breathing, transthoracic electrical impedance, was not very reliable when the infant was moving. In a study of 34 infants, each studied for four hours, breathing signals could be detected only 43% of the time using transthoracic impedance during movement. On the other hand, fast-responding nasal temperature sensors and compliant strain gauges, recorded simultaneously with the transthoracic impedance data, were shown to give more reliable data, at 79 and 50 percent of the time respectively. Thin-film sensors have been developed that use these more reliable methods of sensing infant breathing. Such sensors in themselves do not fully address the problem, for it is a clinical problem, and basic questions remain, such as what is being sensed and whether it is an appropriate way to deal with the disease process. In the case of infant apnea monitoring, it was found that in many cases apnea monitoring alone would not affect the basic clinical problem. Thus, while developing sensors and instrumentation is important, one must not lose sight of the clinical problem, and part of the development effort should go into clinical studies justifying the instrumentation in addressing the disease. The Collaborative Home Infant Monitoring Evaluation (CHIME) study was a major clinical study undertaken by the U.S. National Institutes of Health (NIH) to determine the utility of home infant apnea monitoring. The results of this study demonstrated that infants were experiencing life-threatening episodes at times other than when apnea was occurring. Thus, such a study helped to put the technology in an appropriate clinical perspective. Other examples of sensors and instruments that address important clinical problems will be presented.
Tissue Injuries from Epidermal Loadings in Prosthetics, Orthotics, and Wheeled Mobility A.F.T. Mak Department of Health Technology & Informatics The Hong Kong Polytechnic University
Abstract— The pressure ulcer is primarily a biomechanical problem, although the cascade of clinical events often involves many confounding intrinsic and extrinsic factors, such as the general health and personal hygiene of the individual involved. Decades of research have clarified some of the mechanisms leading to pressure ulcers, though considerable controversies remain. A number of hypotheses have been proposed for the biomechanics of pressure ulcers over the past decades: a) direct mechanical insult to tissues and cells, b) local ischemia and anoxia due to blood flow occlusion, and c) reperfusion injury. Besides pressure, shear has also been noted to be a major contributor to the problem. The purpose of this plenary lecture is to review our current understanding of how epidermal loadings affect tissues internally, and of the biomechanics of pressure ulcer formation.
Computer Aided Surgery for Minimally Invasive Therapy Ichiro Sakuma Department of Precision Engineering, Department of Bioengineering School of Engineering, The University of Tokyo
Abstract— Minimally invasive therapies such as endoscopic surgery and catheter-based intervention are preferred because they cause less pain and fewer complications. However, the skills required of the surgeon are demanding, since various medical devices must be manipulated in a narrow surgical field with limited vision. Engineering assistance is therefore important for realizing safe and effective minimally invasive therapy. As a second set of eyes for the surgeon, we can utilize three-dimensional medical imaging systems such as X-ray CT, MRI, ultrasound imaging, and high-quality endoscopes. Three-dimensional anatomical and functional information, registered to the three-dimensional position of the patient, can be used to navigate the surgeon's therapy with the assistance of information processing by high-performance computers. There is also demand for functional measurements that provide pathological information on patient tissue. Such measurement devices should be able to determine their three-dimensional position relative to the patient, in addition to acquiring their main measurement information, when they are integrated into an image-guidance system for surgery. In addition to conventional medical imaging devices, biomedical instrumentation incorporating a three-dimensional position tracking system can also be used for image-guided surgery. One example is 5-aminolevulinic acid (5-ALA) induced fluorescence measurement for intra-operative detection of brain tumors. We have developed an optical pickup device mounted on a motor-driven X-Y stage. The 5-ALA induced fluorescence detection system with this mechanical scanning system was integrated with a surgical navigation system, and the fluorescence spectra were registered to the corresponding locations in the surgical navigation map. As a second hand for the surgeon, robotic/mechatronic devices can be used to position various therapeutic/diagnostic devices in the living body even when it is difficult for surgeons to manipulate them manually. Surgeons can overcome the limitations of manual procedures by introducing robotic devices, which provide more precise positioning and remote-controlled manipulation of medical devices in small and deep spaces of the patient's body where surgeons' hands cannot reach. This field of technology is called Computer Aided Surgery (CAS), Computer Assisted Intervention (CAI), or Computer Integrated Surgery (CIS). The combination of advanced medical instrumentation, including three-dimensional anatomical/functional imaging, robotic/mechatronic devices, and advanced information processing and computer control, will realize more efficient, safer medical intervention with less invasiveness.
Role of Mechanical Loading in the Aetiology of Deep Tissue Injury
C. Oomens1, S. Loerakker1, K. Nicolay, and D. Bader1,2
1 Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
2 School of Health Sciences, University of Southampton, Southampton, UK
Abstract— Prolonged mechanical loading of soft tissues overlying bony prominences, as experienced by bed-bound and wheelchair-bound individuals, can lead to damage in the form of pressure ulcers (PU). One type of PU is initiated in muscle tissue and progresses outward towards the skin layers. The treatment of this condition, termed Deep Tissue Injury (DTI), is necessarily complex and its prognosis variable. The aetiology of DTI involves a number of factors, each triggered by mechanical loading: cell and tissue deformation, ischaemia, ischaemia-reperfusion injury, and impaired interstitial and lymphatic flows. The present work examines the relative contributions of these factors by applying different mechanical loading regimens to a well-established animal model [1]. A series of experiments was conducted, each involving indentation of the tibialis anterior muscle of the left hind limb of Brown Norway rats. The integrity of this muscle was monitored during both loading and unloading periods with T2-weighted magnetic resonance imaging (MRI), and muscle perfusion was assessed by dynamic contrast-enhanced MRI. In addition, dedicated finite element models were developed to estimate the local mechanical conditions within the muscle. Experiments involved both continuous and intermittent tissue deformation for 10 minutes and 2 hours, and continuous deformation for 4 hours, followed by an unloading period of 2 hours. Results indicated that, provided a specific damage threshold was exceeded, damage could be evident after 10 minutes of deformation and increased with time. In addition, on removal of the mechanical deformation, reperfusion was not evident in all areas of the muscle. These findings suggest that the relative importance of the initiating factors in the aetiology of DTI is temporally dependent, with deformation dominating at shorter loading periods. The results must be considered in the light of a prevention strategy for DTI and highlight the need to minimise internal tissue strains in those individuals for whom prolonged loading is inevitable.
REFERENCE
[1] Ceelen KK et al. (2008) Compression-induced damage and internal tissue strains are related. J Biomech 41:3399-3404
Tissue Engineering Approaches to the Treatment of Spinal Disorders J.C.H. Goh, H.K. Wong, S. Abbah, E.Y.S. See, and S.L. Toh Division of Bioengineering, National University of Singapore Department of Orthopaedic Surgery, National University of Singapore
Abstract— Lower back pain causes discomfort in many individuals and, if left untreated and allowed to deteriorate, can lead to disability. Lower back pain is linked to degeneration of the intervertebral disc (IVD), also known as degenerative disc disease (DDD). Changes in matrix composition with deterioration of biomechanical properties, abnormal mechanical loading, genetic predisposition, reduced cell activity, or any combination of these may induce DDD. A myriad of tissue engineering techniques have been aimed at alleviating DDD, with the common goal of providing an effective solution in the shortest time. Two examples are interbody spinal fusion and functional regeneration of the IVD. The former uses methods to immobilize and calcify the vertebrae adjacent to the diseased IVD, while the latter aims to regenerate a functional IVD to replace the diseased disc. Our group has studied the efficacy of an architecturally optimized bioresorbable scaffold in a porcine model of spinal reconstructive surgery. The approach was to perform discectomies and fill the disc space with the scaffold. The results reveal that these scaffolds can act as bone graft substitutes by providing a suitable substrate for bone regeneration in a dynamic load-bearing environment. The efficacy of these polycaprolactone-based scaffolds is further enhanced by assembling them with multilayer osteogenically differentiated cell sheets derived from autogenous bone marrow stromal cells, or with low-dose BMP-2. Both the cell-assembled and the growth-factor-assembled constructs showed improved segmental stability of the lumbar spine. Our group has also been working on BMSC cell sheets seeded onto silk scaffolds for annulus fibrosus (AF) regeneration, as part of an overall strategy to produce a functional tissue-engineered IVD replacement. The approach was to use an in vitro experimental model made up of the tissue-engineered AF wrapped around a silicone disc to form a simulated IVD-like assembly. The results showed that the tissue-engineered AF was able to regenerate extracellular matrix similar to that found in the native inner AF. It is clearly a multidisciplinary challenge to fabricate a suitable tissue-engineered construct capable of alleviating the problem of DDD; however, it is envisioned that clinical translation of current tissue engineering knowledge will become a reality in the near future.
Study of the Optimal Circuit Using Simultaneous Apheresis with Hemodialysis
A. Morisaki1, M. Iwahashi1, H. Nakayama1, S. Yoshitake2, and S. Takezawa2
1 Tohwa University/Department of Medical Electronics Engineering, Fukuoka, Japan
2 Kyushu University of Health and Welfare/Department of Clinical Engineering, Nobeoka, Japan
Abstract— Patients with end-stage renal failure receive hemodialysis treatment. Some patients receiving this treatment also need apheresis, depending on their co-existing diseases. The simultaneous application of plasmapheresis combined with hemodialysis has been reported. However, the procedure is complicated and carries a risk of iatrogenic accident, so manufacturers do not recommend the combination because of product liability law. On the other hand, the combination has many benefits, such as saving time for both patients and health care givers. Simultaneous treatment could be widely applied if its safety and validity were established and simpler procedures provided. In this study we developed a new circuit for the simultaneous treatment. The pressures and flow volumes were determined while the fluid viscosity was altered. The new circuit proved to decrease the priming volume by 40%, and the pressures and flow volumes remained within safe, validated ranges. This circuit promises safety and simplicity for patients and health care providers. Keywords— haemodialysis, plasmapheresis, circuit pressure, circuit priming volume, circuit flow rate.
I. INTRODUCTION

Simultaneous treatment with apheresis combined with hemodialysis is performed in clinical practice when a patient with end-stage kidney failure needs additional treatment. Apheresis includes plasmapheresis (PP), low-density lipoprotein apheresis (LDL-A), and double filtration plasmapheresis (DFPP); renal replacement therapy (RRT) includes hemodialysis (HD) and continuous hemofiltration (CHF). Although many clinical studies based on local experience have been reported, product liability means that manufacturers do not guarantee this approach to therapy. Many doctors hesitate to perform this method because of the problem of the operator's responsibility, even though the simultaneous treatment of plasmapheresis and hemodialysis is technically possible. In addition, a handmade circuit, which usually combines parts of several circuits, always carries a risk of disconnection during treatment.
If the safety and validity of simultaneous treatment were guaranteed, it would contribute to clinical application in the many indicated patients, saving time and bed occupancy and improving the patients' quality of life (QOL). Clinical use of handmade circuits has been reported, but there has been no report of a safety inspection from an engineering aspect [1]. We therefore developed a circuit for simultaneous apheresis and hemodialysis treatment, and determined the flow volumes and pressures at several points in the circuit to ensure its safety.

A. Plasma Exchange (the Double Membrane Filtration Method: DFPP)

The blood extracted outside the body is divided into a cellular component and a plasma component in a plasma separator; etiologic agents are then separated by a plasma-component separator and the separated filtrate plasma is removed [2]. Proteins useful for the patient, such as albumin, are afterwards returned to the patient. DFPP thus adopts a two-step filtration separation method that concentrates the globulin fraction containing the etiologic agent and discards it [3]. Fig. 1 shows a schematic view of plasmapheresis by the double membrane filtration method as described in the literature [4-5].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 12–15, 2011. www.springerlink.com
Fig. 1 Plasma exchange
Study of the Optimal Circuit Using Simultaneous Apheresis with Hemodialysis
B. Haemodialysis (HD)

Fig. 2 shows the material exchange across the HD membrane. Electrolytes and uremic toxins move between the blood and the dialysate across a semipermeable membrane by diffusion and ultrafiltration. The membrane removes unnecessary electrolytes and uremic toxins, supplements the electrolytes necessary for life support, and adjusts the body-fluid pH [6].
II. MATERIALS AND METHODS

A. General Description of the Circuit System

A DCS-72 (Nikkiso Corporation, Tokyo) was used as the dialysis machine and a Pulasort iQ21 (Asahi Chemical Industry Kuraray Medical Company, Tokyo) as the apheresis device. We newly developed a circuit system using parts of commercially available circuits. It consists of a series-parallel circuit designed with the priming volume and the blood viscosity in mind. Fig. 3 shows the proposed schematic circuit diagram, including the priming volume (PV) of each portion.
B. Preparation of PEG Solutions
We prepared four kinds of aqueous solution with viscosity coefficients equivalent to hematocrit (Ht) values of 20%, 30%, 40% and 50% using PEG 6000. After verifying correct operation of the system with water, we measured the flow volumes and pressures in the circuit using the diluted polyethylene glycol (PEG 6000) solutions, which simulated human blood.
Fig. 2 The material exchange of the HD membrane
C. Diseases Considered to Be Indicated for This Device

Some patients receiving apheresis will need renal replacement therapy [7-13]:

- Focal sclerosing glomerulopathy
- Renal homotransplantation
- Nephritis
- SLE
- Renal insufficiency in severe incompatible pregnancy
- Arteriosclerosis obliterans

Conversely, some patients receiving apheresis will be treated with RRT for renal failure.

D. The Standard Medical Treatment in Japan

Apheresis and haemodialysis are usually performed separately in Japan [14]:

- Patients usually receive hemodialysis over 3-5 hours, 2-3 times per week.
- Patients usually receive plasmapheresis over 2-3 hours, 1-2 times per week.

III. RESULTS

A. Inspection of the Treatment Time

Bed restriction causes a patient needing multiple treatments not only physical pain but also mental distress. Under current therapy, each blood purification treatment is given independently, so the patient is confined to bed 3-5 days a week for a total of 5-8 hours of treatment. Furthermore, the patient must be punctured with two large 16G needles for blood removal and return. With the proposed circuit, the two medical treatments can be performed at once, decreasing treatment time to 60%. A decrease in mental distress can also be expected from the reduced number of punctures, and the risks of hemolysis and infectious disease from frequent extracorporeal circulation are decreased.

B. General Description of the Circuit

The new circuit system decreased the priming volume by 40%. Apheresis is performed at a blood flow of about 100 ml/min, which is secured by pump T1. Hemodialysis usually requires a blood flow of 200-250 ml/min; the deficit in flow volume is supplied by pump T4. The apheresis treatment generally requires two to three hours, whereas the haemodialysis treatment requires 3-5 hours, so the simultaneous treatments finish at different times. We first start the patient treatment with the full circuit using pumps T1, T2, T3, T4, T5 and T6. When the apheresis treatment by pumps T1, T2 and T3 has finished, the blood in the DFPP circuit is returned to the patient, and treatment continues with the partial circuit of T4, T5 and T6 for the remaining haemodialysis therapy.
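The flow-deficit and time-saving arithmetic above can be sketched numerically (a sketch only: the pump rates and duration ranges are taken from the text, but the single-session durations chosen within those ranges are our simplification):

```python
# Pump T4 must supply the dialyzer flow not provided by the apheresis line.
HD_FLOW = 200         # hemodialysis blood flow, ml/min (200-250 in the text)
APHERESIS_FLOW = 100  # apheresis blood flow via pump T1, ml/min

t4_flow = HD_FLOW - APHERESIS_FLOW
print(f"pump T4 supplies {t4_flow} ml/min")  # 100 ml/min

# Separate sessions: one HD session (3-5 h) plus one apheresis session (2-3 h);
# simultaneous treatment is bounded by the longer (HD) session.
hd_hours, apheresis_hours = 4.0, 2.5  # representative values within the cited ranges
separate = hd_hours + apheresis_hours
combined = max(hd_hours, apheresis_hours)
print(f"combined time is {combined / separate:.0%} of separate sessions")  # ~62%
```

With these representative durations the combined session takes roughly 60% of the separate-session total, consistent with the reduction the text cites.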
[Fig. 3 schematic: blood removal and blood return lines with clamps, chambers, sample ports and joints; pumps T1-T6 with pump tubes; pressure monitors; anticoagulant line; saline; drain; fluid replacement. Legend: F = plasma separator (first column), S = plasma component separator (second column), D = dialyzer.]

Priming volumes (PV) of the circuit segments:
T1 (red): 50 ml
T2 (purple): 42 ml
T3 + T4 + T6 (light blue, green, yellow): 56 ml
T5 (blue): 34 ml
Total: 182 ml
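The priming-volume figures listed in Fig. 3 can be cross-checked with a few lines of arithmetic (a sketch; the ~303 ml baseline is our inference from the stated 40% reduction, not a value given in the paper):

```python
# Priming volumes of the circuit segments from Fig. 3 (ml).
pv = {"T1": 50, "T2": 42, "T3+T4+T6": 56, "T5": 34}

total = sum(pv.values())
print(total)  # 182 ml, matching the total in Fig. 3

# The paper reports a 40% reduction in priming volume; the implied
# baseline volume (not stated in the paper) would be:
baseline = total / (1 - 0.40)
print(round(baseline))  # ~303 ml (our inference, for illustration only)
```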
Fig. 3 The schematic circuit diagram

C. The Flow Volume Measurement of the Circuit

We prepared four kinds of aqueous solution with viscosity coefficients equivalent to hematocrit values of 20%, 30%, 40% and 50% using PEG 6000, and measured the circuit flow volume in each part. The flow rate of each pump was set as follows:

- BP (blood removal pump) T1: 100 ml/min
- DP (drainage pump) T4: 100 ml/min
- PP (plasma skimming pump) T2: 30 ml/min

Table 1 shows the results of the measurement; "R" denotes the point where blood returns to the patient. Even when the viscosity coefficient of the solution increases, the flow rate of the solution remains approximately constant.

Table 1 Flow volumes (ml/min) at each measurement point for solutions whose viscosity coefficients simulate the given blood Ht values (PEG 6000)

Measurement point   Water   Ht 20%   Ht 30%   Ht 40%   Ht 50%
T1-R                110     100      102      104      104
T4-R                105     100      98       98       102
T1+T4-R             214     214      192      193      190
T1+T2+T3            104     104      104      102      104
T1+T2-F             31      31       31       32       32

D. The Measurements of Pressure in the Circuit

Using the same four PEG 6000 solutions (Ht 20%, 30%, 40% and 50% equivalents), we measured the arterial blood pressure, the filtration pressure, the plasma entrance pressure, the transmembrane pressure (TMP), the dialysis liquid pressure, and the dialysate venous pressure. Fig. 4(a)-(d) show the pressures at these points for the different equivalent Ht values. The pressures were determined after setting the system operation as follows:

- Venous pressure: 60 mmHg
- PP T2: (flow of BP T1) × 30%

The pressures rose with increasing flow volume, and increases in pressure were also observed with increasing viscosity coefficient. No circuit pressure exceeded 300 mmHg, which is considered the safety margin for medical blood-treatment instruments; it is therefore assumed that the proposed circuit can be used for clinical treatment [15-16].

[Fig. 4 plots pressure (mmHg) against flow rate (80, 100 and 120 ml/min) for the arterial blood pressure, filtration pressure, plasma entrance pressure, TMP, dialysis liquid pressure and dialysate venous pressure.]
Fig. 4(a) Pressures in the case of the Ht 20% equivalency
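The flow-error claim made in the Discussion (error below 5% of the set volume) can be checked against the Table 1 values for the blood-simulating solutions (a sketch; pairing the measurement points T1-R and T4-R with the 100 ml/min pump settings is our reading of the table, and the water runs are excluded):

```python
# Deviation of measured flow from the pump set value for the PEG
# (blood-simulating) solutions in Table 1.
SET = {"T1-R": 100, "T4-R": 100}  # ml/min, pump settings for T1 and T4
MEASURED = {                       # Ht 20/30/40/50 % equivalents
    "T1-R": [100, 102, 104, 104],
    "T4-R": [100, 98, 98, 102],
}

for point, flows in MEASURED.items():
    worst = max(abs(f - SET[point]) / SET[point] for f in flows)
    print(f"{point}: worst-case deviation {worst:.0%}")
# T1-R: worst-case deviation 4%
# T4-R: worst-case deviation 2%
```

Both points stay within the 5% bound cited in the Discussion.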
The flow volumes and pressures in the circuit were validated under conditions simulating clinical application. Circuit pressures over 300 mmHg, the safety margin for medical blood-treatment instruments, were not observed. This study demonstrated that the proposed circuit is suitable for clinical treatment, because the error of the circuit flow volume was less than 5% of the set volume and the circuit pressures stayed below 300 mmHg. However, hemolysis and the actual benefit for patients and health care providers are still unknown, so further study with this newly developed circuit system will be necessary.
Fig. 4(b) Pressures in the case of the Ht 30% equivalency
REFERENCES
Fig. 4(c) Pressures in the case of the Ht 40% equivalency
1. Kentaro T, Yutaka U et al. (2008) Therapeutic Apheresis and Dialysis 12(4):264-270
2. Akira Y, Tetsuzo A et al. (1999) Apheresis manual. Syuhujun Company, Tokyo
3. Agishi T, Kaneko I et al. (1980) Double filtration plasmapheresis. Trans Am Soc Artif Intern Organs 26:406-409
4. http://www.h.u-tokyo.ac.jp/touseki/PP.htm (accessed 2009.7.20)
5. Tetsuzo A (1999) Practical use hemocatharsis method. Syuhujun Company, Tokyo
6. Yoshihei H (1999) Dialytic therapy manual. Medical Center, Tokyo
7. Euso E et al. (2007) Clin Nephrol 67:341-344
8. Hettori M et al. (2003) Am J Kid Dis 24:1121-1130
9. Lee PC et al. (2009) Transplantation 88(4):568-574
10. Choy B et al. (2006) Am J Transplant 6(11):2535-2542
11. Tamaki T, Tanaka M et al. (1998) Cryofiltration apheresis for major ABO-incompatible kidney transplantation. Ther Apher 2:308-310
12. Wallace DJ (2003) Apheresis for rheumatic and other autoimmune conditions: where do we stand? Ther Apher Dial 7:143-144
13. Schwartz MM, Korbet SM, Lewis EJ (2008) The prognosis and pathogenesis of severe lupus glomerulonephritis. Nephrol Dial Transplant 23:1298-1306
14. Nobuyuki S, Minoru O, Eiichi C et al. (2007) A dialysis frontier: we answer various doubts in dialysis 5:51-69
15. Tetsuzo A, Kenji M et al. (1998) Advanced medical series 1: An artificial organ to the 21st century. Institute for Advanced Iatrotechnique, Tokyo
16. Akira S (2005) Advanced medical series 37: An artificial organ and regenerative therapy. Institute for Advanced Iatrotechnique, Tokyo
Fig. 4(d) Pressures in the case of the Ht 50% equivalency
IV. DISCUSSION

This study proved the safety and validity of the newly developed circuit system in terms of circuit flow volume and pressure.
Author: Aya Morisaki Institute: Department of Medical Electronics Engineering, Tohwa University Street: 1-1-1 Chikushigaoka, Minami-ku City: Fukuoka Country: Japan Email:
[email protected]
Biomedical Engineering Education under European Union Support

M. Cerny1, M. Penhaker1, M. Gala2, and B. Babusiak2

1 VSB - Technical University of Ostrava, Department of Measuring and Control, Ostrava, Czech Republic
2 University of Zilina, Department of Electromagnetic and Biomedical Engineering, Zilina, Slovak Republic
Abstract— Biomedical engineering is a progressive science combining knowledge of technology and medical science. Graduates of this field must study not only technology, which is itself not an easy field, but also selected chapters of medicine. An important part of the study is practical training in hospital departments of medical technology, which requires expensive equipment. The cost of teaching these subjects is therefore generally higher than for purely technical education. This article describes the structure of biomedical engineering study in the Czech Republic and the Slovak Republic, and shows how important and useful support from the European Union structural funds is for these programs. It also outlines ways to motivate students at lower levels of education to study these demanding subjects.
The article also highlights the potential of European Union support for improving education systems. The department at VSB - Technical University of Ostrava is currently carrying out two projects aimed at promoting education in biomedical engineering, and this article demonstrates how the structure of teaching can be improved. One of the projects focuses on working with secondary-school students and on promoting biomedical engineering education in secondary schools. Reaching students at an earlier age not only increases interest in the studies but also yields better-prepared candidates who have already engaged with the subject and acquired relevant knowledge.
Keywords— Biomedical engineering, education, European Union.
I. INTRODUCTION

Biomedical engineering is a progressive science which combines knowledge from different disciplines, most commonly technical skills and basic medicine. Specialized knowledge in cybernetics, signal processing and informatics is added at higher levels of education. Graduates of these courses are biomedical technicians or engineers with strong knowledge of technology and the basics of medicine. In practice they perform mostly technical work: service, inspection and technical improvement of medical technology. With further medical knowledge, graduates can become clinical engineers who not only care for the equipment but are actively involved in the diagnosis and treatment of patients [2]. Although the Czech Republic and Slovakia share part of their history, the development of education in these fields differs. In this paper we introduce the education of biomedical engineers at two partner sites, one in each country. Each workplace is governed by different legislation that controls and checks training in biomedical engineering. Although the two workplaces are very close, both geographically and in focus, they structure training in biomedical engineering differently.

II. BIOMEDICAL ENGINEERING AT ZILINA UNIVERSITY

Biomedical engineering at Zilina University is taught by the Department of Electromagnetic and Biomedical Engineering. Scientific and research activities at the department are aimed mainly at the following topics: investigation of the electromagnetic field and its interactions with various isotropic and anisotropic media; development of new methods and tools for non-destructive evaluation of materials and biomaterials; problems of electromagnetic compatibility and bio-compatibility; and the study of possible uses of special sensors in the biomedical area. Most of the interdisciplinary activities are performed in cooperation with medical doctors; the Bachelor and Master programs of Biomedical Engineering (BME) are likewise realized in close cooperation with the Jessenius Medical Faculty of Comenius University in Martin. A BME graduate obtains deeper knowledge in the theoretical background of the study program, including biological signal processing; detailed knowledge of information systems applied in medicine; knowledge of simulation methods used in biomedical engineering; and of diagnostic and curative methods in medicine, including practice in a medical center. Students working in English-speaking groups also improve their foreign-language abilities and thus become more competitive in the international labor market, especially within the EU. Biomedical engineering is taught at the bachelor and engineering levels in the study program named Biomedical Engineering.
The bachelor's degree program is largely based on knowledge of electrical engineering and computer science. Students are first trained in the theoretical foundations of electrical engineering. An integral part of the first semester is the basics of algorithms and programming languages; this computer-science knowledge is further developed in subsequent semesters in two further courses on programming and on languages applicable to signal processing. Greater emphasis is placed on electrical engineering knowledge: there are three courses in theoretical electrical engineering, two in electronics, a course in measurement, and courses on control systems and sensors for biomedical engineering. In the final year students become familiar with image processing and analysis. Education in the medical fields is carried out in cooperation with the JLF UC: students acquire knowledge of anatomy and histology, biochemistry and biophysics, and physiology and pathology. The necessary foundations in mathematics and physics are represented in the first two years of study by courses in mathematical analysis, linear algebra and general physics. Foreign-language instruction is given only in the first year of study, in one subject. The percentages of subjects in the study are shown in the graph in Figure 1.
Fig. 1 Study subjects in bachelor study at Zilina University

Students who finish the bachelor's degree may continue into the engineering study named Biomedical Engineering. The study is based on electrical engineering, which occupies the majority of the study period. Students deepen their knowledge of electromagnetic compatibility, signal and image processing, and information systems in health care. Neural networks and modeling and simulation are also among the compulsory subjects attended by all students. In the field of medicine, students take courses on diagnostic methods, bioethics and medical ethics, and management of medical services. Mathematics and physics are represented by only one compulsory course, covering numerical methods and discrete mathematics. The representation of subjects is shown in the chart in Figure 2.
Fig. 2 Study subjects in MSc. study at Zilina University
III. BIOMEDICAL ENGINEERING AT VSB – TECHNICAL UNIVERSITY OF OSTRAVA
The founding of the biomedical specialization at the Technical University of Ostrava, as a discipline affecting both teaching and research, is related to the formation of the Faculty of Electrical Engineering and Computer Science and the Department of Measurement and Control. A master's study focused on measurement and control technology in biomedicine has been taught there since 1991 [1]. At the beginning of the 21st century, the structure of biomedical engineering study at the Technical University was affected by the Bologna Declaration and by subsequent legislative changes in the Czech Republic aimed at improving the quality of medical care. Legislation was enacted that clearly specifies the requirements for professionals working in healthcare facilities. Accordingly, all programs that educate medical staff must be assessed by the Ministry of Health as to whether they meet the basic requirements on the structure of education. In the groundbreaking year 2003, VSB - Technical University became the first university in the Czech Republic to have the degree course Biomedical Technology accredited by both the Ministry of Education and the Ministry of Health under Law 96/2004 [3]. Building on this experience, Assoc. Prof. Jindrich Cernohorsky, Ph.D., and his newly formed team undertook the accreditation of the master's course Biomedical Engineering, which was successfully accredited at the end of 2009. This completed comprehensive tertiary education in biomedical engineering at VSB - TU Ostrava. Since the academic year 2010/2011 we have been the only faculty in the Czech and Slovak Republics providing comprehensive biomedical engineering education at both European tertiary levels of higher education according to those standards. Tuition at VSB - Technical University of Ostrava is provided by the Department of Measurement and Control (from September 2011 the Department of Cybernetics and Biomedical Engineering). Medical courses are taught by the Medical Faculty of
University of Ostrava. In the first year of undergraduate study, students are mainly engaged in mathematics, physics and the medical sciences, which take up more than two thirds of the learning time. In subsequent years the share of medical sciences decreases in favor of the electrical engineering disciplines. Specialized biomedical subjects are taught in three successive semester-long courses on sensors, electronics, and measuring and medical equipment. Foreign-language learning, mainly English, runs for 4 of the 6 semesters. Practice is also required: it takes place in hospitals each summer, in two placements each lasting at least 3 weeks, under the supervision of an experienced technician working in the hospital department dealing with medical devices. Figure 3 shows the proportion of individual subjects in the bachelor study.
Students also gain knowledge in the fields of cybernetics, equipment design and microprocessor technology.
Fig. 4 Study subjects in MSc. study at VSB - TU Ostrava

Interest among applicants in the bachelor study of biomedical engineering is now growing: since the establishment of the field, the number of candidates has increased several times. This growth was helped by European Union projects, which supported the development of the field through grants.
Fig. 3 Study subjects in bachelor study at VSB - TU Ostrava

At VSB - Technical University it is also possible to study a master's degree in biomedical engineering. The study plan foresees two different types of candidates. The first type are graduates of the electrical engineering bachelor program who are qualified to work in the medical profession, known as pre-bio. The second type are graduates of other electrical engineering study programs (for example Measurement and Control, Control and Information Systems, Automation, Telecommunication Engineering, Electronics), known as pre-NObio. Because of the different knowledge the candidates gained in their bachelor degrees, it was necessary to define two different groups of compulsory elective courses in the biomedical engineering curriculum: pre-NObio students must attend different compulsory electives than pre-bio candidates. Pre-NObio students take medical courses to a greater extent than pre-bio students, while pre-bio students gain more knowledge of technical disciplines. The studies are based on consecutive courses in medical instrumentation and clinical engineering, medical imaging systems and medical diagnostic methods.
Fig. 5 Number of applicants at VSB - TU Ostrava

There is also strong interest in the master's study: approximately 20 full-time students and approximately 30 students in distance learning.
IV. SUPPORT BY THE EUROPEAN UNION

To improve the quality of education we have been able to obtain stable support from the European Union to develop this field and to increase the motivation of candidates for study. Student preparation was first accelerated with ESF funds through project RLZ CZ.04.1.03/3.2.15.1/0020, named "Improvement of the study program biomedical technics and increase their labor market outcomes in relation to law 96/2004Sb." This was followed by the ESF project "Biomedical engineering in high schools" CZ.1.07/1.1.07/02.0075, which focuses on improving teaching and preparing students already during secondary education.
The project comprises activities that aim to increase knowledge of biomedical engineering in high schools and thereby encourage subsequent study of Biomedical Engineering at our alma mater. It created innovative teaching materials and laboratory tasks. Practical electronics kits were developed with which pupils can measure and diagnose basic biosignals. Pupils assemble and commission their kits at the technical university; afterwards, using software created for this purpose, the kits can be operated at home in conjunction with a PC. ECG signals and the pulse curve are measured. As part of the project these modules are also distributed to high schools in the region, where they are used for demonstration and measurement of biosignal processing in subjects such as biology and physics. The project also includes popular lectures about biomedical engineering given directly at high schools; in these lectures the topics are illustrated with sample ECG and pulse-curve measurements performed with the developed teaching kit. Two high schools in Ostrava are also involved in the project and are introducing biomedical engineering into their curricula. This both raises awareness of the field and prepares quality candidates for its study.
These projects are co-financed by the European Social Fund and the national budget of the Czech Republic.
V. CONCLUSIONS

This article introduced the teaching programs of two universities with the same focus. Their teaching differs mainly because of the legislative regulation of the subjects in the Czech Republic. Both institutions collaborate on projects and improve their teaching through European funds supporting the mobility of students and teachers. European Union support in the field of biomedical engineering education is important, because this application of technology is very beneficial to society as a whole.
ACKNOWLEDGMENT

The work and the contribution were supported by the Ministry of Education of the Czech Republic under project 1M0567 "Centre of applied electronics", by the student grant agency project SV 4501141 "Biomedical engineering systems VII", and by the project "Biomedical technology on high schools" CZ.1.07/1.1.07/02.0075, which is supported by the European Social Fund and the budget of the Czech Republic.
REFERENCES
Fig. 6 Kit for Biomedical Engineering Education

Another project currently under way at VSB - Technical University aims to improve the teaching of biomedical disciplines in higher education: project CZ.1.07/2.2.00/15.0112, "Increasing competitiveness in field of biomedical engineering at the Technical University of Ostrava". The project supports the creation of new teaching materials, the development of new practical labs, and improved language training for future graduates. It also includes the creation of multimedia teaching aids following the EVICAB standard (www.evicab.eu). An important part of the project is the training of mentors for student professional practice in hospitals, so that the content of the practice is standardized and reflects the requirements of practice.
1. Cerny M, Penhaker M (2008) Biotelemetry. IFMBE Proc. vol. 20, 14th NBC on Biomed. Eng. and Med. Phys., Riga, Latvia, 2008, pp. 405-408, ISSN 1680-0737, ISBN 978-3-540-69366-6
2. Cernohorsky J, Penhaker M, Sochorova H (2006) Biomedical engineering education at VSB - Technical University of Ostrava. IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., Seoul, South Korea, 2006, pp. 3832-3835, ISSN 1680-0737, ISBN 978-3-540-36839-7
3. Penhaker M, Cernohorsky J, Sochorova H, et al. (2008) Proceedings of multi technical education in biomedical engineers curricula. Proc. of 5th Int. Conf. on Information Techn. and Apps. in Biomed. (ITAB), Shenzhen, People's R. of China, 2008, pp. 373-376, ISBN 978-1-4244-2254-8
Author: Martin Cerny
Institute: VSB - Technical University of Ostrava
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
OBE Implementation and Design of Continual Quality Improvement (CQI) for Accreditation of Biomedical Engineering Program University of Malaya S. Karman, K. Hasikin, H.N. Ting, S.C. Ng, A.K. Abdul Wahab, E. Lim, N.A. Hamzaid, and W.A.B. Wan Abas Department of Biomedical Engineering, Faculty of Engineering, University of Malaya
Abstract— This paper describes the outcome-based education (OBE) implementation and the continuous quality improvement (CQI) process plan in the Department of Biomedical Engineering, University of Malaya. The implementation of the CQI plan is part of the OBE approach required by the Engineering Accreditation Council (EAC) for recognition of the Biomedical Engineering program and for graduates' registration with the Board of Engineers Malaysia (BEM) and the Institution of Engineers Malaysia (IEM). The approach was first introduced to the Biomedical Engineering Department in 2005. Programme Outcomes (POs), which are evaluated upon graduation, are measured from the results of sub-Programme Outcomes (sub-POs), which are evaluated at the end of each course. The assessment and evaluation processes are continuously improved to achieve better attainment of the POs and programme educational objectives (PEOs) in the Department of Biomedical Engineering.

Keywords— outcome based education (OBE), continuous quality improvement (CQI), programme educational objectives (PEO), Programme Outcomes (POs).
I. INTRODUCTION

Outcome-based education (OBE) is an approach that focuses on outcomes, where the achievements of students are measurable, proven and can be improved. The approach is set by the MQA, following the Malaysian Qualifications Framework (MQF) introduced by the Ministry of Higher Education Malaysia (MOHE). The planning and assessment processes of OBE are the reverse of those of traditional education (TE) planning [1]: the focus shifts from measuring input and process to also measuring output (outcomes). The OBE approach pursues a curriculum design and teaching process focused on producing graduates with the capabilities to meet the expectations of the stakeholders [2]. By implementing OBE, students are expected to be able to do more challenging tasks than memorizing what was taught, such as managing projects, analyzing data and making decisions based on the results [1]. Under the OBE approach, graduates are fully prepared for the workforce after graduation, with all the soft skills needed in
jobs, such as technical skills, communication skills and human relationship skills. The OBE approach has been introduced to the Biomedical Engineering Department since 2005 and was fully implemented in the department after 2006, to fulfill the requirements of the Engineering Accreditation Council (EAC). The EAC is the body delegated by the Board of Engineers Malaysia (BEM), the Institution of Engineers Malaysia (IEM), the Malaysian Qualifications Agency (MQA) and the Public Services Department (JPA) for the accreditation of engineering degrees [2]. The objective of accreditation is not only to ensure that graduates of accredited engineering programmes satisfy the minimum academic requirements for registration as graduate engineers with the BEM and IEM; it also serves as a tool to benchmark engineering programmes offered by Institutes of Higher Learning (IHLs) in Malaysia. Accreditation has brought the culture of CQI into the IHLs, continuously improving OBE. Accreditation is necessary for recognition of the programme and of its graduates as professional engineers: engineers from EAC-accredited programmes may be employed in the signatory countries of the Washington Accord (WA) without further examination [1,3]. The signatory countries of the WA are Australia, Canada, Republic of Ireland, Hong Kong, Japan, New Zealand, Singapore, South Africa, South Korea, Taiwan, UK, USA, Malaysia, India, Germany, Russia and Sri Lanka [1]. This paper describes the strategies of OBE implementation in the Department of Biomedical Engineering, University of Malaya, the CQI plan, and the assessment methods and performance criteria.
II. OBE IMPLEMENTATION IN DEPARTMENT OF BIOMEDICAL ENGINEERING
The implementation of OBE in the Department of Biomedical Engineering follows the 3 steps instructed by the MQA, namely: 1) set the target, 2) achieve the target, and 3) continuously improve. The target is the Programme Educational Objectives (PEOs), which are to be achieved a few
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 20–24, 2011. www.springerlink.com
OBE Implementation and Design of Continual Quality Improvement (CQI)
years after graduation. In order to achieve the PEOs, the Programme Outcomes (POs) should be set and achieved at the time of graduation. The POs are divided into 3 categories, namely technical skills, soft skills and general knowledge.

Table 1 Programme Educational Objectives

PEO1: To produce professional engineers who are educated and well trained with the ability to carry out critical and creative analysis, to design, and to solve biomedical engineering related problems.
PEO2: To produce professional engineers who are able to communicate effectively at technical and non-technical levels and to serve as team members or leaders with good interpersonal skills.
PEO3: To produce professional engineers who are able to make technical decisions within the context of a globalised economy, and to practice good work ethics in executing their professional, social and environmental duties.
Table 2 Programme Outcome Categories

Technical skills:
PO1: Ability to acquire and apply knowledge in engineering fundamentals.
PO2: Ability to design and conduct experiments, as well as to acquire, analyze, interpret and report experimental data.
PO3: Ability to design biomedical engineering systems, components or processes to meet desired needs to provide for sustainable development.
PO4: Ability to identify and solve biomedical engineering problems.
PO5: Ability to use the current engineering techniques, skills and tools necessary for biomedical engineering practice.

Soft skills:
PO6: Ability to communicate effectively, not only with engineers but also with the community at large.
PO7: Ability to function effectively as a leader with management and entrepreneurship skills as well as an active member in a multi-disciplinary team.
PO8: Ability to apply thinking skills in solving problems.

General knowledge:
PO9: Understanding of and commitment to biomedical engineering professional and ethical responsibilities.
PO10: Understanding of the social, cultural, and environmental responsibilities (local and global) of a professional biomedical engineer.
PO11: Understanding the need to undertake lifelong learning and possessing the ability to do so.
PO12: Knowledgeable in contemporary biomedical engineering issues.
In achieving these targets, there are 3 main steps to be taken: 1) teach the students, 2) let the students practice what has been taught, and 3) evaluate whether the students have learned. These three steps should be continuously revised and improved to become more effective in achieving the targets. Table 1 shows the PEOs, while Table 2 shows the POs used for the Biomedical Engineering Programme.

A. PEOs and POs Measurement Method

Generally, the model for measuring outcomes in the OBE approach is shown in Fig. 1. This model is implemented by most institutions in Malaysia, including MMU [3], UNITEN [4], UPM [5], UM and others [6]. Fig. 2 shows the model of PO and PEO measurement implemented in the Department of Biomedical Engineering. Most IHLs evaluate the POs based on the Course Outcomes (COs). In this department, the PO evaluation is done based on the assessment of sub-POs, while the COs are assessed separately according to the technical skills.

Fig. 1 PO and PEO achievement (Vision and Mission of the higher institution → PEO → PO → CO)

Fig. 2 Measurement model for PO and PEO achievement implemented in Department of Biomedical Engineering, University of Malaya (Vision and Mission of the higher institution → PEO → PO → SubPO)
S. Karman et al.
A.1. POs Evaluation

Currently, there are 2 types of evaluation being implemented in this department to ensure that the POs are achieved: 1) assessment of COs, which assesses the technical knowledge related to a certain subject using tests and final written examinations; and 2) assessment based on sub-POs, which assesses each component of the assigned PO. Since each PO consists of multiple attributes, the department has introduced sub-components of each PO (i.e. sub-POs). The sub-POs are assigned to all departmental subjects based on their credit hours. These sub-POs are formulated to have different levels of difficulty: certain sub-POs should be achieved in the earlier years of study, while others have to be achieved in the later years. Currently the department has established 59 sub-POs. An example is given in Table 3, which shows PO1 divided into three attributes, namely PO1-1-a, PO1-1-b and PO1-2-a. Each sub-PO is formulated based on the components of the particular PO.
Table 3 Sub-POs for PO1

PO1: Ability to acquire and apply knowledge in engineering fundamentals.
PO1-1-a: Ability to apply various techniques of searching for information.
PO1-1-b: Ability to describe the information that is acquired.
PO1-2-a: Ability to apply engineering fundamentals to analyze and solve engineering problems.
A.2. PEO Evaluation

The evaluation of PEOs is conducted a few years after graduation, based on several surveys such as alumni, employer and exit surveys. The surveys gather personal information, evaluations of the PEOs and POs, self-perceptions of the strengths and weaknesses of the Biomedical Engineering Programme, as well as suggestions to improve the quality of the engineering programme.

C. Continual Quality Improvement (CQI)

The realization of the inadequacies of current systems or implementations, and the desire for improvement, is the foundation of the CQI process, which can be summarized in 2 steps: 1) realization of inadequacies (from complaints, feedback or suggestions) and 2) creation of a system to address the inadequacies. There are various feedback systems in this department, such as 1) staff-student committee meetings, 2) departmental meetings, 3) student-academic advisor meetings,
Fig. 3 Flowchart of assessment
4) exit survey and interview, 5) ad-hoc surveys and 6) alumni and employer surveys. The CQI system in this department is shown in the flowchart in Fig. 3. The assessment and measurement flow of OBE implementation starts with the Academic Advisor–Student meeting at the beginning of each semester, to remind and brief students regarding the PEOs, POs, sub-POs and assessment methods, followed by the teaching and learning process for 14 weeks. At week 14, the students are evaluated according to the implemented assessment methods. The data (i.e. sub-PO marks) are collected in week 15 and discussed in the departmental meeting for performance evaluation and CQI. Students in Years 1 to 3 who achieve a sub-PO score of less than 3 are forwarded for corrective actions; the rest continue their assessment in the following semester. Year 4 students who pass the sub-POs (achieving a score of 3 and above) are allowed to graduate and proceed to PO evaluation; they sit the exit interview on the last day of the semester. However, Year 4 students who fail a sub-PO must attend the finishing school or special training before they are approved for graduation. The PO evaluation is conducted at the time of graduation according to the sub-PO results. Using the PO evaluation results, the exit interview results and stakeholder input (alumni and employer surveys), the evaluation of the PEOs can be done. From this evaluation, weaknesses of the implemented assessment can be reviewed, leading to continuous improvement of the assessment.
D. Performance Criteria (Indicator)

Table 4 and Fig. 4 show the sub-PO rubric currently used in this department for measuring students' sub-PO achievement. Each sub-PO is divided into 5 skill levels, where level 1 is "very poor", level 3 is "satisfactory" and level 5 is "excellent". A student who attains level 3 or above passes that sub-PO. Students who attain level 2 or lower fail and need to reseat that sub-PO in a corrective action session.
Fig. 4 Performance criteria

Table 4 Sub-PO rubric for performance criteria

PO1-1-a: Ability to apply various techniques of searching for information.
Level 1: Unable to search for information. Level 3: Able to use library and internet to search for information. Level 5: Able to use library, internet and other methods effectively to search for information.

PO1-1-b: Ability to describe the information that is acquired.
Level 1: Unable to describe the acquired information. Level 3: Able to describe the acquired information. Level 5: Able to describe clearly and concisely all the information acquired.

PO1-2-a: Ability to apply engineering fundamentals to analyze and solve engineering problems.
Level 1: Unable to apply engineering fundamentals. Level 3: Able to apply engineering fundamentals to analyze engineering problems. Level 5: Able to apply engineering fundamentals to analyze and solve engineering problems.
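The pass rule encoded in the rubric is simple enough to state as code. The helper below is a hypothetical illustration, not the department's actual software:

```python
def subpo_outcome(level: int) -> str:
    """Map a rubric skill level (1-5) to the action described in the text:
    level 3 and above passes; level 2 and below triggers corrective action."""
    if not 1 <= level <= 5:
        raise ValueError("skill level must be between 1 and 5")
    return "pass" if level >= 3 else "corrective action"

print(subpo_outcome(3))  # → pass
print(subpo_outcome(2))  # → corrective action
```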
B. Assessment Method

Currently, the department implements active teaching and learning processes including lectures, tutorials, laboratory experiments, individual assignments, group assignments, quizzes, tests and exams. For the laboratory experiments and assignments, the students are required to prepare reports as well as to search for and synthesize information. Some assignments require the students to give presentations; the presentation sessions allow students to practice communication skills that are very useful for their future work. Group work, implemented in certain types of assignments, gives students the opportunity to practice soft skills in a group setting, as well as the leadership skills needed to get tasks completed. E-learning modules such as e-quizzes and online report submission are also implemented. The department is in the process of implementing Problem-Based Learning (PBL) as soon as possible.
Since a particular sub-PO is assessed more than once, the highest mark is taken and contributes to the final mark of that sub-PO. The final marks of all the sub-POs are accumulated, and the average mark is taken as the final mark for the particular PO.
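The aggregation rule above (highest mark per sub-PO, then the average across sub-POs as the PO mark) can be sketched as follows; the function and data are hypothetical illustrations, not the department's actual records:

```python
def po_final_mark(subpo_marks: dict[str, list[float]]) -> float:
    """subpo_marks maps a sub-PO code to every mark it received in a semester.
    The highest mark per sub-PO counts; the PO mark is their average."""
    best_per_subpo = {code: max(marks) for code, marks in subpo_marks.items()}
    return sum(best_per_subpo.values()) / len(best_per_subpo)

# Illustrative marks for the three PO1 sub-POs of Table 3:
marks = {
    "PO1-1-a": [2.0, 4.0],        # assessed twice; the 4.0 is kept
    "PO1-1-b": [3.0],
    "PO1-2-a": [1.0, 3.0, 5.0],   # the 5.0 is kept
}
print(po_final_mark(marks))  # → 4.0, i.e. (4 + 3 + 5) / 3
```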
III. CONCLUSIONS

The Continual Quality Improvement (CQI) process for OBE implementation has been designed. However, there may be weaknesses in various aspects of the implementation that should be improved.
REFERENCES
1. Chong SS (2008) Outcome Based Education (OBE).
2. EAC Manual 2007.
3. Choo KY, Mah SK (2010) Outcome-Based Approach to Engineering Education – Towards EAC Accreditation in 2010, Faculty of Engineering, Multimedia University.
4. Anuar A, Shuaib NH, Mohamed Sahari KS, Zainal Abidin I (2009) Continual Improvement and Assessment Plan for Mechanical Engineering Programme in UNITEN, ICEED 2009, Kuala Lumpur.
Address of the corresponding author:
Author: Dr. Hua-Nong Ting
Institute: University of Malaya
Street: Lembah Pantai, 50603
City: Kuala Lumpur
Country: Malaysia
Email:
[email protected]
Ocular Lens Microcirculation Model, A Web-Based Bioengineering Educational Tool

S.E. Vaghefi

Bioengineering Institute, University of Auckland, Auckland, New Zealand

Abstract— A comprehensive computational model of the human body is currently being developed at the Auckland Bioengineering Institute. As part of this project, the development of a global model of the human eye - the virtual eye - is also in progress. This multi-scale virtual eye model is being developed as a powerful educational tool for students and researchers, as well as a novel form of integrative diagnosis or treatment for individuals based on gathered clinical data. One of the most important aspects of ocular tissue is its fluid dynamics, which is known to affect the physiological and optical properties of the eye. During this project, a 3D finite element model of the fluid dynamics of the ocular lens was designed and executed on our high performance computer. This computer model was then linked to a website, in order to elevate it from a local computer model to a global research and educational tool. The presentation of the 3D fluid microcirculation model of the ocular lens over the internet, combined with its user-friendly graphical user interface, enables it to be used by students and researchers worldwide. Such exposure to the international lens community makes this model a unique point of discussion for obtaining a better understanding of ocular lens homeostasis and its role in lens functionality. Using the 3D microcirculation model to predict the effects of various perturbation conditions on the physiological and optical properties of the lens could lead to a better understanding of lens abnormalities, such as cataracts, and their causes. This model is particularly seen as a capability platform into which fluid dynamics models of the other ocular tissues can be linked, forming a single virtual eye platform accessible via the internet.
Keywords— bioengineering, computational modeling, education, ocular lens, microcirculation.
I. INTRODUCTION

Bioengineering is often defined as the field of research that applies mathematical, computer and engineering models to the physiological sciences. It is also considered one of the youngest and most thriving fields of science in today's world. Every year, new bioengineering curricula are created around the world: it was estimated that there were about 50 new undergraduate programs in the US alone by 2003, and the numbers have grown exponentially ever since [1].
One of the principal challenges of bioengineering education has been to create a blend of physiological and engineering sciences that equips young graduates and researchers with the essential tools to tackle current health-science related issues. Such an enriched mixture of independent yet related sciences also needs to be delivered to current students in a meaningful and concise manner, utilizing modern media. For that reason, web-based education systems have been proposed by many authors from different backgrounds as the most suitable form of teaching in today's academic environment [2],[3]. Realizing this necessity, at the Auckland Bioengineering Institute (ABI) new bioengineering models of the human body are being developed hand-in-hand with innovative graphical and web-based methods of delivering the results to global audiences. The Physiome Project of the ABI is an excellent example of this approach [4]. This project is a worldwide public domain effort to provide a computational framework for understanding human physiology. It aims to develop integrative models at all levels of biological organization, from genes to the whole organism, via gene regulatory networks, protein pathways, integrative cell function, and tissue and whole organ structure/function relations. Various modeling projects are currently being developed at the ABI (http://www.abi.auckland.ac.nz/uoa/home/about/ourresearch/projects) to create a full computer model of the human body, and the virtual eye project is one of them (http://www.abi.auckland.ac.nz/uoa/home/about/ourresearch/projects/special-sense-organs). The virtual eye project is an effort to develop an optically and mechanically correct 3D model of the eye, i.e., one that can "see" what a person sees. It is intended to be used by scientists and students in the fields of bioengineering, optometry and ophthalmology for investigating their issues of interest.
To create a comprehensive model of the eye, each of its constituent tissues (e.g. lens, cornea, retina) should be investigated both independently and jointly with the rest, based on existing literature models and experimentally obtained data. One well-developed example of this kind of integrative modeling within the eye is the work that has been done on the ocular lens microcirculation system [5]. This model draws together a variety of empirical data from a range of experimental approaches and applies physical laws to these
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 25–28, 2011. www.springerlink.com
data to infer a detailed integrative model of whole-lens fluid dynamics. The lens microcirculation computer model, as an essential part of the eye's fluid dynamics, has been developed at the ABI to be incorporated into the bigger virtual eye model. This model has now evolved into an educational web-tool for use by students and researchers interested in the fluid dynamics of the eye.
II. MATERIALS AND METHODS

For the microcirculation model to be compatible, and hence incorporable, with the bigger virtual eye model, it had to be developed in a similar programming environment. The ABI has been developing its own sophisticated in-house programming environment for computational modeling called CMISS, which stands for Continuum Mechanics, Image analysis, Signal processing and System identification (www.cmiss.org). CMISS is a mathematical modeling environment that allows the application of finite element analysis, boundary element and collocation techniques to a variety of complex bioengineering problems [6]. It consists of a number of modules, including a graphical front-end with advanced 3D display and modeling capabilities, and a computational back-end that may be run remotely on powerful workstations or supercomputers. A C/C++ coded graphical user interface (CMGUI) and a Fortran-77 coded computational engine (CM) are implemented in CMISS, and ASCII and binary files are used to connect the engine and the interface. The CMISS visualization user interface (CMGUI, www.cmiss.org/cmgui) is linked to the FIREFOX web browser, and CMISS-generated results are displayed in web page format via an ABI-authored FIREFOX extension called ZINC (www.cmiss.org/cmgui/zinc, Stevens et al. 2006). These webpages could not only be viewed online by researchers worldwide, but also allowed interaction with the models, letting users investigate the models' capabilities even further. ZINC is developed for Mozilla platform (www.mozilla.org) based applications, such as the web browsers Mozilla and Firefox. The ZINC extension embeds the CMISS environment and exposes a JavaScript application programming interface (API). Web site developers can use the JavaScript API to add interactive 3D models and computational abilities to web pages.
The Mozilla platform was chosen as the initial target for CMISS because it is supported on all the major operating systems: Linux and most other UNIX platforms, Mac and Microsoft Windows. Mozilla is free, and its license, the Mozilla Public License (MPL), ensures public access to the source code.
Fig. 1 Comparison of lens fiber cell geometry and its finite element mesh representation. A: Mouse ocular lens structure imaged with electron microscopy, reproduced from [7]. B: 2D projection of half of the finite element mesh, generated using CMISS and visualized using CMGUI, with the curved internal cuboid elements outlined in green. The outer surface of the lens, colored in red, is where the computational boundary conditions are applied to the mesh. The points of reference are labeled (AP) anterior pole, (PP) posterior pole and (EQ) equator.

Utilizing the programming platforms listed above, the 3D microcirculation model was implemented following the common steps of computational model implementation. It started with the creation of a finite element mesh of the ocular lens, using suitable cuboid elements, on which future computations would be performed [Fig. 1]. During the subsequent phase, the equations governing the fluid dynamics of the ocular lens, mostly derived by [8],[9], were implemented in the programming platforms. The next stage involved setting the initial boundary conditions on the generated mesh and then solving the resulting system using computational solvers. The convergence criterion was monitored constantly and, when met, the calculated parameters were exported for post-processing and visualization of the results. The basic steps towards the creation of a finite element model are thoroughly covered by [10]. Computed fields from the text-format files created in CMISS were imported into CMGUI. However, since CMISS and CMGUI are local to the ABI, requiring external researchers to install and learn them in order to view the computational models' results was considered unfeasible. Hence, CMGUI has recently been embedded directly into web-based applications, through the ZINC extension developed for the Mozilla Platform web-browsers.
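The modeling steps just described (mesh, governing equations, boundary conditions, iterative solve with a convergence criterion, export for post-processing) follow a generic pattern. The sketch below is an illustrative stand-in, not CMISS code: a 1D Laplace problem solved by Jacobi iteration plays the role of the full 3D lens model, and all names and values are hypothetical.

```python
import numpy as np

# Illustrative stand-in for the solution loop described in the text:
# a 1D Laplace problem on 51 nodes with fixed-value boundary conditions,
# solved by Jacobi iteration until a convergence criterion is met.

n = 51
u = np.zeros(n)             # "mesh": field values at 51 nodes on [0, 1]
u[0], u[-1] = 1.0, 0.0      # boundary conditions at the two ends

tol = 1e-8
for iteration in range(100_000):
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:])   # Jacobi update of interior nodes
    if np.max(np.abs(u_new - u)) < tol:    # convergence criterion
        u = u_new
        break
    u = u_new

# At steady state the field is a linear ramp between the boundary values;
# the computed field would then be exported for post-processing/visualization.
```

In a real problem, the Jacobi update would be replaced by a solver for the lens fluid-dynamics equations, and the converged fields exported to the visualization front-end.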
Fig. 2 Relationships among the programming platforms used in this project, and the steps towards making the model available online

The developed 3D microcirculation model rests on three programming bases: CMISS to solve, CMGUI to display, and ZINC to convert the graphical output to web format. Fig. 2 illustrates the incorporation of the developed microcirculation model in webpage format.

III. RESULTS

The microcirculation model is solved over a range of 'natural' and 'unnatural' boundary conditions extracted from the literature [9] and listed in Table 1. 'Unnatural' boundary conditions (e.g. lower temperatures) have been used in this model in order to mimic the perturbation experiments. It should be noted that, due to the large size of the graphical presentation files, it was not practical to solve the model for all the perturbation points. For example, to mimic low-temperature conditions, the model was solved at 37, 27, 17 and 7 degrees, and ZINC then interpolated the calculated fields for all the in-between temperature points (e.g. 8, 9, etc.). These interpolations are implemented in a linear fashion: if a 10% drop in a certain field's values is caused by a 10% fall in temperature, then a 5% decline in temperature is modeled to lead to a 5% decrease in the calculated field values. In this model, the concentrations of the ions important for the lens microcirculation (i.e. sodium, potassium and chloride), along with electric potentials, hydraulic pressures, fluid velocities and fluxes, current densities and trans-membrane osmolarities, are estimated under normal and perturbed conditions. All these fields are passed on to the model's webpage and are illustrated on the graphical meshes upon request. Fig. 3 is a snapshot of the model's webpage. The 3D microcirculation model is available online at http://sitesdev.bioeng.auckland.ac.nz/evag002 and readers are encouraged to view and use it. Its predictions match well those obtained experimentally and published in the literature. The model can predict the fluid dynamics of the lens under a wide range of perturbation conditions, some still uninvestigated empirically. This data-extrapolating capability of the model is valuable to all researchers interested in this field, and is deemed reliable since the model mimics the already obtained experimental data.

Table 1 Initial conditions at outer lens boundary for the present model, at normal and perturbed settings

Extracellular sodium concentration: 10-110 mM
Extracellular potassium concentration: 8-108 mM
Extracellular chloride concentration: 115 mM
Intracellular sodium concentration: 7 mM
Intracellular potassium concentration: 100 mM
Intracellular chloride concentration: 10 mM
Temperature: 280-310 K
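The linear interpolation between solved temperature points can be sketched with NumPy's `np.interp`. The temperatures below are those named in the text, but the field values are purely illustrative, not the model's actual output:

```python
import numpy as np

# ZINC-style linear fill-in between solved temperature points (hypothetical
# field values; the real model solves ion concentrations, potentials, etc.).
solved_T = np.array([7.0, 17.0, 27.0, 37.0])   # temperatures actually solved for
field    = np.array([0.70, 0.80, 0.90, 1.00])  # hypothetical normalized field values

# An in-between temperature is interpolated linearly, as described in the text:
value_at_32 = np.interp(32.0, solved_T, field)  # ≈ 0.95, halfway between 0.90 and 1.00
```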
IV. CONCLUSIONS

During this project, a 3D model of the fluid dynamics of the ocular lens was designed and executed on the ABI's high performance computer (HPC). The results of this research were found to be in agreement with the predictions of previous models of the microcirculation [8],[9],[11]. The presentation of the 3D microcirculation model over the internet, combined with its comprehensible graphical user interface (GUI), gives it a unique capability as a computer model usable by researchers globally. This exposure to the international lens community makes the model a unique tool for acquiring a better understanding of the ocular lens and its fluid dynamics. Using the 3D model to predict the results of different changes in the lens could lead to a better understanding of lens abnormalities, such as cataracts, and their causes. For example, the lack of antioxidant delivery to the center of the lens, caused by a weakened microcirculation system and reduced fluxes inside the lens, which is hypothesized to be the major cause of age-related cataracts, can be modeled and studied with the current model, and in the human lens in the future. In summary, the current 3D model of the mouse lens is a first step towards the implementation of human lens models and the prediction and investigation of lens pathologies. The ultimate goal of modeling the fluid dynamics of the ocular lens is to create a comprehensive computational model of the fluid dynamics of the human eye. Such a system, linked with models from the rest of the body (e.g. blood pressure and sugar level models), could lead to an extensive model of the human body, which can be used by students and scientists to investigate the links between these phenomena and eye pathologies such as cataracts.
Fig. 3 Screenshot of the 3D lens microcirculation system presentation in webpage format using the FIREFOX web browser. The model controls are in the upper left, the main screen displaying the lens is at right, and the color-bar associated with the visualized data, with its labels, is located at the lower left of the image
ACKNOWLEDGMENT

The author wishes to thank the Marsden Fund of New Zealand for the financial support of this project.
REFERENCES
[1] J.H. Linehan, "Innovations in Bioengineering Education for the 21st Century," 11th Mediterranean Conference on Medical and Biomedical Engineering and Computing 2007, 2007, p. 1142.
[2] S. Shurville, T. Browne, and M. Whitaker, "Employing the new educational technologists: A call for evidenced change," 2008.
[3] E. Tremblay, "Educating the Mobile Generation – using personal cell phones as audience response systems in post-secondary science teaching," Journal of Computers in Mathematics and Science Teaching, vol. 29, 2010, pp. 217–227.
[4] P.J. Hunter and T.K. Borg, "Integration from proteins to organs: the Physiome Project," Nature Reviews Molecular Cell Biology, vol. 4, 2003, pp. 237–243.
[5] R.T. Mathias, T.W. White, and X. Gong, "Lens Gap Junctions in Growth, Differentiation, and Homeostasis," Physiol. Rev., vol. 90, 2010, pp. 179–206.
[6] D. Nickerson, M. Nash, P. Nielsen, N. Smith, and P. Hunter, "Computational multiscale modeling in the IUPS Physiome Project: modeling cardiac electromechanics," IBM Journal of Research and Development, vol. 50, 2010, pp. 617–630.
[7] T. Blankenship, L. Bradshaw, B. Shibata, and P. FitzGerald, "Structural specializations emerging late in mouse lens fiber cell differentiation," Investigative Ophthalmology & Visual Science, vol. 48, 2007, p. 3269.
[8] R.T. Mathias, J.L. Rae, and G.J. Baldo, "Physiological properties of the normal lens," Physiological Reviews, vol. 77, 1997, p. 21.
[9] D.T. Malcolm, "A Computational Model of the Ocular Lens," 2006.
[10] A.J. Morris and A. Rahman, A Practical Guide to Reliable Finite Element Modelling, Wiley, 2008.
[11] R.T. Mathias and J.L. Rae, "Steady state voltages in the frog lens," Current Eye Research, vol. 4, 1985, pp. 421–430.
Author: Dr S E Vaghefi
Institute: Bioengineering Institute
Street: 70 Symonds Street
City: Auckland
Country: New Zealand
Email:
[email protected]
Analysis of Skin Color of Malaysian People

L.M. Abdulhadi1, H.L. Mahmoud1, and H.A. Mohammed2

1 Faculty of Dentistry, University of Malaya, Kuala Lumpur, Malaysia
2 Faculty of Dentistry, MAHSA University College, Kuala Lumpur, Malaysia
Abstract— The increased need for better color matching of medical-grade silicone in epithesis fabrication requires gathering and classifying captured skin colors according to body location. This provides a sound foundation for further work to establish a complete range of skin shades for any ethnic or human group. In this study, a visible light spectrometer was used to capture the skin color at three different locations on the face (forehead, zygoma, and chin) of 86 healthy Malaysian adults representing the three main components of the population. The color values were transferred into a mixed L*a*b mode (Hunter and CIE) using Photoshop software after a pilot study. The skin color data were analyzed and classified according to gender, skin location, and ethnic group in order to establish a color database for Malaysians. Data were analyzed using descriptive analysis, independent t-tests and one-way ANOVA. The results showed that facial skin color varied according to location, gender, and ethnic group. In conclusion, skin color fluctuated from one body location to another and according to ethnic group and gender. In addition, the mixed L*a*b mode (Hunter and CIE) offered better color matching results for a wide range of skin colors compared to a single mode. This information can be used in making a shade guide or color chart formula for the Malaysian population.

Keywords— VL-spectrometer, Medical-grade silicone, Zygoma, Front, Chin.
I. INTRODUCTION

The study of skin color components, distribution and texture is very important for disclosing the appropriate pigment composition and shade when medical-grade silicone has to be used as an artificial replacement for any mutilated facial or body structure [1]. Much research has been conducted to find color pigments that could simplify and enhance the color matching process of medical-grade silicone [2,3]. Many silicone coloring systems were supposed to accurately match the entire range of human skin colors and to cover a wide skin color range [4]. However, no system to date has both features. The major concern in silicone coloring is finding simple procedures that are reproducible, repeatable, pigment-stable [5], and provide true-to-life skin color matching. The most commonly used devices for measuring and recording skin color are
colorimeter, visible light spectrometer (VL-spectrometer) [6], combined color-photometer and spectrometer, digital photo apparatus, visual perception, and some other systems that are not widely used, such as the chromasphere and the spectroradiometer. Color modes or systems such as RGB, CMYK, Lab, Bitmap mode, Indexed color mode and Multichannel mode [7] are used to display and print images. Imaging or analyzing devices like spectrometers, colorimeters [8,9], digital cameras, scanners, and printers base their color modes on established color models for describing and reproducing color. The most commonly used include RGB (red, green, blue), CMYK (cyan, magenta, yellow, black), CIE L*a*b (luminance, green-red, blue-yellow), and Hunter L,a,b. RGB mode uses the RGB model, assigning an intensity value to each pixel ranging from 0 (black) to 255 (white) for each of the RGB components in a color image. For example, a bright red color might have an R value of 246, a G value of 20, and a B value of 50. When the values of all three components are equal, the result is a shade of neutral gray; when the value of all components is 255, the result is pure white; when the value is 0, pure black. In CMYK mode, each pixel is assigned a percentage value for each of the process inks. The lightest (highlight) colors are assigned small percentages of process ink colors, while the darker (shadow) colors are assigned higher percentages [10]. For example, a bright red might contain 2% cyan, 93% magenta, 90% yellow, and 0% black. In CMYK images, pure white is generated when all four components have values of 0%. Hunter L,a,b and CIE L*a*b are both color scales based on the opponent-colors theory, which assumes that the receptors in the human eye perceive color as the following pairs of opposites: light-dark, red-green, and yellow-blue [11].
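The RGB-to-CMYK relationship sketched in this paragraph can be written down with the common naive, device-independent formula below. Real conversions, such as Photoshop's, go through ICC color profiles, so actual ink percentages (like the 2%/93%/90%/0% quoted above) will differ from this idealized calculation.

```python
# Naive RGB -> CMYK conversion (illustrative only; real conversions are
# device- and profile-dependent, as the text's printed percentages show).

def rgb_to_cmyk(r: int, g: int, b: int) -> tuple[float, float, float, float]:
    """0-255 RGB components to CMYK fractions in [0, 1]."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0          # pure black: 100% K only
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)                # black replaces the common darkness
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return c, m, y, k

# The bright red from the text, R=246 G=20 B=50:
c, m, y, k = rgb_to_cmyk(246, 20, 50)
print(f"C={c:.0%} M={m:.0%} Y={y:.0%} K={k:.0%}")  # prints C=0% M=92% Y=80% K=4%
```

Note how the idealized formula yields different percentages from the profile-based figures quoted in the text, which is exactly the device dependence the paragraph describes.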
However, even with the many color modes applied in many systems, total color matching is difficult to achieve because its assessment depends on visual perception. A margin of error therefore exists, dependent on the hardware, the software, and human visual perception. In daily practice, skin color matching for medical silicone is a very difficult task to achieve, and more research is needed. A logical way to approach this problem is to determine the range of skin colors at different locations and then to search for the formulation that produces the best color match. The objective of this study
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 29–32, 2011. www.springerlink.com
L.M. Abdulhadi, H.L. Mahmoud, and H.A. Mohammed
was to survey and analyze the skin color at three locations of the face using a VL-spectrometer and then to overlay the L*a*b (Hunter and CIE) modes in the Malaysian community.
II. MATERIALS AND METHODS

Eighty-six volunteers were randomly included in the study; their ages ranged from 19 to 74 years. They were Malays, Chinese, and Indians. The study was approved and supervised by the ethical committee [DF OP0806/0027(L)], Faculty of Dentistry, University of Malaya. Each subject was informed about the whole experiment and asked to sign a consent form before being incorporated into the study. The inclusion criterion was the absence of any signs and symptoms of skin abnormality. Individuals with any abnormal skin pigmentation, irregular color tone, small pigmented moles in the examined site, ulcers, burns, scars, scales, heavy hair, or makeup were excluded from the study.
Fig. 1 A: VL-spectrometer; B: the captured skin locations

A visible light (VL) spectrometer (USB VL4000, Optic Ocean, 830 Douglas Avenue, Dunedin, FL 34698) was used to record the skin color at three reference locations on the face: the forehead, the skin over the left zygoma, and the pogonion (chin) (Fig. 1A, B). The forehead skin was selected as a constant reference because of its fixity to the underlying frontal bone and the absence of compressible tissue; in addition, its continuous exposure to sunlight theoretically results in a darker color relative to the other facial locations. The left zygoma was selected randomly to represent the middle part of the face, while the chin represented the lower part. As a result, a simple color map of the facial skin color was formulated. The system components are: a light source (tungsten-halogen lamp), a fiber-optic probe, a USB visible-light spectrometer with a 25 µm x 1 mm aperture, a reflection probe holder, a calibration kit, and software (Spectra suit, Optic Ocean) installed on a computer. The values of the captured skin colors were displayed on the computer using the software package included with the spectrometer. The device was recalibrated after each record according to the manufacturer's instructions. The skin color was recorded at 90° using the VL reflection fiber-optic probe applied to the skin for 10 seconds. For consistency of recording, one examiner recorded the color of all the subjects using the VL-spectrometer.

Prior to recording the skin color for the research, a pilot study was performed with the spectrometer to find the best color mode for displaying the skin colors with the aid of Photoshop. Adobe Photoshop Version 6.0 (Adobe Systems Incorporated, USA) was used to convert the color values recorded by the spectrometer into color images so that they could be assessed visually and, when necessary, manipulated with the different blending options in the software until the best match of the skin color was reached. In the main menu of Photoshop, the layers option was selected and blending options were chosen to start the overlay between Hunter L,a,b and CIE L*a*b after copying and pasting the data from Microsoft Excel. The resultant images were three color sets representing the three color modes: overlay L*a*b (of CIE and Hunter), Hunter L,a,b, and CIE L*a*b (Fig. 2, mixed color on the left side). Adobe Photoshop defines the overlay process as multiplying or screening the colors, depending on the base color: patterns or colors overlay the existing pixels while preserving the highlights and shadows of the base color, and the base color is not replaced but mixed with the blend color to reflect the lightness or darkness of the original color [14].

Fig. 2 The pilot study of overlaying CIE and Hunter

This formulation was applied to six variable skin colors selected from the Malaysian subjects (ranging from very light to the darkest) using the VL-spectrometer. The Hunter L,a,b and CIE L*a*b values of the forehead skin colors were displayed on the computer screen by the Spectra software. They were copied and pasted into Microsoft Office Excel (MOE) as a reference for the values captured by the spectrometer. The saved skin color values (Hunter L,a,b and CIE L*a*b) were then copied from Excel and pasted into Adobe Photoshop to be displayed as color images. An overlay of the two colors was performed to
produce one mixed color. The output was printed on photo paper as tags so that the tags could be compared visually with the original skin color; two examiners checked for consistency. The results showed that the mixed L*a*b values obtained by the overlay technique offered better matching capability than the original modes. Therefore, we decided to use the overlay (or simply mixed) L*a*b mode consistently to display, compare, arrange, and classify the skin colors in ascending order. The data were analyzed using SPSS software version 17 to calculate the descriptive statistics and test the normality of distribution. Levene's test showed equal variances of the data components; therefore, the two conditions for the ANOVA test (normality and homogeneity) were met. One-way ANOVA was used to examine the null hypothesis of no color difference among the skin colors of the three facial locations. Independent t-tests were used to demonstrate the differences between the mean L*a*b values in relation to ethnic group and gender.
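The Photoshop "overlay" blending used to produce the mixed color is commonly documented as a conditional multiply/screen applied per channel. A minimal per-channel sketch is shown below, with channel values normalized to [0, 1]; the exact Photoshop implementation may differ in detail.

```python
def overlay(base, blend):
    """Overlay blend of one channel: multiply dark bases, screen light ones."""
    if base < 0.5:
        return 2.0 * base * blend                        # multiply region
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)      # screen region

def overlay_color(base_rgb, blend_rgb):
    """Apply the overlay blend channel-wise to two normalized colors."""
    return tuple(overlay(b, s) for b, s in zip(base_rgb, blend_rgb))
```

Blending with 50% gray leaves the base channel unchanged, which is why the highlights and shadows of the base color are preserved, as the Photoshop definition quoted above states.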
III. RESULTS

The ethnic composition of the sample consisted mainly of Malay, followed by Chinese and Indian subjects. Females constituted nearly 70% of the sample (Table 1).

Table 1 The mean L*a*b values of skin color in men and women

Component   Gender   N     Mean     SD
L*          M        78    70.91*   17.77
L*          F        180   81.78    14.76
a*          M        78    30.29    3.10
a*          F        180   29.50    3.52
b*          M        78    38.62*   6.70
b*          F        180   42.68    5.91

The analysis revealed that the mean L* (81.78) and b* (42.68) values in women were different from those of men (L* 70.91; b* 38.62). This means that women have a lighter skin color than men and that the skin of women is more yellowish. There was no difference between the two genders for a*, although it tended to shift more towards red in males (Table 1), signifying that men have slightly redder skin than women.

When the whole sample was analyzed, the forehead, zygoma, and chin colors differed in their L*a*b values (Table 2). The forehead was significantly darker than the other facial locations (L* 65.67), whereas the zygoma (L* 83.33) and chin (L* 86.49) colors tended to be less variable. The forehead showed a reduced red component (a* 27.79) compared with the other locations (zygoma a* 29.14; chin a* 32.30). For the yellow component, the forehead again maintained the minimum value (b* 37.16) compared with the zygoma (b* 41.73) and the chin (b* 45.49).

Table 2 The mean L*a*b values of different locations

Component   Location   N    Mean     SD
L*          Front      86   65.67*   16.65
L*          Zygoma     86   83.33*   11.14
L*          Chin       86   86.49*   12.85
a*          Front      86   27.79*   2.36
a*          Zygoma     86   29.14*   3.23
a*          Chin       86   32.30*   2.90
b*          Front      86   37.16*   5.13
b*          Zygoma     86   41.73*   4.88
b*          Chin       86   45.49*   6.30

In Malays, the forehead color values (L* 71, a* 28.3, b* 38.97) were statistically different from those of the chin (L* 89.68, a* 32.83, b* 47.05); the chin color tended to be lighter, redder, and more yellowish than the forehead. The forehead L* and b* values also differed from those of the zygoma, the difference lying mainly in darkness and yellowness. The zygoma color differed from the chin color except for the L* component.
In the Chinese subjects, the skin color values on the forehead differed from those on the zygoma except for the a* component; the red color was nearly constant between the forehead and zygoma, and only the lightness and yellow components fluctuated. The forehead, however, showed completely different L*a*b values from the chin and tended to be darker. The zygoma and chin skin colors differed in the a* and b* components, similar to the difference found in the Malays. The analysis of the Indian subjects revealed differences in the L*a*b values between the forehead (L* 48.52, a* 26.78, b* 32.6) and the chin (L* 75.96, a* 32, b* 41.35), and between the forehead and the zygoma (L* 72.74, a* 29.65, b* 38.70); for the zygoma-chin comparison, the difference was limited to the a* component. The color tags were classified in ascending order according to location, gender, and ethnic group.
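The one-way ANOVA used above to compare groups reduces to a ratio of between-group to within-group variance. A self-contained sketch of the F-statistic computation is given below; the study itself used SPSS, and the numbers in the example are illustrative, not the study data.

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    all_values = [x for g in groups for x in g]
    n, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n

    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of values around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For example, three small groups [1, 2, 3], [2, 3, 4], and [7, 8, 9] give F = 31.0; the p-value is then read from the F distribution with (k-1, n-k) degrees of freedom.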
IV. DISCUSSION

Human skin is a turbid medium with a multi-layered structure [11] and contains various pigments such as melanin and haemoglobin. Slight changes in structure and pigment composition produce great skin color variation. Therefore, it is necessary to analyze skin color on the basis of its structure and pigment composition when reproducing and diagnosing various
colors. The facial skin color showed different L,a,b values according to location, gender, and the studied ethnic groups. In a study of the skin color of the buttocks and hands of Chinese females using a Minolta 2500d chromameter, the results were comparable to our findings [12]. The VL-spectrometer has been used previously to record and investigate skin color as a standardized tool for finding better skin-silicone color matching for maxillofacial prostheses [6]. Many factors may influence the reproducibility of this device, such as the angle of reflection, location topography, lens aperture, compressibility and mobility of the tissue, and distance from the target. However, using a consistent method with minimum movement and little pressure applied by the probe on the skin can enhance the precision and reproducibility to the utmost. To minimize bias, a skin area covered by a relatively thin tissue layer and supported by bone, the forehead, was taken as a constant reference for the other facial locations, namely the zygomatic prominence and the pogonion. In the current study, combining Photoshop software with this device was very useful for finding the best skin color matching method. The overlay (or simply mixed) L*a*b mode showed better similarity to the skin color than the basic modes (Hunter L,a,b and CIE L*a*b). However, the precision of the match to the original skin color depended on the display or printer used to produce the samples for visual evaluation, and sometimes the resultant color was far from the original. The overlay mode is one of the mixing techniques provided by Adobe Photoshop to enhance and slightly lighten the base color while preserving its highlights and shadows.
The base color is not replaced but mixed with the blend color to reflect the lightness or darkness of the original color. The result of this blending could be used successfully for the average and lighter colors of the spectrum, and even for some of the darker ones; however, it never covered the entire spectrum of recorded colors. Incorporating Photoshop's color blending features might make skin color matching easier for the anaplastologist, since colors can be added or subtracted on screen before applying the result to the silicone. The difference between males and females was mainly related to luminosity. Female skin in general reflects light more than male skin, possibly because of skin texture [13]; the spectrophotometer cannot record the texture profile of the skin, only an average combination of the color light reflected from a small body location. Generally, the forehead color differed from that of the zygoma and chin, possibly because of its long and direct exposure to sunlight compared with other body locations. One of the limitations of this pilot study is the number of subjects, which is inadequate to represent the total population. Nevertheless, the objective of this study, to propose a new technique for recording skin color, was fulfilled.
V. CONCLUSIONS

Within the limits of this pilot study, the following may be concluded: the overlay (mixed) L*a*b mode offers better skin color matching than either single mode, and the color of facial skin shows a variable spectrum according to location, gender, and ethnic group.
REFERENCES

1. Abdulhadi LM (2008) Restoration of large orofacial defect after graft breakdown: a case study. International Journal of Anaplastology 2:32-37
2. Guttal SS, Patil NP, Nadiger RK, Kulkarni R (2008) A study on reproducing silicone shade guide for maxillofacial prostheses matching Indian skin color. Indian J Dent Res 19:191-5
3. Korfage A, Borsboomb PC, Dijkstra PU, van Oort RP (2009) Analysis of translucency of skin by volume reflection for color formulation of facial prostheses. Int J Prosthodont 22:623-9
4. Tran NH, Scarbecz M, Gary JJ (2004) In vitro evaluation of color change in maxillofacial elastomer through the use of an ultraviolet light absorber and a hindered amine light stabilizer. J Prosthet Dent 91:483-90
5. Bicchierini M, Davalli A, Sacchetti R, Paganelli S (2005) Colorimetric analysis of silicone cosmetic prostheses for upper-limb amputees. J Rehabil Res Dev 42:655-64
6. Mancuso DN, Goiato MC, Micheline dos Santos D (2009) Color stability after accelerated aging of two silicones, pigmented or not, for use in facial prostheses. Braz Oral Res 23:144-8
7. Color modes at http://www.inkjetcolorsystems.com.htm
8. Kiat-Amnuay S, Lemon JC, Powers JM (2002) Effect of opacifiers on color stability of pigmented maxillofacial silicone A-2186 subjected to artificial aging. J Prosthodont 11:109-16
9. Troppmann RJ, Wolfaardt JF, Grace M, James AS (1996) Spectrophotometry and formulation for coloring facial prosthetic silicone elastomer: a pilot clinical trial. J Facial Somato Pros 2:85-92
10. Poynton C (1995) A guided tour of color space. New Foundations for Video Technology (Proceedings of the SMPTE Advanced Television and Electronic Imaging Conference), pp 167-180
11. Application note, insight on color, Hunter L,a,b versus CIE 1976 L*a*b at http://www.hunterlab.com
12. Anderson RR, Parrish JA (1981) The optics of human skin. J Invest Dermatol 77:13-19
13. Liu W, Wang X, Lai W, Li L, Zhang P, Wu Y, et al. (2007) Skin color measurement in Chinese female population: analysis of 407 cases from 4 major cities of China. Int J Dermatology 46:835-839

Author: Laith Mahmoud Abdulhadi
Institute: Faculty of Dentistry, University of Malaya
Street: Jalan University
City: 50603 Kuala Lumpur
Country: Malaysia
Email:
[email protected]
Comparison of Spectrometer, Camera, and Scanner Reproduction of Skin Color

H.L. Mahmoud1, L.M. Abdulhadi1, A. Mahmoud1, and H.A. Mohammed2

1 Department of Prosthetic Dentistry, University of Malaya, Kuala Lumpur, Malaysia
2 Faculty of Dentistry, MAHSA University College, Kuala Lumpur, Malaysia
Abstract— The tinting of medical grade silicone still depends on unstandardized methods, and inconsistency still arises; more research is needed to find a simple, handy, and reliable device for reproducing the color spectrum of human skin, in order to spare the patient multiple visits. The data were collected from 90 young volunteers representing the main components of the Malaysian population: 30 Malays (28.53±9.54 years), 30 Chinese (23.8±6.34 years), and 30 Indians (26.3±9.44 years). A visible light spectrophotometer, a digital camera, and a flatbed scanner were used in a standardized manner to compare their consistency in reproducing skin color. A custom-made adjustable stand-holder was used to place the scanner vertically so that it could record the forehead skin. Adobe Photoshop, Microsoft Office 2007 (Word, PowerPoint, Excel), SPSS statistical software version 17, and a specially programmed application (Image Colors Average Calculator Software version 1 beta), which calculates the average LAB values of all pixels in the images captured by the camera or scanner, were used to process the data. The result was displayed as CIE LAB values so that it could be transformed into a colored tag for printing. Double-blinded visual evaluations of the recorded skin colors were performed by four examiners working independently to find the printed color that best matched the original skin color. Normality of distribution, ANOVA, and chi-square were used to find the differences between the color values of the different ethnic groups and genders (p<0.05) and to analyze the observers' assessments of the color matching of the skin tags. Generally, the images captured by the digital camera showed nearly 70% agreement between observers, more than the other devices. Differences were shown between males and females as well as among the different ethnic groups.
Keywords— VL-spectrophotometer, Digital camera, Digital scanner, Medical-grade silicone.
I. INTRODUCTION

The anaplastologist faces a challenge when fabricating a maxillofacial prosthesis, as it not only has to be functional but also aesthetic. Besides fulfilling the requirements for good mechanical and physical properties, the prosthesis should also be of appropriate contour and texture relative to the surrounding structures [1, 2]. Coloring of the prosthesis may be achieved extrinsically or intrinsically. Extrinsic coloring is done on the base colors of the processed prosthesis to achieve a desired skin tone [3, 4]; the results, however, were not satisfactory because the coloring was affected by external conditions. The most commonly used intrinsic coloring agents are dry earth pigments, rayon fibre flocking, artist's oil paints, kaolin, and liquid facial cosmetics. However, the physical properties of maxillofacial elastomers were changed by the incorporation of these coloring agents [5, 1]. Spectrometers, colorimeters, cameras, scanners, television monitors, and printers base their color modes on established models for describing and reproducing color. Common modes include RGB (red, green, blue); CMYK (cyan, magenta, yellow, black); and CIE L*a*b* (Commission Internationale de l'Eclairage) [6]. The LAB color space is designed to approximate human vision, whereas the RGB and CMYK spaces model the output of physical devices rather than human visual perception [7]. Transformations among the color modes may be done with appropriate editing applications or color management software, which convert color data between standards (RGB, CMYK, L*a*b*, etc.) [8]. The subjectivity of the human eye in assessing colors has led to the use of color measuring devices, such as the spectrometer and colorimeter, that evaluate colors based on color modes. Visual perception is nevertheless still valuable in a clinical setting involving a multi-ethnic patient population, although human vision may be less sensitive in distinguishing color differences [9]. Jarad et al. (2005) concluded that a digital camera can be used as a means of color measurement in the dental clinic [10]. The scanner is a device that captures images from photographic prints; scanners are available as hand-held, feed-in, and flatbed types and use the RGB system of measuring color. There is no mention of the use of scanners for shade matching procedures in the dental literature.
The spectrometer measures the amount of light reflected or transmitted by a sample at discrete wavelengths. The reflected light is taken as the color of that substance and is generally expressed in CIE L*a*b* or Hunter Lab values. It was found that, because of the intrinsic factors of the materials, automatic color detection for prosthesis production was a complex procedure [11]. Therefore, the system was deemed inappropriate for use, and further development was essential before it could fulfil the needs of prosthesis manufacturers [12].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 33–36, 2011. www.springerlink.com
Generally, the patient has to be present when the tinting of the prosthesis is carried out in the clinic to obtain the proper shade. This stage may be facilitated if a reliable reproduction of the skin color can be made; the patient would then only need to be present when the facial shade is selected, just before the processing of the prosthesis. The aim of this study was to compare the ability of three devices (a spectrometer, a digital camera, and a flatbed scanner) to reproduce the color closest to the skin shade.
II. MATERIALS AND METHODS

This study was approved by the research ethics committee of the Faculty of Dentistry, University of Malaya [DF PD0902/0019(P)]. The subjects were 90 adults from 3 ethnic groups in Malaysia - Malays, Chinese, and Indians (n=30 for each ethnic group). Their ages ranged from 20 to 57 years. A full information case sheet was used to record the patients' particulars. The forehead area was selected as representative of the facial skin color because it is exposed, sparsely covered with facial hair, and easily probed due to its fixation to the underlying bone. Subjects who participated in the study were not wearing any makeup, had healthy skin with no excessive facial hair, and had no skin conditions that would affect the skin tone and shade. One of the tested devices was a visible light (VL) spectrometer (USB VL4000, Optic Ocean, 830 Douglas Avenue, Dunedin, FL 34698). It contains a light source (tungsten-halogen lamp), a fibre-optic probe, a USB visible-light spectrometer, a reflection probe holder, a calibration kit, and special software (Spectra suit, Optic Ocean) to display the reflected-light result on the computer. The other devices were a Canon 450D camera with a mounted macro lens (http://web.canon.jp) and a scanner (Canon lide 100, Canon Inc., Thailand). A Canon inkjet printer (Canon MP 970, Canon Inc., Thailand) was used to print the recorded color tags using Adobe Photoshop. The average LAB values of all pixels of each image captured by the camera and the scanner were calculated using the Image Colors Average Calculator software version 1 beta (ICAC), as shown in these equations: Avg L = (L1+L2+....+Ln)/n, Avg A = (A1+A2+....+An)/n, Avg B = (B1+B2+....+Bn)/n [13]. SPSS software version 17 was used to analyze the data and to find the device that best reproduces the skin color. Normal distribution of the data and homogeneity of variance were tested and confirmed, so that one-way ANOVA and independent t-tests could be applied to compare the skin color based on ethnic group and gender for each device. Chi-square was used to investigate the significance of the color matching of each device depending on visual perception.

The spectrometer was installed and connected to a personal computer following the manufacturer's instructions and was calibrated before each new recording. The subject was seated in a proper position on a comfortable chair. The fibre-optic probe was placed gently on the forehead and the area was exposed to the spectrometer light for 10 seconds. The color analysis values were copied and pasted into an Excel document (Microsoft Office) for further statistical analysis or converted into a color image using Adobe Photoshop (Fig. 1).

Fig. 1 The use of the spectrophotometer

The digital camera was fixed on a tripod at a distance of 50 cm from the forehead and a height of 150 cm under fixed light conditions. The patient was seated upright and his/her forehead was covered with neutral-color paper except for a rectangular area measuring 4 x 3 cm. The snapshot was taken using a 60 mm lens, ISO 400, and an exposure time of 1/20 second after focusing the camera on the area to be tested (the mid area of the forehead). The reliability of the camera recording was tested by capturing three frames continuously for the same skin area. The images were saved as JPEG files using Adobe Photoshop and their RGB values were converted into CIELAB using the same software. The LAB values were uploaded into the ICAC software and averaged. The resulting color values were saved so that they could be printed later side by side with the images from the other devices. The scanner was fixed in an adjustable custom-made stand holder that consisted of a wooden base (30 x 30 cm) and a vertical aluminium stand (35 x 10 cm). The scanner cover was demounted and the screen was covered completely with a sheet of thick paper except for a rectangular window measuring 4 x 3 cm designated for recording the forehead skin color. The patient was seated comfortably in front of the scanner, which was placed vertically in the jig on a table. The base of the jig was then moved towards the subject until the forehead just lightly touched the scanner at the exposed area. A fixed resolution of 200 dpi was used to scan the forehead. The reliability and consistency of the
scanner record were tested and confirmed. The scanned image was saved as a JPEG file and its RGB values were converted into CIELAB using Adobe Photoshop. The LAB values were then averaged using the ICAC software and saved to be printed later. The three skin color images were arranged on an A4 template and printed on special matte textured paper (180 g) using Adobe Photoshop. Each subject assessed the matching of the printed tags to his/her forehead shade using a hand-held mirror. The two assessors and the subject were blinded to the source of the printed tags. The evaluation was done with the patient seated in a well-lit room. The assessors were given 1-2 minutes to complete the assessment independently, rating each tag as 1: poor, 2: fair, or 3: good. The data were analyzed using chi-square to assess the inter-observer evaluation difference at p ≤ 0.05.
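The ICAC averaging step described above (Avg L = (L1+...+Ln)/n, and likewise for a and b) is simply a per-channel mean over all pixels of the captured patch. A minimal sketch of that computation:

```python
def average_lab(pixels):
    """Average a list of (L, a, b) pixel tuples channel by channel."""
    n = len(pixels)
    if n == 0:
        raise ValueError("no pixels to average")
    # Sum each channel across all pixels, then divide by the pixel count
    return tuple(sum(px[ch] for px in pixels) / n for ch in range(3))
```

For example, averaging the two pixels (50, 10, 20) and (70, 20, 40) yields (60.0, 15.0, 30.0), one representative LAB value per captured image.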
III. RESULTS

The frequency of closest skin color reproduction was 73% for the digital camera, followed by the scanner (26%) and the spectrophotometer (2%) (Fig. 2).

Table 1 The mean LAB values for the devices

Device          L*              a*              b*
Spectrometer    61.61 (±7.56)   8.29 (±1.55)    10.19 (±2.36)
Camera          49.40 (±6.27)   7.23 (±1.79)    15.82 (±3.43)
Scanner         51.86 (±6.77)   11.64 (±1.16)   16.93 (±2.16)

The records of the three devices were statistically different. The camera and scanner L* values were close to each other but lower than the spectrophotometer record, while the a* values of the camera and the spectrometer were close. Paired t-test analysis of the L*a*b* values from the spectrometer, digital camera, and scanner showed that the values were significantly different (CI: 95%, DF: 89, p<0.01) (Table 1). The independent t-test between Malays and Chinese revealed that the spectrometer and digital camera mean values were significantly different (p<0.05), while for the scanner records only the L* and b* values differed (p<0.05). The difference between the Malay and Indian mean LAB values recorded by the spectrometer was significant except for b* (p<0.05); for the digital camera, only the mean L* values were significantly different (p<0.05); for the scanner, the L* and a* values were significantly different (p<0.05), with no significant difference in the b* values. Analysis of the Chinese and Indian mean LAB values showed significant differences (p<0.05) for the spectrometer and digital camera records; for the scanner, the mean L* and b* values were significantly different (p<0.05), with no significant difference in the a* values. For males and females of the whole sample, the mean L* and a* values of the spectrometer were significantly different (p<0.05), with no significant difference in the b* values; for the digital camera, the mean L*a*b* values were not significantly different; the scanner mean L* and a* values were significantly different (p<0.05), with no significant difference in the b* values.
Fig. 2 Visual perception results
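The inter-observer analysis above is a Pearson chi-square test on a contingency table of rating counts. A sketch of the statistic is shown below; the counts in the example are hypothetical, not the study's data, and SPSS computes the same quantity.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a 2-D contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of "good" vs "not good" ratings for two devices
ratings = [[20, 10],   # e.g. camera
           [10, 20]]   # e.g. scanner
```

For the hypothetical table above the statistic is 100/15 ≈ 6.67, which exceeds the 3.84 critical value of the chi-square distribution with 1 degree of freedom at p = 0.05, so the devices' rating distributions would differ significantly.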
IV. DISCUSSION

The digital camera could be considered a standardized and repeatable device when it is set up accordingly, and it can cover a wide skin color spectrum. However, its advanced settings require special skill and knowledge of photography. The main disadvantage of the digital camera is its cost compared with the scanner, although its cost is lower than that of the spectrophotometer. The scanner is a simple, standardized device; it can be used safely with no harmful effects on the human body and can be operated by anyone without specialised skills. The disadvantages of the scanner are its limited coverage of the skin color spectrum and its configuration, which restricts capturing skin color at different body locations; it therefore needs some modification to enhance its accuracy and handling. A study comparing four types of spectrophotometer and a digital camera in color matching showed good results with the camera. However, due to the intrinsic factors of the materials and the complexity of the skin, automatic color detection for prosthesis production was difficult. Therefore, any system used for color matching of prosthetic materials will require further
development for full satisfaction of the needs of prostheses manufacturers. The L* value of the spectrophotometer range is greater than the L* values in the digital camera and the scanner. This may be due to the different design of devices as the spectrophotometer, digital cameras, scanners and printers handle output and input data differently. The spectrophotometer reproduces color that was lighter than the other devices. The reason of this is because of the difference in the way the spectrophotometer sends and receives reflected light. The a* value of the scanner range of colors is more than the a* value in the digital camera and the spectrophotometer. This means that the scanner reproduces color towards the reddish region. The b* value of the spectrophotometer is near the zero position of the B axis, this means very light yellow color, while the b* value of the scanner is more toward the saturated yellow region. The L* value range of the three devices in recording the different ethnic groups showed that the Chinese had greater L* value, which means the Chinese ethnic group had lighter (more reflective) skin, then the Malay ethnic group, and finally the Indians. While the a* and b* value range are nearly the same in the three ethnic groups. The L* value range of the three devices showed that the females had greater L* value, that means their skin color is lighter than males. While the a* and b* value range are nearly the same in males and females. The results showed that L*a*b* values of the three devices were significantly different. The L*a*b* values of Malay and Chinese showed significant difference for spectrophotometer and digital camera. However, the scanner showed different results only for L* and b* values. This might be due to the limitation in the design of the scanner in recording this range of skin color spectrum. The results of spectrophotometer and the scanner for Malays and Indians showed that the L* and a* values only were different. 
On the other hand, only the L* value differed for the colors captured using the digital camera. This may be because the L* value reflects the lightness difference between the two ethnic groups due to the different reflectance of their skin, while the a* and b* values signify the yellow and red color ranges, which did not differ. The Indian and Chinese L*a*b* values were different when recorded by the spectrophotometer and the digital camera, while the scanner results were significant only for the L* and b* values. Again, the difference is related to the skin shade features of the two ethnic groups. Male and female skin colors differed in the L* and a* values for the spectrophotometer and the scanner, while the digital camera results were not significant for the L*a*b* values. This signifies that values from the spectrophotometer and scanner may be used to identify color differences between males and females.
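The device comparisons above are made in the CIELAB space, where the perceptual distance between two L*a*b* readings is commonly summarized by the CIE76 color-difference formula, ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²). A minimal sketch follows; the readings are illustrative values, not data from this study:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Hypothetical readings of the same skin site from two devices
spectrophotometer = (62.0, 12.5, 14.0)
digital_camera = (58.5, 13.0, 15.5)

print(round(delta_e_cie76(spectrophotometer, digital_camera), 2))  # 3.84
```

Differences of roughly 2-3 ΔE units are often cited as the limit of perceptibility, which is why inter-device disagreements of this size matter for prosthesis shade matching.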
V. CONCLUSIONS The digital camera was the best device, compared to the spectrophotometer and the scanner, for reproducing facial skin color, and it is indicated for recording dark-skinned subjects. The scanner's skin color reproduction was fair in capturing light skin tones.
Author: Dr Humam Laith Mahmoud
Institute: Faculty of Dentistry, University of Malaya
Street: Jalan University
City: 50603 Kuala Lumpur
Country: Malaysia
Email:
[email protected]
IFMBE Proceedings Vol. 35
Context and Implications of Blood Angiogenin Level Findings in Healthy and Breast Cancer Females of Malaysia

V. Piven1, Y.A. Manaf2, and M.A. Abdullah1

1 University Technology Petronas, Bandar Seri Iskandar, 31750 Tronoh, Malaysia
2 University Science Malaysia/Advanced Medical and Dental Institute, 11800, Pulau Pinang, Malaysia
Abstract— Human angiogenin (ANG) is a potent inducer of neovascularization. In adults it plays a role in normal tissue repair. There are indications that ANG is involved in angiogenesis in females associated with some forms of cancer, with remodeling of the reproductive organs after delivery, and with pathology during pregnancy; the latter might have implications for impaired development and health problems of newborns. However, the precise role of ANG in these conditions is as yet unknown. To reveal the role of ANG as a biomarker of breast cancer occurrence in the Malaysian female population, the expression of angiogenin in patients with primary breast carcinoma was evaluated using an immunoassay (ELISA). Our results suggest that an elevated level of serum/plasma ANG is a favorable prognostic factor in primary breast carcinoma, which is consistent with its role in cancer cell growth. Surprisingly, the ANG level in the healthy control group was found to be only about 20% of that of northern-country female populations. This finding suggests that special emphasis should be given to exploring the origin of the low angiogenin level in females of Malaysia: what the impact of this phenomenon on the health of women and the young generation might be, and how such an ANG deficiency could be corrected. Keywords— Angiogenin, serum, breast cancer, biomarker.
I. INTRODUCTION The Age Standardized Rate of female breast cancer (BC) is 47.4 per 100,000 population in Malaysia. BC is the most common cancer among Malaysian women of all ethnicities and is the leading cause of cancer deaths. It accounts for 30.4% of newly diagnosed cancer cases in Malaysian women and continues to rise rapidly, as reported by the Malaysian National Cancer Registry [1]. Noteworthy is that in Asian countries the rates of increase in BC incidence and mortality were formerly comparatively low, but lately a steep rise in BC occurrence has been revealed. The cumulative lifetime risks of developing breast cancer for Chinese, Indian and Malay women are 1 in 16, 1 in 17 and 1 in 28, respectively. Surprisingly, in Malay females the stage 3 and 4 tumor sizes are largest and the mortality is highest. Unlike in developed nations, where the disease is prevalent among older women, in Malaysia 52.3 percent of BC cases occur in women of active reproductive age, below 50 [2,3]. The types of cancer found in this age group are often more difficult to treat. The factor(s) that triggered these phenomena are yet to be ascertained. At present, there is scant research being carried out to understand these phenomena in Malaysia [4]. This would require an understanding of BC occurrence at a deep molecular level, as factors such as genetic heredity, environment, diet and cultural features are likely implicated. Angiogenin (ANG), which is known as a potent inducer of neovascularization (angiogenesis), is thought to be involved in the development of solid tumors, such as the proliferation of BC cells. Hence, it might be considered a biomarker of breast cancer. Besides, ANG accounts for the formation of vasculature in the adult during normal tissue repair, such as the remodeling of the female reproductive organs. Lately it was proposed as one of the key growth factors that might affect the development of the fetus and newborns [5]. ANG is a single-chain, non-glycosylated polypeptide, 123 amino acids in length, with a molecular mass of 14 kDa [6]. It is secreted by some tumor cells and is produced by a variety of cell types, including vascular endothelial cells, smooth muscle cells, fibroblasts, tumor colonic epithelium, normal peripheral blood lymphocytes, lung and colonic epithelial tumor cell lines, and primary gastrointestinal adenocarcinomas. It binds specifically to endothelial cells and elicits second messenger systems [7]. ANG shows a high degree of homology with pancreatic ribonuclease A, and its capacity to induce blood vessel growth is dependent on its ribonucleolytic activity. The receptor for ANG is unknown. However, it would appear that the cytoskeletal protein α-actinin-2, either as a receptor or as a binding molecule, is essential for the expression of angiogenin's effects [8-10]. It has been suggested that ANG first binds to actin, followed by dissociation of the actin-ANG complex and subsequent activation of tissue plasminogen activator.
Destruction of the basement membrane is considered a prerequisite for endothelial cell migration during neovascularization [9, 10]. Although ANG would appear to act principally extravascularly or perivascularly, circulating ANG has been detected in the normal serum of healthy individuals at concentrations ranging from over 100 to more than 300 ng/mL [11, 12]. The aim of this study was to conduct the first ever assay of blood ANG levels in three clinically important groups of the Malaysian population, to reveal the role of angiogenin as a potential breast cancer biomarker.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 37–39, 2011. www.springerlink.com
II. MATERIALS AND METHOD

Blood Sampling from Patients. Consent was obtained from 35 breast cancer patients (Malays, Chinese, Indians) and 11 healthy female volunteers (all staff nurses of Klinik An-Nur Pasir Gudang and Hospital Penawar Pasir Gudang, Johor), and the study was approved by USM's research and ethics committee. Additionally, 14 blood samples were obtained from patients with end-stage renal failure who were on regular, compliant hemodialysis treatment and medication, as well as 3 control samples from the apparently healthy staff of the laboratory of the Advanced Medical and Dental Institute (AMDI), USM. The age span was from 25 to 55 years. All participants in the study signed the consent form prior to inclusion, following a detailed explanation of the nature and aims of the research. The study cohort was split into three groups: Group I (14 healthy females), Group II (medical illness/on treatment but no cancer, 13 patients) and Group III (breast cancer patients, 8 patients, of whom only one was post-operated: medullary breast carcinoma).

Blood Samples Preparation. Blood samples were collected in EDTA vials and centrifuged for 15 minutes at 1000 x g within 30 minutes. The separated plasma was placed in Eppendorf tubes and transported in an ice-packed container within 10 hours to the AMDI laboratory for further measurements. In the laboratory the samples were stored in a freezer at -20°C and processed in the shortest possible time.

Plasma/Serum Angiogenin Measurement. The Quantikine Human ANG Immunoassay kit (a solid-phase ELISA, R&D Systems, Inc.), containing recombinant human ANG and antibodies raised against the recombinant factor, was used to measure angiogenin in the serum (sANG) and plasma (pANG) of patients. Measurements were conducted with an automated microplate reader (Dialab GmbH, Austria) in line with the manufacturer's assay procedure. Comparisons of the data obtained were made using Student's t-test or the Mann–Whitney test, where appropriate.

III. RESULTS AND DISCUSSION

The key data of the study are shown in Figures 1 to 3. The accuracy and reproducibility of the method were proven in multiple careful preliminary experiments. In line with numerous literature reports, sANG levels were found to be substantially lower than pANG levels in all study cohorts (see Fig. 1). This is attributed to the stripping off of a certain amount of ANG during clotting. Serum ANG was chosen for the comparative analysis. To avoid the effects of this artifact with the given limited number of measurements, the data were statistically treated using the distribution-free Mann–Whitney test, as all the requirements associated with it are met. No statistically significant age-related correlation of sANG was found in any of the study groups. However, considerably higher sANG levels were measured in both the cancer and non-cancer Groups II and III (see Fig. 2 and Fig. 3) compared with the sANG level of the healthy women, Group I. This finding shows that angiogenin is involved in the organism's response to the immune stress caused by the diseased organ and the subsequent medical treatment. To reveal the specific mechanism of ANG action, more detailed research at the molecular level is required. This study reveals a strikingly low sANG level in the healthy Malaysian female population in comparison with that of northern countries (see Fig. 1 and Table 1).

Table 1 Serum ANG level in females of some northern countries

Country  sANG, ng/ml    Method  Reference
Japan    362.3 ± 84.1   ELISA   Hisai et al., 2003
Germany  362.1 ± 15.0   ELISA   Urugel et al., 2001
Greece   394.6 ± 137.6  ELISA   Koutroubakis et al., 2004
US       328.1 ± 123.3  ELISA   Majumder et al., 2003
Italy    264.0          ELISA   Molica et al., 2003

Fig. 1 Age-related plasma (dots) and serum (circles) ANG levels in Group I (healthy females)

There are also a few reports [13, 14] on the sex, age, body size, genetic and environmental determination of ANG levels in the healthy population. In our study there were no
age-related correlations found (perhaps due to the limited number of patients involved). However, the unusually low ANG level found in Malaysian females may well be associated with genetic and environmental factors. This phenomenon may be a characteristic feature of tropical countries.
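The group comparisons above rely on the distribution-free Mann–Whitney test. A minimal pure-Python sketch of the U statistic it is built on follows; the sANG values are illustrative, not the study's data:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for two independent samples.

    Pools both samples, assigns 1-based ranks (ties receive the
    average rank of their block), and returns U for the first sample.
    """
    pooled = sorted((v, idx) for idx, v in enumerate(list(xs) + list(ys)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                          # extend over a tie block
        avg_rank = (i + j) / 2.0 + 1.0      # average 1-based rank of the block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1 = len(xs)
    rank_sum_1 = sum(ranks[:n1])            # rank sum of the first sample
    return rank_sum_1 - n1 * (n1 + 1) / 2.0

# Illustrative sANG values (ng/mL): complete separation gives U = 0,
# the strongest possible evidence of a group difference.
healthy = [55, 60, 62, 70, 75]
patients = [120, 130, 140, 150, 160]
print(mann_whitney_u(healthy, patients))  # 0.0
```

In practice U is compared against tabulated critical values (or a normal approximation) to obtain a p-value; `scipy.stats.mannwhitneyu` performs both steps.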
Fig. 2 Age-related serum ANG levels in Group II (non-cancer patients, on treatment)

Fig. 3 Age-related serum ANG levels in Group III (non-operated breast cancer patients; the operated patient marked separately)

IV. CONCLUSIONS

Raised levels of angiogenin were found in the serum of Malaysian female breast cancer patients in this study. This allows us to propose the serum ANG level as a biomarker for assessing the severity of the disease. Unusually low ANG levels were found for healthy Malaysian women compared to females of northern countries. This finding suggests that special focus should be given to exploring the origin of this phenomenon in females of Malaysia. Further detailed research would reveal its impact on the well-being of the female population and the young generation, and might disclose ways in which such an ANG deficiency could be corrected.

ACKNOWLEDGMENT

We thank Dr. Ishak bin Mat (AMDI) for critical comments and fruitful discussions and Dr. Imran bin Khalid (HoD Surgery, Seberang Jaya Hospital) for providing specimens. This work was supported by USM Short Term Grant/Medical Biotechnology (account 100/227/CIPPT).

REFERENCES
1. Breast Cancer – Facts & Stats (2008) http://www.radiologymalaysia.org/breasthealth/About/FactsNStats.htm
2. Parkin DM, Bray FI, Devesa SS (2001) Cancer burden in the year 2000: the global picture. Eur J Cancer 37:4-66
3. Lim GCC, Halima Y, Lim TO (Eds) (2003) Cancer Incidence in Malaysia. Med J Malaysia 58(5):141
4. Yip CH, Abdullah NH (2003) Spectrum of breast cancer in Malaysian women: overview. World Journal of Surgery 27:921-923
5. Lassus P, Teramo K, Nupponen I, et al. (2003) Vascular endothelial growth factor and angiogenin levels during fetal development and maternal diabetes. Biol Neonate 84(4):287-292
6. Saxena SK, Rybak SM, Davey RT, et al. (1992) Angiogenin is a cytotoxic, tRNA-specific ribonuclease in the RNase A superfamily. J Biol Chem 267(30):21982-21986
7. Gerritsen ME (2011) Angiogenesis. Comprehensive Physiology 351-383
8. Hu G-F, Chang SI, Riordan JF, et al. (1991) An angiogenin-binding protein from endothelial cells. Proc Natl Acad Sci USA 88:2227-2231
9. Hu G-F, Strydom DJ, Fett JW, et al. (1993) Actin is a binding protein for angiogenin. Proc Natl Acad Sci USA 90:1217-1221
10. Moroianu J, Riordan JF (1994) Nuclear translocation of angiogenin in proliferating endothelial cells is essential to its angiogenic activity. Proc Natl Acad Sci USA 91:1677-1681
11. Hu G-F, Riordan JF, Vallee BL (1994) Angiogenin promotes invasiveness of cultured endothelial cells by stimulation of cell-associated proteolytic activities. Proc Natl Acad Sci USA 91:12096-12100
12. Folkman J, Klagsbrun M (1987) Angiogenic factors. Science 235:442-447
13. Pantsulaia Ia, Trofimov S, Kobyliansky E, et al. (2006) Genetic and environmental determinants of circulating levels of angiogenin in a community-based sample. Clin Endocrinol (Oxf) 64(3):271-279
14. Sherif EM, Matter RM, Abdel Aziz, et al. (2009) Serum angiogenin levels in type 1 diabetic children and adolescents: relation to microvascular complications. Pediatric Diabetes 10 (Suppl. 11):44

Author: Vladimir Piven
Institute: University Technology Petronas
Street: Bandar Seri Iskandar
City: 31750 Tronoh
Country: Malaysia
Email: [email protected]
Detection of Acute Leukaemia Cells Using Variety of Features and Neural Networks

A.S. Abdul Nasir1, M.Y. Mashor1, and H. Rosline2

1 Electronic & Biomedical Intelligent Systems (EBItS) Research Group, School of Mechatronics Engineering, University Malaysia Perlis, 02600 Ulu Pauh, Perlis, Malaysia
2 Department of Haematology, School of Medical Sciences, University Science Malaysia, Kubang Kerian, Kelantan, Malaysia
Abstract— This paper presents the application of feature combinations and a Multilayer Perceptron (MLP) neural network for the classification of individual white blood cells (WBC) in normal and acute leukaemia blood samples. The WBC are classified as either normal or abnormal for the purpose of the screening process. A total of 17 main features, consisting of size, shape and colour based features, were extracted from the segmented nuclei of both types of blood samples and used as the neural network inputs for the classification process. In order to determine the applicability of the MLP network, two different training algorithms, namely the Levenberg-Marquardt and Bayesian Regulation algorithms, were employed to train the MLP network. Overall, the results represent good classification performance when employing the size, shape and colour based features with both training algorithms. However, the MLP network trained using the Bayesian Regulation algorithm proved to be slightly better, with a classification performance of 94.51% for the overall proposed features. Thus, the results significantly demonstrate the suitability of the proposed features and of classification using the MLP network for acute leukaemia cell detection in blood samples. Keywords— Acute Leukaemia, White Blood Cells, Feature Extraction, Classification, Multilayer Perceptron Neural Network.
I. INTRODUCTION The term leukaemia refers to cancers of the white blood cells (WBC). It is characterized by an abundance of abnormal white blood cells (blasts) in the body. Leukaemia begins in the bone marrow and spreads to the other parts of the body. When a person has leukaemia, large numbers of abnormal WBC are produced in the bone marrow, and they do not stop growing when they should [1]. Leukaemia can be cured if it is detected and treated at an early stage. In the general leukaemia screening process, a complete blood count is the first procedure performed by haematologists [2]. If there are abnormalities in this count, a morphological study of a bone marrow smear is done to confirm the presence of leukaemia cells. Currently, the microscopic investigation of blood cells is done manually through identification under the light microscope. However, the manual observation method has an error rate of between 30% and 40%, depending on the experience of the haematologist [3]. In addition, even though there are machines to automate the counting process, namely Automated Haematology Counters, certain developing countries are not able to deploy such expensive machines in every hospital laboratory in the country [2]. The artificial neural network (ANN) is an intelligent technique that can be utilized for complex data analysis. Neural networks have been applied in many areas such as modeling, pattern recognition, bioelectric signal processing, diagnostics and prognosis [4]. ANNs are differentiated from each other based on their architecture and learning algorithm. Thus, choosing the right ANN to perform a certain task is important in determining the performance of the ANN. Research on applications of ANNs in medicine has been done since the late 80s as an aid for diagnosis and treatment. In medical research, the multilayer feedforward network using the back propagation (BP) training algorithm is seen as the most popular among researchers and neural network users for classification. ANNs have been applied as statistical tools to solve problems such as predicting the health status of HIV/AIDS patients [5] and the diagnosis of several types of cancer, such as cervical cancer [6], breast cancer [4] and bladder cancer [7]. A number of useful studies analyzing and classifying blood cells using neural networks have been carried out. For instance, Theera-Umpon [8] used an ANN to classify morphological granulometric features of the nucleus in automatic bone marrow white blood cell classification. In that research, four features extracted from each nucleus were tested using a Bayes classifier and an artificial neural network. The Levenberg-Marquardt algorithm was used as it provides faster convergence, but it requires a heavy computational load in the training process. The results showed that features from the nucleus alone can be utilized to achieve a classification rate of 77% on the test sets. Thus, the current study proposes to apply an MLP network for classifying individual WBC as either normal or abnormal based on the features extracted from both normal and acute leukaemia blood samples.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 40–46, 2011. www.springerlink.com
II. FEATURE EXTRACTION

In this research, 300 images were captured from normal blood samples, while 500 images (200 Acute Lymphoblastic Leukaemia and 300 Acute Myelogenous Leukaemia) were captured from acute leukaemia blood samples. During observation of the WBC, it was found that the shape of the cytoplasm is quite secondary for the characterization of WBC. Thus, the morphological features of the nucleus are predominantly used for the extraction of WBC features. Here, the features were extracted from the segmented nucleus, which had been obtained by applying a combination of the dark stretching technique and colour image segmentation based on the HSI (Hue, Saturation, Intensity) colour space [9]. Figures 1(a) and (b) show the original images for normal blood and acute leukaemia, respectively, while Figures 2(a) and (b) show the resultant segmented nuclei for normal blood and acute leukaemia, respectively.

(a) Normal blood  (b) Acute leukaemia
Fig. 1 Original images

(a) Normal blood  (b) Acute leukaemia
Fig. 2 Resultant images of segmented nucleus

After the segmented nucleus has been obtained, the next step is to perform feature extraction. The extracted features provide useful information for the classification of WBC into normal and abnormal. Here, the extracted features consist of size, shape and colour based features. The extracted features are stored as parameters (area, perimeter and others) and then classified using an ANN. The classification of WBC is based on these three main feature types, as all types of information have to be incorporated in making the decision.

A. Size, Shape and Colour Based Features

Size is expressed by the area of the fully segmented nucleus. Here, the area of the nucleus can be obtained from Equation 1:

Area of nucleus, \( A_n = \mu_{00} = \sum_{x=1}^{X} \sum_{y=1}^{Y} f(x,y) \)  (1)

For classifying the WBC successfully, haematologists examine the shape of the cells. In order to reflect the shape information in the feature vector, several measures such as roundness, compactness [10] and moments [11] are taken from the literature and used to represent the information of shape:

\( \mathrm{Roundness} = \frac{(\mathrm{Perimeter})^2}{4\pi \cdot \mathrm{Area}} \)  (2)

\( \mathrm{Compactness} = \frac{(\mathrm{Perimeter})^2}{\mathrm{Area}} \)  (3)

A moment is a sequence of numbers used for characterizing the shape of an object. Here, the position of the centroid \((x_c, y_c)\) is used to determine the central moments by identifying the position of the object [11]:

\( x_c = \frac{1}{A_n} \sum_{x=1}^{X} \sum_{y=1}^{Y} x\, f(x,y) \)  (4)

\( y_c = \frac{1}{A_n} \sum_{x=1}^{X} \sum_{y=1}^{Y} y\, f(x,y) \)  (5)

Since the object is balanced at the centroid, the first-order central moment is zero. Meanwhile, the 2nd to 5th order central moments can be obtained using Equation 6 [11], where the sum of the powers \((p+q)\) is the order of the moment:

\( \mu_{pq} = \sum_{x=1}^{X} \sum_{y=1}^{Y} (x - x_c)^p (y - y_c)^q f(x,y) \)  (6)

The first and second affine invariant moments are defined in Equations 7 and 8, respectively [12]:

\( L_1 = \frac{1}{\mu_{00}^4} \left( \mu_{02}\mu_{20} - \mu_{11}^2 \right) \)  (7)

\( L_2 = \frac{1}{\mu_{00}^{10}} \left( \mu_{03}^2\mu_{30}^2 - 6\mu_{03}\mu_{12}\mu_{21}\mu_{30} + 4\mu_{12}^3\mu_{30} + 4\mu_{03}\mu_{21}^3 - 3\mu_{12}^2\mu_{21}^2 \right) \)  (8)
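The area, centroid and central moments of Equations 1 and 4-6 can be sketched in plain Python over a binary nucleus mask. This is an illustrative re-implementation, not the authors' MATLAB code; the small 5x5 mask stands in for a segmented nucleus image:

```python
def central_moment(mask, p, q):
    """Central moment mu_pq of a binary mask (Eq. 6); mask[y][x] is 0 or 1."""
    area = sum(sum(row) for row in mask)  # A_n = mu_00 (Eq. 1)
    # Centroid (Eqs. 4 and 5)
    xc = sum(x * v for y, row in enumerate(mask) for x, v in enumerate(row)) / area
    yc = sum(y * v for y, row in enumerate(mask) for x, v in enumerate(row)) / area
    return sum((x - xc) ** p * (y - yc) ** q * v
               for y, row in enumerate(mask) for x, v in enumerate(row))

# A 3x3 square "nucleus" inside a 5x5 image
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]

print(central_moment(mask, 0, 0))  # 9.0  (area)
print(central_moment(mask, 1, 0))  # 0.0  (first-order central moment vanishes)
print(central_moment(mask, 2, 0))  # 6.0  (spread of the object along x)
```

Roundness and compactness (Eqs. 2 and 3) additionally require the perimeter, which for a raster mask is usually estimated by counting boundary pixels or boundary edges.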
Several works have proposed the use of gray-level image processing techniques for extracting blood cell features. However, the actual screening process by haematologists is performed on stained slides, where leukaemia is detected based on the colour and shape of the blasts. To overcome this problem, identification of blood cells based on colour is proposed. Here, the colour features are derived from the red, green, blue and intensity components. A total of 8 colour features, consisting of the mean and standard deviation of each colour component, are extracted from the nucleus. The equations for performing the colour based feature extraction are defined in Equations 9 to 16 [11], where An is the area of the nucleus.
Mean of red, \( \bar{R}_n = \frac{1}{A_n} \sum_{x} \sum_{y} R_n(x,y) \)  (9)

Mean of green, \( \bar{G}_n = \frac{1}{A_n} \sum_{x} \sum_{y} G_n(x,y) \)  (10)

Mean of blue, \( \bar{B}_n = \frac{1}{A_n} \sum_{x} \sum_{y} B_n(x,y) \)  (11)

Mean of intensity, \( \bar{I}_n = \frac{1}{A_n} \sum_{x} \sum_{y} I_n(x,y) \)  (12)

Standard deviation of red, \( \sigma_{Rn} = \frac{1}{A_n} \sum_{x} \sum_{y} \left( R_n(x,y) - \bar{R}_n \right)^2 \)  (13)

Standard deviation of green, \( \sigma_{Gn} = \frac{1}{A_n} \sum_{x} \sum_{y} \left( G_n(x,y) - \bar{G}_n \right)^2 \)  (14)

Standard deviation of blue, \( \sigma_{Bn} = \frac{1}{A_n} \sum_{x} \sum_{y} \left( B_n(x,y) - \bar{B}_n \right)^2 \)  (15)

Standard deviation of intensity, \( \sigma_{In} = \frac{1}{A_n} \sum_{x} \sum_{y} \left( I_n(x,y) - \bar{I}_n \right)^2 \)  (16)

III. CLASSIFICATION OF WHITE BLOOD CELLS USING MLP NETWORK

After the feature extraction has been completed, the suitability of the extracted features for classifying the WBC is tested using an artificial neural network. Cybenko and Hornik proved that an MLP network with one hidden layer has the capability to approximate any continuous function up to a certain accuracy [13]. The MLP network is designed by specifying the architecture and then using a training algorithm to train the network with training data to adjust the synaptic weights. The classification performance of the MLP network depends on the structure of the network and on the training algorithm. Here, two different training algorithms, namely Levenberg-Marquardt (LM) and Bayesian Regulation (BR), are used in order to determine the applicability of the MLP network.

A. The Levenberg-Marquardt Algorithm

The Levenberg-Marquardt algorithm is a gradient-based, deterministic local optimization algorithm. The LM algorithm is used to train the MLP network because it is more powerful than the conventional gradient descent algorithms [13]. It has been proved that the LM algorithm has a much better learning rate and can maintain relative stability compared to the well-known back propagation (BP) algorithm. The BP algorithm is based on the steepest descent algorithm, while the LM algorithm is an approximation of the Gauss-Newton algorithm. Detailed information on the LM algorithm can be found in [13].

B. The Bayesian Regulation Algorithm

The Bayesian Regulation algorithm can be regarded as a smoothed version of the Levenberg-Marquardt algorithm. It is a training algorithm that updates the weight and bias values according to the Levenberg-Marquardt optimization. Here, the BR algorithm is implemented to improve generalization [14]. It optimizes the generalization quality of network training by minimizing a combination of squared errors and weights. By virtue of its generalization quality, it should be less prone to overfitting the input data, given the same architecture or set of inputs. The basic weight adjustment step of a BR iteration is [15]:

\( x_{k+1} = x_k - \left[ J^{T} J + \mu I \right]^{-1} J^{T} e \)  (17)

In Equation 17, J is the Jacobian matrix containing the first derivatives of the network errors with respect to the weight and bias values, μ is the Marquardt adjustment parameter and e is a vector of network errors. The performance function in the BR algorithm involves modifying the mean square error, mse, to improve the generalization capability of the network:

\( \mathrm{mse} = \frac{1}{n} \sum_{i=1}^{n} e_i^2 \)  (18)

\( \mathrm{msw} = \frac{1}{n} \sum_{j=1}^{n} w_j^2 \)  (19)

The function in Equation 18 is expanded with the addition of the mean square weights, msw. Thus the performance function for the BR algorithm becomes [15]:

\( \mathrm{mse}_{br} = \beta\,\mathrm{mse} + \alpha\,\mathrm{msw} \)  (20)

where α and β are parameters to be optimized in the Bayesian framework. The advantage of using the BR algorithm is that it is best at overcoming overfitting problems by taking into account the goodness-of-fit as well as the network architecture [14].

C. Methodology for Classification of White Blood Cells

The applicability of the MLP network will be tested by employing two different training algorithms, namely the Levenberg-Marquardt and Bayesian Regulation algorithms. Several analyses will be conducted during the classification of WBC in order to compare the classification performance of the LM and BR algorithms. The analyses will be performed using the training algorithms provided in the MATLAB R2010a software. The analyses to be done are:

a) Analysis of classification performance based on individual features.
b) Analysis of classification performance based on size, shape and colour based features.
c) Analysis of optimum numbers of training epochs and hidden nodes.

The number of input nodes, I, depends directly on the number of input features to be placed. The inputs are normalized within the range 0 to 1 to avoid any features dominating during the training phase. Here, the network uses only one output node for the classification of two classes: normal WBC (0) and abnormal WBC (1). In order to compare the overall classification performance of the LM and BR algorithms, the analysis consists of finding the suitable numbers of training epochs and hidden nodes that provide the optimal classification performance in the MLP network. For the hidden nodes, the number used during the training phase is increased within the range 1 to 50 nodes until the optimal classification is obtained. The dataset is divided in a 60:40 ratio for the training and testing phases. Here, a total of 480 images comprising 1200 WBC are used for training, while the remaining 320 images comprising 801 WBC are used for testing. There are 17 main features, corresponding to 32 input features, extracted from the segmented nucleus. It should be clarified that certain main features, such as the central moments, consist of several input features, giving a total of 32 input features.
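In the paper the LM and BR training itself is done with MATLAB's built-in routines. Purely to illustrate the update rule of Equation 17, here is one Levenberg-Marquardt step in NumPy; the tiny one-parameter model is a made-up example, not the WBC network:

```python
import numpy as np

def lm_step(J, e, x, mu):
    """One Levenberg-Marquardt update (Eq. 17):
    x_{k+1} = x_k - [J^T J + mu*I]^{-1} J^T e,
    where J holds d(error)/d(weight) and e is the current error vector."""
    J, e, x = np.asarray(J, float), np.asarray(e, float), np.asarray(x, float)
    n = J.shape[1]
    return x - np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)

# Toy model f(t) = w*t fitted to targets y = 2t, starting from w = 0.
t = np.array([1.0, 2.0, 3.0])
y = 2.0 * t
w = np.array([0.0])
e = y - w[0] * t                 # errors of the current model
J = (-t).reshape(-1, 1)          # d(error)/dw = -t
w = lm_step(J, e, w, mu=1e-9)
print(w)                         # ~[2.0]: one step solves a linear problem
```

A large μ shrinks the step towards gradient descent, which is what gives LM its stability; the BR variant additionally re-weights the mse/msw trade-off of Equation 20 to penalize large weights.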
IV. RESULTS AND DISCUSSION

White blood cells were classified as either normal or abnormal. The applicability of the MLP network was tested by employing two different training algorithms, namely the Levenberg-Marquardt and Bayesian Regulation algorithms. Thus, the experimental results obtained using the LM and BR algorithms in terms of testing accuracy, optimum number of hidden nodes and optimum number of training epochs will be discussed. The suitability of the proposed extracted features for classifying the types of WBC is determined based on the percentage of testing accuracy. Tables 1 and 2 present the results for the classification performance based on individual features using the LM and BR algorithms, respectively. A total of 17 main features were extracted from each normal and leukaemia blood sample.

Table 1 Classification performance of individual feature using LM algorithm

                                         Accuracy (%)
No  Main Features                        Training  Testing  Overall
1   Area                                 78.42     85.66    76.99
2   Perimeter                            77.83     75.06    76.72
3   Roundness                            81.50     79.55    80.72
4   Compactness                          81.50     79.43    80.67
5   Mean of intensity                    88.33     75.56    87.26
6   Mean of red                          95.25     92.02    93.96
7   Mean of green                        81.00     77.81    79.72
8   Mean of blue                         78.33     76.93    77.77
9   Standard deviation of intensity      88.00     86.16    87.26
10  Standard deviation of red            95.58     92.02    94.16
11  Standard deviation of green          81.33     78.55    80.21
12  Standard deviation of blue           78.25     75.94    77.32
13  2nd order central moment             84.83     77.06    81.72
14  3rd order central moment             80.50     76.68    78.97
15  4th order central moment             86.67     78.55    83.42
16  5th order central moment             82.92     75.56    79.97
17  Affine invariant of L1 and L2        82.58     80.17    81.62
A.S. Abdul Nasir, M.Y. Mashor, and H. Rosline
Table 2 Classification performance of individual features using the BR algorithm

No  Main feature                      Training (%)  Testing (%)  Overall (%)
1   Area                              78.58         75.81        77.40
2   Perimeter                         77.83         74.90        76.62
3   Roundness                         81.42         79.43        80.62
4   Compactness                       81.17         79.30        80.42
5   Mean of intensity                 88.33         85.66        87.26
6   Mean of red                       95.50         92.02        94.11
7   Mean of green                     81.00         77.81        79.72
8   Mean of blue                      77.83         75.81        77.02
9   Standard deviation of intensity   88.00         86.16        87.26
10  Standard deviation of red         95.58         92.02        94.16
11  Standard deviation of green       81.33         78.55        80.22
12  Standard deviation of blue        77.83         75.81        77.02
13  2nd order central moment          83.83         77.18        81.17
14  3rd order central moment          78.08         75.68        77.12
15  4th order central moment          84.83         78.80        82.42
16  5th order central moment          77.83         75.31        76.82
17  Affine invariant of L1 and L2     81.50         79.67        80.77
Based on the results in Table 1, all 17 features provided good accuracy in classifying the WBC as normal or abnormal, with testing accuracies above 70%. The mean and standard deviation of red provided the highest testing accuracy, 92.02%, while the perimeter of the nucleus provided the lowest, 75.06%. Based on the results in Table 2, all 17 features again achieved testing accuracies above 70%. As in Table 1, the mean and standard deviation of red gave the highest testing accuracy, 92.02%, while the perimeter of the nucleus gave the lowest, 74.90%. Across Tables 1 and 2, the area of the nucleus provided the highest classification performance among the size based features, with testing accuracies of 85.66% and 75.81% for the LM and BR algorithms, respectively. Among the shape based features, the affine invariant provided the highest classification performance, with testing accuracies of 80.17% and 79.67%, respectively. Among the colour based features, a comparison of mean and standard deviation shows that the standard deviation outperformed the mean for both intensity and the RGB colour space. Tables 3 and 4 present the classification performance of the combined size, shape and colour based features using the LM and BR algorithms, respectively.

Table 3 Classification performance based on size, shape and colour based features using the LM algorithm

Analysis                    Size    Shape   Colour
Number of inputs            2       22      8
Number of hidden nodes      4       9       9
Number of training epochs   150     150     150
Training accuracy (%)       83.67   89.42   99.00
Testing accuracy (%)        80.92   82.67   90.90
Overall accuracy (%)        82.62   86.05   95.75

Table 4 Classification performance based on size, shape and colour based features using the BR algorithm

Analysis                    Size    Shape   Colour
Number of inputs            2       22      8
Number of hidden nodes      4       6       9
Number of training epochs   150     150     150
Training accuracy (%)       83.17   86.58   98.83
Testing accuracy (%)        81.42   81.30   91.02
Overall accuracy (%)        82.47   84.47   95.70
In total, 32 input features comprising the size, shape and colour based features were extracted from each normal and acute leukaemia segmented image. Based on the results in Table 3, the size, shape and colour based feature groups each provided high accuracy in classifying the WBC as normal or abnormal, with testing accuracies above 80%. The colour based features produced the highest testing accuracy, 90.90%, followed by the shape and size based features with 82.67% and 80.92%, respectively. As in Table 3, classification using the BR algorithm also gave promising results, with testing accuracies above 80%. Based on the results in Table 4, the colour based features produced the highest testing accuracy, 91.02%, followed by the size and shape based features with 81.42% and 81.30%, respectively. Across the two tables, the colour based features provided the highest testing accuracy, followed by the size and shape based features, which produced similar results. The results also show that the grouped size, shape and colour based features provide better classification performance than any individual feature. The classification performance based on all of the proposed features using the LM and BR algorithms is summarized in Table 5; here, all 32 input features were fed to the MLP network for the classification process.
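As an illustrative sketch of this setup (not the authors' implementation), the code below normalizes a hypothetical feature matrix to [0, 1] and passes one 32-element feature vector through a single-hidden-layer MLP with one sigmoid output node, thresholded at 0.5 to decide normal (0) versus abnormal (1). The weights are random placeholders standing in for parameters that would be fitted by the LM or BR algorithm, and the feature values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_normalize(X, eps=1e-12):
    """Scale each feature column to [0, 1] so no feature dominates training."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + eps)

def mlp_predict(x, W1, b1, W2, b2):
    """One hidden layer (tanh) and a single sigmoid output node;
    class 0 = normal WBC, class 1 = abnormal WBC."""
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    return int(y[0] >= 0.5)

n_inputs, n_hidden = 32, 9    # 32 features; 9 hidden nodes as in the LM network
W1 = rng.normal(size=(n_hidden, n_inputs))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = np.zeros(1)

# Hypothetical raw feature matrix: rows = WBC samples, columns = 32 features
X = rng.uniform(0.0, 200.0, size=(10, n_inputs))
Xn = minmax_normalize(X)
label = mlp_predict(Xn[0], W1, b1, W2, b2)
assert 0.0 <= Xn.min() and Xn.max() <= 1.0 and label in (0, 1)
```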
Detection of Acute Leukaemia Cells Using Variety of Features and Neural Networks
Table 5 Performance comparison between the LM and BR algorithms for classification of normal and abnormal WBC using the multilayer perceptron

Analysis                    Levenberg-Marquardt   Bayesian Regulation
Number of inputs            32                    32
Number of hidden nodes      9                     7
Number of training epochs   100                   120
Training accuracy (%)       99.75                 99.75
Testing accuracy (%)        94.39                 94.51
Overall accuracy (%)        97.60                 97.65
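The two-step search behind Table 5, for the optimum numbers of hidden nodes and training epochs, can be sketched as a grid search over both settings; the scoring function here is a made-up placeholder standing in for the measured testing accuracy of a trained MLP.

```python
import itertools

def grid_search(hidden_options, epoch_options, score_fn):
    """Return the (hidden nodes, epochs) configuration with the
    highest score (e.g. testing accuracy of the trained network)."""
    return max(itertools.product(hidden_options, epoch_options),
               key=lambda cfg: score_fn(*cfg))

# Placeholder score that peaks at 9 hidden nodes and 100 epochs,
# mimicking the optimum found for the LM-trained network
def fake_testing_accuracy(hidden, epochs):
    return 94.39 - abs(hidden - 9) - abs(epochs - 100) / 50.0

best = grid_search(range(1, 51), range(50, 201, 10), fake_testing_accuracy)
assert best == (9, 100)
```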
In order to determine the training algorithm that provides the best classification performance, two main analyses were carried out: an analysis of the number of training epochs and an analysis of the number of hidden nodes. The optimum numbers of training epochs and hidden nodes are those at which the MLP network achieves its highest classification performance. Based on the results in Table 5, both the LM and BR algorithms provide good classification performance, with testing accuracy above 90%. However, the BR algorithm proved slightly better, with a testing accuracy of 94.51%, while the LM algorithm obtained a testing accuracy of 94.39%. To achieve the optimal classification performance, the LM and BR algorithms required 9 and 7 hidden nodes, and 100 and 120 training epochs, respectively. The BR algorithm thus required fewer hidden nodes while still providing high generalization ability. Using all 32 input features, the BR algorithm therefore proved to be the better training algorithm, giving higher testing accuracy with fewer hidden nodes for the classification of normal and abnormal WBC.

V. CONCLUSIONS

The MLP networks trained using the LM and BR algorithms were used to classify WBC as either normal or abnormal. Here, 17 main features representing the size, shape and colour based features were extracted from the segmented nucleus and used as inputs to the neural network. The results show that the proposed size, shape and colour based features have the capability to classify WBC with good classification performance. In addition, good classification performance was achieved with both the LM and BR algorithms. However, the MLP network trained using the Bayesian Regulation algorithm proved slightly better, with a classification performance of 94.51% using the overall 32 input features. The results thus demonstrate the suitability of the proposed features and of MLP-based classification for the detection of acute leukaemia cells in blood samples.

ACKNOWLEDGMENT

The authors gratefully acknowledge and thank the team members of the acute leukaemia research group and Universiti Sains Malaysia (USM). We would also like to acknowledge the Malaysian Government for providing financial support under the Fundamental Research Grant Scheme of the Ministry of Higher Education.
REFERENCES

1. Henry D H (2010) Latest News in Blood Cancer Research
2. Priyankara G P M, Seneviratne O W, Silva R K O H et al. (2006) An Extensible Computer Vision Application for Blood Cell Recognition and Analysis
3. Reta C, Altamirano L, Gonzalez J A et al. (2010) Segmentation of Bone Marrow Cell Images for Morphological Classification of Acute Leukemia. Proceedings of the Twenty-Third International Florida Artificial Intelligence Research Society Conference (FLAIRS 2010), USA, 2010, pp 86–91
4. Hamdan H (2004) The Application of Backpropagation Neural Network in the Prognosis of Breast Cancer. Master's thesis, Faculty of Computer Science and Information Technology, University Malaya, Malaysia
5. Kwak N K, Lee C (1997) A Neural Network Application to Classification of Health Status of HIV/AIDS Patients. Journal of Medical Systems 21:87–97
6. Mat-Isa N A, Mashor M Y, Othman N H (2007) An automated cervical pre-cancerous diagnostic system. Artificial Intelligence in Medicine 42:1–11
7. Frounchi J, Karimian G, Keshtkar A (2009) An Artificial Neural Network Hardware for Bladder Cancer. European Journal of Scientific Research 27:46–55
8. Theera-Umpon N (2007) Morphological Granulometric Features of Nucleus in Automatic Bone Marrow White Blood Cell Classification. IEEE Transactions on Information Technology in Biomedicine 11:353–359
9. Salihah A N A, Mashor M Y, Harun N H et al. (2010) Improving Colour Image Segmentation on Acute Myelogenous Leukaemia Images Using Contrast Enhancement Techniques. 2010 IEEE EMBS Conference on Biomedical Engineering & Sciences (IECBES 2010), Kuala Lumpur, Malaysia, 2010
10. Demir C, Yener B (2005) Automated cancer diagnosis based on histopathological images: a systematic survey. Tech. Rep., Rensselaer Polytechnic Institute
11. Gonzalez R C, Woods R E (2007) Digital Image Processing. Prentice Hall, New Jersey, USA
12. Ongun G, Halici U, Leblebicioglu K et al. (2001) Feature Extraction and Classification of Blood Cells for an Automated Differential Blood Count System. International Joint Conference on Neural Networks (IJCNN '01), Washington DC, USA, 2001, pp 2461–2466
13. Kisi O (2004) Multi-layer perceptrons with Levenberg-Marquardt training algorithm for suspended sediment concentration prediction and estimation. Hydrological Sciences Journal 49:1025–1040
14. Doan C D, Liong S (2004) Generalization for Multilayer Neural Network: Bayesian Regularization or Early Stopping
15. Saptoro A, Yao H M, Tadé M O et al. (2008) Prediction of coal hydrogen content for combustion control in power utility using neural network approach. Chemometrics and Intelligent Laboratory Systems 94:149–159

Author: Aimi Salihah Abdul Nasir
Institute: Universiti Malaysia Perlis
Street: Ulu Pauh
City: Perlis
Country: Malaysia
Email: [email protected]
A Novel Phantom for Accurate Performance Assessment of Bone Mineral Measurement Techniques: DEXA and QCT

A. Emami1, H. Ghadiri1,2, M.R. Ay1,2,3, S. Akhlagpour4, A. Eslami1, P. Ghafarian1, and S. Taghizadeh1

1 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran
2 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
3 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
4 Department of Interventional Radiology, Sina Hospital, Tehran University of Medical Sciences, Tehran, Iran
Abstract— Accurate performance assessment of bone mineral densitometry is crucial, because a highly exact estimation of bone status is needed for the correct diagnosis of patients with bone disease. Variation of parameters such as sensitivity and percentage error, which strongly affect densitometry results, may introduce some uncertainty into the diagnostic procedure. Determining this variation and designing a correction algorithm is therefore necessary to assure examiners about measurement results. In this study, phantoms consisting of soft tissue- and bone-equivalent materials were devised to accurately test bone densitometry systems. No single phantom has previously been available for the evaluation of both DEXA and QCT; this is the main motivation of this study, which aims to design a complete phantom. Four inserts in a spine phantom, covering a precisely controlled wide range of K2HPO4 concentrations, simulate trabecular bone and provide the basis for accurate checks of bone densitometry systems. It is shown that, although a linear correlation between measured and true density can be observed, the sensitivity and percentage error vary as the density of the spine changes. To correct and stabilize the sensitivity variation, an analytical algorithm is proposed.

Keywords— Quantitative Computed Tomography, DEXA, Bone mineral density, Phantom.
I. INTRODUCTION

Osteoporosis is a condition in which the bones become fragile, leading to a higher risk of fractures, such as breaking or cracking, than in normal bone. Osteoporosis occurs when bones lose minerals, such as calcium, more quickly than the body can replace them; any bone can be affected. Bone mineral density (BMD) is a key predictor of fracture risk in patients with osteoporosis and/or other metabolic bone diseases. Several methods are established for assessing bone content and density, namely dual energy X-ray absorptiometry (DEXA), quantitative computed tomography (QCT), bone ultrasonography and radiographic absorptiometry, of which DEXA and QCT are the best known today [1].
Although DEXA has a limited ability to evaluate bone geometry and cannot provide separate cortical and trabecular bone evaluation, its good precision and low radiation dose have led to its acceptance as the gold standard among bone densitometry techniques. QCT refers to a dedicated computed tomography (CT) technique for determining bone mineral density. It is unique in that it provides true three-dimensional imaging and reports BMD as a true volumetric density measurement. The advantage of QCT is its ability to isolate a region of interest from surrounding tissues; it can therefore localize an area of only trabecular bone within a vertebral body, leaving out the elements most affected by degenerative change and sclerosis. QCT can complement DEXA through its ability to assess BMD in mg/cm3 (compared with DEXA, which measures areal BMD in mg/cm2), provide information about bone geometry, and enable compartmental bone assessments. Healthcare and professional organizations have expressed concern about the lack of standardization, the assessment of accuracy and precision, and the variation of sensitivity and error with bone density in bone mineral measurement techniques such as DEXA and QCT; these are generally recognized as important and unresolved issues [2,3]. As quality control is absolutely necessary in bone assessment techniques, especially in treatment follow-up, a phantom able to assess a variety of specifications of the measurement system would be crucial. As a result, several attempts have been made to accurately assess bone mineral densitometry techniques, and many standard phantoms have been designed for this purpose. The European Spine Phantom (ESP) is a geometrically defined, semi-anthropomorphic phantom. It contains a spine insert consisting of three vertebrae of increasing bone mineral density and cortical thickness.
The Hologic anthropomorphic spine phantom consists of four vertebrae with similar densities. The Bona Fide phantom designed by BioImaging Technology Inc., the Lunar aluminium spine phantom, the Norland hydroxyapatite spine phantom and some other phantoms designed for the quality assurance of bone
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 47–50, 2011. www.springerlink.com
A. Emami et al.
densitometers have a limited capability to measure sensitivity and error over a wide range of densities: at lower densities, which correspond to osteoporotic values, these parameters deviate from the average and may cause significant error in bone mineral densitometry. To accurately estimate the sensitivity and error of both the DEXA and QCT techniques over a wide range of densities, a large number of densities spanning the full range of probable bone density must be scanned. In addition, in contrast to the Lunar and Norland phantoms, the phantom should be usable with both of the mentioned techniques. In this study, a new phantom that easily supports a wide range of densities and can be used in both DEXA and QCT is designed. Performing bone densitometry measurements over a large number of densities with this phantom allows a more accurate assessment of the sensitivity and error of the systems, especially at low densities.
II. MATERIALS AND METHODS

A. Scanners

QCT and DEXA were used to evaluate BMD in this study. Quantitative CT was performed using a 64-slice GE LightSpeed VCT scanner (GE Healthcare Technologies, Waukesha, WI) with volumetric data acquisition capability, using the following acquisition and reconstruction parameters: 120 kVp, 300 mAs, and 5 mm slice thickness. DEXA was measured on a GE Lunar scanner (GE Healthcare Technologies, Madison, WI, USA) using the standard technique for spine examinations.

B. Designing of the Dedicated In-House Phantoms

QCT Calibration Phantom

The QCT calibration phantom was designed and constructed to be used as an accessory to a CT scanner, as a reference calibration standard for comparing the CT numbers of unknown materials to those of known materials. The phantom is made of a plastic base material, polyethylene, containing 5 cylindrical holes (190±0.5 mm diameter). Each cavity is filled with a reference solution. Four cavities are filled with different solutions of K2HPO4 in distilled water, providing a calibration for calculating bone mineral density; we used 0, 50, 100 and 200 mg/cc solutions of K2HPO4 in distilled water. The fifth cavity is a fat-equivalent reference sample filled with 60% ethanol [2].

QA Phantom

A quality assurance phantom was designed and constructed to assess the performance of CT scanners. It is made of polyethylene with four cavities filled with reference solutions of K2HPO4 in distilled water, exactly the same as in the calibration phantom, but without the ethanol insert. The QA phantom is shaped to match the calibration phantom (Fig. 1).

Fig. 1 Calibration and QA phantom: (a) CT image; (b) photograph

Spine and Torso Phantom

To avoid controversy with respect to the definition of different tissue-substitute materials, it was decided to limit the phantom constituents to soft tissue-, fat- and bone-equivalent materials. A Plexiglas cylindrical phantom was designed and constructed as a spine phantom to assess different bone densities. This phantom consists of two male-female separable parts, each of which comprises two cylindrical vertebrae with individual filling caps, so filling and emptying the cavities with K2HPO4 solutions is fast and reliable. The torso phantom is made of polyethylene and designed to imitate the torso contour of the human body, with a hole in its lower-middle part for inserting the spine phantom (Fig. 2).

Fig. 2 Calibration, spine and torso phantoms: (a) axial CT image of the torso and spine phantom set over the calibration phantom; (b) photograph of the spine (top) and torso (bottom) phantoms
According to World Health Organization (WHO) reference data and Mindways Company's standards, individuals are categorized into four main groups, which formed
the basis of our calculations for preparing the K2HPO4 solution concentrations. Certain BMD thresholds determine the osteoporosis status: for example, a density below 50 mg/cc is categorized as definite osteoporosis, and one over 140 mg/cc is categorized as normal. For weighing the K2HPO4 powder, an accurate balance (Sartorius model TE1245, with a built-in motorized calibration weight and 0.1 mg readability, ensuring the highest weighing accuracy) and scaled syringes for providing the distilled water were used. To minimize concentration error, solutions were made in large volumes and divided into small amounts. A wide range of solutions, from 20 mg/cc to 210 mg/cc in steps of 15 mg/cc, was prepared.
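With these known concentrations as reference standards, converting scanner output to density reduces to fitting a calibration line. A minimal sketch under assumed values follows; the mean CT numbers below are invented for illustration and are not measurements from this study.

```python
import numpy as np

# Known K2HPO4 concentrations in the calibration inserts (mg/cc)
true_density = np.array([0.0, 50.0, 100.0, 200.0])
# Hypothetical mean CT numbers (HU) measured inside each insert
mean_hu = np.array([2.0, 85.0, 168.0, 335.0])

# Fit HU = a * density + b by least squares, then invert it
a, b = np.polyfit(true_density, mean_hu, 1)

def hu_to_bmd(hu):
    """Convert a measured CT number (HU) to bone mineral density (mg/cc)."""
    return (hu - b) / a

# The calibration reproduces the known inserts to within a few mg/cc
assert abs(hu_to_bmd(mean_hu[2]) - 100.0) < 5.0
```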
C. Software with Graphical User Interface

Analysis of the values in the QCT technique was performed using in-house software with a graphical user interface (GUI) written in C#. To validate the output of the designed software, values from ImageJ, a public-domain Java image processing program, and a series of analytical calculations were used.

III. RESULTS

The designed phantoms were used for the performance assessment of two BMD systems, QCT and DEXA. Fig. 3 shows the linear regression between the measured and real densities of the K2HPO4 solutions obtained with the QCT technique, as in Fig. 2a. To calculate the measured density, the standard QCT procedure was used; the calculation algorithm runs automatically in the designed software. Fig. 4 shows a declining trend of the percentage error in measured density versus the true density values in the QCT technique. Fig. 5 represents the sensitivity variation of QCT at different densities. The sensitivity is calculated over 6 ranges: 0 to 50 mg/cc, 50 to 80 mg/cc, 80 to 115 mg/cc, 115 to 145 mg/cc, 145 to 180 mg/cc and 180 to 210 mg/cc. As can be seen, the sensitivity decreases as the average value of each range increases. For the calculation of sensitivity in each range, the variation of the measured values relative to the variation of the true values is taken. Fig. 6 shows the sensitivity variation with density after applying a correction calculated from the percentage error. It should be noted that the measurement drift, which represents the amount of deviation from the true values in the measurement procedure, differs in each density range. Using the average of the drift values to correct the measured BMDs, a promising stability of the sensitivity can be seen. Fig. 7 shows a similar analysis performed for the DEXA technique to estimate the accuracy of BMD by linear regression of the measured density versus the true areal density (in g/cm2). The true BMD is calculated by dividing the mass of K2HPO4 in grams by the projected area of the cylinder in our spine phantom. Our research in this area is continuing, and we intend to obtain more accurate assessments of the DEXA technique by extending the range of densities.

Fig. 3 Measured density versus true density by means of the designed software in the QCT technique
Fig. 4 Variation of percentage error versus true density in QCT technique
Fig. 5 Variation of sensitivity when the true density is changed
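The per-range sensitivity plotted in Fig. 5 (the variation of the measured values relative to the variation of the true values) and the average-drift correction can be sketched as follows; the measured densities here are invented for illustration, only the range boundaries come from the text.

```python
import numpy as np

# Range boundaries used in the text (mg/cc) and hypothetical measurements
true_d = np.array([0.0, 50.0, 80.0, 115.0, 145.0, 180.0, 210.0])
meas_d = np.array([4.0, 52.0, 80.5, 113.0, 141.0, 174.0, 202.0])

# Sensitivity per range: delta(measured) / delta(true)
sensitivity = np.diff(meas_d) / np.diff(true_d)

# Drift = deviation of measured from true density; subtracting the
# average drift is a simple correction that stabilizes the readings
drift = meas_d - true_d
corrected = meas_d - drift.mean()

assert sensitivity.shape == (6,)          # the 6 density ranges of Fig. 5
assert sensitivity[0] > sensitivity[-1]   # sensitivity falls at high density
```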
Fig. 6 Variation of sensitivity versus true density after correction

Fig. 7 Measured density versus true density by means of the DEXA system

IV. DISCUSSION

The need for an exact estimation of bone mineral density, for diagnosing bone diseases and for treatment follow-up, requires an accurate assessment of the behaviour of the measurement technique. Some crucial properties, such as sensitivity and percentage (or relative) error, which undoubtedly affect densitometry results, have to be completely determined. As shown in the results, the sensitivity and percentage error vary with the mineral density of the spine, resulting in uncertainty in the measurement procedure. Although a linear regression between measured and true density can be observed (Fig. 3), in the authors' view this is not enough to assure the examiner about the results, because examining a wide range of densities, which became feasible with the phantom designed in this research, demonstrates the instability of the two above-mentioned variables as the spine density varies (Fig. 4 and Fig. 5). A promising result is achieved when, by calculating the average of the drift over multiple density ranges, a correction coefficient is obtained; applying this coefficient yields a stable sensitivity over the whole range of density, as plotted in Fig. 6. It is worth mentioning that obtaining several correction coefficients, to stabilize the sensitivity in each density range, may further improve the accuracy of bone densitometry techniques. For this, a wide range of known densities has to be scanned to calculate a calibration curve for correcting the percentage error and sensitivity.

V. CONCLUSION

A spine phantom was designed and produced to evaluate the sensitivity and percentage error variation in bone densitometry measurements. These assessments cover a wide range of mineral densities, simulated to cover the different stages of bone status. This research demonstrates that the sensitivity and percentage error can be affected by spine density, which may cause uncertain results. Furthermore, it is shown that by using a spine phantom that can easily be filled with, or emptied of, different concentrations of K2HPO4, a correction algorithm may be obtained to stabilize the sensitivity over a wide range of densities.

ACKNOWLEDGEMENTS

This work was supported by the Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, and the Iranian Foundation of National Genius, with Grant no. 7719.
REFERENCES

1. Damilakis J, Maris T G, Karantanas A H (2007) An update on the assessment of osteoporosis using radiologic techniques. Eur Radiol 17(6):1591–1602
2. Cann C E (1988) Quantitative CT for determination of bone mineral density: a review. Radiology 166:509–522
3. Kalender W A, Felsenberg D, Genant H K, Fischer M, Dequeker J, Reeve J (1995) The European Spine Phantom — a tool for standardization and quality control in spinal bone mineral measurements by DXA and QCT. European Journal of Radiology 20:83–92
Corresponding Author: Mohammad Reza Ay
Institute: Tehran University of Medical Sciences
Street: Pour Sina
City: Tehran
Country: Iran
Email: [email protected]
Calcination Effects on the Sinterability of Hydroxyapatite Bioceramics

C.Y. Tan1, R. Tolouei1, S. Ramesh2, B.K. Yap1, and M. Amiriyan1

1 Ceramics Technology Laboratory, COE, Universiti Tenaga Nasional, Km-7, Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia
2 Department of Engineering Design and Manufacturing, College of Engineering, Universiti Malaya, KL, Malaysia
Abstract— The sinterability of calcined synthesized HA (700ºC to 1000ºC) was investigated over the temperature range of 1050ºC to 1350ºC in terms of phase stability, bulk density, Young’s modulus and Vickers hardness. Calcination has resulted in higher crystallinity of the starting synthesized HA powder. Decomposition of HA phase to form secondary phases was not observed in the present work for the calcined powders. The results also indicated that calcination of the HA powder prior to sintering has negligible effect on the sinterability of the HA compacts (up to 900ºC). Further treatment at 1000ºC was found to be detrimental to the properties of sintered HA. Keywords— Hydroxyapatite, Calcination, Mechanical properties, Sinterability.
The success of HA ceramics in biomedical applications largely depends on the availability of high-quality sintered HA characterized by a refined microstructure and improved mechanical properties [9]. Intensive work on HA involving a wide range of powder processing techniques, compositions and experimental conditions has been carried out with the aim of determining the most effective synthesis method and conditions to produce well-defined particle morphology [10-12]. Various researchers have reported that calcination increases the degree of crystallinity of the initial particles [13]. Nevertheless, work on the sinterability of calcined powders is scarce. Thus, the objective of this work is to study the sinterability of synthesized HA powders calcined at temperatures ranging from 700ºC to 1000°C.
I. INTRODUCTION

The use of hydroxyapatite (HA) as a bioactive calcium phosphate ceramic has gained popularity in recent years due to its close chemical resemblance to the mineral component of natural bone and teeth [1], [2]. Several studies have demonstrated that HA can accelerate the initial biological response with host tissues at the implantation site and improve bone-implant adhesion [3], [4]. Evidence of rapid bone formation and subsequent healing around damaged sites in the body has also been observed [5], [6]. Owing to this excellent biocompatibility, HA ceramics have been widely used in many medical, orthopaedic and dental applications, including augmentation of the jaw, dental implants, spinal surgery, maxillofacial surgery and artificial middle ear implants [1], [2]. However, one of the major setbacks of HA is the poor mechanical properties of the sintered body [6]. Owing to the brittle nature and low fracture toughness (< 1 MPa·m1/2) of sintered HA, such implants can only be used successfully in non-load-bearing applications [7]. Therefore, the development of bioactive HA with improved, and ultimately bone-like, mechanical properties is desirable. As a result, a great number of studies have been devoted to improving the mechanical properties of HA materials [4-8].
II. METHODS AND MATERIALS

In the current work, the HA powder was prepared by a wet chemical method comprising precipitation from an aqueous medium by slow addition of orthophosphoric acid (H3PO4) solution to calcium hydroxide (Ca(OH)2) [14]. The synthesized HA powder was calcined in air at temperatures ranging from 700ºC to 1000°C, heated at 10ºC/min and, after a dwell time of 2 h, cooled to room temperature at 10ºC/min. The HA powder was compacted into disc and rectangular bar samples and subsequently cold isostatically pressed at 200 MPa (Riken Seiki, Japan) prior to sintering. This was followed by consolidation of the particles by pressureless sintering performed in air in a rapid heating furnace over the temperature range 1050ºC to 1350ºC, with a ramp rate of 2ºC/min (heating and cooling) and a soaking time of 2 hours for each firing. All sintered samples were then polished to a 1 µm finish prior to testing. The calcium and phosphorus contents of the synthesized HA powder were determined using the Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) technique. The specific surface area of the powder was measured by the Brunauer-Emmett-Teller (BET) method; nitrogen gas adsorption analysis was performed on a Coulter SA 3100 Analyzer, with samples outgassed at 150°C for
C.Y. Tan et al.
30 minutes under vacuum. The particle size distributions of the HA powders were determined using a standard Micromeritics® SediGraph 5100 X-ray particle size analyzer. The phases present in the powders and sintered samples were determined using X-ray diffraction (XRD) (Geiger-Flex, Rigaku, Japan). The bulk densities of the compacts were determined by the water immersion technique (Mettler Toledo, Switzerland). The relative density was calculated by taking the theoretical density of HA as 3.156 Mg m-3. The Young's modulus (E) of rectangular samples was determined by sonic resonance using a commercial testing instrument (GrindoSonic MK5 "Industrial", Belgium) [15]. The microhardness (Hv) of the samples was determined using the Vickers indentation method (Matsuzawa, Japan).

III. RESULTS AND DISCUSSION

The properties of the powders calcined at various temperatures are given in Table 1. The Ca/P ratio of the powders was not affected by the calcination temperature; all powders exhibited an average Ca/P ratio of 1.67. The particle size analysis of the calcined HA powders, however, indicated that the mean particle size increased from 1.78 ± 0.22 µm for the as-synthesized (uncalcined) HA to 2.25 ± 0.68 µm for HA powder calcined at 1000°C. The current results correlate well with the particle size trend observed by Zhang and Yokogawa [13], who reported that the mean particle size increased from 0.97 µm for untreated HA to 4.19 µm when calcined at 1000°C. On the other hand, the present result contradicts the observation of Patel et al. [16], who found that the mean particle size decreased with increasing calcination temperature above 700°C.

Table 1 Effect of Calcination Temperature on the HA Powder

Property                           Untreated     700°C         800°C         900°C         1000°C
Color                              White         White         White         White         White
Ca/P ratio                         1.67          1.67          1.67          1.67          1.67
Mean particle size (µm)            1.78 ± 0.22   1.87 ± 0.45   2.04 ± 0.06   2.14 ± 0.23   2.25 ± 0.68
Specific surface area, SSA (m²/g)  60.74         20.7          15.16         9.49          9.45

In tandem with the increasing average particle size, the specific surface area (SSA) of the powders decreased as the calcination temperature increased from 700°C to 1000°C, as shown in Table 1. This result is in agreement with the findings of Raynaud et al. [17], who suggested that the surface area reduction of the powders during calcination could be attributed to particle coalescence with increasing temperature.

The effect of calcination temperature on the phase composition and crystallinity of the HA powders is illustrated in Fig. 1. All the powders produced diffraction peaks corresponding only to HA. The XRD trace for the powder calcined at 700°C showed broad diffraction peaks similar to the pattern obtained for the untreated powder. Nevertheless, there was a clear change in the diffraction peaks (width and height) when the calcination temperature increased beyond 700°C, indicating that increasing the calcination temperature results in higher crystallinity of the starting powder.

Fig. 1 Comparison of XRD Patterns of HA (a) Untreated (b) Calcined at 700°C (c) Calcined at 800°C (d) Calcined at 900°C (e) Calcined at 1000°C

None of the samples, regardless of calcination temperature and sintering conditions, showed any cracking after sintering, except the rectangular compacts of HA calcined at 1000°C. The XRD traces indicate that the sintering of all HA samples studied did not result in the formation of secondary phases such as TCP, TTCP or CaO, even when sintered at 1350°C; the XRD signatures belong to stoichiometric HA. The fact that the calcined HA in the current work did not decompose when sintered at high temperatures agrees with the findings of Sadeghian et al. [18].

The densification curves as a function of sintering temperature are shown in Fig. 2. In general, when sintered at higher temperatures (>1150°C), the sintered densities of the HA powders were almost similar, ranging from 97% to 99%, except for the HA powder treated at 1000°C. Nevertheless, when sintered at a low temperature (1050°C), the untreated HA recorded the highest relative density (99%), while the calcined powders recorded lower relative densities when sintered at the same temperature. Majling et al. [19] found that aggregation of powders during the drying and calcination stages causes nonuniform packing of particles in green compacts, and thus prevents full densification during sintering.

IFMBE Proceedings Vol. 35

Calcination Effects on the Sinterability of Hydroxyapatite Bioceramics

The relationship between the Young's modulus of the sintered body and calcination temperature (up to 900°C) is shown in Fig. 3. In general, the stiffness of the HA material shows no dependence on calcination temperature, with values ranging from 110 GPa to 118 GPa. This trend is in agreement with the variations in bulk density. Attempts to correlate the Young's modulus with the bulk density of all the sintered HA revealed a linear relationship, indicating that the Young's modulus of HA ceramics is governed by its bulk density. This result agrees with the work of Pecqueux et al. [20], who reported that the Young's modulus of HA is controlled by the pore volume fraction in the sintered body.
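As a cross-check on the powder data in Table 1, a BET specific surface area can be converted to an equivalent spherical particle diameter via the standard relation d_BET = 6/(ρ·SSA). The sketch below is not part of the original analysis; its only inputs are this textbook formula, the theoretical HA density used in the paper, and the Table 1 values. The result (about 31 nm for the untreated powder, versus 1.78 µm from the SediGraph) would suggest the SediGraph sizes are aggregates of much finer primary crystallites.

```python
# Equivalent spherical diameter from BET specific surface area:
#   d_BET = 6 / (rho * SSA)
# With rho in g/cm^3 and SSA in m^2/g, the ratio comes out directly
# in micrometres (6 / (g/cm^3 * m^2/g) = 1e-6 m).

RHO_HA = 3.156  # theoretical density of HA in g/cm^3, as used in the paper

def bet_equivalent_diameter_um(ssa_m2_per_g, rho_g_per_cm3=RHO_HA):
    """Return the equivalent spherical particle diameter in micrometres."""
    return 6.0 / (rho_g_per_cm3 * ssa_m2_per_g)

# Table 1 values: untreated (60.74 m^2/g) vs calcined at 1000 C (9.45 m^2/g)
d_untreated = bet_equivalent_diameter_um(60.74)  # ~0.031 um (31 nm)
d_1000C = bet_equivalent_diameter_um(9.45)       # ~0.20 um
```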
The variation of the average Vickers hardness of samples sintered at various temperatures is shown in Fig. 4. In general, the hardness of the HA material shows no dependence on calcination temperature for powders treated at 700-900°C. The hardness, however, decreased with increasing sintering temperature. In the current study, a maximum hardness of 7.23 GPa was recorded for untreated HA sintered at 1150°C. Calcining the HA at 1000°C before sintering proved detrimental to its hardness: a low value of 1.65 GPa was recorded for this sample when sintered at 1050°C, and a maximum of only 5.36 GPa could be attained for HA calcined at 1000°C when sintered at 1250°C. Thereafter, the hardness decreased when the compacts were sintered at 1350°C.
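The Vickers values quoted above are derived from indentation diagonals via the standard relation Hv = 1.8544·F/d². A minimal sketch follows; the paper does not state its indentation load, so the 1.961 N (200 gf) load and the diagonal below are hypothetical, chosen only to illustrate the arithmetic:

```python
def vickers_hardness_gpa(force_n, diagonal_mm):
    """Vickers hardness in GPa from the standard relation Hv = 1.8544*F/d^2.

    force_n:     indentation load in newtons
    diagonal_mm: mean indent diagonal in millimetres
    N/mm^2 = MPa, so divide by 1000 to get GPa.
    """
    return 1.8544 * force_n / diagonal_mm**2 / 1000.0

# Hypothetical example: a 200 gf (1.961 N) indent leaving a 22.5 um
# mean diagonal gives ~7.2 GPa, on the order of the maximum reported above.
hv = vickers_hardness_gpa(1.961, 0.0225)
```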
Fig. 2 Relative density variation as a function of sintering temperature for HA

Fig. 4 Effect of Sintering Temperature and Calcination Temperature on the Vickers Hardness of HA
Fig. 3 Effect of Calcination Temperature and Sintering Temperature on the Young's Modulus of HA

IV. CONCLUSION

1. The Ca/P ratio of all the powders was not affected by the calcination temperature; all powders exhibited an average Ca/P ratio of 1.67.
2. The particle size analysis of the calcined HA powders indicated that the mean particle size increased from 1.78 ± 0.22 µm for the as-synthesized (untreated) HA to 2.25 ± 0.68 µm for HA powder calcined at 1000°C.
3. Increasing the calcination temperature results in higher crystallinity of the starting synthesized HA powder.
4. Decomposition of the HA phase to tricalcium phosphate, tetracalcium phosphate and calcium oxide was not observed in the present work for the calcined powders.
5. The results indicated that calcination of the HA powder prior to sintering (up to 900°C) has a negligible effect on the sinterability of the HA compacts. Calcination at 1000°C was found to be detrimental to the properties of the sintered HA.
ACKNOWLEDGMENT The authors would like to thank the Ministry of Science, Technology and Innovation of Malaysia for providing financial support under grant no. 03-02-03-SF0073. In addition, the support provided by UNITEN and SIRIM Berhad in carrying out this research is gratefully acknowledged.
REFERENCES

1. Hench L L (1998) Biomaterials: A forecast for the future. Biomaterials 19:1419-1423 DOI 10.1016/S0142-9612(98)00133-1
2. Hench L L (2000) The challenge of orthopaedic materials. Curr Ortho 14:1 7-15 DOI 10.1054/cuor.1999.0074
3. Nich C, Sedel L (2006) Bone substitution in revision hip replacement. Inter Ortho 30:6 525-531 DOI 10.1007/s00264-006-0135-6
4. Kneser U, Schaefer DJ, Polykandriotis E et al (2006) Tissue engineering of bone: the reconstructive surgeon's point of view. J Cell Mol Med 10:1 7-19 DOI 10.1111/j.1582-4934.2006.tb00287.x
5. Li J, Liao H, Sjostrom M (1997) Characterization of calcium phosphates precipitated from simulated body fluid of different buffering capacities. Biomaterials 18:743-747 DOI 10.1016/S0142-9612(96)00206-2
6. Chu G, Orton TM, Hollister DG et al (2002) Mechanical and in vivo performance of hydroxyapatite implants with controlled architectures. Biomaterials 23:1283-1293 DOI 10.1016/S0142-9612(01)00243-5
7. Murugan R, Ramakrishna S (2004) Aqueous mediated synthesis of bioresorbable nanocrystalline hydroxyapatite. J Crystal Growth 274:209-213 DOI 10.1016/j.jcrysgro.2004.09.069
8. Afshar M, Ghorbani N, Ehsani M et al (2003) Some important factors in the wet precipitation process of hydroxyapatite. Mater Desi 24:197-202 DOI 10.1016/S0261-3069(03)00003-7
9. Herliansyah MK, Hamdi M, Ide-Ektessabi A et al (2009) The influence of sintering temperature on the properties of compacted bovine hydroxyapatite. Mater Sci Eng C 29:5 1674-1680 DOI 10.1016/j.msec.2009.01.007
10. Zhang Y, Yokogawa Y (2008) Effect of drying conditions during synthesis on the properties of hydroxyapatite powders. J Mater Sci: Mater Med 19:623-628 DOI 10.1007/s10856-006-0047-4
11. Rodríguez-Lorenzo LM, Vallet-Regí M, Ferreira JMF et al (2002) Hydroxyapatite ceramic bodies with tailored mechanical properties for different applications. J Biomedical Mater Res 60:159-166 DOI 10.1002/jbm.1286
12. Kokubo T, Kim HM, Kawashita M (2003) Novel bioactive materials with different mechanical properties. Biomaterials 24:2161-2175 DOI 10.1016/S0142-9612(03)00044-9
13. Drouet C, Bosc F, Banu M et al (2009) Nanocrystalline apatites: From powders to biomaterials. Powder Tech 190:1-2 118-122 DOI 10.1016/j.powtec.2008.04.041
14. Ramesh S (2004) A method for manufacturing hydroxyapatite bioceramic. Malaysia Patent PI. 20043325
15. ASTM Standard C1259-08e1, Standard Test Method for Dynamic Young's Modulus, Shear Modulus, and Poisson's Ratio for Advanced Ceramics by Impulse Excitation of Vibration. ASTM International, West Conshohocken, PA, 2008, DOI 10.1520/C1259-08E01
16. Patel N, Gibson IR, Ke S et al (2001) Calcining influence on the powder properties of hydroxyapatite. J Mater Sci Mater Med 12:181-188 DOI 10.1023/A:1008986430940
17. Raynaud S, Champion E, Bernache-Assollant D (2002) Calcium phosphate apatites with variable Ca/P atomic ratio II. Calcination and sintering. Biomaterials 23:1073-1080 DOI 10.1016/S0142-9612(01)00219-8
18. Sadeghian Z, Heinrich JG, Moztarzadeh F (2006) Influence of powder pre-treatments and milling on dispersion ability of aqueous hydroxyapatite-based suspensions. Ceram Inter 32:331-337 DOI 10.1016/j.ceramint.2005.02.016
19. Majling J, Znaik P, Palova A et al (1997) Sintering of the ultrahigh pressure densified hydroxyapatite monolithic xerogels. J Mater Res 12:1 198-202 DOI 10.1557/JMR.1997.0026
20. Pecqueux F, Tancret F, Payraudeau N et al (2010) Influence of microporosity and macroporosity on the mechanical properties of biphasic calcium phosphate bioceramics: Modeling and experiment. J Euro Ceram Soc 30:4 819-829 DOI 10.1016/j.jeurceramsoc.2009.09.017
Author: Dr. Chou Yong Tan
Institute: University Tenaga Nasional
Street: UNITEN-IKRAM
City: Kajang
Country: Malaysia
Email: [email protected]
Chitin Fiber Reinforced Silver Sulfate Doped Chitosan as Antimicrobial Coating

C.K. Tang1, A.K. Arof2, and N. Mohd Zain1

1 Biomedical Engineering Department, Faculty of Engineering, University Malaya, Kuala Lumpur, Malaysia
2 Physics Department, Faculty of Science, University Malaya, Kuala Lumpur, Malaysia
Abstract— A method to prepare a chitin fiber reinforced chitosan coating doped with silver sulfate is presented. Tensile strength, surface morphology and silver ion release were studied to provide an overall picture of the properties of the coating. Keywords— Chitin, chitosan, silver ion, silver sulfate, antimicrobial.
I. INTRODUCTION Over recent years there has been increasing concern over infections caused by a wide range of bacteria, and it is very clear that surface contamination is directly related to infections. MRSA, for example, can survive for up to 9 weeks after drying on a wall surface, or 2 days on a plastic laminate surface, and is stable under varying conditions of temperature, humidity, exposure to sunlight and desiccation [1]. Bacteria can easily spread from surface to surface by hand contact and other routes. Antimicrobial coatings are particularly relevant to the food and medical industries, where a large number of potentially lethal bacteria can be present; contamination can occur on surfaces where food is handled and in hospital wards. The development of antimicrobial coatings is thus vital to prevent surface contamination. Chitosan has very good antimicrobial properties and is therefore a very suitable weapon in the fight against the spread of infections. Chitosan has a polycationic structure with amino groups linked to its C-2 site. It has the ability to penetrate the cell wall of bacteria and combine with DNA to inhibit the synthesis of mRNA and DNA transcription, and to interact with the cell surface and alter cell permeability or form an impermeable layer around the cell [2]. An ideal antimicrobial polymer should be easily and inexpensively synthesized, stable during long-term usage and storage at the temperature of its intended application, insoluble in water for water disinfection applications, should not decompose to or emit toxic products, should not be toxic or irritating to those handling it, should be regenerable upon loss of activity, and should be biocidal to a broad spectrum of pathogenic microorganisms in brief times of contact [3]. Chitosan possesses many of these attributes, making it an attractive candidate for use in various industries
such as medical, food and textile industries where human safety and health are important. Although chitosan has been studied as an excellent antimicrobial agent, pure chitosan films have poor tensile strength and elasticity. Hence development of high strength composites is necessary for antimicrobial coating applications. Composite films from chitosan have been developed by incorporating chitin nanofibres to improve its tensile strength and elasticity [4]. Silver has long been used as an antimicrobial agent. The antimicrobial effect of silver comes from the highly reactive nature of silver ion which binds to tissue proteins and causes structural changes in the bacterial cell wall and intracellular and nuclear membranes. This will lead to cellular distortion and loss of viability of the bacteria. Silver ion also exhibits its bacteriostatic properties by binding to and thus denaturing bacterial DNA and RNA, and ultimately inhibiting bacterial replication [5].
II. EXPERIMENTAL A. Preparation of Chitin Fiber Chitin powder was hydrolysed by adding 3 N hydrochloric acid under stirring for 2 hours at 104°C, with the hydrochloric acid to chitin ratio maintained at 30 ml/g. After hydrolysis, the suspension was diluted with distilled water and centrifuged at 9500 rpm for 10 minutes; this process was repeated thrice. The suspension was then transferred to a dialysis bag and dialysed against distilled water for 24 hours. The pH of the suspension was adjusted to 2.5 by adding hydrochloric acid, and the suspension was subsequently subjected to ultrasonication for 20 minutes. The resulting suspension was in colloidal form due to the protonation of the amino groups of chitin. B. Preparation of Chitin Fiber Reinforced Silver Sulfate Doped Chitosan Membranes Chitosan was dissolved in 1% acetic acid solution. A solution of silver sulfate was mixed with the chitosan solution and stirred until homogeneous. Chitin fiber, as prepared in the first part of the experiment, was added to the solution and
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 55–59, 2011. www.springerlink.com
stirred for 2 hours. The solution was then spread on a Petri dish followed by oven drying. The dried membrane was peeled off from the plate and washed with ammonia solution to neutralize the residual acid. The membrane was then repeatedly washed with distilled water and placed in an oven for drying. The dried membranes were kept in desiccators prior to characterization.
C. Characterization of Chitin Fiber Reinforced Silver Sulfate Doped Chitosan Membranes The tensile strength of the membrane was determined using an Instron microtensile machine; samples 1.4 cm wide, 2.8 cm long and 0.015 cm thick were used in this test. The films were analyzed by X-ray diffraction (XRD) on a D8 Advance X-Ray Diffractometer at Combicat, University of Malaya. The CuKα radiation was set at 40 kV and 20 mA, and data were collected over a 2θ range of 5° to 90° with a step size of 0.02° and a step time of 1 s. Fourier transform infrared spectroscopy (FTIR) was recorded using a Thermo Scientific Nicolet iS10 FTIR Spectrophotometer. Scanning electron microscopy (SEM) imaging was carried out using a Gemini FE-SEM from Zeiss to study the morphological nature of the film. A silver ion release test was carried out using an Orion 4-Star pH/ISE meter with an Orion Ionplus Silver/Sulfide electrode to determine the ability of the membrane to release silver ions into the surroundings in the presence of water. Each sample was cut into a 1 cm by 1 cm square film, and readings were taken after 24 hours of immersion to determine the silver ion concentration for the respective samples.
Fig. 1 Tensile test results for a) chitin and chitosan, b) chitin chitosan doped with 0.1 g silver sulfate, c) chitin chitosan doped with 0.3 g silver sulfate, d) chitin chitosan doped with 0.5 g silver sulfate, e) chitin chitosan doped with 0.7 g silver sulfate

III. RESULTS AND DISCUSSION

Tensile testing was carried out on chitin fiber reinforced chitosan films doped with 0 to 0.7 g of silver sulfate. Figure 1 shows that tensile strength is highest for the chitin fiber reinforced chitosan film doped with 0.1 g of silver sulfate, which fractured at 33 N of load, while the undoped chitin fiber reinforced chitosan film fractured at 20 N. Tensile strength decreased as more silver sulfate was doped into the film. The decrease in tensile strength of the antimicrobial coating is due to the excessive amount of silver sulfate added: silver ions can form complexes with chitosan that decrease the degree of crystallinity of the film, as shown in the XRD results, where the characteristic peaks of chitin are lowered, a sign of decreasing crystallinity.

The XRD patterns in Fig. 2 show two peaks, generally at 2θ = 9° and 2θ = 19°, which indicate the presence of α-chitin, and one at 2θ = 20°, which indicates the presence of chitosan. The broad, low-intensity peaks indicate the amorphous nature of the chitin-chitosan composite. As the amount of silver sulfate increases, a peak appears at 2θ = 38°. The absence of this peak for samples with lower amounts of silver sulfate may be due to the peak being covered by noise generated by the amorphous structure of the chitin chitosan composite. The crystallinity index was calculated using the following equation [6]:

CrI (%) = [(I110 - Iam) / I110] × 100    (1)

where I110 is the maximum intensity (arbitrary units) of the diffraction at the (110) plane at 2θ = 19° and Iam is the intensity of the amorphous diffraction, in the same units, at 2θ = 12.6°. The results in Table 1 show that the crystallinity index increases greatly, from 32% to 50%, when 0.1 g of silver sulfate is doped into the chitin fiber reinforced chitosan film, and drops from 50% to as low as 26% as more silver sulfate is added. This suggests that the optimum amount to add into the coating is 0.1 g, where the higher crystallinity index gives better mechanical strength to the coating.
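Equation (1) is straightforward to apply to raw XRD counts. A minimal sketch follows; the intensity values are illustrative only, chosen to reproduce the 32% index of the undoped film in Table 1, since the paper does not report raw counts:

```python
def crystallinity_index(i_110, i_am):
    """Crystallinity index (Eq. 1): CrI(%) = (I110 - Iam) / I110 * 100.

    i_110: maximum intensity of the (110) diffraction at 2-theta = 19 deg
    i_am:  amorphous background intensity at 2-theta = 12.6 deg (same units)
    """
    return (i_110 - i_am) / i_110 * 100.0

# Illustrative counts: a background at 68% of the (110) peak gives CrI = 32%,
# matching the undoped chitin chitosan film in Table 1.
cri = crystallinity_index(1000.0, 680.0)
```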
Table 1 Crystallinity index of chitin chitosan composite film doped with silver sulfate

Sample                                       Crystallinity index (%)
Chitin chitosan                              32
0.1 g silver sulfate doped chitin chitosan   50
0.3 g silver sulfate doped chitin chitosan   46
0.5 g silver sulfate doped chitin chitosan   26
0.7 g silver sulfate doped chitin chitosan   39
Fig. 2 XRD results for: a) chitin and chitosan, b) chitin chitosan doped with 0.1 g silver sulfate, c) chitin chitosan doped with 0.3 g silver sulfate, d) chitin chitosan doped with 0.5 g silver sulfate, e) chitin chitosan doped with 0.7 g silver sulfate

FTIR was used to confirm the structure of the chitin fiber reinforced chitosan film doped with silver sulfate, as shown in Figure 3. All samples exhibit transmission peaks at 1154, 1082, 1022, and 895 cm-1, which show the presence of the saccharide moiety. The peak around 3417 cm-1 corresponds to the vibration of N-H and O-H bonds. The peak at 1597 cm-1 is attributed to the -NH2 group in both chitin and chitosan [7]. Peaks observed in the region of 1660 and 1627 cm-1 are due to the amide I band, which is characteristic of α-chitin, in this case from shrimp. Half of the carbonyl groups are bonded through hydrogen bonds to the amino group within the same chain (C=O···H-N), which is responsible for the vibration mode at 1660 cm-1. The remaining half of the carbonyl groups form the same bond plus an additional one with the -CH2OH group of the side chain. This additional bond produces a decrease in absorbance of the amide I band at 1627 cm-1. The existence of these inter-chain bonds is responsible for the high chemical stability of the α-chitin structure.
Fig. 3 FTIR results for a) chitin and chitosan, b) chitin chitosan doped with 0.1g silver sulfate, c) chitin chitosan doped with 0.3g silver sulfate, d) chitin chitosan doped with 0.5g silver sulfate, e) chitin chitosan doped with 0.7g silver sulfate
Morphological properties of the samples were inspected using FESEM. Fig. 4 shows that the chitin nanofibers were distributed randomly in the chitosan matrix, with an average fiber diameter of around 100 nm. Fig. 5 shows the silver ion concentration in distilled water after 24 hours. The silver ion concentration increases from 3 ppm to 11.2 ppm as the amount of silver sulfate doped is increased. The results show that the membrane is able to release enough silver ions for antimicrobial purposes [8].
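For comparison with literature inhibition thresholds, the ppm values above (mg/L for dilute aqueous solutions) can be converted to molar concentration using the molar mass of silver. A small sketch using the release figures reported above:

```python
AG_MOLAR_MASS = 107.87  # g/mol, silver

def ppm_to_molar(ppm):
    """Convert a dilute aqueous Ag+ concentration from ppm (mg/L) to mol/L."""
    return ppm / 1000.0 / AG_MOLAR_MASS  # mg/L -> g/L -> mol/L

low = ppm_to_molar(3.0)    # lowest doping level, ~2.8e-5 M
high = ppm_to_molar(11.2)  # highest doping level, ~1.0e-4 M
```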
IV. CONCLUSION
Silver sulfate was added into chitin fiber reinforced chitosan to improve its antimicrobial ability, as silver is a potent agent against various types of gram-positive and gram-negative bacteria such as E. coli. Our study shows that 0.1 g of silver sulfate doped into 1 g of chitosan gives the best tensile strength, while increasing the amount of silver sulfate further greatly reduces the tensile strength of the membrane. The amount of silver ion released increases with the amount of silver sulfate added. All samples doped with silver sulfate show concentrations of over 900 ppb, which a previous study has shown to inhibit the growth of E. coli. Thus, 0.1 g of silver sulfate doped into chitin fiber reinforced chitosan gives the optimum tensile strength and improved antimicrobial properties, with the ability to release silver ions into the surroundings.
ACKNOWLEDGMENT
Fig. 4 SEM results for a) chitin and chitosan, b) chitin chitosan doped with 0.1 g silver sulfate, c) chitin chitosan doped with 0.3 g silver sulfate, d) chitin chitosan doped with 0.5 g silver sulfate, e) chitin chitosan doped with 0.7 g silver sulfate
Fig. 5 Concentration of silver ion present in solution after 24 hours immersion in distilled water
The authors would like to acknowledge the financial support of this work from the University of Malaya under the University Malaya Research Grant (RG058-09AFR).
REFERENCES

1. Smith J, Jones M Jr, Houghton L et al (1999) Future of health insurance. N Engl J Med 965:325-329
2. Page K, Wilson M et al (2009) Antimicrobial surfaces and their potential in reducing the role of the inanimate environment in the incidence of hospital-acquired infections. Journal of Materials Chemistry 19(23):3819-3831
3. Kenawy E-R, Worley SD, Broughton R (2007) The chemistry and applications of antimicrobial polymers: A state-of-the-art review. Biomacromolecules 8(5):26
4. Shelma R, Paul W, Sharma CP (2008) Chitin nanofibre reinforced thin chitosan films for wound healing application. Trends Biomater Artif Organs 22(2):111-115
5. Castellano JJ, Shafii SM et al (2007) Comparative evaluation of silver-containing antimicrobial dressings and drugs. International Wound Journal 4(2):114-122
6. Cárdenas G, Taboada E, Miranda SP (2004) Chitin characterization by SEM, FTIR, XRD, and 13C cross polarization/mass angle spinning NMR. Journal of Applied Polymer Science 93:10
7. Jin X, Wang J et al (2009) Synthesis and antimicrobial activity of the Schiff base from chitosan and citral. Carbohydrate Research 344(6):825-829
8. Yamanaka M, Hara K et al (2005) Bactericidal actions of a silver ion solution on Escherichia coli, studied by energy-filtering transmission electron microscopy and proteomic analysis. Appl Environ Microbiol 71(11):7589-7593
Author: Tang Chee Kuang
Institute: University Malaya
Street: 952, Jalan 17/49, Seksyen 17
City: Petaling Jaya, Selangor
Country: Malaysia
Email: [email protected]
Effects of Joule Heating on Electrophoretic Mobility of Titanium Dioxide (TiO2), Escherichia coli and Staphylococcus aureus (Live and Dead)

P.F. Lee1, M. Misran2, and W.A.T. Wan Abdullah3

1 Mechatronics and BioMedical Engineering, Faculty of Engineering & Science, University Tunku Abdul Rahman (Kuala Lumpur Campus), Jalan Genting Kelang, Setapak 53300, Kuala Lumpur
2 Colloid and Surface Laboratory, Chemistry Department, Faculty of Science, University Malaya, 50603 Kuala Lumpur, Malaysia
3 Physics Department, Faculty of Science, University Malaya, 50603 Kuala Lumpur, Malaysia

[email protected]
Abstract— Heating effects on the surface charge of single-cell bacteria were investigated. The selected samples were a gram-positive strain, Staphylococcus aureus (S. aureus), and a gram-negative strain, Escherichia coli (E. coli). Titanium dioxide (TiO2) was chosen as the control sample in this research due to the strong negative charge it carries when suspended in solution. The surface charge of the samples was determined by measuring the electrophoretic mobility and the zeta potential with laser Doppler velocimetry electrophoresis. The electrophoretic mobility of the materials was expected to increase as temperature increased, owing to the reduction in the viscosity of the solution: as the temperature of the solution increases, ions in solution move more randomly, causing the electrical double layer of charged particles and cells to become more diffuse. The electrophoretic mobility of the control sample, TiO2, increased with temperature; however, both live and dead strains of E. coli and S. aureus showed only slight increases, and the changes were erratic across temperatures compared with TiO2. Keywords— Electrophoretic mobility, zeta potential, surface charge, electrical double layer, viscosity.
I. INTRODUCTION Much progress has been made in recent years in surface charge studies on colloidal particles and biological cells [1-4]. Knowledge of the surface charge properties of cells is important for understanding the complex transport phenomena in both living and non-living systems. Detailed investigation of the surface properties can provide precise information on the cell surface composition [5], the reaction to antimicrobial drugs [6], cell agglomeration [7-9] and adhesion to surfaces. In the natural environment, many bacteria live attached to surfaces, including the human body [10], saliva, tears [11], sweat [12], raw food and highly humid places. This has enhanced the importance of measuring the electrophoretic mobility of the surface charge of bacteria. The effect of Joule heating on electrophoretic mobility is essential to further collect the changes of surface charge
properties of colloidal particles and bacteria. Joule heating is a by-product of the electrical work done in passing a current through the resistive buffer solution in an electrophoresis system. As a result, the viscosity of the electrolyte decreases and the solutes' diffusion coefficients increase [13]. As a consequence of this heating process, the buffer ions form excess charge in the solution and enhance the electrical double layer at the capillary of the electrophoresis system, and hence affect the electrophoretic mobility significantly. The net bacterial surface charge can be approximated by measuring the zeta potential, i.e. the electric potential difference at the hydrodynamic slip plane, the interface between the aqueous fluid and the stationary layer of fluid attached to the bacterial cell [14]. Zeta potentials have been reported for a wide range of gram-positive and gram-negative bacteria [15, 16]. Escherichia coli (E. coli) is a well-known harmful bacterium; many strains of E. coli cause foodborne and waterborne illness [17]. They are the cause of diarrhea, urinary tract infections, respiratory illness and pneumonia, which has motivated detailed study of E. coli for years. E. coli is a well-characterized gram-negative bacterium that has a rigid but thin peptidoglycan layer compared to gram-positive bacteria, and the outer layer of E. coli is known to be covered with a lipopolysaccharide layer of 1-3 μm [18]. Gram-positive and gram-negative bacteria can be distinguished by measuring their surface charges based on electrophoretic mobility [19]. Research on surface charge has enabled the investigation of electrocoagulation of E. coli, in which the surface charges of the microorganism are neutralized by an electric field [20]. Surface charge measurement for bacteria is essential to gain information on the adhesive characteristics of the bacteria toward substances.
The effects of different nutrient broths on the cell surface hydrophobicity of E. coli adhering to lettuce and apple surfaces have been investigated [21]. There is also a study on the effects of surface charge and hydrophobicity on the adhesion of E. coli to beef muscle, in which the E. coli were suspended in different pH
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 60–68, 2011. www.springerlink.com
and ionic strength of solution before being attached to the meat [22]. These investigations have emphasized the importance of studying the surface charge of bacteria to improve food quality and disease prevention. Staphylococcus aureus (S. aureus) is a well-known gram-positive bacterium that is frequently selected for research studies; it is a major cutaneous pathogen. Different methods have been developed to monitor the surface charge of S. aureus, which can influence the growth rate of the bacteria [23]. On the other hand, the study of the surface charges of bacteria can strongly help to improve the medical field. For instance, the bacteria have been used to trigger an innate immune response in the heart in mice, a study carried out to develop a method to attenuate S. aureus-induced proinflammatory responses [24]. Much research has investigated the DNA of S. aureus to identify the different patterns that exist, using pulsed-field electrophoresis [25]. The surface charge property is essential for obtaining information on the stability and agglomeration of colloidal suspensions. Titanium dioxide (TiO2) is commonly used in many industrial applications: the white pigment powder possesses a high refractive index, leading to the wide usage of TiO2 as a catalyst support [26], as a semiconductor photocatalyst [27], in paints and antimicrobial coatings [28], in oxygen sensors and as a UV filter in cosmetic products [13]. The electrophoretic mobility of such colloidal suspensions has been investigated in many studies [29-30], because the surface properties of TiO2 remain uncertain when affected by changes in pH, ionic strength and temperature [31]. Experimental studies have characterized the colloidal stability of TiO2-water interfaces in the presence of several electrolytes over different pH ranges [32].
The colloidal stability of TiO2 over different concentration ranges of a polyacrylic-based deflocculant dispersant and citric acid has been studied, and the zeta potential of TiO2 was found to be affected by sonication time [13]. In this investigation, the effect of temperature on the electrophoretic mobility of live and dead E. coli and S. aureus, together with TiO2, was examined from 20 oC to 55 oC. The aim is to determine the extent to which metabolic processes can control surface charge, with the dead bacterial populations used as control samples. TiO2, as a solid particle, was compared with the bacterial cells, which behave as "soft particles", in terms of the temperature dependence of electrophoretic mobility. The purpose of this paper is to report the differences in electrophoretic mobility and zeta potential between live and dead cells, gram-positive and gram-negative bacteria, and hard and soft particles under the influence of different temperatures. The results were obtained with a laser Doppler velocimetry electrophoresis system, which measures the migration of ions in
a homogeneous electric field, where the migration rate is proportional to the applied field.
II. THEORY

The electrophoretic mobility (μ) is the migration rate of charged particles toward the electrode when an electric field is applied. It is defined as Equation (1):

μ = v / E    (1)
where v is the rate of migration (m/s) and E is the applied field, i.e. the voltage across the separation distance (V/m). The particles move with constant velocity once equilibrium is reached. This velocity depends on several factors: the strength of the electric field (voltage gradient), the dielectric constant of the medium, the viscosity of the medium and the zeta potential. The Henry equation gives the electrophoretic mobility, as shown in Equation (2):
μe = 2 ε z f(κa) / (3η)    (2)
where μe is the electrophoretic mobility, z is the zeta potential, f(κa) is Henry's function, η is the viscosity and ε is the dielectric constant [16]. Two values are generally used as approximations for f(κa): 1.5 or 1.0. The value 1.5 is used when electrophoretic determinations of zeta potential are made in aqueous media at moderate electrolyte concentration; it corresponds to the Smoluchowski approximation, which applies to particles larger than about 0.2 microns dispersed in electrolytes containing more than 10^-3 molar salt. In this study, the change in electrophoretic mobility of the charged particles under deliberately elevated temperature was examined. Raising the temperature increases the internal energy of the particles; the greater kinetic motion of the suspended particles and ions thickens the diffuse double layer around the particles. As a result, the screening effect of the ions on the charged particles is reduced, increasing the speed at which the particles move toward the electrode. Meanwhile, heating reduces the viscosity of the solution, further enhancing the electrophoretic mobility of the particles. One relatively simple model assumes that the viscosity obeys an "Arrhenius-like" law, Equation (3).
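As a rough numerical illustration of Equations (1) and (2), the sketch below converts a measured drift velocity into a mobility and compares it with the Henry-equation prediction for a given zeta potential. All numbers (drift velocity, cell geometry, zeta value) are illustrative assumptions, not data from this paper.

```python
# Minimal sketch of Eqs. (1)-(2), with illustrative numbers only.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def mobility_from_velocity(v_drift, voltage, gap):
    """Eq. (1): mu = v / E, with field E = voltage / electrode gap (V/m)."""
    return v_drift / (voltage / gap)

def henry_mobility(zeta, rel_permittivity=79.0, viscosity=8.9e-4, f_ka=1.5):
    """Eq. (2): mu_e = 2*eps*z*f(ka) / (3*eta); f_ka = 1.5 is Smoluchowski."""
    return 2.0 * rel_permittivity * EPS0 * zeta * f_ka / (3.0 * viscosity)

# 30 um/s drift toward the anode under 150 V across a 10 cm cell ...
mu_meas = mobility_from_velocity(-30e-6, 150.0, 0.10)
# ... versus the Henry prediction for zeta = -25 mV in water at 25 C
mu_pred = henry_mobility(-25e-3)
print(f"measured {mu_meas:.1e}, predicted {mu_pred:.1e} m^2/V/s")
```

With f(κa) = 1.5 and ε around 79, a zeta potential of a few tens of millivolts maps onto mobilities of order 10^-8 m2 V-1 s-1, the same order as the values reported below.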
P.F. Lee, M. Misran, and W.A.T. Wan Abdullah
η = A e^(Ea/RT)    (3)
where A and Ea are constants for a given fluid: A is called the pre-exponential factor and Ea can be interpreted as the activation energy for viscous flow. Note that this expression is nearly identical to the Arrhenius equation describing the temperature variation of the rate constant (k) of a chemical reaction, except for the sign of the exponent. Equation (3) shows that the viscosity decreases with increasing temperature. The viscosity values of water at different temperatures were taken from the CRC Handbook of Chemistry and Physics [27]. The electrophoretic mobility was then estimated with a formula in which the viscosity of the solution decreases as the temperature increases, Equation (4):
μe = ζ ε f(κa) / η(T)    (4)
where η(T) is the viscosity at absolute temperature T, ε is the dielectric constant of the dispersing medium (80), f(κa) is a function of the particle size (radius a) and 1/κ is the thickness of the double layer of counter-ions. The value of f(κa) is 1.5, as the particle radius is greater than 200 nm. These theoretical electrophoretic mobilities are compared with the experimental results in the following discussion; the Smoluchowski value of 1.5 was used for f(κa) and the dielectric constant ε was taken as 79. Each zeta potential is the average of ten measurements of the respective sample. Ohm's law is essential when different voltages are applied: a current is generated when a potential difference is applied between the electrodes, and its magnitude is determined by the resistance of the medium and is proportional to the voltage. The current in the solution between the electrodes is conducted mainly by the buffer ions, with a small proportion carried by the sample ions. An increase in voltage therefore increases the total charge per second conveyed toward the electrodes.
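The temperature correction embodied in Equations (3) and (4) can be sketched as follows: with ζ, ε and f(κa) held fixed, the predicted mobility scales as μ(T)/μ(T0) = η(T0)/η(T). The two water-viscosity anchor points used to fit A and Ea are CRC-handbook-style values chosen here for illustration, not figures reproduced from this paper.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(T1, eta1, T2, eta2):
    """Fit eta(T) = A * exp(Ea / (R * T)) (Eq. 3) through two points."""
    Ea = R * math.log(eta1 / eta2) / (1.0 / T1 - 1.0 / T2)
    A = eta1 / math.exp(Ea / (R * T1))
    return A, Ea

# Illustrative anchor points: water at 20 C and 50 C (Pa s)
A, Ea = fit_arrhenius(293.15, 1.002e-3, 323.15, 0.547e-3)

def eta(T_kelvin):
    """Arrhenius-like viscosity, Eq. (3)."""
    return A * math.exp(Ea / (R * T_kelvin))

def scaled_mobility(mu_20C, T_celsius):
    """Eq. (4) with everything but eta(T) fixed: mu scales as eta(T0)/eta(T)."""
    return mu_20C * eta(293.15) / eta(T_celsius + 273.15)

for T in (20, 35, 55):
    print(T, f"{scaled_mobility(-2.0e-8, T):.2e}")
```

Because η roughly halves between 20 oC and 55 oC, the predicted mobility magnitude roughly doubles over the same range, which is the trend the TiO2 data are compared against below.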
III. METHODOLOGY

A. Sample Preparation

Cyclohexane, purchased from Sigma-Aldrich, was used as the organic solvent, and Triton X-45 (Sigma-Aldrich) was selected as the surfactant. Tetrapropyl orthotitanate (Ti(OC3H7)4) from Fluka and sodium hydroxide (HmbG Chemical) were used. Buffer at pH 7 was prepared with sodium phosphate monobasic dihydrate (H2NaO4P·2H2O) from Fluka, assay > 99.0 %, and sodium phosphate
dibasic dihydrate (HNa2O4P·2H2O), purity up to 98 %, also from Fluka. The strains of the live and dead bacteria were confirmed with CHROMagar(TM) O157 for E. coli and CHROMagar(TM) Staph aureus for S. aureus, purchased from LabChem Sdn. Bhd. The molar ratio (wo = [H2O]/[Surf]) was varied to control the sizes of the synthesized TiO2. Nanosized TiO2 particles were prepared by controlled hydrolysis of tetrapropyl orthotitanate (TPOT) with 0.5 M sodium hydroxide in deionized water. Triton X-45 was mixed with cyclohexane, in which it is soluble. The microemulsion solution was stirred for ten minutes and the size of the microemulsion was measured with a Zetasizer Nano-ZS. TPOT was then mixed in cyclohexane and added dropwise to the microemulsion, with the solution stirred overnight throughout the process. Ethanol (95 %) was added to the solution in the ratio 3:2 (Ti(OH)4:ethanol) and stirred for another 30 minutes to break the emulsion and solubilize the surfactant. The product was dried at 60 oC for 24 hours, followed by calcination at 550 oC for 4 hours. The sizes of the dried fine powder of synthesized TiO2 were then examined. Escherichia coli (E. coli) O157 and Staphylococcus aureus (S. aureus) ATCC 35983 were selected for this surface charge study; both strains were supplied by the Microbiology Lab at University Malaya. E. coli tested with CHROMagar(TM) O157 gave colonies in mauve, while S. aureus colonies on CHROMagar(TM) Staph aureus appeared rose-coloured. The bacteria were then cultured on agar plates to prepare for the experiments and incubated overnight at 37 oC, the optimum temperature for the growth of both strains. The bacteria were harvested after 24 hours and pelleted by centrifugation for 15 min at 1500 g.
Because the growth medium was an agar plate, the bacteria were easy to wash off before use in the experiments. They were then suspended in pH 7 buffer, ready for measurement. To prepare dead cells, fresh bacteria on an agar plate incubated overnight were further incubated at 55 oC for another four hours, then harvested and centrifuged at 1500 g for 15 min before the experiment. A reculturing test was performed on the overheated populations of E. coli and S. aureus to confirm cell death: no new growth was observed on the agar plate after the additional four hours of heating at 55 oC, a temperature above the survival range of these strains. New colonies were found, however, if the heating lasted less than four hours. Consequently, heating at 55 oC for this duration killed the bacteria without denaturing them.
The percentages of water content in cyclohexane and Triton X-45 were measured with a Karl Fischer titrator (Mettler Toledo DL38) from Virginia. Particle size and electrophoretic mobility were determined using a Zetasizer Nano-ZS (Malvern Instruments, United Kingdom). An LMS HTS-1003 stirrer was used.
IV. RESULT AND DISCUSSION

The electrophoretic mobility of live and dead E. coli is compared with that of TiO2 in Fig. 1. The data indicate that live and dead E. coli were both electronegative over the range of temperatures shown. Live E. coli in pH-buffered suspension showed a mild increase in electrophoretic mobility as temperature increased; the difference between its highest and lowest mobility values was only a marginal -0.53 x 10^-8 m2 V-1 s-1. The electrophoretic mobility values of dead E. coli were erratic. From 25 oC to 35 oC, dead E. coli displayed a sharp decrease in electronegativity, from -2.11 x 10^-8 m2 V-1 s-1 to -1.24 x 10^-8 m2 V-1 s-1, a difference of 0.87 x 10^-8 m2 V-1 s-1. The electronegativity of dead E. coli increased at 40 oC but decreased over the next three temperature steps; at the final temperature, however, the negative electrophoretic mobility of dead E. coli increased drastically to -2.9 x 10^-8 m2 V-1 s-1. The data thus showed a repeating erratic pattern from 20 oC to 35 oC and from 35 oC to 55 oC. The results also showed that the surface charge of dead E. coli was more negative than that of live E. coli. The electrophoretic mobility of TiO2 increased with temperature, in agreement with the predicted results. In contrast, the mobilities of live and dead E. coli did not increase in proportion to temperature, indicating that both resisted changes in surface charge with increasing temperature. Fig. 2 shows the corresponding zeta potentials of live and dead E. coli over the same temperature range: the zeta potential of live E. coli was constant as temperature increased, while that of dead E. coli fluctuated erratically throughout. E. coli is a gram-negative bacterium whose surface layer is associated with lipopolysaccharide and with ions through protein-carbohydrate or protein-protein interactions.
Carboxyl functional groups attached to the proteins associated with teichoic acids can deprotonate to form negatively charged metal-binding sites. These anionic functional groups generate charge on the cell wall and may result in the formation of an electric field surrounding the entire cell [32], leading to the negative surface charge of live
and dead E. coli. The results showed a more negative electrophoretic mobility for dead E. coli than for live E. coli as temperature increased. Dead intact cells have been reported to generate a more effective negative surface charge [33]. This may be due to the concentration of protons outside the plasma membrane and the protein-bound transport of sodium and hydrogen ions, giving rise to a less negative net cell-wall charge [28, 33]. A live biological cell is able to balance the ions moving in and out across its membranes in order to reach neutralization and maintain its viability. The resting potential of a cell averages about -73 mV [34]; this resting potential reflects the cell continuously working to neutralize the charges around the cell membrane by pumping ions in and out. The electrophoretic mobility of live E. coli displayed only a mild increase over the temperature range, showing that live E. coli remained viable at these temperatures. The live hydrophobic cell membrane prevents charged molecules from diffusing easily through it, permitting a potential difference to exist across the membrane. The surface charge of live cells in phosphate-buffered suspension is stable and is not much affected by the metabolic activity of the live cells. However, the rise in temperature increased the kinetic energy of the ions and, consequently, their diffusion around the charged cells. The screening effect on the live cells decreased, explaining the mild increase in negative electrophoretic mobility of the live bacteria with temperature. A dead cell has inactivated metabolic processes and disrupted protein integrity, as indicated by the erratic electrophoretic mobility of dead E. coli, which showed a repeated trend over each half of the temperature range, resembling a process of charging and recharging of a dead cell.
The dead cell would be expected to respond to increasing temperature in the same constant manner as TiO2; however, the non-viable cell allows more ions to penetrate through its semi-permeable membrane. These ions screen the electrostatic charge of the dead cells, gradually reducing the negative surface charge. The sudden increases in negative electrophoretic mobility of the dead cells at 40 oC and 55 oC might be due to less ion penetration through the cell at those points, with the dead cell recovering its negatively charged surface at that particular moment. Fig. 3 shows the results for live S. aureus, dead S. aureus and TiO2. Live S. aureus exhibits a negative response over this temperature range, with a slight increase in negative electrophoretic mobility as the
temperature increases. This shows that live S. aureus is not much affected by this range of temperature. The electrophoretic mobility of dead S. aureus is consistent from 20 oC to 40 oC, but its negative electrophoretic mobility then increases up to the end of the temperature range, indicating that the migration rate of dead S. aureus increases at higher temperature. The electrophoretic mobilities of both live and dead S. aureus are negative, but below 40 oC the results do not show which group is more electronegative; at higher temperature, dead S. aureus tends to be more electronegative than live S. aureus. The corresponding zeta potential, used to estimate the surface charge potential over this temperature range, is shown in Fig. 4. The data display a gradual decrease in negative zeta potential for both live and dead S. aureus as temperature increases, with values for both groups ranging from -30 mV to -20 mV. S. aureus is a gram-positive bacterium with a thicker peptidoglycan layer at the cell membrane and a simpler protein integrity than E. coli. A more rigid cell membrane reduces ion permeability through the cell, and dead S. aureus is able to restrain the loss of ions from the cell interior to the outer environment because ion transport through the cell is inactivated. This explains the consistent electrophoretic mobility of live and dead S. aureus. At higher temperature, the counter-ions around the dead cells diffuse more, reducing the screening effect on the negatively charged cells and leading to a greater increase in electrophoretic mobility above 40 oC. The decrease in electronegativity of the surface charge shown by the zeta potentials of live and dead S. aureus indicates that the ions gain thermal energy from the heating process and move further away from the charged cells. The electrophoretic mobilities of live E. coli and S. aureus are compared in Fig.
5, with the corresponding zeta potentials for both species in Fig. 6. The measured electrophoretic mobility of live E. coli and S. aureus increases over the temperature range examined. The increase is more obvious at lower temperatures, with more consistent results at higher temperatures, from 35 oC to 50 oC. The corresponding zeta potential of E. coli is more constant than that of S. aureus; the S. aureus data show a decrease in negative surface charge as temperature increases. Both figures demonstrate that live E. coli is less electronegative than live S. aureus.
The two bacteria belong to different categories, E. coli being gram-negative and S. aureus gram-positive, and the major difference between the species is the structure of the cell membrane. E. coli has a thinner peptidoglycan layer but comprises more proteins, while in gram-positive S. aureus the surface layers contain surface-layer homology domains that bind to the peptidoglycan and to negatively charged secondary cell-wall polymers. With its thinner peptidoglycan layer, E. coli can balance the ions moving in and out of the cell faster than S. aureus, producing a more constant result throughout the temperature change. S. aureus shows the same characteristic as TiO2 and liposomes over the lower temperature increases; at higher temperatures the ions diffuse more and reach a saturation level at which the cells are able to maintain their surface charge.
V. CONCLUSION

The objective of this study was to test the effects of temperature on the surface charge of TiO2 and of live and dead E. coli and S. aureus in phosphate buffer at pH 7. The electrophoretic mobility and zeta potential of TiO2 were compared with those of each strain of bacteria. The reduction of viscosity with increasing temperature led to an increase in the electrophoretic mobility of TiO2 and of the biological cells. TiO2 showed proportionality between electrophoretic mobility and temperature, but the bacterial cells increased only slightly in mobility, indicating that biological cells, especially live ones, naturally resist changes to their surface charge. The fluctuating electrophoretic mobility of the dead bacteria at various temperatures was attributed to the increased permeability of dead cells, whose charge was screened by ions passing through them. The measured zeta potential, in contrast, decreased slightly as temperature increased, because the ions in the electrolyte absorbed thermal energy and gained kinetic energy; the faster-moving ions diffuse more at the surface of the charged particles and biological cells and reduce the zeta potential. Live E. coli showed a lower electrophoretic mobility and zeta potential than live S. aureus because the thinner peptidoglycan layer of the E. coli cell membrane enables a faster exchange of ions into the cells and expulsion of the excess into the surrounding electrolyte.
Fig. 1 Effects of temperature (20 oC-55 oC) on the electrophoretic mobility of TiO2 and of live and dead E. coli (the measured electrophoretic mobility of TiO2 helps to guide the trend as temperature increases).
Fig. 2 Zeta potential of live and dead E. coli corresponding to the measured electrophoretic mobility.
Fig. 3 Effects of temperature (20 oC-55 oC) on the electrophoretic mobility of TiO2 and of live and dead S. aureus (the measured electrophoretic mobility of TiO2 helps to guide the trend as temperature increases).
Fig. 4 Zeta potential of live and dead S. aureus corresponding to the measured electrophoretic mobility.
Fig. 5 Effects of temperature (20 oC-55 oC) on the electrophoretic mobility of live E. coli and S. aureus.
Fig. 6 Zeta potential of live E. coli and S. aureus corresponding to the measured electrophoretic mobility.
ACKNOWLEDGEMENT

Special thanks to the Vot-F grant from University Malaya for supporting this research, to Prof. Thong Kwai Lin for allowing the use of the facilities in the Microbiology Lab, University Malaya, and to Ms. Fong Bao Wen for her helpful guidance on handling the bacteria. Many thanks to Ms. Vivien Tay for proofreading this manuscript.
REFERENCES

1. Cha T W et al. (2006) Formation of Supported Phospholipid Bilayers on Molecular Surfaces: Role of Surface Charge Density and Electrostatic Interaction. Biophysical Journal 90:1270-1274.
2. Davis E J et al. (1999) An Analysis of Electrophoresis of Concentrated Suspensions of Colloidal Particles. Colloids and Interface Sci 215:397-408.
3. Wong M G et al. (2001) Objective Testing for the Dependence of Electrophoretic Mobilities upon Size in Capillary Zone Electrophoresis. Chromatographia 53:431-436.
4. Merode A E J, Mei H C, Busscher H J, Krom B P et al. (2008) Increased Adhesion of Enterococcus Faecalis Strains with Bimodal Electrophoretic Mobility Distributions. Colloids and Surfaces B: Biointerfaces 64:302-306.
5. Fazio S, Colomer M T, Salomoni A, Moreno R et al. (2008) Colloidal Stability of Nanosized Titania Aqueous Suspensions. Journal of the European Ceramic Society 28:2171-2176.
6. Park Y, Jang S H, Woo E R, Jeong H G, Choi C H, Hahm K S et al. (2003) A Leu-Lys-rich Antimicrobial Peptide: Activity and Mechanism. Biochimica et Biophysica Acta 1645:172-182.
7. Strand S P, Østgaard K et al. (2002) Efficiency of Chitosans Applied for Flocculation of Different Bacteria. Water Research 36.
8. Ly M H, Meylheuc T, Fontaine M N B, Le T M, Belin J M, Wache Y et al. (2006) Importance of Bacterial Surface Properties to Control the Stability of Emulsions. International Journal of Food Microbiology 112:26-34.
9. Ly M H, Goudot S, Le M L, Cayot P, Teixeira J A, Belin J M, Wache Y et al. (2008) Interactions between Bacterial Surfaces and Milk Proteins, Impact on Food Emulsions Stability. Food Hydrocolloids 22:742-751.
10. Gänzle M G, Vossen J M, Hammes W P et al. (1999) Effect of Bacteriocin-Producing Lactobacilli on the Survival of Escherichia Coli and Listeria in a Dynamic Model of the Stomach and the Small Intestine. International Journal of Food Microbiology 48:21-35.
11. Willcox M D P, Cowell B A, Williams T, Holden B A et al.
(2001) Bacterial Interactions with Contact Lenses; Effects of Lens Material, Lens Wear and Microbial Physiology. Biomaterials 22:3235-3247.
12. Høiby N, Tvede M, Bangsborg J M, Jerulf A K, Pers C, Hansen H et al. (1997) Excretion of Ciprofloxacin in Sweat and Multiresistant Staphylococcus Epidermidis. The Lancet 349.
13. Cao J et al. (1998) Salt Effects in Capillary Zone Electrophoresis II. Mechanisms of Electrophoretic Mobility Modification due to Joule Heating at High Buffer Concentrations. Journal of Chromatography A 809:159-171.
14. Saito T, Kato T, Ishihara K, Okuda K et al. (2001) Arch. Oral Biol 42:539-545.
15. West R J, Cilliers J J et al. (1998) Miner Eng 11:189-194.
16. Martinez R E, Schott J, Oelkers E H et al. (2008) Surface Charge and Zeta Potential of Metabolically Active and Dead Cyanobacteria. Journal of Colloid and Interface Science 323:317-325.
17. Kourkine I V, Davis E, Ruffolo C G, Kapsalis A, Barron A E et al. (2003) Detection of Escherichia coli O157:H7 Bacteria by a Combination of Immunofluorescent Staining and Capillary Electrophoresis. Electrophoresis 24:655-661.
18. Bartelt M, Kochanovski H, Jann B, Jann K et al. (1993) Carbohydr. Res. 248:233.
19. Sonohara R, Ohshima H, Kondo T et al. (1995) Difference in Surface Properties between Escherichia coli and Staphylococcus aureus as Revealed by Electrophoretic Mobility Measurements. Biophysical Chemistry 55:273-277.
20. Ghernaout D, Kellil A, Ghernaout B et al. (2007) Application of Electrocoagulation in Escherichia coli Culture and Two Surface Waters. Desalination 219:118-125.
21. Frank J F et al. (2004) Attachment of Escherichia coli O157:H7 Grown in Tryptic Soy Broth and Nutrient Broth to Apple and Lettuce Surfaces as Related to Cell Hydrophobicity, Surface Charge and Capsule Production. International Journal of Food Microbiology 96:103-109.
22. Li J et al. (1999) The Effects of the Surface Charge and Hydrophobicity of Escherichia coli on Its Adhesion to Beef Muscle. International Journal of Food Microbiology 53:185-193.
23. Campanhã M T N et al. (1999) Interactions between Cationic Liposomes and Bacteria: The Physical-Chemistry of the Bactericidal Action. Journal of Lipid Research 40:1495.
24. Knuefermann P, Baker J S, Huang C H, Sekiguchi K, Hardarson H S, Takeuchi O, Akira S, Vallejo J G et al. (2004) Toll-Like Receptor 2 Mediates Staphylococcus Aureus-Induced Myocardial Dysfunction and Cytokine Production in the Heart. American Heart Association 1524-4539.
25.
Na'was T, Hendrix E, Hebden J, Edelman R, Martin M, Campbell M, Naso R, Schwalbe R, Fattom A I et al. (1998) Phenotypic and Genotypic Characterization of Nosocomial Staphylococcus Aureus Isolates from Trauma Patients. Journal of Clinical Microbiology 36(2):414-420.
26. Kleine J, Schuster C, Warnecke H J et al. (2002) Multifunctional System for Treatment of Wastewaters from Adhesive-Producing Industries: Separation of Solids and Oxidation of Dissolved Pollutants Using Doted Microfiltration Membranes. Chemical Engineering Sci 57:1661-1664.
27. CRC Handbook of Chemistry and Physics, 55th edition.
28. Wilson W W, Homan S C, Camplin F R et al. (2001) J Microbiol Methods 43:153-164.
29. Ohshima H et al. (2006) Colloid Vibration Potential in a Suspension of Spherical Colloidal Particles. Colloids and Surfaces B: Biointerfaces 56:16-18.
30. Jimenez M J, Carrique F, Delgado A V et al. (2007) Surface Conductivity of Colloidal Particles: Experimental Assessment of Its Contributions. Journal of Colloid and Interface Science 316:836-843.
31. Nieves et al. (1998) The Role of ζ Potential in the Colloidal Stability of Different TiO2/Electrolyte Solution Interfaces. Colloids and Surfaces A: Physicochemical and Engineering Aspects 148:231-243.
32. Yee N et al. (2004) A Donnan Potential Model for Metal Sorption onto Bacillus Subtilis. Geochimica et Cosmochimica Acta 68(18):3657-3664.
33. Martinez R E, Schott J, Oelkers E H et al. (2008) Surface Charge and Zeta Potential of Metabolically Active and Dead Cyanobacteria. Journal of Colloid and Interface Science 323:317-325.
Electrochromic Property of Sol-Gel Derived TiO2 Thin Film for pH Sensor

J.C. Chou1,2,*, C.H. Liu2, and C.C. Chen3

1 Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
2 Graduate School of Optoelectronics, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
3 Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
Abstract— In this study, an electrochromic titanium dioxide (TiO2) thin film has been deposited on an indium tin oxide/glass (ITO/Glass) substrate by the sol-gel method with a spin-coating technique. In order to optimize the electrochromic property of the TiO2 thin film, the influence of annealing temperature (150 oC - 650 oC) was studied. The electrochromic property of the TiO2 thin films was obtained in lithium (Li+) or hydrogen (H+) electrolyte solutions. Different transmittances (T%) of the TiO2 thin film could be obtained when the film was immersed in H+ electrolyte solutions of different concentrations; the pH value of the solution can therefore be determined from the transmittance. According to the results, the electrochromic TiO2 thin film can be developed into a pH sensor.

Keywords— Electrochromic, Titanium Dioxide, Sol-Gel Method, pH Sensor, Transmittance.
I. INTRODUCTION

The titanium dioxide (TiO2) thin film is a promising material for photocatalysts and dye-sensitized solar cells [1, 2], and has many advantages, such as nontoxicity, chemical stability, and resistance to acid and alkali. Various methods have been proposed to deposit TiO2 thin films, such as sputtering, electrodeposition [3, 4], and sol-gel [5-7]. In this study, the TiO2 thin film was deposited on the ITO/Glass substrate by the sol-gel method. The advantages of the sol-gel method compared with other methods include a low process temperature, low equipment cost, and easy coating of large surfaces; moreover, the thin film made by this method has a porous structure. Furthermore, different grain sizes and porosities of the thin film are obtained at different annealing temperatures, which influence the electrochromic property of the TiO2 thin film [6, 7]. In this study, the electrochromic property of the TiO2 thin film was studied in lithium (Li+) or hydrogen (H+) electrolyte solutions, and the TiO2 thin film was annealed at various temperatures to optimize its electrochromic property. Different transmittances (T%) of the TiO2 thin film can be obtained when it is immersed in H+ electrolyte solutions of different concentrations, and this phenomenon can be applied to detect the pH value of a solution.
II. EXPERIMENTAL

A. Materials
Indium tin oxide/glass (ITO/Glass) substrate was purchased from Ultimate Materials Technology Corp., Taiwan; its sheet resistivity is 15 Ω/□. The TiO2 powder (P25) was purchased from Uniregion Bio-Tech Corp., USA. The acetoacetone (AcAc) solution was purchased from Acros Corp., USA. The Triton X-100 solution was purchased from Panreac Corp., Spain. The lithium perchlorate (LiClO4) powder and the propylene carbonate (PC) solution were purchased from Acros Corp., USA. The pH buffer solution was purchased from Riedel-de Haën Corp., Germany.

B. Preparation of TiO2 Thin Film
In this study, the TiO2 thin film was prepared on the ITO/Glass by the sol-gel method. The TiO2 colloidal paste consists of 3 g TiO2 powder, 5 ml deionized water, 0.1 ml ethylacetoacetone, and 0.1 ml Triton X-100. This solution was stirred for 10 min in an ultrasonic bath. The TiO2 thin film was spread by spin-coating the TiO2 colloidal paste onto the ITO/Glass substrate. The coating process used two spin rates: the first was set at 1000 rpm for 10 s, and the second at 3000 rpm for 20 s. The as-deposited TiO2 thin film was annealed at 150 oC - 650 oC for 30 min in air and stored in air.

C. Electrochromic Reaction of TiO2 Thin Film
Figure 1 shows the schematic diagram of the electrochromic system used in this study. The TiO2 thin film, as working electrode, was connected to a 2.5 V direct-current supply, and the counter electrode was platinum (Pt). The electrolyte solutions were Li+ or H+ electrolyte solutions, namely 0.1 M LiClO4-PC solution and pH buffer solution, respectively. The coloring time and bleaching time of the TiO2 thin film were 90 s and 300 s, respectively. Furthermore, the transmittance of the TiO2 thin film was measured by UV-Vis spectroscopy (LABOMED UVD-3500, USA) after the coloring or bleaching reaction.
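Reading a pH value back from a measured transmittance, as proposed in the abstract, amounts to inverting a calibration curve. The sketch below does this by linear interpolation; the (T%, pH) calibration pairs are invented placeholders, not measurements from this work.

```python
# Hypothetical sensor calibration: map a measured transmittance (T%) back
# to a pH value by linear interpolation between calibration points.
# The (T%, pH) pairs below are illustrative placeholders only.
CAL = [(40.0, 2.0), (55.0, 4.0), (68.0, 7.0), (80.0, 10.0)]  # (T%, pH)

def ph_from_transmittance(t_percent):
    """Piecewise-linear inversion of the (assumed) calibration curve."""
    pts = sorted(CAL)
    if t_percent <= pts[0][0]:
        return pts[0][1]                      # clamp below the table
    for (t0, p0), (t1, p1) in zip(pts, pts[1:]):
        if t_percent <= t1:
            return p0 + (p1 - p0) * (t_percent - t0) / (t1 - t0)
    return pts[-1][1]                         # clamp above the table

print(ph_from_transmittance(68.0))
```

In practice the calibration table would be filled from transmittance measurements of the film in buffers of known pH, as described above.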
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 69–72, 2011. www.springerlink.com
70
J.C. Chou, C.H. Liu, and C.C. Chen
Fig. 1 Schematic diagram of electrochromic system
D. Cyclic Voltammetric (CV) Measurement System
Cyclic voltammetric (CV) measurements (Bio-Logic SP-150, France) were performed on the as-deposited TiO2 thin film and the annealed TiO2 thin films. All measurements were performed in a Li+ electrolyte solution with a three-electrode arrangement comprising the TiO2 thin film (as-deposited or annealed) as the working electrode, a platinum counter electrode, and an Ag/AgCl reference electrode. The scan rate was 50 mV s-1, and the potential was swept between cathodic (-2.5 V) and anodic (+2.5 V) limits vs. the Ag/AgCl reference electrode.
III. RESULTS AND DISCUSSION
A. Structural Analysis
The X-ray diffraction (XRD) patterns of the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150, 250, 350, 450, 550 and 650 °C are shown in Fig. 2. Both the as-deposited and annealed TiO2 thin films were polycrystalline, and the intensity of the (101) characteristic reflection of the anatase phase decreased with increasing annealing temperature as the anatase phase transformed to the rutile phase. The intensity of the (110) characteristic reflection of the rutile phase increased markedly when the annealing temperature reached 650 °C. Furthermore, the grain size of the TiO2 thin film was calculated from the full-width at half maximum (FWHM) of the diffraction peaks using the Scherrer equation. The grain sizes were 20.9 nm, 20.9 nm, 21.0 nm, 20.4 nm, 21.1 nm, 21.3 nm and 22.6 nm for the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C, respectively. They lie in the range of 20 nm to 22 nm depending on the thermal treatment temperature, showing that the grain size of the TiO2 thin film increased with increasing annealing temperature [7-9].
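The grain-size estimate from peak broadening can be sketched as follows. The FWHM value used here is illustrative, since the measured FWHM values are not tabulated in the text; a Cu Kα wavelength and a shape factor K = 0.9 are assumed.

```python
import math

def scherrer_grain_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM in radians and theta is half the
    diffraction angle 2-theta. Wavelength defaults to Cu K-alpha.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative only: an anatase (101) peak near 2-theta = 25.3 deg
# with a 0.4 deg FWHM yields a grain size on the order of 20 nm,
# consistent with the 20-22 nm range reported above.
size_nm = scherrer_grain_size_nm(fwhm_deg=0.4, two_theta_deg=25.3)
```

A narrower peak (smaller FWHM) gives a larger crystallite size, which is why the growing grains at higher annealing temperatures sharpen the diffraction peaks.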
Fig. 2 X-ray diffractograms of TiO2 films annealed at different temperatures

B. Morphological Property
The scanning electron microscopy (SEM) images of the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C are shown in Fig. 3. The morphology of the as-deposited TiO2 thin film was similar to that of the TiO2 thin film annealed at 150 °C (Fig. 3(a) and Fig. 3(b)). The particles of the annealed TiO2 thin films were agglomerated by the thermal treatment (Fig. 3(c)-(g)), increasing the grain size of the annealed films. Furthermore, Fig. 3 shows that the as-deposited TiO2 thin film had a higher degree of porosity than the annealed TiO2 thin films, and that the porosity of the TiO2 thin film decreased with increasing annealing temperature [7].
Fig. 3 SEM images of TiO2 thin films annealed at different temperatures: (a) as-deposited, (b) 150 °C, (c) 250 °C, (d) 350 °C, (e) 450 °C, (f) 550 °C and (g) 650 °C
Electrochromic Property of Sol-Gel Derived TiO2 Thin Film for pH Sensor
C. Cyclic Voltammetric Property
The electrochemical activity of the TiO2 thin film was measured by the CV measurement system in Li+ electrolyte solution, and the voltage was scanned between -2.5 V and +2.5 V vs. the Ag/AgCl reference electrode. Figure 4 shows the cyclic voltammograms of the TiO2 thin film annealed at different temperatures; the as-deposited TiO2 thin film had a higher peak current density than the annealed TiO2 thin films. Furthermore, the ion storage capacities (ISC, Li+ charge inserted per unit active area of the electrode) were 128 mC/cm2, 111 mC/cm2, 113 mC/cm2, 118 mC/cm2, 112 mC/cm2, 93 mC/cm2 and 82 mC/cm2 for the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C, respectively; the as-deposited TiO2 thin film had the highest ISC. The ISC of the TiO2 thin film is related to its grain size and porosity: a small grain size and high porosity increase the reaction area between the Li+ electrolyte solution and the TiO2 thin film (Fig. 2 and Fig. 3). Li+ ions therefore diffused more readily in the as-deposited TiO2 thin film than in the annealed films, so the as-deposited TiO2 thin film had a higher ISC than the annealed TiO2 thin films [6, 7, 10-12].
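As a sketch of how an ISC value can be extracted from a voltammogram: assuming the inserted charge is obtained by integrating the cathodic current density over the sweep time (j dV / v at scan rate v), a minimal helper looks like the following. The function and data are illustrative, not the authors' analysis code.

```python
def ion_storage_capacity_mC_cm2(voltage_V, current_density_mA_cm2, scan_rate_V_s):
    """Inserted charge per unit area from one CV sweep.

    Q/A = integral of j dt = integral of j dV / v, where v is the
    scan rate. Only the cathodic (negative) current, i.e. Li+
    insertion, is accumulated, using the trapezoidal rule.
    """
    q = 0.0
    for i in range(1, len(voltage_V)):
        j0 = min(current_density_mA_cm2[i - 1], 0.0)
        j1 = min(current_density_mA_cm2[i], 0.0)
        q += 0.5 * (j0 + j1) * (voltage_V[i] - voltage_V[i - 1])
    return abs(q) / scan_rate_V_s  # mA*s/cm^2 = mC/cm^2
```

For example, a constant cathodic current density of 1 mA/cm2 over the 5 V sweep at the paper's 50 mV/s scan rate corresponds to 100 mC/cm2, the same order as the 82-128 mC/cm2 values reported above.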
The T% of the bleached reaction (Tbleach%) were 10.9 %, 6.2 %, 13.3 %, 5.3 %, 10.1 %, 1.7 % and 1.9 % for the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C, respectively. Furthermore, the coloration efficiency (CE, η) of the TiO2 thin film was determined using Eq. (1):
CE (η) = log(Tbleach% / Tcolor%) / (Q/A)    (1)
where Tbleach%/Tcolor% is the transmittance in the bleached/colored state at λ = 550 nm and Q/A is the ISC (C cm-2). The measured CE values were 12.1 cm2/C, 10.6 cm2/C, 6.9 cm2/C, 2.7 cm2/C, 0.2 cm2/C, 3.4 cm2/C and 4.5 cm2/C for the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C, respectively; the as-deposited TiO2 thin film had a higher CE than the annealed films. This indicates that the higher degree of porosity and small grain size of the TiO2 thin film enhanced its electrochromic reaction [6, 7, 10, 13]. Furthermore, the surfactant, Triton X-100, was completely eliminated from the TiO2 thin film by high-temperature annealing, which destroyed the pore structure of the TiO2 thin film; at the same time, the electrochromic activity of the TiO2 thin film was lost [13].
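Eq. (1) can be checked numerically: the as-deposited film's reported transmittances and ISC reproduce the quoted CE. A minimal sketch, not the authors' code:

```python
import math

def coloration_efficiency_cm2_per_C(t_bleach_pct, t_color_pct, isc_C_cm2):
    """CE (eta) = log10(T_bleach / T_color) / (Q/A), Eq. (1).

    The log ratio is the change in optical density (delta OD) at
    550 nm; Q/A is the ion storage capacity in C/cm^2.
    """
    return math.log10(t_bleach_pct / t_color_pct) / isc_C_cm2

# As-deposited film: T_bleach = 10.9 %, T_color = 0.3 %,
# ISC = 128 mC/cm^2 = 0.128 C/cm^2 -> CE close to the reported 12.1 cm^2/C.
ce = coloration_efficiency_cm2_per_C(10.9, 0.3, 0.128)
```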
Fig. 4 Cyclic voltammograms of TiO2 thin film annealed at different annealing temperatures in 0.1 M LiClO4 solution

Fig. 5 Transmittance spectra of TiO2 thin film annealed at different annealing temperatures in 0.1 M LiClO4 solution
D. Optical Properties
In this study, the TiO2 thin film was immersed in Li+ electrolyte solution during the electrochromic reaction, and the transmittance spectra of the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C are shown in Fig. 5. The T% of the colored reaction (Tcolor%) of the as-deposited TiO2 thin film and the TiO2 thin films annealed at 150 °C, 250 °C, 350 °C, 450 °C, 550 °C and 650 °C were 0.3 %, 0.4 %, 2.3 %, 2.5 %, 9.5 %, 0.8 % and 0.8 %, respectively.
E. Application for pH Sensor
The as-deposited TiO2 thin film had the highest CE, so its electrochromic reaction was investigated in H+ electrolyte solutions. Figure 6 shows the transmittance spectra of the as-deposited TiO2 thin film during the colored reaction in H+ electrolyte. The T% of the as-deposited TiO2 thin film differs in different concentrations of H+ electrolyte
solutions. The pH sensitivity and linearity are 1.92 T%/pH and 0.9963 at 800 nm, respectively, as shown in Fig. 7. Furthermore, the coloring time of the as-deposited TiO2 thin film was 90 s, which is less than the response time (about 4-10 min) of traditional pH sensors [14, 15]. According to these results, the electrochromic TiO2 thin film is suitable for application as a pH sensor.
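The sensitivity (slope) and linearity (correlation coefficient) in Fig. 7 come from a least-squares line through the (pH, T%) points. A sketch with hypothetical readings, since the raw data points are not tabulated in the text:

```python
def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b; returns (slope, intercept, r)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

# Hypothetical (pH, T% at 800 nm) readings for illustration only.
points = [(1, 58.0), (2, 56.2), (3, 54.1), (4, 52.3), (5, 50.4)]
slope, intercept, r = linear_fit([p for p, _ in points], [t for _, t in points])
sensitivity = abs(slope)  # T% per pH unit
linearity = abs(r)        # the paper reports r = 0.9963
```

With these illustrative points the slope is about 1.9 T%/pH with |r| above 0.99, the same form as the reported calibration.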
Fig. 6 Transmittance spectra of TiO2 thin film colored in different H+ electrolyte solutions

Fig. 7 Transmittance of TiO2 thin film colored in different concentrations of H+ electrolyte solutions at 800 nm (sensitivity = 1.92 T%/pH, linearity = 0.9963)
IV. CONCLUSIONS
The electrochromic TiO2 thin film has been prepared successfully, and the T% of the TiO2 thin film differs markedly between the colored and bleached reactions. Furthermore, the pH sensitivity and linearity of the electrochromic TiO2 thin film are 1.92 T%/pH and 0.9963, respectively. According to these results, the electrochromic TiO2 thin film can be developed into a pH sensor.
ACKNOWLEDGEMENTS
REFERENCES
1. Qourzal S., Barka N., Tamimi M. et al. (2008) Photodegradation of 2-naphthol in water by artificial light illumination using TiO2 photocatalyst: Identification of intermediates and the reaction pathway. Appl. Catal. A 334:386-393
2. Patrocinio A. O. T., Paterno L. G., Iha N. Y. M. (2009) Layer-by-layer TiO2 films as efficient blocking layers in dye-sensitized solar cells. J. Photochem. Photobiol. A 205:23-27
3. Ziegler J. P. (1999) Status of reversible electrodeposition electrochromic devices. Sol. Energy Mater. Sol. Cells 56:477-493
4. de Tacconi N. R., Chenthamarakshan C. R., Wouters K. L. et al. (2004) Composite WO3-TiO2 films prepared by pulsed electrodeposition: morphological aspects and electrochromic behavior. J. Electroanal. Chem. 566:249-256
5. Wang C. M., Lin S. Y. (2006) Electrochromic properties of sputtered TiO2 thin films. J. Solid State Electrochem. 10:255-259
6. Verma A., Basu A., Bakhshi A. K. et al. (2005) Structural, optical and electrochemical properties of sol-gel derived TiO2 films: Annealing effects. Solid State Ionics 176:2285-2295
7. Karuppuchamy S., Iwasaki M., Minoura H. (2006) Electrochemical properties of electro-synthesized TiO2 thin films. Appl. Surf. Sci. 253:2924-2929
8. Bhandari S., Deepa M., Srivastava A. K. et al. (2009) Structure optimization and ion transfer behavior in viologen adsorbed titanium oxide films. Solid State Ionics 180:41-49
9. Deepa M., Kar M., Agnihotry S. A. (2004) Electrodeposited tungsten oxide films: Annealing effects on structure and electrochromic performance. Thin Solid Films 468:32-42
10. Xia X. H., Tu J. P., Zhang J. et al. (2008) Morphology effect on the electrochromic and electrochemical performances of NiO thin films. Electrochim. Acta 53:5721-5724
11. Verma A., Samanta S. B., Bakhshi A. K. et al. (2005) Effect of stabilizer on structural, optical and electrochemical properties of sol-gel derived spin coated TiO2 films. Sol. Energy Mater. Sol. Cells 88:47-64
12. Kubiak P., Geserick J., Husing N. et al. (2008) Electrochemical performance of mesoporous TiO2 anatase. J. Power Sources 175:510-516
13. Deepa M., Srivastava A. K., Sharma S. N. et al. (2008) Microstructural and electrochromic properties of tungsten oxide thin films produced by surfactant mediated electrodeposition. Appl. Surf. Sci. 254:2342-2352
14. Jan S. S. (2002) Study on the pH-sensing characteristics of the hydrogen ion-sensitive field-effect transistors with sol-gel-derived lead titanate series gate. Doctoral thesis, Department of Electrical Engineering, National Sun Yat-Sen University, Taiwan
15. Gill E., Arshak A., Arshak K. et al. (2009) Effects of polymer binder, surfactant and film thickness on pH sensitivity of polymer thick film sensors. Procedia Chemistry 1:265-268

Author: Jung-Chuan Chou
Institute: Department of Electronic Engineering, National Yunlin University of Science and Technology
Street: 123, Sec. 3, University Rd.
City: Douliou, Yunlin
Country: Taiwan, R.O.C.
Email:
[email protected]
This study has been supported by the National Science Council, Republic of China, under contract NSC 97-2221-E-224-058-MY3.
Failure Analysis of Retrieved UHMWPE Tibial Insert in Total Knee Replacement
S. Liza1, A.S.M.A. Haseeb1, A.A. Abbas2, and H.H. Masjuki1
1 Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Department of Orthopaedic Surgery, Faculty of Medicine Building, University of Malaya, 50603 Kuala Lumpur, Malaysia
Abstract— This study involves the failure analysis of an ultra high molecular weight polyethylene (UHMWPE) tibial insert from an Apollo® Total Knee System, which was removed after 10 years of service from a 70-year-old female patient. The tibial insert was investigated using a stereoscope, scanning electron microscopy (SEM), infinite focus microscopy (IFM) and energy dispersive spectroscopy (EDS) to characterize the morphology and composition of the bearing surface. Differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) were employed to characterize the degradation and crystallinity of the component. Gel-permeation chromatography (GPC) was used to measure the molecular weight of the polyethylene tibial insert. Results showed that the failure of the total knee replacement (TKR) was associated with high-grade wear and oxidative degradation. Surface delamination, scratch marks, pitting, folding, and embedded third-body particles were observed on the retrieved UHMWPE tibial surface. The damage features observed on the UHMWPE tibial insert suggest that degradation is due to fatigue-related wear and oxidation. Depth measurements of the pits gave depths as large as 35 µm and 60 µm. Overall, the results showed that the UHMWPE tibial insert, retrieved from a patient who was active and not overweight, had degraded material properties and high-grade wear, which contributed to the failure of the tibial insert after 10 years of use.
Keywords— UHMWPE, retrieval study, total knee replacement, wear, oxidation.
I. INTRODUCTION
Ultra high molecular weight polyethylene (UHMWPE) has been used as a load-bearing and articulating counterface in total knee replacement (TKR) for the past 40 years [1]. Polyethylene wear is considered a major limitation on the long-term success of TKR [2-4]. Analysis of retrieved polyethylene components has been carried out by a number of researchers to reveal the causes of TKR failure [5-10]. Their work has helped to characterize and analyse the failure due to wear of the polyethylene tibial insert against the metal femoral component. Recent studies have dealt with groups of polyethylene tibial inserts retrieved from TKR. The retrieved tibial inserts were analysed using Hood's grading scale system: the surface of the tibial insert is divided into ten regions, and the wear damage in each region is scored based on the extent and severity of seven damage modes (pitting, scratching, burnishing, embedded particulate debris, abrasion, permanent deformation and surface delamination). The total damage score is then obtained by summing the scores for all seven modes across all ten regions. The results were then compared by considering the different factors which could cause failure. A previous study by Engh et al. [11] compared the wear modes of two different tibial insert designs, namely mobile and fixed bearing knee inserts. They found that TKR with increased contact areas between metal and polyethylene components can result in reduced contact stresses and decreased wear. Garcia et al. [6] evaluated the extent of polyethylene damage by radiographic analysis. They found that the damage of polyethylene tibial inserts was related to patient characteristics and demographics. Ashraf et al. [12], in their retrieval analysis, determined the wear rate of 18 retrieved polyethylene tibial inserts by measuring the linear penetration and volumetric wear using a coordinate measuring machine. They found a linear relationship between both the volumetric wear and the linear penetration and the time of implantation. Apart from retrieval studies of groups, there has also been a case study on a single retrieved polyethylene tibial insert. Diabb et al. [13] investigated the cause of failure of a tibial insert implanted for 6 months. In their study, they focused on the changes in the properties of the polyethylene that led to the failure of the tibial insert. The present study attempts to establish the causes that led to the surface damage of a polyethylene tibial insert from a knee prosthesis which was retrieved after 10 years in vivo. The patient was an elderly housewife who was a community ambulator, and was active and not overweight. Analysis of the worn polyethylene surface provides information about the mechanism and process of polyethylene degradation after 10 years of implantation.
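The Hood scoring scheme described above, seven damage modes scored in each of ten surface regions and summed, can be sketched as follows (a hypothetical helper for illustration, not code from the cited studies):

```python
DAMAGE_MODES = ("pitting", "scratching", "burnishing", "embedded debris",
                "abrasion", "permanent deformation", "surface delamination")

def hood_total_score(region_scores):
    """Total Hood damage score.

    `region_scores` maps each of the ten surface regions to a dict of
    per-mode severity scores (e.g. 0-3); the total is the sum of every
    mode's score over every scored region.
    """
    return sum(scores.get(mode, 0)
               for scores in region_scores.values()
               for mode in DAMAGE_MODES)

# Example: damage recorded in 2 of the 10 regions.
example = {1: {"pitting": 2, "scratching": 1}, 7: {"surface delamination": 3}}
total = hood_total_score(example)  # 2 + 1 + 3 = 6
```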
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 73–79, 2011. www.springerlink.com
II. MATERIALS AND METHODS
A. Retrieved Samples
In 1999, a 70-year-old female patient received a fixed bearing, posterior stabilized Apollo® Total Knee System (Sulzer Orthopaedics, Austin, TX) for treatment of severe osteoarthritis of the right knee. Ten years postoperatively, the patient returned to the clinic complaining of pain of increasing severity. She also complained of increasing instability and deformity of the knee when walking. Radiographs of the knee performed at this time showed that the tibial tray was in a severe varus position, with reduction of the medial joint space (Fig. 1a). The right total knee replacement was revised shortly after the consultation. Intra-operatively, the tibial prosthesis was found to be loose. There was no evidence of infection. The failed knee replacement was replaced with a new implant, as shown in Fig. 1b. Fig. 2 shows the retrieved knee prosthesis consisting of three components: the femoral component, the polyethylene insert and the tibial tray. The femoral component and tibial tray were made of cobalt-chromium (CoCr) alloy, and the tibial insert was made from UHMWPE.
Fig. 2 Knee prosthesis components after retrieval from the patient (cleaned)
C. Oxidation Characterization
Fourier transform infrared (FTIR) spectroscopy was performed on a Perkin-Elmer Spectrum 100 FTIR spectrometer. FTIR was used to characterize the extent of oxidative degradation in the polyethylene tibial insert. The sample for FTIR characterization was prepared by cutting the UHMWPE sample into an approximately 1 mm thin section using a cutter. The FTIR measurements were performed on two distinct regions of the thin section: the oxidized surface layer and the bulk region. Three FTIR measurements were performed for each specimen to obtain an average result. Scans of the thin sections were acquired at 2 cm-1 intervals from 600 cm-1 to 4000 cm-1.
Fig. 1 Radiographs of the right knee of the patient (a) at presentation 10 years after primary total knee replacement (arrow pointing at decreased joint space at medial joint line with tibial tray in varus position) and (b) anteroposterior and lateral radiographs of the right knee post-revision knee replacement
B. Surface Degradation Evaluation
Surface degradation features of the UHMWPE component were evaluated using stereomicroscopy, scanning electron microscopy (Quanta 200 FESEM, FEI) and an Alicona infinite focus microscope (IFM). The scanning electron microscope (Quanta FEG-SEM with Oxford EDX) was used to investigate the worn surface of the retrieved sample. Energy dispersive x-ray (EDX) microanalysis was performed to identify the composition of the surface features.
D. Crystallinity Measurement
The crystallinity of the UHMWPE tibial insert was measured using differential scanning calorimetry (DSC) (DSC 820, Mettler-Toledo Inc., Columbus, OH). The sample for DSC characterization was prepared using a 5 mm inner diameter serrated coring bit at the weight-bearing region. Heating runs were conducted from 30°C to 200°C at a rate of 10°C/min on samples weighing about 5.6 mg. The sample was sealed in an aluminium sample pan and used for the measurement in the standard DSC mode. Sample crystallinity was determined by comparing the total heat of melting ∆H (the area under the endotherm) to the total heat of fusion of fully crystalline UHMWPE (∆Hf = 293 J/g) [14]. The percentage crystallinity was calculated as 100 ∆H/∆Hf.
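This calculation is a one-liner. Using the heat of fusion measured later for this insert (164.85 J/g), it gives roughly 56 %, in line with the reported average of 56.6 % (a sketch, not the instrument software):

```python
def crystallinity_pct(delta_h_J_per_g, delta_h_fusion_J_per_g=293.0):
    """Percent crystallinity from DSC: 100 * dH / dHf, with
    dHf = 293 J/g for fully crystalline UHMWPE [14]."""
    return 100.0 * delta_h_J_per_g / delta_h_fusion_J_per_g

pct = crystallinity_pct(164.85)  # about 56.3 %
```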
E. Molecular Weight Measurement
Gel-permeation chromatography (GPC) analysis (ALLIANCE GPCV-200) was carried out to measure the molecular weight of the polyethylene tibial insert. Molecular weight analysis was conducted with 50 mg of polyethylene dissolved in trichlorobenzene at 140ºC.
III. RESULTS
The wear damage analysis of the articulating surface of the UHMWPE tibial insert showed different types of features. Figs. 3b and 3c display the worn surface of the retrieved UHMWPE at 40x magnification under the stereoscope. Light scratching marks, oriented in the vertical direction in the micrographs, are noted on both the lateral and medial compartments.

Fig. 4 SEM images of UHMWPE tibial plate surface indicating different features of wear damage: (a) scratches, (b) pits, (c) folding and (d) delamination
Fig. 3 (a) General view of UHMWPE and optical microscopic images of the surface of tibial insert: (b) lateral compartment, and (c) medial compartment

Fig. 5 A typical EDX spectrum on the white particles indicating the presence of calcium (Ca) and phosphorus (P)
Further observation was done in the SEM, and the micrographs are shown in Fig. 4. Close observation of the surface at different areas revealed a number of features. Scratches (Fig. 4a), pitting (Fig. 4b), folding (Fig. 4c), and surface delamination (Fig. 4d) can be observed as a result of the wear of the component during service. Such features were seen all over the surface. As can be seen from Fig. 4c, there were some 'white particles' scattered on the worn surface. The EDX analysis (Fig. 5) confirmed the presence of Ca and P in these particles. Since hydroxyapatite (Ca10(PO4)6(OH)2) is an important component of human bone [15], the presence of Ca and P in these particles suggests that they are bone fragments.
The infinite focus microscope (IFM) allows the quantification of surface data and provides dimensional surface profiles (Fig. 6a-b). For example, deep pits were revealed at the bottom right corner of Fig. 6a and the upper left corner of Fig. 6b. Pit depths as large as 60 µm and 35 µm were obtained. Fig. 7 shows the FTIR spectra taken at the surface and in the bulk region of the UHMWPE samples. There was no significant difference between the two regions. The absorptions at 2915 and 2848 cm-1 (CH2 stretching), 1462 and 1471 cm-1 (CH2 and CH3 bending) and 719 and 730 cm-1 (linear chain of at least 4 CH2 groups) are clearly visible in Fig. 7. There were also other small absorption peaks which can be assigned to oxidation products: carboxylic acid (C=O) at 1697.03 and 1300.69 cm-1, ketone (C=O) at 1716 cm-1, ester (C=O) at 1748.26 cm-1 and alcohol (O-H) at 3361.09 cm-1 [2,16]. It was
observed that at the surface layer, the band at 1166.54 cm-1 is mainly assigned to phosphate (PO43-) [15]. This suggests that the sample contained a small amount of bone fragment, as indicated previously by the EDX analysis.
Fig. 8 A plot of heat flow versus temperature for UHMWPE tibial plate
Fig. 6 3D surface images and corresponding dimensional surface profiles
Fig. 9 The distribution of molecular weight of UHMWPE tibial insert after being implanted for 10 years

Finally, the molecular weight of the retrieved UHMWPE tibial insert was measured by gel-permeation chromatography (GPC). Fig. 9 shows the distribution of molecular weight of the UHMWPE tibial insert after it had been implanted for 10 years. The molecular weight (Mw) of the UHMWPE tibial insert was 130,104 g/mol.
Fig. 7 IR spectrum recorded on the surface and bulk region of UHMWPE samples

IV. DISCUSSIONS
In the present study, the crystallinity and molecular weight of the UHMWPE tibial insert were measured in order to determine whether there was a change after the insert had been implanted for 10 years. Fig. 8 shows the melting endotherm for the retrieved UHMWPE tibial insert. The heat of fusion for this sample, obtained by integrating the area under the endothermic peak, was 164.85 J/g. The average crystallinity of the UHMWPE tibial insert obtained from DSC was 56.6%.
The orthopaedic literature contains many articles describing the clinical performance of knee replacements with various designs and materials [7-9]. According to Hood et al. [5], there are seven surface wear damage modes on the articulating surface of retrieved tibial inserts: pitting, scratching, burnishing, embedded particulate debris, abrasion, permanent deformation and surface delamination. In the present study, the most common feature on the articulating surface of the retrieved polyethylene tibial insert was delamination, followed by scratches, pitting, folding, and third-body particles (bone fragments).
Ho et al. [17] in their study suggested that high-grade wear consists of pitting, scratching, and delamination. Thus, the damage modes on the 10-year implanted UHMWPE tibial insert in the current study can be classified as high-grade wear. The repetitive cyclic loading at the articulating surface during everyday activities suggests that a fatigue wear mechanism may have been active in the 10-year implanted tibial implant. Medel et al. [18] have suggested that the fatigue wear mechanism is predominant in knee implant systems due to the lower conformity and higher contact stress between the femoral and tibial components. Furthermore, fatigue wear mechanisms such as delamination and pitting were reported by Bradford et al. [19] as the most common features of damage for total knee replacements. A previous report described the formation of large flakes of wear debris, called delamination, initiated by cracks from the cyclic stress at the contact surface of the polyethylene [20]. Similarly, Burnett et al. [21] have reviewed that the fatigue wear modes (delamination and cracking) result from the combination of higher stresses and reduced mechanical properties. They also reported that the reduction in mechanical properties, such as loss of toughness, is due to oxidation. It has been observed that the oxidative degradation of polyethylene can lead to a decrease in wear resistance and mechanical properties [2-4]. In another study by Collier et al. [9], delamination was correlated with zones of high oxidation. They found that a highly oxidized subsurface zone could be related to the delamination. They proposed that the reduction in mechanical properties, such as decreased tensile strength and low elongation, leads to the visible subsurface white zone and the resulting delamination and cracking.
In the present study, the presence of oxidation products such as carboxylic acid, ketone, ester and alcohol in the FTIR analysis confirms that oxidative degradation occurred in the UHMWPE insert. The gamma sterilization method has a significant effect on delamination of the polyethylene surface. Severe delamination was found on retrieved polyethylene surfaces sterilized by gamma irradiation in air and in nitrogen [9,22], whereas no delamination was found on gas plasma sterilized polyethylene inserts [11]. This suggests that the 10-year retrieved polyethylene tibial insert might have been sterilized by the gamma radiation method. Sterilization by gamma irradiation in air has been shown to have the potential to accelerate the oxidation of polyethylene components [23]. The degradation does not occur immediately after sterilization but over time; it has been reported that degradation occurs over a period prior to implantation (shelf aging) and during in vivo service [3,24]. A correlation between implantation time and the presence of delamination was also found by Collier et al. [9]. The
retrieved samples with a short implantation duration of less than 4 years displayed signs of delamination or cracking in only 17-27% of components. In contrast, 65% of the retrieved tibial components with durations greater than 4 years displayed signs of delamination or cracking. Apart from delamination, pitting, scratching and folding also contributed to the damage of the polyethylene surface. Garcia et al. [6] found that pitting and scratching were the most predominant damage modes on the surfaces of 40 rotating-platform TKAs. Pitting formation induced by stress at the polyethylene tibial insert was not observed by Crowninshield et al. [8]. Third-body particles have been reported by Crowninshield et al. [8] and Oonishi et al. [25] as a main cause of the formation of multiple surface folding, pitting and scratching. Burger et al. [26] found that third-body particles were embedded in and dislodged from the polyethylene surface, causing the formation of pitting during the motion and loading of the knee prosthesis. In addition, scratches were formed when third-body particles were pushed along the articular surface, leaving tracks of deformation within the polyethylene surface. Hirakawa et al. [27] in their study suggested that small debris particles may contribute to the formation of scratches, while larger debris particles produced by polyethylene fatigue may contribute to delamination. These findings clearly show that the formation of the surface damage modes is interrelated. The absence of surface burnishing on the UHMWPE surface distinguishes this report from previous reports. The absence of burnishing wear can be caused by the design of the knee implant. Engh et al. [11] found that knee implants with a fixed bearing system have a small percentage of the surface burnished compared with the mobile bearing system, due to the small contact area.
In the present study, the Apollo® Total Knee System, which is a fixed bearing system, did not produce burnishing on the polyethylene surface. The crystallinity and molecular weight of the UHMWPE tibial insert were measured in order to determine whether there was a change after the insert had been implanted for 10 years. Kurtz [29] suggested that the degree of crystallinity for medical grade polyethylene should be within 39-75%; the crystallinity found in this study falls within the suggested range. Since crystallinity data for the UHMWPE tibial insert prior to implantation were not available, no comment can be made on the role of crystallinity in the performance of the present implant. However, a change in the molecular weight of the UHMWPE after 10 years of implantation was expected. In the present case, the molecular weight of the UHMWPE was measured as 130,104 g/mol. This result
is approximately ten times lower than the typical average molecular weight of a medical grade polyethylene, which is 1.5x106 g/mol [28]. Diabb et al. [13] also reported a reduction in molecular weight after retrieval; they found that the molecular weight of the tibial base polyethylene after 7 months was 166,855 g/mol. The degradation in molecular weight of the retrieved UHMWPE tibial insert shows that the polyethylene properties have changed, which is believed to have contributed to the wear damage.
V. CONCLUSIONS
The following conclusions can be drawn from the present study:
• High-grade wear and oxidative degradation are associated with the failure of the total knee replacement component implanted for 10 years.
• The damage modes observed on the articulating surface are surface delamination, scratching, pitting, folding, and third-body particulates.
• Oxidative degradation occurred in the retrieved UHMWPE tibial insert, and the oxidation products found were carboxylic acid, ketone, ester and alcohol.
• The retrieved UHMWPE tibial insert showed a low molecular weight, indicating that the polyethylene properties had been modified, leading to severe damage.
ACKNOWLEDGMENT

The authors would like to thank the University Malaya Medical Centre (UMMC) for supplying the sample. This research was supported by Postgraduate Research Grant (PPP) No. PS079-2009B from the University of Malaya.
REFERENCES 1. Santavirta S, Konttinen YT, Lappalainen R, Anttila A, Goodman SB, Lind M et al. Materials in total joint replacement. Curr Orthopaed. 1998;12:51-7. 2. Costa L, Luda MP, Trossarelli L, Brach del Prever EM, Crova M, Gallinaro P. Oxidation in Orthopaedic UHMWPE Sterilized by gamma-radiation and ethylene oxide. Biomaterials. 1998;19:659-68. 3. Kurtz SM, Hozack W, Marcolongo M, Turner J, Rimnac C, Edidin A. Degradation of Mechanical Properties of UHMWPE Acetabular Liners Following Long-Term Implantation. J Arthroplasty. 2003;18(1):68-78.
4. Wannomae KK, Christensen SD, Freiberg AA, Bhattacharyya S, Harris WH, Muratoglu OK. The Effect of Real-Time Aging on The Oxidation and Wear of Highly Crosslinked UHMWPE Acetabular Liner. Biomaterials. 2006;27:1980-7. 5. Hood RW, Wright TM, Burstein AH. Retrieval Analysis of Total Knee Protheses: a method and its application to 48 total condylar protheses. J Biomed Mater Res. 1983;17:829-42. 6. Garcia RM, Kraay MJ, Messerschmitt PJ, Golberg VM, Rimnac CM. Analysis of Retrieved Ultra High Molecular Polyethylene Tibial Components From Rotating-Platform Total Knee Arthroplasty. J Arthroplasty. 2009;24:131-8. 7. Engh GA, Zimmerman RL, Parks NL, Engh CA. Analysis of Wear in Retrieved Mobile and Fixed Bearing Knee Inserts. J Arthroplasty. 2009;24(6):28-32. 8. Crowninshield RD, Wimmer MA, Jacobs JJ, Rosenberg AG. Clinical Performance of Contemporary Tibial Polyethylene Components. J Arthroplasty. 2008;21(5):754-61. 9. Collier JP, Sperling DK, Currier JH, Sutula LC, Saum KA, Mayor MB. Impact of Gamma Sterilization on Clinical Performace of Polyethylene in the Knee. J Arthroplasty. 1996;11(4):377-89. 10. William IR, Mayor MB, Collier JP. The impact of sterilization method on wear in knee arthroplasty. Clin Orthop. 1998;356:170-80. 11. Engh GA, Zimmerman RL, Parks NL, Anderson EC. Analysis of Wear in Retrieved Mobile and Fixed Bearing Knee Inserts. The Journal of Arthroplasty 2009;24(6): 28-32. 12. Ashraf T, Newman JH, Desai VV, Beard D, Nevelos JE, Polyethylene wear in a non congruous unicompartmental knee replacement: a retrieval analysis. The Knee 2004; 11: 177-181. 13. Diabb J, Juarez-Hernandez A, Reyes A, Gonzalez-Rivera C, Hernandez-Rodriguez MAL. Failure analysis for degradation of a polyethylene knee prosthesis component. Eng Fail Anal. 2009; 16:1770-3. 14. Lopéz JM, Gómez-Barrenata E. Estructura y morfología de las regiones degradadas del polietileno de ultra alto peso molecular en prótesis articulares.Biomecánica 1999;7(13):2–11. 15. 
Slouf M, Sloufora I, Horak Z, Stepanek P, Entlicher G, Krejcik M, Radonsky T, Pokorny D, Sosna A. New Fast Method for Determination of Number of UHMWPE Wear Particles. J Mater Sci-Mater M. 2004;15:1267-78. 16. Goldman M, Lee M, Gronsky R, Pruitt L.Oxidation of Ultrahigh Molecular Weight Polyethylene characterized by Fourier Transformation Infrared Spectrometry. J Biomed Mater Res. 1997;37(1):43-50. 17. Ho FY, Ma HM, Liau JJ, Yeh CR, Huang CH. Mobile-bearing knees reduce rotational asymmetric wear. Clin Orthop Relat Res. 2007;462:143-9. 18. Medel FJ, Kurtz SM, Parvizi J, Klein GR, Kraay MJ, Rimnac CM. In Vivo Oxidation Contributes to Delamination but not Pitting in Polyethylene Components for Total Knee Arthroplasty. J Arthroplasty. 2010 (Articles in Press) 19. Bradford L, Baker DA, Graham J, Chawan A, Ries MD, Pruitt LA. Wear and Surface Cracking in Early Retrieved Highly Cross-linked Polyethylene Acetabular Liners. The Journal of Bone and Joint Surgery (American) 86:1271-1282 (2004) 20. Blunn GW, Joshi AB, Lilley PA, Engelbrecht E, Ryd L, Lidgren L, Hardinge K, Nieder E, Walker PS. Polyethylene wear in unicondylar knee prostheses. 106 retrieved Marmor, PCA, and St Georg tibial components compared. Acta Orthop Scand. 1992 Jun;63(3):247-55. 21. Burnett RSJ, Biggerstaff S, Currier BH, Collier JP, Barrack RL. Unilateral Tibial Polyethylene Liner Failure in Bilateral Total Knee Arthroplasty—Bilateral Retrieval Analysis at 8 Years. The Journal of Arthroplasty 2007:22 (5): 753-8. 22. Brandt JM, Medley JB, MacDonald SJ, Bourne RB. Delamination wear on two retrieved polyethylene inserts after gamma sterilization in nitrogen. Knee 2011: 18(2):125-9. 23. C.M. Rimnac, S.M. Kurtz. Ionizing Radiation and Orthopaedic Protheses, Nucl Instrum Meth B. 236 (2005) 30-37.
Failure Analysis of Retrieved UHMWPE Tibial Insert in Total Knee Replacement

24. L. Costa, K. Jacobson, P. Bracco, E.M. Brach del Prever, Oxidation of Orthopaedic UHMWPE, Biomaterials. 23 (2002) 1613-1624. 25. Oonishi H, Kim SC, Kyomoto M, Iwamoto M, Ueno M. Comparison of In-Vivo Wear between Polyethylene Inserts articulating against Ceramic and Cobalt-Chrome Femoral Components in Total Knee Prostheses. Bioceramics and Alternative Bearings in Joint Arthroplasty, 12th BIOLOX® Symposium, Seoul, Republic of Korea, September 7-8, 2007, Proceedings. 149-59. 26. Burger ND, de Vaal PL, Meyer JP. Failure analysis on retrieved ultra high molecular weight polyethylene (UHMWPE) acetabular cups. Engineering Failure Analysis 2007;14:1329-1345.
27. Hirakawa K, Bauer TW, Yamaguchi M, Stulberg BN, Wilde AH. Relationship Between Wear Debris Particles and Polyethylene Surface Damage in Primary Total Knee Arthroplasty. J Arthroplasty 1999;14(2):165-71. 28. Kurtz S. The UHMWPE handbook: Ultra-high molecular weight polyethylene in total joint replacement. Elsevier Inc., New York, 2004.

Corresponding author:
Author: Prof Dr. A.S.M.A. Haseeb
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Influence of Magnesium Doping in Hydroxyapatite Bioceramics Sintered by Short Holding Time

S. Ramesh1,2, R. Tolouei3, C.Y. Tan3, M. Amiriyan3, B.K. Yap3, J. Purbolaksono2, and M. Hamdi2

1 Centre of Advanced Manufacturing and Materials Processing, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
3 Ceramics Technology Laboratory, COE, Universiti Tenaga Nasional, 43009 Kajang, Selangor, Malaysia
Abstract— In the present research, nano hydroxyapatite (HA) powder doped with magnesia (MgO) was studied. The dopant was added to pure HA powder and ball milling was done for 1 hour. Green samples, in the form of discs and rectangular bars, were prepared and consolidated in air at temperatures ranging from 1000°C to 1300°C. The sintered samples were characterized to determine the phase stability, relative density, hardness, fracture toughness and Young’s modulus. The phase analysis revealed that the HA phase was not disrupted regardless of dopant additions and sintering temperature. It has been revealed that all HA samples achieved > 98% relative density when sintered between 1100ºC–1300ºC. However, the addition of 0.5 wt% MgO when sintered at 1100°C was found to be most beneficial in aiding sintering with samples exhibiting the highest Young’s modulus of 122.15 GPa and fracture toughness of 1.64 MPam1/2 as compared to 116.57 GPa and 1.18 MPam1/2 for the undoped HA. Keywords— Hydroxyapatite, MgO, Mechanical properties, Sinterability.
I. INTRODUCTION

The development of advanced ceramics for biomedical applications is one of the fastest growing research areas. Owing to the apatitic structure of human hard tissues, hydroxyapatite, Ca10(PO4)6(OH)2 (HA), appears to be the best-studied compound among the different forms of calcium phosphate ceramics [1]. Bone crystals are formed in a biological environment through the process of biomineralization [2]. The major inorganic component of bone mineral is a CaP-based apatite phase containing trace ions such as Na+, Mg2+ and K+, which are known to play a significant role in its overall performance [3]. Mg deficiency can cause poor mechanical reliability of HA, thus limiting its application to non-load-bearing regions [4]. As a result, it is favorable to incorporate these ions found in human bone to improve the mechanical properties of synthetic HA without any decomposition. For instance, Li et al. [5] reported that decomposition products such as TCP and TTCP were present
in the structure when HA was doped with zirconia. Evis et al. [6] described the presence of TCP in HA when magnesium was added and the samples were sintered above 1000 °C. These secondary phases would have an adverse influence on the mechanical properties and biodegradability of the HA ceramics [7]. The primary objective of this work is to study the phase stability and sinterability of HA ceramics, synthesized by a wet precipitation method and doped with up to 1 wt% magnesium oxide (MgO), using a new profile for conventional pressureless sintering.
II. METHODS AND MATERIALS

The HA powder used in the present work was prepared according to a novel wet chemical method comprising precipitation from an aqueous medium involving calcium hydroxide and orthophosphoric acid [8]. The dopant used in this work was obtained from a commercially available MgO powder (99.99% purity), and the amounts of dopant used were 0.1, 0.5 and 1.0 wt%. The synthesized HA and MgO powders were mixed in 150 mL of ethanol, followed by ball milling for 1 hour. After mixing, the wet slurry was dried, crushed and sieved to obtain a fine powder. The green samples were uniaxially compacted at about 1.3 MPa to 2.5 MPa into rectangular bars (4 × 13 × 32 mm) and circular discs (20 mm diameter). The compacts were subsequently cold isostatically pressed at 200 MPa (Riken Seiki, Japan). This was followed by consolidation of the particles by pressureless sintering in air using a rapid heating furnace (ModuTemp, Australia), over the temperature range 1000 °C to 1300 °C, with a ramp rate of 2 °C/min (heating and cooling) and a soaking time of one minute for each firing. All sintered samples were then polished to a 1 µm finish prior to testing.

The phase stability of all samples was characterized by X-ray diffraction (XRD-6000, Shimadzu, Japan). The bulk densities of the samples were determined by the water immersion technique (Mettler Toledo, Switzerland). The Young's modulus (E) was determined for rectangular samples by sonic resonance using a commercial testing instrument (GrindoSonic MK5 "Industrial", Belgium); the modulus of elasticity was calculated from the experimentally determined resonant frequency [9]. The fracture toughness (KIc) of the samples was determined using the Vickers indentation method (Matsuzawa, Japan). The indentation load (< 200 g) was applied and held in place for 10 seconds. Five indentations were made for each sample and the average value was taken. The KIc value was calculated using the equation derived by Niihara [10].

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 80–83, 2011. www.springerlink.com
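For illustration, the relations behind the hardness and stiffness measurements can be sketched as follows: the Vickers hardness follows the standard HV = 1.8544·P/d² relation, and the dynamic Young's modulus of a rectangular bar follows the ASTM E1876 flexural-resonance formula [9]. The simplified correction factor below assumes a slender bar (L/t ≥ 20), and all numerical inputs are hypothetical examples, not data from this study.

```python
def vickers_hardness_gpa(load_kgf: float, d_mm: float) -> float:
    """Vickers hardness from indentation load (kgf) and mean diagonal (mm).
    HV = 1.8544 * P / d^2 in kgf/mm^2; 1 kgf/mm^2 = 9.80665e-3 GPa."""
    hv_kgf_mm2 = 1.8544 * load_kgf / d_mm ** 2
    return hv_kgf_mm2 * 9.80665e-3

def youngs_modulus_gpa(mass_g, length_mm, width_mm, thick_mm, f_hz):
    """Dynamic Young's modulus of a rectangular bar in flexure (ASTM E1876):
    E = 0.9465 * (m * f^2 / b) * (L^3 / t^3) * T1 (Pa, with g/mm/Hz inputs),
    using the simplified correction T1 = 1 + 6.585*(t/L)^2 for L/t >= 20."""
    t1 = 1.0 + 6.585 * (thick_mm / length_mm) ** 2
    e_pa = 0.9465 * (mass_g * f_hz ** 2 / width_mm) * (length_mm ** 3 / thick_mm ** 3) * t1
    return e_pa / 1e9

# Hypothetical inputs: a 200 g indent leaving a 25 um diagonal,
# and a slender 60 x 8 x 2 mm bar of ~3 g resonating at 3526 Hz.
print(round(vickers_hardness_gpa(0.2, 0.025), 2))       # ~5.82 GPa
print(round(youngs_modulus_gpa(3.0, 60, 8, 2, 3526), 1))  # ~120.0 GPa
```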
III. RESULTS AND DISCUSSION

The sinterability of the HA compacts was compared in terms of phase stability, relative density, Vickers hardness, fracture toughness and Young's modulus. None of the samples, regardless of sintering conditions and dopant addition, showed any cracking or distortion after sintering. XRD phase analysis showed that, throughout the sintering of MgO-doped and undoped HA samples, no secondary phases such as TCP, CaO or calcium hydroxide were detected, as shown in Figure 1.
Fig. 1 XRD patterns of (a) undoped HA and HA containing (b) 0.1 wt%, (c) 0.5 wt% and (d) 1 wt% MgO samples sintered at 1200 °C

The XRD results indicated that the phase stability of HA was not disrupted by the sintering schedule and temperature, the pressing conditions prior to sintering, or the dopant addition. This result is very encouraging, as some findings have shown that the addition of other materials into the HA matrix may lead to decomposition of HA and the formation of TCP and CaO [11, 12]. Sintering at high temperatures, above 1300 °C, has been reported in the literature to be detrimental, as HA phase instability was observed. However, in the present work, decomposition of the HA phase was not observed even when sintered at 1300 °C. This observation could be associated with the high local humid atmosphere that could have played a role in hindering dehydroxylation in the HA matrix during high-temperature sintering.

The densification curves as a function of sintering temperature are shown in Figure 2. In general, the bulk density variation of all the compositions studied exhibited a similar trend with increasing sintering temperature. A general observation that can be made from Figure 2 is that the incorporated dopant has a minor effect on the measured bulk density of the HA samples. All the samples attained above 98% of theoretical density when sintered above 1200 °C.

Fig. 2 Relative density variation as a function of sintering temperature for HA with different amounts of MgO

The relationship between the Young's modulus of the sintered body, the sintering temperature and the MgO additions is shown in Figure 3. The inclusion of MgO in the HA lattice, particularly at the higher dopant concentrations, was found to be beneficial in enhancing the stiffness of the sintered HA body. As shown in Figure 3, the highest value of 124.4 GPa was recorded for HA samples containing 0.5 wt% MgO sintered at 1200 °C.

Fig. 3 The effect of sintering temperatures and MgO addition on the Young's modulus of HA
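The relative densities discussed above come from the water-immersion (Archimedes) measurement described in the Methods, which reduces to a short calculation. The sketch below assumes the commonly quoted theoretical density of stoichiometric HA (3.156 g/cm³) and a fully dense, non-porous sample; the masses are hypothetical, not measurements from this study.

```python
WATER_DENSITY = 0.9982   # g/cm^3 at ~20 C
HA_THEORETICAL = 3.156   # g/cm^3, commonly quoted for stoichiometric HA (assumption)

def relative_density_pct(dry_mass_g: float, suspended_mass_g: float) -> float:
    """Archimedes bulk density: rho = m_dry / (m_dry - m_suspended) * rho_water,
    reported as a percentage of the theoretical density of HA."""
    bulk = dry_mass_g / (dry_mass_g - suspended_mass_g) * WATER_DENSITY
    return 100.0 * bulk / HA_THEORETICAL

# Hypothetical sintered disc: 2.500 g in air, 1.700 g suspended in water
print(round(relative_density_pct(2.500, 1.700), 1))  # ~98.8 (% of theoretical)
```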
The variation of the average Vickers hardness and fracture toughness of samples sintered at various temperatures is shown in Figure 4 and Figure 5, respectively. The beneficial effect of MgO, especially at the 0.5 wt% addition, in enhancing the hardness and toughness of HA was revealed. Figure 4 shows that the measured hardness of all the samples followed a similar trend: the hardness increased rapidly to a maximum value and then decreased slowly with increasing sintering temperature. For example, the hardness of the undoped HA ceramic peaked at 1100 °C (8.1 GPa), and a further increase in temperature above 1100 °C resulted in a decrease in hardness.
Fig. 4 Effect of sintering temperature and MgO addition on the Vickers hardness of HA

The fracture toughness of all compositions exhibited a very similar trend as the sintering temperature increased (Figure 5). The results show that the addition of MgO was effective in enhancing the fracture toughness (KIc) of the synthesized HA, particularly when sintered at 1100 °C. The 0.5 wt% MgO-doped HA samples exhibited the highest fracture toughness of 1.64 ± 0.17 MPam1/2, compared with 1.36 ± 0.05 MPam1/2 measured for the undoped HA. It should be highlighted here that the KIc values obtained for the MgO-doped HA are very encouraging, as most researchers have reported experimental KIc values for HA varying from 0.9 MPam1/2 to about 1.2 MPam1/2 [13-15]. Moreover, the improved toughness in the presence of dopants reported by those researchers was accompanied by HA phase decomposition. From these results it can be concluded that MgO plays an important role in suppressing grain growth, leading to higher fracture strength and hence higher fracture toughness.

Fig. 5 The effect of sintering temperature and MgO addition on the fracture toughness of HA

IV. CONCLUSION

This study has shown that the incorporation of a small amount of magnesium oxide can be beneficial in enhancing the mechanical properties without affecting the HA phase stability, even when sintered at 1300 °C. MgO doping, however, was found to have a minor effect on the bulk density of sintered HA regardless of the sintering temperature employed. The addition of 0.5 wt% MgO, sintered at 1100 °C, was found to be most beneficial: the HA samples exhibited the highest Young's modulus of 122.15 GPa and fracture toughness of 1.64 MPam1/2.

ACKNOWLEDGMENT

The authors gratefully acknowledge the support provided by the Ministry of Science, Technology and Innovation of Malaysia (MOSTI), and SIRIM Berhad.

REFERENCES
1. G. Muralithran and S. Ramesh, “The effect of MnO2 addition on the sintering behavior of hydroxyapatite,” Biomed. Eng. App, Basis & Comm., Vol. 12, pp.43-48, 2000. 2. J. K. Samar and A. Bhatt. Himesh, “Nanocrystalline hydroxyapatite doped with magnesium and zinc: Synthesis and characterization,” Mater. Sci and Eng., Vol. 27, pp. 837-848, 2007. 3. P. Quinten Ruhé, Joop G.C. Wolke, Paul H.M. Spauwen and John A. Jansen, “Calcium Phosphate Ceramics for Bone Tissue Engineering,” in Tissue Engineering and Artificial Organs, 3rd ed., Joseph D. Bronzino, Ed. Taylor & Francis, 2006, pp. 38-1. 4. W. Suchanek and M. Yoshimura, “Processing and properties of hydroxyapatite-based biomaterials for use as hard tissue replacement implants,” J. Mater. Res., Vol. 13, pp. 94-117, 1998. 5. J. Li, L. Hermansson, and R. Soremark, “High strength biofunctional zirconia: mechanical properties and static fatigue behaviour of zirconia-apatite composites,” J. Mater. Sci.: Mater. Med., Vol. 4, pp. 50-54, 1993. 6. Z. Evis, M. Usta, and I. Kutbay, “Hydroxyapatite and zirconia composites: Effect of MgO and MgF2 on the stability of phases and sinterability,” Materials Chemistry and Physics, Vol. 110, pp. 68-75, 2008.
7. E. Landi, A. Tampieri, G. Celotti, S. Sprio, M. Sandri and G. Logroscino, “Sr-substituted hydroxyapatites for osteoporotic bone replacement,” Acta Biomaterialia, Vol. 3, pp. 961-969, 2007. 8. S. Ramesh, “A method for manufacturing hydroxyapatite bioceramic,” Malaysia Patent 2004, No. PI. 20043325. 9. ASTM E1876-97, “Standard test method for dynamic Young’s modulus, shear modulus and Poisson’s ratio by impulse excitation of vibration,” Annual Book of ASTM Standards. 1998. 10. K. Niihara, “Indentation microfracture of ceramics – its application and problems,” Ceramic Jap., Vol. 20, pp. 12-18, 1985. 11. Royer, J. C. Viguie, M. Heughebaert, and J. C. Heughebaert, “Stoichiometry of hydroxyapatite: Influence on the flexural strength,” J. Mater. Sci.: Mater. In Med., Vol. 4, pp. 76-82, 1993. 12. P. E. Wang and T. K. Chaki, “Sintering behaviour and mechanical properties of hydroxyapatite and dicalcium phosphate,” J. Mater. Sci.: Mater. In Med., Vol. 4, pp. 150-158, 1993.
13. C. K. Wang, C. P. Ju and J. H. Chern Lin, “Effect of doped bioactive glass on structure and properties of sintered hydroxyapatite,” Mat. Chem. Phys., Vol. 53, pp. 138-149, 1998. 14. S. Gautier, E. Champion and D. Bernache-Assollant, “Processing, microstructure and toughness of Al2O3 platelet reinforced hydroxyapatite,” J. Euro. Ceramic, Vol. 17, pp. 1361-1369, 1997. 15. Z. Evis, M. Usta and I. Kutbay, “Improvement in sinterability and phase stability of hydroxyapatite and partially stabilized zirconia composites,” J. Euro. Ceramic, Vol. 29, pp. 621-628, 2009.

Corresponding author:
Author: Professor Ramesh Singh
Institute: University Malaya
City: KL
Country: Malaysia
Email: [email protected]
In-vitro Biocompatibility of Folate-Decorated Star-Shaped Copolymeric Micelle for Targeted Drug Delivery

N.V. Cuong1,2, Y.L. Li1, and M.F. Hsieh1,*

1 Department of Biomedical Engineering, Chung Yuan Christian University, 200, Chung Pei Rd., Chung Li, Taiwan
2 Department of Chemical Engineering, Ho Chi Minh City University of Industry, 12 Nguyen Van Bao St, Ho Chi Minh, Vietnam
Abstract— Drug delivery systems using conventional nanocarriers are associated with systemic toxicity and poor bioavailability of the drug due to their lack of specificity. The objective of this study is to overcome these limitations by introducing a folate targeting ligand into the drug delivery system, in order to deliver the drug into tumor cells via passive and active receptor-mediated endocytosis. Hence, folate-decorated micelles based on the star-shaped FOL-PEG-PCL copolymer were prepared for targeting the folate receptor overexpressed in human breast cancer cells. The structure was characterized by 1H NMR, FT-IR and DSC. The self-assembly of the amphiphilic copolymer was investigated; the particle size of the micelle was 110.5 nm. The safety evaluation of the copolymeric micelle included in vitro nitric oxide production and hemolytic tests. The results obtained from the macrophage response and hemolysis tests suggest that the micelle prepared in this study had moderate in vitro toxicity and could be safely used for intravenous injection in animals. Keywords— Doxorubicin, Targeted Delivery, Polymeric Micelle.
I. INTRODUCTION

A drug delivery system using nanocarriers (i.e., polymeric nanoparticles and polymeric micelles) accumulates in tumor cells only through passive targeting mechanisms, and such a delivery system faces intrinsic limitations to its specificity [1]. To overcome these limitations, the introduction of various targeting ligands or antibodies into drug delivery systems has provided the opportunity to deliver drugs into tumor cells via receptor-mediated endocytosis [2]. Among them, the vitamin folic acid (folate) has been widely employed as a targeting moiety for various anti-cancer drugs [3]. The folate receptor (FR) is a 38 kDa glycosylphosphatidylinositol-anchored protein that binds the vitamin folic acid with high affinity (Kd < 1 nM). Folate receptors are known to be overexpressed in several human tumors, including ovarian and breast cancers, while their expression is highly restricted in normal tissues. Cellular uptake of folate is enhanced by the reduced folate carrier and/or the proton-coupled folate transporter, or by the glycosylphosphatidylinositol-linked folate receptor (FR) [3]. Recently, folic acid has been conjugated to biodegradable polymeric micellar systems for doxorubicin targeted delivery. For example, poly(D,L-lactic-co-glycolic acid)-poly(ethylene glycol) conjugated to folic acid to deliver DOX (PLGA–PEG–FOL) exhibited a more potent cytotoxic effect on KB cells than free doxorubicin [4]. In another study, doxorubicin-loaded polymeric micelles targeting folate receptors, using pH-sensitive micelles composed of poly(L-histidine-co-L-phenylalanine)-b-PEG and poly(L-lactic acid)-b-PEG-folate, were reported. This micelle formulation effectively suppressed the growth of existing MDR tumors in vitro and in vivo [5]. In this study, the preparation and characterization of folate-decorated star-shaped copolymers are presented. Additionally, the in-vitro biocompatibility of the copolymeric micelles, including hemolysis and cytotoxicity, was investigated.
II. MATERIALS AND METHODS

A. Materials

Pentaerythritol ethoxylate (EO/OH: 15/4), ε-caprolactone, doxorubicin hydrochloride (DOX·HCl), 1,6-diphenyl-1,3,5-hexatriene (DPH), and dimethylsulfoxide (DMSO) were purchased from Sigma-Aldrich Chem. Co. Inc. Stannous octoate (Sn(Oct)2) was obtained from MP Biomedicals Inc., USA. Tetrahydrofuran (THF) was distilled from metallic sodium and benzophenone. Triethylamine (TEA), hexane and diethyl ether were purchased from ECHO Chemicals (Miaoli, Taiwan). Cystamine dihydrochloride (Cystamine·2HCl, >98 %) was purchased from Acros Organics, New Jersey, USA.

B. Characterizations of Block Copolymers

The formation of the copolymer was confirmed by 1H NMR. 1H NMR spectra of the block copolymers were recorded on a Bruker spectrometer operating at 500 MHz using CDCl3 and DMSO as solvents.
C. Synthesis of Star-Shaped Poly(ε-caprolactone)-Poly(Ethylene Glycol) Block Copolymer: PCL-PEG-FOL

Under a nitrogen atmosphere, a DCM solution (5 mL) of OH-PEG-NH2 (240 mg, 0.07 mmol) and star-shaped
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 84–87, 2011. www.springerlink.com
polymer PCL-NPC (105 mg, 0.017 mmol) (ratio 4:1) with TEA (10 µL) was stirred at rt for 48 h. Folic acid (0.05 mmol, 22 mg) and DCC (0.06 mmol, 12.4 mg) were dissolved in DMSO (3 mL); DMAP (0.006 mmol, 1.0 mg) was added to the above solution and the mixture was stirred at rt for 24 h. The precipitated byproduct was removed by centrifugation. The mixture was dialyzed against DMSO for 24 h and against DD water for 24 h, then lyophilized.
D. Determination of Critical Micelle Concentration

The critical micelle concentration (CMC) of the copolymers was determined by UV-Vis spectroscopy (JASCO UV-530, Tokyo, Japan) using 1,6-diphenyl-1,3,5-hexatriene (DPH) as the probe. Samples for UV-Vis measurement were prepared based on previous literature [6]. The concentration of the aqueous copolymer solution ranged between 1.0 mg/mL and 10-4 mg/mL. Next, a 1.0 mL polymeric solution was added to 10 µL of DPH solution (0.4 mM in MeOH) to give a 4×10-6 M DPH/polymer solution. The resulting solution was incubated in the dark for 5 h. The UV-Vis absorption of the incubated solution was measured over the range 250-500 nm, and the absorbance at 359 nm was selected to determine the CMC.

E. Preparation and Characterization of Micelle

Micelle was prepared by dissolving 5 mg of copolymer in a 1.0 mL mixture of THF and DMSO (1:1, v/v); 1.0 mL of deionized water (18.2 MΩ·cm purity) was then added under stirring. The resulting solution was kept at room temperature for 3 h, transferred to a dialysis bag, and dialyzed against deionized water for 24 h (MWCO: 8,000 Da, Spectrum Laboratories, CA, USA). The solution was diluted to 0.5 mg/mL for particle size measurement.

Particle size and zeta potential measurements: the particle size and particle size distribution were determined by dynamic light scattering (DLS) using a Zetasizer 3000HSA instrument (Malvern, UK) at a fixed angle of 90° and a laser wavelength of 633.0 nm at 25 °C. The average diameter was estimated by the CONTIN analytical method. Additionally, the zeta potential was measured with an aqueous dip cell in automatic mode using the Zetasizer 3000HSA.

F. Measurement of Nitric Oxide Production of Micelle

RAW 264.7 macrophage cells were seeded in a 96-well plate (1×104 cells/well) and incubated at 37 °C, 5% CO2 for 1 day. Micellar solution at various concentrations was added to the cells in a final volume of 0.2 mL. The supernatants were collected after 24 h, and NO production was determined by Griess reagent (1% sulfanilamide, 2.5% H3PO4, 0.1% naphthylethylenediamine dihydrochloride). Briefly, 100 μL of culture medium was added to 100 μL of Griess reagent solution and incubated for 15 min. The absorbance was then measured at 540 nm [7]. In the control experiments, macrophages were incubated in a lipopolysaccharide (LPS) solution (100 ng/mL) and a micelle-free medium. Moreover, total protein was determined by the Micro BCA Protein Assay.
G. In vitro Hemolytic Test of Micelle

The experimental procedure described here is an adaptation of standard F-756-00 [8], which is based on colorimetric detection with Drabkin's solution. 0.7 mL of micellar solution at various concentrations was incubated with 0.1 mL of rabbit red blood cells at 37 °C for 3 h. Following incubation, the solution was centrifuged at 3800 rpm for 15 min. To determine the supernatant hemoglobin, 0.75 mL of Drabkin's solution was added to 0.25 mL of supernatant and the sample was allowed to stand for 15 min. The amount of cyanmethemoglobin in the supernatant was measured by spectrophotometer (JASCO UV-530, Tokyo, Japan) at a wavelength of 540 nm and then compared to a standard curve (hemoglobin concentrations ranging from 0.003 to 1.2 mg/mL). The percent hemolysis was obtained by comparing the hemoglobin concentration in the supernatant of micelle-treated samples with that of a blood sample not treated with micelles.

H. Cytotoxicity: MTT Assay

The cell viability was expressed as a percentage of the control. The in vitro cytotoxicity of the micelle was tested against the human breast cancer cell line MCF-7 by a cell viability assay (MTT assay). MCF-7 cells were seeded in a 96-well plate at a density of 5×103 cells/well and incubated at 37 °C under a humidified atmosphere containing 5% CO2 for 24 h before the assay. The cells were then incubated in media containing various concentrations of micelle. After 24 h, the medium was removed and the cells were washed with PBS. MTT solution was added to each well, followed by 4 h of incubation at 37 °C. Subsequently, the medium was removed and the violet crystals were solubilized with DMSO (200 µL). After shaking slowly twice for 5 s, the absorbance of each well was determined using a Multiskan Spectrum spectrophotometer (Thermo Electron Corporation, Waltham, MA, U.S.) at 570 nm and 630 nm.
The cell viability (%) was calculated as the ratio of the number of surviving cells in micelle-treated samples to that of control.
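The viability ratio described above, with a background correction using the 630 nm reading, can be sketched as follows; all absorbance values are hypothetical, for illustration only.

```python
def viability_pct(a570_treated, a630_treated, a570_control, a630_control):
    """MTT viability: background-corrected absorbance of treated wells
    relative to untreated control wells, expressed as a percentage."""
    treated = a570_treated - a630_treated
    control = a570_control - a630_control
    return 100.0 * treated / control

# Hypothetical plate readings (A570, A630) for treated and control wells
print(round(viability_pct(0.82, 0.05, 0.91, 0.05), 1))  # ~89.5 (% viability)
```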
III. RESULT AND DISCUSSIONS A. Synthesis of Star-Shaped Poly(ε-caprolactone)Poly(Ethylene Glycol) Block Copolymer: PCL-PEG-FOL The star-shaped PCL-PEG-FOL was synthesized by coupling reaction between PCL-PEG-OH and folic acid using DCC-DMAP. Before the coupling reaction, the aminated PEG was allowed to react with PCL-NPC to form star-shaped PCL-PEG-OH. Next, the free hydroxyl group of star-shaped copolymer reacted with γ-carboxylic group of folic acid because of higher reactivity of γ-carboxylic group than the α-carboxylic group. The structure of star-shaped PCL-PEG-FOL was characterized by NMR. The spectrum of star-shaped folic copolymer shows typical peaks for PCL (at 4.05, 2.29, 1.64 and 1.32 ppm) and PEG segments. Additionally, the peaks appear at 6.61, 7.21 and 7.56 ppm which belong to aromatic protons, and peaks at 8.01 and 8.32 ppm are corresponded to aliphatic amide proton and pteridine proton, respectively. The peaks at 2.5 and 3.3 ppm originated from DMSO and HOD, respectively. B. Micellization Behavior of Star-Shaped PCL-PEG-FOL in Aqueous Solution The critical micelle concentration (CMC) of star-shaped PCL-PEG-FOL was determined by fluorescence techniques using DPH as a probe. The results indicated that at a copolymer concentration below the CMC, the intensity of I359 is almost constant. The intensity ratio increases sharply when the polymer concentration increases beyond the CMC and DPH is accumulated into the hydrophobic core of the micelle. The CMC value was 62.5×10-3 mg/mL. Polymeric micelle was prepared via a dialysis method and the size distribution and zeta potential were determined by DLS in water at a concentration of 0.5 mg/mL. The monomodal peak was observed with diameter of 110.5 nm. The polydispersity index of the particle size distribution was 0.45. The zeta potential was -8.4 mV, it is more negative than micelle without folic conjugation (-2.5 mV). C.
C. Measurement of Nitric Oxide Production of Micelle

To verify the safety and efficiency of the drug delivery system, the macrophage responses and hemolytic activities of the star-shaped FOL-PEG-PCL micelle were assessed by an NO assay and a hemolytic test. The experimental results indicated that NO secretion increased as the micellar concentration increased from 0.001 to 1.0 mg/mL. The micelles did not affect NO production up to 0.25 mg/mL (Fig. 1); the NO production resembled that of the control group. The highest NO production was approximately 200% of control at a micellar concentration of 1.0 mg/mL. In contrast, LPS (100 ng/mL) significantly increased NO production by the macrophage cells (about 426% of control).

Fig. 1 Effects of star-shaped FOL-PEG-PCL micelle on the level of nitric oxide in RAW264.7 cells. Data represent the mean ± standard error of the mean of four experiments (p < 0.01, significantly different from LPS)

D. In vitro Hemolytic Test of Micelle
Fig. 2 Hemolytic activity of star-shaped FOL-PEG-PCL micelle. Data represent the mean ± standard error of the mean of three experiments (p < 0.01 compared to the saline group)

Furthermore, the biocompatibility of the star-shaped FOL-PEG-PCL micelle with red blood cells (RBCs) was examined by a hemolytic test. As shown in Fig. 2, the percentage of hemolysis increased with the increase in
the concentration of micelle. Polymeric micelles at a concentration of 2 mg/mL caused a slight increase in hemolysis compared with the negative control (saline solution) and the blank solution (PBS buffer). The star-shaped FOL-PEG-PCL micelle exhibited 2-fold higher hemolytic activity than saline solution at the highest concentration of 2.0 mg/mL.
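The hemolysis percentages above and the viability percentages in the MTT section are both plate-reader normalizations; the hemolysis form follows the usual negative/positive-control convention of ASTM-style practice (cited in the references). A sketch with illustrative absorbance readings, not the paper's measured values:

```python
def hemolysis_percent(abs_sample, abs_neg, abs_pos):
    """Hemolysis relative to saline (negative) and fully lysed (positive) controls."""
    return (abs_sample - abs_neg) / (abs_pos - abs_neg) * 100.0

def viability_percent(od_treated, od_control):
    """MTT viability: formazan absorbance relative to untreated control."""
    return od_treated / od_control * 100.0

# Illustrative readings only (not data from this study):
print(hemolysis_percent(0.062, 0.040, 1.140))  # sample vs saline / lysed RBC controls
print(viability_percent(0.736, 0.800))         # e.g. ~92% viability
```

The same viability formula reproduces the "approximately 92% at 1.0 mg/mL" figure quoted for the MCF-7 assay when the corresponding optical densities are used.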
E. Cytotoxicity: MTT Assay

Fig. 3 The cytotoxicity of star-shaped FOL-PEG-PCL micelle against MCF-7 cells. Data represent the mean ± standard error of the mean of four measurements

The cytotoxicity of the star-shaped FOL-PEG-PCL micelle to MCF-7 cells was evaluated using the MTT assay. Fig. 3 shows the cell viability after 24 hours of incubation with the polymeric micelle at various concentrations (0.001, 0.01, 0.1, 0.5 and 1.0 mg/mL). The results showed that the viability of MCF-7 cells decreased as the micelle concentration increased; however, the lowest cell viability, approximately 92%, was observed at a concentration of 1.0 mg/mL. The viability assay indicates that the star-shaped FOL-PEG-PCL micelle has generally low cytotoxicity to MCF-7 cells at concentrations up to 1.0 mg/mL. Together with the macrophage response and hemolysis data, this suggests that the star-shaped FOL-PEG-PCL micelle prepared in this study had moderate to low in vitro toxicity and could be safe for intravenous injection.

IV. CONCLUSIONS

This study synthesized a star-shaped FOL-PEG-PCL copolymer using an ethoxylated pentaerythritol initiator as a potential doxorubicin delivery system. The copolymer structure was confirmed by 1H NMR. The copolymer formed nanosized micellar structures in aqueous solution with a diameter of 110.5 nm. In vitro NO release from macrophages and hemolytic tests confirmed that the micelle induced only minor NO production and hemolysis. The in vitro cytotoxicity study demonstrated that the micelle was safe, with low cytotoxicity.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council, Republic of China, for financial support under grant number 99-2221-E-033-010.

REFERENCES
1. Cuong NV, Hsieh MF (2009) Recent advances in pharmacokinetics of polymeric excipients used in nanosized anti-cancer drugs. Curr Drug Metab 10:842–850
2. Allen TM (2002) Ligand-targeted therapeutics in anticancer therapy. Nat Rev Cancer 2(10):750–763
3. Cabral H, Kataoka K (2010) Multifunctional nanoassemblies of block copolymers for future cancer therapy. Sci Technol Adv Mater 11(1):014109
4. Yoo HS, Park TG (2004) Folate-receptor-targeted delivery of doxorubicin nano-aggregates stabilized by doxorubicin-PEG-folate conjugate. J Control Release 100(2):247–256
5. Kim D, Gao ZG, Lee ES, Bae YH (2009) In vivo evaluation of doxorubicin-loaded polymeric micelles targeting folate receptors and early endosomal pH in drug-resistant ovarian cancer. Mol Pharm 6(5):1353–1362
6. Cuong NV, Hsieh MF, Chen YT, Liau I (2010) Synthesis and characterization of PEG-PCL-PEG triblock copolymers as carriers of doxorubicin for the treatment of breast cancer. J Appl Polym Sci 117:3694–3703
7. Cuong NV, Jiang JL, Li YL, Chen JR, Jwo SC, Hsieh MF (2011) Doxorubicin-loaded PEG-PCL-PEG micelle using xenograft model of nude mice: effect of multiple administration of micelle on the suppression of human breast cancer. Cancers 3:61–78
8. Standard practice for assessment of hemolytic properties of materials. ASTM International, West Conshohocken, PA
Corresponding author:

Author: M.F. Hsieh
Institute: Chung Yuan Christian University
Street: Chung Pei Road
City: Chung Li 32023
Country: Taiwan, ROC
Email: [email protected]
Mercury (II) Removal Using CNTs Grown on GACs

K.A. Nassereldeen1,2,*, S.E. Mirghami1, and N.W. Salleh1

1
Bio-Environmental Research Unit, Department of Biotechnology Engineering; International Islamic University Malaysia, Jalan Gombak 50728, Malaysia 2 Nanoscience and Nanotechnology Group, Department of Biotechnology Engineering; International Islamic University Malaysia
Abstract— Elemental (metallic) mercury primarily causes health effects when it is breathed as a vapor, where it can be absorbed through the lungs; at higher exposures there may be kidney effects, respiratory failure and death. This study investigated the performance of carbon nanotubes (CNT) grown on granular activated carbon (GAC) as an adsorbent for the removal of mercury from aqueous solution. Because of their highly toxic effects on humans and the environment, heavy metal concentrations in water are restricted by strict standards and must be reduced to the permitted levels. The effects of pH, agitation speed, contact time and CNT dosage on mercury adsorption in aqueous solution were studied. Design Expert software was used to determine the number of runs and their variations, giving 18 runs. It was found that the optimal conditions for mercury (II) ion adsorption occurred at an adsorbent dosage of 5 mg, pH 5, a contact time of 120 minutes and an agitation speed of 150 rpm. The model gave R2 = 0.8517, indicating that the factors (pH, contact time, agitation speed and adsorbent dosage) explain 85.17% of the variation in the response. Keywords— Mercury, CNT-GAC, ANOVA, adsorption.
I. INTRODUCTION

Heavy metals such as mercury, lead, nickel, copper and cadmium have been proven to cause serious health effects on humans [1]. Mercury has been widely used in many fields, such as medicine, scientific research and amalgam material for dental restoration. It is used in lighting: electricity passed through mercury vapor in a phosphor tube produces short-wave ultraviolet light, which causes the phosphor to fluoresce, producing visible light. According to Torres [2], mercury is a well-known heavy metal pollutant of the aquatic environment, which is transformed into other, more toxic species such as methylmercury [3]. Many technologies have been developed to avoid the release of mercury into the environment; however, this element and its toxic species still cause many ecological problems owing to poor waste management by the mining, electronic, chlor-alkali and other industries [4].
The first report pertaining to the toxicity of this metal and its compounds is probably in the works of Pliny the Elder (23-79 A.D.). During the Roman Empire, slavery at the cinnabar mines was used as a terrible punishment for "disobedient" citizens, amounting to a slow, painful death [5]. Although mercury is useful to humans, it has its disadvantages, since humans have suffered health problems from its use. Excessive ingestion of heavy metals can cause accumulative poisoning, cancer, nervous system damage, etc. [6]. There is therefore a need to focus on the removal of mercury from water because of its toxicity to human health. Adsorption is reported to be the most common method for removing mercury from wastewater because of its simplicity and cost-effectiveness [4]. Various adsorbents are normally used for this process, such as iron oxides, activated carbon and filamentous fungal biomass. In recent years there has been growing interest in the use of biomaterials for the sorption and preconcentration of heavy metals from water; yeast biomass, for example, was tested for the speciation of methylmercury and Hg(II) [7]. Carbon nanotubes grown on granular activated carbon (GAC) were applied here since they are reported to be a potential adsorbent for heavy metal removal in water treatment.
II. EXPERIMENTAL WORK

Fig. 1 outlines the method used to remove mercury from water. CNT-GACs were obtained from the Department of Biotechnology Engineering, International Islamic University Malaysia; the material had been manufactured by a previous postgraduate student. The CNT-GACs were kept in a Bijou bottle at room temperature for preservation. A mercury stock solution was prepared with a concentration of 1 mg/L. The glassware used for the experiments was rinsed with 2% nitric acid to remove any impurities that might be present and to prevent further adsorption of mercury onto the glassware walls.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 88–91, 2011. www.springerlink.com
Fig. 1 Overall steps of experiment

A. Experimental Design

The experimental design was performed to determine the optimum conditions for mercury removal from aqueous solution. This was done using Design Expert 6.0.8 with the following parameters: CNT dosage of 5-10 mg, agitation speed of 50-150 rpm, pH of 5-8, and contact time of 20-120 minutes. With a two-level factorial design and two replicates, the experiments were conducted in 18 runs. The values for pH, agitation speed, contact time and adsorbent dosage were set to compare the high and low levels. The initial mercury concentration, 1.6 mg/L, was chosen based on the factory effluents from several chlor-alkali plants in Europe.

III. RESULTS AND ANALYSIS

CNT and silicon structures are electrically conductive, so they can be imaged using a conventional scanning electron microscope (SEM). In Fig. 2, it can be observed that the CNT-GAC were scattered and not well aligned; the layers of the wall cannot be seen clearly. After the adsorption experiment, some black spots were observed on the surface of the CNT-GAC, showing that adsorption had occurred on the CNT-GAC surface.

Fig. 2 SEM images of CNT grown on GAC before adsorption experiment with varied magnifications; (a) x35 (b) x100

Table 1 Percentage removal of Mercury (II) ions under different experimental conditions

Run  pH    Speed (rpm)  Contact time (min)  Dosage (mg)  Hg(II) conc. (mg/L)  % Removal
1    8.00  150.00       20.00               5.00         0.654                59.125
2    5.00  50.00        120.00              10.00        0.313                80.438
3    8.00  50.00        120.00              5.00         0.613                61.688
4    8.00  150.00       120.00              10.00        0.568                64.500
5    6.50  100.00       70.00               7.50         0.139                91.313
6    8.00  150.00       20.00               5.00         0.731                54.313
7    6.50  100.00       70.00               7.50         0.196                87.750
8    8.00  150.00       120.00              10.00        0.432                73.000
9    5.00  50.00        20.00               5.00         0.925                42.188
10   8.00  50.00        20.00               10.00        0.826                48.375
11   5.00  150.00       20.00               10.00        0.437                72.688
12   5.00  150.00       120.00              5.00         0.031                98.063
13   8.00  50.00        120.00              5.00         0.042                97.375
14   5.00  50.00        120.00              10.00        0.375                76.563
15   5.00  50.00        20.00               5.00         0.838                47.625
16   5.00  150.00       120.00              5.00         0.039                97.563
17   8.00  50.00        20.00               10.00        0.735                54.063
18   5.00  150.00       20.00               10.00        0.417                73.938

Run 12 gave the best mercury removal, obtained at pH 5, 150 rpm, a contact time of 120 minutes and an adsorbent dosage of 5 mg.
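The percentage-removal column in Table 1 follows directly from the initial concentration of 1.6 mg/L, and the factor settings are the corner and center points of a two-level layout over the stated ranges. A sketch (the exact replication pattern of the 18 runs is Design Expert's choice; only the removal formula and the level ranges come from the text):

```python
from itertools import product

C0 = 1.6  # initial Hg(II) concentration, mg/L

def percent_removal(c_final):
    """Percentage removal from the residual concentration."""
    return (C0 - c_final) / C0 * 100.0

# Reproduce two entries of Table 1 from the residual concentrations.
print(percent_removal(0.654))  # run 1: ~59.125
print(percent_removal(0.031))  # run 12: ~98.06

# Candidate settings for a two-level factorial over the stated ranges,
# plus the midpoint used for the center runs (runs 5 and 7).
levels = {"pH": (5.0, 8.0), "speed": (50.0, 150.0),
          "time": (20.0, 120.0), "dosage": (5.0, 10.0)}
corners = list(product(*levels.values()))                     # 16 corner runs
center = tuple((lo + hi) / 2 for lo, hi in levels.values())   # (6.5, 100.0, 70.0, 7.5)
print(len(corners), center)
```

The computed center point matches the settings of runs 5 and 7 in Table 1.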
A. Modeling by Statistical Analysis

Analysis of the results was done using Design Expert 6.0.8, which also computed the regression and correlation coefficients and the standard deviations. The regression model relating the residual Hg(II) concentration to the factors is as follows.

Final equation in terms of coded factors:

Hg(II) concentration = +0.50 + 0.077*A - 0.085*B - 0.20*C + 0.014*D + 0.11*A*B + 0.035*A*C + 0.051*A*D

Final equation in terms of actual factors:

Hg(II) concentration = +2.36249 - 0.22488*pH - 0.010884*speed - 7.01417E-003*contact time - 0.082217*adsorbent dosage + 1.41333E-003*pH*speed + 4.73333E-004*pH*contact time + 0.013533*pH*adsorbent dosage

where A = pH, B = speed, C = contact time and D = adsorbent dosage.
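The coded-factor model can be evaluated directly: each factor is rescaled to -1 at its low level and +1 at its high level, after which the fitted coefficients predict the residual Hg(II) concentration. A quick check against run 12 (pH 5, 150 rpm, 120 min, 5 mg, observed 0.031 mg/L):

```python
def coded(value, low, high):
    """Scale a factor value onto the coded range [-1, +1]."""
    return (2 * value - (high + low)) / (high - low)

def hg_conc_coded(pH, speed, time, dosage):
    """Coded-factor regression model quoted in the text."""
    A = coded(pH, 5.0, 8.0)
    B = coded(speed, 50.0, 150.0)
    C = coded(time, 20.0, 120.0)
    D = coded(dosage, 5.0, 10.0)
    return (0.50 + 0.077 * A - 0.085 * B - 0.20 * C + 0.014 * D
            + 0.11 * A * B + 0.035 * A * C + 0.051 * A * D)

# Run 12 settings; the model predicts ~0.030 mg/L vs the observed 0.031 mg/L.
pred = hg_conc_coded(5.0, 150.0, 120.0, 5.0)
print(f"predicted Hg(II) = {pred:.3f} mg/L")
```

The actual-factor equation gives a similar value (~0.035 mg/L at the same settings); the small discrepancy reflects rounding of the printed coefficients.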
The quality of the fit is indicated by the correlation coefficient R2 and the adjusted R2: R2 is 0.8517, while the adjusted R2 is 0.7364. Table 2 shows the ANOVA results.

B. Analysis of Variance (ANOVA)

Analysis of variance was used to investigate the relationship between the response variable and the independent variables. The model gave R2 = 0.8517, indicating that the model terms (pH, contact time, agitation speed and adsorbent dosage) explain 85.17% of the variation in the response; the remaining 14.83% of the variation is not explained by the model.

Table 2 P- and F-values for ANOVA of Hg(II) removal

Source      Sum of Squares  DF  Mean Square  F value  Prob > F
Model       1.07            7   0.15         7.39     0.0039   significant
A           0.094           1   0.094        4.52     0.0624
B           0.12            1   0.12         5.55     0.0429
C           0.62            1   0.62         29.86    0.0004
D           3.306E-003      1   3.306E-003   0.16     0.6992
AB          0.18            1   0.18         8.66     0.0164
AC          0.020           1   0.020        0.97     0.3502
AD          0.041           1   0.041        1.98     0.1926
Curvature   0.19            1   0.19         9.38     0.0135   significant
Pure Error  0.19            9   0.021
Cor Total   1.46            17

From the analysis, the Model F-value of 7.39 implies the model is significant: there is only a 0.39% chance that a "Model F-value" this large could occur due to noise. Values of "Prob > F" less than 0.0500 indicate that model terms are significant; in this case B, C and AB are significant model terms. Values greater than 0.1000 indicate that model terms are not significant. If there are many insignificant model terms (not counting those required to support hierarchy), model reduction may improve the model. The parameters involved in this study (pH, contact time, agitation speed and adsorbent dosage) can also be analyzed graphically. One-factor plots and 3-dimensional interaction plots were used to show the interactions between the parameters: a one-factor plot shows the linear effect of changing the level of a single factor between the low (-1) and high (+1) levels while the other factors are fixed at certain values, and the 3-dimensional plots show the interactions between the actual factors.

C. Comparative Analysis of Various Adsorbents

Table 3 Comparison of various adsorbents and their adsorption capacities

Adsorbent                                          Adsorption capacity                                                    Reference
Cellulose of Acetobacter xylinum                   1.65 ug/g (chlor-alkali wastewater); 2.80 ug/g (synthetic wastewater)  A. Rezaee et al., 2005
Cellulose carrier modified with polyethyleneimine  288.0 mg/g                                                             Navarro et al., 1996
Camel backbone                                     28.24 mg/g                                                             Hassan SS et al., 2008
Activated carbon (Indian almond)                   94.43 mg/g                                                             Inbaraj & Sulochana, 2006
Sago waste carbon                                  55.6 mg/g                                                              Kadirvelu, 2004
CNT-GAC                                            6.405 mg/g                                                             This study

Based on Table 3, there have been many studies on the removal of Hg(II) using various types of adsorbent. The capacity of each adsorbent differs because of variations in the operating parameters (pH, agitation speed, dosage, temperature and so on). This comparative study was conducted to further understand the adsorption mechanism and to compare the adsorbents previously used to remove Hg(II).

IV. CONCLUSIONS
The effects of heavy metals such as lead, mercury, copper, zinc and cadmium on human health have been studied extensively; excessive ingestion of them can cause accumulative poisoning, cancer, nervous system damage, etc. Since human beings are exposed to hazardous metals such as mercury, how to overcome their effects deserves close attention. Carbon nanotubes and activated carbon are found to be efficient adsorbents for removing heavy metals from wastewater. This study concentrated on the removal of heavy metals from wastewater using carbon nanotubes grown on granular activated carbon (GAC), with mercury chosen as the heavy metal. The efficiency of adsorption was determined in terms of percentage removal. The four parameters chosen for optimization of the process were pH, agitation speed, contact time and the dosage of the adsorbent, CNT grown on GAC. The results showed that the most significant factor contributing to the adsorption process was the contact time. This model term gives the highest F-value in
the ANOVA analysis, 29.86. The significance of the four parameters is also shown by the ANOVA analysis. From the results, the optimal conditions for mercury (II) ion removal occur at an adsorbent dosage of 5 mg, pH 5, an agitation speed of 150 rpm and a contact time of 120 minutes.
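The capacities compared in Table 3 normalize removal by adsorbent mass, q = (C0 - Ce)*V/m. A sketch using the run-12 concentrations; the solution volume below is a hypothetical placeholder, since the working volume is not stated in this section:

```python
def capacity_mg_per_g(c0, ce, volume_l, mass_g):
    """Adsorption capacity q = (C0 - Ce) * V / m, in mg/g."""
    return (c0 - ce) * volume_l / mass_g

# Run 12: C0 = 1.6 mg/L, Ce = 0.031 mg/L, adsorbent mass 5 mg.
# volume_l = 0.1 L is assumed for illustration only.
q = capacity_mg_per_g(1.6, 0.031, volume_l=0.1, mass_g=0.005)
print(f"q = {q:.2f} mg/g")
```

This makes explicit why capacities from different studies (different volumes, masses and initial concentrations) are not directly comparable from percentage removal alone.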
ACKNOWLEDGMENT

The authors acknowledge the financial support of research grant EDW A10-160-0713 from IIUM.
REFERENCES

1. Kabbashi, N.A., Atieh, M.A., Al-Mamun, A., Mirghami, M.E.S., Alam, Md.Z., Yahya, N. (2008) Kinetic adsorption of application of carbon nanotubes for Pb(II) removal from aqueous solution. Journal of Environmental Sciences 21:539–544
2. Torres, J., Olivares, S., De La Rosa, D., Lima, L., Martinez, F., Munita, C.S., Favaro, D.I.T. (1998) Removal of mercury(II) and methylmercury from solution by tannin adsorbents. Journal of Radioanalytical and Nuclear Chemistry 240:361–365
3. Canstein, H.V., Li, Y., Timmis, K.N., Deckwer, W.-D., Wagner, D.B. (1999) Removal of mercury from chloralkali electrolysis wastewater by a mercury-resistant Pseudomonas putida strain. Applied and Environmental Microbiology 65(12):5279–5284
4. Goyal, M., Bhagat, M., Dhawan, R. (2009) Removal of mercury from water by fixed bed activated carbon columns. Journal of Hazardous Materials 171:1009–1015
5. Pavlogeorgatos, G., Kikilias, V. (2003) The importance of mercury determination and speciation to the health of the general population. Global Nest: The Int. Journal 4(2-3):107–125
6. Zabihi, M., Ahmadpour, A., Haghighi, A.A. (2010) Studies on adsorption of mercury from aqueous solution on activated carbons prepared from walnut shell. Journal of Hazardous Materials 174:251–256
7. Sakamoto, H., Ichikawa, T., Tomiyasu, T., Sato, M. (2004) Mercury concentration in environmental samples of Malaysia. Faculty of Science, Kagoshima University 37:83–90
Author: Nassereldeen Ahmed Kabbashi
Institute: International Islamic University Malaysia
Street: Jalan Gombak
City: Gombak
Country: Malaysia
Email: [email protected]
Optical Properties Effect of Cadmium Sulfide Quantum Dots towards Conjugation Process

S.A. Shamsudin1, N.F. Omar2, and S. Radiman1

1 School of Applied Physics, Universiti Kebangsaan Malaysia, Bangi, Malaysia
2 School of Materials and Mineral Resources Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal, Malaysia
Abstract— Sphere-shaped cadmium sulfide quantum dots (CdS QDs) with well-controlled morphology and uniform size were successfully synthesized by a simple colloidal method with the addition of thioglycolic acid as stabilizer. The size and optical properties of the CdS QDs could be tuned by modifying the CdS QD surface with the capping ligand polyethylenimine (PEI), while their isoelectric charge in water was adjusted by changing the solution pH with added sodium hydroxide (NaOH). The morphology, optical properties and surface charge of the CdS QDs were characterized by transmission electron microscopy (TEM), absorption spectra analysis, photoluminescence spectroscopy and zeta potential measurements, respectively. Furthermore, CdS-lysozyme conjugates were obtained by electrostatic interaction between the CdS QDs and lysozyme, a protein with an isoelectric point of +11.2 mV. The surface charge after the conjugation process was investigated. The optical absorption edge and the luminescence intensity were not affected by the conjugation process. Keywords— CdS QDs, Polyethylenimine (PEI), Optical Properties, CdS-lysozyme conjugates, Electrostatic interaction.
I. INTRODUCTION

Semiconductor quantum dots (QDs) are attracting great interest in applications such as display devices and biochemical fluorescent tags owing to their photostable, size-tunable, narrow-bandwidth photoluminescence and chemically functionalizable surfaces [1]. QDs can also emit intense light in the region from near-infrared to ultraviolet through exciton recombination [2,3]. The optoelectronic properties [4,5] of II-VI semiconductors like CdS are advantageous because of the direct band gap (Eg = 2.41 eV), high absorption coefficient, good conversion efficiency, high thermal stability and ease of synthesis [6]. The relevance of the strong-confinement regime becomes clear when one considers the Bohr radii of excitons in semiconductors, which are typically ~10 nm or less. If ae and ah are the Bohr radii of the electron and hole, strong confinement occurs when the particle radius is smaller than both [7]. This phenomenon, called the quantum confinement effect, modifies the electronic structure of QDs. The small electron and hole masses imply large confinement energies and widen the band gap, leading to a blue shift in the band gap, emission spectra, etc. Thus, the electronic spectra of CdS QDs exhibit energy spacings that can be much larger than the energy gaps of bulk CdS [8,9]. Surface states play a more important role in nanoparticles because of their large surface-to-volume ratio, which increases as the particle size decreases. For QDs, radiative or nonradiative recombination of excitons at surface states becomes dominant in the optical properties as the particle size decreases [10-12]. The luminescence efficiency of QDs can be substantially improved by surface passivation [13]. Surface passivation reduces nonradiative surface recombination of charge carriers at surface traps, which behave as nonradiative relaxation centers for electron-hole recombination; applying proper surface-passivation ligands to eliminate the surface traps that arise from dangling bonds leads to luminescence enhancement [14-21]. CdS QD colloidal solutions must be prepared not only to be water-soluble but also to be biocompatible and able to react with biomacromolecules such as proteins. Wang et al. used mercaptoacetic acid to react with CdS nanoparticles: the mercapto group binds to a Cd atom, and the polar carboxylic acid group renders the nanoparticles water-soluble and biocompatible. The free carboxyl group is also available for covalent coupling to various biomolecules through reactive amine groups [22]. Thioglycolic acid (TGA) also contains a carboxylic group and was previously used as a stabilizing agent to prevent chalcogenide nanocrystals from aggregating [23]. The optical properties of the obtained CdS QDs capped with PEI and the conjugation process between CdS and lysozyme are discussed in this work.
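The strong-confinement criterion above can be made concrete by comparing the nanocrystal radius with the bulk exciton Bohr radius. A sketch, taking ~3 nm as a commonly quoted exciton Bohr radius for CdS (an assumed literature value, not one given in this paper) and the ~5.4 nm mean diameter from the TEM results:

```python
# Rough confinement-regime check for CdS QDs.
a_B = 3.0        # exciton Bohr radius of bulk CdS, nm (assumed literature value)
diameter = 5.4   # mean QD diameter from the TEM results in this paper, nm
radius = diameter / 2.0

if radius < a_B:
    regime = "strong/intermediate confinement (R < a_B)"
else:
    regime = "weak confinement (R >= a_B)"
print(f"R = {radius:.1f} nm, a_B = {a_B:.1f} nm -> {regime}")
```

With R ~ 2.7 nm just below the assumed a_B, the dots sit near the strong-confinement boundary, consistent with the blue-shifted absorption reported later.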
II. EXPERIMENTAL

A. Apparatus

The transmission electron microscopy (TEM) images of the QDs were obtained using a Transmission Electron Microscope CM12 (Philips) operating at a 100 kV accelerating voltage. After sonication, the aqueous colloidal solutions of CdS QDs were dropped onto 50 Å thick carbon-coated
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 92–96, 2011. www.springerlink.com
copper grids, with the excess solution immediately wicked away. Fluorescence spectra were recorded using an F-7000 spectrofluorometer (Hitachi, Japan) with a quartz cell (1 cm × 1 cm) at λex = 422 nm. The optical absorption spectra were measured with a UV-2450 UV-Vis spectrophotometer (Shimadzu, Japan). A ZetaSizer Nano ZS90 (Malvern Instruments, Worcs, UK) equipped with a 4 mW He-Ne laser was used to determine the surface charge.
B. Material and Reagent

Cadmium acetate was purchased from Fluka, Switzerland. PEI with a molecular weight of 10,000, thioglycolic acid (TGA), hen egg-white lysozyme and sodium sulphide were sourced from Sigma-Aldrich, Germany. Sodium hydroxide was obtained from Needham, Suffolk, England. TGA-modified CdS was synthesized as described previously [24]. All reagents were of analytical grade and were used as received without further purification. Distilled water was used throughout the whole experiment.

C. Procedure

In the present study, the effect of the passivation agent employed on the CdS QD surface was investigated. The basic CdS colloids were prepared following the method described by Shamsudin et al. [24]. CdS quantum dots were synthesized by the colloidal method using TGA as the stabilizing agent. Briefly, 0.3 mmol cadmium acetate was dissolved in 65 mL of water and 20 µL of thioglycolic acid (TGA) was added under stirring. The pH value was adjusted to pH 9 with 0.1 M NaOH. 0.02 mmol sodium sulfide was dissolved in 35 mL of water. The oxygen in the system was removed by flowing nitrogen. Under stirring, the Na2S solution was added, and the colloidal CdS was kept stirring overnight at room temperature. Proper amounts of PEI were added into the preformed CdS solution with a microsyringe. The capped CdS QDs-PEI were used to conjugate with lysozyme (lys) via electrostatic interaction: 1.0 mL of NaOH-KH2PO4 buffer (pH 7.28), 1.5 mL of colloid (2.0 × 10-6 mol L-1) and an appropriate volume of sample or protein working solution were added, then diluted with water and mixed thoroughly.

III. RESULTS AND DISCUSSION

A. TEM Images of Quantum Dots

TEM images of CdS QDs and CdS QDs capped with PEI are shown in Fig. 1. The mean diameter of the CdS QDs is about 5.4 nm; the mean diameter of the CdS capped with PEI is smaller by 0.65 nm. In addition, TEM showed that the solubilization and cross-linking steps did not result in aggregation.

Fig. 1 TEM images of (a) CdS QDs and (b) CdS QDs capped with PEI

B. Absorbance Spectrum

The observations were explained by comparing the absorbance spectra of CdS QD solutions in the absence and presence of PEI. As seen from Fig. 2, CdS QDs in the presence of PEI exhibit an absorption edge that is blue-shifted with decreasing particle size: the absorbance peak for CdS in the presence of PEI is at 428 nm, compared with 439 nm in the absence of PEI, reflecting the quantum confinement effect of the CdS QDs [25]. The spectrum is mainly characterized by the band gap between the valence band and the conduction band, which is a function of the QD diameter; this is called the size quantization effect [26].

Fig. 2 The absorbance spectra of CdS QDs solutions in the absence (a) and in the presence (b) of PEI

C. Band Gap Energy

The effect of quantum confinement in semiconductors is supported by rigorous theoretical calculations by Brus [27]. The linear and resonant nonlinear optical properties will
exhibit the greatest enhancement when the QD radius R is much smaller than the Bohr radius of the exciton (aB) in the parent bulk material [7]. The fundamental absorption, which corresponds to electron excitation from the valence band to the conduction band, can be used to determine the value of the optical band gap [25]. The relation between the band gap (Eg) and the absorbance peak wavelength (λmax) can be written as:

Eg = hc / λmax                                  (1)

where Eg is the band gap of the material, h is Planck's constant, c is the speed of light and λmax is the absorbance peak wavelength [27]. Based on this equation, the band gap is inversely proportional to the wavelength λmax. Equation (2) is used in the analysis of the optical absorbance spectrum to determine the band gap energy for difficult spectra with unclear peaks:

(αhv)^2 = K(hv - Eg)                            (2)

where α is the absorption coefficient, hv is the photon energy and Eg is the band gap. The absorption coefficient can be calculated using equation (3):

α = -log(It/Io) / (t log e) = A / (t log e)     (3)

where t is the quartz cuvette cell thickness, It and Io are the transmitted and incident light intensities, respectively, and A is the sample absorbance from the UV-Vis measurement. The band gap energy obtained from the plot of (αhv)^2 versus photon energy shown in Fig. 3 is 2.99 eV, calculated using equations (2) and (3).
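Equation (1) can be checked directly against the absorbance peaks quoted in the previous section (428 nm with PEI, 439 nm without), with hc ≈ 1239.84 eV·nm; note that the Tauc-style extrapolation of equation (2) quoted above (2.99 eV) need not coincide with this peak-based estimate, since it extrapolates the absorption edge rather than using the peak position:

```python
HC_EV_NM = 1239.84  # h*c in eV*nm

def band_gap_ev(lambda_max_nm):
    """Eg = hc / lambda_max, equation (1)."""
    return HC_EV_NM / lambda_max_nm

def absorption_coefficient(absorbance, thickness):
    """alpha = A / (t * log10(e)) = 2.303*A/t, equivalent to equation (3)."""
    return 2.303 * absorbance / thickness

print(f"Eg(428 nm) = {band_gap_ev(428):.2f} eV")  # with PEI
print(f"Eg(439 nm) = {band_gap_ev(439):.2f} eV")  # without PEI
```

The blue shift from 439 nm to 428 nm corresponds to a band-gap widening of roughly 0.07 eV, consistent with the confinement argument.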
Fig. 3 The band gap energy of CdS QDs solutions in the absence (a) and in the presence (b) of PEI

D. Photoluminescence Spectra

As shown in Fig. 4, the change in the photoluminescence spectra can be used to study the interaction between CdS QDs and PEI molecules in aqueous solution. The photoluminescence intensity is drastically enhanced in the spectra of CdS QDs modified with PEI, which contains amine groups. Upon excitation at λex = 400 nm, with both the emission and excitation slits open at 5 nm, both CdS QDs and CdS QDs modified with PEI exhibited photoluminescence with a maximum emission at 540 nm; the photoluminescence peaks lie in the green-yellow range of the 400-660 nm window. This emission was assigned to electron-hole recombination in the CdS QDs and is further indicative of the quantum size effect. Jie et al. [28] capped CdS QDs with PEI of three different molecular weights and showed that the shape of the photoluminescence spectrum changes very little as a function of the ligand, indicating little or no excitation transfer between the QD and the PEI ligand for these samples. They also noted that PEI is a kind of Lewis base, and Lewis bases can cause photoluminescence enhancement. Based on that fact, we used only one kind of PEI, with a molecular weight of 10,000. For the latter function, the two major factors of capping density and electron-donating ability are important for passivating the surface defects: if the surface defects cannot be sufficiently passivated, the photoluminescence intensity remains small. The photoluminescence intensity increased with the increasing electron-donating ability of the amine group(s), showing that electron transfer from the capping ligand to the surface atoms of the CdS QDs effectively passivated the surface defects. Therefore, the chemical role of the amines affects the luminescence properties [29]. Taking into consideration amines as coordinating agents in the removal of cadmium metal [13], the alkylamines in solution should behave not only as capping ligands but also as complexing ligands, forming Cd-amine complexes in the source solution. Polymer-capped nanocrystals can preserve their original optical properties and gain some attractive features without aggregation because the amine acts as an anchor to the surface of the particle [30]. PEI is cationic in nature; it carries a positive charge and so can be adsorbed onto the negatively charged CdS QD surfaces by electrostatic attraction [31]. Therefore, PEI can be used not only as a passivation ligand but also as a dispersion agent for CdS QDs in aqueous media.

Fig. 4 The photoluminescence spectra of CdS QDs solutions in the absence (a) and in the presence (b) of PEI
E. Conjugation Process by Electrostatic Interaction Optical Properties Effects: The absorption intensities have shown in Fig. 5which increased from 3.6 to 3.9. Juan et al. [32]have reported that the absorbancepeaks have a blue shift, because the aggregation of lysozyme on the surface of nanoparticles broadens the energy gap of the nanostructure, and thus cause a blue shift on the absorbance spectrum. However for our experiment,it was seen that the absorbance peaks have not givenany effect of wavelength shifted after conjugation process. Here we assume the lysozyme was well absorbed on CdS QDs surface and no aggregation has been done.Other than that, the size of CdS QDs has not been affected by this conjugation process.
Surface Charge: To investigate this electrostatic interaction further, the surface charges of the CdS QDs and the CdS-lys conjugate were examined with respect to the pH at their surfaces. The pH value greatly affects the interaction between CdS QDs and lysozyme. Zeta potential measurement gave a surface charge of -1.32 mV for the CdS QDs, while lysozyme has an isoelectric point (pI) of 11.2, as reported by Juan et al. [32]. According to zeta potential theory, the surface charge becomes negative when the solution is alkaline, and turns positive when the solution is adjusted to acidic conditions. We believe the negatively charged CdS QDs and the positively charged lysozyme were conjugated by electrostatic interaction.
The effect of the conjugation process on the photoluminescence intensity was observed via the photoluminescence spectra shown in Fig. 6. The CdS QD photoluminescence intensity increased slightly after the conjugation process, from 15225 to 15935, while the photoluminescence peak did not shift to another wavelength region upon this electrostatic interaction.

Fig. 5 Comparison of absorption peaks between CdS QDs and conjugate CdS-Lys

Fig. 6 Effect of photoluminescence intensity upon conjugation process

IV. CONCLUSIONS We fabricated well-ordered CdS QD assemblies with lysozyme. The conjugation results showed that the optical properties were not affected by the electrostatic interaction. We expect that, following more rigorous testing such as toxicity tests on biological materials, PEI-capped CdS QDs could find use in devices or components for molecular biology, biotechnology and biomedicine.

ACKNOWLEDGMENT We thank UKM for the full grants UKM-OUP-NBT-27-138/2008 and UKM-ST-01-FRGS0063-2006.
REFERENCES
1. Sibel E. D., Rıdvan S., Sibel B., Deniz H., Adil D., Arzu E. (2008) Quantum dot nanocrystals having guanosine imprinted nanoshell for DNA recognition. Talanta 75:890-896
2. Raffaelle R. P., Castro S. L., Hepp A. F., Bailey S. G. (2002) Quantum dot solar cells. Prog. Photovolt: Res. Appl. 10:433-439
3. Trindade T., O'Brien P., Pickett N.L. (2001) Nanocrystalline Semiconductors: Synthesis, Properties, and Perspectives. Chem. Mater. 13:3843-3858
4. Brus L.E. (1991) Quantum Crystallites and Nonlinear Optics. Appl. Phys. A 53:465-474
5. Ghazali A., Zainal Z., Hussein M.Z., Kassim A. (1998) Cathodic electrodeposition of SnS in the presence of EDTA in aqueous media. Solar Energy Materials and Solar Cells 55:237-249
6. Gouri S. P., Purabi G., Pratima A. (2008) Structural and stability studies of CdS and SnS nanostructures synthesized by various routes. Journal of Non-Crystalline Solids 354:2195-2199
7. Wise F. (2000) Lead Salt Quantum Dots: The Limit of Strong Quantum Confinement. Accounts of Chemical Research 33:773-780
8. Krishnan R., Norma R. de Tacconi, Chenthamarakshan C. R. (2001) Semiconductor-Based Composite Materials: Preparation, Properties, and Performance. Chem. Mater. 13:2765-2782
S.A. Shamsudin, N.F. Omar, and S. Radiman
9. Alivisatos A. P. (1996) Semiconductor Clusters, Nanocrystals, and Quantum Dots. Science 271:933-937
10. Anderson M. A., Gorer S., Penner R. M. (1997) A Hybrid Electrochemical/Chemical Synthesis of Supported, Luminescent Cadmium Sulfide Nanocrystals. J. Phys. Chem. B 101:5895-5899
11. Henshaw G., Parkin I. P., Shaw G. (1996) Convenient, low-energy synthesis of metal sulfides and selenides; PbE, Ag2E, ZnE, CdE (E = S, Se). Chem. Commun. 10:1095-1096
12. Hirai T., Bando Y., Komasawa I. (2002) Immobilization of CdS Nanoparticles Formed in Reverse Micelles onto Alumina Particles and Their Photocatalytic Properties. J. Phys. Chem. B 106(35):8967-8970
13. Bowe C. A., Pooré D. D., Benson R. F., Martin D. F. (2003) Extraction of heavy metals by amines adsorbed onto silica gel. J Environ Sci Health A Tox Hazard Subst Environ Eng 38(11):2653-2660
14. Majetich S.A., Carter A.C. (1993) Surface Effects on the Optical Properties of Cadmium Selenide Quantum Dots. J. Phys. Chem. 97(34):8727-8731
15. Selvan B.S.T., Bullen C., Ashokkumar M., Mulvaney P. (2001) Synthesis of Tunable, Highly Luminescent QD-Glasses Through Sol-Gel Processing. Adv. Mater. 13:985-988
16. Seker F., Meeker K., Kuech T.F., Ellis A.B. (2000) Surface Chemistry of Prototypical Bulk II-VI and III-V Semiconductors and Implications for Chemical Sensing. Chem. Rev. 100:2505-2536
17. Meyer G. J., Lisensky G. C., Ellis A. B. (1988) Evidence for Adduct Formation at the Semiconductor-Gas Interface. Photoluminescent Properties of Cadmium Selenide in the Presence of Amines. J. Amer. Chem. Soc. 110:4914-4918
18. Lisensky G. C., Penn R. L., Murphy C. J., Ellis A. B. (1990) Electrooptical Evidence for the Chelate Effect at Semiconductor Surfaces. Science 248:840
19. Winder E.J., Moore D.E., Neu D.R., Ellis A.B., Geisz J.F., Kuech T.F. (1995) Detection of ammonia, phosphine, and arsine gases by reversible modulation of cadmium selenide photoluminescence intensity. J. Cryst. Growth 148:63-69
20. Murphy C. J., Lisensky G. C., Leung L. K., Kowach G. R., Ellis A. B. (1990) Photoluminescence-Based Correlation of Semiconductor Electric Field Thickness with Adsorbate Hammett Substituent Constants. Adsorption of Aniline Derivatives onto Cadmium Selenide. J. Amer. Chem. Soc. 112:8344
21. Murphy C.J., Ellis A.B. (1990) The coordination of mono- and diphosphines to the surface of cadmium selenide. Polyhedron 9:1913-1918
22. Wang L., Wang L., Zhu C., Wei X.W., Kan X. (2002) Preparation and application of functionalized nanoparticles of CdS as a fluorescence probe. Analytica Chimica Acta 468:35-41
23. Zhang H., Ma X., Ji Y., Xu J., Yang D. (2003) Single crystalline CdS nanorods fabricated by a novel hydrothermal method. Chemical Physics Letters 377:654-657
24. Shamsudin S.A., Radiman S., Ghamsari M. S., Khoo K. S. (2009) Synthesis of CdS nanocrystals in various pH values. AIP Conference Proceedings 1136:292-296
25. Kuldeep S.R., Patidar D., Janu Y., Saxena N.S., Kananbala S., Sharma T.P. (2008) Structural and optical characterization of chemically synthesized ZnS nanoparticles. Chalcogenide Letters 5(6):105-110
26. Hwang S.-H., Moorefield C.N., Wang P., Jeong K.-U., Chong S.Z.D., Kotta K.K., Newkome G.R. (2006) Construction of CdS quantum dots via a regioselective dendritic functionalized cellulose template. Chemical Communications 3495-3497
27. Brus L.E. (1984) Electron-electron and electron-hole interactions in small semiconductor crystallites: The size dependence of the lowest excited electronic state. Journal of Chemical Physics 80:4403-4409
28. Jie M., Jun-Na Y., Li-Na W., Wei-Sheng L. (2008) Easily prepared high-quantum-yield CdS quantum dots in water using hyperbranched polyethylenimine as modifier. Journal of Colloid and Interface Science 319:353-356
29. Nose K., Fujita H., Omata T., Shinya O.Y.M., Nakamura H., Maeda H. (2007) Chemical role of amines in the colloidal synthesis of CdSe quantum dots and their luminescence properties. Journal of Luminescence 126:21-26
30. Chun F., Qi X.Y., Fan Q.L., Wang L.H., Huang W. (2007) A facile route to semiconductor nanocrystal-semiconducting polymer complex using amine-functionalized rod-coil triblock copolymer as multidentate ligand. Nanotechnology 18(3):035704
31. Zhang Y., Jon B. (2008) Effect of dispersants on the rheology of aqueous silicon carbide suspensions. Ceramics International 34:1381-1386
32. Juan L., Xi-Wen H., Yun-Li W., Wen-You L., Yu-Kui Z. (2007) Determination of lysozyme at the nanogram level by a resonance light-scattering technique with functionalized CdTe nanoparticles. Analytical Sciences 23:331-335
Author: Siti Aisyah Binti Shamsudin
Institute: University Kebangsaan Malaysia
Street:
City: Bangi, Selangor
Country: Malaysia
Email: [email protected]
Synthesis of Hydroxyapatite through Dry Mechanochemical Method and Its Conversion to Dense Bodies: Preliminary Result
S. Adzila1,3, I. Sopyan2, and M. Hamdi1
1 Department of Engineering Design and Manufacture, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Manufacturing and Materials Engineering, International Islamic University Malaysia, Kuala Lumpur, Malaysia
3 Department of Materials Engineering and Design, University of Tun Hussein Onn Malaysia, Johor, Malaysia
Abstract— Hydroxyapatite (HA) powder has been prepared through mechanochemical synthesis from a dry powder mixture of calcium hydroxide, Ca(OH)2, and di-ammonium hydrogen phosphate, (NH4)2HPO4. Three different rotation speeds of 170 rpm (M1), 270 rpm (M2) and 370 rpm (M3) were used in this method. FTIR and XRD analysis of the as-synthesized powders confirmed the formation of the HA structure with nanometric crystallite size at all milling speeds. The XRD results showed that the broad HA peaks narrowed and the crystallinity increased (31.0-42.5%) as the milling speed was raised to 370 rpm. The powders were compacted by cold isostatic pressing at 200 MPa and then sintered at 1150°C, 1250°C and 1350°C. The sintered compacts were mechanically tested by the Vickers microhardness indentation method. The powder synthesized at 370 rpm was found to have a significant hardness of 5.3 GPa after sintering at 1250°C. Keywords— Hydroxyapatite, Mechanochemical, Milling speed, Sintering, Vickers microhardness.
Various morphologies, stoichiometries and levels of crystallinity can be achieved depending on the technique used for the synthesis process. Several methods have been applied to synthesize nanocrystalline HA powder, including co-precipitation [9], emulsion/microemulsion [15], sol-gel [16], hydrothermal [17] and mechanochemical [18] routes. The mechanochemical method is simple and low-cost compared with other methods, and the chemical processes occurring during mechanical action on solids have become more specific and versatile. Moreover, mechanochemical treatment has recently been receiving attention as an alternative route for preparing materials characterized by better biocompatibility with natural bone [18, 19]. Several studies using a wet medium in mechanical milling have been reported [20-22], as opposed to milling in the dry condition [18, 23]. In the present study, HA powder was prepared through a mechanochemical route without any wet medium, and the effect of milling speed on the powder properties as well as on the sintered dense bodies was investigated.
I. INTRODUCTION Hydroxyapatite (HA) is used in a number of biomedical applications in the form of granules, blocks and coatings [1-4], as composites with polymers and ceramics [5-7], and for bone augmentation and middle-ear implants [4]. HA has also shown benefits in therapeutic antitumor vaccines [8] and is useful for drug delivery and antibiotics [9-10]. HA is naturally present in human bone as crystals within collagen. High strength and crack resistance, or fracture toughness, are necessary for the reliable performance of an implant in the body [11]. Many improvements have been made to overcome the limitations of HA in load-bearing applications by controlling microstructures via novel sintering techniques or the utilization of nano powders, and by adding dopants [12-13]. The development of dense HA ceramics with superior mechanical properties is possible if the starting powder is stoichiometric with good powder properties in terms of crystallinity, agglomeration and morphology. A decrease in grain size to the nano scale in dense sintered materials is desired to enhance the mechanical and biological properties of HA-based bioceramic materials [14].
II. MATERIALS AND METHODS The two precursors for HA powder synthesis were commercially available calcium hydroxide, Ca(OH)2 (R&M Chemicals), and di-ammonium hydrogen phosphate, (NH4)2HPO4 (Systerm). The reaction between the two precursors is as follows:

5Ca(OH)2 + 3(NH4)2HPO4 → Ca5(PO4)3OH + 6NH3 + 9H2O   (1)

The precursor powders, with a Ca/P molar ratio of 1.67, were loaded and mixed in stainless steel vials of a planetary ball mill, with stainless steel balls as the milling medium. The powder-to-ball mass ratio was 1/6 and the milling time was 15 hours at three different rotation (milling) speeds: 170 rpm (M1), 270 rpm (M2) and 370 rpm (M3). To determine the weight loss, the as-synthesized powders were subjected to thermal analysis at a heating rate of 10°C/min from room temperature to 1300°C in air using a Perkin Elmer Pyris Diamond TG-DTA instrument.
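The stoichiometry behind the 1.67 Ca/P molar ratio can be checked mechanically. The sketch below is a generic element-balance check written for this review (the dictionaries are hand-tallied element counts, not data from the paper); it confirms that reaction (1) is balanced and that the precursor mixture has Ca/P = 5/3 ≈ 1.67:

```python
# Element counts per formula unit, tallied by hand for this check.
ca_oh2 = {"Ca": 1, "O": 2, "H": 2}           # Ca(OH)2
dap    = {"N": 2, "H": 9, "P": 1, "O": 4}    # (NH4)2HPO4
ha     = {"Ca": 5, "P": 3, "O": 13, "H": 1}  # Ca5(PO4)3OH
nh3    = {"N": 1, "H": 3}                    # NH3
h2o    = {"H": 2, "O": 1}                    # H2O

def totals(side):
    """Sum element counts over (coefficient, formula) pairs."""
    out = {}
    for coeff, formula in side:
        for element, n in formula.items():
            out[element] = out.get(element, 0) + coeff * n
    return out

left  = totals([(5, ca_oh2), (3, dap)])
right = totals([(1, ha), (6, nh3), (9, h2o)])
assert left == right                     # reaction (1) is balanced
print(round(left["Ca"] / left["P"], 2))  # Ca/P molar ratio -> 1.67
```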
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 97–101, 2011. www.springerlink.com
Phases in the as-synthesized powders were identified using an X-ray diffractometer (CuKα, Shimadzu XRD 6000). All measurements were performed at room temperature over the range 2θ = 25-55° at a scan speed of 2°/min. All samples were analyzed by reference to Joint Committee on Powder Diffraction Standards (JCPDS) card number 09-0432 [24]. The functional groups of both the as-synthesized powders and the powders after sintering were analyzed using a Perkin-Elmer Spectrum FTIR spectrometer over the 4000-400 cm-1 scanning range with a resolution of 4 cm-1. The as-synthesized powders were uniaxially pressed into pellets in a steel die of 10.5 mm diameter at 2.5 MPa for two minutes, followed by cold isostatic pressing (CIP) at 200 MPa for five minutes. The green bodies were sintered at three different temperatures, 1150°C, 1250°C and 1350°C, with heating and cooling rates of 5°C/min and a holding time of two hours. The sintered compacts were then subjected to Vickers microhardness testing by the indentation technique (MVK H2) after polishing with 1000-grit silicon carbide.
III. RESULTS A. X-Ray Diffraction (XRD) Analysis

Fig. 1 XRD patterns of as-synthesized HA powder with various milling speeds

Figure 1 shows the XRD patterns of the as-synthesized powder at the three milling speeds. Broad peaks characteristic of HA already appeared at the lowest milling speed of 170 rpm; however, the powder still contained unreacted precursor belonging to di-ammonium hydrogen phosphate (DAP). As the milling speed increased to 270 rpm the HA peaks grew intensely and became better defined, and the DAP peaks were no longer detected. At 370 rpm, narrower peaks were observed, with new HA peaks detected at (112) and (212) compared with the lower milling speeds. The crystallite size of the powders was calculated using the Scherrer formula [25]

D = kλ / (cosθ √(ω² - ω₀²))   (2)

where D is the crystallite size (nm); k is the shape coefficient, 0.9; λ is the wavelength (nm); θ is the diffraction angle (°); ω is the experimental full width at half maximum (FWHM); ω₀ is the standard FWHM value. For this purpose, the FWHM at (002) (2θ = 25.8°) was chosen to calculate the crystallite size. Table 1 shows the crystallite sizes of the samples. The crystallite size was not proportional to the milling speed, i.e. it was not affected by the various speeds applied. In contrast, the crystallinity of the powders increased as the speed was raised to 370 rpm.

Table 1 Crystallite size and crystallinity of HA powder

Sample   Milling speed (rpm)   Crystallite size (nm)   Crystallinity (%)
M1       170                   4.71                    31.0
M2       270                   3.08                    32.0
M3       370                   4.87                    42.5
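The Scherrer calculation described above can be sketched as follows. The k = 0.9 shape coefficient, the Cu Kα wavelength and the (002) position at 2θ = 25.8° come from the text, while the FWHM values below are illustrative assumptions (the paper does not quote its measured widths):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, fwhm_std_deg,
                     k=0.9, wavelength_nm=0.15406):
    """Crystallite size D = k*lambda / (cos(theta) * sqrt(w^2 - w0^2)),
    where w is the experimental FWHM and w0 the standard (instrumental) FWHM."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.sqrt(math.radians(fwhm_deg) ** 2
                     - math.radians(fwhm_std_deg) ** 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# (002) reflection at 2theta = 25.8 deg with an assumed 1.8 deg experimental
# FWHM and 0.1 deg instrumental FWHM gives a size in the reported 3-5 nm range.
print(round(scherrer_size_nm(25.8, 1.8, 0.1), 2))
```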
B. Thermal Gravimetric Analysis (TGA) The graph in figure 2 shows that all powder samples start to lose weight below 100°C. This is attributed to the evaporation of adsorbed water, and phase transformation occurs up to 900°C. This trend continued up to 1300°C. C. Fourier Transform Infra-Red (FTIR) The chemical functional groups of the synthesized calcium phosphate powder were determined by FTIR analysis. Figure 3, for example, shows the FTIR spectra of HA powder produced by dry mechanochemical synthesis at different speeds. The phosphate (PO4) band has four vibration modes: ν1, ν2, ν3 and ν4. In all samples the PO4 band appeared in the ν1 and ν2 modes at around 962 cm-1 and 474 cm-1, respectively. The ν3 mode also consists of PO4 bands at around 1088 cm-1 and 1023 cm-1, and the PO4 band was also detected in the ν4 mode at around 599 cm-1 and 560 cm-1.
Fig. 2 TGA graphs of HA powders milled at different speeds, heated to 1300°C in air (heating rate = 10°C/min)
From the M1 HA powders, the ν1, ν3 and ν4 modes became weak as the sintering temperature increased from 1150°C to 1350°C, while the ν2 mode disappeared with increasing sintering temperature. The same trend was observed for the M2 and M3 HA powders. A weak hydroxyl band was detected at 628 cm-1 [22, 26] in all as-synthesized HA powders at the different milling speeds. In the M1 HA powders, the hydroxyl band appeared clearly at 1150°C compared with the as-synthesized powder, weakened at 1250°C and vanished at the 1350°C sintering temperature. The M2 HA powder showed a reduced OH band from 1150°C to 1250°C, which was likewise lost at 1350°C. In the M3 powder, the small hydroxyl band remained only at 1150°C and was fully removed at 1250°C and 1350°C. The band around 1635-1646 cm-1 was attributed to the presence of water in all powder samples, and the broad band of absorbed moisture can be seen in the range 3200-3600 cm-1 [27]. These bands disappeared completely after heat treatment at the 1150°C to 1350°C sintering temperatures. D. Vickers Microhardness The effect of sintering temperature on the Vickers hardness is shown in figure 4. In this test, the M3 compacted powder yielded a significant hardness of 5.3 GPa when sintered at 1250°C, followed by the M1 compacted powder with 5.1 GPa after sintering at 1350°C. Only the hardness of the M1 compacted powder increased continuously with sintering temperature; the M2 and M3 compacted powders increased initially up to 1250°C and then dropped at the 1350°C sintering temperature. At 1150°C, the hardness of the M2 compacted powder (4.6 GPa) was higher than that of M3 and M1 (2.9 GPa and 2.1 GPa).
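For reference, Vickers hardness follows from the indentation load and the mean indent diagonal via the standard relation HV = 1.8544 F/d² (F in kgf, d in mm). The sketch below uses that textbook formula with an invented load and diagonal, not measurements from the paper:

```python
# Standard Vickers relation; the example inputs are illustrative only.
def vickers_hardness_gpa(load_kgf, diagonal_mm):
    """HV = 1.8544 * F / d^2 in kgf/mm^2, converted to GPa
    (1 kgf/mm^2 = 9.80665e-3 GPa)."""
    hv_kgf_mm2 = 1.8544 * load_kgf / diagonal_mm ** 2
    return hv_kgf_mm2 * 9.80665e-3

# e.g. a 1 kgf load leaving a 58.5 um mean diagonal corresponds to ~5.3 GPa,
# the order of the hardness reported for M3 sintered at 1250 C.
print(round(vickers_hardness_gpa(1.0, 0.0585), 2))
```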
Fig. 3 The FTIR spectra of HA powder milled at 370 rpm before (a) and after sintering at 1150°C (b), 1250°C (c) and 1350°C (d). Markers indicate the H2O, PO4 and OH bands.
Fig. 4 Microhardness graph for sintered compacted powder samples at the three different milling speeds
IV. DISCUSSION From the XRD analysis, nanocrystalline HA powder was obtained after milling at the different speeds, denoted M1, M2 and M3. The HA structure starts to form at the lowest milling speed, 170 rpm. At this speed another phase, believed to belong to the precursor, was present, indicating that the reaction was incomplete. Higher milling speeds led to the formation of single-phase HA without any secondary phase. The crystallinity increased from 31 to 42.5% with milling speed, as seen from the peaks that grew more intense as the speed increased. The crystallite size was not affected by the milling speed, and sizes of around 3 to 5 nm were obtained through this method. This size is significantly smaller than in a previous study by Silva et al. [28], who used CaHPO4, CaCO3 and NH4H2PO4 as precursors in dry milling at 370 rpm for 15 hours. The weight drop of all powder samples in the range of 900°C and above might be associated with the formation of β-TCP through release of the hydroxyl group. This is consistent with the FTIR spectra, in which the hydroxyl
group virtually disappeared on heating from 1150°C to 1350°C. In addition, the phosphate band was maintained, but its peak was reduced up to 1350°C. During the sintering process, some impurities were detected from bands in the range 1970-2600 cm-1. At 1150°C, the greatest hardness, 4.6 GPa, was shown by the M2 compacted powder, which is related to its smaller crystallite size compared with the M1 and M3 compacted powders. At 1250°C, a significant hardness of 5.3 GPa was found in the M3 compacted powder, followed by the M2 compacted powder. This consolidation might be attributed to the sintering effect on the M3 powder compact.
V. CONCLUSION Nanocrystalline hydroxyapatite has been successfully produced from a dry mixture of Ca(OH)2 and (NH4)2HPO4 at different milling speeds. The milled HA powders were analyzed by XRD, FTIR and TGA. HA powder was obtained at all milling speeds. All HA powders showed continuous weight loss up to 1300°C. The M3 compacted HA powder showed the greatest hardness, 5.3 GPa, at 1250°C. FTIR analysis of the sintered compacts showed weakening bands as the temperature increased. Higher milling speed increased the crystallinity of the powders obtained. Morphology analysis and further powder characterization after synthesis and sintering will be carried out in a future study.
ACKNOWLEDGEMENT The authors are grateful to Biomedical Engineering Research Group of International Islamic University Malaysia (IIUM) for supporting this research.
REFERENCES
[1] M. Wei, et al. (2005) Precipitation of hydroxyapatite nanoparticles: effects of precipitation method on electrophoretic deposition. Journal of Materials Science: Materials in Medicine 16:319-324
[2] S. W. K. Kweh, et al. (2000) Plasma-sprayed hydroxyapatite (HA) coatings with flame-spheroidized feedstock: microstructure and mechanical properties. Biomaterials 21:1223-1234
[3] X. Zheng, et al. (2000) Bond strength of plasma-sprayed hydroxyapatite/Ti composite coatings. Biomaterials 21:841-849
[4] N. Patel, et al. (2001) Calcining influence on the powder properties of hydroxyapatite. Journal of Materials Science: Materials in Medicine 12:181-188
[5] H. Liu and T. J. Webster (2010) Mechanical properties of dispersed ceramic nanoparticles in polymer composites for orthopedic applications. International Journal of Nanomedicine 5:299-313
[6] I. B. Leonor, et al. (2003) In vitro bioactivity of starch thermoplastic/hydroxyapatite composite biomaterials: an in situ study using atomic force microscopy. Biomaterials 24:579-585
[7] J. Ni and M. Wang (2002) In vitro evaluation of hydroxyapatite reinforced polyhydroxybutyrate composite. Materials Science and Engineering: C 20:101-109
[8] D. R. Ciocca, et al. (2007) A pilot study with a therapeutic vaccine based on hydroxyapatite ceramic particles and self-antigens in cancer patients. Cell Stress & Chaperones 12:33-43
[9] S. Sotome, et al. (2004) Synthesis and in vivo evaluation of a novel hydroxyapatite/collagen-alginate as a bone filler and a drug delivery carrier of bone morphogenetic protein. Materials Science and Engineering: C 24:341-347
[10] S. Wang, et al. (2010) Towards sustained delivery of small molecular drugs using hydroxyapatite microspheres as the vehicle. Advanced Powder Technology 21:268-272
[11] T. J. Webster, et al. (2004) Osteoblast response to hydroxyapatite doped with divalent and trivalent cations. Biomaterials 25:2111-2122
[12] J. Wang and L. L. Shaw (2009) Nanocrystalline hydroxyapatite with simultaneous enhancements in hardness and toughness. Biomaterials 30:6565-6572
[13] I. Sopyan and A. Natasha (2009) Preparation of nanostructured manganese-doped biphasic calcium phosphate powders via sol-gel method. Ionics 15:735-741
[14] C. Y. Tang, et al. (2009) Influence of microstructure and phase composition on the nanoindentation characterization of bioceramic materials based on hydroxyapatite. Ceramics International 35:2171-2178
[15] G. K. Lim, et al. (1999) Nanosized hydroxyapatite powders from microemulsions and emulsions stabilized by a biodegradable surfactant. Journal of Materials Chemistry 9:1635-1639
[16] Ramesh Singh, Iis Sopyan, Mohammed Hamdi (2008) Synthesis of nano sized hydroxyapatite powder using sol-gel technique and its conversion to dense and porous body. Indian Journal of Chemistry 47:1626-1631
[17] K. Ioku, et al. (2006) Hydrothermal preparation of tailored hydroxyapatite. Journal of Materials Science 41:1341-1344
[18] B. Nasiri-Tabrizi, et al. (2009) Synthesis of nanosize single-crystal hydroxyapatite via mechanochemical method. Materials Letters 63:543-546
[19] J. Salas, et al. (2009) Effect of Ca/P ratio and milling material on the mechanochemical preparation of hydroxyapatite. Journal of Materials Science: Materials in Medicine 20:2249-2257
[20] K. C. B. Yeong, et al. (2001) Mechanochemical synthesis of nanocrystalline hydroxyapatite from CaO and CaHPO4. Biomaterials 22:2705-2712
[21] N. Y. Mostafa (2005) Characterization, thermal stability and sintering of hydroxyapatite powders prepared by different routes. Materials Chemistry and Physics 94:333-341
[22] S.-H. Rhee (2002) Synthesis of hydroxyapatite via mechanochemical treatment. Biomaterials 23:1147-1152
[23] C. C. Silva, et al. (2003) Structural properties of hydroxyapatite obtained by mechanosynthesis. Solid State Sciences 5:553-558
[24] Markovic M., Fowler B. O., Tung M. S. (2004) J Res Natl Inst Stand Technol 109:553
[25] T. Tian, et al. (2008) Synthesis of Si-substituted hydroxyapatite by a wet mechanochemical method. Materials Science and Engineering: C 28:57-63
IFMBE Proceedings Vol. 35
[26] Fowler B. O. (1974) Infrared studies of apatites II. Preparation of normal and isotopically substituted calcium, strontium, and barium hydroxyapatites and spectra-structure-composition correlations. Inorganic Chemistry 13:207-214
[27] D. Choi and P. N. Kumta (2007) Mechano-chemical synthesis and characterization of nanostructured β-TCP powder. Materials Science and Engineering: C 27:377-381
[28] C. C. Silva, et al. (2004) Properties and in vivo investigation of nanocrystalline hydroxyapatite obtained by mechanical alloying. Materials Science and Engineering: C 24:549-554
Corresponding Author: Dr. Iis Sopyan
Institute: Department of Manufacturing and Materials Engineering, International Islamic University Malaysia
Street: PO Box 10
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
The Effect of Ball Milling Hours in the Synthesizing Nano-crystalline Forsterite via Solid-State Reaction
K.L. Samuel Lai1, C.Y. Tan1, S. Ramesh2, R. Tolouei1, B.K. Yap1, and M. Amiriyan1
1 Ceramics Technology Laboratory, COE, Universiti Tenaga Nasional, Jalan IKRAM-UNITEN, 43009 Kajang, Selangor, Malaysia
2 Department of Engineering Design and Manufacturing, College of Engineering, Universiti Malaya, KL, Malaysia
Abstract— The effect of ball milling hours on the manufacture of nano-crystalline forsterite powder was investigated in terms of particle size and phase stability. A quasi-mechanical activation method followed by heat treatment was successfully employed to produce nano-crystalline forsterite powder. During the attempt, ball milling hours were manipulated to study its effects on the particle size. XRD analysis was then conducted on the heat treated powders to determine the critical particle size. Based on XRD traces, it was revealed that 7 hours of low-energy ball milling was sufficient to produce crystalline forsterite powders. Subsequent FWHM studies also affirmed the critical particle size (≈ 41 nm) required to successfully transform MgO and Mg3Si4O10(OH)2 into pure forsterite powder. Keywords— Forsterite, Synthesis, Bioceramic.
I. INTRODUCTION Recent studies suggest that forsterite ceramics have the potential to be developed as a bioactive ceramic for biomedical purposes. This potential is attributed to its chemical composition (Mg2SiO4), whereby Mg is reported to contribute to the bone mineralization of calcified tissues [1,2] and Si is an indispensable mineral during the preliminary stages of bone calcification [3]. Furthermore, the high proliferation rates exhibited by forsterite in a cytotoxicity study also support its use as a biomedical implant [1]. Besides its biological attraction, forsterite also possesses favorable mechanical properties. Mechanical tests conducted on forsterite samples have revealed high fracture toughness, with a maximum reported value of approximately 4.3 MPa·m1/2 [3]. Additionally, it has been established that the fracture toughness of forsterite exceeds the lower fracture-toughness limit of human bone (2.0 MPa·m1/2) [4], making forsterite a potentially suitable material for future development of high load-bearing biomedical implants. However, the manufacture of crystalline forsterite via solid-state reaction with subsequent heat
treatment is often found to lack homogeneity [5,6] and often results in undesirable intermediate phases, e.g. clinoenstatite (MgSiO3) [5]. In general, the homogeneity issue of forsterite is linked to the sluggish formation of silicates, since the diffusivity of the formed compounds is relatively low [7]. Furthermore, MgSiO3 is regarded as an undesired phase, since it is detrimental to the high-temperature properties of forsterite [8]. To overcome the homogeneity issues of forsterite, researchers have employed mechanical energy as a means of refining the powder particle size [5,7], often into nanoparticles; documented studies have shown that particles within the nanometre range boost the diffusion process and enhance the chemical reactions between the starting precursors [5]. The literature also suggests that refinement of the particle size increases the reacting interface of the starting precursors, thereby enhancing the reaction kinetics of the reactants during heat treatment [5]. With the objective of pursuing a more economical method of manufacturing forsterite ceramics, the present work investigates the potential of conventional, low-energy ball milling. The effect of ball-milling time on the conversion of magnesium oxide (MgO) and talc (Mg3Si4O10(OH)2) into pure forsterite was investigated. Examination of the XRD traces yielded positive results, indicating good prospects for manufacturing nano-crystalline forsterite via solid-state reaction without the need for high mechanical energy.
II. METHODS AND MATERIALS Nano-crystalline forsterite powder was prepared using a quasi-mechanical activation method, whereby the forsterite reactants were ball-milled in a uni-directional, variable-speed, table-top ball-milling machine. The forsterite reactants comprised MgO (Merck, 97%) and Mg3Si4O10(OH)2 (Sigma-Aldrich, 99%), with ethanol (HmbG, 98%) employed as the solvent. The MgO and Mg3Si4O10(OH)2 powders were weighed using a balance with an accuracy of up to
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 102–104, 2011. www.springerlink.com
4 decimal places (Mettler Toledo, Switzerland). The weighed powders were then subjected to high-frequency vibration in an ultrasonic cleaner (Liarre, Italy) to break up existing powder agglomerates and ensure homogeneous mixing. MgO was first mixed with 100 ml of ethanol and ultrasonicated for 2 minutes, followed by the addition of Mg3Si4O10(OH)2 to the ultrasonicated MgO mix for a further 2 minutes of high-frequency vibration. The ultrasonicated powders were then ball-milled at 3500 rpm for 1, 5, 7 and 10 hours, respectively. The ball-milled slurry was then dried at 60°C (Memmert) and sieved (230 μm) prior to heat treatment at 1200°C for 2 hours. Phase stability studies of all the heat-treated powders were carried out using an X-ray diffractometer (Rigaku Geiger-Flex Diffractometer, Japan). The powder particle size was then obtained from the FWHM, with the peak width calculated according to Scherrer's equation. The critical particle size was then obtained by comparing the XRD traces with the calculated particle sizes.
A thorough study of the XRD traces showed that conventional, low-energy ball milling was capable of producing pure forsterite powder after subjecting the reactants to 7 hours of milling, as shown in Figure 2. The ability to form pure forsterite after 7 hours of ball-milling in the current work is not in agreement with previously reported findings [5, 7], and furthermore demonstrates that high mechanical energy is not needed to overcome the formation of the intermediate phases. This therefore offers a potentially efficient and cost-saving method of producing pure forsterite.
III. RESULTS AND DISCUSSION The XRD analysis of the produced forsterite powder in Figure 1 shows the peaks obtained from the heat-treated powders that were ball-milled for 1, 5, 7 and 10 hours respectively. The presence of intermediate phases was noted for forsterite powders milled for 1 hr (MgO and MgSiO3) and 5 hr (MgO), as shown in Figure 1.
Fig. 2 XRD pattern of forsterite powder milled at 7 hours

On the successful formation of forsterite, it was hypothesized that prolonged ball-milling could have led to the shrinkage of the particle size, thus assisting the conversion of the starting precursors into pure forsterite. To confirm this, a FWHM study was conducted on the as-milled, before-heat-treatment powders to obtain the particle size of the milled powders. According to Scherrer's equation (Equation 1), the peak width is inversely proportional to the crystallite size. Therefore, in order to obtain the powder particle size, a further XRD examination of the ball-milled, untreated powders was carried out, as shown in Figure 3.

t = Kλ / (B cos θ)   (1)
Fig. 1 XRD pattern of forsterite powder milled at various hours (●= Mg2SiO4, ▼= MgSiO3, ■ = MgO)
IFMBE Proceedings Vol. 35
where
t = crystallite size
B = peak width (FWHM)
K = crystal shape factor = 1
λ = X-ray wavelength
θ = Bragg angle
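As a worked illustration of Scherrer's equation (Equation 1), the sketch below computes a crystallite size from a measured peak width. The Cu Kα wavelength and the example FWHM value are illustrative assumptions, not measurements from this study; only the ~28.6° peak position and K = 1 come from the text.

```python
import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=1.0):
    """Crystallite size t = K*lambda / (B*cos(theta)).

    fwhm_deg      -- peak width B (FWHM), in degrees 2-theta
    two_theta_deg -- peak position, in degrees 2-theta
    wavelength_nm -- X-ray wavelength (default: Cu K-alpha, an assumption)
    k             -- crystal shape factor (the paper uses K = 1)
    """
    b_rad = math.radians(fwhm_deg)                 # FWHM converted to radians
    theta_rad = math.radians(two_theta_deg / 2.0)  # Bragg angle is half of 2-theta
    return k * wavelength_nm / (b_rad * math.cos(theta_rad))

# Hypothetical example: a 0.21-degree-wide peak near the ~28.6 degree reflection
size = scherrer_size_nm(fwhm_deg=0.21, two_theta_deg=28.6)
print(f"crystallite size = {size:.1f} nm")
```

Note that a broader peak gives a smaller computed size, which is why peak broadening with longer milling indicates particle refinement.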
K.L. Samuel Lai et al.
assumes a significant role in the conversion of MgO and Mg3Si4O10(OH)2 into pure Mg2SiO4. It was further shown that conventional, low-energy ball-milling was capable of producing the nano-particles required for nano-crystalline forsterite.
ACKNOWLEDGEMENT The authors would like to thank UNITEN for providing financial support under grant no. J510050318. In addition, the support provided by SIRIM Berhad in carrying out this research is gratefully acknowledged.

Fig. 3 XRD patterns of forsterite powder milled at various hours (before heat treatment)
REFERENCES
By coupling the width of the most prominent peaks (≈ 28.6°) in Figure 3 with Scherrer's equation (Equation 1), the calculated results indicated that particle size decreased with increasing ball-milling time, as shown in Figure 4. The results in Figure 4 therefore validate the earlier hypothesis on the refinement of powder particle size and its significance for the formation of pure forsterite. In the current work, the successful formation of nano-crystalline forsterite was observed at a critical particle size of approximately 41 nm.
Fig. 4 Effect of milling hours on the crystallite size of forsterite powder
1. Ni S, Chou L, Chang J (2007) Preparation and characterization of forsterite (Mg2SiO4) bioceramics. Ceramics Inter 33:83–88 DOI 10.1016/j.ceramint.2005.07.021
2. Schwarz K, Milne DB (1972) Growth-promoting effects of silicon in rats. Nature 239:333–334
3. Kharaziha M, Fathi MH (2010) Improvement of mechanical properties and biocompatibility of forsterite bioceramic addressed to bone tissue engineering materials. J Mech Behav Biomed Mater 3:530–537 DOI 10.1016/j.jmbbm.2010.06.003
4. Suchanek W, Yashima M, Kakihana M et al (1997) Hydroxyapatite ceramics with selected sintering additives. Biomaterials 18:923–933 DOI 10.1016/S0142-9612(97)00019-7
5. Tavangarian F, Emadi R, Shafyei A (2010) Influence of mechanical activation and thermal treatment time on nanoparticle forsterite formation mechanism. Powder Tech 198:412–416 DOI 10.1016/j.powtec.2009.12.007
6. Kosanovic C, Stubicar N, Tomasic N et al (2005) Synthesis of forsterite powder by combined ball milling and thermal treatment. J Alloys Comp 389:306–309 DOI 10.1016/j.jallcom.2004.08.015
7. Kiss SJ, Kostić E, Djurović D et al (2001) Influence of mechanical activation and fluorine ion on forsterite formation. Powder Tech 114:84–88 DOI 10.1016/S0032-5910(00)00268-0
8. Fathi MH, Kharaziha M (2009) The effect of fluorine ion on fabrication of nanostructure forsterite during mechanochemical synthesis. J Alloys Comp 472:540–545 DOI 10.1016/j.jallcom.2008.05.032
9. Cullity BD, Stock SR (2001) Elements of X-ray diffraction, 3rd edition. Prentice Hall, 167–170
Author: Dr. Chou Yong Tan
IV. CONCLUSION The effects of ball-milling time on the formation of forsterite have affirmed that the particle size of the reactants
Institute: University Tenaga Nasional
Street: UNITEN-IKRAM
City: Kajang
Country: Malaysia
Email: [email protected]
The Effect of Titanium Dioxide to the Bacterial Growth on Lysogeny Broth Agar N.H. Sabtu, W.S. Wan Zaki, T.N. Tengku Ibrahim, and M.M. Abdul Jamil Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Abstract— In this paper, the effect of titanium dioxide powder on bacterial growth on Lysogeny broth (LB) agar was investigated. LB agar was used as a nutrient medium to grow bacteria from drain water mixed with different weights of TiO2. An image of the cultured plate was captured using a webcam, and the size of the bacterial colonies was measured by software written in Microsoft Visual Basic. The results show that titanium dioxide powder is able to decrease the growth of bacterial colonies in the drain water sample. This finding suggests that titanium dioxide has the potential to be used as a purification agent in water treatment processes. Keywords— Titanium dioxide, LB agar, Bacterial colonies, Microsoft Visual Basic software.
I. INTRODUCTION Monitoring the environmental quality of water, soil or air is very important to ensure that these environments comply with pre-defined government limits for particular bacteria, fungi or moulds. In drinking water samples especially, high levels of bacterial contamination can lead to the rapid spread of illness, which in some cases can be fatal. Titanium dioxide, also known as titanium (IV) oxide or titania, is the naturally occurring oxide of titanium with the chemical formula TiO2. It is noteworthy for its wide range of applications, such as in solar cells, photocatalysis, chemical sensors, white pigments, and optical coatings [1]. More recently, this material has found potential application in the removal of inorganic and organic pollutants in water treatment processes [2,3,4]. The photocatalytic activity of TiO2 thin films has also been investigated for environmental purification and self-cleaning applications, governed by the photo-induced decomposition of organic pollutants [5]. In this study, the effect of titanium dioxide powder on bacterial growth on LB agar was investigated. The bacteria were acquired from drain water, and the size of the bacterial colonies on the LB agar was measured using purpose-developed Microsoft Visual Basic software.
II. METHODOLOGY The LB agar was prepared by immersing two LB tablets in 100 ml of distilled water in a beaker. The mixture was
heated to 100˚C and then cooled to 40–50˚C before being poured into half of the petri plate. Bacteria from the water sample were cultured onto the nutrient medium using a wire loop. The water sample was prepared by adding different weights of TiO2 to 50 ml of drain water in a beaker and leaving the mixture for three hours. The weight of TiO2 powder was set to 1, 2, and 3 grams respectively. After the loop streak was made, the petri plate was placed inside an incubator for 24 hours at a temperature of 37˚C. After incubation for 1 day, the petri plate was taken out and placed under the webcam to be analyzed by the Microsoft Visual Basic software. The image of the bacterial colonies is displayed on the interface of the system and the size of the colonies is calculated automatically. Figure 1 shows the flowchart of the developed software.
Fig. 1 Flowchart of software development for measuring bacterial colonies size
There are many methods that can be used to count bacterial colonies, such as fuzzy formalism [6], the distance transform [7] and model-based image segmentation [8]. In this project, the bacterial colony size was measured from differences in image pixel values. The color of LB agar
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 105–107, 2011. www.springerlink.com
without bacterial colonies is bright yellow, but where colonies exist the color changes to dark yellow. This change in color changes the image pixel values, and thus the size of the bacterial colonies can be calculated. The result can then be saved in a database system created using Microsoft Office Access.
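The pixel-value approach described above can be sketched as follows. The original system was implemented in Visual Basic; this Python sketch only illustrates the idea, and the intensity threshold and pixel-to-area calibration factor are hypothetical values, not parameters from the paper.

```python
def colony_area_cm2(pixels, threshold=128, cm2_per_pixel=0.0004):
    """Estimate colony area by counting pixels darker than a threshold.

    pixels        -- 2-D list of grayscale intensities (0-255)
    threshold     -- intensities below this count as 'dark yellow' colony
    cm2_per_pixel -- camera calibration factor (assumed value)
    """
    dark = sum(1 for row in pixels for v in row if v < threshold)
    return dark * cm2_per_pixel

# Toy 4x4 "image": agar is bright (~200), a colony patch is dark (~80)
image = [
    [200, 200, 200, 200],
    [200,  80,  80, 200],
    [200,  80,  80, 200],
    [200, 200, 200, 200],
]
area = colony_area_cm2(image)
print(f"estimated colony area = {area:.4f} cm^2")  # 4 dark pixels x 0.0004 cm^2
```

A real implementation would also need to exclude the white TiO2 pigment, which is brighter than the agar and so is not counted by a darkness threshold.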
III. RESULTS AND ANALYSIS The experiment to observe the effect of titanium dioxide powder on bacterial growth was performed by varying the weight of TiO2 powder immersed in 50 ml of drain water. Figures 2, 3 and 4 show the results.

Fig. 4 LB agar sample with 3 grams of TiO2 powder

As shown in Figure 2, the bacterial colonies appear dark yellow, and the white pigment inside the medium is the TiO2 powder. The results show that the size of the bacterial colonies decreased as the amount of TiO2 powder was increased. Next, the image of this sample was analyzed by the developed Microsoft Visual Basic software, as shown in Figure 5.
Fig. 2 LB agar sample with 1 gram of TiO2 powder
Fig. 5 The developed software to calculate the size of bacterial colonies on LB agar
Fig. 3 LB agar sample with 2 grams of TiO2 powder
The results of the automated system are saved in Microsoft Office Access 2007 for future reference. Using this software, the size of the bacterial colonies for each weight of TiO2 powder can be determined. The results of these findings are plotted in Figure 6.
REFERENCES
[Figure 6 chart: size of bacterial colonies (cm², axis 0–25) plotted against weight of titanium dioxide powder (0–3 gram)]
Fig. 6 The size of bacterial colonies plotted against different weights of TiO2
The results show that the size of the bacterial colonies decreased as the amount of TiO2 powder increased. This preliminary finding suggests that TiO2 is a powerful oxidizing agent that breaks down organic chemicals and kills microorganisms [9].
IV. CONCLUSIONS In conclusion, the findings from this study show that TiO2 powder is able to reduce the number of bacterial colonies in the water sample. This suggests the material can be used as a purification agent to maintain hygiene and prevent the spread of pathogenic infection in water treatment processes. In addition, the developed Visual Basic software for measuring bacterial colony size offers substantial advantages in time saving and measurement accuracy compared to the manual counting method.
1. Zhang Y, Crittenden JC et al (1994) Fixed-bed photocatalysts for solar decontamination of water. Environ. Sci. Technol. 28:435–442
2. Dheaya MAA, Patrick SMD et al (2009) Photocatalytic inactivation of E. coli in surface water using immobilised nanoparticle TiO2 films. J. Water Research 43:47–54
3. Hassan AK, Chaure NB et al (2003) Structural and electrical studies on sol–gel derived spun TiO2 thin films. J. Phys. D: Appl. Phys. 36:1120–1123
4. Seong YB, Seung YC et al (2005) Synthesis of highly soluble TiO2 nanoparticle with narrow size distribution. Bull. Korean Chem. Soc. 26(9):1333–1334
5. Yan H, Chunwei Y (2006) Low-temperature preparation of photocatalytic TiO2 thin films. J. Mater. Sci. Technol. 22:239–244
6. Marotz J, Lübbert C et al (2000) Effective object recognition for automated counting of colonies in petri dishes (automated colony counting). Computer Methods and Programs in Biomedicine 66:183–198
7. Mukherjee DP, Amita P et al (1994) Bacterial colony counting using distance transform. International Journal of Bio-Medical Computing 38:131–140
8. Bernard R, Kanduser M et al (2001) Model-based automated detection of mammalian cell colonies. Phys Med Biol. 46(11):3061–3072
9. Block SS, Seng BP et al (1997) Chemically enhanced sunlight for killing bacteria. Journal of Solar Energy Engineering 119:85–91
Author: Wan Suhaimizan bin Wan Zaki
Institute: Universiti Tun Hussein Onn Malaysia
Street: Parit Raja
City: Batu Pahat, Johor
Country: Malaysia
Email: [email protected]
Thermal Analysis on Hydroxyapatite Synthesis through Mechanochemical Method
A.S.F. Alqap1,3, S. Adzila2,4, I. Sopyan1, M. Hamdi2, and S. Ramesh2
1 Department of Manufacturing and Materials Engineering, International Islamic University Malaysia, Kuala Lumpur, Malaysia
2 Department of Engineering Design and Manufacture, University of Malaya, Kuala Lumpur, Malaysia
3 Mechanical Engineering Program, University of Bengkulu, Bengkulu, Indonesia
4 Department of Material Engineering and Design, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
Abstract— The thermal behavior of hydroxyapatite formation through a dry mechanochemical method has been studied. Calcium phosphate was synthesized using calcium hydroxide and di-ammonium hydrogen phosphate as the precursors. Ball milling with a 1/6 powder-to-ball mass ratio was applied to mixtures of calcium hydroxide and di-ammonium hydrogen phosphate at three different speeds, 170, 270 and 370 rpm, for 15 h. The as-milled powders were then sintered at 1150, 1250 and 1350°C for 2 h and subjected to TGA, XRD and FTIR for phase characterization. Calcium phosphate phases containing ammonium were found; the ammonium is a trace of the phosphorus precursor. The process conditions and the choice of precursors determine the type of reactions and their products. Keywords–– Heating, Milling, Hydroxyapatite, Calcium phosphate, Phase transformation.
I. INTRODUCTION Hydroxyapatite (HA) is the focus of many efforts to develop biomaterials for biomedical applications, and many types of synthesis method have been employed for its production. One of them is the mechanochemical process. Mechanical factors are involved in many synthesis processes, where mechanical energy is a key to phase transformation and chemical reactivity. In the ceramics field, mechanical milling has proven successful in driving the chemical reaction that synthesizes HA from calcium and phosphorus precursors, in either dry [1, 2] or wet methods [3-5]. Prolonged mechanical milling has induced an amorphous phase transformed from β-TCP (tricalcium phosphate), making the new phase more soluble than the source phase and more feasible for producing a biphasic mixture of β-TCP and HA after aging in a solution [6]. The mechanical factor is also used to make calcium phosphate precursors more reactive for the setting reaction [7]. Here, dry mechanical milling is attempted to synthesize HA from calcium and phosphorus precursors, and the phases resulting from the mechanically induced thermal reaction are discussed.
II. MATERIALS AND METHODS Mechanical milling was carried out with the objective of synthesizing the hydroxyapatite phase based on the following reaction: 5Ca(OH)2 + 3(NH4)2HPO4 → Ca5(PO4)3OH + 6NH3 + 9H2O
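The stated reaction can be checked for element balance, and the precursor mass ratio it implies can be estimated, with a short sketch. The atomic masses are standard reference values; the helper names are our own and not from the paper.

```python
# Element counts for each species in the stated reaction:
# 5 Ca(OH)2 + 3 (NH4)2HPO4 -> Ca5(PO4)3OH + 6 NH3 + 9 H2O
SPECIES = {
    "Ca(OH)2":     {"Ca": 1, "O": 2, "H": 2},
    "(NH4)2HPO4":  {"N": 2, "H": 9, "P": 1, "O": 4},
    "Ca5(PO4)3OH": {"Ca": 5, "P": 3, "O": 13, "H": 1},
    "NH3":         {"N": 1, "H": 3},
    "H2O":         {"H": 2, "O": 1},
}
LEFT = [("Ca(OH)2", 5), ("(NH4)2HPO4", 3)]
RIGHT = [("Ca5(PO4)3OH", 1), ("NH3", 6), ("H2O", 9)]

def totals(side):
    """Sum element counts over one side of the reaction."""
    out = {}
    for name, coeff in side:
        for el, n in SPECIES[name].items():
            out[el] = out.get(el, 0) + coeff * n
    return out

assert totals(LEFT) == totals(RIGHT)  # the reaction as written is balanced

# Approximate molar masses (g/mol) give the precursor mass ratio
MASS = {"Ca": 40.078, "O": 15.999, "H": 1.008, "N": 14.007, "P": 30.974}

def molar(name):
    return sum(MASS[el] * n for el, n in SPECIES[name].items())

ratio = (5 * molar("Ca(OH)2")) / (3 * molar("(NH4)2HPO4"))
print(f"Ca(OH)2 : DAP mass ratio = {ratio:.3f}")
```

The computed ratio (roughly 0.94 g of Ca(OH)2 per gram of DAP) follows directly from the stoichiometry; the paper does not report the batch masses used.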
Calcium hydroxide (CH, Ca(OH)2) and di-ammonium hydrogen phosphate ((NH4)2HPO4, DAP) were obtained commercially from R&M Chemicals and Systerm, respectively. Planetary ball milling was performed to induce the mechanical effect, using a powder-to-ball mass ratio of 1/6 for 15 hours of intermittent running at three different speeds: 170, 270 and 370 rpm. The as-milled powders were then sintered at 1150, 1250 and 1350˚C for 2 hours with 5˚C/min heating and cooling rates. Phase characterization was carried out using a Perkin Elmer Pyris Diamond TG-DTA at a 10˚C/min heating rate to evaluate thermal stability, a Shimadzu XRD 6000 diffractometer over the range 2θ = 25–55° at a 2°/min scan speed to evaluate phases, and a Perkin-Elmer Spectrum FTIR spectrometer over the 4000–400 cm−1 scanning range with a resolution of 4 cm−1 to evaluate the functional groups of chemical bonding.
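Given the 5 ˚C/min ramps and 2 h holds specified above, the total furnace time per sintering cycle can be estimated as below; the ambient starting temperature is an assumption, since the paper specifies only the ramp rates and hold time.

```python
def furnace_minutes(target_c, hold_min=120, rate_c_per_min=5.0, ambient_c=25.0):
    """Total furnace time (minutes) for one ramp-hold-ramp sintering cycle.

    ambient_c is an assumed starting/ending temperature; the paper
    specifies only the 5 C/min ramps and the 2 h hold.
    """
    ramp = (target_c - ambient_c) / rate_c_per_min
    return ramp + hold_min + ramp  # heat up, hold, cool down

for target in (1150, 1250, 1350):
    print(target, "C:", furnace_minutes(target), "min")
```

Under these assumptions a 1350 ˚C cycle takes about 650 minutes (nearly 11 hours), most of it in the ramps.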
III. RESULTS Thermal characterization is important to know the material's reactivity at specific temperatures. The milling speed clearly affects the weight loss and weight gain of the material under the TGA test. The TG and DTA tests divide the material's behavior into regions, as detailed in Table 1. The line patterns in the endothermal and exothermal regions and during weight gain and weight loss observed in the TG-DTA tests are shown in Figures 1 and 2. The phases that appear after the thermal reaction were evaluated by XRD, and the patterns are depicted in Figure 3. Finally, the IR spectra obtained to evaluate the functional groups of chemical bonding by FTIR characterization are given in Figure 4.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 108–111, 2011. www.springerlink.com
Thermal Analysis on Hydroxyapatite Synthesis through Mechanochemical Method
109
Table 1 TG-DTA tests divide the material's reactivity into regions of weight loss/gain (TGA) and endo-/exothermal response (DTA)

TGDTA, rpm | 30–190°C (I) | 190–400°C (II) | 400–800°C (III) | 800–1000°C (IV) | 1000–1100°C (V) | 1100–1300°C (VI)
170 (DTA) | Endo (−) | Go up to (+) | Continuous in Exo (+) | Still in (+) and down to (−) | Continuous in Endo (−) | Go up to (+)
270 (DTA) | As above | As above | As above | As above | As above | As above
370 (DTA) | As above | As above | As above | As above | As above | As above
170 (TGA) | Fast to W/L | W/L | W/L | W/G at 900°C + fast to W/L | W/L + small W/G | Go up to W/G
270 (TGA) | Fast to W/L | W/L | Jump to W/G at 600°C + W/L | W/G at 900°C + fast to W/L | W/L | A few W/G
370 (TGA) | Fast to W/L | W/L with fluctuation | W/L with fluctuation | W/G at 900°C + fast to W/L | W/L | A few W/G

W/L: weight loss; W/G: weight gain
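The temperature regions used in Table 1 can be captured in a small lookup sketch; this is a convenience for reading the table, not part of the original analysis, and the boundary convention (lower bound inclusive) is an assumption.

```python
# Temperature regions used in Table 1 (deg C), as (lower, upper, label)
REGIONS = [
    (30, 190, "I"), (190, 400, "II"), (400, 800, "III"),
    (800, 1000, "IV"), (1000, 1100, "V"), (1100, 1300, "VI"),
]

def region_of(temp_c):
    """Return the Table 1 region label containing temp_c, or None if outside."""
    for lo, hi, label in REGIONS:
        if lo <= temp_c < hi:
            return label
    return None

print(region_of(600))   # region III -- where the 270 rpm sample jumps in weight
print(region_of(900))   # region IV  -- where all samples show a weight gain
```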
Fig. 1 DTA patterns of three samples of different speeds at 10˚C/min heating rate

Fig. 2 Example TGA patterns, shown for the 270 and 170 rpm speed samples, with an inset to highlight the scale
Fig. 3 The XRD tests of three different speeds: 170 rpm (upper), 270 rpm (middle), and 370 rpm (lower) at three different heating temperatures: 1150˚C (left), 1250˚C (middle), and 1350 ˚C (right)
Fig. 4 The IR spectra characterize the appearance of certain elements by their functional groups, with the transmittance percentage (%T) showing the degree of intensity. The appearance intensity among the available functional groups is comparable (left). An example of the complete IR spectra, taken after 1350˚C heating: 170 rpm black, 270 rpm blue and 370 rpm red (right)
IV. DISCUSSIONS In region I (see Table 1) the sample absorbs heat from its surroundings as a result of the decomposition of DAP and water evaporation; DTA records this in the negative, i.e. endothermal, region, and TGA accordingly records a weight loss. All three speeds behave similarly in region I, where the rate of loss is very high. The formation of calcium ammonium phosphate with crystal water, CaNH4PO4.6H2O (CAP), is possible only at around 50˚C. On heating, the amount of crystal water decreases and the weight therefore decreases. During this heating DAP decomposes into ammonia (NH3) and ammonium di-hydrogen phosphate (ADP). ADP then decomposes into NH3 and phosphoric acid; the phosphoric acid produces water and H4P2O7, and from H4P2O7, P2O5 and water appear. This leaves the system rich in Ca2+, OH−, NH4+, crystal water, CaO, HPO42− and a variety of H-P-O and P-O ions. Another work on ADP with a calcium precursor in aqueous synthesis by Alqap and co-worker [8] is worthy of comparison. In region II, CAP decomposes to CaHPO4 and NH3. CaHPO4 then transforms to calcium pyrophosphate (CPP) with constitutional water. This stage moves region II gradually towards weight gain, and DTA accordingly moves towards the positive, i.e. exothermal, region. The increase is gradual because the formation of the new crystal is accompanied by dehydration of the crystal water. The slowing dehydration rate makes the slope of the increase steeper and slower in the range of 400-600˚C. In region III, the DTA signal remains in the positive, exothermal region. In the range of 600-700˚C the formation of calcium pyrophosphate is still possible (see Figure 2). The formation of the new phase does not happen significantly for the 170 and 370 rpm speeds; they show only a very small, gradual increase. This is unlike the 270 rpm speed, which suddenly jumps to increase its weight by 1%.
The fluctuation between weight gain and weight loss is observed in those samples further on. Between 700-800˚C, a formation of amorphous apatite is possible; this formation, with a very small weight gain before 800˚C, is observed. In region IV, especially at 900˚C, all the samples show a weight gain before they start to lose weight again. Here HA forms. In region V, HA gradually decomposes into β-TCP. This decomposition, observed to be faster up to 1100˚C and gradually slower beyond, is interesting: the density difference between HA and β-TCP is only around 0.08 g/cm3, while that between β-TCP and α-TCP is around 0.3 g/cm3, so there should be another reason to explain this phenomenon. In region VI, the 170 rpm sample turns from weight loss to weight gain at 1200˚C, while the other two continuously lose weight but show a small jump in weight at the end (see Figure 2). These weight gains
are accordingly recorded by DTA as an inflection from endo- to exothermal. This indicates that the formation of a new crystal takes place. XRD characterization shows that the 170 rpm speed gives two major peaks after 1150 and 1250˚C heating, at 38.3˚ and 44.5˚ 2θ, and after 1350˚C heating a peak at 29.5˚ followed by minor peaks at 30.5˚, 31.5˚, 32˚ and so on. The 270 rpm speed gives one major peak at 29.5˚ after 1150˚C heating, with minor peaks at 31˚, 31.7˚ and so on. After the next heating, at 1250˚C, these peaks are reduced in intensity. After 1350˚C heating, the major peak disappears while the minor peaks become significant, i.e., at 30.5˚, 31.3˚, 31.7˚ and so on. At the 370 rpm speed the major peaks disappear and the minor peaks, though still small, become the characteristic peaks of the sample. After 1150˚C heating, peaks at 31.5˚, 32˚ and so on appear. The same appearance follows 1250 and 1350˚C heating, although the intensity after the 1250˚C heating is small. The major peaks at 29.5˚, 38.3˚ and 44.5˚ may belong to ADP, which as a pure component has characteristic peaks at 29˚, 33.3˚, 37.5˚ and 45˚. The minor peaks can be assigned at 29.5˚ 2θ to CPP (29.55˚), CaHPO4.2H2O, DCPD (29.5˚), or CaPO3(OH).2H2O, CPH (29.3˚); at 30.5˚ to α-TCP (30.8˚); at 31˚ to β-TCP; and at 31.7˚ to HA (JCPDS cards: CPP #33-297, DCPD #2-85, CPH #9-77, β-TCP #9-169, α-TCP #9-348, HA #9-432). The minor presence of the calcium phosphate phases together with the major appearance of ADP is very interesting: it suggests that the former is possibly coated by the latter, or that the former is crystalline while the latter is amorphous. From the XRD results, the weight recovery at > 1100˚C in the TGA test is possibly due to the formation of N-H structures. The FTIR patterns are shown in Figure 4.
The main peaks that appear in the IR spectra [9] are mainly in the 3630 /cm region and its surroundings for O-H and N-H, the 3350 /cm region and its surroundings for N-H or C-H, the 2157 /cm region and below for N-H, C-O, C-N or N-O, the 1020 /cm region and surroundings for P-O, and the 560 /cm region and surroundings, also for P-O. Using Δ(%Transmittance), the difference between the %T at the 4000 /cm starting point and the %T at a characteristic peak of the wave numbers mentioned earlier, the degree of intensity is depicted on the left of Figure 4. At all three speeds, O-H as water molecules appears after synthesis (figure not shown here) but not after heating. After the three heatings, O-H of the crystal is observed; interestingly, the crystal O-H increases as the heating temperature increases. This suggests that the increase in heating temperature may drive crystal water into the particles, as a contribution of the products of the many reactions discussed above. However, the 270 rpm speed at 1250˚C heating has the lowest crystal water content (see Figure 4, left). The 3350 and 2157 /cm regions confirm
the appearance of N-H functional bonding. C and O that might come from the air during milling can be neglected here because the system is sealed. The 3350 /cm region increases significantly for the 370 rpm speed after 1350˚C heating. The 2157 /cm region is strengthened by heating, not by speed. The 1020 /cm region is strengthened by both speed and heating, especially at 1250˚C. The 560 /cm region is reduced by speed, although the 370 rpm speed strengthens it. The 3350 and 2157 /cm peaks strengthen each other such that the N-H structure appears significantly and competes with the orthophosphate peaks, i.e. the 1020 and 560 /cm regions (see Figure 4). The N-H does not appear significantly in either the 3300-2800 /cm or the 1456-1400 /cm range, as Salas et al. found [10]; the conditions there and here may differ, just as carbonate systems have been found in different regions between dry and wet processes. The increase in speed increases the intensity of the 1020 /cm region, although this holds only if the sample is heated at 1250˚C. Among the three speeds and three heatings, the 270 rpm speed with 1350˚C heating gives the best appearance of calcium phosphate. Indeed, continuous heating drives the phase transformation from a less stable phase to a more stable one. The dry condition, however, keeps some traces in contact with the particles; on further cooling these traces transform into complex structures like calcium ammonium orthophosphate. The above facts suggest that the dry process supports the appearance of NH3 instead of NH4OH. NH4OH appears when NH3 from the phosphorus precursor reacts with water in an aqueous system. This NH4OH is responsible for the success of hydroxyapatite formation, in that water washes the ammonium out of the phosphorus precursor and creates conditions conducive to hydroxyapatite formation. In the dry condition the situation is much different.
Particle mobility is more limited; once a particle sticks to the wall or a ball, no further mobility is expected, and reactivity then depends fully on heating and on the ball-to-particle mass ratio. However, this condition is not sufficiently fulfilled by the heating and mass ratio used here. There is another factor affecting the reaction products: the physical properties of the precursors, such as hygroscopicity, wettability, melting, boiling and polymerization, which can affect the conditions of the reaction and how its steps advance, as this work has found.
V. CONCLUSIONS The dry milling process involving calcium hydroxide and di-ammonium hydrogen phosphate (DAP) has been employed to produce calcium phosphate powders. Three
different conditions of speed and heating have shown their effects on the phase transformation. The characterization revealed that the calcium phosphate phases appear with an ammonia trace remaining from the phosphorus precursor.
ACKNOWLEDGEMENT The authors are grateful to Biomedical Engineering Research Group of International Islamic University Malaysia (IIUM) for supporting this research.
REFERENCES
1. Nasiri-Tabrizi B, Honarmandi P, Ebrahimi-Kahrizsangi R, Honarmandi P (2009) Synthesis of nanosize single-crystal hydroxyapatite via mechanochemical method. Materials Letters 65:543–546
2. Silva CC, Pinheiro AG, Miranda MAR, Góes JC, Sombra ASB (2003) Structural properties of hydroxyapatite obtained by mechanosynthesis. Solid State Sciences 5:553–558
3. Yeong KCB, Wang J, Ng SC (2001) Mechanochemical synthesis of nanocrystalline hydroxyapatite from CaO and CaHPO4. Biomaterials 22:2705–2712
4. Mostafa NY (2005) Characterization, thermal stability and sintering of hydroxyapatite powders prepared by different routes. Materials Chemistry and Physics 94:333–341
5. Rhee S-H (2002) Synthesis of hydroxyapatite via mechanochemical treatment. Biomaterials 23:1147–1152
6. Gbureck U, Grolms O, Barralet JE, Grover LM, Thull R (2003) Mechanical activation and cement formation of β-tricalcium phosphate. Biomaterials 24:4123–4131
7. Song Y, Feng Z, Wang T (2007) In situ study on the curing process of calcium phosphate bone cement. Journal of Material Sciences: Materials in Medicine 18:1185–1193
8. Alqap ASF, Sopyan I (2009) Low temperature hydrothermal synthesis of calcium phosphate ceramics: effect of excess Ca precursor on phase behaviour. Indian Journal of Chemistry 48A:1492–1500
9. Smith B (1999) Infrared spectral interpretation, a systematic approach. CRC Press, Boca Raton
10. Salas J, Benzo Z, Gonzalez G, Marcano E, Gomez C (2009) Effect of Ca/P ratio and milling material on the mechanical preparation of hydroxyapatite. Journal of Material Sciences: Materials in Medicine 20:2249–2257
Corresponding Author: Dr. Iis Sopyan
Institute: Department of Manufacturing and Materials Engineering, International Islamic University Malaysia
Street: PO Box 10
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Continuous Passive Ankle Motion Device for Patient Undergoing Tibial Distraction Osteogenesis
C.T. Ang1, N.A. Hamzaid1, Y.P. Chua2, and A. Saw2
1 Biomedical Engineering, University Malaya, Kuala Lumpur, Malaysia
2 Department of Orthopaedic Surgery, University Malaya Medical Centre, Kuala Lumpur, Malaysia
Abstract— This paper describes the development of a portable, easy-to-use, standalone Continuous Passive Motion (CPM) device for the ankle, intended to reduce the workload of physiotherapists arising from the increased number of patients, the long duration of treatment with the Ilizarov ring, and the inconvenience for patients of attending physiotherapy sessions frequently. The main elements in constructing the ankle CPM device are discussed in two parts: hardware and software. The hardware part covers the design of the mechanical structure and the specification of the motors, while the software part covers the program code for the PIC18F4520 microcontroller. The expected result is a functional CPM device with user control. Keywords— Continuous Passive Motion, Ankle, Ilizarov Ring, Microcontroller Programming.
I. INTRODUCTION In the United Kingdom, sales of the Ilizarov external fixator increased by 286% between 1993 and 1997, while the number of centres applying Ilizarov fixators increased from 15 to 44 [6]. Patients with an Ilizarov ring are encouraged to apply functional load through the treated limb by using the limb as normally as possible [7]. However, during treatment with the Ilizarov external fixator, movement of joints may be difficult due to pain over the metal-skin wound or surgical wound, or tightness of the soft tissue being stretched. Psychologically, these factors reduce the willingness of the patient to load the treated limb. The most common problem after tibial lengthening using an Ilizarov ring is a reduced range of motion (ROM) in the ankle, which is mainly controlled by the Achilles tendon [1]. Similar complications of loss of ankle ROM were also reported by Taylor [14]. One way to keep the ankle moving is to stretch it passively, either by another person or with the patient's unaffected limbs, i.e. the upper limbs. One way of passively reducing the loss of ROM in dorsiflexion, which is more severe than in plantarflexion, is to use a string or elastic band to keep the foot in position [14]. The inconvenience for patients of attending physiotherapy sessions frequently may be due to the long period of wearing the external fixator (6-8 months for a lengthening of 4 cm) [12]
and the small number of physiotherapists available. These have become the major factors behind the growing number of patients with a marked reduction in range of motion (ROM) due to joint stiffness, or with downward pointing of the foot. All these complications extend patients' recovery time, and further physiotherapy treatment is then needed. To increase ankle ROM, physiotherapy or treatments with a similar effect are required. However, given the increasing number of Ilizarov Ring users, the long duration of treatment, the difficulty of accessing physiotherapists, and their small number, an ankle CPM device for patients with an Ilizarov Ring is highly desirable. Research has shown the effectiveness of CPM (i.e. cyclic stretching) in reducing passive ankle joint stiffness in healthy subjects [4, 8] and in patients with stroke [13]. Moreover, several studies [4, 9, 13] have shown that prolonged static stretching is effective in reducing ankle joint resistance, increasing ankle joint ROM, and improving gait characteristics in spastic ankles. Besides increasing joint ROM, the CPM device was found to increase the volume of blood flow in the femoral vein through ankle joint motion: Bonnaire reported an increase in blood flow from 123% during the first five minutes to 142% after 15 minutes [2]. Overall, CPM applied after orthopedic surgery has been found to prevent joint stiffness and reduce the formation of haematomas and edema [10]. It has also been suggested that early ROM exercise, such as the use of CPM, improves the biomechanical behavior of ruptured tendons in vitro; hence, CPM might be a better rehabilitation method for patients recovering from Achilles tendon ruptures [3]. Given the positive effects shown by previous research, it is appropriate to provide a CPM device to treat ankle joint stiffness in patients undergoing tibial distraction osteogenesis.
II. MATERIALS AND METHODS A. Mechanical Design and Actuator Mechanical Design: In the hardware design, the major component was the mechanical structure, which had to support the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 112–115, 2011. www.springerlink.com
Continuous Passive Ankle Motion Device for Patient Undergoing Tibial Distraction Osteogenesis
foot and give appropriate motion to the ankle through the actuator. The mechanical structure should be made of lightweight material that is nevertheless strong enough to support the weight of the leg. In addition, since Ilizarov Ring treatment for tibial distraction osteogenesis normally involves only one leg, the device was designed with a single pedal. In the training performed manually by physiotherapists, dorsiflexion of the ankle is normally done to a maximum of 30° while plantarflexion is done to a maximum of 50° [5]. A pause of around 10 seconds, depending on the patient's condition, is included at the maximum input angle of both plantarflexion and dorsiflexion. The mechanical structure was therefore designed to have a range of movement of 30° above and 50° below the neutral (zero-degree) level. The motor torque had to be high enough to hold the position during the pause period. In the design of an intelligent stretching device, the footplate was built to be adjustable in all directions so that the ankle axis could be aligned with the motor shaft [13]. Here, the centre of rotation was designed to suit most patients, with measurements based on the foot size of the majority of users. To reduce the torque required from the motor, the device was designed to be used with the patient in a sitting or supine position, with the lower limb straightened by supporting the foot on a chair or stool. Ankle exercise in the sitting position with the knee extended has also been shown to give the best performance for perceived ankle movement, owing to the stretch of the calf muscle [11]. Figure 1 shows the mechanical structure of the ankle CPM as built. Two blocks and a moveable hinge accommodate the Ilizarov Ring and fix its position to prevent slipping during the ankle stretch training exercise.
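The motion range just described (dorsiflexion to +30°, plantarflexion to −50°, with a roughly 10 s hold at each extreme) can be sketched as a setpoint generator. This is an illustrative sketch only; the 1° step size and the generator structure are assumptions, not the authors' firmware.

```python
# Sketch of the CPM motion profile: sweep neutral -> +30 deg, hold,
# sweep to -50 deg, hold, return to neutral. Step size is assumed.
DORSI_MAX = 30      # degrees above neutral (dorsiflexion limit)
PLANTAR_MAX = -50   # degrees below neutral (plantarflexion limit)
PAUSE_S = 10        # hold time at each extreme, seconds

def cpm_cycle(step_deg=1):
    """Yield (angle_deg, hold_seconds) setpoints for one full CPM cycle."""
    angle = 0
    while angle < DORSI_MAX:          # neutral -> dorsiflexion
        angle += step_deg
        yield angle, 0
    yield DORSI_MAX, PAUSE_S          # pause at +30 deg
    while angle > PLANTAR_MAX:        # dorsiflexion -> plantarflexion
        angle -= step_deg
        yield angle, 0
    yield PLANTAR_MAX, PAUSE_S        # pause at -50 deg
    while angle < 0:                  # plantarflexion -> neutral
        angle += step_deg
        yield angle, 0

points = list(cpm_cycle())
```

A real controller would feed each setpoint to the motor driver and wait out the hold times; here the generator only enumerates the trajectory.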
Actuator: In choosing the most suitable motor, torque takes priority over speed. Low speed is preferable so that the device will not harm the patient during training. A 24 Vdc iron-core DC motor (model R555) with an output speed of 3500 rpm and an output torque of 10 N·cm was chosen and stepped down through a gearbox to 80 rpm. With an external gear system (1:100), as shown in Fig. 1, the output torque is further increased to 195 N·m. B. Software Functional Block As shown in Figure 2, the design of the ankle CPM device can be divided into three major units and their components. The Input Acquire Unit (IAU), consisting of a potentiometer, was used
113
to obtain the change of angle at the pedal. The voltage obtained is first converted from analogue to digital so that it can be further processed to reduce noise. The IAU is important for monitoring pedal movement, and it also provides output to the display to alert users.
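The IAU's analogue-to-digital step can be illustrated by a linear mapping from the PIC18F4520's 10-bit ADC counts (0–1023) to pedal angle. The two calibration counts below are hypothetical values for illustration; the real mapping depends on the potentiometer wiring and calibration.

```python
# Sketch of the IAU conversion: raw 10-bit ADC count -> pedal angle.
# ADC_AT_PLANTAR / ADC_AT_DORSI are assumed calibration points, not
# measured values from the actual device.
ADC_AT_PLANTAR = 100    # assumed count at -50 deg (full plantarflexion)
ADC_AT_DORSI = 900      # assumed count at +30 deg (full dorsiflexion)

def adc_to_degrees(count):
    """Linearly map a raw 10-bit ADC count to pedal angle in degrees."""
    span_counts = ADC_AT_DORSI - ADC_AT_PLANTAR
    span_deg = 30 - (-50)   # total mechanical range: 80 degrees
    return -50 + (count - ADC_AT_PLANTAR) * span_deg / span_counts
```

In the firmware, this conversion would run after each ADC read so that the CPU compares angles, not raw counts, against the user's input parameters.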
Fig. 1 Mechanical Structure of ankle CPM

The Output Control Unit (OCU) is the unit that makes the movement of the pedal possible. The OCU consists of a DC motor controlled by the PWM output from the Central Processing Unit (CPU). The CPU, consisting mainly of a microcontroller, manages the whole process: it obtains the input, processes it into meaningful values, and sends output to the OCU and the User Interface Unit (UIU). The UIU is an important feature for communicating with users. Various parameters and the status or condition of the device are continuously shown on the LCD display. Signals are also conveyed to users through LEDs of different colors, indicating different device conditions. Users are prompted to input parameters via the keypad and switches. In the software programming, considerations such as the ability of the hardware to respond to the output given by the microcontroller had to be taken into account. In a mechatronic device, the software functions as the main controller governing the desired movements and functions. Good programming should cover all the possibilities that might occur during operation of the device.
IFMBE Proceedings Vol. 35
114
C.T. Ang et al.
Fig. 2 Functional Block Diagram (IAU: potentiometer and ADC converter; CPU: PIC18F4520 on an SK40C board; OCU: PWM signals, motor driver, DC motor; UIU: LCD display, LEDs, keypad, switch)
C. Flow Chart As shown in Figure 3, once the device is turned on, it prompts the user to input the required parameters. The value obtained from the potentiometer is then compared against these parameters, followed by a comparison of the cycle number. Once both conditions are fulfilled, the program ends and indicates that the training has finished. When the emergency stop button is pressed, even in the middle of training, the controller directs the device to return to zero degrees from whatever angle it is at. There is also a Reset button to reset the whole program and an ON/OFF toggle switch to reduce power consumption.
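The flow in Figure 3 (parameter input, cycle-count comparison, emergency return to zero) can be sketched as a control loop. The hardware interfaces (`read_angle`, `drive_motor`, `stop_pressed`) and the crude proportional return-to-zero are assumptions for illustration, not the actual firmware.

```python
# Sketch of the Fig. 3 control flow with mocked hardware callbacks.
def run_training(target_cycles, read_angle, drive_motor, stop_pressed):
    """Run the CPM training; return ('done' | 'emergency', cycles completed)."""
    cycles = 0
    while cycles < target_cycles:
        if stop_pressed():
            # Emergency stop: drive the pedal back toward zero degrees.
            while abs(read_angle()) > 0.5:
                drive_motor(-read_angle())
            return "emergency", cycles
        drive_motor(0)   # placeholder for executing one full motion cycle
        cycles += 1
    return "done", cycles

# Crude mock plant for demonstration: the "motor" halves the angle error
# on each emergency-return command.
state = {"angle": 20.0, "stop": False}
result = run_training(
    3,
    lambda: state["angle"],
    lambda cmd: state.__setitem__("angle", state["angle"] + 0.5 * cmd),
    lambda: state["stop"],
)
```

The Reset button and ON/OFF switch sit outside this loop; the microcontroller would simply restart the program or cut power, respectively.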
Fig. 3 Flow Chart of the Software Program
Further improvements can be made so that the device can be applied clinically on patients and various experimental data can be collected.
III. EXPECTED RESULTS Figure 4 shows the operation of the device, which produces the force to move the foot in plantarflexion and dorsiflexion. The figure also shows the normal placement of the Ilizarov Ring and the ankle ROM; the axes of the device ROM and the ankle ROM should coincide. The expected results of this study are that the device functions as programmed and can be applied to patients undergoing tibial distraction osteogenesis. The device should meet its objectives of being functional, standalone, easy to use, and portable and, most importantly, safe for both users and operators.
Fig. 4 Device Operation
IV. DISCUSSION AND FUTURE WORKS The mechanical structure built is functional; however, it can be further improved with an adjustable centre of rotation so that it fits patients with different foot sizes. The materials of the mechanical structure can also be improved to reduce weight and enhance portability. On the software side, a Bluetooth module could be added to ease use by patients. Miniaturization of the user interface, together with circuit optimization, could make the device lighter and more user friendly.
REFERENCES
1. Barreto, B. V. (2007) Complications of Ilizarov leg lengthening: a comparative study between patients with leg discrepancy and short stature. International Orthopaedics (SICOT) 31:587-591
2. Bonnaire, F. (1994) Mechanical dynamic ankle passive motion for physical prevention of thrombosis: changes in hemodynamics in the lower pressure system with new dynamic splints. 97:366-371
3. Bressel E, M. P. (n.d.) Biomechanical behavior of the plantar flexor muscle-tendon unit after an Achilles tendon rupture. Am J Sports Med 29(3):321-326
4. Bressel E, M. P. (2002) The effect of prolonged static and cyclic stretching on ankle joint stiffness, torque relaxation, and gait in people with stroke. Phys Ther 82:880-887
5. Dai, J. S. (2004) Sprained Ankle Physiotherapy Based Mechanism Synthesis and Stiffness Analysis of a Robotic Rehabilitation Device. Autonomous Robots 16:207-218
6. Graham (1999) Personal communication. Smith and Nephew Surgical Products
7. Ilizarov, G. (1989) Tension-stress effect on the genesis and growth of tissues: Part I. The influence of stability of fixation and soft tissue preservation. Clinical Orthopaedics and Related Research 238:249-281
8. McNair P, D. E. (2001) Stretching at the ankle joint: viscoelastic responses to holds and continuous passive motion. Med Sci Sports Exerc 33:354-358
9. Nuyens, G. (2002) Reduction of spastic hypertonia during repeated passive knee movements in stroke patients. Arch Phys Med Rehabil 83:930-935
10. O'Driscoll SW, G. N. (2000) Continuous passive motion (CPM): theory and principles of clinical application. J Rehabil Res Dev 37:179-188
11. Refshauge, K. M. (1995) Perception of Movement at the Human Ankle: Effect of Leg Position. Journal of Physiology, 243-248
12. Saw A., C. Y. (2008) Patient Handbook on Limb Lengthening and Reconstruction with External Fixator. Kuala Lumpur: New Voyager Corporation
13. Selles, R. W. (2005) Feedback-Controlled and Programmed Stretching of the Ankle Plantarflexors and Dorsiflexors in Stroke: Effects of a 4-Week Intervention Program. American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation, 86
14. Taylor, G. J. (1988) Ankle motion after external fixation of tibial fractures. Journal of the Royal Society of Medicine, 81
V. CONCLUSIONS The ankle CPM device that was built functioned successfully, providing the intended motion and control; further improvements are nevertheless possible.
Author: Ang Cheng Tian
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Musculoskeletal Model of Hip Fracture for Safety Assurance of Reduction Path in Robot-Assisted Fracture Reduction S. Joung1, S. Syed Shikh1, E. Kobayashi1, I. Ohnishi2, and I. Sakuma1 1
Department of Precision Engineering, the University of Tokyo, Tokyo, Japan 2 Graduate School of Medicine, the University of Tokyo, Tokyo, Japan
Abstract— We have developed a fracture-reduction assisting robotic system for hip fracture. The robotic system provides power assistance to surgeons during the reduction procedure, and simulated fracture-reduction trials using a polyurethane bone model have shown good reduction results. While the system can reduce the surgeon's burden, it can also damage the soft tissues around the bone fragments, so the forces produced by the reduction procedure must be predicted or monitored from a safety point of view. To this end we have developed a musculoskeletal model of hip fracture that provides the force acting on the muscles from the relative position of the bone fragments. Though many musculoskeletal models of the limbs have been reported, few studies apply them to fracture reduction. The musculoskeletal model allows us to simulate the reduction force acting on the muscles for a given reduction path. Two uses of the reduction-force simulation are proposed for safe reduction. One is to find the gentle reduction path that minimizes the reduction force: the reduction force is simulated along two reduction paths, and the path with the lesser reduction force is identified as the gentle path. The other is to serve as a detection tool, where unexpectedly large reduction forces can be identified by comparing the measured force with the simulated force in real time. Inter-individual variation in the muscle geometry and parameters still needs to be considered. At present, the second function, using the developed simulation to detect unexpectedly large reduction forces, has not been integrated into the robotic system; this will be part of our future work. Keywords— fracture-reduction robot, safety, musculoskeletal model, fracture-reduction path, fracture-reduction force.
I. INTRODUCTION Occurrence of hip fracture in elderly patients with osteoporosis is expected to increase in an aging society. Most cases of hip fracture are surgically treated and in these cases, fracture reduction that describes the medical procedure to restore a fractured bone to its original alignment should be conducted before fixation of bone fragments. A fracture table is often used to assist pulling power of a lower limb. However, fracture tables have few degrees of freedom and no safe methods are available to avoid the application of excessive force to the injured limb. A robot assisted fracture reduction has the possibility to assist surgeons in handling
translations and rotations of the bone while minimizing human error and, at the same time, increasing the level of precision. We have developed such a system and reported the related problems and countermeasures from a safety point of view [1]. The fracture-reduction force acting on the lower limb during the reduction procedure is one of the key factors in assuring safety when using the robotic system. We have proposed designs for mechanical failsafe units and a software force limiter to prevent excessive forces on the limb, and have shown their usefulness in experimental evaluations. Some clinical data useful for estimating the limit of the fracture-reduction force have already been reported, though more are needed before they can serve as a decisive criterion [2,3]. The aim of this paper is to strengthen the safety of a robot-assisted fracture-reduction system by introducing a musculoskeletal model of the hip fracture that can predict and manage the fracture-reduction force. The model simulates the forces and moments for each joint movement. The following two functions are expected from this simulation:
• The gentle reduction path, which minimizes the reduction force, can be decided on from among the reduction paths that are generated by the navigation system.
• An unexpectedly large reduction force can be detected during the fracture reduction by comparing the reduction force measured in real time with the simulated reduction force.
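The second function above amounts to a runtime check against the simulated baseline. A minimal sketch, where the 50 N tolerance is an arbitrary placeholder and not a value from the paper:

```python
# Sketch of the excess-force check: flag a measured reduction force that
# exceeds the simulated force by more than an assumed tolerance.
def detect_excess_force(measured_n, simulated_n, tolerance_n=50.0):
    """Return True when measured force exceeds simulation by > tolerance."""
    return (measured_n - simulated_n) > tolerance_n
```

In the robotic system this check would run every control cycle against the force-sensor reading, triggering the software force limiter when it fires.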
Though there have been several studies on modeling of the lower extremities [4,5], no application of such modeling to hip fracture reduction has been reported. In this study, the musculoskeletal modeling method is introduced and the model is applied to the hip fracture. Finally, the reduction force is simulated for two reduction paths.
II. METHOD The musculoskeletal model requires a muscle-force simulation and the musculoskeletal geometry. For the muscle-force simulation, Hill-based muscle models have been used in
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 116–120, 2011. www.springerlink.com
several studies [4,6,7]. A Hill-based muscle model, shown in Fig. 1, typically consists of three components: a contractile element, a parallel elastic element, and a series elastic element. The parallel and series elastic elements are simple nonlinear elastic elements. The contractile element is described by a force-length relationship, which results from changes in the overlap between the actin and myosin filaments in the sarcomere [8]. The force-generating properties of a specific muscle actuator are derived by scaling the Hill-based model [8]. For this, four parameters and three curves are required: the peak isometric muscle force (F_o), the optimal muscle-fiber length (l_o), the pennation angle (α), and the tendon slack length (l_s). Here l_o is the muscle length at which the maximum muscle force (F_o) can be developed, l_s is the length at which a tendon begins to develop force when stretched, and the pennation angle α is the angle between the muscle fibers and the tendon. For a given musculotendon length l and activation level a, the model determines the musculotendon force F from the given curves and parameters by

F = F_o [ a f_A(l/l_o) + f_P(l/l_o) ] cos α    (1)

The functions f_A and f_P are calculated using the normalized active and passive force-length curves, respectively. The normalized curves and the parameters can be found on the webpage of the ISB (International Society of Biomechanics) [9]. The curves are provided as lists of control points, and each curve is obtained by interpolating the given control points. In this study, the active force is not considered, on the assumption that relaxants blocking the transmission of nerve impulses to the muscles are given in high doses before the fracture reduction; this means no active force is developed. The active term of Eq. (1) can therefore be cancelled, and the equation can be rewritten as Eq. (2).
F = F_o f_P(l/l_o) cos α    (2)
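Eq. (2) can be sketched directly: interpolate the normalized passive curve at l/l_o, then scale by F_o and cos α. The piecewise-linear control points below are made-up placeholders for illustration, not the ISB data.

```python
import math

# Assumed placeholder control points (l/l_o, F/F_o) for the normalized
# passive force-length curve; the real curve comes from the ISB database.
PASSIVE_CURVE = [(1.0, 0.0), (1.2, 0.1), (1.4, 0.5), (1.6, 1.0)]

def interp(curve, x):
    """Piecewise-linear interpolation through the control points."""
    if x <= curve[0][0]:
        return curve[0][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return curve[-1][1]

def passive_force(l_m, f0, l0, pennation_rad):
    """Eq. (2): F = F_o * f_P(l/l_o) * cos(alpha), no active term."""
    return f0 * interp(PASSIVE_CURVE, l_m / l0) * math.cos(pennation_rad)
```

Below l/l_o = 1 the passive element is slack, so the force is zero, matching the tendon-slack behavior described above.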
The musculoskeletal geometry is required for the calculation of the muscle lengths. It consists of the three-dimensional locations of the muscle attachments and the positional relations of the bones, such as the pelvis, femur, tibia, and foot. There have been several attempts to develop a database for the lower extremities [10-12]. The model of Brand et al. [12] is used in this study because it provides a full database of the 43 muscles. The attachment points of each muscle are described in the coordinates of the attached bone. The graphically reconstructed model of the muscles and the coordinate frames of the bones are shown in Figure 2. Unlike the coordinates given in Brand et al., the coordinates of the femur are divided into a proximal and a distal frame in order to represent the fracture position. The coordinate frame of the force sensor is added to calculate the muscle forces and moments at the position of the force sensor, which will be compared with the force measured in real time. The forces and moments at the force-sensor coordinates are calculated with the geometry model of the lower extremities and the muscle-force model as follows:
1. The coordinates of the pelvis and the distal femur are measured by the navigation system.
2. Each muscle length is calculated based on the measured coordinates.
3. Muscle forces are calculated using Eq. (2).
4. The force (F) and moment (M) at the force sensor are calculated by Eq. (3) and Eq. (4), respectively.
F = Σ_i T f_i    (3)
M = Σ_i r_i × (T f_i)    (4)
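Eqs. (3) and (4) amount to transforming each muscle force into the sensor frame and summing forces and moments. A minimal sketch with plain 3-vectors and a 3×3 rotation for T (the full model uses homogeneous transforms from the navigation data):

```python
# Sketch of Eqs. (3)-(4): F = sum_i T f_i, M = sum_i r_i x (T f_i),
# where f_i is the i-th muscle force, r_i its moment arm about the
# force sensor, and T the sensor-to-distal transformation.
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def mat_vec(T, v):
    """Multiply a 3x3 matrix (rows of tuples) by a 3-vector."""
    return tuple(sum(T[r][c] * v[c] for c in range(3)) for r in range(3))

def sensor_load(T, forces, arms):
    """Return (F, M): total force and moment at the force sensor."""
    F = (0.0, 0.0, 0.0)
    M = (0.0, 0.0, 0.0)
    for f_i, r_i in zip(forces, arms):
        tf = mat_vec(T, f_i)
        F = tuple(a + b for a, b in zip(F, tf))
        M = tuple(a + b for a, b in zip(M, cross(r_i, tf)))
    return F, M
```

With the identity transform and a single 10 N muscle force along z at arm (1, 0, 0), the moment comes out along −y, as expected from the right-hand rule.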
Fig. 1 Hill-based musculotendon actuator model
Fig. 2 Constructed muscle geometry and coordinate frames of the bones; the muscle geometry is presented in the pelvic coordinate frame
where i is the muscle index, f_i is the force developed by the ith muscle, r_i is the vector from the force sensor to the insertion point of the ith muscle, and T is the transformation matrix from the sensor coordinates to the distal coordinates. The distal bone fragment can be moved by modifying T_pd, the transformation matrix from the pelvic to the distal bone-fragment coordinates. Thus, the reduction path can be expressed by enumerating T_pd. Two reduction paths with ten steps from the initial position to the goal position were made, as shown in Figure 3. They are expressed using six parameters that parameterize T_pd: three for the translation and three for the rotation. The two paths have the same initial and goal positions but follow different trajectories. In the initial position, the distal bone fragment is pulled up along the pelvic y-axis (designated negative) and externally rotated (rotation b, positive). The reduction methods for the two paths are similar: traction of the distal fragment (increasing y), reduction of the external rotation (decreasing b), and release of the traction (decreasing y). However, the traction distance, rotation angle, and movement timing differ slightly.
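Selecting the gentle path then reduces to evaluating a scalar force summary per candidate path and taking the minimum. A sketch, with the full musculoskeletal simulation stubbed out by the peak values reported in Section III (273 N and 382 N):

```python
# Sketch of gentle-path selection: pick the candidate reduction path
# whose simulated peak force is smallest. simulate_peak_force stands in
# for the full simulation; here it is just a lookup of reported peaks.
def gentle_path(paths, simulate_peak_force):
    """Return the path with the smallest simulated peak reduction force."""
    return min(paths, key=simulate_peak_force)

peaks = {"path1": 273.0, "path2": 382.0}   # peak forces from Section III
best = gentle_path(["path1", "path2"], peaks.get)
```

In practice the candidate paths would come from the navigation system, and each peak would be computed by running Eqs. (2)-(4) over all ten steps of the path.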
III. RESULTS The forces and moments at the force sensor were simulated for each path. Figure 4 shows the simulated forces and moments along the paths in Figure 3. With reduction path1, the maximum force and moment are 273 N and -58 N·m at step 7,
Fig. 3 Reduction paths, presented using the six parameters that express T_pd: (a) path1 and (b) path2

respectively. The maximum force is 382 N at step 6, and the maximum moment is -101 N·m at step 5 in the case of reduction path2. It can be seen that reduction path1 requires smaller reduction forces and moments than reduction path2.
IV. DISCUSSIONS The reduction-force simulation has been developed and its application methods devised. The method is common in muscle simulation, but few studies apply it to fracture reduction. Its advantages include the ability to find the gentle reduction path, which minimizes the reduction force, from among the reduction paths generated by the navigation system. In addition, the fracture-reduction forces of two different paths can be simulated.
the simulation parameters, the peak muscle force of the glutaeus medius is larger than that of the glutaeus minimus. Nevertheless, the simulation results show that the passive force of the glutaeus minimus is larger than that of the glutaeus medius. This can be explained by the difference in tendon slack length, the length at which force begins to develop when the tendon is stretched. The tendon slack length of the glutaeus medius is greater than that of the glutaeus minimus; thus, a larger muscle length is required to develop a passive force in the glutaeus medius than in the glutaeus minimus. We have developed a musculoskeletal model of hip fracture to predict or monitor the forces produced by the reduction procedure, and we could find the gentle reduction path by comparing the reduction forces of two reduction paths. Although the simulation has been developed, the muscle geometry and parameters vary between individuals, and this requires further consideration. One simple and conventional solution is to scale the geometry and parameters used in this study to the frame of the patient. Future work includes integrating the detection of unexpectedly large reduction forces into the current robot system.
ACKNOWLEDGMENT
Fig. 4 Simulated forces and moments (a) against path1 and (b) against path2
This work was supported by a Health Labour Sciences Research Grant.
REFERENCES
Fig. 5 Variation of passive force of the glutaeus medius and the glutaeus minimus for the simulation of path2
The passive force of each muscle can also be calculated. For instance, Figure 5 shows the variation of the passive force of the glutaeus medius and the glutaeus minimus for the simulation of path2. In general, the glutaeus medius is stronger and bigger than the glutaeus minimus. Comparing
1. S. Joung, H. Liao, E. Kobayashi et al. (2010) Hazard analysis of fracture-reduction robot and its application to safety design of fracture-reduction assisting robotic system. 2010 International Conference on Robotics and Automation, pp 1522-1561
2. Y. Maeda, N. Sugano, M. Saito et al. (2008) Robot-assisted femoral fracture reduction: Preliminary study in patients and healthy volunteers. Comput Aided Surg, 13(3):148-156
3. T. Gosling, R. Westphal, J. Fauulstich et al. (2006) Forces and torques during fracture reduction: Intraoperative measurements in the femur. J Orthop Res, 24(3):333-338
4. S.L. Delp, J.P. Loan, M.G. Hoy et al. (1990) An interactive graphics-based model of the lower extremity to study orthopaedic surgical procedures. IEEE T Bio-Med Eng, 37(8):757-767
5. S.L. Delp, F.C. Anderson, A.S. Arnold et al. (2007) OpenSim: Open-source software to create and analyze dynamic simulations of movement. IEEE T Bio-Med Eng, 54(11):1940-1950
6. U. Glitsch and W. Baumann (1997) The three-dimensional determination of internal loads in the lower extremity. J Biomech, 30(11-12):1123-1131
7. A. J. van Soest and M. F. Bobbert (1993) The contribution of muscle properties in the control of explosive movements. Biol Cybern, 69(3):195-204
8. A. M. Gordon, A. F. Huxley, and F. J. Julian (1966) The variation in isometric tension with sarcomere length in vertebrate muscle fibers. J Physiol, 184(1):170-192
9. International Society of Biomechanics at http://isbweb.org/data/delp/ index.html 10. TM Kepple, HJ Sommer III, KL Siegel, and SJ Stanhope. (1998) A three-dimensional musculoskeletal database for the lower extremities. J Biomech, 31:77-80 11. Thomas L. Wickiewicz, Roland R. Roy, Perry L. Powell et al. (1983) Muscle architecture of the human lower limb. Clin Orthop Relat R, 179:275-282
12. Brand RA, Crowninshield RD, Wittstock CE et al. (1982) A model of lower extremity muscular anatomy. J Biomech Eng, 104(4):304-310
Author: Sanghyun Joung
Institute: The University of Tokyo
Street: 7-3-1 Hongo, Bunkyo-ku
City: Tokyo
Country: Japan
Email: [email protected]
Speed Based Surface EMG Classification Using Fuzzy Logic for Prosthetic Hand Control S.A. Ahmad1, A.J. Ishak1, and S.H. Ali2 1
Dept of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia 2 Dept of Electrical, Electronics and System, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Selangor, Malaysia
Abstract— The electromyographic (EMG) signal is an established technique for the control of a prosthetic hand. In its simplest form, the signals allow a hand to be opened and subsequently closed to grasp an object. An EMG control system consists of two main components: feature extraction and classification. Using information from different speeds of contraction, this paper describes the classification stage of the signal in determining the final grip postures of the hand. A fuzzy logic (FL) system is used to classify the final information, and the results demonstrate the ability of the system to discriminate the outputs successfully. Keywords— prosthesis control, electromyography, fuzzy logic, feature extraction, classification.
I. INTRODUCTION The human hand is a complex biological system with twenty-seven bones, a multitude of muscles and tendons providing multiple degrees of freedom of movement, and an array of over 17,000 tactile sensors for sensory feedback [1]. Prosthetic hands have been designed to provide functional and/or visual replacement for individuals with an amputation or a congenital defect. Commercial hand prostheses provide the user with limited dexterity and functionality owing to the restricted number of grip patterns that can be achieved. The last few decades have seen continuous work on prosthetic hand development, with the ultimate aim of a prosthetic hand that mimics the functionality and control of the human hand. For example, the US Defense Advanced Research Projects Agency has invested heavily in prosthetic arm research, aiming at a mechanical limb with significant function and the sensory perception of a natural limb, controlled via the amputee's nervous system [2]. To improve the functionality of the artificial hand, two main factors have to be considered in the development process: the structural design of the artificial hand and its control mechanism. Rapid growth in the structural design of artificial hands can be seen, and there is renewed interest in the development of
hands with multiple degrees of freedom that enable multiple grip postures [3, 4]. The control mechanism has become the main concern in the prosthetic-hand development process. Various methods have been proposed for controlling the operation of an artificial hand, and surface electromyography (SEMG) has become an established control mechanism for prosthesis control applications. The concept of using the surface electromyography signal for prosthesis control dates back to the 1940s [5]. The residual muscles on the amputee's arm are used as the control channel to determine the final movement of the hand; the simplest application is to either open or close a hand. Attempts to recognize patterns in the surface electromyography signal for control purposes have been investigated, and it has been demonstrated that this control method is robust for prosthesis control. Pattern recognition aims to classify the surface electromyography data based on statistical information extracted from the signal and to determine the final output of the device operation. Even though amputees may not have fully functioning muscles, it has been shown that they are able to generate repeatable surface electromyography patterns during movements [6]. The aim of this research is to develop a simple and robust electromyographic control system (ECS) that discriminates four different grip postures using two SEMG signals as the control channels. This paper emphasizes the classification stage of the ECS, where the extracted features are discriminated to determine the final output of the system. An FL classifier is used in this work, and its design was based on the findings of the feature extraction stage.
II. ELECTROMYOGRAPHIC CONTROL SYSTEM Fig. 1 shows the general block diagram of an ECS based on pattern recognition. It consists of two main modules: feature extraction and classification. Each module plays an important role in the success of the system, but the modules can be adjusted (merged or omitted) depending on the implementation of the system.
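The two-module pipeline can be sketched end to end. This is an illustrative stand-in only: the threshold, the channel-to-movement mapping, and the rule-based classifier are assumptions, not the paper's fuzzy-logic design.

```python
# Sketch of the ECS pipeline: two SEMG channels -> one feature each ->
# classifier -> posture label. Threshold and labels are assumed values.
def mean_absolute_value(window):
    """Reduce one SEMG window to a single amplitude feature."""
    return sum(abs(x) for x in window) / len(window)

def classify(mav1, mav2, threshold=0.1):
    """Crude rule-based stand-in for the fuzzy-logic classifier."""
    if mav1 > threshold and mav2 > threshold:
        return "cocontraction"
    if mav1 > threshold:
        return "flexion"      # assumed: channel 1 over the flexor (FCU)
    if mav2 > threshold:
        return "extension"    # assumed: channel 2 over the extensor (ECR)
    return "rest"

def ecs(semg1, semg2):
    """Run feature extraction then classification on two channels."""
    return classify(mean_absolute_value(semg1), mean_absolute_value(semg2))
```

A fuzzy-logic classifier would replace the hard thresholds with membership functions and rules, but the data flow between the modules is the same.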
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 121–124, 2011. www.springerlink.com
metronome. During the maximum-speed trials, the participants were asked to perform the task as fast as they were able.

Fig. 1 The block diagram of an electromyographic control system (ECS): SEMG1 and SEMG2 → feature extraction → classification → prosthesis
A. SEMG Data Acquisition SEMG1 and SEMG2, labeled in Fig. 1, are the control channels of the system. The SEMG data are acquired from the surface of the skin by placing electrodes over the person's muscles. Different muscles are responsible for different movements; the electrodes must therefore be placed on the muscles to be investigated. For example, the extensor and flexor muscles are responsible for wrist flexion/extension. It is important to place the electrodes accurately: correct placement gives strong SEMG signals and a good distinction between movements, while inaccurate placement degrades the performance of the classifier [7]. Normally, the electrodes are accompanied by miniature pre-amplifiers. This work focuses on the wrist muscles, namely the flexor carpi ulnaris (FCU) and the extensor carpi radialis (ECR). This pair of antagonistic muscles was selected because they are responsible for wrist movement, and the flexor and extensor muscles are typically used in SEMG control applications for prosthetic hands [8, 9]. SEMG data were acquired from the wrist muscles of twenty healthy participants. The experimental protocol used in this investigation was approved by the School of Electronics and Computer Science, University of Southampton. The SEMG signals were acquired using Noraxon Ag/AgCl dual electrodes (diameter 15 mm, centre spacing 20 mm), placed on the forearm above the FCU and ECR with a reference electrode at the elbow. The procedures for electrode placement followed the SENIAM guidelines [10] to ensure a stable and maximal pickup area for the SEMG signals. The skin surface was also cleaned with rubbing alcohol to reduce the impedance at the surface. The SEMG signals were sampled at 1,500 Hz using a Noraxon 2400T in conjunction with Noraxon MyoResearch XP Master Edition (version 1.06) software.
The participants were asked to perform movements related to these particular muscles, namely wrist flexion, wrist extension and co-contraction. All the movements were done at different speeds: 60 beats per minute (bpm), 90 bpm, 120 bpm and maximum speed. The speed was controlled by using a
B. Feature Extraction Feature extraction is a process in which the raw SEMG signal is transformed into a feature vector, which is then used to separate the desired outputs, e.g. different hand grip postures. The success of an ECS based on pattern recognition depends on the selection and extraction of features [6]. To obtain a reasonable processing time, the recorded SEMG data were pre-processed. There are various methods to do this, but the most widely used is data segmentation, because it can improve the accuracy and the response time of the controller. All the raw SEMG data were processed using the following methods: 1) Approximate entropy (ApEn) [11]
ApEn(m, r, N) = Φ^m(r) − Φ^(m+1)(r)    (1)
2) Mean absolute value (MAV)

X_i = (1/N) ∑_{k=1}^{N} |x_k| ,  i = 1, ..., I    (2)
3) Kurtosis

kurtosis = ∑_{i=1}^{N} (X_i − X̄)^4 / ((N − 1)σ^4)    (3)
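As a concrete illustration, the three features above can be sketched in Python. This is a minimal sketch, not the authors' implementation; the tolerance r is taken as a fraction of the signal SD, a common choice following Pincus [11].

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy: ApEn(m, r, N) = Phi^m(r) - Phi^(m+1)(r), Eq. (1)."""
    x = np.asarray(x, float)
    N = len(x)
    r = r_frac * np.std(x)  # tolerance as a fraction of the SD (common choice)
    def phi(m):
        # N - m + 1 overlapping embedded vectors of length m
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)  # fraction of vectors within tolerance r
        return np.mean(np.log(C))
    return phi(m) - phi(m + 1)

def mav(x):
    """Mean absolute value, Eq. (2)."""
    return np.mean(np.abs(x))

def kurtosis(x):
    """Kurtosis as written in Eq. (3)."""
    x = np.asarray(x, float)
    return np.sum((x - x.mean()) ** 4) / ((len(x) - 1) * np.std(x) ** 4)
```

A regular signal (e.g. a pure sinusoid) yields a lower ApEn than random noise, which is what makes ApEn useful for distinguishing contraction patterns.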
The investigation focused on the analysis of ApEn as the main method; the other methods were used to support the performance of ApEn. The whole data set was first divided into overlapping segments of length 200 data values (N), which is about 130 ms in duration, with a delay of one point to the next segment. For prosthetic control, calculations should take less than 200 ms; otherwise the delay would be too long for practical use. A window of 200 samples (130 ms) is a good balance between having enough samples to give a robust estimate of ApEn and keeping the delay short. A moving data window was applied to the data sequence and the ApEn within the data was calculated repeatedly. The moving ApEn is obtained as the window moves point by point along the time axis. The work carried out in this stage has been reported in [12, 13] and will not be discussed in detail. C. Classification The information obtained during feature extraction is then fed into a classifier. A classifier should be able to map different patterns and match them appropriately. An efficient classifier should be able to classify patterns in a short duration to meet the real-time constraints of a prosthetic
IFMBE Proceedings Vol. 35
Speed Based Surface EMG Classification Using Fuzzy Logic for Prosthetic Hand Control
device. However, due to the nature of the EMG signal, it is possible to see large variations in the values of the features used. This variation may be due to electrode placement or sweat. The next section discusses the classifier design for this project in detail.
III. CLASSIFIER DESIGN This classifier is built using information on the contraction speed, as earlier investigation has shown that the moving ApEn, MAV and kurtosis show differences in their magnitudes at different speeds of contraction. For this type of classifier, the dip that can be observed at the start and end of a contraction in the ApEn analysis is used to trigger and stop the classification process, respectively (see Figure ?). With this method, processor power consumption can be saved, as the system does not need to perform continuous classification. Three speeds have been defined in this work: SLOW (60 bpm), MEDIUM (90 bpm) and FAST (120 bpm). The four-state classifier is to classify the extracted information from wrist movements and co-contraction at different speeds, selecting one of four outputs. The operation of the system is as follows:

1) Wrist flexion at the SLOW speed – STATE1
2) Wrist flexion at the MEDIUM speed – STATE2
3) Wrist flexion at the FAST speed – STATE3
4) Wrist extension at the SLOW speed – STATE1
5) Wrist extension at the MEDIUM speed – STATE2
6) Wrist extension at the FAST speed – STATE3
7) Co-contraction at the SLOW speed – STATE4
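The state assignment listed above is a fixed mapping in which flexion and extension at the same speed share an output state. A minimal sketch (names are illustrative, not taken from the paper's code):

```python
# Hypothetical helper encoding the state table above: flexion and
# extension at the same speed map to the same output state.
STATE_TABLE = {
    ("flexion", "SLOW"): "STATE1",
    ("flexion", "MEDIUM"): "STATE2",
    ("flexion", "FAST"): "STATE3",
    ("extension", "SLOW"): "STATE1",
    ("extension", "MEDIUM"): "STATE2",
    ("extension", "FAST"): "STATE3",
    ("cocontraction", "SLOW"): "STATE4",
}

def classify(movement: str, speed: str) -> str:
    """Look up the output state for a (movement, speed) pair."""
    return STATE_TABLE[(movement, speed)]
```

Because flexion and extension collapse onto the same states, the classifier effectively encodes speed, not movement direction, in STATE1-STATE3.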
The subject has to perform two movements, wrist flexion/extension, at the speeds of 60 bpm (SLOW, S), 90 bpm (MEDIUM, M) and 120 bpm (HIGH, H), and co-contraction only at 60 bpm (C). The feature extraction of the two SEMG signals gives six features: ApEn1 (A1), MAV1 (B1) and kurtosis1 (C1) from the FCU, and ApEn2 (A2), MAV2 (B2) and kurtosis2 (C2) from the ECR. The FL classifier discriminates the information from the feature extraction process into four different states, namely STATE1 (S1), STATE2 (S2), STATE3 (S3) and STATE4 (S4). The MFs for the inputs and the output of the system are shown in Figure 2. The top left plot of Figure 2 is the MF of input A1; in order from left to right: co-contraction (C), HIGH (H, 120 bpm), MIDDLE (M, 90 bpm), SLOW (S, 60 bpm) and RELAX (R). These functions are derived from the mean and SD obtained in the signal processing stage. The means between the speeds of contraction have small but distinct differences.
Fig. 2 The membership functions for the inputs and the output of the four-state system. Inputs A1, B1 and C1 are from the FCU and A2, B2 and C2 are from the ECR. S: SLOW, M: MEDIUM, H: HIGH, C: Co-contraction, R: RELAX

After several cycles of testing, possible changes of the MF for each input were explored. The changes included modification of the MF shape and the range of values of the inputs, but the same results were obtained. It was found that poor classification results are due to the kurtosis input from both SEMG channels. These tests led to the exclusion of kurtosis from further analysis in the ECS development for this type of classifier.
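Since the text derives each membership function from the per-class mean and SD of a feature, one simple realisation is a Gaussian MF per class. This is a sketch under that assumption; the numerical means/SDs below are made-up placeholders, not the paper's values.

```python
import numpy as np

def gaussian_mf(x, mean, sd):
    """Gaussian membership value of x for a class with given mean and SD."""
    return np.exp(-0.5 * ((x - mean) / sd) ** 2)

# Hypothetical per-class statistics for one feature (e.g. ApEn1 / input A1)
CLASSES = {"S": (0.8, 0.10), "M": (1.1, 0.12), "H": (1.4, 0.15)}

def memberships(x):
    """Membership degree of a feature value x in each speed class."""
    return {label: gaussian_mf(x, m, s) for label, (m, s) in CLASSES.items()}
```

A fuzzy rule base would then combine the membership degrees from all six inputs, e.g. by taking the minimum across inputs per rule and selecting the state with the highest firing strength.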
IV. RESULTS AND DISCUSSION Fig. 3 shows the results of the revised ECS during wrist flexion/extension at 60 bpm when kurtosis is removed. It can be seen clearly that the classifier is able to select the correct output, which in this case is STATE1 (S1), labeled (a) and (b), where, from the output MF, the range is between 0 and 0.3. Even though there are glitches in the classification result during one contraction, the average of the values produces the correct output state.
S.A. Ahmad, A.J. Ishak, and S.H. Ali
Fig. 3 The result of the classification system for wrist flexion and wrist extension at 60 bpm for the revised four-state system

The analysis of accuracy (Fig. 4) of the four-state system has shown that, from the speed of contraction perspective, during wrist flexion/extension the classifier was able to select STATE1 (S1) and STATE2 (S2) accordingly, but not STATE3 (S3), which is when the contraction is performed at the speed of 120 bpm.

Fig. 4 The accuracy (in %) of the ECS during 1. wrist flexion/extension at 60 bpm (SLOW) - top, 2. wrist flexion/extension at 90 bpm (MEDIUM) - middle and 3. co-contraction at 60 bpm - bottom, for the revised four-state system

Fig. 4 (continued)

REFERENCES

1. Carrozza, M., Dario, P., Zecca, M., & Micera, S. (2002). Control of multifunctional prosthetic hands by processing the electromyographic signal. Crit. Rev. Biomed. Engineering, 30, 459-485.
2. Evans-Pughe, C. (2006). Smarter prosthetics. Tech. rep., IET - Engineering and Technology.
3. Mitchell, W. R., M. (2008). Development of a clinically viable multifunctional hand prosthesis. In MyoElectric Controls/Powered Prosthetics Symposium.
4. TouchBionics (2007). The i-limb hand. URL http://www.touchbionics.com
5. Plettenburg, D. H. (2006). Upper Extremity Prosthetics, Current Status & Evaluation. VSSD.
6. Asghari Oskoei, M., & Hu, H. (2007). Myoelectric control systems - a survey. Biomedical Signal Processing and Control, 4(4), 275-294.
7. Hargrove, L., Englehart, K., & Hudgins, B. (2006). The effect of electrode displacements on pattern recognition based myoelectric control. In IEEE Ann. Intl. Conf. on Engineering in Medicine and Biology Society, 2203-2206.
8. Ajiboye, A., & Weir, R. (2005). A heuristic fuzzy logic approach to EMG pattern recognition for multifunction prosthesis control. IEEE Trans. on Biomedical Engineering, 52(11), 280-291.
9. Karlik, B., Tokhi, M. O., & Alci, M. (2003). A fuzzy clustering neural network architecture for multifunction upper-limb prosthesis. IEEE Trans. on Biomedical Engineering, 50, 1255-1261.
10. Hermens, H., Freriks, B., Merletti, R., Stegeman, D., Blok, J., Rau, G., Disselhorst-Klug, C., & Hagg, G. (1999). SENIAM 8: European Recommendations for Surface Electromyography, results of the SENIAM project. Roessingh Research and Development.
11. Pincus, S. M. (1991). Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA, 88(6), 2297-2301.
12. Ahmad, S. A., & Chappell, P. H. (2008). Moving approximate entropy applied to surface electromyographic signals. Biomedical Signal Processing and Control, 3, 88-93.
13. Ahmad, S. A., & Chappell, P. H. (2009). Surface EMG pattern analysis of the wrist muscles at different speeds of contraction. Journal of Medical Engineering and Technology.
Activity of Upper Body Muscles during Bowing and Prostration Tasks in Healthy Subjects M.K.M. Safee, W.A.B. Wan Abas, N.A. Abu Osman, and F. Ibrahim Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
Abstract— This paper investigates the activity of the neck extensors (NE), sternocleidomastoideus (SCM), trapezius (TRP), deltoid (DT), biceps brachii (BB), triceps brachii (TB), rectus abdominis (RA), and erector spinae (ES) muscles in healthy subjects during bowing and prostration using surface electromyography (EMG). A group of students aged between 23 and 28 years voluntarily participated in this study. The subjects were asked to perform two types of flexion positions, namely bowing (90° forward flexion with the hands on the respective knees) and prostration (flexion with the palms of the hands and the forehead flat on the floor). The motion signals of the muscles were recorded. The findings indicated that during bowing there was contraction of the NE, DT, TB, and RA muscles, while muscle relaxation was found in the SCM, TRP, BB, and ES. During prostration, there was contraction of the SCM, DT, TB, and RA, but muscle relaxation was found in the NE, TRP, BB, and ES. For the muscles that showed electrical activity in both postures, the Wilcoxon Rank Sum Test showed no statistically significant difference between bowing and prostration for the DT (p = 0.534) but a statistically significant difference for the RA and TB (p < 0.05). The muscle relaxation observed in the ES showed no significant difference between bowing and prostration (Wilcoxon Rank Sum Test, p = 0.075). This study indicates the effects of bowing and prostration on the biomechanical response of the human muscles. The muscle contraction and relaxation that occur show an agonist-antagonist response, which is beneficial for exercise and strengthening programmes. These two movements are involved in the Muslims' prayer, called the salat. Hence, the current experiment can be taken as a pilot study on the biomechanical response of the human muscles during one's act of performing the salat. Keywords— Electromyography; Muscle; Bowing; Prostration; Flexion relaxation.
I. INTRODUCTION The electrical activity in the human muscles can be measured using electromyography (EMG). This allows for the measurement of the change in the membrane potential as the action potentials are transmitted along the fiber. The study of the muscles from this perspective can be valuable in providing information concerning the control of voluntary and reflexive movement. The study of muscle activity during a particular task can yield insight into which muscles
are active and when the muscles initiate and cease their activities. Surface electrodes are placed on the skin over a muscle and thus are mainly used for superficial muscles [1]. With the technique of electromyography, or recording the electrical impulses generated by muscular contractions, it is possible to determine very precisely which muscles, superficial and deep, contract during a given movement. Electromyography can also provide information on the sequence in which each of several participating muscles contracts and can help in estimating the strength of contraction of each muscle [2]. There are many research findings that show the benefits of muscle contraction and muscle strength. For example, to protect and stabilize the head and neck in high-Gz environments, higher neck muscle strength is needed; lower neck muscle strength in fighter pilots may cause pain and perhaps reduced mission effectiveness [3]. Besides, there is further research relating the NE to the flexion relaxation phenomenon (FRP) and low back pain (LBP) [4-6]. The FRP is defined by a reduction in or silence of the myoelectric activity of the lumbar erector spinae muscle observed during full trunk flexion [5]. The FRP was found to occur at 40° to 70° of body flexion [7, 8]. Based on biomechanical models of the spine, it was proposed that spinal stabilization should be considered the result of highly coordinated muscular activation interacting with passive elements [9]. Spinal stability is also highly dependent on spinal load and posture [10] as well as task requirements [12]. Instability of the lumbar spine has been suggested to be both a cause and a consequence of LBP [9]. Efficiency of motion and the stresses imposed on the spine are very much dependent upon the posture maintained in the trunk as well as on trunk stability. Positioning of the vertebral segments is so important that a special focus on posture and spinal stabilization is warranted [1].
Muscles that play an important role in spinal stabilization include the transversus abdominis, multifidus, erector spinae, and internal oblique [1]. The erector spinae is better suited for control of spinal orientation by nature of its ability to produce extension [12]. The RA is one of the abdominal wall's muscles. The wall is very important because it not only contracts to increase intra-abdominal pressure but also distends considerably, accommodating expansions caused by ingestion, pregnancy, fat deposition, or pathology [13]. Besides that,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 125–129, 2011. www.springerlink.com
there are many experiments that show muscle contraction of the biceps and triceps during exercise [14, 15]. Other experiments also show that the TRP and DT are involved in shoulder exercise [16, 17]. Common exercise recommendations from health professionals include trunk exercises to prevent and treat low back injuries. Knowing the trunk muscle activation levels during exercises is important in the prescription and design of exercise programs that aim to increase the training intensity over time (progressive resistance model). Previous research has documented trunk muscle electromyograms (EMGs) during various exercises designed to train the trunk musculature and during functional activities [18-21]. Ng et al. [20] found that abdominal and trunk muscles not only produce torque but also maintain spinal posture and stability during axial rotation exertions. Exercise programming for a young, healthy population incorporates exercises that push the muscular system to high levels of performance. Muscles can exert force and develop power to produce the desired movement outcomes. The loss of strength in the muscles can create a variety of problems, ranging from inability to reach overhead or open a jar lid to difficulty using stairs and getting up out of a chair [1]. Skeletal muscle performs a variety of different functions, all of which are important to efficient performance of the human body. The three functions relating specifically to human movement are contributing to the production of skeletal movement, assisting in joint stability, and maintaining posture and body position [1]. From the current research, it is possible to identify the muscles that are involved and contract during bowing and prostration. The experiment can also serve as a pilot study on the biomechanical response of human muscles during the Muslims' prayer, or the salat.
II. SUBJECTS AND METHODS

A. Subjects

A total of 11 male undergraduates (age: 23 ± 5.1 years) with no medical history and no back pain were recruited as subjects of the study. Subjects were verbally informed about the experimental protocols, and they read and signed a consent form prior to participating in the experiments. Three repetitions were recorded for every salat and exercise protocol.

B. Apparatus

Disposable bipolar Ag-AgCl disc surface electrodes with a diameter of one cm were affixed over the chosen muscle groups, parallel to their fiber orientation in the muscle belly. The electrodes were connected to an EMG data collection system (Myomonitor IV Wireless Transmission, Delsys) and the signals were collected using customized software (Delsys EMGWorks, Boston, MA, USA). These records were then downloaded into a personal computer (Toshiba, Japan). The EMG bandwidth was 20-450 Hz at a sampling rate of 1500 Hz. The electrodes were placed according to the SENIAM recommendations [22]. The Myomonitor was capable of recording 16 muscles simultaneously.

C. Experimental Procedure

The current study involved EMG recording of eight muscles of each subject while the subjects were performing the prostration and bowing actions. The eight muscles are the neck extensors (NE), sternocleidomastoideus (SCM), trapezius (TRP), deltoid (DT), biceps brachii (BB), triceps brachii (TB), rectus abdominis (RA), and erector spinae (ES), Figure 1. At each site, the readings were taken twice and the average of the two values was recorded for analysis. Eleven subjects were involved in the study.

Fig. 1 The muscles under study (frontal view: sternocleidomastoid, deltoideus p. clavicularis, biceps, rectus abdominis; dorsal view: neck extensors, trapezius p. descendens, triceps brachii, erector spinae)

Subjects were briefed on the study protocol which involved two types of forward flexion, namely bowing and prostration, Figure 2. They were then allowed to experience the movements. The areas of the skin onto which the
electrodes were to be affixed were shaved using a disposable razor and abraded with a cotton swab and alcohol. Alcohol wipes were used for cleaning the surface of the skin before electrode placement.

Fig. 2 Bowing and prostration postures

Posture      Description
Bowing       • Bend as far as possible to reach the 90° bending position
             • Hands gripped the respective knees
Prostration  • The forehead and palms of the hands touch the ground
             • The upper limbs are abducted slightly outward
             • The thighs are vertically straight with both knees touching the ground
             • The toes are flexed in the vertical position

Electrode placement was preceded by palpation and visual inspection of each of the muscles. The positions of the electrodes are as given in Table 1. A ground electrode was placed on the tibial tuberosity. Electrode placement at a particular site was verified to be proper by inspection of the EMG signal output when a subject was asked to voluntarily move the respective muscle.

Table 1 Electrode positions

Muscle  Electrode position
NE      Bilaterally on the paraspinal muscle at 2 cm lateral of the C4 spinous process
SCM     Along the sternal portion of the muscle, with the electrode centered 1/3 of the distance between the mastoid process and the sternal notch
TRP     Halfway between the C7 spinous process and the tip of the acromion on the crest of the shoulder, in line with the direction of the muscle fibers
DT      3.5 cm below the anterior angle of the acromion
BB      Midway between the elbow and the midpoint of the upper arm, centered on the muscle midline
TB      Midway between the elbow and the midpoint of the upper arm, centered on the muscle midline
RA      On the left aspect of the umbilicus and oriented parallel to the muscle fibers on the right side of the body
ES      Bilaterally about 2 cm lateral from the spinous processes between the fourth lumbar (L4) and fifth lumbar (L5) on the right side of the body

To normalize the EMG signals, a record was made of the maximum voluntary contraction (MVC) for all the muscles involved in the experiment. To obtain a stable maximum force prior to formal EMG data collection, enough practice time was allowed for warming up and for the subjects to familiarize themselves with the testing procedures. The EMG reading during the 3 s of the MVC was taken and used to represent the normalized value (100% MVC).

Fig. 3 A subject undergoing the data acquisition process. (a) Bowing; (b) Prostration

D. Data Analysis

Both the MVC and myoelectric data from both types of forward flexion tasks were identically processed. The EMG signals were analyzed using EMG analysis software version 3.5.1.0 (EMGWorks, Delsys, Boston, MA); a root mean square (RMS) technique was then used to smooth the data, producing a linear envelope of the EMG activity record. The data obtained from each subject were downloaded into a personal computer (Toshiba, Japan). The values of all RMS were averaged and then normalized as % MVC.

E. Statistical Analysis

Descriptive statistics were used to study the features of the entire signal. The Wilcoxon Rank Sum Test was used to examine the differences between the two postures. The significance level was set at p < 0.05. The data were analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 14.0.
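The RMS-envelope and %MVC normalisation steps described in the data analysis above can be sketched as follows. This is a minimal illustration, not the Delsys EMGWorks implementation; the window length is an assumed example value.

```python
import numpy as np

def rms_envelope(emg, fs=1500, win_ms=100):
    """Moving-RMS linear envelope of a raw EMG signal.

    fs is the sampling rate (1500 Hz in this study); win_ms is an
    illustrative smoothing-window length, not the paper's setting.
    """
    w = int(fs * win_ms / 1000)
    # moving mean of the squared signal, then square root -> RMS envelope
    sq = np.convolve(emg ** 2, np.ones(w) / w, mode="same")
    return np.sqrt(sq)

def percent_mvc(envelope, mvc_value):
    """Normalize an envelope to a percentage of the MVC reference value."""
    return 100.0 * envelope / mvc_value
```

Normalising to %MVC makes amplitudes comparable across muscles and subjects, which is what permits the pooled comparisons reported in Tables 2 and 3.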
III. RESULTS The levels of the EMG signals for all the subjects indicate that, during the bowing position, there was muscle contraction in four of the muscles, with the other four experiencing muscle relaxation. During prostration, there was muscle contraction in four of the muscles as well. This is shown in Table 2, which displays the EMG average in % MVC of the muscles.
Table 2 Average muscle contraction (EMG average in % MVC)

Muscle   Bowing   Prostration
NE        18.42      6.20
SCM        4.49     51.49
TRP        2.84      3.54
DT        10.48     10.35
BB         1.38      1.35
TB        17.92     11.04
RA        18.72     12.98
ES         4.67      4.02
A plot of the values given in Table 2 is shown in Figure 4, displaying graphically the muscle contractions during bowing and prostration. From Table 2 and Fig. 4, it is clearly seen that the muscles DT, TB and RA contracted in both the bowing and prostration positions. On top of that, the NE contracted during bowing and the SCM contracted during prostration. The highest average muscle contraction for bowing occurred at the NE (18.42% MVC), whilst the highest for prostration occurred at the SCM (51.49% MVC). Meanwhile, the ES muscle was undergoing the flexion relaxation phenomenon (FRP), or muscle relaxation phase.

Fig. 4 Muscle contractions (% MVC) during bowing and prostration

Table 3 shows the medians of the EMG values for bowing and prostration of the eight muscles. The Wilcoxon Rank Sum Test was used to test for significance of the differences. It was found that bowing produced significantly higher contraction in the NE, TB, and RA muscles, while prostration produced significantly higher contraction in the SCM. No significant difference between the bowing and prostration EMG contraction levels was found for the TRP, DT, BB, and ES muscles.
Table 3 Median of muscle contraction

Muscle                 Posture       Median (% MVC)   Interquartile Range     Z        p
Neck extensors         Bowing            20.10               7.87           -2.934   0.003
                       Prostration        5.94               4.14
Sternocleidomastoid    Bowing             4.25               2.37           -2.934   0.003
                       Prostration       51.94              13.02
Trapezius              Bowing             3.09               1.91           -0.711   0.477
                       Prostration        2.42               3.71
Deltoid                Bowing            10.15               5.34           -0.622   0.534
                       Prostration       10.88               4.11
Biceps                 Bowing             1.20               0.92           -0.178   0.859
                       Prostration        1.04               1.47
Triceps                Bowing            16.11              11.18           -2.845   0.004
                       Prostration       10.41               6.83
Rectus abdominis       Bowing            19.54               9.48           -2.578   0.010
                       Prostration       13.02               3.05
Erector spinae         Bowing             4.94               0.84           -1.778   0.075
                       Prostration        3.79               0.60
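The rank-sum comparison behind the Z values in Table 3 can be sketched via the normal approximation to the Wilcoxon rank-sum statistic. This is an illustrative reimplementation, not the exact SPSS computation (which also applies continuity and tie corrections).

```python
import numpy as np

def ranksum_z(a, b):
    """Normal-approximation z-statistic of the Wilcoxon rank-sum test.

    Ties receive average ranks; a and b are the two posture samples.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    combined = np.concatenate([a, b])
    # assign 1-based ranks, then average the ranks of tied values
    order = combined.argsort()
    ranks = np.empty(n1 + n2)
    ranks[order] = np.arange(1, n1 + n2 + 1)
    for v in np.unique(combined):
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    W = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0             # expected rank sum under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (W - mu) / sigma
```

A strongly negative z (e.g. -2.934 for the NE) indicates the first sample's values rank consistently lower or higher than the second's, which is then converted to the two-sided p-values reported in the table.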
IV. DISCUSSION This study shows that all the muscles involved play their own roles in producing the bowing and prostration movements. There are contractions of the RA during bowing and prostration. It actively produces actions such as flexing the trunk (lumbar vertebrae), compressing the abdominal viscera, and stabilizing and controlling the tilt of the pelvis (anti-lordosis) [2]. Besides, muscle relaxation, or the FRP, occurs in the ES during both bowing and prostration. This shows that bowing and prostration can be taken as ES exercise movements, because the muscle will contract and relax during repetitive standing-bowing-standing and sitting-prostration-sitting movements. Traditionally, the treatment of LBP has included strengthening exercises for the back extensor muscles. Prone arch exercises or their variations, prone trunk, and leg lifting exercises have been known to alleviate back pain [23]. Consistent activity in the ES in patients with LBP provides stability to help protect the diseased passive spinal structures from movements that may cause pain [24]. Strengthening exercises for the abdominal muscles are frequently used in the rehabilitation of LBP. It is hypothesized that the local muscles, such as the transversus abdominis and internal oblique abdominals, are instrumental to the stabilization of the lumbosacral spine [25]. The global muscles, including the RA and external oblique abdominals, are responsible for producing gross movements of the trunk and pelvis [26]. Besides, several of the strength exercises produce high activation of the neck and shoulder muscles in women with chronic neck pain. These exercises can be used as treatment of chronic neck muscle pain [27].
V. CONCLUSIONS This study illustrates the effect of bowing and prostration on the biomechanical response of the upper body muscles. The muscle contraction and relaxation that occur show an agonist-antagonist response, which is beneficial for exercise and strengthening programmes. The investigations can be extended to other muscle exercises, involving either standing or sitting positions. Bowing and prostration are two of the many movements involved in the Muslims' prayer of salat. Hence the current study can be taken as a pilot study for further investigations on the biomechanical response of the human muscles during the act of performing the salat.
REFERENCES 1. Hamill J and Knutzen KM (2009) Biomechanical Basis of Human Movement. 3rd ed. Philadelphia. 2. Jenkins DB (2009) Hollinshead's Functional Anatomy of the Limbs and Back. 9th ed. Philadelphia. 3. Ang B, Linder J, Harms-Ringdahl K (2005) Neck strength and myoelectric fatigue in fighter and helicopter pilots with a history of neck pain. Aviat Space Environ Med 76(4):375-380. 4. Leinonen V, Kankaanpaa M, Vanharanta H, Airaksinen O, Hanninen O (2005) Back and neck extensor loading and back provocation in urban bus drivers with and without low back pain. Pathophysiology 12(4):249-255. 5. Colloca CJ, Hinrichs RN (2005) The biomechanical and clinical significance of the lumbar erector spinae flexion-relaxation phenomenon: a review of literature. J Manipulative Physiol Ther 28(8):623-631. 6. Pialasse JP, Dubois JD, Pilon Choquette MH, Lafond D, Descarreaux M (2009) Kinematic and electromyographic parameters of the cervical flexion-relaxation phenomenon: the effect of trunk positioning. Ann Phys Rehabil Med 52(1):48-49. 7. Sihvonen T (1997) Flexion relaxation of the hamstring muscles during lumbar-pelvic rhythm. Arch Phys Med Rehabil 78:486-490. 8. Watson PJ, Booker CK, Main CJ, Chen CAN (1997) Surface electromyography in the identification of chronic low back pain patients: the development of the flexion relaxation ratio. Clinical Biomechanics 12(3):165-171. 9. McGill SM (2007) Low Back Disorders: Evidence-Based Prevention and Rehabilitation. 2nd ed. Human Kinetics Publishers, Champaign, IL, USA. 10. Shirazi-Adl A, El-Rich M, Pop DG, Parnianpour M (2005) Spinal muscle forces, internal loads and stability in standing under various postures and loads - application of a kinematics-based algorithm. Eur Spine J 14(4):381-392. 11. Kavcic N, Grenier S, McGill SM (2004) Quantifying tissue loads and spine stability while performing commonly prescribed low back stabilization exercises. Spine 29(20):2319-2329.
12. Richardson C, Jull G, Hodges P, Hides J (2002) Therapeutic Exercise for Spinal Segmental Stabilization in Low Back Pain. Philadelphia, pp 45-89. 13. Moore KL, Dalley AF, Agur AM (2010) Clinically Oriented Anatomy. 6th ed. Baltimore, MD, pp 66-290. 14. Dundon JM, Cirillo J, Semmler JG (2008) Low-frequency fatigue and neuromuscular performance after exercise-induced damage to elbow flexor muscles. J Appl Physiol 105:1146-1155. 15. Dartnall TJ, Nordstrom MA, Semmler JG (2008) Motor unit synchronization is increased in biceps brachii after exercise-induced damage to elbow flexor muscles. J Neurophysiol 99:1008-1019. 16. Jun AY, Choi EH, Yoo YS, Park DS, Nam HS (2010) The activities of trapezius and deltoid in rotator cuff tear patients injected local anesthetics in subacromial space. J Korean Acad Rehabil Med 34(3):316-324. 17. Marshall PW, Murphy BA (2005) Core stability exercises on and off a Swiss ball. Arch Phys Med Rehabil 86(2):242-249. 18. Kumar S and Narayan Y (2001) Torque and EMG in isometric graded flexion-rotation and extension-rotation. Ergonomics 44(8):795-813. 19. Lehman GJ, McGill SM (2001) Quantification of the differences in electromyographic activity magnitude between the upper and lower portions of the rectus abdominis muscle during selected trunk exercises. Physical Therapy 81(5):1096-1101. 20. Ng JKF, Parnianpour M, Richardson CA (2001) Functional roles of abdominal and back muscles during isometric axial rotation of the trunk. Journal of Orthopaedic Research 19(3):463-471. 21. Sarti MA, Monfort M, Fuster MA (1996) Muscle activity in upper and lower rectus abdominis during abdominal exercises. Archives of Physical Medicine and Rehabilitation 77(12):1293-1297. 22. Surface Electromyography for the Non-Invasive Assessment of Muscles at http://www.seniam.org 23. Manniche C, Lundberg E, Christensen I, Bentzen L, Hesselsoe G (1991) Intensive dynamic back exercises for low back pain: a clinical trial. Pain 47(1):53-63. 24. Kaigle AM, Wessberg P, Hansson TH (1998) Muscular and kinematic behavior of the lumbar spine during flexion-extension. J Spinal Disord 11(2):163-174. 25. Saal JA (1992) The new back school prescription: stabilization training, part 2. Occup Med 7(1):33-42. 26. Norris CM (1993) Abdominal muscle training in sport. Br J Sports Med 27(1):19-26. 27. Lar LA, Michael K, Christoffer HA, Peter BH, Mette KZ, Klaus H, Gisela S (2008) Muscle activation during selected strength exercises in women with chronic neck muscle pain. Physical Therapy 88(6):703-711.
Address of the corresponding author:

Author: Mohd Khairuddin Bin Mohd Safee
Institute: Biomedical Engineering, Faculty of Engineering, UM
City: 50603 Kuala Lumpur
Country: Malaysia
Email: [email protected]
Analysis of the Effect of Mechanical Properties on Stress Induced in Tibia B. Sepehri1, A.R. Ashofteh-Yazdi2, G.A. Rouhi3, and M. Bahari-Kashani4 1
Department of Mechanical Engineering, Islamic Azad University-Mashhad Branch, Mashhad, Iran 2 MSc Graduate, Islamic Azad University-Mashhad Branch, Mashhad, Iran 3 Department of Mechanical Engineering and School of Human Kinetics, University of Ottawa, Ottawa, Canada 4 Department of Orthopaedics, Mashhad University of Medical Sciences, Mashhad, Iran
Abstract— In this research, a 3D model of the tibia was created with the exact geometry of the real bone by using spiral scan images of a human left leg. It was materialized in Mimics and developed in the ABAQUS software by considering other in-vivo conditions, and also by applying a transversal impact load which represents the collision of a vehicle and a pedestrian. Three different mechanical properties, i.e. elastic isotropic; elastic but transversely isotropic; and finally viscoelastic, were considered for the bone tissue in order to observe and compare its behavior under the impact. Results showed that the stresses were at the same level as the ultimate stress of long bones. Interestingly, the maximum stress resulting from the impact loading was seen in the viscoelastic model of the tibia, and the minimum was found when an elastic and transversely isotropic material property was considered. Moreover, because of the viscoelastic property of the tibia, it has the capability to alter the rate of fracture propagation; the amount of stress also increases as time goes on during the impact cycle, but once the loading period is over, stress relaxation was observed, in which the stress decreased. This was verified by checking the amounts of stress at each increment of the impact cycle. The maximum stress was seen in the last increment of the impact cycle when the tibia's viscoelasticity was taken into account, while for the purely elastic models it occurred when the maximum load was applied. Keywords— Bone mechanics, Tibia, Impact, Finite element.
I. INTRODUCTION
Injuries to the human lower extremities are mostly due to the collision of a vehicle and pedestrian, which results in fractures of long bones and injuries to the knee and ankle. This is because of the impact applied by the vehicle and the high acceleration created in the lower extremities. Different injury mechanisms can be seen in long bones, in which bending and torsional moments have been considered as the major affecting factors [1]. In recent years many new models have been proposed to analyze the injury mechanisms in bone tissue caused by vehicle-pedestrian collisions. Also, dynamic load tests were performed to obtain fundamental information on the
fracture behavior and morphology of human bones, e.g. for tibia [2]. On the other hand, numerical methods, for instance finite element methods, are being widely employed to estimate mechanical stimuli caused by impact loading in bones, in order to obtain useful information on the mechanical properties and behavior of bone tissue. During the last three decades, impact biomechanics of the vehicle-pedestrian collision has been studied widely in order to recognize the injury mechanisms in human limbs, the reaction of the lower extremities to the impact, and the strength of different human bone tissues [3]. Stress analysis is a good way to obtain useful information on the structural properties of bone tissue at the macroscopic scale. Thus, by choosing suitable material and structural properties for the bone tissue and by employing finite element methods, one may be able to find answers to many unknowns about the behavior of bone tissue under impact loading. In this research, a 3D finite element model of human tibia, based on the real geometry of tibia, was created to investigate the effect of different mechanical properties of bone on the stress analysis under a transversal impact load. Also, the time at which the maximum stress occurs has been taken into account for each material property during the impact cycle. Using the real geometry of tibia is an advantage of this work in comparison with other similar studies; also, the mechanical properties used for the construction of the model were chosen to be close to those of real bone tissue [4].
II. MATERIALS AND METHODS
A 3D model of tibia was created with the real geometry of a mid-sized human male left leg using spiral scan images. It was materialized in Mimics software (version 10.01), then the model was developed in ABAQUS software (version 6.7-1) for the purpose of modeling some other in-vivo conditions (see Fig. 1).
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 130–133, 2011. www.springerlink.com
In order to estimate these constants and to introduce them to the software, available data from [5] were used (see Table 2).

Table 2 Data used for the elastic transversely isotropic model [5]
Young Modulus (GPa): E1 = 12, E2 = 12, E3 = 18
Poisson Ratio: υ12 = 0.300, υ23 = 0.253, υ13 = 0.253, υ21 = 0.300, υ32 = 0.390, υ31 = 0.390
Shear Modulus (MPa): G12 = 4850, G23 = 5700, G13 = 5700

Fig. 1 The 3D model of the human left tibia used in this research with the exact geometry of real bone using spiral scan images and Mimics software
As mentioned before, tibia was considered as a cortical shell and the spongy part was ignored. Three different mechanical properties, i.e. elastic isotropic; elastic but transversely isotropic; and viscoelastic properties were considered for the bone tissue in order to investigate tibia's behavior under the impact loading. For the first model, tibia was considered as an elastic isotropic shell and the following data (see Table 1) were used [4]:

Table 1 Data used for the elastic isotropic model [4]
Young Modulus (GPa): 15 | Poisson Ratio: 0.3 | Density (kg/m3): 1730
In order to consider the viscoelastic behavior for tibia as the third model, a Prony series was introduced for tibia as follows (see Table 3) [6]:

Table 3 Data used for the viscoelastic model [6]
g_i Prony | k_i Prony | tau_i Prony
0.3942 | 0.5256 | 2.102
0.1278 | 0.1704 | 79.99
0.0397 | 0.0529 | 800
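The Prony terms of Table 3 define the normalized time-domain relaxation function g(t) = 1 − Σ gᵢ(1 − e^(−t/τᵢ)) that ABAQUS builds from such a series. As a quick plausibility check of the tabulated values, a minimal sketch (the function name and the check itself are ours, not the paper's):

```python
import math

# (g_i, tau_i) pairs from Table 3 for the normalized shear relaxation function
PRONY = [(0.3942, 2.102), (0.1278, 79.99), (0.0397, 800.0)]

def g_relax(t):
    """g(t) = 1 - sum g_i * (1 - exp(-t/tau_i)); 1.0 at t = 0, decays toward the long-term fraction."""
    return 1.0 - sum(g * (1.0 - math.exp(-t / tau)) for g, tau in PRONY)

print(g_relax(0.0))                        # instantaneous response: 1.0
print(1.0 - sum(g for g, _ in PRONY))      # long-term fraction of the instantaneous modulus
```

The long-term fraction (≈0.44) is what remains of the instantaneous stiffness after full relaxation, which is consistent with the stress relaxation the paper reports once the impact load is removed.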
The transversal impact load function was extracted from the forces exerted on tibia in a simulation of a vehicle-pedestrian collision (see Fig. 2) [7].
The second material property adopted for tibia was transversely isotropic elasticity. The stiffness matrix for cortical bone was assumed to have 5 independent constants as follows [5]:

C = [C11 C12 C13 0 0 0; C12 C11 C13 0 0 0; C13 C13 C33 0 0 0; 0 0 0 C44 0 0; 0 0 0 0 C44 0; 0 0 0 0 0 C66]   (1)

C11 = C22, C13 = C23, C44 = C55, C66 = (1/2)(C11 − C12)   (2)

Fig. 2 The transversal impact function of the collision of a vehicle-pedestrian [7]
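The engineering constants of Table 2 and the symmetry relations of (2) can be cross-checked numerically by assembling the compliance matrix and inverting it. A sketch assuming the Table 2 values (note that the tabulated G12 only approximately satisfies the in-plane isotropy relation C66 = (C11 − C12)/2, so that relation is not asserted exactly):

```python
import numpy as np

# Engineering constants from Table 2, all converted to GPa
E1 = E2 = 12.0
E3 = 18.0
nu12, nu13, nu23 = 0.300, 0.253, 0.253
G12, G23, G13 = 4.85, 5.70, 5.70

# Orthotropic compliance matrix S in Voigt notation; transverse isotropy
# makes directions 1 and 2 equivalent (E1 = E2, nu13 = nu23, G13 = G23)
S = np.zeros((6, 6))
S[0, 0], S[1, 1], S[2, 2] = 1 / E1, 1 / E2, 1 / E3
S[0, 1] = S[1, 0] = -nu12 / E1
S[0, 2] = S[2, 0] = -nu13 / E1
S[1, 2] = S[2, 1] = -nu23 / E2
S[3, 3], S[4, 4], S[5, 5] = 1 / G23, 1 / G13, 1 / G12

C = np.linalg.inv(S)  # stiffness matrix, Eq. (1), in GPa

# Symmetry relations of Eq. (2) that follow from the input data
print(np.isclose(C[0, 0], C[1, 1]), np.isclose(C[0, 2], C[1, 2]), np.isclose(C[3, 3], C[4, 4]))
```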
In agreement with some of the previous studies [7], for the boundary conditions in the case of transverse loading, both ends of tibia were considered to be clamped.
IFMBE Proceedings Vol. 35
An optimized free mesh was used in this study to keep the bone shape in the best form, with a total of 12,150 linear 3-node triangular elements (S3R).
Results of the viscoelastic model of tibia showed that the maximum stress reached 310 MPa, which is much higher than the bending strength of tibia and will therefore surely cause fracture in tibia (see Fig. 5).
III. RESULTS Figure 3 shows the stress contour of a medial transversal impact load on tibia by considering the elastic isotropic material properties for tibia. The maximum stress of 176 MPa compared with the bending ultimate stress of the cortical bone, i.e. 140 MPa [8], shows that fracture will occur in the mid-shaft of tibia and near the dorsal edge.
Fig. 5 The stress contour in tibia under a transversal impact load with viscoelastic behavior (t = 0.03 s)
A decrease in the maximum amount of stress was observed by the viscoelastic behavior of tibia after the impact load was removed (see Fig. 6).
Fig. 3 The stress contour in tibia under a transversal impact load with elastic isotropic property
In the case of considering transversely isotropic properties for tibia, the maximum stress of 150 MPa indicates that the cracks will be produced in the tibial shaft (see Fig. 4).
Fig. 6 The stress contour in tibia under a transversal impact load with viscoelastic behavior after the impact cycle (t = 0.04 s)
IV. DISCUSSION
Fig. 4 The stress contour in tibia under a transversal impact load with elastic transversely isotropic property
The literature concerning the fundamental mechanical properties of fresh compact bone is sadly poor. The majority of the current studies are based on animal bones because the different dimensions of human bones make the standardized testing of their material difficult. Also, these fundamental
material characteristics are not automatically applicable to whole bones, because bone geometry has a decisive influence on post-impact mechanical changes [2]. In this research, by using a real geometric model of tibia, in view of its complex and unique geometry, the effects of different mechanical properties of tibia on the stress analysis under a transversal impact load have been investigated. The maximum stress was seen in the case of the viscoelastic model of tibia, while the minimum was found with the transversely isotropic property. In agreement with previous studies [7, 8], the maximum stress reached by the transversely isotropic model of tibia was closer to the results of theoretical and experimental works by other researchers. The dependency of the viscoelastic material property on time caused the maximum stress to occur in the last increment of the impact cycle, while for the elastic behavior of tibia the maximum stress was seen in the increment with the maximum applied force. Stress relaxation appeared as a reduction in the maximum stress just after the impact load was over, because of the constant strain rate in the tibial shaft. It is known that bone is a semi-brittle material and will fracture under forces greater than its ultimate strength. Based on the maximum stress, the viscoelastic behavior of tibia showed a tendency to propagate the fracture lines, and higher stresses were reached compared to the isotropic and transversely isotropic cases during the impact cycle. This indicates the dependency of a viscoelastic material on time and strain rate, as expected. Also, the observed reduction in the maximum stress after removing the impact load can be seen as a consequence of stress relaxation in tibia. The minimum stress was found when a transversely isotropic material property was considered for tibia.
Checking the maximum stress incrementally showed that, because of the dependency of tibia's viscoelasticity on time, it occurred in the last increment of the impact cycle, while for the purely elastic assumption the maximum stress was observed when the maximum force was applied, as expected.
REFERENCES
1. Shen J, Jin X-L (2008) Improvement in numerical reconstruction for vehicle-pedestrian accidents. Journal of Automobile Engineering 222(Part D):25-39.
2. Rabl W, Haid C, Krismer M (1996) Biomechanical properties of the human tibia: fracture behavior and morphology. Forensic Science International 83:39-49.
3. Levine R (1993) Injury to the extremities. Springer-Verlag, pp 460-491.
4. Kulkarni MS, Sathe SR (2008) Experimental determination of material properties of cortical cadaveric femur bone. Trends Biomat. Art. Organs 22(1):10-20.
5. Krone R, Schuster P (2006) An investigation of importance of material anisotropy in finite-element modeling of the human femur. SAE International, paper No. 2006-01-0064.
6. Muller S (2005) Mechanical and adaptive behavior of bone in relation to hip replacement. NTNU-trykk, No. 83.
7. Maeno T, Hasegawa J (2000) Development of a finite element model of the total human model for safety and application to car-pedestrian impacts. Toyota System Research Inc., Japan, paper No. 494.
8. Yamada H (1970) Strength of biological materials. The Williams & Wilkins Co.
Comparative Studies of the Optimal Airflow Waveforms and Ventilation Settings under Respiratory Mechanical Loadings
S.L. Lin1, S.J. Yeh2, and H.W. Shia1
1 Feng Chia University/Department of Automatic Control Engineering, Taichung, Taiwan
2 Department of Neurology, Cheng Ching Hospital, Taichung, Taiwan
Abstract— This study modeled and optimized three common ventilator airflow waveforms, including sinusoidal, square, and descending waves, to minimize the effort of breathing under continuous mandatory ventilation. We employed the optimal respiratory control model in conjunction with a lumped-parameter RC model of the mechanical respiratory system. Simulations were performed to mimic possible changes in respiratory mechanics, including continuous resistive and elastic loading, to optimize ventilator settings and breathing patterns. We compared breathing patterns between loaded and no-load breathing to characterize the three types of airflow. The results of the current study demonstrate the potential for tailoring airflow and ventilator settings specifically to meet the physiological needs of patients. Keywords— Ventilator, airflow, optimization, respiratory mechanics.
I. INTRODUCTION
A mode of ventilation for a mechanical ventilator is a particular set of control and phase variables for a breath, in either mandatory or spontaneous mode. When first selecting ventilator settings, the primary objective is to stabilize the patient by assuring adequate oxygenation and ventilation. There are numerous support modes provided by modern mechanical ventilators; however, all variations can be summarized into three categories: (1) continuous mandatory ventilation (CMV), (2) intermittent mandatory ventilation (IMV), and (3) spontaneous mode. As it may be targeted by either volume or pressure, CMV is a full ventilation support mode in which mandatory breaths are triggered according to a preset time interval regardless of the patient's effort [1]. Although CMV mode is poorly tolerated by most patients, because it generally increases the work of breathing (WOB) and the oxygen consumption of the respiratory muscles, it still has a proper place in the support of selected patients in respiratory failure, especially if the patient's primary problem is inadequate alveolar ventilation [2]. New-generation ventilators also allow the clinician to choose either a volume-targeted or pressure-targeted approach. The volume-targeted approach guarantees a minimum minute volume, even as the patient's compliance and resistance are changing. During CMV mode and
volume-targeted breaths, the therapist can set both the magnitude and the pattern of the inspiratory airflow waveform. The inspiratory flow pattern will affect the inspiratory time and the resultant time parameters. Where patient-triggered ventilations are of concern, the inspiratory flow and improper ventilator settings are the primary factors increasing the patient's WOB, which can often lead to muscle fatigue and hypercapnia [3,4]. Most ventilators operating in the flow-limited mode generally offer a choice of several different inspiratory flow patterns including sinusoidal, square, descending ramp, and ascending ramp. Although the current literature provides conflicting perspectives on the relative benefits of these various flow waveforms [4~6], guidelines have also been developed for selecting the flow waveform during mechanical ventilation. Among these, a sine or square-wave flow pattern is probably adequate for most patients requiring uncomplicated or short-term ventilatory support [7]. In earlier research [8], it was suggested that normal ventilatory responses to CO2, exercise inputs, and mechanical loading could be predicted by the minimization of a controller objective function consisting of the total chemical and mechanical cost of breathing. The optimal respiratory control model was later proposed and verified by optimizing a quadratic inspiratory neural drive [9]. The optimal instantaneous airflow and lung volume were derived based on a lumped-parameter RC model [10] for the relation between respiratory neural and mechanical outputs. In this paper, three commonly seen airflows in mechanical ventilation, including sinusoidal, square, and descending waves, are modeled as the inspiratory flow pattern during CMV mode and volume-targeted mechanical ventilation. Instead of optimizing the pressure profile, the airflow waveforms are optimized in the current study through the optimal respiratory control model.
II. MATHEMATICAL MODEL
A. The Optimal Respiratory Control Model
Among the respiratory control models studied during the past years, a possible optimality principle has
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 134–138, 2011. www.springerlink.com
been found to exist in the modeling of respiratory control [8,9]. The optimal chemical-mechanical control model, which successfully demonstrates the ventilatory responses to chemical stimuli as well as muscular exercise, was previously proposed and simulated by Poon and Lin [9]. They suggested that it is possible to minimize a combined controller cost, consisting of the total chemical challenge and the mechanical work, which can be simulated through a neuro-mechanical effector in the form of an electrical RC model. The basic hypothesis is that all respiratory responses, including breathing pattern and ventilation, may be direct consequences of the optimization of an instantaneous respiratory neural output. Thus, rather than assuming a hierarchy of control mechanisms for the various levels of the control outputs, the model proposes and verifies the profile of the neuromuscular drive, which can successfully relate the neuro-respiratory output to the resultant mechanical airflow.
B. Neuro-mechanical Effector

Fig. 2 Simplified RC model of respiratory mechanics: P(t), isometric inspiratory pressure measured at FRC; V̇(t), instantaneous airflow; V(t), instantaneous lung volume above FRC; Rrs, total respiratory system resistance; Ers, total respiratory system elastance (Crs = 1/Ers)
Fig. 3 Mathematical model of airflow waveforms

Fig. 1 Optimal respiratory control model: the optimal controller minimizes the total cost J = JC + JM, where the chemical cost JC is formed by a quadratic coupler from the chemoreceptor feedback of PaCO2 and the mechanical cost JM by a logarithmic coupler from the work rate index Ẇ; the controller output drives the neuro-mechanical effector, whose outputs P(t) and V̇(t) feed the gas exchanger together with PICO2 and V̇CO2

The description of the respiratory mechanical system is based on the lumped-parameter model [10] of Fig. 2 for the relation between the respiratory neural output P(t) and the mechanical outputs V̇(t) and V(t); the dynamical equation is given by:

P(t) = V̇(t) · Rrs + V(t) · Ers    (1)
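Equation (1) is a first-order ODE in V(t), so for any driving pressure the volume trace can be stepped forward in time. A forward-Euler sketch using the Rrs and Ers values quoted later in Section III; the ramp-and-release P(t) here is an illustrative drive, not the model's optimized profile:

```python
# Eq. (1): P(t) = Vdot*Rrs + V*Ers, rearranged as Vdot = (P - Ers*V)/Rrs.
# Rrs and Ers follow the no-load values of Section III; P(t) is illustrative.
Rrs, Ers = 3.02, 21.9        # cm-H2O.l-1.s and cm-H2O.l-1
dt, Ti, T = 0.001, 1.0, 3.0  # time step, inspiratory time, total time (s)

def P(t):
    """Illustrative drive: pressure ramps to 10 cm-H2O over Ti, then releases."""
    return 10.0 * t / Ti if t < Ti else 0.0

V, t, trace = 0.0, 0.0, []
while t < T:
    Vdot = (P(t) - Ers * V) / Rrs   # flow from Eq. (1)
    V += Vdot * dt                  # forward-Euler volume update
    t += dt
    trace.append(V)

# Volume builds during the ramp and then decays exponentially with
# time constant tau_rs = Rrs/Ers, as in Eqs. (3)~(5).
print(round(max(trace), 3), round(trace[-1], 6))
```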
The modeling of the optimal chemical-mechanical respiratory control model is illustrated in the block diagram of Fig. 1, where the controller is driven by both chemical and neuro-mechanical feedback signals. In Fig. 1, this is demonstrated by the coupling of the chemical cost JC and the mechanical cost JM, represented by a quadratic coupler of the chemical feedback signal and a logarithmic coupler of the work rate, respectively. The respiratory control is modeled as a closed-loop feedback system comprising four major functional blocks: the plant (gas exchanger), the feedback path (chemoreceptors), the optimal controller, and the neuro-mechanical effector. In earlier research, the control model employed a quadratic-rising and exponentially falling neural signal P(t) to further derive the airflow V̇(t) and lung volume V(t) through the neuro-mechanical effector. In the current study, however, the airflow was modeled as sinusoidal, square, or descending in the inspiratory phase and exponential in the expiratory phase [11], and was the direct output of the controller.

C. Airflow Model
Instead of optimizing an approximated inspiratory neural pressure P(t) of Fig. 2, the inspiratory airflow waveforms of (2) were modeled with the sinusoidal, square, and descending waves of Fig. 3 and optimized through the optimal chemical-mechanical control model of Fig. 1. The parameter PF in (2) denotes the peak flow of the inspiratory airflow waveform. During quiet breathing, inspiratory activity does not cease abruptly after reaching the peak value but usually decays nearly exponentially throughout much of the expiration phase [11]. Consequently, the expiratory profiles of (3)~(5) with an exponential wave result via (1), as shown in Fig. 3.

Inspiration (0 ≤ t ≤ Ti):
Sinusoidal: V̇(t) = PF · sin ωt
Square: V̇(t) = PF
Descending: V̇(t) = −(PF²/(2VT)) · t + PF    (2)

Expiration (Ti ≤ t ≤ T): see (3)~(5).
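The three inspiratory profiles of (2) can be generated and checked for equal tidal-volume delivery. A sketch with illustrative VT and Ti (the peak flows are chosen here so that each waveform integrates to VT, with ω = π/Ti for a half-sine inspiration; these choices are ours, not the paper's optimized settings):

```python
import math

VT, Ti = 0.5, 1.0  # illustrative tidal volume (l) and inspiratory time (s)

def sinusoidal(t, PF=math.pi * VT / (2 * Ti)):
    """Half-sine inspiration, omega = pi/Ti; PF scaled so the integral is VT."""
    return PF * math.sin(math.pi * t / Ti)

def square(t, PF=VT / Ti):
    """Constant flow; PF = VT/Ti delivers VT over Ti."""
    return PF

def descending(t, PF=2 * VT / Ti):
    """Eq. (2): Vdot = -(PF^2/(2VT))*t + PF; reaches zero flow exactly at t = Ti."""
    return -(PF ** 2 / (2 * VT)) * t + PF

def tidal_volume(flow, n=100_000):
    """Riemann-sum integral of the flow over the inspiratory phase."""
    dt = Ti / n
    return sum(flow(i * dt) * dt for i in range(n))

for f in (sinusoidal, square, descending):
    print(f.__name__, round(tidal_volume(f), 4))  # each integrates to ~VT
```

Integrating the descending wave of (2) from 0 to Ti = 2VT/PF gives exactly VT, which is the consistency check the last loop performs numerically.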
P(t) = P(Ti) · e^(−(t − Ti)/τrs)    (3)

V(t) = (P(Ti)/Rrs) · (t − Ti) · e^(−(t − Ti)/τrs) + V(Ti) · e^(−(t − Ti)/τrs)    (4)

V̇(t) = (P(Ti)/Rrs) · [e^(−(t − Ti)/τrs) − (1/τrs) · (t − Ti) · e^(−(t − Ti)/τrs)] − (1/τrs) · V(Ti) · e^(−(t − Ti)/τrs)    (5)

III. SIMULATION AND OPTIMIZATION

Simulations were performed by optimizing the setting parameters and the modeled airflows. The pressure and lung volume profiles can be observed simultaneously, and breathing patterns can also be obtained. The simulator imitates the software structure with a GUI interface. The respiratory physiology (PICO2 and V̇CO2) was first set, while the respiratory mechanical characteristics Rrs and Ers were imitated under various loaded-breathing conditions.

1. Continuous Resistive Loading (CRL): The simulations of CRL were performed by increasing Rrs from the control value (Rrs = 3.02 cm-H2O·l-1·s) to Rrs = 5, 10, and 20 (cm-H2O·l-1·s) throughout the inhaled and exhaled airway.

2. Continuous Elastic Loading (CEL): Similar to CRL, the simulations of CEL were made by increasing Ers (decreasing Crs) under loading, with Ers = 21.9 (cm-H2O·l-1) under no load (NL). The lung compliance (Crs = 1/Ers) was set to 0.05, 0.04, and 0.03 (Ers = 20, 25, 33.3, respectively) in the simulations. For the case of Crs = 0.05 (Ers = 20), it can indeed be considered a case of elastic assistance for a soft lung.

IV. RESULTS AND DISCUSSION
This paper modeled sinusoidal, square, and descending airflow waveforms and control parameters in CMV, and implemented an optimal control model for respiratory systems. Simulations were performed under resistive and elastic loading. In Fig. 4, the optimal airflow profiles of the sinusoidal (left), square (middle), and descending (right) models under no load (solid line) are compared with those under CRL (top) and CEL (bottom). In Figs. 5~7, the optimized breathing patterns under loaded breathing (CRL, top; CEL, bottom) are compared with those under no load for the three models. Total ventilation (V̇E), breathing frequency (F), tidal volume (VT), peak flow (PF), mean flow (MF), peak pressure (A), inspiration/expiration time (Ti, Te), and duty cycle (Ti/T) were derived and optimized through the optimal control model. The percentage changes of the breathing pattern from no load were calculated for CRL and CEL. In Fig. 5, for the sinusoidal airflow model, the resistive loading has shown a substantial effect through decreased F and increased VT; however, the total ventilation was depressed (≈−20%). For the case of CEL in Fig. 5, a stiffer lung (increased Ers, or decreased Crs) resulted in a rise in both V̇E and F, while VT was suppressed. Both peak flow and mean flow were also found to be increased, and it is noteworthy that the peak pressure increased by ≈40% with Crs = 0.03. In Fig. 6, the results of CRL on breathing patterns using the square airflow model are similar to those of the sinusoidal model, except in the case of R = 10, in which the imposed resistance appears to have had the opposite effect on the resultant breathing compared to the other loading magnitudes. Load resistances of Rrs = 5, 10, and elastic loading of Crs = 0.04, exhibited little effect on breathing patterns, while the influence of Rrs = 20 on V̇E, VT, and F was significant in CRL, and an increase in peak pressure was observed in both CRL and CEL.
In contrast to the sinusoidal and square airflow, imposed resistive loading appears to have had a considerable influence on the breathing patterns in the descending airflow model, as shown at the top of Fig. 7.
Fig. 4 Optimal airflow using three inspiratory waveform models under NL, CRL (Rrs = 8 cm-H2O·l-1·s) and CEL (Ers = 31.9 cm-H2O·l-1)

Fig. 5 Comparisons of loaded breathing with no load for the sinusoidal wave (ordinate: % changes from control; abscissa: V̇E, F, VT, PF, MF, A, Ti, Te, Ti/T; CRL curves: Rrs = 5, 10, 20)
Fig. 5 (continued)

Fig. 6 Comparisons of loaded breathing with no load for the square wave (CRL curves: Rrs = 5, 10, 20; CEL curves: Crs = 0.05, 0.04, 0.03)

Fig. 7 Comparisons of loaded breathing with no load for the descending wave

The results of breathing under CRL with the descending wave exhibited an influence on each breathing pattern similar to that of the sinusoidal airflow, but with greater percentage changes in magnitude. Meanwhile, the results of breathing under CEL with a descending wave also demonstrated patterns comparable to those of the square airflow model.

Fig. 8 Comparison of two distinct airflows for loaded breathing and no load under CRL and CEL (panels, top to bottom: Model 1 - Model 2, Model 2 - Model 3, Model 1 - Model 3)
Figure 8 provides a comparative study of any two of the modeled airflows. Models 1~3 are defined as the sinusoidal, square, and descending airflow, respectively. At the top of Fig. 8, a comparison of the percentage changes in the resultant breathing patterns is provided under NL, CRL, and CEL between the sinusoidal and square airflow (Model_1-Model_2). In both no-load and loaded breathing, sinusoidal waves retained greater total ventilation and a higher breathing frequency, with a decrease in tidal volume, peak flow, mean flow, and peak pressure. Similar observations were made when comparing the sinusoidal wave to the descending airflow model (Model_1-Model_3). The distinctive characteristics of breathing patterns using the square and descending airflow were not obvious in the patterns of V̇E, F, VT, or PF. However, in the center of Fig. 8 (Model_2-Model_3), the square airflow appears to have increased the mean flow by nearly 75% under both loaded and no-load breathing, while attaining a superior peak pressure. As far as duty cycle is concerned (a 50% duty cycle corresponds to a 1:1 I:E ratio), the sinusoidal model appears to have the largest Ti/T, with the square airflow model showing the least Ti/T.
V. CONCLUSIONS
This study optimized three types of airflow commonly used in respiratory therapy under mechanical ventilation, in conjunction with the breathing patterns of the ventilator settings. Optimal respiratory control is employed to minimize the effort of breathing. One remarkable characteristic of the proposed optimization model is its ability to mimic ventilatory and breathing responses to CO2 inhalation, muscular exercise, and mechanical loading. The results of the current study appear to be consistent with previous research, demonstrating the potential for tailoring airflow and ventilator settings specifically to meet the physiological needs of patients. This comparative study of any two airflows under loaded and no-load breathing provides a valuable clinical reference for respiratory therapy.
ACKNOWLEDGMENT
This research was partially supported by grant NSC 96-2221-E-035-096 from the National Science Council, Taiwan.
REFERENCES
1. Scanlan C (1995) Initiating and adjusting ventilatory support. In: Egan's Fundamentals of Respiratory Care, 6th edn, Chap. 32, pp 894~919.
2. Laghi F, Karamchandani K, Tobin MJ (1999) Influence of ventilator settings in determining respiratory frequency during mechanical ventilation. Am J Respir Crit Care Med 160:1766~1770.
3. Sassoon C, Mahutte C, Te T, et al (1988) Work of breathing and airway occlusion pressure during assisted-mode mechanical ventilation. Chest 93:571~576.
4. Sassoon C, Mahutte C, Light R (1990) Ventilator modes: old and new. Crit Care Clin 6(3):605~634.
5. Modell H, Cheney F (1979) Effects of inspiratory flow pattern on gas exchange in normal and abnormal lungs. J Appl Physiol 46:1103~1107.
6. Al-Saady N, Bennett E (1985) Decelerating inspiratory flow waveform improves lung mechanics and gas exchange in patients on intermittent positive pressure ventilation. Intensive Care Med 11:68~75.
7. Rau J (1993) Inspiratory flow patterns: the 'shape' of ventilation. Respir Care 38(1):132~140.
8. Poon C (1987) Ventilatory control in hypercapnia and exercise: optimization hypothesis. J Appl Physiol 62:2447~2459.
9. Poon C, Lin S, Knudson OB (1992) Optimization character of inspiratory neural drive. J Appl Physiol 59:2005~2017.
10. Younes M, Riddle W (1981) A model for the relation between respiratory neural and mechanical output. I. Theory. J Appl Physiol 51:963~977.
11. Agostoni E, Citterio G, D'Angelo E (1979) Decay rate of inspiratory muscle pressure during expiration in man. Respir Physiol 36:269~285.
Author: S.L. Lin
Institute: Feng Chia University
Street: 100 Wenhwa Rd., Seatween
City: Taichung
Country: Taiwan
Email: [email protected]
Development of Inexpensive Motion Analysis System – Preliminary Findings
Y.Z. Chong1, J. Yunus2, K.M. Fong, Y.J. Khoo, and J.H. Low1
1 Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kuala Lumpur, Malaysia
2 Faculty of Biomedical and Health Science Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
[email protected]
Abstract— A real-time, inexpensive gait analysis system costing less than RM3,000 (≈USD982) is proposed in this paper. The proposed system consists of subsystems such as a wireless wearable accelerometer system, a custom-made force platform, and a data acquisition system with real-time computer data-logging capability. Kinematics and kinetics parameters of human locomotion were collected and analysed by the system. The results collected were compared against published normative human gait data. Data collection was performed on 20 healthy test subjects without any reported gait deficiency. Keywords— Gait Analysis System, Biomedical Instrumentation, Inexpensive, Wearable System.
I. INTRODUCTION
Human locomotion may be defined as the action by which the body as a whole moves through aerial, aquatic, or terrestrial space. Locomotion is achieved by coordinated movements of the body segments, taking advantage of an interaction of internal and external forces [1]. The rapid development of science and technology has given rise to various biomechanical instrumentations, such as vision-based systems and transducer-based systems [2]-[8]. Generally, vision-based systems are more expensive than transducer-based systems [8]-[12], [22]. In view of the latter, the motivation of this paper is to produce a low-cost, transducer-based system comprising a kinematics measurement system and a kinetics measurement system. Various systems have been developed in recent years for applications ranging from clinical to telemedicine use [8]-[13]. The emergence of such products may not have benefited the entire community due to their high cost (USD5,000 to USD50,000) [9]. Numerous studies in the literature address the development of insole-based systems that collect various gait parameters [7], [9]-[13], [22]. Most of the proposed systems did not emphasize the factor of cost in commercialization. Hence, developing cost-effective instrumentation is necessary to cater to lower-income nations. In view of the latter, this paper proposes a cost-effective and affordable gait analysis system that costs less than RM3,000 (≈USD892) [14].
II. METHODOLOGY A. System Overview The major subsystems are wireless wearable accelerometer subsystem, instrumented platform subsystem, data logging subsystem and analysis subsystem. The instrumented platform will measure the vertical ground reaction force while the wearable accelerometer subsystem will measure the relevant joint kinematics. The datalogging system consists of signal conditioning circuitry. Finally, the computer system was used to store and analyse gait parameters collected by the system. Fig. 1 shows the overview setup of the proposed system.
Fig. 1 Overview Setup of the Proposed System B. Sensor Subsystems There are two different sensor subsystems namely, the wireless wearable accelerometer subsystem, and instrumented platform. The function of wireless wearable accelerometer subsystem is to measure joint kinematics and instrumented platform to quantify the vertical ground reaction forces of human movement. The wireless wearable accelerometer system utilized three (3) MMA7260QT Freescale Tri-Axial Accelerometers attached to the left knee, pelvis, and right knee. The accelerometer subsystem records the tri-axis acceleration of the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 139–142, 2011. www.springerlink.com
different anatomical landmarks, and provides relevant kinematics information on movements. Fig. 2 shows the developed prototype.
Fig. 2 The developed wearable accelerometer subsystem

An instrumented platform of 0.81 m × 0.81 m × 0.019 m in size was built. A total of eight 38.1 cm² Interlink Electronics™ Force Sensing Resistors (FSRs) were used to collect the vertical ground reaction forces of human movement. Fig. 3 shows the completed prototype of the subsystem.

Fig. 3 The developed prototype of the instrumented flooring

C. Signal Conditioning Subsystem
The signal conditioning subsystem amplifies the signals generated by both the accelerometers and the force sensing resistors. A Microchip® PIC16F877A microcontroller was used to collect the relevant signals from the sensors. Utilizing the on-board analog-to-digital converter (ADC), the analog signals of both sensors were converted into digital form for further processing by the computer. Datalogging of the wearable system is done via Bluetooth transmission, while datalogging of the instrumented platform was done through the RS232 interface.

D. Computer Datalogging
Two different computer interfaces were created for datalogging, one for the wearable accelerometer subsystem and one for the instrumented platform. The interfaces were created using Microsoft® Visual Basic and National Instruments® LabVIEW software. All data could be exported as text files for further analysis. Both interfaces provided real-time display of the data collected by the subsystems. Fig. 4 shows the interfaces of the subsystems.

Fig. 4 The computer datalogging screen-captures

E. Subject Information
Data collection was performed on a pool of test subjects. The study was approved by the Ethical Committee of the University, and all participants signed a voluntary participation agreement. Table 1 describes the subjects who participated in the data collection. Each test subject was required to wear the accelerometer system attached to a belt and to walk across the instrumented platform, which was embedded in the floor of a 5 m walkway in the laboratory. The motions were repeated three times, so three readings were recorded for each test subject. Real-time observation and offline data collection were performed simultaneously, with the data saved to a flash drive for later analysis.

Table 1 Subject Description
Number of Subjects: 20 (10 male, 10 female)
Mean Mass (kg): 63.21 (S.D. 13.41, Min 50, Max 100)
Mean Height (m): 1.68 (S.D. 0.0973, Min 1.54, Max 1.80)
Mean Age (years): 28 (S.D. 9.16, Min 21, Max 50)
Mean BMI: 24.15 (S.D. 4.89, Min 17.30, Max 31.89)

IFMBE Proceedings Vol. 35
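The kinetic chain described in Section II (FSR signal → 10-bit on-chip ADC → summed vertical force) can be sketched as follows. The 5 V reference and the linear newtons-per-volt calibration constant are illustrative assumptions; the paper does not give the actual FSR calibration, which in reality is quite nonlinear:

```python
# Sketch of the platform's measurement chain: eight FSR channels are
# digitized by a 10-bit ADC and summed into one vertical ground
# reaction force (VGRF), then normalized by body weight (BW).
V_REF = 5.0            # assumed ADC reference voltage (V)
ADC_FULL_SCALE = 1023  # 10-bit converter, as on the PIC16F877A
N_PER_VOLT = 250.0     # assumed linear FSR calibration (N/V), per channel

def counts_to_force(counts):
    """Convert one channel's ADC counts to newtons (assumed linear model)."""
    volts = counts * V_REF / ADC_FULL_SCALE
    return volts * N_PER_VOLT

def vgrf_in_bw(channel_counts, body_mass_kg, g=9.81):
    """Sum all eight channels and express the result in body weights."""
    total_n = sum(counts_to_force(c) for c in channel_counts)
    return total_n / (body_mass_kg * g)

# Example: eight hypothetical readings for a 63.21 kg subject
# (the study's mean mass).
bw = vgrf_in_bw([120, 110, 100, 90, 80, 70, 60, 50], 63.21)
```

A real deployment would replace the single linear constant with a per-channel calibration curve obtained by loading each FSR with known weights.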
Development of Inexpensive Motion Analysis System–Preliminary Findings
III. RESULTS AND DISCUSSION

A. Kinetic Parameters
1. Vertical Ground Reaction Force for Walking
The results revealed that the average vertical ground reaction force (VGRF) for all subjects was 1,071 N (S.D. 243 N), or 1.50 BW (S.D. 0.33 BW), which is within the range of 1.0-1.5 BW. These readings are consistent with the findings of Giddings et al. [12], [15], [16]. Fig. 5 shows a sample graph generated by the software, depicting the real-time datalogging of average VGRF for walking. The system is capable of capturing walking waveforms comparable with other published results [12], [15], [16], [18], [19], [21]. Furthermore, the system also resolved the walking waveforms into the gait phases of initial contact (IC), loading response (LR), mid-stance (MSt), terminal stance (TSt), pre-swing (PSw), initial swing (ISw), mid-swing (MSw) and terminal swing (TSw) [17]-[19]. The waveform generated by the system matches the shape of a normal gait waveform; such data will be useful in distinguishing normal from pathological gait.

Fig. 5 The real-time instrumented platform datalogging facility screen-captures

B. Kinematics Parameters
The wireless wearable accelerometer system collects acceleration data, which were used to derive the three-dimensional displacement of the three anatomical landmarks described in the previous section. The vertical displacement of the centre of mass/trunk was found to be 0.01 body height (BH) (S.D. 0.001 BH). This corresponds to the normal vertical COM displacement of 0.01 BH [20]. Fig. 6 shows the real-time capture of human walking data collected by the wearable accelerometer subsystem.

Fig. 6 Acceleration profile for human walking as collected real-time by the analysis system

Table 2 summarises the data collected in the study.

Table 2 Data collected via the system
Vertical Ground Reaction Force (VGRF) - Walking: 1,071 N (S.D. 243 N), or 1.50 BW (S.D. 0.33 BW); Running: 1,041 N (S.D. 200 N), or 2.0 BW (S.D. 0.445 BW)
Centre of Mass Vertical Displacement - Walking: 0.01 BH (S.D. 0.001 BH); Running: 0.01 BH (S.D. 0.001 BH)

IV. CONCLUSION AND RECOMMENDATION
The developed system is an accurate, cost-effective and user-friendly motion analysis system for basic human motion analysis. The proposed system could serve as a start-up gait laboratory where fundamental kinematic and kinetic parameters of human gait are essential. Furthermore, the scope of application of such instrumentation is wide, from the clinical study of pathological gait, rehabilitation, and sport science, to telemedicine and the development of humanoid robots. The system could be further improved by adding measurement modalities, such as physiological monitoring, to suit the desired applications. On the datalogging software side, remote monitoring of human motion analysis could be added via 3G or 4G networks. The system could also be modified for rehabilitation purposes, capturing the parameters needed for continuous rehabilitation monitoring. It is envisioned that, following this simple and accurate platform, the system could be expanded into a full-body
human gait monitoring system, comprising both a laboratory-based platform and a wearable kinematics measurement platform, to provide a cost-effective and accurate human monitoring system.
REFERENCES [1] Cappozzo, A. 1984. Gait analysis methodology. Hum. Movement Sci. 3:27–50. [2] Lee, J. B., Mellifont, R. B., & Burkett, B. J. (2010). The use of a single inertial sensor to identify stride, step, and stance durations of running gait. Journal of science and medicine in sport / Sports Medicine Australia, 13(2), 270-3. Sports Medicine Australia. doi: 10.1016/j.jsams.2009.01.005. [3] Cooper, G., Sheret, I., McMillan, L., McMillian, L., Siliverdis, K., Sha, N., et al. (2009). Inertial sensor-based knee flexion/extension angle estimation. Journal of biomechanics, 42(16), 2678-85. Elsevier. doi: 10.1016/j.jbiomech.2009.08.004. [4] Schepers, H. M., Asseldonk, E. H. F. van, Buurke, J. H., & Veltink, P. H. (2009). Ambulatory estimation of center of mass displacement during walking. IEEE transactions on bio-medical engineering, 56(4), 1189-95. doi: 10.1109/TBME.2008.2011059. [5] Takeda, R., Tadano, S., Natorigawa, A., Todoh, M., & Yoshinari, S. (2009). Gait posture estimation using wearable acceleration and gyro sensors. Journal of biomechanics, 42(15), 2486-94. Elsevier. doi: 10.1016/j.jbiomech.2009.07.016. [6] Preece, S. J., Goulermas, J. Y., Kenney, L. P. J., Howard, D., Meijer, K., & Crompton, R. (2009). Activity identification using bodymounted sensors--a review of classification techniques. Physiological measurement, 30(4), R1-33. doi: 10.1088/0967-3334/30/4/R01. [7] Zhou H., Stone T., Huosheng H., Harris N. 2008. Use of multiple inertial sensors in upper limb motion tracking. Medical Engineering & Physics. 30: 123 – 133. [8] Zhou H., Huosheng H. 2008. Human motion tracking for rehabilitation – A survey. Biomedical Signal Processing and Control. 3: 1- 18. [9] Kyriazis V., Rigas C., Xenakis T. 2001. A portable system for the measurement of the temporal parameters of gait. Prosthetics and Orthotics International. 25: 96 – 101.
[10] Hausdorff J. M., Ladin Z., Wei J. Y. 1995. Footswitch system for measurement of the temporal parameters of gait. Journal of Biomechanics. 28: 347-351.
[11] Mavroidis C. et al. 2005. Smart portable rehabilitation devices. Journal of NeuroEngineering and Rehabilitation. 2: 18.
[12] Morris S. J. et al. 2008. Gait analysis using a shoe-integrated wireless sensor system. IEEE Transactions on Information Technology in Biomedicine. 12(4): 413-423.
[13] Lee J.-A. et al. 2007. Wearable accelerometer system for measuring the temporal parameters of gait. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France. 483-486.
[14] Conversion rates are taken from the XE.com conversion website, http://www.xe.com/ucc/convert.cgi, accessed on 31st January 2011.
[15] Huang B. et al. 2007. Gait event detection with intelligent shoes. Proceedings of the 2007 International Conference on Information Acquisition, Jeju City, Korea. 579-584.
[16] Giddings V. L., Beaupre G. S., Whalen R. T., Carter D. R. 1999. Calcaneal loading during walking and running. Medicine & Science in Sports & Exercise. 627-634.
[17] Ayyappa E. 1997. Normal human locomotion. Part 1: Basic concepts and terminology. American Academy of Orthotists & Prosthetists. http://www.oandp.org/jpo/library/indes/1997_01.asp. Accessed on 18th October 2008.
[18] Grimshaw P., Burden A. 2007. Instant Notes - Sport & Exercise Biomechanics. T & F Informa. New York, USA. ISBN 1 85996 284 8.
[19] Medved V. 2001. Measurement of Human Locomotion. CRC Press LLC. Boca Raton, Florida, United States. ISBN 0 8493 7675 0.
[20] Kirtley C. Teach-in '97 - Gait Analysis. http://www.univie.ac.at/cga/teach-in/transient/index.html. Accessed on 21st October 2008.
[21] Vaughan C. L., Davis B. L., O'Connor J. C. 1992. Dynamics of Human Gait. Champaign, Illinois. Human Kinetics.
[22] Guo H. et al. 2007. A wearable human motion analysis system for lower limb rehabilitation robot. IEEE/ICME International Conference on Complex Medical Engineering. 39-42.
Diabetic Foot Syndrome-3-D Pressure Pattern Analysis as Compared with Normal Subjects
H.S. Ranu¹,² and A. Almejrad¹
¹ College of Applied Medical Sciences, King Saud University, Riyadh 11433, Saudi Arabia
² American Orthopaedics Biomechanics Research Institute, Atlanta, GA 31139-1441, U.S.A.
Abstract— A 3-D Foot Print Device was developed to measure plantar foot pressures in normal subjects and in diabetic patients with neurotrophic ulcers. The results show that patients with peripheral neuropathy develop very high forefoot pressures compared with the normal population. Bubble-technology insoles and new memory-biomaterial insoles were used to reduce these forefoot pressures, combined with the provision of new custom-made shoes.
Keywords— 3-D Foot Print Device, Plantar Foot Pressures, Diabetic and Normal Subjects, Bubble Technology Insoles, Memory Biomaterials.

I. INTRODUCTION
The International Diabetes Federation (IDF) recently reported that the Kingdom of Saudi Arabia (KSA) has the world's highest percentage of diabetic patients. In addition, [1] reported that China has 92 million diabetics (China A: recent projection; China B: earlier projection). Projections give Indonesia 152.8 million, Malaysia 16.9 million, Myanmar 32.5 million, Singapore 34.3 million, and Thailand 46 million. Figure 1 projects the diabetic population by the year 2030 for different countries. These data indicate that the diabetic population will double.

Fig. 1 Diabetes incidence in different countries in 2010 and projections for year 2030

Figure 2 shows the prevalence of diabetes in KSA by age and sex. These statistics clearly show that there is an urgent need to identify, describe and treat this population worldwide, especially in the Kingdom of Saudi Arabia (KSA) [2, 3].

Fig. 2 Prevalence of diabetes according to age and sex in KSA (1990-1993)
Complications secondary to diabetes, such as diabetic foot ulcers, continue to be a major worldwide health problem [4]. At the same time, health care systems are changing rapidly, causing concern about the quality of patient care. While the ultimate effect of current changes on health care professionals and patient outcomes remains uncertain, measures commonly used to reduce costs, e.g., disease management and multi-disciplinary management strategies, have been shown to help prevent the occurrence of diabetic ulcers. In addition, by utilizing a multi-disciplinary approach together with the principles of off-loading and optimal wound care, the vast majority of diabetic foot ulcers can be expected to heal within 12 weeks of treatment. Education of primary care providers and patients is paramount.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 143–147, 2011. www.springerlink.com
144
H.S. Ranu and A. Almejrad
Researchers [5, 6, 7] have highlighted that lower-extremity amputation in people with diabetes continues to be a major public health problem. More than 65,000 amputations are performed annually on diabetic patients in the United States and, despite recent efforts, this number is increasing. Ulceration in the neuropathic foot is a major precursor of amputation, and thus identification of risk factors, together with primary and secondary prevention of foot ulceration, are key goals of this research. It has been reported [8, 9] that neuropathic patients experience problems with gait and posture; they also suffer more falls and fractures. Gait analysis techniques are being used to explore both the role of the foot as a sensory organ and the contributions of proprioception to the control of movement. Others [10] have indicated the importance of footwear in the prevention of foot lesions in patients with Non-Insulin Dependent Diabetes Mellitus (NIDDM). In addition, [11, 12, 13, 14] have investigated the pressure patterns under the feet of diabetic and non-diabetic populations in the United States.

One of the most serious complications of long-standing diabetes is the diabetic foot syndrome. It often leads to amputation and marked adverse consequences in terms of morbidity and mortality. This deformity leads to peripheral neuropathy and later on could result in lower-extremity amputation.

II. MATERIALS AND METHODS
A 3-D Foot Print Device (WM Automation Ltd, North Wales, U.K.) was used to analyze the gait of diabetic and normal subjects. It incorporates 2048 of the latest pressure-sensitive resistors, with an active area of 193 × 412 mm and overall dimensions of 630 × 325 × 32 mm. The maximum applied pressure was 500 psi (≈3.4 MPa), the scanning time for one scan 15 ms, and the sampling frequency 136 kHz. The maximum number of scans was 64 and the sensor size was 5 × 5 mm. The data were obtained from 20 normal subjects and 20 diabetic patients with ulcers and peripheral neuropathy. Data were stored frame by frame as absolute pressures for the sampling interval.

III. RESULTS
Plantar foot pressure data showed a peak pressure of 628 ± 78 kPa (values are means ± SE) for the 20 normal subjects and 1501 ± 128 kPa for the 20 patients with neurotrophic ulcers. A correlation coefficient r = 0.27 revealed a significant difference in plantar pressures between the two groups (p > 0.05).

Fig. 3 Plantar pressures of a normal subject in different modes

Fig. 4 Pressure patterns for a normal subject (pressures are evenly distributed under the bottom of the foot)

Figures 3 and 4 show pressure patterns for normal subjects in 3D and 2D and in different modes of representation. The pressures are evenly distributed under the foot. Figure 5 shows the pressure pattern at higher magnification, allowing one to home in more closely on the area under study. From this figure one can develop a precise foot insole for the diabetic patient. Figure 6 shows the pressure
patterns under the forefoot of a diabetic patient. Very high forefoot pressure patterns can clearly be seen, and thus the need to reduce them. This was achieved by using specially developed insoles made with bubble technology and special memory biomaterials. This technique reduced the forefoot pressures by 75 percent, a significant relief from repetitive loading for the diabetic patients. Figures 7 and 8 show the real-time pressure patterns for a normal person from heel strike to toe off, together with the centre of foot pressure. Figure 9 shows the feet of a diabetic patient with clear marking of the high forefoot pressures, which required, and received, attention in this case. Figure 10 shows the pressure patterns under the forefoot of another diabetic patient before and after a bubble technology insole, again providing a 75 percent reduction in repetitive loading and significant pain relief. Figure 11 shows the bubble technology insole specially developed for diabetic patients to reduce forefoot pressures in this segment of the population.

Fig. 6 Pressures under the foot of a diabetic patient. The need to reduce the forefoot pressures can clearly be seen; this was achieved by prescribing a bubble technology insole or the use of new memory biomaterials
Fig. 5 Close-up, highly magnified view of the area under the foot. One can clearly see the high pressures, and thus develop a foot insole to prevent high forefoot pressures
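Since each 15 ms scan is stored as a frame of absolute pressures, hot spots like those in Fig. 5 can be located by simple thresholding of the sensor grid. A sketch under assumed names and units; the 4×4 frame and the 200 kPa threshold are purely illustrative, not values from the study:

```python
import numpy as np

def high_pressure_cells(frame_kpa, threshold_kpa=200.0):
    """Return the peak pressure and the (row, col) indices of all sensor
    cells exceeding a threshold, e.g. to mark relief zones for an insole."""
    peak = float(frame_kpa.max())
    rows, cols = np.nonzero(frame_kpa > threshold_kpa)
    return peak, list(zip(rows.tolist(), cols.tolist()))

# Illustrative 4x4 pressure frame (kPa) with two forefoot hot spots.
frame = np.array([
    [10,  20,  15, 10],
    [30, 250,  40, 20],
    [25,  60, 300, 30],
    [10,  20,  25, 15],
], dtype=float)
peak, hot = high_pressure_cells(frame)
```

The real device's 2048-sensor frames would be processed the same way, one frame per 15 ms scan, with the threshold chosen from the normal-subject baseline.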
Fig. 7 Real time measurement of pressure under the feet from heel strike to toe off. (3D and 2D mode) for a normal subject
Fig. 8 Real time measurement of Centre of Foot Pressure of a normal subject from heel strike to toe off
Fig. 9 Feet of a diabetic patient. Areas of high pressure can clearly be seen
Fig. 10 Diabetic patient plantar pressures under the forefoot before (A) and after (B) bubble technology shoe insoles. A 75 percent reduction in forefoot pressures can clearly be seen (difference between A and B)
Fig. 11 Special shoe insoles developed for diabetic patients to reduce their forefoot pressures. New Ranu's bubble technology insoles and new memory biomaterials are also being used to reduce, and to remember, the high forefoot pressures

IV. CONCLUSION
Based upon the data analyses, the following conclusions were drawn:
• Ulcers occur at the location of the highest horizontal shear force and also at the location of the highest vertical force.
• The mechanism involved in diabetic foot ulceration is high forefoot pressure linked with plantar foot ulcerations.
• It is the combination of insensitivity and abnormally high loading of the metatarsal heads that leads to ulceration.
• Special shoe insoles and shoes have been developed to reduce the high forefoot pressures for diabetic persons.

ACKNOWLEDGEMENT
The authors acknowledge the assistance of Mr. Mohammad Nisar, College of Applied Medical Sciences, King Saud University, Riyadh, KSA.

REFERENCES
[1] Yang, W., Lu, J., Weng, J. et al. (2010). Prevalence of diabetes among men and women in China. N Engl J Med 362, 20-31.
[2] Irvine, W.J. (1977). Classification of idiopathic diabetes. Lancet, 1, 638-642.
[3] National Diabetes Data Group. (1979). Classification and diagnosis of diabetes mellitus and other categories of glucose intolerance. Diabetes, 28: 1039-1057.
[4] Boulton, A., Meneses, P., and Ennis, W. (1999). Diabetic foot ulcers: A framework for prevention and care. Wound Rep Reg. 7, 7-16.
[5] Cavanagh, P.R. (2004). Therapeutic footwear for people with diabetes. Diabetes Metab Res Rev. 20, Suppl 1: S51-5. Review.
[6] Cavanagh, P.R., Owings, T.W. (2006). Nonsurgical strategies for healing and preventing recurrence of diabetic foot ulcers. Foot Ankle Clin 11, 735-743.
[7] Ulbrecht, J.S., Cavanagh, P.R. and Caputo, G.M. (2004). Foot problems in diabetes: an overview. Clin Infect Dis. Aug 1; 39 Suppl 2: S73-82.
[8] Bus, S.A., Ulbrecht, J.S. and Cavanagh, P.R. (2004). Pressure relief and load redistribution by custom-made insoles in diabetic patients with neuropathy and foot deformity. Clin Biomech 19, 6, 629-38.
[9] Bus, S.A., Maas, M., Cavanagh, P.R., Michels, R.P. and Levi, M. (2004). Plantar fat-pad displacement in neuropathic diabetic patients with toe deformity: a magnetic resonance imaging study. Diabetes Care. 27, 10, 2376-81.
[10] Litzelman, D.K., Marriott, D.K. and Vinicor, F. (1997). The role of footwear in the prevention of foot lesions in patients with NIDDM: conventional wisdom or evidence-based practice? Diabetes Care. 20, 156-162.
[11] Ranu, H.S. (1995). Gait analysis of a diabetic foot. In Proceedings of 14th Southern Biomedical Engineering Conference. April, Shreveport, Louisiana. 197-200.
[12] Ranu, H.S. (1989). A quantitative method of measuring the distribution of forces under different regions of the foot. In Proceedings of IEEE Engineering in Medicine & Biology Society 11th Annual International Conference. Seattle, Washington. November, 824.
[13] Ranu, H.S. (1992). Pressure under the foot. The Journal of Bone & Joint Surgery. 74 (Br.), 5, 787. (Letter).
[14] Ranu, H.S. (1993). A three dimensional pressure sensing system. Medical Electronics. 24, 90-95.
Effects of Arterial Longitudinal Tension on Pulsatile Axial Blood Flow
Y.Y. Lin Wang¹,², W.K. Sze¹, J.M. Chen¹, and W.K. Wang²
¹ Department of Physics, National Taiwan Normal University, Taipei, Taiwan, R.O.C.
² Institute of Physics, Academia Sinica, Taipei, Taiwan, R.O.C.
Abstract— By treating the arterial wall and the enclosed blood as one integrated system, a more general axial blood flow equation was derived that includes, as a first step, the effect of arterial longitudinal tension. We found that longitudinal tension not only provides the tautness needed for the radial oscillation of arterial walls, but also reduces the dissipative axial pulsatile blood motion induced by the pulse pressure. The results explain why large arteries in vivo are subjected to large longitudinal tensions. The possible benefit of body stretching exercises in reducing the local pulsatile axial blood flow is also discussed.

Keywords— longitudinal stress, stretching exercise.
I. INTRODUCTION
Many studies have shown that large arteries in vivo are subjected to large longitudinal tensions [1]-[3]. It has been found physiologically that the axial component of wall stress plays a fundamental role in compensatory adaptations by arteries [4]. Rehabilitation exercises for heart disease and traditional exercises such as yoga or Taichichuan also involve stretching various parts of the body. However, a biomechanical account of the benefit of this longitudinal stress is still lacking. Since distributing blood to the peripheral arteries and arterioles is the major function of the arterial system, the pulsatile component of the axial blood flow in large arteries is wasteful [5]-[6]. In this study, we analyze the effect of longitudinal stress on the pulsatile axial blood flow by deriving a more general axial momentum equation for large arteries.
II. METHOD
We initiate our studies by deriving an axial momentum equation for the segment of the artery shown in Fig. 1. As in previous studies [8]-[10], we take the blood and the elastic vessel as a combined system and assume the "non-slip" boundary condition; hence the complicated interaction between the blood and the elastic vessel can be treated as internal forces and need not be considered. Only forces acting on the surfaces in contact with the outside systems and the viscous force inside the blood will be included.

Fig. 1 A segment of artery of length dz and the forces acting on the wall-blood combined system

We assume that the cross section of a segment of the artery at axial position z with length dz is circular with inner radius $r(z,t)$. The lumen cross-sectional area is then given by $S = \pi r^2$. The elastic arterial wall is of thickness $h_w$, with circumferential Young's modulus $E_\theta$; the Peterson's elastic modulus of the artery is $E_P$. Inside the artery resides blood with density $\rho$, axial velocity $v_Z(z,t)$, and internal pressure $P_i(z,t)$. The local external pressure is $P_0(z)$, and $P(z,t) = P_i(z,t) - P_0(z)$.

Arteries in vivo are subjected to substantial longitudinal stretches, as revealed by their retraction after excision [1]-[4]. The net axial tension force $dF_{TZ}$ acting on the wall surfaces I (at z) and II (at z + dz) of the segment (Fig. 1) by the adjacent outside vessels can be derived as

$$dF_{TZ} = -T\,\frac{\partial^{2} r}{\partial z^{2}}\,\frac{\partial r}{\partial z}\,dz \qquad (1)$$

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 148–150, 2011. www.springerlink.com
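Equation (1) can be checked from the geometry of the wall: if the tension T acts along the local wall tangent, whose slope ∂r/∂z is small, the axial component of T is T cos θ, and the net axial force on the segment is the difference of this component across dz. This is a sketch of the reasoning, not a derivation given in the paper:

```latex
\cos\theta \;\approx\; 1-\frac{1}{2}\left(\frac{\partial r}{\partial z}\right)^{2},
\qquad
dF_{TZ} \;=\; T\,\Big[\cos\theta\Big]_{z}^{z+dz}
\;\approx\; -\,T\,\frac{\partial}{\partial z}\!\left[\frac{1}{2}\left(\frac{\partial r}{\partial z}\right)^{2}\right]dz
\;=\; -\,T\,\frac{\partial^{2} r}{\partial z^{2}}\,\frac{\partial r}{\partial z}\,dz .
```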
On the outside surface of the arterial wall, the local outside pressure $P_0$ contributes a force $dF_{OZ}$ in the z direction, with

$$dF_{OZ} = 2\pi r\,dz\,P_0\,(\partial r/\partial z) = P_0\,(\partial S/\partial z)\,dz .$$

On surfaces 1 and 2, the outside adjacent fluids exert pressure forces $S_1 P_{i1}$ and $S_2 P_{i2}$ in opposite axial directions. The sum of these two forces can be decomposed into a pressure gradient force $dF_{PG}$ and an area gradient force $dF_{AG}$, where $dF_{PG} = -S(\partial P_i/\partial z)\,dz$ and $dF_{AG} = -P_i(\partial S/\partial z)\,dz$.

The momentum equation associated with the axial motion is then given by

$$\frac{d(\rho Q_Z + \mu_W V_{WZ})}{dt}\,dz = dF_{PG} + dF_{AG} + dF_{OZ} + dF_{TZ} + dF_{VZ} \qquad (2)$$

Here $Q_Z$ is the axial blood flux, while $\mu_W$ and $V_{WZ}$ are the mass per unit axial length and the axial velocity of the arterial wall, respectively. Since the axial motion of the arterial wall is negligibly small [11], [12], an axial flow equation is obtained from Eq. (2) as

$$\rho\,\frac{D Q_Z}{Dt} = -\pi r^{2}\,\frac{\partial P_i}{\partial z} - P\,\frac{\partial \pi r^{2}}{\partial z} - T\,\frac{\partial^{2} r}{\partial z^{2}}\,\frac{\partial r}{\partial z} + f_{VZ} \qquad (3)$$

On the left-hand side, the spatial convective term can also be included.

III. RESULT
The forces on the right-hand side of equation (3) all contribute to the axial flow motion. The first term, the pressure gradient force $f_{PG} = -\pi r^{2}(\partial P_i/\partial z) = -\pi r E_P(\partial r/\partial z)$, is the only dominant term considered in most studies, such as in the derivation of the pulse wave velocity for the Moens-Korteweg equation [6], [13]. The second term is the area gradient force $f_{AG} = -P\,\partial(\pi r^{2})/\partial z$; its role has been discussed in a previous paper [10], and it is of comparable order to $f_{PG}$ in large arteries. The third term $f_{TZ} = -T(\partial^{2} r/\partial z^{2})(\partial r/\partial z)$ is the force associated with the longitudinal tension and has not been pointed out before. The fourth term $f_{VZ}$ is the force associated with the viscosity of the flow. Since the viscous force is proportional to the blood velocity, the axial blood flow causes the major energy dissipation [6], [7], [13] and needs to be as small as possible for an efficient arterial system.

The ratio of $f_{TZ}$ to $f_{PG}$ for the nth harmonic wave can be evaluated as

$$\left(\frac{f_{TZ}}{f_{PG}}\right)_{n} = -8\pi^{2} n^{2}\left(\frac{e_{ZZ}}{e_{\theta}}\right)\left(\frac{r}{\lambda_{1}}\right)^{2}\frac{\Delta l_{Z}}{l_{Z}} \qquad (4)$$

where $e_{ZZ}$ is the longitudinal elastic modulus, $e_{\theta}$ the circumferential Young's modulus, $\lambda_{1}$ the wavelength of the fundamental mode, and $\Delta l_{Z}/l_{Z}$ the longitudinal stretch. It can be estimated as

$$\left(f_{TZ}/f_{PG}\right)_{n} \approx -2n^{2}\,\% \qquad (5)$$

The minus sign shows that the forces associated with the tension and with the pressure gradient act in opposite directions. Thus a higher longitudinal tension will cause less axial pulsatile motion of the blood.
IV. CONCLUSION AND DISCUSSION
The axial flow equation of the artery (Eq. 3) shows that the pulsatile pressure gradient may induce axial pulsatile acceleration of the blood, which results in energy waste [6], [7], [13]. We have shown that the force associated with the tension counteracts the force associated with the pressure gradient (Eq. 4). Their ratio for the nth harmonic wave is of the order of 2n² % (Eq. 5). Hence we conclude that high arterial longitudinal tensions in vivo may significantly reduce the dissipative pulsatile axial blood motion induced by pulse pressure. The effect is especially prominent for the higher harmonic components of the pressure wave, which have shorter wavelengths. In our previous studies [9, 14], we have shown that the pressure wave equation is similar to the one-dimensional transverse wave equation of a string, with the radial displacement being analogous to the transverse displacement of the string. Large longitudinal tension in the arteries of mammals plays a crucial role in their periodic radial motions, much like the tautness needed for the transverse vibration of a string.
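The string analogy above can be written out explicitly. For a stretched string with linear density μ and tension T, the transverse wave equation and its wave speed are shown below; in the arterial analogue of [9, 14], the radial wall displacement r plays the role of the transverse displacement y. This correspondence is our reading of the cited work, not an equation appearing in this paper:

```latex
\mu\,\frac{\partial^{2} y}{\partial t^{2}} \;=\; T\,\frac{\partial^{2} y}{\partial z^{2}},
\qquad c = \sqrt{T/\mu},
```

so a larger longitudinal tension T stiffens the restoring force for the radial oscillation, just as a tauter string supports a higher wave speed.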
Exercises involving proper stretching of various parts of the body may be accompanied by stretching of the local arteries. This can further decrease the axial pulsatile motion of the local blood.
ACKNOWLEDGMENT
This study was supported by a grant from the National Science Council of R.O.C. (Grant No. NSC 97-2112-M-003-006-MY3).
REFERENCES
[1] D. A. McDonald. Blood Flow in Arteries, London, Arnold, 1974. [2] H. W. Weizsäcker, H. Lambert, K. Pascale, "Analysis of the passive mechanical properties of rat carotid arteries," J. Biomech. 16: 703–715, 1983. [3] H. H. Han and Y. C. Fung, "Longitudinal strain of canine and porcine aortas," J Biomech 28: 637–641, 1995. [4] X. Guo and G. S. Kassab, "Variation of mechanical properties along the length of the aorta in C57bl/6 mice," Am J Physiol Heart Circ Physiol 285: H2614–H2622, 2003. [5] J. D. Humphrey, J. F. Eberth, W. W. Dye and R. L. Gleason, "Fundamental role of axial stress in compensatory adaptations by arteries," J Biomech 42: 1–8, 2009. [6] W. R. Milnor, Hemodynamics (2nd ed.), Baltimore, MD: Williams & Wilkins Co, 1989. [7] W. W. Nichols and M. F. O'Rourke, McDonald's Blood Flow in Arteries: Theoretic, Experimental and Clinical Principles (5th ed.), London: Hodder Arnold, 2005.
[8] Y. Y. Lin Wang, M. Y. Jan, C. S. Shyu, C. A. Chiang, and W. K. Wang, “The natural frequencies of the arterial system and their relation to the heart rate,” IEEE Trans. Biom. Eng. 51(1):193–195, 2004. [9] Y. Y. Lin Wang, M. Y. Jan, G. C. Wang, J. G. Bau and W. K. Wang, “Pressure pulse velocity is related to the longitudinal elastic properties of the artery,” Physiol. Meas., 25: 1397-1403, 2004. [10] Y. Y. Lin Wang, W. B. Chiu, M. Y. Jan, J. G. Bau, S. P. Li, and W. K. Wang, “Analysis of transverse wave as a propagation mode for the pressure pulse in large arteries,” J. Appl. Phys. 102:064702, 2007. [11] D. J. Patel, D. L. Fry, “In situ pressure-radius-length measurements in ascending aorta of anesthetized dogs,” J Appl Physiol., 19: 413-416, 1964. [12] J. J. Manak, “The two-dimensional in vitro passive stress-strain elasticity relationships for the steer thoracic aorta blood vessel tissue,” Journal of Biomechanics, 13:637–646, 1980. [13] R. Skalak, F. Wiener, E. Morkin, A. P. Fishman, “The energy distribution in the pulmonary circulation, II: Experiments,” Phys. Med. Biol. 11:437-449, 1966. [14] Y. Y. Lin Wang, W. K. Sze, J. G. Bau, S. H. Wang, M. Y. Jan, and T. L. Hsu, and W. K. Wang, “The ventricular-arterial coupling system can be analyzed by the eigenwave modes of the whole arterial system,” Appl Phys Lett 92:153901, DOI: 10.1063/1.2911746, 2008.
Author: Yuh-Ying Lin Wang
Institute: National Taiwan Normal University, Department of Physics
Street: Ting-Chou Rd
City: Taipei
Country: Taiwan, R.O.C.
Email: [email protected]
Effect of Extracellular Matrix on Smooth Muscle Cell Phenotype and Migration
T. Ohashi¹ and Y. Hagiwara²
¹ Faculty of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
² Graduate School of Engineering, Hokkaido University, Sapporo, Hokkaido, Japan
Abstract— Adherent cells are known to adhere to their extracellular matrix at focal adhesions, where actin filaments are indirectly connected to integrins via multiple proteins and integrins are in contact with the extracellular matrix, mediating various cellular signals. Expression of integrins is very specific to the type of extracellular matrix, such as fibronectin, vitronectin, collagen, etc. With this background, it is speculated that different types of extracellular matrix may modulate cell physiological functions. In this study, cell migration assays were performed on smooth muscle cells with different types of extracellular matrix. Cells were seeded with DMEM + 10% FBS in tissue culture dishes in which a polydimethylsiloxane (PDMS) block was placed at the center. After removing the PDMS block, cell migration was observed. Prior to cell seeding, the dishes were coated with fibronectin, vitronectin, or type I collagen. Under all three conditions, cells migrated randomly but, on the whole, moved towards the space where the PDMS block had been placed. The total cell migration length for fibronectin was significantly higher than those for vitronectin and type I collagen. Since it was found that fibronectin could change the smooth muscle cell phenotype from contractile to synthetic, this indicates that fibronectin-modulated smooth muscle cells could enhance cell migration. In summary, the smooth muscle cell phenotype mediated by different types of extracellular matrix might modulate cell physiological functions, possibly via integrin formation, in a different manner. Keywords— Smooth muscle cells, Contractile phenotype, Synthetic phenotype, Extracellular matrix, Integrin expression.
I. INTRODUCTION

Adherent cells are known to adhere to their extracellular matrix at focal adhesions, where integrins interact with the extracellular matrix and mediate various cellular signals, possibly leading to modulation of a variety of physiological functions such as cell proliferation and migration. Integrins are obligate heterodimers containing two distinct chains, called the α and β subunits, and are very specific to the type of extracellular matrix, such as fibronectin, vitronectin, and type I collagen. In previous studies [1,2], it has been found that fibronectin and vitronectin might direct the smooth muscle cell phenotype towards the synthetic type, while type I collagen directs it towards the contractile type. It is also known that inhibition of α-SMA expression, a marker of the contractile phenotype, can enhance cell migration [3]. Therefore, it is speculated that extracellular matrix-mediated integrin expression may modulate the physiological functions of smooth muscle cells through alterations in phenotype.
In this study, we perform cell migration assays on smooth muscle cells cultured on three different types of extracellular matrix: fibronectin, vitronectin, and type I collagen. The total cell migration length is evaluated over 24 h under a microscope.

Fig. 1 Experimental protocol. Cells are cultured on a dish coated with one of the three different extracellular matrices. When they reach confluence, the PDMS block is removed to allow cells to migrate
II. MATERIALS AND METHODS

A. Cell Preparation

Smooth muscle cells from bovine thoracic aortas were purchased (Cell Applications, USA). Cells were seeded in tissue culture flasks with DMEM + 10% FBS. Cell populations from the 4th to 9th generation were studied.

B. Migration Assays

The experimental protocol is shown in Fig. 1. For experiments, the tissue culture dishes were coated with fibronectin, vitronectin, or type I collagen at 50 µg/ml. In order to create a space that allows cells to migrate, a block was made from
polydimethylsiloxane (PDMS, Dow Corning, Japan) and placed in the center of the dishes. Cells were then cultured in the dishes to induce the phenotypic modulation. After reaching confluence, the dishes were transferred into a CO2 chamber maintained at 37 °C and 95% air/5% CO2 under an inverted microscope (IX71, Olympus, Japan). After removing the PDMS block, cell migration into the space was observed. Images were captured with a digital CCD camera every 10 min for up to 24 h. The total cell migration length was evaluated with the ImageJ software (National Institutes of Health, USA).

Fig. 2 Smooth muscle cell migration on the three different extracellular matrix-coated substrates. Cells cultured on fibronectin migrate into the space more than those on vitronectin and type I collagen. Scale bars: 500 µm

Fig. 3 The total length of smooth muscle cell migration on the three different extracellular matrix-coated substrates. Data are presented as mean + SD. n = number of cells, *: p < 0.05, **: p < 0.001

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 151–152, 2011. www.springerlink.com
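The total migration length was measured in ImageJ from cell positions tracked in successive frames; as a minimal sketch of that computation (the function and coordinate data below are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def total_migration_length(positions):
    """Sum of Euclidean distances between successive (x, y) positions (in µm)."""
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)                      # displacement between frames
    return float(np.sum(np.hypot(steps[:, 0], steps[:, 1])))

# Hypothetical track: one cell imaged at four time points
track = [(0, 0), (3, 4), (3, 4), (6, 8)]
print(total_migration_length(track))                # 5 + 0 + 5 = 10.0
```

Summing frame-to-frame displacements (rather than the straight-line start-to-end distance) captures the random component of the migration described above.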
III. RESULTS AND DISCUSSION

Figure 2 shows cell migration on the three different extracellular matrices at 0 h and 24 h. The dashed lines indicate the edges of the PDMS blocks that were in place before cell migration. Cells exhibited random migration but, on the whole, migrated towards the space. The total length of cell migration is summarized in Fig. 3. For fibronectin, cells migrated 810 ± 332 µm, while for vitronectin and type I collagen, cells migrated 658 ± 255 µm and 573 ± 198 µm, respectively. Cell migration on fibronectin was significantly enhanced compared to the other two conditions. In our previous study [1], fibronectin-coated substrates directed the smooth muscle cell phenotype towards the synthetic phenotype. It is also known that the α5β1 integrin, which specifically binds to fibronectin, may enhance cell proliferation. In fact, in a separate study, we measured the density of cells from region to region in Fig. 2 and found that the rate of cell proliferation was higher for fibronectin than for vitronectin and type I collagen. Taken together, fibronectin could modulate the smooth muscle cell phenotype towards the synthetic type, enhancing cell migration as well as proliferation.
In summary, this study performed cell migration assays on smooth muscle cells with different types of extracellular matrix. The results indicate that the fibronectin-mediated smooth muscle cell phenotype, the synthetic type, might enhance cell migration as well as proliferation, possibly via modulation of integrin formation.
REFERENCES 1. Ohashi T, Ichihara H, Sakamoto N, Sato M (2008) Specificity of traction forces to extracellular matrix in smooth muscle cells. 13th International Conference on BioMedical Engineering, Singapore (in CDROM). 2. Hedin U, Bottger BA, Forsberg E, Johansson S, Thyberg J (1988) Diverse effects of fibronectin and laminin on phenotypic properties of cultured arterial smooth muscle cells. J Cell Biol 107: 307-319. 3. Ronnov-Jessen L, Petersen OW (1996) ADP-ribosylation of actins in fibroblasts and myofibroblasts by botulinum C2 toxin: influence on microfilament morphology and migratory behavior. Electrophoresis 17: 1776-1780.
IFMBE Proceedings Vol. 35
Effects of the Wrist Angle on the Performance and Perceived Discomfort in a Long Lasting Handwriting Task
N.Y. Yu1 and S.H. Chang2
1 Department of Physical Therapy, I-Shou University, Kaohsiung City, Taiwan
2 Department of Occupational Therapy, I-Shou University, Kaohsiung City, Taiwan
Abstract— Since the wrist joint position affects the length of the finger muscles and hence the grip force, its influence on handwriting product quality and efficiency was studied in this research. We performed a repeated-measures experiment to test whether the angle of wrist extension has a significant influence on the kinematic characteristics, handwriting production and the perceived soreness of the writing hand. Forty young adults (aged 18 to 22) performed a continuous writing task (30 minutes) on a computerized system which measured the wrist joint angle and documented the handwriting process. The tasks were performed on a 2-D digitizing tablet. The top panel of the tablet records the pen position when the pen is in contact with its surface or within 1 cm of it. An electrogoniometer was used to record the wrist extension angle during the writing tasks. According to the progression of self-perceived soreness, subjects were classified into “effortless” and “hard” groups. The wrist joint angle and the in-air trajectory length (i.e., the length of non-writing movement while writing) were found to be significantly different between the two groups during the writing process. The “hard” group was consistently associated with a longer in-air trajectory length, as well as with a more extended wrist joint. In contrast, the other characteristics, such as writing speed, in-air time, on-paper time, and pen-tip pressure, were not significantly different between the two groups. The wrist extension angle was significantly correlated with the perceived soreness as well as with the ratio of in-air to on-paper time. The results of this study indicate that ergonomic and biomechanical analysis provides important information about the handwriting process. A less extended wrist may result in more effort being needed in the work activity of handwriting, such that more soreness is perceived by the subjects.
Keywords— handwriting, ergonomics, movement analysis, computerized evaluation, writer’s cramp.
I. INTRODUCTION

Handwriting is an essential fine motor skill in school-aged children. Children’s ability to produce fluent and legible script is important for expressing, communicating and recording ideas as well as for educational development, achievement in school and self-esteem (Phelps, Stemple & Speck, 1985; Weil & Amundson, 1994). This skill is directly related to most school activities. From a survey of the
activities in an elementary school classroom, 30% to 60% of the time is spent in fine motor activities, with handwriting predominating over other tasks (McHale & Cermak, 1992). In surveys of occupational therapy services in elementary schools, the most common referrals were for handwriting problems (Tseng & Cermak, 1993). Although a person with normal development can learn how to write through traditional training between ages six and seven, handwriting is actually a very complicated skill. Neat and smooth handwriting requires the maturity and integration of cognition, visual perception and fine motor skills (Tseng & Murray, 1994; Weil & Amundson, 1994; Volman, van Schendel, & Jongmans, 2006). Levine et al. (1981) found that 72% of children with low academic achievement were considered to have difficulty with fine motor tasks, based on dysfunction on relevant items in the parent and/or teacher questionnaires (e.g., using a pencil, putting things together, etc.). Poor fine motor control, such as a lack of coordination of muscle contractions and irregularities of stroke speed and force, may lead to laborious or even illegible handwriting. Pertaining to the motor control of handwriting, the assessment of fine motor control in handwriting movement is important in a comprehensive evaluation of handwriting dysfunction. However, since handwriting involves a complicated series of fine movement controls, it is difficult to assess finger movement simply by visual inspection. Computerized kinematic and kinetic analyses resolve the difficulties in the direct measurement of fine motor control in the handwriting process. With the aid of a digital tablet and an instrumented ink pen, handwriting can be monitored in real time or stored for further kinematic and kinetic analyses. These devices enable the measurement of much more than simpler performance tests can.
Patients with writer's cramp have been characterized by two neurophysiological abnormalities: reduced reciprocal inhibition of the wrist flexor motoneurons at rest, and increased co-contraction of antagonist muscles of the forearm during voluntary activity (Valls-Solé and Hallett, 1995). Wu and Luo (2006) found that the gesture of gripping a pen for handwriting is unnatural, such that writers tend to use the wrist, elbow or little finger for support.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 153–156, 2011. www.springerlink.com
Since the wrist joint position affects the length of the finger muscles and hence the grip force, its influence on handwriting product quality and efficiency was studied in this research. We performed a repeated-measures experiment to test whether the angle of wrist extension has a significant influence on the kinematic characteristics, handwriting production and the perceived soreness of the writing hand.
II. METHODS
A. Subjects Recruitment

Forty young adults (aged 18 to 22) performed a continuous writing task (30 minutes) using a computerized system which measured the wrist joint angle and documented the handwriting process. According to the progression of self-perceived soreness, subjects were classified into “effortless” and “hard” groups.

B. Handwriting Task

The handwriting task in this study was continuous writing of three paragraphs in total. To simulate usual handwriting and increase the complexity, the task was selected from a Chinese literacy textbook commonly used in college freshman classes.

C. The Experiment Setup and Protocol

In the experiment, all participants performed the task under similar environmental conditions in a quiet classroom designed for children with special needs in their school. Each participant learned the instructions individually during the morning hours. The participants were seated on a standard school chair in front of a standard school desk appropriate to his or her height. The tasks were written on normal writing paper with printed lineation, which was affixed to the digitizing tablet. In the experiment, the displacement of the pen tip was monitored and recorded by a computer program running on Windows XP. The participants’ writing trace was displayed in real time on the display of a notebook computer in front of the experimenter. Each participant was instructed in the same fashion about what he or she would be required to do. The whole procedure took approximately 60 minutes.

D. Apparatus

The tasks were performed on a 2-D digitizing tablet (Wacom, Intuos 2, Japan). The top panel of the tablet records the pen position only when the pen is in contact with its surface or within 1 cm of it. An electrogoniometer was utilized to measure the movement of the wrist joint. The lightweight and flexible electrogoniometer can be worn comfortably without hindering the actual movement of the joint. The sensor was fitted to reach across the joint so that the two endblocks could be mounted where the least movement occurs between the skin and the underlying skeletal structure. The two endblocks were mounted on the dorsum of the 3rd metacarpal and the distal radius (Fig. 1).

Fig. 1 The measuring system. Digital tablet with an inked pen and electrogoniometer for measuring the wrist angle

E. Measured Parameters

Self-perceived soreness – visual analog scale. Biomechanics – wrist extension angle, writing time (on-paper and in-air), trajectory length (on-paper and in-air) (Fig. 2), and axial pen pressure.

Fig. 2 The computerized handwriting evaluation system can document the movement of the pen-tip stroke. It can redraw the script and measure the total on-paper (black) and in-air (red) trajectory length
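The on-paper and in-air trajectory lengths can be obtained from the tablet samples by summing segment lengths and splitting on pen pressure; a minimal sketch, assuming a hypothetical sample format of (x, y, pressure) tuples (this is not the authors' software):

```python
import math

def trajectory_lengths(samples):
    """Split the total pen-tip path into on-paper and in-air lengths.

    samples: sequence of (x, y, pressure); pressure > 0 means pen on paper.
    A segment counts as on-paper only if both of its endpoints are on paper.
    """
    on_paper = in_air = 0.0
    for (x0, y0, p0), (x1, y1, p1) in zip(samples, samples[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        if p0 > 0 and p1 > 0:
            on_paper += d
        else:
            in_air += d
    return on_paper, in_air

# Hypothetical stroke: on paper, lift, move in air, touch down again
samples = [(0, 0, 1.0), (3, 4, 1.0), (6, 8, 0.0), (9, 12, 1.0)]
print(trajectory_lengths(samples))  # (5.0, 10.0)
```

The ratio of in-air to on-paper length (or time) discussed in the results then follows directly from the two sums.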
F. Statistical Analyses

SPSS for Windows/PC (SPSS Inc., Ver. 10.0, Chicago, Illinois) was used for the statistical analyses. Pearson’s correlation analysis was performed to identify the parameters that explain the largest part of the variation in perceived effort. For the handwriting performance function tests, t-tests were used to compare the differences in the variables between the two groups. The significance level was set at α = 0.05.
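The analyses were run in SPSS; as a minimal sketch of what they compute (Pearson's r and a pooled-variance two-sample t statistic), with hypothetical data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(g1, g2):
    """Two-sample t statistic with pooled variance (equal-variance t-test)."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((a - m1) ** 2 for a in g1) / (n1 - 1)
    v2 = sum((a - m2) ** 2 for a in g2) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical wrist extension angles (deg) vs soreness scores
angle = [20, 30, 40, 50, 60]
sore = [8, 7, 5, 4, 2]
print(round(pearson_r(angle, sore), 3))              # -0.993
print(round(t_statistic([1, 2, 3], [4, 5, 6]), 3))   # -3.674
```

The p-values reported in the paper additionally require the t and r sampling distributions, which SPSS (or scipy.stats) provides.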
III. RESULTS

The wrist joint angle and the in-air trajectory length (i.e., the length of non-writing movement while writing) were found to be significantly different between the two groups during the writing process. The “hard” group was consistently associated with a longer in-air trajectory length (Fig. 3), as well as with a more extended wrist joint (Fig. 4). In contrast, the other characteristics, such as writing speed, in-air time, on-paper time, and pen-tip pressure, were not significantly different between the two groups. The wrist extension angle was significantly correlated with the perceived soreness (r = -0.443, p < 0.05) as well as with the ratio of in-air to on-paper time (r = 0.403, p < 0.05).
Fig. 3 The comparison of mean in-air trajectory length between the “effortless” and “hard” groups

Fig. 4 The comparison of mean wrist extension angle between the “effortless” and “hard” groups

IV. DISCUSSIONS
The results of this study indicate that ergonomic and biomechanical analysis provides important information about the handwriting process. The strenuous work of handwriting was found to be consistent with lower performance efficiency and can be characterized by inferior biomechanical ergonomics. A less extended wrist may result in more effort being needed in the work activity of handwriting, such that more soreness is perceived by the subjects. Valls-Solé and Hallett (1995) also presented evidence for an impaired integration of sensory inputs into voluntary motor activity during performance of a force-related task in patients with writer's cramp. Normal (control) subjects and patients activated both wrist flexor and extensor muscles to maintain a predetermined level of force rather than relying mainly on the wrist flexors. These results are compatible with our findings on the effects of wrist angle on perceived muscle soreness. In the analysis of handwriting speed in different tasks, the on-paper length was found to be positively correlated with the writing speed. In the study of Rosenblum et al. (2003), most of the non-proficient handwriters could be characterized by longer pause times and a wandering path within two successive stroke segments. Rosenblum et al. (2003) studied the pauses during handwriting and found that they are not stationary breaks between the writing of successive segments. In their kinematics study, the time and path above the writing surface were prefixed with "in air" and recorded to measure the delayed duration and meandering distance. They found that the "in air" time and path length of non-proficient handwriters were markedly longer compared to proficient handwriters. Based on van Galen’s model of handwriting (van Galen, Portier, Smits-Engelsman, & Schomaker, 1993), the computerized parameters have been linked with the components in the performance of handwriting tasks.
The model proposes three components in succession: the motor program, parameterization and regularization of the motor program, and muscular initiation, in order that the task may be performed (Smits-Engelsman & Van Galen, 1997). The “in air” time was thought to correspond to the time needed to parameterize the motor program or to initiate activity in the muscle groups needed to execute the character (Rosenblum & Livneh-Zirinski, 2008). Linking the other parameters with the model, the increased pause time per stroke in the “hard” group may be linked with the regularization of the motor program or with the initiation of muscle activity in the later component. For the clinical implication of this study, the wrist in extension lengthens the finger flexors, which is important for the production of contraction force. Handwriting with a
less extended wrist may increase the flexibility of finger movement but may reduce the efficiency of building up contraction force. In future studies, myoelectrical measures can be incorporated to verify the role of the extensors and flexors in influencing performance efficiency and muscular condition.
ACKNOWLEDGMENT The authors would like to thank the National Science Council of the Republic of China for supporting this work financially under contract nos. NSC-95-2221-E-214-008 and NSC-97-2221-E-214-054-MY2.
REFERENCES 1. Levine, M. D., Oberklaid, F., & Meltzer, L. (1981). Developmental output failure: A study of low productivity in school-based children. Pediatrics, 67, 18-25. 2. McHale, K, & Cermak, S. A. (1992) Fine motor activities in elementary school: preliminary findings and provisional implications for children with fine motor problems. American Journal of Occupational Therapy, 46, 898-903. 3. Phelps, I., Stemple, L., & Speck, G. (1985). The children's handwriting scale: A new diagnostic scale. Journal of Educational Research, 79, 46-50. 4. Rosenblum, S., Parush, S. & Weiss, P. L. (2003). Computerized temporal handwriting characteristics of proficient and poor handwriters. The American Journal of Occupational Therapy, 57 (2), 129-138.
5. Rosenblum, S. & Livneh-Zirinski, M. (2008). Handwriting process and product characteristics of children diagnosed with developmental coordination disorder. Human Movement Science, 27, 200-214. 6. Smits-Engelsman, B. C. M. & van Galen, G. P. (1997). Dysgraphia in children: lasting psychomotor deficiency or transient developmental delay. Journal of Experimental Child Psychology, 67, 164-184. 7. Tseng, M. H. & Cermak, S. A. (1993). The influence of ergonomic factors and perceptual-motor abilities on handwriting performance. American Journal of Occupational Therapy, 47, 919-926. 8. Tseng, M. H., & Murray, E. A. (1994). Differences in perceptual-motor measures between good and poor writers. American Journal of Occupational Therapy, 14, 19-36. 9. Valls-Solé, J., & Hallett, M. (1995). Modulation of electromyographic activity of wrist flexor and extensor muscles in patients with writer's cramp. Movement Disorders, 10, 741-748. 10. van Galen, G. P., Portier, S. J., Smits-Engelsman, B. C., & Schomaker, L. R. (1993). Neuromotor noise and poor handwriting in children. Acta Psychologica, 82, 161-178. 11. Volman, M. J. M., van Schendel, B. M., & Jongmans, M. J. (2006). Handwriting difficulties in primary school children: a search for underlying mechanisms. American Journal of Occupational Therapy, 60, 451-460. 12. Weil, M. J., & Amundson, S. J. C. (1994). Relationships between visuomotor and handwriting skills of children in kindergarten. American Journal of Occupational Therapy, 48, 982-988. 13. Wu, F. G., & Luo, S. (2006). Design and evaluation approach for increasing stability and performance of touch pens in screen handwriting tasks. Applied Ergonomics, 37(3), 319-327.

Author: Shao-Hsia Chang
Institute: Department of Occupational Therapy, I-Shou University
Street: No.8, Yida Rd., Jiaosu Village, Yanchao District
City: Kaohsiung
Country: Taiwan
Email: [email protected]
Estimation of Muscle Force with EMG Signals Using Hammerstein-Wiener Model
R. Abbasi-Asl1, R. Khorsandi1, S. Farzampour2, and E. Zahedi1
1 School of Electrical Engineering, Sharif University of Technology, Tehran, Iran
2 School of Medicine, Artesh University of Medical Science, Tehran, Iran
Abstract— Estimation of muscle force is needed for monitoring or control purposes in many studies and applications that involve direct human involvement, such as control of prosthetic arms and human-robot interaction. A new model is introduced to estimate muscle force from EMG signals. The estimation is based on the Hammerstein-Wiener model, which consists of three blocks describing the nonlinearity of the input and output and the linear behavior of the model. The nonlinear blocks are designed based on a sigmoid network. The introduced model is trained on data sets recorded from different people and tested on other data sets. The simulation results show a low error rate between the measured force and the estimated force.
Keywords— Force estimation, EMG signal, Hammerstein-Wiener model.
I. INTRODUCTION

Measurement of muscle force has many applications, such as the analysis of sports activities, ergonomic design analysis, and teleoperation of robotic devices [1]. In these applications, it is impractical or inconvenient to measure the generated limb forces with force sensors [2]. Additionally, commercial six-degree-of-freedom (6-DOF) force sensors are very expensive, which makes alternative methods for muscle or limb force estimation appealing. Electromyography (EMG) signals are usually used in these methods, as they require comparatively inexpensive sensors and electrodes to measure. The main causes of generated force in muscles are activation dynamics and muscle contraction dynamics [3], so an accurate and reproducible prediction of muscle force could be obtained from surface electromyography (EMG), which is an important goal in biomechanics and kinesiology. In this case, EMG signals are recorded using a single bipolar electrode pair placed on the skin above the muscle belly [4]. Muscle contraction dynamics include the mechanical properties of muscle tissues and tendons, which are expressed as force-length and force-velocity relations. The activation dynamics include the voluntary and non-voluntary (reflex) excitation signal and the motor unit recruitment level in the muscle. It is well known that, regardless of fatigue, the generated torque in each joint depends on the muscle activation levels (MALs) and the joint angle when in a stationary position. This was first observed by Inman et al. [5]. When in motion, joint torque is also a function of joint angular velocity [6]. Therefore, joint torque (and force) can be predicted using EMG signals together with joint angle and velocity measurements. In this paper, it is assumed that the wrist angle is constant. Thus, elbow-induced wrist force can be determined directly from elbow torque. In the past, nonparametric and parametric model-based approaches have been proposed for muscle force or human joint torque estimation using EMG signals. Parametric approaches have used Hill’s muscle model, which takes the MAL as input and outputs the muscle's generated force as a function of muscle length and contraction speed [7], [8]. Musculoskeletal kinematic models have also been employed to derive joint torque [9]. Nonparametric methods propose the use of polynomial functions or artificial neural networks (ANNs) and have the capability of accounting for nonlinearities in the EMG-force relationship. One significant advantage of nonparametric estimation of the EMG-force relationship is that it does not need any knowledge about the muscle and joint dynamics. Clancy et al. [10] proposed the use of a third-order polynomial to estimate the generated torque in the elbow joint under isometric and quasi-isotonic conditions. Misener et al., on the other hand, used an exponential force/velocity function in addition to a third-order polynomial model to estimate the elbow torque [3]. In this paper, we introduce a non-parametric model based on the Hammerstein-Wiener model, using a sigmoid network in the nonlinear blocks of the model, which can be classified in the neural network model category. Section II discusses the structure of the Hammerstein-Wiener model. Sections III and IV describe the data acquisition and simulation procedures, and finally concluding remarks are presented.
II. METHOD: HAMMERSTEIN-WIENER MODEL

A. Structure of Hammerstein-Wiener Models

Figure 1 illustrates the block diagram of a Hammerstein-Wiener model structure [11]:
Fig. 1 Block diagram of a Hammerstein-Wiener model
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 157–160, 2011. www.springerlink.com
where

w(t) = f(u(t))    (1)

is a nonlinear function transforming the input data u(t); w(t) has the same dimension as u(t).

x(t) = (B/F) w(t)    (2)

is a linear transfer function; x(t) has the same dimension as y(t), where B and F are similar to the polynomials in the linear Output-Error model. For n_y outputs and n_u inputs, the linear block is a transfer function matrix containing the entries

B_ji(q) / F_ji(q)    (3)

where

j = 1, 2, ..., n_y;  i = 1, 2, ..., n_u.    (4)

Finally,

y(t) = h(x(t))    (5)

is a nonlinear function that maps the output of the linear block to the system output. w(t) and x(t) are internal variables that define the input and output of the linear block, respectively. Because f acts on the input port of the linear block, it is called the input nonlinearity. Similarly, because h acts on the output port of the linear block, it is called the output nonlinearity. If the system contains several inputs and outputs, the functions f and h must be defined for each input and output signal. It is not necessary to include both the input and the output nonlinearity in the model structure. When a model contains only the input nonlinearity f, it is called a Hammerstein model. Similarly, when it contains only the output nonlinearity h, it is called a Wiener model [12]. The nonlinearities f and h are scalar functions, one nonlinear function for each input and output channel.
The Hammerstein-Wiener model calculates the output y in three stages:
1. Calculate w(t) = f(u(t)) from the input data. w(t) is the input to the linear transfer function B/F. The input nonlinearity is a static (memoryless) function, where the value of the output at a given time t depends only on the input value at time t. The input nonlinearity can be set as a sigmoid network, wavelet network, saturation, dead zone, piecewise linear function, one-dimensional polynomial, or a custom network. It is also possible to remove the input nonlinearity.
2. Compute the output of the linear block using w(t) and the initial conditions: x(t) = (B/F) w(t). The linear block is configured by specifying the orders of the numerator B and the denominator F.
3. Compute the model output by transforming the output of the linear block x(t) using the nonlinear function h: y(t) = h(x(t)).
When the output of a system depends nonlinearly on its inputs, the input-output relationship can be decomposed into two or more interconnected elements. We can then describe the relationship by a linear transfer function and a nonlinear function of the inputs. The Hammerstein-Wiener model uses this configuration as a series connection of static nonlinear blocks with a dynamic linear block. The Hammerstein-Wiener model is applied in a wide range of areas, for example modelling electromechanical systems and radio-frequency components, audio and speech processing, and predictive control of chemical processes. These models have a useful block representation and a transparent relationship to linear systems, and are easier to implement than heavy-duty nonlinear models, which makes them very useful. The Hammerstein-Wiener model can be used as a black-box model structure, since it provides a flexible parameterization for nonlinear models. It is possible to estimate a linear model and try to improve its quality by adding an input or output nonlinearity to it. We can also use the Hammerstein-Wiener model as a grey-box structure to incorporate physical knowledge about process characteristics. For instance, the input nonlinearity might represent typical physical transformations in actuators and the output nonlinearity might describe common sensor characteristics [13].

B. Selected Parameters for the Hammerstein-Wiener Model

In this work, the sigmoid network was chosen to represent the input and output nonlinearities. This network can model the system more smoothly and dynamically than the others. Other networks were simulated as well, but the results were most satisfactory when the sigmoid network was used. The number of units for the sigmoid network was set to 20; this number of units estimates the model very precisely while keeping the simulation time reasonable. For the linear block, the selected dimensions for the poles and zeros are 3 and 2, respectively.
Simulations showed that this is enough to model the linear behaviour of the system.
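The three-stage computation above can be sketched as follows; a single logistic sigmoid stands in for the paper's 20-unit sigmoid networks, and the B and F coefficients are hypothetical (in practice they are estimated from training data, e.g. with MATLAB's System Identification Toolbox):

```python
import math

def sigmoid(v):
    """Simple static nonlinearity standing in for the paper's sigmoid network."""
    return 1.0 / (1.0 + math.exp(-v))

def linear_block(w, b, f):
    """Output-Error style linear block x(t) = (B/F) w(t), computed as the
    difference equation f[0]*x[t] = sum_k b[k]*w[t-k] - sum_k f[k]*x[t-k]."""
    x = [0.0] * len(w)
    for t in range(len(w)):
        acc = sum(b[k] * w[t - k] for k in range(len(b)) if t - k >= 0)
        acc -= sum(f[k] * x[t - k] for k in range(1, len(f)) if t - k >= 0)
        x[t] = acc / f[0]
    return x

def hammerstein_wiener(u, b, f):
    """Stage 1: w = f_in(u); stage 2: x = (B/F) w; stage 3: y = h(x)."""
    w = [sigmoid(v) for v in u]     # input nonlinearity
    x = linear_block(w, b, f)       # linear dynamics
    return [sigmoid(v) for v in x]  # output nonlinearity

# Hypothetical coefficients: 2 zeros (B) and 3 poles (F), as selected in the paper
B = [0.5, 0.3]
F = [1.0, -0.4, 0.1]
u = [0.0, 1.0, 1.0, 0.5]
print([round(v, 3) for v in hammerstein_wiener(u, B, F)])
```

Fitting the block parameters to the recorded EMG/force data is a separate nonlinear estimation step not shown here.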
Estimation of Muscle Force with EMG Signals Using Hammerstein-Wiener Model 400
The electrodes are disposable plate ones which are suitable for this experiment. All data sets are captured from forearm muscle and force is recorded from the wrist of target. The sample rate of data sampling was 2000 hertz which is suitable for EMG signal. Sampling rate of force signal has been chosen the same to have a good and easy comparison between measured and simulated force signal.
Simulation Result Measured Force
350 300 250 Force (N)
159
200 150 100
IV. SIMULATION RESULTS
50
0
0.5
1
1.5
2 Time (sec)
2.5
3
3.5
4
Fig. 2 Simulation result and measured force for data set # 1
III. SIGNAL ACQUISITION PROCEDURE In this work, 7 data sets were recorded to train and test the model. These signals include both EMG and force signals which are captured simultaneously from 7 different people in nearly same age. Model training had been done with three sets of data. In these data recordings, we asked the targets to press the force handle 10 seconds and rest for 5 seconds, in one case. In the other case, we asked them to press the handle for 1 second simultaneously. We trained the model with both these data to cover all kind of force variations. To test the model, 4 more data sets are recorded. The analyses are done on these tests and results are represented. EMG signals are captured by AD Instrument’s power lab model 16 SP. The AD Instrument’s dual pod expander is used to achieve a fine signal. To measure the force signal, AD Instruments MLT003 Hand Dynamometer is used. This instrument is strain-gauge based isometric dynamometer with a linear response, in the 0-600 Newton range which is suitable for our experiments. When force is applied to the metal bars of dynamometer, the output is 100 N for every 2.37 mV which is scaled in our figures.
Simulation results for data set number 1, which was recorded to test the model, are depicted in figure 2. As can be seen, the error rate is low: the introduced model can easily follow the rises and falls of the force value. The estimated force for data set number 2 is shown in figure 3. For a better comparison between the simulation results and the measured force, curves averaged over 200 samples are depicted in figure 4. This averaging yields an output with fewer sudden jumps in value; such an output can easily be used in control applications, but at the cost of a delay of 200 samples.
Fig. 3 Simulation result and measured force for data set # 2

Fig. 4 Simulation result and measured force for data set # 2, averaged over 200 samples

V. MODEL VALIDATION
To validate the introduced model, the correlation coefficient (R-Value) between the measured and simulated signals was calculated and is listed in Table 1. These values were obtained from the data sets averaged over 200 samples. As can be seen, the R-Values are high and acceptable. The normalized root mean square error (NRMSE) is listed in the next column. Table 2 shows that the averaging process influences both the R-Value and the NRMSE of the result: both values for the averaged data set number 1 are better than those for the normal data set.
R. Abbasi-Asl et al.
Table 1 R-Value and NRMSE between measured and simulated values

Data set #    R-Value    NRMSE
1             94.75 %    0.218
2             94.12 %    0.252
3             98.15 %    0.271
4             98.12 %    0.281
Table 2 Comparison of R-Value and NRMSE between normal and averaged data set # 1

                         R-Value    NRMSE
Normal data set # 1      94.75 %    0.218
Averaged data set # 1    95.45 %    0.199
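The metrics in Tables 1 and 2 and the 200-sample averaging can be reproduced as follows. This is a sketch: normalizing the RMS error by the range of the measured signal is our assumption, since the paper does not state its NRMSE convention.

```python
import numpy as np

def moving_average(x, window=200):
    # Sliding average over `window` samples; as noted in the text,
    # using the averaged output introduces a delay of about `window` samples.
    return np.convolve(x, np.ones(window) / window, mode="valid")

def r_value(measured, simulated):
    # Pearson correlation coefficient between measured and simulated force.
    return np.corrcoef(measured, simulated)[0, 1]

def nrmse(measured, simulated):
    # RMS error normalized by the range of the measured signal
    # (one common convention; assumed here).
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return rmse / (np.max(measured) - np.min(measured))
```

Applied to a test record, `r_value` corresponds to the percentages in the R-Value column and `nrmse` to the NRMSE column.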
The scatter plots for data sets number 1 and number 2 are depicted in figures 5 and 6, respectively. These plots are shown for the averaged data sets. An acceptable correlation is achieved by the introduced model.
Fig. 5 Scatter plot of data set # 1
Fig. 6 Scatter plot of data set # 2

VI. CONCLUSIONS
A new method is introduced to estimate the force at the wrist from the EMG signal of the elbow muscle. The estimation is based on the Hammerstein-Wiener model. The nonlinear blocks in this model are based on a sigmoid network, which estimates the output values smoothly. Simulation results show that the model has a low error rate. In future work we will analyse the model for random EMG signals, which requires a new data acquisition process, and we will also try to reduce the NRMSE of the simulation results.
REFERENCES

1. S. E. Salcudean, “Control for teleoperation and haptic interfaces,” Control Problems in Robotics and Automation, ser. Lecture Notes in Control and Information Sciences, vol. 230, pp. 50–66, 1997.
2. D. Staudenmann, I. Kingma, A. Daffertshofer, D. F. Stegeman, J. H. van Dieën, “Improving EMG-Based Muscle Force Estimation by Using a High-Density EMG Grid and Principal Component Analysis,” IEEE Trans. on Biomed. Eng., vol. 53, no. 4, April 2006.
3. D. L. Misener and E. L. Morin, “An EMG to force model for the human elbow derived from surface EMG parameters,” Proc. IEEE Int. Conf. Engineering in Medicine and Biology Society, Montreal, QC, Canada, Sep. 1995, pp. 1205–1206.
4. F. Mobasser, J. M. Eklund, K. Hashtrudi-Zaad, “Estimation of Elbow-Induced Wrist Force With EMG Signals Using Fast Orthogonal Search,” IEEE Trans. on Biomed. Eng., vol. 54, no. 4, April 2007.
5. G. Rosa, M. A. Cavalcanti Garcia, M. N. Souza, “A novel electromyographic signal simulator for muscle contraction studies,” Computer Methods and Programs in Biomedicine, vol. 89, pp. 269–274, 2008.
6. V. T. Inman, H. J. Ralston, J. B. Saunders, B. Feinstein, and W. B. Wright, “Relation of human electromyogram to muscular tension,” Electromyogr. Clin. Neurophysiol., vol. 4, pp. 187–194, 1952.
7. C. De Luca, “The use of surface electromyography in biomechanics,” J. Appl. Biomech., vol. 13, pp. 135–163, 1997.
8. R. Bogey, J. Perry, and A. Gitter, “An EMG-to-force processing approach for determining ankle muscle forces during normal human gait,” Neural Syst. Rehabil. Eng., vol. 13, no. 3, pp. 302–310, September 2005.
9. M. Hayashibe, D. Guiraud, P. Poignet, “EMG-to-force estimation with full-scale physiology based muscle model,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, USA, October 11–15, 2009.
10. E. A. Clancy and N. Hogan, “Estimation of joint torque from the surface EMG,” Proc. IEEE Int. Conf. Engineering in Medicine and Biology Society, New York, Nov. 1991, vol. 13, pp. 877–878.
11. https://www.mathworks.com/help
12. J. Wang, T. Chen, L. Wang, “A blind approach to identification of Hammerstein-Wiener systems corrupted by nonlinear-process noise,” Proceedings of the 7th Asian Control Conference, Hong Kong, China, August 27–29, 2009.
13. J. Wingerden, M. Verhaegen, “Closed-loop subspace identification of Hammerstein-Wiener models,” Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, P.R. China, December 16–18, 2009.
Hip 3D Joint Mechanics Analysis of Normal and Obese Individuals’ Gait

M.H. Mazlan1, N.A. Abu Osman2, and W.A.B. Wan Abas2

1 Department of Electrical and Electronics Engineering, University Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
2 Department of Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— The study of the 3D joint angle has previously been shown to give better interpretations of joint and muscular activities. However, it has not been widely explored, and existing work is usually limited to normal subjects and level walking. The motivation for this study is that obesity and stair activity significantly influence the mechanics of a joint; these factors have been shown to affect the knee more than the hip, leading to knee osteoarthritis. Since to date no experimental study has reported hip 3D joint mechanics associated with these two factors, an analysis of the hip 3D joint angle of obese and normal individuals during stair ascent is proposed in order to aid hip 3D joint interpretation. Our hypothesis is that the difference in gait strategy between obese and normal individuals is hard to describe because of the mechanical adaptations of the hip joint; with the aid of hip 3D joint angle interpretations, it is believed that these phenomena can be successfully described and investigated. The results suggest that, at late stance of stair ascent, obese individuals adopt an alternative strategy of mainly hip resistance, compared with the normal subjects’ strategy of mainly hip stabilization; otherwise, both groups show a similar strategy of mainly hip stabilization. In addition, the obese subjects seem to absorb or generate energy with a systematically lower proportion of the 3D joint moments than normal individuals.

Keywords— 3D joint angle, 3D joint power, inverse dynamics, Euler/Cardanic angle, stair ascending.
I. INTRODUCTION

3D joint moments, angles and joint power vectors were initially used to investigate the gait strategy in normal adults’ and children’s gait [1][2]. However, research on this issue is still new, and most of it has focused on normal subjects during level walking. The area can therefore be expanded to other motion activities and to the obese community, which is believed to be associated with a large number of load cycles [3]. Theoretically, the joint power is the best quantity with which to describe muscle activity: a negative, null or positive joint power corresponds to absorbed, null or generated energy, coarsely associated with
eccentric, isometric or concentric muscular actions [4]. On its own, however, it seems insufficient to explain the joint and muscular activities in detail. The 3D joint angle is therefore introduced to aid the interpretation of the 3D joint power by describing the proportion of the joint moment that contributes to the movement (i.e. that propels, resists or stabilizes the joint). Moreover, with the inverse dynamics approach, the calculations become simpler, without the need for the musculo-skeletal modelling required in forward dynamics analysis. The focus of this study was to analyze the hip 3D joint mechanics of obese and normal individuals during stair ascent. It can be used to investigate the gait strategy employed and how neuromuscular adaptations are applied to the joint. This is of interest because of its vast contribution to understanding the mechanics of the normal and pathological hip joint under normal and abnormal loading conditions [5]. Although the human hip joint can withstand peak contact forces of up to 4 to 5 times body weight [6][7][8], the repetition of a high joint loading profile makes it more susceptible to injury and structural deterioration over time, and this situation can worsen in obese individuals. In the present study, stair ascent rather than stair descent was selected on the grounds that, even though both hip joint contact forces and moments are significantly higher for stair descent than for stair ascent, the effective contact areas during stair ascent are relatively small compared with stair descent, which leads to a high pressure distribution at the hip joint acetabulum (even with small joint contact forces) [6][9]. In addition, for obese individuals this task is quite demanding, since their motor functions are reduced [10].
Moreover, this task is considerably more important than level walking, because during level walking the hip joint angular velocity remains relatively small and the passive resistance of the skeleton to gravity is considerably less than one times body weight [7]. The human body automatically adapts to the outside environment for the sake of the skeletal health of the joint. For obese individuals, the compensatory mechanisms relative to their BMI (body mass
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 161–166, 2011. www.springerlink.com
M.H. Mazlan, N.A. Abu Osman, and W.A.B. Wan Abas
index) include slower walking speed, shorter stride length, an increased double-support phase and a decreased knee range of motion, which are true examples of kinematic adaptations of human locomotion [6][11][12][13]. These phenomena are associated with adaptations to minimize the metabolic energy expended per unit distance traveled [8], to reduce peak pressure distributions [5], to offset the total inertia generated [14][15] and, above all, to reduce muscle forces and moments [11][16][17][18]. It is therefore believed that this study will give a different perspective on describing the mechanical adaptations of the body in reaction to internal and external mechanical changes. In this context, the hypothesis is that it is hard to describe the difference in the gait strategy used by obese and normal individuals because of the mechanical adaptations of the joint; with the help of 3D joint angle interpretations, it is believed that these phenomena can be successfully described and investigated.
II. MATERIALS AND METHODS

A. Gait Experimental Protocol

10 healthy normal male subjects, 10 healthy obese male subjects, 10 healthy female subjects, and 10 obese female subjects, all without any history of lower limb injury, participated in this experiment. The normal subjects had an average age, height, weight and BMI of 23.4 ± 2.26 years, 1.6 ± 0.09 m, 58.45 ± 7.65 kg and 22.37 ± 1.38 kg/m2, respectively, while the obese subjects had an average age, height, weight and BMI of 23.65 ± 2.21 years, 1.62 ± 0.08 m, 84.88 ± 9.36 kg and 32.17 ± 1.46 kg/m2, respectively. The normal and obese groups were matched by BMI according to the World Health Organization standard: subjects were classified as normal for a BMI in the range 18.5 to 24.9 kg/m2 and as obese for a BMI in the range 30 to 34.9 kg/m2. The subjects provided informed consent in accordance with the policies of the University of Malaya’s Ethical Committee. All participants had to complete a medical history questionnaire and confirm that they had no current or past neurological or cardiovascular disorders, no orthopedic abnormalities or pain, had never been diagnosed with arthritis in any joint, were not diabetic, had no other health problems, were fit in all other respects and could walk without difficulty or pain. Twenty anthropometric parameters were taken for each subject in order to calculate the body segment parameters that would later be used in the inverse dynamics analysis [19][20]. Sixteen passive reflective markers were carefully stuck on bony landmarks of the subjects’ lower limbs, based on the VICON Skeleton Template for the basic lower-body model, in order to prevent underestimation of the predicted hip joint centre position and motion artifacts due to the variability of the subjects’ BMI [2][21][22][23]. The subject was then asked to climb a custom-made three-step stair with standard dimensions of 17 cm riser and 29 cm tread (Figure 1), as proposed for the design of stairs in public environments [6].

Fig. 1 Schematic diagram showing a subject ascending a three-step stair with the tested foot stepping on the second step fitted with a force platform
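The WHO-based grouping used above reduces to a small helper; a sketch (the function names are ours):

```python
def bmi(weight_kg, height_m):
    # Body mass index: mass divided by height squared (kg/m^2).
    return weight_kg / height_m ** 2

def who_group(bmi_value):
    # WHO ranges used for group matching in this study.
    if 18.5 <= bmi_value <= 24.9:
        return "normal"
    if 30.0 <= bmi_value <= 34.9:
        return "obese"
    return "other"
```

For the group means quoted above, bmi(58.45, 1.6) ≈ 22.8 falls in the normal range and bmi(84.88, 1.62) ≈ 32.3 in the obese range (slightly different from the reported mean BMIs, since a mean of ratios is not a ratio of means).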
Prior to the experiment, the subject was asked to walk through the experimental area until he or she felt relaxed and comfortable. Each experimental condition was measured five times. 3D motion trajectory data of the markers were recorded at a sampling rate of 50 Hz using a seven-camera VICON Nexus motion analysis system. The ground reaction forces and moments were measured simultaneously at a sampling rate of 200 Hz using one Kistler force plate and one AMTI force plate. The force plates were arranged as shown in figure 2. A starting point was selected so that the right foot would contact the force platform in a normal stride. A trial was discarded if the foot did not step completely on the force platform or if the subject made visually obvious stride alterations to contact the force platform.

B. Data Processing

All marker trajectory data were filtered to remove unwanted signals due to marker vibration or motion artifacts. Gap filling was also performed to predict the actual marker positions where the VICON system failed to capture them because the markers were invisible during recording. The trials were examined to obtain the clearest marker positions and to ensure the consistency of the gait patterns; trials with poor marker representation were omitted.
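Two of the processing steps, gap filling here and the finite-difference derivatives used later in the pipeline, can be sketched as follows. Linear interpolation and `np.gradient` are our stand-ins, since the exact algorithms used by VICON and the authors' MATLAB code are not specified in the text.

```python
import numpy as np

def fill_gaps(coord):
    # Fill NaN samples (marker invisible to the cameras) by linear
    # interpolation between the surrounding valid samples.
    t = np.arange(len(coord))
    out = np.asarray(coord, dtype=float).copy()
    gaps = np.isnan(out)
    out[gaps] = np.interp(t[gaps], t[~gaps], out[~gaps])
    return out

def derivatives(pos, fs):
    # Velocity and acceleration by central finite differences
    # (second-order accurate in the interior, from the Taylor expansion).
    dt = 1.0 / fs
    vel = np.gradient(pos, dt)
    acc = np.gradient(vel, dt)
    return vel, acc
```

For a quadratic displacement the central difference recovers the constant acceleration exactly at interior samples, which is why it is preferred over one-sided differences for trajectory data.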
After signal processing, the body segment parameters, joint centre positions, linear velocity and acceleration of each segment’s COG, and the segment angular velocity and acceleration were computed before the joint forces, moments, power and angle could be calculated. The body segment parameters were calculated through multiple regression equations [19][20]. The hip joint centre was predicted based on external body landmarks, 3D marker positions and prediction equations [19]. The linear velocity and acceleration of the segment’s COG were computed as the first and second derivatives of the displacement-time data, whereas the angular velocity and acceleration were computed as the first and second derivatives of the Cardanic-angle-time data. The first and second derivatives of the linear and angular components were calculated with finite difference methods derived from Taylor series expansions [24]. All calculations were implemented in MATLAB 2008. The joint forces and moments were computed successively, in the global reference frame and the segment reference frame respectively, by means of a bottom-up inverse dynamics approach based on vectors and Cardanic angles. Then, the hip 3D joint moments were re-sampled over 60% of the gait cycle and normalized to a dimensionless value [2]:
M_normalize = M / (m0 L0 g)   (1)
where m0 is the body mass, L0 the lower limb length and g the gravitational acceleration. After that, the 3D joint angle was computed [1, 2]:

α_Mω = tan⁻¹( ‖M × ω‖ / (M · ω) )   (2)
and re-sampled over 60% of the gait cycle. Because the numerator in (2) is a vector norm, the angle is defined as positive, in the range 0° to 180° [1]. Equation (2) is directly related to the 3D joint power by:

P = ‖M‖ ‖ω‖ cos α_Mω   (3)

The joint power was then re-sampled over 60% of the gait cycle and normalized to the dimensionless value P0 [25]:

P0 = (M · ω) / (m0 g^(3/2) L0^(1/2))   (4)

The interpretation of the 3D joint angle and its correlation with the 3D joint power is as follows [1][2]: (1) when the 3D angle is in the interval 0° to 60°, the joint is considered to be principally in the propulsion configuration; (2) when the 3D angle is in the interval 60° to 120°, the joint is considered to be principally in the stabilization configuration; (3) when the 3D angle is in the interval 120° to 180°, the joint is considered to be principally in the resistance configuration. In this experiment, the gait was evaluated only during the stance phase, i.e. the first 60% of the gait cycle, because during the swing phase (61% to 100% of the gait cycle) the hip joint power and moments tend to zero and extra caution is necessary when analyzing this phase [2]. The dimensionless values were introduced so that no further normalization or scaling procedure was necessary [2][25][26].

III. RESULT

A. Hip 3D Joint Moments

The hip 3D joint moments (Figure 2(a)-(c)) reveal approximately similar curve patterns for normal and obese individuals. However, normal individuals never show flexion or external rotation moments, whereas obese individuals do, for 10% and 26% of the gait cycle, respectively. Over the entire stance, the standard deviation is larger for normal than for obese individuals about the internal/external and abduction/adduction axes. Moreover, the dimensionless extension (-0.4736 vs. -0.3514 for obese and normal, respectively), internal rotation (0.2607 vs. 0.1376) and abduction (0.1483 vs. 0.0219) moments were significantly larger for obese individuals at pre-swing.

Fig. 2 (a), (b) and (c) Mean hip 3D joint moments (±SD), dimensionless, over 60% of the gait cycle about the flexion-extension, internal-external and abduction-adduction rotation axes, respectively
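Equations (1), (2) and (4) and the configuration rules can be checked numerically; a sketch (the vectors are illustrative, and the eq. (4) scaling follows the Hof-style reconstruction given above):

```python
import numpy as np

def joint_angle_3d(M, w):
    # Eq. (2): angle between joint moment M and angular velocity w,
    # alpha = atan2(||M x w||, M . w), positive in [0, 180] degrees.
    return np.degrees(np.arctan2(np.linalg.norm(np.cross(M, w)), np.dot(M, w)))

def configuration(alpha_deg):
    # Interpretation rules: propulsion / stabilization / resistance.
    if alpha_deg < 60.0:
        return "propulsion"
    if alpha_deg <= 120.0:
        return "stabilization"
    return "resistance"

def dimensionless_moment(M, m0, L0, g=9.81):
    # Eq. (1): moment normalized by body mass, limb length and gravity.
    return np.asarray(M) / (m0 * L0 * g)

def dimensionless_power(M, w, m0, L0, g=9.81):
    # Eq. (4): P0 = (M . w) / (m0 * g**(3/2) * L0**(1/2)).
    return np.dot(M, w) / (m0 * g ** 1.5 * L0 ** 0.5)
```

A moment aligned with the angular velocity gives α = 0° (pure propulsion), an orthogonal moment gives α = 90° (stabilization), and an opposed moment gives α = 180° (pure resistance).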
B. Hip 3D Joint Power

The hip 3D joint power (Figure 3) shows variability in the configurations of absorbed, null and generated energy, with normal and obese individuals showing approximately similar curve patterns. For normal individuals, however, a configuration of generated energy can be observed briefly (6% of the gait cycle) at late stance. Over the entire stance phase, the configuration of absorbed energy lasts significantly longer for normal (29% of the gait cycle) than for obese individuals (19% of the gait cycle). The peak 3D joint power is always larger for obese individuals, both at mid-stance (-0.4078 dimensionless) and at pre-swing (-0.3237 dimensionless), with obese individuals absorbing significantly more energy than normal individuals. Both groups, however, tend to produce low or null energy between 25% and 52% of the gait cycle.

Fig. 3 Mean hip 3D joint power (±SD), dimensionless, over 60% of the gait cycle

C. Hip 3D Joint Angle

The hip 3D joint angle (Figure 4) depicts approximately similar curve patterns for normal and obese individuals, revealing mainly a stabilization configuration that dominates 42% and 48% of the gait cycle for obese and normal individuals, respectively. Neither group ever shows a propulsion configuration during the entire stance. However, the peak hip 3D joint angle at early stance is slightly higher and briefer for normal (150°, i.e. 86.6% of the hip 3D joint moment contributing to the joint power) than for obese individuals (140°, i.e. 76.6% of the hip 3D joint moment contributing to the joint power). Furthermore, two different joint configurations, resistance and stabilization, exist at late stance for obese and normal individuals, respectively. For the first 30% of the gait cycle the 3D joint angle is slightly lower for obese than for normal individuals, and vice versa for the last 30% of the gait cycle.

Fig. 4 Mean hip 3D joint angle (±SD) over 60% of the gait cycle
IV. DISCUSSION
The 3D joint dynamics showed variability at initial stance and at the late double-stance phase for both obese and normal individuals. At initial stance, obese individuals tend to flex, externally rotate and abduct the hip joint, whereas normal individuals tend to extend, internally rotate and adduct it. These configurations were associated with a high absorbed energy, confirmed by a resistance configuration with 76.6% and 86.6% of the joint moment contributing to the joint power for obese and normal individuals, respectively. This phenomenon could be due to variability in whether the foot strikes the stairs with the forefoot or with the heel [27][28]. From observation during the experiment, normal individuals tended to strike the stairs with the forefoot, whereas obese individuals tended to strike them with the heel. This behaviour is believed to be associated with an effort to reduce the effective moment arm in order to reduce the net joint moments generated at the ankle, knee and hip joints successively. At mid-stance, an abduction moment was revealed in both obese and normal individuals, corresponding to a low generated energy confirmed by a stabilization configuration with only 14% and 37% of the joint moment contributing to the joint power, respectively. The low energy generated at this instant may be associated with the weight-acceptance activity of the hip joint, which requires little energy to produce such movement [10]. At the late stance phase, the flexion-extension, abduction-adduction and internal-external moments were considerably higher for obese than for normal individuals. For obese individuals, this situation corresponded to a high absorbed energy, confirmed by a resistance configuration with 77.7% of the joint moment contributing to the joint power. In contrast, for normal individuals it corresponded to a lower absorbed energy, confirmed by a stabilization configuration with only 27.5% of the joint moment contributing to the joint power. The higher energy absorbed by obese individuals may be due to a larger anterior mass transfer needed to push the body forwards and upwards to the next step of each stair ascent [1]; this phenomenon is believed to be associated with a higher joint moment produced at the hip for obese than for normal individuals [4]. In conclusion, the high variability of the joint dynamics of stair ascent for obese and normal individuals can be attributed to two main factors: (1) the individuals’ ability to control their centre of mass within a constantly changing base of support, and (2) the individuals’ capacity to adapt strategies to accommodate changes in the stair environment [28].

V. CONCLUSION

The hip 3D joint mechanics seem to be a good approach for investigating and highlighting the differences in the hip joint gait strategies used by obese and normal individuals during stair ascent. In general, both groups tend to stabilize the joint; at the late stance phase of stair ascent, however, obese individuals tend to resist rather than stabilize the hip joint. In addition, the neuromuscular adaptation as a compensatory mechanism relative to BMI is supported by the finding that only a small proportion of the high hip moment generated by obese individuals contributed to the joint power, in comparison with normal individuals, at the early and middle stance of stair ascent.
ACKNOWLEDGMENT

Special thanks to my supervisor, Associate Prof. Dr. Nor Azuan Abu Osman, my friends Radzi and Ambhiga, and Mr. Firdaus for their support, guidance and hard work.
REFERENCES

1. R. Dumas, L. Cheze, Hip and knee joints are more stabilized than driven during the stance phase of gait: an analysis of the 3D angle between joint moment and joint angular velocity, Gait and Posture, 2008, 28, 243-250.
2. W. Samson, G. Desroches, L. Cheze, R. Dumas, 3D joint dynamics of healthy children’s gait, Journal of Biomechanics, 2009, 42, 2447-2453.
3. M. O. Heller, G. Bergmann, G. Deuretzbacher, L. Durselen, M. Pohl, L. Claes, N. P. Haas, G. N. Duda, Musculo-skeletal loading conditions at the hip during walking and stair climbing, Journal of Biomechanics, 2001, 34, 883-893.
4. D. E. Robertson, D. A. Winter, Mechanical energy generation, absorption and transfer amongst segments during walking, Journal of Biomechanics, 1980, 13, 845-854.
5. H. Yoshida, A. Faust, J. Wilckens, M. Kitagawa, J. Fetto, E. Y.-S. Chao, Three-dimensional dynamic hip contact area and pressure distribution during daily activities, Journal of Biomechanics, 2005, 39, 1996-2004.
6. G. Bergmann, G. Deuretzbacher, M. Heller, F. Graichen, A. Rohlmann, J. Strauss, G. N. Duda, Hip contact forces and gait patterns during routine activities, Journal of Biomechanics, 2001, 34, 859-871.
7. T. A. Correa, K. M. Crossley, H. J. Kim, M. G. Pandy, Contributions of individual muscles to hip joint contact force in normal walking, Journal of Biomechanics, 2010, 43, 1618-1622.
8. F. C. Anderson, M. G. Pandy, Static and dynamic optimization solutions for gait are practically equivalent, Journal of Biomechanics, 2001, 34, 153-161.
9. W. A. Hodge, K. L. Carlson, R. S. Fijan, R. G. Burgess, P. O. Riley, W. H. Harris, R. W. Mann, Contact pressures from an instrumented hip endoprosthesis, Journal of Bone and Joint Surgery, 1989, 71, 138-186.
10. R. Reiner, M. Rabufetti, C. Frigo, Stair ascent and descent at different inclinations, Gait and Posture, 2002, 15, 32-44.
11. P. Devita, T. Hortobagyi, Obesity is not associated with increased knee joint torque and power during level walking, Journal of Biomechanics, 2003, 36, 1355-1562.
12. N. Hashimoto, M. Ando, T. Yayama, K. Uchida, S. Kobayashi, K. Negoro, H. Baba, Dynamic analysis of the resultant force acting on the hip joint during level walking, Artificial Organs, 2004, 29, 387-392.
13. B. McGraw, B. A. McClenaghan, H. G. Williams, J. Dickerson, D. S. Ward, Gait and postural stability in obese and non-obese pre-pubertal boys, Archives of Physical Medicine and Rehabilitation, 2000.
14. D. R. Pedersen, R. A. Brand, D. T. Davy, Pelvic muscle and acetabular contact forces during gait, Journal of Biomechanics, 1997, 30, 959-965.
15. F. Farahmand, F. Razaeian, R. Narimani, P. Hejazi Dinan, Kinematic and dynamic analysis of the gait cycle of above knee amputees, Scientia Iranica, 2006, 13, 261-267.
16. T. Foti, J. R. Davids, A. Bagley, A biomechanical analysis of gait during pregnancy, Journal of Bone and Joint Surgery, 2000, 82, 625-632.
17. P. M. Quesada, L. J. Mengelkoch, R. C. Hale, S. R. Simon, Biomechanical and metabolic effects of varying backpack loading on simulated marching, Ergonomics, 2000, 43, 293-309.
18. T. Sturmer, K. P. Gunther, H. Brenner, Obesity, overweight and patterns of osteoarthritis: the ULM osteoarthritis study, Journal of Clinical Epidemiology, 2000, 53, 307-313.
19. C. L. Vaughan, B. L. Davis, J. C. O’Connor, Dynamics of Human Gait, 2nd edition, Kiboho Publishers, South Africa, 1992, 15-43.
20. R. F. Chandler, C. E. Clauser, J. T. McConville, H. M. Reynolds, J. W. Young, Investigation of inertial properties of the human body (Aerospace Medical Research Laboratory Tech. Rep. No. 74-137), Wright-Patterson Air Force Base, Dayton, OH, 1975.
21. M. E. Harrington, A. B. Zavatsky, S. E. M. Lawson, Z. Yuan, T. N. Theologis, Prediction of the hip joint centre in adults, children and patients with cerebral palsy based on magnetic resonance imaging, Journal of Biomechanics, 2007, 40, 595-602.
22. R. K. Jensen, Changes in segment inertia proportions between 4 and 20 years, Journal of Biomechanics, 1989, 22, 529-536.
23. L. Ren, R. K. Jones, D. Howard, Whole body inverse dynamics over a complete gait cycle based only on measured kinematics, Journal of Biomechanics, 2008, 41, 2750-2759.
24. D. I. Miller, R. C. Nelson, Biomechanics of Sport, Lea & Febiger, Philadelphia, 1973.
25. A. L. Hof, Scaling gait data to body size, Gait and Posture, 1996, 4, 222-223.
26. B. W. Stansfield, S. J. Hillman, M. E. Hazlewood, A. M. Lawson, A. M. Mann, I. R. Loudon, J. E. Robb, Normalisation of gait data in children, Gait and Posture, 2003, 17, 81-87.
27. P. P. K. Lai, A. K. L. Leung, A. N. M. Li, M. Zhang, Three-dimensional gait analysis of obese adults, Clinical Biomechanics, 2008, 23, S2-S6.
28. D. H. Gates, Characterizing ankle function during stair ascent, descent, and level walking for ankle prosthesis and orthosis design, Master’s thesis, Boston University, 2004.
Impact Load and Mechanical Response of the Tibiofemoral Joint

A.A. Oshkour1, N.A. Abu Osman1, M.M. Davoodi1, M. Bayat1, Y.H. Yau2, and W.A.B. Wan Abas1

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia
2 Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, Malaysia
Abstract— The aim of the current study is the mechanical response of the tibiofemoral joint under impact with a rigid plate. A circular plate with an initial velocity strikes the tibiofemoral joint. All components were modelled as linear elastic materials. The cartilages were perfectly attached to the femur and tibia, while the menisci were allowed to move vertically; the circular plate also moved vertically. When the femur or tibia was the target of the rigid plate, it was left free to move vertically; otherwise it was fixed at the end. Computed tomography images were used to construct a 3-dimensional model of the tibiofemoral joint, which was analyzed in the commercial finite element software package ABAQUS v6.7. We found that during impact, depending on the impactor velocity, the contact pressure increases from zero to a maximum value and then descends back to zero. Moreover, the maximum contact pressures occur with a small time delay in the knee joint due to the nature of impact. Regardless of the type of loading, the results were the same apart from some percentage difference.
better underrating of knee joint especially during knee impact. Therefore for the current study of impacting knee with rigid plate is considered to understand of the knee respond to impact load.
II. MATERIAL AND METHODS
Two models were considered for the current study, as shown in Fig. 1. In model (a), a circular plate hits the end of the femur (P-F) with an initial velocity. In model (b), the circular plate impacts the tibia (P-T). To perform the current study, two main steps were carried out: (i) creating a 3D model of the TF joint and (ii) FEA.
Keywords— finite element analysis, 3-dimensional, contact pressure, compressive stress.
I. INTRODUCTION
The human knee joint is the most important and largest joint in the body [1]. It has a complex structure and is constructed from different articulations and components, namely the femoral cartilage, tibial cartilages, ligaments, and menisci [2]. The knee tolerates large forces and moments during different daily activities [3,4]; hence, knee injury is common. A better understanding of knee joint biomechanics can help prevent injury to the knee. To date, a variety of parameters have been analyzed via experimental measurement or finite element studies [5]. Finite element (FE) methods, a powerful engineering tool, have been employed by many researchers to understand knee biomechanics. Yildirim et al. used FE analysis to determine and show the contact location in the knee during high flexion [4]. Finite element methods have been applied to analyse the effects of the menisci and of meniscectomy, and the biomechanics of the ligaments, on knee joint behaviour [2,6,7]. 3-dimensional (3D) FE analysis has been performed to examine contact pressure and compressive stress in the healthy human knee in flexion and gait [1,8]. Knee joint impact is an event we face regularly during daily jumping, running, or exercise. Even though many studies have been done on the knee joint, there are still many unknown parameters that should be considered for
Fig. 1 The two tibiofemoral joint models

A. Creating the Tibiofemoral Joint
To create the 3D model, computed tomography (CT) was used. The CT scan was performed on a 24-year-old healthy female with a weight of 50 kg and a height of 162 cm. 988 images were captured using a multidetector Siemens machine at 512×512 pixels with a spatial resolution of 0.549 mm. The CT images were converted to the Digital Imaging and Communications in Medicine (DICOM) format and then imported into the Mimics software. Soft and hard tissues were
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 167–169, 2011. www.springerlink.com
identified using tissue-specific threshold values of 148-1872 and 125-700, respectively, and the tibia, femur, cartilages, and menisci were represented in the knee model (the maximum and minimum threshold values correspond to the range of grey values used to highlight pixels). After creating the knee components in Mimics, the 3D model was imported into the ABAQUS finite element software.
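The threshold rule described above is simple to state in code; a minimal sketch in plain Python (illustrative only, not the Mimics implementation), using the grey-value ranges quoted in the text:

```python
# Threshold-based segmentation: a voxel belongs to a tissue mask when its
# grey value lies inside the tissue-specific [lo, hi] range.
def tissue_mask(grey_values, lo, hi):
    return [lo <= g <= hi for g in grey_values]

row = [90, 130, 500, 1500, 2000]        # one row of CT grey values (made up)
print(tissue_mask(row, 148, 1872))      # hard-tissue range from the text
# [False, False, True, True, False]
```

The same call with the 125-700 range produces the soft-tissue mask; in practice the two masks are built per slice and stacked into the 3D model.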
started to retreat. Meanwhile, the femur reached its maximum displacement after 1.2 s in P-F, and the tibia after 0.9 s in P-T, as shown in Fig. 3.
B. Defining Material Properties
Properties were assigned to the TF components based on the literature [1]. The bones (tibia and femur) were defined as cortical bone, a linear elastic material with a Young's modulus of 11 GPa and a Poisson's ratio of 0.3. The femoral and tibial cartilages were considered linear elastic materials with a Young's modulus of 5 MPa and a Poisson's ratio of 0.46. The menisci were also considered elastic, with a Young's modulus of 59 MPa and a Poisson's ratio of 0.49.

C. Loading and Boundary Conditions
A rigid plate with an initial velocity of 60 mm/s struck the end of the femur in P-F and of the tibia in P-T. A distance of 20 mm was set between the rigid plate and its target (femur or tibia). The femur was allowed to move vertically in P-F and was fixed at its end in P-T; conversely, the tibia was fixed in P-F and allowed to move vertically in P-T. The cartilages were perfectly attached to the corresponding bones, and the motion of the menisci was restricted to the lateral and medial directions. Frictionless contact was assumed between all model components.
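The set-up above fixes when first contact can occur: a plate starting 20 mm from its target at a constant 60 mm/s reaches the bone after t = d/v. A quick check of that figure:

```python
# Time for the rigid plate to cross the initial 20 mm gap at a constant
# 60 mm/s, i.e. the earliest possible instant of bone contact: t = d / v.
gap_mm = 20.0
velocity_mm_s = 60.0

t_contact = gap_mm / velocity_mm_s
print(f"time to first contact: {t_contact:.3f} s")  # 0.333 s
```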
Fig. 3 Femur and tibia displacement

Fig. 2 Circular plate displacement
III. RESULTS AND DISCUSSION
Figs. 2 and 3 present the displacement of the circular plate, femur, and tibia in P-F and P-T. In both models the circular plate reached its maximum displacement after 0.4 s and then
Fig. 4 Contact pressure (a, b, e and f) and compressive stress (c, d, g and h)
Fig. 4 presents the maximum values of contact pressure and compressive stress in the femoral and tibial cartilages in both models. The maximum compressive stress and contact pressure occurred at 1.2 s in P-F and at 0.9 s in P-T.
IV. CONCLUSION
A 3D finite element analysis of a healthy right knee was conducted. Linear, elastic, isotropic material properties were assumed for the TF components, and the TF mechanical response to an impact load was assessed. Depending on the impactor velocity, the compressive stress started from 0, increased to a maximum value, and then descended back to zero because of the nature of impact. Regardless of the type of model, with the load applied to either the femur or the tibia, the results were almost the same.
2. Peña E, Calvo B, Martínez MA, Doblaré M (2006) A three-dimensional finite element analysis of the combined behavior of ligaments and menisci in the healthy human knee joint. J Biomech 39: 1686-1701.
3. Abu Osman NA, Spence WD, Solomonidis SE, Paul JP, Weir AM (2010) Transducers for the determination of the pressure and shear stress distribution at the stump-socket interface of trans-tibial amputees. 224: 1239-1250.
4. Mesfar W, Shirazi-Adl A (2005) Biomechanics of the knee joint in flexion under various quadriceps forces. The Knee 12: 424-434.
5. Abu Osman NA, Spence WD, Solomonidis SE, Paul JP, Weir AM (2010) The patellar tendon bar! Is it a necessary feature? 32: 760-765.
6. Guess TM, Thiagarajan G, Kia M, Mishra M (2010) A subject specific multibody model of the knee with menisci. Med Eng Phys 32: 505-515.
7. Peña E, Calvo B, Martínez MA, Palanca D, Doblaré M (2005) Finite element analysis of the effect of meniscal tears and meniscectomies on human knee biomechanics. Clin Biomech 20: 498-507.
8. Guo Y, Zhang X, Chen W (2009) Three-Dimensional Finite Element Simulation of Total Knee Joint in Gait Cycle. Acta Mech Solida Sin 22: 347-351.
Address of the corresponding author:
REFERENCES
1. Zhang X-s, Guo Y, Chen W (2009) 3-D Finite Element Method Modeling and Contact Pressure Analysis of the Total Knee Joint in Flexion. 3rd International Conference on Bioinformatics and Biomedical Engineering, Beijing, China, pp 1-3.
Author: Azim Ataollahi Oshkour
Institute: Department of Biomedical Engineering, University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email:
[email protected]
Investigation of Lung Lethargy Deformation Using Finite Element Method
M.K. Zamani1, M. Yamanaka2, T. Miyashita2, and R. Ramli1
1 Mechanical Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Graduate School of Creative Science and Engineering, Waseda University, Tokyo, Japan
Abstract— Lung cancer has become one of the deadliest diseases of the 20th century. Experts have identified various external and internal causes of this disease. Various types of treatment have been developed, taking into account the stage and condition of the cancer itself, such as radiation therapy, chemotherapy, and surgery. In this paper, a method called thoracoscopy is targeted for improvement by investigating the possibility of using the finite element method (FEM) to predict the deformation and movement of a tumor during the lung lethargy (deflation) process, as a preparation stage for deploying this surgical method. Thoracoscopy is suitable for stage-I cancer, as it creates only small incisions in the skin and has a high possibility of cure by removing the tumor. This paper used a commercial FEM tool to model the lung in 3 dimensions and, using tested material property data, compared a simulated lung lethargy process with experimental data. The results revealed several promising points for further developing this virtual capability and using it to predict deformation prior to surgery.
Keywords— FEA, Thoracoscopy, Lung, Cancer, Nonlinear.
I. INTRODUCTION
Lung cancer has one of the highest mortality rates of all forms of cancer. In many cases it cannot be detected until it is in an advanced stage, which makes treatment more difficult. Therefore, early detection and treatment may increase a patient's chances of survival. Many factors contribute to the causes of this disease; they may be divided into external and internal factors. Chemical substances, radiation, and viruses belong to the external family, and the first contributes the most. Most cancer patients are smokers, in whom lung cancer develops from the chemical substances created during smoking. Other factors, such as genetics, also contribute to the disease, but statistically not as strongly. According to MAKNA (Majlis Kanser Nasional), in the report Malaysian Cancer Statistics – Data and Figure, Peninsular Malaysia 2006, lung cancer is the 3rd most frequent cancer in Peninsular Malaysia, the 2nd most frequent in males, and the 6th in females [2]. Several treatments have been developed, depending on the stage of the cancer. This research focuses on a surgical method for treating cancer at an early stage by removing the tumor. There are two main types of surgery to remove a cancer tumor: open-chest surgery and thoracoscopic
surgery. Thoracoscopic surgery is a minimally invasive surgical method and has become increasingly popular. One popular form is VATS (Video-Assisted Thoracoscopic Surgery), which uses small incisions (0.5-1.5 cm) and an inserted endoscope that displays video while the surgeon operates. The advantages are less pain, a small wound that spares the patient's appearance, fast recovery after the operation, short hospital admission, and preservation of the patient's lung stamina owing to the small incision and invasion of the body. The disadvantage is that, since the surgery is performed remotely while observing the position on a video monitor, the surgeon must be well trained and experienced; from this point of view, this kind of surgery is highly complex. Moreover, since the CT scan used to locate the tumor in 3-D only works when the lung is full of air, the lethargy (deflated) state of the lung required for this surgical method results in unpredictable dislocation of the tumor compared with its initial location before lethargy. This adds to the difficulty of performing the surgery. Therefore, this research targets the above difficulties, with the objective of creating a safer method of predicting the tumor position in the lethargy state using the finite element method.
II. LETHARGY EXPERIMENT
A. Experiment Method and Result
The lethargy experiment was performed on a pig's lung. A location measurement device, the NDI Aurora, was used to track surface movement at indicated locations. During the lethargy process, air at constant pressure was blown into the lung and then released naturally to mimic lung deflation. The detailed set-up is explained below. A fixed stand was designed and built to hold the lung during the experiment; it is designed to fully support the lung's geometrical shape. The mechanism shown in Fig. 1 constrains the first branch of the bronchial tube without damaging the lung or its pleura. Two pieces of sheet metal separate the bronchial tube and pleura and slide for easy setting and fixation. The main trachea and the other locations to be measured were left free to move.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 170–174, 2011. www.springerlink.com
Fig. 1 a Fixed stand – Slide Out

Fig. 1 b Fixed stand – Slide In
Fig. 2 From left: head side, hip side, discrete locations

Three measuring modes, with 4 measuring points each, were located on the lung for each experiment as shown in Fig. 3: head side, hip side, and discrete locations, with the X-Z axes as shown in Fig. 2. Using a compressor, a total air pressure of 3250 Pa was sent into the lung and stopped once the pressure held steady. The plotted results are omitted, but they are discussed below; to confirm the plots, readers are welcome to contact the authors directly. Each measured location was initially offset to zero in each direction. Unlike real breathing, which removes almost all the air from the lung, this experiment lacks the squeezing action of the thoracic cavity and diaphragm. Hence, during the air-release process, some air remains inside the pulmonary alveoli, which makes the displacement converge. For all three measurement sets (head side, hip side, and whole lung), the displacement direction was toward the fixed location of the trachea. Moreover, the movement was larger the farther the measured location was from the trachea. This movement tendency is expected to appear inside the thorax as well; the only difference is that a real human lung is covered by its external frame.
Overall, owing to the low friction between the thorax and the lung during lethargy, human lung movement during lethargy is expected to be easier and larger than in this experiment, especially in the direction in which gravity acts. Another distinctive movement appeared in the head-side measurement: the Y-direction displacement initially moved downward, then returned toward its initial height, and finally ended higher than the initial location. This is attributed to the lung lobes: each lobe has its own independent bronchioles and blood vessels, and since the measured points crossed lobe boundaries, this displacement behavior arises from the independent lethargy process of each lobe. The same displacement behavior was observed at hip-side measurement points 1, 3, and 4 in the Y-direction.
III. FINITE ELEMENT MODELING AND SIMULATION OF LUNG
A. Finite Element Model
CT-scanned human lung images were taken as 3-D data, and the FEA model was created using a commercial pre-processor. However, there was a problem generating the CT image of the bronchial tube; hence, the bronchial tubes were replaced with blood vessels, while the material property used for them is that of the actual tested bronchial tube.
Fig. 3 Finite Element Model for Lung
Fig. 5 Average Value of Compliance Curve
Fig. 3 shows the blood vessels and parenchyma used for the finite element simulation.

B. Material Properties Modeling
Parenchyma and trachea taken from a pig's lung, dead within 48 hours, were tested using a rheometer (TA Instruments, model AR550). The test-piece temperature was kept constant at 36 °C. The results of this test are shown in Fig. 5. As understood from the material property tests, both parenchyma and trachea show viscoelastic behavior. To model the mechanical properties for calculation, the differential equation of the Zener model is used.
Fig. 4 Zener's mechanical model

Based on the above approximation of the mechanical model by Zener, the material properties were calculated as shown in Table 1.

Table 1 Material properties
Property                    Parenchyma   Trachea
Ge [kPa]                    0.768        4.67
Gl [kPa]                    0.242        10.3
β [s]                       0.0358       0.0199
Poisson ratio ν             0.35         0.45
Young's modulus E [kPa]     2.0736       13.543
Density ρ [kg/m^3]          0.87         0.95

C. Finite Element Simulation
Two types of simulation were executed in this work. The first imitated the lethargy experiment; its results were compared with the experimental data, and the differing and corresponding points are discussed. The second was also based on the lethargy process, but a few parameters were varied and the resulting deformations observed. In both the lethargy-imitation and parameter-variation simulations, the fixed stand was modeled as a rigid shell with its x-, y-, and z-axes fixed. A gravity load was applied downward onto the stand, and a pressure of 3245 Pa was applied normal to the lung elements. The nodes around the constrained area were fixed, and an assumed friction coefficient of 0.01 between the model and the stand was used. The results are discussed in a later part.

Table 2 Varying parameter values
Parameter                                            Range
Direction of gravity, θ [deg]                        15, 30, 45, 60, 75
Friction coeff. between lungs and thoracic cavity    0.001, 0.01, 0.05, 0.07, 0.1

Table 3 Equivalent stiffness
Trachea property, %         0        1        2        3        4        5
Ge [kPa]                    0.7680   0.8070   0.8460   0.8851   0.9241   0.9631
Gl [kPa]                    0.242    0.3426   0.4432   0.5437   0.6443   0.7328
β [s]                       0.0358   0.0356   0.0355   0.0353   0.0352   0.0350
Poisson ratio ν             0.3500   0.3510   0.3520   0.3530   0.3540   0.3550
Young's modulus E [kPa]     2.0736   2.1806   2.2877   2.3950   2.5024   2.6100
Density ρ [kg/m^3]          0.87     0.8708   0.8716   0.8724   0.87324  0.8265
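The Zener (standard linear solid) behaviour behind Tables 1 and 3 can be sketched numerically. This assumes the common relaxation form G(t) = Ge + Gl·exp(-t/β), with Ge the equilibrium modulus, Gl the relaxing increment, and β the relaxation time; the paper does not spell out this form, so treat it as an illustrative reading of the parameters:

```python
import math

# Zener (standard linear solid) relaxation modulus, assuming the form
# G(t) = Ge + Gl * exp(-t / beta). Parenchyma values from Table 1
# (kPa, kPa, s); the functional form is our reading, not stated in the paper.
GE_KPA, GL_KPA, BETA_S = 0.768, 0.242, 0.0358

def relaxation_modulus(t_s):
    """Shear relaxation modulus G(t) in kPa at time t in seconds."""
    return GE_KPA + GL_KPA * math.exp(-t_s / BETA_S)

print(round(relaxation_modulus(0.0), 3))   # instantaneous: Ge + Gl = 1.01
print(round(relaxation_modulus(1.0), 3))   # fully relaxed: tends to Ge = 0.768
```

With β around 36 ms, the parenchyma relaxes essentially completely within a fraction of a second, which is short compared with the lethargy process.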
D. Results for Varying Parameters
1. Changing degree of rotation

Fig. 6.a Changing degree of rotation at point 1

Fig. 6.b Changing degree of rotation at point 2

2. Changing friction coefficient

Fig. 6.c Changing friction coefficient at point 1

Fig. 6.d Changing friction coefficient at point 2

3. Varying equivalent stiffness of trachea

Fig. 6.e Changing equivalent stiffness at point 1

Fig. 6.f Changing equivalent stiffness at point 2
The parameters investigated in the parameter-variation simulation are shown in Table 2. The gravity direction was varied to investigate the effect of its acting direction during surgery; in a real operation the patient is laid down in the direction of the spine in order to minimize movement of the lungs. The second parameter was investigated because its value is difficult to obtain experimentally; an assumed value was therefore used, varied in discrete steps to see its effect on the displacement of the lung elements.
The third parameter investigated is the effect of trachea stiffness. As explained in the previous section, because of the difficulty of acquiring images of the bronchial tube, the blood-vessel images were assumed to be the trachea. In this simulation, an equivalent-stiffness method was used to investigate the stiffness effect in increments of 1% over the range 0-5%. The values are shown in Table 3.
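The numbers in Table 3 are consistent with shifting each parenchyma property p percent of the way toward the corresponding trachea property (p = 0 to 5) and recovering Young's modulus from the elastic relation E = 2G(1 + ν) with G = Ge. This blending rule is inferred from the tabulated values, not stated by the authors; a sketch that reproduces the table:

```python
# Reconstructing Table 3: each property is the parenchyma value shifted
# p percent of the way toward the trachea value, and Young's modulus is
# recovered from E = 2*G*(1 + nu) using the equilibrium modulus Ge.
# The blending rule is inferred from the numbers, not stated in the paper.
PARENCHYMA = {"Ge": 0.768, "Gl": 0.242, "beta": 0.0358, "nu": 0.35}
TRACHEA    = {"Ge": 4.67,  "Gl": 10.3,  "beta": 0.0199, "nu": 0.45}

def blended(p_percent):
    f = p_percent / 100.0
    props = {k: PARENCHYMA[k] + f * (TRACHEA[k] - PARENCHYMA[k])
             for k in PARENCHYMA}
    props["E"] = 2.0 * props["Ge"] * (1.0 + props["nu"])  # kPa
    return props

row = blended(1.0)  # the 1% column of Table 3
print(round(row["Ge"], 4), round(row["Gl"], 4), round(row["E"], 4))
# 0.807 0.3426 2.1806
```

Setting p = 100 recovers the pure trachea column of Table 1 (E = 2 × 4.67 × 1.45 = 13.543 kPa), which is what suggested the rule.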
Next, the effect of the trachea and of varying its equivalent stiffness was observed. The results show no significant effect in the Y-direction when varying the equivalent stiffness. However, displacement tends to increase over the 0-2% range as this value changes, and then slowly decreases.
IV. CONCLUSIONS
E. Simulation Results
The results of both the lethargy-experiment imitation and the parameter-variation simulations are discussed below.
(E.1) Simulation of Imitating Lethargy Experiment
Measured points were set at the same locations in both simulation and experiment. In the simulation, all lung elements moved toward the fixed nodes. The three points 1, 3, and 4 moved only at a very small rate, but point 2, near the diaphragm, moved largely; this is because points 1, 3, and 4 were located near the fixed location of the trachea. One point that matched the experimental result is that the farther a point was from the fixed location, the bigger its displacement. The difference from the experiment is that, instead of the converging displacement observed experimentally, the simulated displacement did not converge but stopped at one point. This is due to the absolute value of the pressure applied normal to the lung elements, which stops only when all acting forces balance; in real application, the pressure decreases as the air is released naturally. It is believed that combining this with a Computational Fluid Dynamics (CFD) simulation would yield a more realistic result.
(E.2) Results of Investigation by Varying Parameters
Referring to Fig. 6, displacement was observed for each rotation change 3.5 s after the simulation started. The Y-direction displacement of points 1 and 2 was not affected by the change of degree of rotation. However, the lung elements experienced a large decrease in Z-direction displacement at points 1 and 2: about 10 mm of displacement for each 15° of rotation. In the X-direction, displacement increased at approximately 10 mm per 15°, the opposite of the trend observed in the Z-direction. On the other hand, no significant change of displacement was observed at points 1 and 2 when varying the friction coefficient; even when the value was greatly increased, the maximum displacement observed was only 4 mm.
In this paper, an experiment performed on a pig's lung is presented. The experiment covered the lethargy behavior of the lung that is usually induced for certain lung surgery to remove cancer. The effect of various parameters was studied to identify the displacement sensitivity of lung tissue through finite element simulation. In the experiment, larger displacement was observed at locations far from the fixed point, and similar behavior was observed in simulation. However, the Z-direction displacement in simulation was bigger than in the experiment, owing to the different loading conditions: in simulation, a constant pressure was applied outside the lung, normal to the elements, whereas in the experiment the pressure decreased as air was released naturally. Then, to imitate the patient's lying direction during surgery, the lung was rotated to specific angles and simulated; approximately 10 mm of displacement occurred for each 15° of rotation, while no significant behavior was observed when varying the friction coefficient. Finally, when varying the equivalent stiffness of the bronchial tube, the displacement of the measured points increased in the X- and Z-directions but slowly decreased after reaching a certain maximum. This is a concrete reason to consider the stiffness of the bronchial tube when accurately predicting the displacement of a particular location. In the future, it is hoped that the experiment and material tests can be performed on human organs, and that a more accurate lung model can be developed in simulation to reproduce the precise displacements acquired from experiments.
REFERENCES
1. A.P. Santhanam, F.G. Hamza-Lup, and J.P. Rolland (2007) Simulating 3-D Lung Dynamics Using a Programmable Graphics Processing Unit. 11(5): 497-506.
2. MAKNA http://www.makna.org.my/
Knee Energy Absorption in Full Extension Landing Using Finite Element Analysis
M.M. Davoodi1,*, N.A. Abu Osman2, A.A. Oshkour2, and M. Bayat2
1 Department of Mechanical and Manufacturing Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
2 Department of Biomedical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
Abstract— Full extension landing produces a great amount of stress, deflection, and energy, which is damped by the contribution of bones, muscles, and the ground reaction force (GRF). This study focused on analyzing the deflection and energy absorption of the bone components in full extension landing by dynamic finite element analysis (FEA). The impact load and time were derived from our previous research on a female subject with a BMI of 18.8, who landed from three different heights (25, 50, and 75 cm) onto a calibrated force plate, recorded with a 200 Hz camera. The three-dimensional (3D) model was developed by reprocessing computed tomography (CT) images in the Mimics and Geomagic software packages in order to obtain smooth and uniform surfaces. The imported data were analyzed in the FEA software Abaqus v6.9, with the instantaneous impact load applied to the femur in the upward direction. The maximum deflection and energy absorption were measured. It is concluded that full extension landing creates large deflections in the knee components, which may produce high strain energy and bone-to-bone contact. The critical deflection and energy absorption should be considered for more reliable design and material selection of artificial knee components.
Keywords— Full extension landing, Energy absorption, Knee joint.
I. INTRODUCTION
The knee joint is the largest and most complex joint in the human body. It is held together mainly by four ligaments, which bind the bones in tension like strong ropes. The meniscus, located between the articulating surfaces, which are covered with hyaline cartilage to reduce friction, provides stabilization, load transmission, shock absorption, and joint lubrication [1-5]. Running and landing raise the impact loads to over 12 times body weight (BW) [6]. Landing, as a consequence of jumping, applies a great impact load to the body; its kinetic energy is absorbed and damped by the contribution of all the lower-extremity joints, the muscles, and the ground reaction force (GRF), and the potential for injury rises with increasing impact load [7-11]. Various parameters can be measured by FEA or experimental measurement [12]. The landing load takes place in less than half a second, and the impact absorption lasts between 150 and 300 ms depending on the landing condition. Measuring the landing load and interface pressure requires a proper measurement technique, suitable data, and analysis capability [13].
This instantaneous load in full extension landing produces significant deflection and energy in the knee components, which can cause bone-to-bone contact and great pain. Several studies have examined the energy absorption of lower-extremity joints in different genders during landing. Decker et al. (2003) studied sex differences in energy absorption patterns and found that women exhibited greater energy absorption than men and that the knee was the primary shock absorber for both genders. Schmitz et al. (2010) found that women absorbed 69% more knee energy than men. Zhang et al. (2000) found that the knee-joint extensors were consistent contributors to energy dissipation, which varied with ground stiffness and landing height. Recent research shows that high impact loads and injury risk occur when the knee joint locks in full extension in an upright standing position [16-18]; thus the highest stress and energy absorption may occur during full extension landing. In this research, the deflection and energy absorption of the knee components are studied. The purpose of this study is to find the energy absorption and maximum deflection in full extension landing by experiment and finite element analysis software (Abaqus v6.9). The experimental data were derived from our previous research on a female subject with a BMI of 18.8, who landed from three different heights onto the force plate in five trials. The data were then imported into Abaqus to measure the deflection of the whole knee and its deformable components, and the energy absorption of the knee structure. Calculating the maximum energy absorption and critical deflection could help determine a threshold safe landing height for a specified BMI, and support more reliable material selection for improving product life span during artificial knee component development.
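For orientation only, the kinetic energy arriving at touchdown from each drop height follows E = mgh. The paper gives the subject's BMI but not her mass, so the 50 kg below is a hypothetical value:

```python
# Touchdown kinetic energy for a free drop from height h: E = m * g * h.
# MASS_KG is hypothetical: the paper reports only the subject's BMI.
G = 9.81          # gravitational acceleration, m/s^2
MASS_KG = 50.0    # hypothetical subject mass, kg

for h_cm in (25, 50, 75):                 # drop heights used in the study
    e_joules = MASS_KG * G * (h_cm / 100.0)
    print(f"{h_cm} cm -> {e_joules:.1f} J")
```

This is the energy the lower extremity, muscles, and GRF together must dissipate within the 150-300 ms absorption window cited above.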
II. MATERIALS AND METHOD
The materials and method are divided into three subsections: developing a suitable and reliable 3D model for FEA, defining the material properties and behavior of the knee components in the software, and clarifying the boundary conditions and landing simulation method.
A. Geometrical 3D Model Development
The geometrical 3D information for the required models was developed by taking 2D images of the whole legs from a
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 175–178, 2011. www.springerlink.com
computed tomography (CT) scan. The data (990 images, resolution 512 × 512, slice thickness 0.539 mm) were acquired at 0° flexion from whole-leg scans of a 24-year-old woman with a body mass index (BMI) of 18.8, and reprocessed in the Digital Imaging and Communications in Medicine (DICOM) format to produce the image format required as input to Mimics version 13.1, which generates discrete 3D surface models according to tissue density. Geomagic Studio version 11.7 then takes the vertex points of the triangle meshes of the Mimics output surfaces as point clouds and creates fitted, smooth, non-uniform rational B-spline (NURBS) surfaces with a defined precision. Finally, the dynamic explicit solver of Abaqus v6.9 was employed to simulate the energy absorption of the whole knee structure in full extension landing (see Fig. 1).
Fig. 1 steps: (1) computed tomography (CT) scan; (2) Digital Imaging and Communications in Medicine (DICOM); (3) Materialise Mimics; (4) NURBS surface creation; (5) knee meshing in FEA; (6) FEM analysis
Table 1 Mechanical properties of knee components
Name        Young's modulus (MPa)   Poisson's ratio   Density (g/cm³)   Reference
Bone        11000                   0.3               1.8-2.1           [3,4]
Cartilage   5                       0.46              1.0               [5,6]
Meniscus    59                      0.49              1.5               [6,7]
C. Boundary Conditions and Simulation Model
Two methods could be considered for simulating the free-fall dynamic impulse load in this study. In the first case, an equivalent instantaneous pressure was applied in the upward direction to the tibia in the full extension condition. The femur was fully constrained in all DOF, but the tibia could move vertically (Z direction) to transfer the impact load to the whole structure. Surface-to-surface contact was defined between the tibial plateau cartilage, the articular cartilage, and the meniscus, while the contact areas between the tibial plateau, the femur, and their own cartilages were tied together. Both top sides of the femur were fully fixed, and both top sides of the tibia were fully fixed except vertically (Z direction), so that the bones could not move sideways during the impact. Alternatively, a plunger with the same body weight and velocity as the sample could be made to impact the Z-free tibia, imitating the same ground-impact condition on the human body when landing (see Fig. 2).
Fig. 1 General flowchart of finite element analysis

B. Material Behavior Consideration in FEM Analysis
The mechanical properties of the knee components were derived from selected articles (see Table 1). Bone density and mechanical properties vary along its length, but in this study the tibia and femur were simplified as a non-linear isotropic material. The meniscus is composed of fibrocartilage, an anisotropic nonlinear viscoelastic material whose viscoelastic behavior becomes constant after 1500 s [2]. Because an instantaneous load is applied in the dynamic analysis, this property variation was neglected. The articular cartilage and tibial plateau cartilage were assumed to be isotropic elastic materials.
Fig. 2 (a) Moving plate with specified velocity; (b) applying instantaneous load to the tibia
The applied instantaneous load is transferred to the tibial cartilage, femoral cartilage, and meniscus. Since the Young's modulus of the meniscus is almost 12 times that of the cartilages (see Table 1), its deflection and energy absorption are higher than the cartilages'. The damping system could be simplified as a
parallel spring-damper mechanism with different spring and damping coefficients. Load transmission first deforms the tibial cartilage (a viscoelastic material) and transmits the load to the meniscus, the main energy absorber with the higher damping capacity; the residual impact load is then transmitted to the femoral cartilage (see Fig. 3).
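The parallel spring-damper idealization described here is a Kelvin-Voigt element, whose transmitted force is F = k·x + c·v. A minimal sketch with hypothetical coefficients (not fitted to knee tissue):

```python
# Kelvin-Voigt element: a spring (k) and a damper (c) in parallel share the
# same deflection, so the transmitted force is F = k*x + c*v.
def kelvin_voigt_force(x_m, v_m_s, k_n_m, c_ns_m):
    """Force in newtons for deflection x (m) and deflection rate v (m/s)."""
    return k_n_m * x_m + c_ns_m * v_m_s

# Hypothetical coefficients, purely illustrative: 1 mm compression at 0.5 m/s.
print(round(kelvin_voigt_force(0.001, 0.5, k_n_m=2.0e5, c_ns_m=400.0), 6))
# 400.0
```

Chaining several such elements in series, one per tissue layer, reproduces the tibial cartilage → meniscus → femoral cartilage load path sketched in Fig. 3.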
Fig. 4 Energy absorption of the knee components in jumping
Fig. 3 Damping mechanism in knee components

The defined pressure on the tibia was applied over the cross-sectional area of the cut tibia, 381.83 mm², in order to obtain the force applied to the whole leg. Moreover, previous studies show that the knee contact force (KCF) in walking ranges from 1.6 to 3.5 times body weight (BW) at different walking speeds [8,9,10,11,14], and in jumping is 10-16 times body weight. The finite element data of the knee components are shown in Table 2.
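The pressure-to-force conversion this paragraph relies on is F = p·A over the 381.83 mm² cut section; inverted, it turns a knee contact force quoted in body weights into an applied pressure. A sketch with a hypothetical 50 kg mass (the paper states only the subject's BMI):

```python
# Convert a knee contact force of n_bw body weights into the equivalent
# pressure over the cut tibia cross-section (p = F / A; N/mm^2 == MPa).
# The mass is hypothetical: the paper reports only the subject's BMI.
G = 9.81            # m/s^2
AREA_MM2 = 381.83   # cut tibia cross-sectional area from the text

def bw_multiple_to_pressure_mpa(mass_kg, n_bw):
    force_n = mass_kg * G * n_bw
    return force_n / AREA_MM2

print(round(bw_multiple_to_pressure_mpa(50.0, 10.0), 2))  # 12.85
```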
The contact pressure produces a depression between the tibial cartilage, femoral cartilage and meniscus. The translation was measured between two nodes before and after the depression. The contact pressure was 23.05 MPa at the first step and rose to 128.7 MPa at the last step. After the pressure was applied, the translation was about 48% of the initial gap. Previous research shows that severe injury occurs if the distance between the tibial plateau and the femoral condyle decreases to 50% of its normal value [13], so the knee is almost at the threshold condition (see Fig. 5).
Table 2 Knee component's finite element data

Parameter     Tibia      Femur      Fem-Cart   Tib-Cart   Meniscus   Total
Volume (m³)   7.26E-05   1.08E-04   1.02E-05   7.55E-06   2.60E-06   2.01E-04
Mass (kg)     0.15236    0.22691    1.15E-02   7.55E-03   3.90E-03   0.40217
Nodes         49411      44534      10620      45579      1045       151189
Elements      27975      26266      5770       26681      415        87107

Fig. 5 Maximum penetration after impact load
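A quick consistency check on Table 2: the per-component volumes, node counts and element counts should sum to the stated totals. The per-component grouping below follows one plausible reading of the scattered table values; the totals themselves are independent of how the values are assigned to columns:

```python
parts = {            # (volume m^3, nodes, elements) per component
    "tibia":    (7.26e-05, 49411, 27975),
    "femur":    (1.08e-04, 44534, 26266),
    "fem-cart": (1.02e-05, 10620, 5770),
    "tib-cart": (7.55e-06, 45579, 26681),
    "meniscus": (2.60e-06, 1045, 415),
}

total_volume = sum(v for v, _, _ in parts.values())
total_nodes = sum(n for _, n, _ in parts.values())
total_elements = sum(e for _, _, e in parts.values())
print(total_volume, total_nodes, total_elements)
```

The sums reproduce the Total column (about 2.01E-04 m³, 151189 nodes, 87107 elements).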
III. RESULTS
Figure 4 shows the energy absorption of the whole knee structure during the impulse load step. Energy damping starts from zero and increases smoothly until 0.015 s; it then increases steadily until 0.023 s, reaching 1.5E6. Next, the energy rises exponentially to the maximum damping of 6.2E6 at 0.04 s. From the graph it can be concluded that the energy absorption of the whole structure increases roughly linearly until about half of the impact time, after which it rises sharply because of the non-linear material behavior.

IV. DISCUSSION
This study focused on a finite element model and simulation of the human leg in full extension landing, in order to determine the energy absorption and translation of the knee components. The results indicate that the loads applied in full extension jumping are quite high compared with flexed-knee jumping. If the impact load were applied completely straight to the tibia, without bone buckling or knee flexion, it would produce high energy absorption, stress, deformation and considerable pain. Although the viscoelastic behavior of the knee helps to damp the impact load and prevent failure, there is still high
M.M. Davoodi et al.
energy and deformation caused by the body weight during landing. People bend their knees unintentionally when landing from a jump in order to reduce the direct load; moreover, the momentum generated about the body's center of mass induces the knee-bending reaction. Previous research showed that maximum knee flexion is observed in jumping [14], and that the impact load in flexed downhill walking is made up of three types of reaction force, bone-to-bone compressive force, muscle force and ground reaction force, which contribute about 8, 6 and 2 times body weight respectively [15, 16]. In the present model, however, the load is applied directly to the knee without any flexion, which is considerably more severe than normal jumping, where flexion occurs unintentionally. It is evident that such a load drives the cartilage toward failure and produces serious stress.
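The three load contributions cited above for flexed downhill walking (bone-to-bone, muscle and ground reaction forces of about 8, 6 and 2 BW) can be combined for a subject of given mass; the 70 kg mass below is an illustrative assumption, not a value from the paper:

```python
contributions_bw = {"bone-to-bone": 8, "muscle": 6, "ground reaction": 2}
mass_kg, g = 70.0, 9.81        # illustrative subject mass, not from the paper

total_bw = sum(contributions_bw.values())
total_kn = total_bw * mass_kg * g / 1e3
print(f"total knee load: {total_bw} BW = {total_kn:.1f} kN")
```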
V. CONCLUSION
The present study has shown that full extension landing produces large deformation in the knee components, which might lead to bone-to-bone contact and serious injury compared with downhill walking and flexed-knee landing. The stress values and impact loads found here might be considered in the geometry design and reliable material selection of artificial knees in their ongoing development.
REFERENCES
1. Abu Osman N, Spence W, Solomonidis S, Paul J, Weir A (2010) The patellar tendon bar! Is it a necessary feature? Medical Engineering and Physics 32: 760-765
2. Pena E, Calvo B, Martinez M, Palanca D, Doblaré M (2005) Finite element analysis of the effect of meniscal tears and meniscectomies on human knee biomechanics. Clinical Biomechanics 20: 498-507
3. Beillas P, Papaioannou G, Tashman S, Yang K (2004) A new method to investigate in vivo knee behavior using a finite element model of the lower limb. Journal of Biomechanics 37: 1019-1030
4. Bitsakos C, Kerner J, Fisher I, Amis A (2005) The effect of muscle loading on the simulation of bone remodelling in the proximal femur. Journal of Biomechanics 38: 133-139
5. LeRoux MA, Setton LA (2002) Experimental and biphasic FEM determinations of the material properties and hydraulic permeability of the meniscus in tension. Journal of Biomechanical Engineering 124: 315
6. Li G, Lopez O, Rubash H (2001) Variability of a three-dimensional finite element model constructed using magnetic resonance images of a knee for joint contact stress analysis. Journal of Biomechanical Engineering 123: 341
7. Cohen Z, Henry J, McCarthy D, Mow V, Ateshian G (2003) Computer simulations of patellofemoral joint surgery. The American Journal of Sports Medicine 31: 87
8. Zhao D, Banks S, Mitchell K, D'Lima D, Colwell Jr C, et al. (2007) Correlation between the knee adduction torque and medial contact force for a variety of gait patterns. Journal of Orthopaedic Research 25: 789-797
9. Hurwitz D, Sumner D, Andriacchi T, Sugar D (1998) Dynamic knee loads during gait predict proximal tibial bone distribution. Journal of Biomechanics 31: 423-430
10. Kim H, Fernandez J, Akbarshahi M, Walter J, Fregly B, et al. (2009) Evaluation of predicted knee joint muscle forces during gait using an instrumented knee implant. Journal of Orthopaedic Research 27: 1326-1331
11. Mündermann A, Dyrby C, D'Lima D, Colwell Jr C, Andriacchi T (2008) In vivo knee loading characteristics during activities of daily living as measured by an instrumented total knee replacement. Journal of Orthopaedic Research 26: 1167-1172
12. Abu Osman N, Spence W, Solomonidis S, Paul J, Weir A (2010) The patellar tendon bar! Is it a necessary feature? Medical Engineering and Physics 32(7): 760-765
13. Abu Osman N, Spence W, Solomonidis S, Paul J, Weir A (2010) Transducers for the determination of the pressure and shear stress distribution at the stump–socket interface of trans-tibial amputees. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 224(8): 1239-1250
14. Zhao D, Banks S, D'Lima D, Colwell Jr C, Fregly B (2007) In vivo medial and lateral tibial loads during dynamic and high flexion activities. Journal of Orthopaedic Research 25: 593-602
15. McGinty JB, Burkhart SS (2003) Operative Arthroscopy, Volume 1. Lippincott Williams & Wilkins, Philadelphia. 995 p
16. Nigg B, MacIntosh B, Mester J (2000) Biomechanics and Biology of Movement. Human Kinetics Publishers
17. Piazza S, Delp S (2001) Three-dimensional dynamic simulation of total knee replacement motion during a step-up task. Journal of Biomechanical Engineering 123: 599
18. Kuster M, Wood G, Stachowiak G, Gachter A (1997) Joint load considerations in total knee replacement. Journal of Bone and Joint Surgery (British) 79: 109

Author: Majid Davoodi Makinejad
Institute: Universiti Putra Malaysia
City: Serdang, Selangor
Country: Malaysia
Email: [email protected]
Knee Joint Stress Analysis in Standing
A.A. Oshkour1, N.A. Abu Osman1, M.M. Davoodi1, M. Bayat1, Y.H. Yau2, and W.A.B. Wan Abas1
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia
2 Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, Malaysia
Abstract— The aim of the present work was to assess the stress distribution in the tibiofemoral joint during standing on both legs. The finite element method was employed. A compressive load of half body weight (250 N) was applied to the femoral head. The cartilages were perfectly attached to the femur and tibia; the femur and menisci were allowed vertical motion while the tibia was fixed at its distal end. The tibiofemoral joint components were modeled as linear elastic materials. Computed tomography images were used to create a subject-specific three-dimensional model of the tibiofemoral joint in the commercial finite element software package ABAQUS v6.7. Mechanical responses of the joint, such as contact pressures and compressive stresses, were evaluated. It was found that during standing the lateral side of the tibiofemoral joint carries more stress than the medial side; moreover, the contact area on the lateral side is larger than on the medial side.
Keywords— Tibiofemoral Joint, Finite Element Method, Contact Pressure, Compressive Stress.
I. INTRODUCTION
The knee is the largest and one of the most important joints of the human body and consists of two separate joints: the tibiofemoral (TF) joint and the patellofemoral (PF) joint [1, 2]. The knee joint comprises the femur, tibia, fibula, patella, tibial plateau cartilages, femoral cartilage, menisci and ligaments [3]. Knee injuries are common in young people and adults; hence, a good knowledge of knee biomechanics helps to keep the joint safe. To date, a variety of parameters have been analyzed through experimental measurements or finite element studies [4]. Finite element methods are well known across many areas of research (mechanical engineering, aviation, biomechanics, etc.) as powerful tools for analyzing the mechanical response of structures to different loadings, and many researchers have performed finite element analysis (FEA) of the knee joint [5, 6, 7, 8, 9]. Guo et al. carried out 3D FEA of the knee joint over the gait cycle [1]. Mesfar and Shirazi-Adl investigated the biomechanics of the human knee joint using FEA for flexion from 0° to 90° [10]. Explicit FEM was employed to analyze knee impact during hopping [11]. Zhang et al. calculated contact pressure and contact area for different parts of the knee compartment by 3D FEA of a healthy human knee joint and found that contact pressure and area increased with increasing flexion angle, that the contact area on the lateral cartilage was smaller than on the medial cartilage, and that the peak contact pressure varied on the medial meniscus while remaining constant on the lateral side [12]. Although many studies have addressed the knee joint, there are still many unknown parameters that could affect knee joint health. Standing is one of the postures encountered regularly during daily activity. Therefore, the aim of this study was to develop a three-dimensional (3D) finite element model to investigate the stress distribution in the knee joint during standing.
II. MATERIAL AND METHODS
Three steps were carried out to obtain the three-dimensional knee joint geometry: (i) obtaining the surface geometry of the healthy knee using computed tomography (CT), (ii) importing the CT images into Materialise Mimics (version 13.1) to construct the 3D model of the knee, and (iii) importing the 3D model into the FE software ABAQUS (v. 6.7).

A. Creating the Tibiofemoral Joint
CT images were obtained from the knee of a 24-year-old healthy female (mass 50 kg, height 162 cm). 988 images were captured using a Siemens multidetector machine at 512×512 pixels with a spatial resolution of 0.549 mm. The CT images were converted to the Digital Imaging and Communications in Medicine (DICOM) format and imported into Mimics. As shown in Figure 1, soft and hard tissues were identified using tissue-specific threshold windows of 148-1872 and 125-700 respectively (the maximum and minimum thresholds delimit the range of grey values used to highlight pixels), and the tibia, femur, cartilages and menisci were represented in the knee model. After creating the knee components in Mimics, the 3D model was imported into the ABAQUS finite element software.
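The threshold-based segmentation step described above (grey-value windows of 148-1872 for soft tissue and 125-700 for hard tissue) amounts to boolean masking of the image. A minimal sketch, with a toy array standing in for a CT slice:

```python
import numpy as np

SOFT_TISSUE = (148, 1872)   # grey-value window quoted for soft tissue
HARD_TISSUE = (125, 700)    # grey-value window quoted for hard tissue

def threshold_mask(image: np.ndarray, lo: int, hi: int) -> np.ndarray:
    """Boolean mask of pixels whose grey value lies in [lo, hi]."""
    return (image >= lo) & (image <= hi)

ct_slice = np.array([[90, 150, 800],
                     [300, 1900, 126]])   # toy stand-in for a CT slice
print(threshold_mask(ct_slice, *HARD_TISSUE).sum())
print(threshold_mask(ct_slice, *SOFT_TISSUE).sum())
```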
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 179–181, 2011. www.springerlink.com
A.A. Oshkour et al.
Fig. 1 Constructing of hard and soft tissue in Mimics

B. Defining Material Properties
The bony components (femur and tibia) and their cartilages (femoral and tibial) were modeled as linear elastic materials, with a Young's modulus of 11 GPa and a Poisson's ratio of 0.3 for bone, and a Young's modulus of 5 MPa and a Poisson's ratio of 0.46 for the cartilages. The menisci were also considered elastic, with a Young's modulus of 59 MPa and a Poisson's ratio of 0.49.

C. Loading and Boundary Conditions
A compressive vertical load of half body weight (P = 0.5·BW) was applied to the top of the femur. Femoral motion was limited to the Z direction while the tibia was completely fixed. The cartilages were perfectly attached to the corresponding bones, and the motion of the menisci was restricted to the lateral and medial directions, as shown in Fig. 2. Frictionless contact was assumed between the TF components.

Fig. 2 Circular plate displacement (load P = 0.5·BW applied to the femur; labeled components: femur, femoral cartilage, menisci, tibial plateau cartilages, tibia)

III. RESULTS AND DISCUSSION
Contact pressure and compressive stress are presented in Fig. 3. As shown in Table 1, the contact pressures in the lateral femoral cartilage, tibial cartilages and menisci are 1.5, 1.1 and 1.3 MPa, and on the medial side 1.2, 0.9 and 0.8 MPa, respectively. The compressive stresses are 1.3, 0.9 and 0.7 MPa on the lateral side and 0.7, 0.5 and 0.4 MPa on the medial side of the femoral cartilage, tibial cartilages and menisci, respectively. Consistent with the higher contact pressure and compressive stress on the lateral side of the TF compartment, the contact area on the lateral side is larger than on the medial side, as shown in Table 1.

Fig. 3 Contact pressure: (a) femoral cartilage, (c) tibial cartilages, (e) menisci; compressive stress: (b) femoral cartilage, (d) tibial cartilages, (f) menisci

IV. CONCLUSION
A 3D finite element analysis of a healthy right knee has been conducted. Linear, elastic and isotropic material properties were assumed for the TF components, and the TF mechanical response to a compressive load of 0.5·BW was assessed. The results showed larger stresses and a larger contact area on the lateral than on the medial side. Therefore, it was concluded that during standing on
both legs in full extension, the lateral compartment of the TF joint tolerates more stress than the medial compartment.

Table 1 Contact pressure, compressive stress and contact area of the TF joint

Quantity                                Medial       Lateral
Femoral cartilage contact pressure      1.2 MPa      1.5 MPa
Femoral cartilage compressive stress    0.7 MPa      1.3 MPa
Femoral cartilage contact area          286.3 mm²    337.0 mm²
Tibial cartilages contact pressure      0.9 MPa      1.1 MPa
Tibial cartilages compressive stress    0.5 MPa      0.9 MPa
Tibial cartilages contact area          141.9 mm²    208.9 mm²
Menisci contact pressure                0.8 MPa      1.3 MPa
Menisci compressive stress              0.4 MPa      0.7 MPa
Menisci contact area                    108.7 mm²    116.5 mm²
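The lateral-versus-medial asymmetry reported above can be quantified directly from the Table 1 values as lateral/medial ratios:

```python
table = {  # quantity: (medial, lateral); pressures in MPa, areas in mm^2
    "femoral cartilage contact pressure": (1.2, 1.5),
    "tibial cartilages contact pressure": (0.9, 1.1),
    "menisci contact pressure":           (0.8, 1.3),
    "femoral cartilage contact area":     (286.3, 337.0),
    "tibial cartilages contact area":     (141.9, 208.9),
    "menisci contact area":               (108.7, 116.5),
}

for name, (medial, lateral) in table.items():
    print(f"{name}: lateral/medial = {lateral / medial:.2f}")
```

Every ratio exceeds 1, consistent with the conclusion that the lateral compartment carries more load during two-legged standing.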
REFERENCES
1. Guo Y, Zhang X, Chen W (2009) Three-dimensional finite element simulation of total knee joint in gait cycle. Acta Mech Solida Sin 22: 347-351
2. Masouros SD, Bull AMJ, Amis AA (2010) (i) Biomechanics of the knee joint. JOT 24: 84-91
3. Goldblatt JP, Richmond JC (2003) Anatomy and biomechanics of the knee. Oper Techn Sport Med 11: 172-186
4. Abu Osman NA, Spence WD, Solomonidis SE, Paul JP, Weir AM (2010) The patellar tendon bar! Is it a necessary feature? 32: 760-765
5. Gruionu LG, Gruionu G, Pastrama S, Iliescu N, Avramescu T (2009) Contact studies between total knee replacement components developed using explicit finite elements analysis. In: Proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention, London, UK. Springer-Verlag, pp 316-322
6. Halloran JP, Clary CW, Maletsky LP, Taylor M, Petrella AJ, et al. (2010) Verification of predicted knee replacement kinematics during simulated gait in the Kansas knee simulator. Journal of Biomechanical Engineering 132: 081010-081016
7. Halloran JP, Petrella AJ, Rullkoetter PJ (2005) Explicit finite element modeling of total knee replacement mechanics. J Biomech 38: 323-331
8. Katsuhara T, Sakaguchi J, Hirokawa S (2006) A 3D model analysis for developing multifunctional knee prosthesis. In: Magjarevic R, Nagel JH (eds) World Congress on Medical Physics and Biomedical Engineering 2006, IFMBE Proceedings, Seoul, Korea. Springer Berlin Heidelberg, pp 3177-3181
9. Li G, Gil J, Kanamori A, Woo SL-Y (1999) A validated three-dimensional computational model of a human knee joint. J Biomech Eng 121: 657-662
10. Mesfar W, Shirazi-Adl A (2005) Biomechanics of the knee joint in flexion under various quadriceps forces. The Knee 12: 424-434
11. Beillas P, Papaioannou G, Tashman S, Yang KH (2004) A new method to investigate in vivo knee behavior using a finite element model of the lower limb. J Biomech 37: 1019-1030
12. Zhang X-s, Guo Y, Chen W (2009) 3-D finite element method modeling and contact pressure analysis of the total knee joint in flexion. In: 3rd International Conference on Bioinformatics and Biomedical Engineering, Beijing, China, pp 1-3

Address of the corresponding author:
Author: Azim Ataollahi Oshkour
Institute: Department of Biomedical Engineering, University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Mechanical Behavior of in-situ Chondrocyte at Different Loading Rates: A Finite Element Study
E.K. Moo1, N.A. Abu Osman1, B. Pingguan-Murphy1, S.K. Han2, S. Federico3, and W. Herzog2
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Human Performance Laboratory, Faculty of Kinesiology, University of Calgary, Alberta, Canada
3 Department of Mechanical Engineering, Faculty of Engineering, University of Calgary, Alberta, Canada
Abstract— Previous findings indicated that in-situ chondrocytes die readily following impact loading, but remain essentially unaffected when loaded at the same magnitude at a slow (non-impact) loading rate. The current study aimed to identify the causes of cell death in impact loading by quantifying chondrocyte mechanics when cartilage was compressed nominally by 5% at different loading rates. Multiscale modeling techniques were used. Cartilage was modeled accounting for collagen stiffening in tension. Chondrocytes were modeled including the cell membrane, pericellular matrix, and pericellular capsule. The results showed that cell deformations were lowest and cell fluid pressures highest for the highest (impact) loading rate. Tangential strain rates on the cell membrane were highest at the highest loading rate and occurred primarily in superficial tissue cells. Since cell death following impact loading was primarily observed in superficial zone cells, we speculate that cell death in impact loading is caused by the high membrane strain rates observed in these cells under simulated impact loading conditions.
Keywords— Finite Element Modeling; impact; chondrocyte; cell mechanics; cell death.
I. INTRODUCTION
Osteoarthritis (OA) is a disease involving chronic degeneration of a joint associated with loss of articular cartilage [1]. The onset of OA may be due to impact to a joint causing chondrocyte (cartilage cell) death [2, 3], and thus a reduced capacity of cartilage to adapt and repair [4]. Many reports have focused on the effects of impact loading of a joint on cartilage tissue damage and chondrocyte death [5, 6, 7, 8]. These studies demonstrated that chondrocyte death depends greatly on the rate and magnitude of loading [5, 6, 9, 10], and that cell death occurs primarily in the superficial zone and seldom extends into the middle and deep zones of cartilage. For example, impact loading of rabbit patellofemoral joints resulted in vast chondrocyte death in the superficial zone cartilage, whereas with loading of the same magnitude applied through muscular contraction, at a peak rate approximately 100 times slower than for the impact loading, chondrocytes remained unaffected. However, it is not known why high rates of joint loading cause chondrocyte death, while equivalent magnitudes of loading applied slowly do not. Furthermore, although the transient deformations of cells during muscular loading have now been studied for the first time in an intact knee of a live animal [11], this would not be possible for impact loading (about 1-2 ms in duration), for lack of temporal resolution of the two-photon microscopy approach required for studying cells inside intact joints. Therefore, the purpose of this study was to investigate the effects of different loading rates on chondrocyte mechanics in cartilage explants using a theoretical approach based on finite element modeling. Changes in cell shape and fluid pressure were studied in detail as a function of loading rate. We hypothesized that:
1. Cell membranes rupture when exposed to excessive tangential strain, causing cell necrosis.
2. Excessive fluid pressure induces apoptosis [12, 13].

II. METHODOLOGY
In order to model the mechanical behavior of chondrocytes, we developed a two-scale model of the articular cartilage and embedded cells [14, 15, 16, 17]. The macroscale model was used to represent the cartilage tissue; the microscale model, to represent individual cells.

A. Macroscale Model
Articular cartilage was modeled as a 1 mm-thick cartilage tissue layer with a 2 mm-thick underlying bone (Fig. 1). Cartilage was assumed to be a biphasic material consisting of a solid phase and a fluid phase [17, 18, 19, 20]. The solid phase was taken as hyperelastic because of the large deformations that are normally encountered. The strain potential function of the solid phase was assumed to be an additive function of isotropic (properties arising from proteoglycans and ions) and
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 182–186, 2011. www.springerlink.com
anisotropic (properties arising from different collagen fiber orientations) properties [Eq. 1].

Fig. 1 Articular cartilage tissue (macroscale model) and chondron (microscale model) axisymmetric mesh model

W = W_iso(I1, I2, I3) + W_aniso(I4)    (1)

The isotropic part was taken from Holmes & Mow's strain potential function [17], while the anisotropic part was modeled based on concepts introduced by Holzapfel et al. 2000:

W_iso = α0 exp[α1(I1 − 3) + α2(I2 − 3) − β ln I3]    (2)

W_aniso = (k1 / 2k2) Σ_{i=4,6} {exp[k2(Ii − 1)²] − 1}    (3)

where I1, I2, I3 are the three principal invariants of the right Cauchy–Green deformation tensor C; α0, α1, α2 and β (β = α1 + 2α2) are material constants [16, 18]; k1 is a stress-like material constant (k1 = 0.15 MPa in the superficial zone; 0.23 MPa in the middle and deep zones); and k2 is a dimensionless parameter (= 0.8393) [21]. Permeability, k, was assumed to be strain-dependent [16, 18]:

k = k0 [(1 + e)/(1 + e0)]^L exp[M(J² − 1)/2]    (4)

where L and M are non-dimensional material parameters that need to be determined experimentally [18]; k0 is the initial permeability; and e0 is the initial void ratio (= 0.42). The initial permeabilities [22] and elastic material properties [23] of articular cartilage were assumed to be depth-dependent. Bone was assumed to be biphasic with constant permeability (2.0; 0.2; 0.176), as the focus of this study was not on the bone's behavior.
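A numerical sketch of the constitutive relations above (Eqs. 2-4). The form of Eq. 4 follows the Holmes-Mow strain-dependent permeability commonly paired with this model; k2 = 0.8393 and e0 = 0.42 are quoted in the text, while all other parameter values in the example calls are placeholders:

```python
import math

def w_iso(i1: float, i2: float, i3: float,
          a0: float, a1: float, a2: float) -> float:
    """Holmes-Mow isotropic strain potential, Eq. (2), with beta = a1 + 2*a2."""
    beta = a1 + 2.0 * a2
    return a0 * math.exp(a1 * (i1 - 3.0) + a2 * (i2 - 3.0) - beta * math.log(i3))

def w_aniso(i4: float, k1: float, k2: float = 0.8393) -> float:
    """Fiber contribution of one family (Holzapfel-type), Eq. (3)."""
    return k1 / (2.0 * k2) * (math.exp(k2 * (i4 - 1.0) ** 2) - 1.0)

def permeability(k0: float, e: float, j: float,
                 big_l: float, big_m: float, e0: float = 0.42) -> float:
    """Strain-dependent permeability, Eq. (4)."""
    return k0 * ((1.0 + e) / (1.0 + e0)) ** big_l * math.exp(big_m * (j ** 2 - 1.0) / 2.0)

# Undeformed reference state: I1 = I2 = 3, I3 = 1, I4 = 1, J = 1, e = e0.
print(w_iso(3.0, 3.0, 1.0, a0=0.1, a1=0.5, a2=0.2))  # reduces to a0
print(w_aniso(1.0, k1=0.15))                          # fibers unstretched -> 0
print(permeability(1e-15, 0.42, 1.0, 4.0, 1.0))       # reduces to k0
```

In the undeformed state the fiber energy vanishes and the permeability reduces to its initial value, a quick sanity check on the reconstructed forms.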
B. Microscale Model
The microscale model consists of biphasic chondrons embedded in the extracellular matrix (ECM). Chondrons are chondrocytes surrounded by a pericellular matrix (PCM) and a pericellular capsule (PC) [24]. We assumed that cell, PCM, PC and ECM are perfectly bonded [19]. The dimensions of the ECM in the microscale model were designed to be ~6 times the cell radius [19]. Chondrocytes were modeled as ellipsoids of revolution (spheroids) and were assumed to be biphasic continua [17, 19]. Chondrocytes in different cartilage zones exhibit different mechanical properties: superficial zone cells are stiffer (E = 1.59 kPa, ν = 0.34) than middle/deep zone cells (E = 0.69 kPa, ν = 0.34) [25]. The solid phase of the chondrocytes was assumed to be isotropic, homogeneous and linear. The depth-dependent geometry of the chondrocytes was obtained from previous studies [17]. Chondrocytes in the superficial zone lack the PC [24]; therefore, superficial zone chondrocytes were modeled as being surrounded by the PCM only. The PCM layer in the superficial zone was assumed to be 1.5 µm thick and biphasic. The solid phase of the PCM layer (E = 40 kPa) [26] was assumed to be non-linear, isotropic and homogeneous, as the PCM does not show preferential collagen fiber orientation [24]. The thickness of the PCM surrounding the cells in the middle and deep zones was assumed to be 2.0 µm. Middle and deep zone cells are also surrounded by the PC, which was modeled as a biphasic, homogeneous but linearly transversely isotropic composite with a thickness of 1.0 µm [27, 28]. The permeability of the cells, PCMs and PCs was also assumed to be deformation-dependent and non-linear [Eq. 4] [16]. The initial permeability, k0, and initial void ratio, e0, of the PCM and PC were assumed, respectively, as [16, 29]: k0 = 4 × 10^… and 1.079 × 10^…; e0 = 4.0 and 4.0. Besides the PC and PCM, chondrocytes were also assumed to be surrounded by a biphasic cell membrane. The cell membrane was modeled as 10 nm thick, with a permeability ~5 orders of magnitude lower than that of the cell [30].

C. Numerical Implementation Procedures
The commercial finite element software ABAQUS v6.8 was used to implement the model. Quadratic pore pressure elements were used to represent the biphasic nature of the cartilage tissue and the chondrocytes. The top surface of the
E.K. Moo et al.
cartilage tissue (macroscale) model was subjected to a nominal strain of 5% under confined compression conditions at four loading rates (0.167 %/s, 5 %/s, 50 %/s, 500 %/s). The top articular surface was assumed to have zero pore pressure (to allow for exudation of interstitial fluid). The cellular model was placed at the location of interest in the tissue model. In this study, cellular responses at three locations were studied: the superficial zone (normalized depth 0.9), middle zone (0.5), and deep zone (0.1). The time-varying solid displacements and fluid pressures at each node of the macroscale model were used as boundary conditions for the microscale model.

III. RESULTS
Changes in cell height, cell width, cell volume, hydrostatic pressure, and tangential strain on the cell membrane were compared across the different loading rates for cells located at the three positions (superficial, middle and deep zone) (Figs. 2-4). The cell compressive strain (Fig. 2) was maximal at the 5 %/s loading rate and smallest when loaded at the impact loading rate (500 %/s). Fluid pressure changes in the cells (Fig. 3) increased about 50 times when loading rates were increased from 0.167 %/s to 500 %/s. The rate of change of the cell membrane tangential strain (Fig. 4) was highest for the highest loading rate for the superficial, but not the middle and deep zone, cells.

Fig. 2 Absolute peak Lagrange-Green cell compressive strain for cells located in the superficial, middle and deep zones and subjected to four different strain rates

Fig. 3 Absolute peak hydrostatic cell fluid pressure change for cells located in the superficial, middle and deep zones and subjected to four different strain rates

Fig. 4 Absolute peak tangential strain rate on cell membrane for cells in the superficial, middle and deep zones of the cartilage tissue exposed to four different strain rates

IV. DISCUSSION
In this study, a 5% confined compression test was carried out numerically. A 5% compression was chosen because the purpose of this study was to understand why articular cartilage chondrocytes may die when exposed to impact loading but survive when exposed to muscular loading of similar magnitude; the strain magnitude was therefore chosen very small so that it should not be injurious by itself. Previous experimental studies showed that 5% compression applied to cartilage tissue in ~0.3 s resulted in cell death in the superficial zone [7], which made 5% compression a reasonable choice for the current study. We found that superficial cells are more susceptible to changing loading rates than cells in the middle and deep zones. It is interesting to note, as an aside, that the maximum compressive cell strain reached values of up to ~20%, and tangential membrane strain values of up to ~25%, in superficial zone chondrocytes, while the applied nominal strain was merely 5%. Compressive cell strains were lowest for the highest (impact) loading rate, which was expected because of the viscoelastic properties of the tissue (Fig. 2). Nguyen et al. found that compressive cell strains of approximately 78% were required to kill chondrocytes [31]. Such values were
not even approximated using our theoretical loading protocol. It has been shown that isolated chondrocytes can be killed by hydrostatic pressures of 5-10 MPa [12, 13]; however, chondrocytes protected by the PCM can resist pressures as high as 50 MPa [13]. The highest pressures in our study were produced at the fastest (impact) loading rates, but reached only approximately 5 MPa (Fig. 3), suggesting that chondrocytes would not have been killed in our impact loading scenario. The maximal tangential strain rates on the cell membrane were obtained for the highest loading rates, but only for the superficial zone cells (Fig. 4). This observation is consistent with experimental findings showing cell death following impact loading primarily in superficial zone cells [7, 8]. Therefore, we speculate that the tangential membrane strain rate plays an important role in chondrocyte death associated with impact loading. Experimental studies suggested that most cell death in articular cartilage caused by high-rate compression occurs through apoptosis rather than necrosis [32]; therefore, chemical signalling and cellular receptors are likely involved. It was found that integrin α5β1 (a transmembrane protein receptor in chondrocytes) is important in providing matrix survival signals to the cell by interacting with the ECM [33]. If the receptor is blocked chemically by antibodies, chondrocytes die of apoptosis, providing strong evidence for the importance of integrin receptors to cell viability. Besides providing survival signals to the cell, integrin is also involved in sensing mechanical stresses applied to the cell and triggers cellular adaptive responses [34]. Mechanical forces are thought to converge on transmembrane integrin receptors that link the ECM to the intracellular cytoskeleton [35].
Based on our theoretical considerations and previously published experimental results, we propose that chondrocyte death in impact loading occurs when the tangential membrane strain rate exceeds a threshold value. This causes excessive forces to be transmitted to the cell membrane, which mechanically disrupt transmembrane integrins by breaking the linkages that protect the protein subunits. In this manner, integrins are denatured as they lose their native configuration. Since the integrins no longer function properly, chondrocytes cannot receive the matrix signals that are essential for cell adaptation and survival. Also, cells that are unable to sense mechanical stresses and adapt accordingly will be susceptible to injury. These two effects may work synergistically and cause cell death by apoptosis. One limitation of the current cartilage/chondrocyte model is that the cell-matrix boundary is assumed to be perfectly bonded. In reality, cells are
anchored to the extracellular matrix through focal adhesions [36]. Another limitation is that the anisotropic material parameter, k2, used in modeling the cartilage tissue is only an approximation based on a value used to describe arteries.
V. CONCLUSIONS
The simulation results obtained in this study suggest that cell death following impact loading might be caused by apoptosis linked to the disruption of transmembrane protein receptors (e.g. integrins) by excessive cell membrane strain rates. This may lead to failure of transmembrane signaling and ultimately to chondrocyte death.
ACKNOWLEDGMENT
The author would like to thank Mr. Jan Pajerski and Dr. Amr Guaily for their help in the numerical implementation of the simulation model. This study was supported by grants from the CIHR, the Canada Research Chair Programme, and the AHFMR Team Grant on osteoarthritis.
REFERENCES
1. Brandt KD et al. 1986. J Rheumatol 13:1126-1160
2. Simon WH et al. 1976. J Bone Jt Surg Am 58:517-526
3. Blanco FJ et al. 1998. Arthritis Rheum 41:284-289
4. Mankin HJ. 1963. J Bone Joint Surg Am 45:529-540
5. Ewers BJ et al. J Orthop Res 19(5):779-784
6. Repo RU & Finlay JB. 1977. J Bone Joint Surg Am 59(8):1068-1076
7. Milentijevic D et al. 2003. J Biomech Eng 125:594-601
8. Milentijevic D & Torzilli PA. 2005. J Biomech 38:493-502
9. Kurz B et al. 2001. J Orthop Res 19(6):1140-1146
10. Quinn TM et al. 2001. J Orthop Res 19(2):242-249
11. Abusara Z et al. 2011. J Biomech (In Press)
12. Ismam N et al. 2002. Journal of Cellular Biochemistry 87:266-278
13. Nakamura S et al. 2006. J Orthop Res 24(4):733-739
14. Guilak F & Mow VC. 1992. ASME Advances in Bioengineering BED-20:21-23
15. Mow VC & Guilak F. 1993. In: Bell E (Ed.), Tissue Engineering: Current Perspectives. Birkhauser, Boston, pp. 128-145
16. Wu JZ & Herzog W. 2000. Annals of Biomedical Engineering 28:318-330
17. Han SK et al. 2007. Biomech Model Mechanobiol 6:139-150
18. Holmes MH & Mow VC. 1990. J Biomech 23:1145-1156
19. Guilak F & Mow VC. 2000. J Biomech 33:1663-1673
20. Federico S et al. 2005. J Biomech 38(10):2008-2018
21. Holzapfel GA et al. 2000. J Elasticity 61:1-48
22. Maroudas A & Bullough P. 1968. Nature 219:1260-1261
23. Wang CCB et al. 2003. J Biomech 36:339-353
24. Poole CA et al. 1987. J Orthop Res 6:509-522
25. Shieh AC & Ateshian GA. 2006. J Biomech 39:1595-1602
26. Alexopoulos LG et al. 2005. Acta Biomaterialia 1:317-325
IFMBE Proceedings Vol. 35
E.K. Moo et al.
27. Han SK et al. 2010. Computer Methods in Biomechanics and Biomedical Engineering 0:1-8
28. Federico S et al. 2004. J Mech Phys Solids 52(10):2309-2327
29. Alexopoulos LG et al. 2005. J Biomech 38:509-517
30. Ateshian GA et al. 2007. Biomech Model Mechanobiol 6(1-2):91-101
31. Nguyen BV et al. 2009. Biotechnol Lett 31:803-809
32. Patwari P et al. 2004. Osteoarthritis and Cartilage 12:245-252
33. Pulai JI et al. Arthritis & Rheumatism 46(6):1528-1535
34. Matthews BD et al. 2005. Journal of Cell Science 119:508-518
35. Choquet D et al. Cell 88:39-48
36. Hynes RO. 1992. Cell 69:11-25
Posture and EMG Evaluation of Assist Functions of Full-Body Suits
T. Kitawaki1, Y. Inoue2, S. Doi2, A. Egawa2, A. Shiga2, T. Iizuka3, M. Kawakami3, T. Numata4, and H. Oka1
1 Graduate School of Health Sciences, Okayama University, Okayama, Japan
2 Faculty of Health Sciences, Okayama University Medical School, Okayama, Japan
3 Daiya Industry Co., Ltd., Okayama, Japan
4 Okayama Southern Institute of Health, Okayama Health Foundation, Okayama, Japan
Abstract— We investigated methods of motion assistance for the daily activities of elderly people with declining muscle force and for physically demanding nursing work. A full-body suit with elastic elements running along the muscles appears to assist the muscle load through its motion-assist function. In this study, we examined methods for evaluating the assistance functions of full-body suits, focusing on EMG measures of the crural muscles, the center of mass position, and the elevation angle of the chest while subjects wearing the suit stood upright and changed their posture. We found that EMG measures of the tibialis anterior may reflect the assisting effect of the suit's elastic material around the ankle joints, and that the center of gravity position and the elevation angle may reflect the posture-correcting effect of the suit. In the future, we would like to verify this evaluation method with elderly people and people with weak muscles as test subjects. Keywords— Motion assist, full-body suit, muscular activity.
I. INTRODUCTION
As the average age of the population increases, methods for mitigating the fatigue and pain caused by movements in daily life that are experienced by the elderly with weak muscles have been proposed [1]. In addition, various methods have been proposed to reduce the workload in cases of heavy labor, such as field work in farming and care giving. Full-body suits (Darwin, Daiya Industry, Japan) [2], as shown in Fig. 1, are equipped with assistance functions and are thought to reduce fatigue, improve stability, correct posture, and mitigate pain in general, while preventing edema and straightening the posture by compressing the entire body [3].

Fig. 1 Full-body suit
In this study, we examined the methods used to evaluate the assistance functions of full-body suits by focusing on the changes that occur in crural muscle activity when the subjects wearing the suit are in an upright position and changes occur in their posture [4].
II. MATERIALS AND METHODS
The test subjects were five healthy male adults in their 20s-40s who were briefed about the experiment and agreed to the testing. The experiment consisted of the following 3 measures: (1) integrated electromyogram (IEMG) measurements of the activity of the 3 crural muscles (gastrocnemius medialis: GC, soleus: SOL, and tibialis anterior: TA), which are the main muscles involved in posture maintenance, while the subjects maintained an upright position; (2) measurement of the center of mass (COM) unsteadiness of the test subjects (the COM position was calculated from the strain gauge output at the 4 corners of the footplate), which was used to evaluate the stability of the subjects' COM under the posture-straightening effect; and (3) calculation of the elevation angle at the chest, determined by affixing a 3D accelerometer on the chest, in order to determine whether the upper body functioned like an inverted pendulum about the ankle joint and to measure changes in the upright position.

Fig. 2 Experimental conditions (3D accelerometer on the chest, EMG electrodes on the crural muscles, footplate)

The experiment was conducted while the subjects performed tilting movements twice in each of the forward, backward, left, and right directions from an upright position, as shown in Fig. 2, for a total of 60 s in each trial. Next, the test subjects maintained an upright position with eyes open, eyes closed, eyes closed, and eyes open for 60 seconds in each condition. The above processes were considered to be one set of measurements, and four measurements were taken before
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 187–189, 2011. www.springerlink.com
the subjects put on the suit (normal pre), with the suit on but without the elastic elements (spring OFF), with the suit and elastic elements on (spring ON), and after the suit was taken off (normal post). Furthermore, the test subjects were instructed to intentionally refrain from moving their body from the upper trunk down to the knees while moving their ankle joints freely, so that only the crural muscles were involved in maintaining the upright position.
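The three measures described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the force-weighted COM estimate, and the accelerometer tilt convention are all choices of this example.

```python
import math

def iemg(emg_mv, fs_hz):
    """Integrated EMG over a window: integral of the rectified signal (mV*s).
    The paper reports values averaged over 60-s periods."""
    return sum(abs(v) for v in emg_mv) / fs_hz

def com_position(forces, corners):
    """Planar COM (pressure-centre) estimate from the vertical forces measured
    by strain gauges at the 4 corners of the footplate: the force-weighted
    average of the corner coordinates."""
    total = sum(forces)
    x = sum(f * cx for f, (cx, _) in zip(forces, corners)) / total
    y = sum(f * cy for f, (_, cy) in zip(forces, corners)) / total
    return x, y

def elevation_angle_deg(ax, ay, az):
    """Chest elevation angle from a static 3-axis accelerometer reading:
    here, the angle of the sensor's z (longitudinal) axis above the
    horizontal plane, assuming only gravity acts on the sensor."""
    return math.degrees(math.atan2(az, math.hypot(ax, ay)))
```

For example, equal corner forces place the estimated COM at the centre of the plate, and a purely vertical gravity reading gives an elevation angle of 90 degrees.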
III. RESULTS

A. COM Position and EMG Measures
Fig. 3 shows the experimental results for the relationship between the COM positions and the IEMG measurements when the forward and backward tilting movements were performed by a man in his 20s (the results in Figs. 4 and 5 are also from a man in his 20s). These results show that the actual COM of the subjects (indicated by a down-arrow) was shifted toward the toes at the position where the total sum of the IEMG measures of the crural muscles reached its minimum, and it is assumed that balance was maintained by the continuous activation of the GC and SOL muscles while the subjects were in an upright position. This tendency was also observed in the other test subjects.

Fig. 3 COM position and changes in IEMG (x-axis: COM position, from the heel toward the toes; y-axis: IEMG in mV for the anterior tibialis and gastrocnemius medialis)

B. Changes in IEMG Measures in Subjects in an Upright Position
Fig. 4 shows the average IEMG values (for each 60-second period) in the TA muscle. The average value tended to become smaller when the spring was ON, compared to that measured without the suit and with the spring OFF. In contrast, there were no clear differences in the GC or SOL muscles with or without the suit or spring.

Fig. 4 Changes in TA muscle IEMG in an upright position (conditions: normal pre, spring OFF, spring ON, normal post)

C. COM and Elevation Angle of the Chest Measures
Fig. 5 shows the COM position and the elevation angle of the chest measured during each condition while subjects maintained an upright position. Both the COM position and the elevation angle varied in a similar fashion when the suit was worn, and the elevation angle changed in the direction in which the upper body was more upright while the COM moved backward.

Fig. 5 Changes in elevation angle (degrees) and COM position (conditions: normal pre, spring OFF, spring ON, normal post)

IV. DISCUSSION
Because the IEMG measures were lower only in the spring ON condition, as shown in Fig. 4, we concluded that the assistance effect of the suit's elastic element (spring), which covered the ankle joints, helped the functioning of the TA muscle. In addition, based on Fig. 5, we assumed that the elevation angle of the chest became larger and the COM position moved backward due to the synergy of the
suit's effects of pulling back the shoulders and opening up the chest, and of putting the pelvis upright through the suit's corset function at the hip. That is, the posture was corrected by wearing the full-body suit. We assume that this posture-correcting effect moved the COM position backward, leading to a reduction in the total IEMG value measured in the crural muscles, as shown in Fig. 3.
Table 1 Relationship between measurement items and subjects

Subject   TA IEMG   Angle   COM
1         ***       -       -
2         ***       -       -
3         *         **      **
4         -         **      **
5         -         *       *

***: high tendency for a difference with the spring present
**:  high tendency for a difference from wearing the suit
*:   partial tendency for a difference from wearing the suit

Table 1 classifies the test subjects according to each measurement. In Group (1), Subjects 1 and 2, an effect on the TA muscle IEMG measure was observed with the spring ON. In this group, the body trunk was extended in the normal pre condition, the COM was maintained at a backward position in the initial upright position, and the overall muscle activity was not very high; however, the TA muscle was highly active while balance was maintained. That is, the spring effect of the suit around the ankle joints seemed to assist in suppressing the activity of the TA muscle. In contrast, Group (2), Subjects 3 to 5, showed the effects of wearing the suit (both spring ON and OFF) in the elevation angle and the COM position; in these subjects the elevation angle of the chest was small in both the normal pre and post conditions, and the COM position was maintained relatively forward, with the back muscles bent in an S shape. These subjects seemed to have low TA muscle activity in the initial posture, and the elastic element did not seem to influence the muscle activities to a visible level. We therefore concluded that the back elastic element assisted the latissimus dorsi muscles, the erector spinae muscles, and the glutei in order to facilitate chest opening, increase hip stability, pull up the hips, and reduce the load on the hips, which resulted in a posture-correcting effect. As shown in Fig. 3, this seemed to reduce the activities of the lower-limb muscles and guide the body to "an easier posture."
V. CONCLUSIONS
In this study, we focused on EMG measures of the crural muscles, the COM positions, and the elevation angle of the chest in order to evaluate the assisting functions of a full-body suit. We found that EMG measures of the TA muscle may reflect the assisting effects of the suit's elastic material around the ankle joints and that the COM position and the elevation angle may reflect the posture-correcting effect of the suit. In the future, we would like to verify this evaluation method with elderly people and people with weak muscles as test subjects.
ACKNOWLEDGMENT
The authors thank Daiya Industry Co., Ltd. (Japan) for lending us the full-body suits used in the experiment.
REFERENCES
1. Ito et al. 2009. Journal of Japan Society of Kansei Engineering 8(2):285-289
2. Darwin at http://darwin.deci.jp/ (DAIYA INDUSTRY)
3. Nazuka et al. 2009. The Journal of Clinical Sports Medicine 32(8, Suppl):1047-1051
4. Trenell M et al. 2006. Journal of Sports Science and Medicine 5:106-114

Author: Tomoki Kitawaki
Institute: Graduate School of Health Sciences, Okayama University
Street: 2-5-1 Shikata, Kita-ku
City: Okayama-shi, Okayama 700-8558
Country: Japan
Email: [email protected]
Posture Control and Muscle Activation in Spinal Stabilization Exercise
Y.T. Ting1, L.Y. Guo2, and F.C. Su1
1 Institute of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
2 Faculty of Sports Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
Abstract— The Swiss ball size may affect the training intensity in spinal stabilization exercise. However, a comprehensive study of the effects of ball size on postural stability is still lacking. This study aimed to investigate the effects of Swiss ball size on whole-body posture control by examining the center of mass (COM) and the sway of the center of pressure (COP) at the contact point of the Swiss ball on the ground. Eighteen healthy volunteers participated in this study. A motion analysis system, two force plates, and a surface electromyography (EMG) system were used for data collection. Increasing ball size increased the anterior-posterior (AP) and medial-lateral (ML) sway of the COM, the AP and ML sway of the COP, and the AP and ML COM-COP inclination angles (p<0.05). It is harder to maintain whole-body balance when the lower trunk is kept at a higher position. The abdominal and back muscles contracted more strongly with the smaller ball size. More trunk muscle activation thus appears to be required on a smaller ball, even though the requirement for maintaining balance is lower. The findings may provide a reference for clinicians conducting spinal stabilization exercise while considering the effectiveness and safety of this intervention. Keywords— Core stability ball exercise, ball size, posture balance, COM-COP inclination angles, muscle activation.
I. INTRODUCTION
It is estimated that 60 to 80 percent of individuals suffer from low back pain (LBP), making it the most common musculoskeletal disease [1]. Among clients with LBP, those with lumbar segmental instability have been classified as a distinctive subgroup, characterized by the loss of firmness between spinal motion segments [2]. Due to the high incidence of LBP, a profusion of treatment strategies has been proposed. Spinal stabilization is controlled not only by the bone and ligament structures but also by the trunk muscles. Therefore, one of the most common approaches, known as spinal stabilization exercise, involves strengthening the trunk muscles and is usually prescribed for this purpose. This approach stresses promoting trunk muscle strength, muscle control, and muscle endurance using core stabilization exercise for individuals with spinal dysfunction [1]. In recent years, core stabilization exercises have been believed to improve muscle function, maintain trunk balance, and even protect the spine from injuries [3].
Surface EMG has been used to observe the activities of muscles in the different stabilization exercises that were the major concern of previous studies. Marshall et al. studied four kinds of core stabilization exercises on a ball, and used surface EMG to record the muscle activation of the RA, EO, TA, and ES [4]. While subjects performed the single-leg hold and the top of the press-up on the ball, there was a significant increase in the muscle activation of the RA, implying that the ball could provide an increased training stimulus for the RA compared with performing the exercise on a stable surface. Although many studies have investigated core stabilization exercises, few have focused on quantitative posture control, and it is relatively rare for core stabilization exercises to be investigated with different ball sizes. The purpose of this study was to investigate the muscle activation intensities and posture stability while performing the roll-out exercise on three different ball sizes.
II. MATERIALS AND METHODS
A. Subjects
We recruited eighteen healthy volunteers (11 male and 7 female; mean age 23.72±1.96 years, mean height 170.30±9.59 cm, mean weight 63.75±10.19 kg) to participate in this study. None had any history of back or spinal disease. Prior to examination, each subject gave informed consent, approved by the Human Institutional Ethics Committee.
B. Instrumentation
An eight-camera motion analysis system (HiRES, Motion Analysis Corp., Santa Rosa, CA, USA) sampling at 100 Hz was used to collect three-dimensional trajectory data of the markers. Two Kistler force plates were used to calculate COP data during exercise: subjects placed their hands on one of the force plates and the ball contact surface on the other. Surface electromyography (MA300 multi-channel surface EMG) sampling at 1000 Hz was used to record the abdominal (rectus abdominis, obliquus externus) and back (erector spinae, multifidus) muscles while the subjects performed the exercises.
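As a rough illustration of how raw surface EMG sampled at 1000 Hz can be reduced to an activation measure, the sketch below computes a moving-RMS envelope. The 100-ms window and the absence of filtering are choices of this example, not details given by the authors.

```python
import math

def rms_envelope(emg, fs_hz, win_ms=100):
    """Moving-RMS envelope of a raw EMG signal (generic processing sketch).
    A window of win_ms milliseconds is centred on each sample; the window
    is truncated at the signal edges."""
    n = max(1, int(fs_hz * win_ms / 1000))
    half = n // 2
    out = []
    for i in range(len(emg)):
        seg = emg[max(0, i - half): i + half + 1]
        out.append(math.sqrt(sum(v * v for v in seg) / len(seg)))
    return out
```

A constant-amplitude input yields an envelope equal to that amplitude away from the edges, which is a quick sanity check for the window arithmetic.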
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 190–192, 2011. www.springerlink.com
C. Protocol and Experimental Procedures
The selected stability ball exercise was to remain in a push-up position with both feet on top of a height-adjustable ball simulator. The simulator was designed to keep the ball contact area consistent while the ball height was adjusted according to body height. All subjects performed roll-out exercises on three different simulated ball sizes. Trunk angles of up tilt 10 degrees, level, and down tilt 10 degrees relative to horizontal were used to represent the big, medium, and small balls, respectively (Fig. 1). The contact surface of the lower limbs was kept close to the ankle joints, and breathing was controlled by instruction; for example, subjects were asked to exhale and brace the abdomen during data collection. The order of exercises was randomized to prevent any practice effect. At least 3 trials were collected for each exercise, with 5 seconds per trial followed by 1 minute of rest.

Fig. 1 Simulation of three ball sizes in roll-out exercises: (a) foot-up tilt 10 degrees, (b) level, (c) foot-down tilt 10 degrees

D. Statistical Analysis
A repeated-measures ANOVA was used to examine the relationships between muscle activation, COM parameters, COP parameters, and COM-COP inclination angles across the three ball sizes and two ground contact areas. LSD post hoc tests were used for pairwise evaluation. Two-sided significance was defined as p < 0.05.

III. RESULTS
Figure 2 shows the displacement of the COM in the AP, ML, and vertical directions for the 3 ball sizes during the roll-out exercise. The vertical COM displacement was not affected by the ball size. The COM displacement increased with increasing ball size in the AP and ML directions, with significant differences in the AP sway range (p=0.000) and the ML sway range (p=0.000) among the three ball sizes (Fig. 2).

Fig. 2 COM displacement (%; AP, ML, and UD directions; down tilt 10 degrees = small ball, level, up tilt 10 degrees = big ball; * p<0.05)

The COP displacements differed significantly in the AP sway range (p=0.004) and the ML sway range (p=0.000) (Fig. 3). The COP sway in the ML direction increased with increasing ball size.

Fig. 3 COP displacement (%; AP and ML directions; * p<0.05)

Figure 4 shows the COM-COP inclination angles. There were significant differences in the COM-COP AP inclination angles (p=0.002) and ML inclination angles (p=0.006) among the different ball sizes.
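The COM-COP inclination angles analyzed here can be computed from synchronized COM and COP data. The sketch below follows the usual definition (angle of the COP-to-COM line measured from vertical, projected onto the sagittal and frontal planes); the axis convention (x = AP, y = ML, z = vertical) is an assumption of this example, not a detail taken from the paper.

```python
import math

def com_cop_inclination_deg(com_xyz, cop_xy):
    """Inclination of the line from the COP (on the ground) to the COM,
    measured from vertical and projected onto the sagittal (AP) and
    frontal (ML) planes. Returns (AP angle, ML angle) in degrees."""
    cx, cy, cz = com_xyz
    px, py = cop_xy
    ap = math.degrees(math.atan2(cx - px, cz))   # sagittal-plane projection
    ml = math.degrees(math.atan2(cy - py, cz))   # frontal-plane projection
    return ap, ml
```

For instance, a COM 1 m above the ground and 0.1 m anterior to the COP gives an AP inclination of about 5.7 degrees and zero ML inclination.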
Fig. 4 COM-COP inclination angle ranges (degrees; AP and ML; * p<0.05)

There were no significant differences in abdominal and back muscle activation while performing the exercise on the three ball sizes (p>0.05), although a trend of decreasing abdominal and back muscle activation with increasing ball size was found.

IV. DISCUSSION
When the roll-out exercise was performed with a larger ball size, the AP and ML sway of the COM and COP increased significantly, and the AP and ML COM-COP inclination angles also increased significantly. As expected, the higher COM with the bigger ball may be due to the increased difficulty of maintaining a stable posture in the roll-out. When the COM-COP separation distance is greater, more active postural control and energy are required to counteract the increased moment [5]. As the ball size increased, all subjects produced greater COM-COP inclination angles in the sagittal and frontal planes. During movement, excessive ML COM-COP inclination angles may cause loss of balance, and maintaining frontal-plane stability requires active central nervous system control [6]. In examining the COM and COP displacements, we found that the ML components were greater than the AP components, in agreement with the inclination angles. In the roll-out exercise position, the base of support in the AP direction is greater than in the ML direction, because the hand and ball contact areas are separated along the AP direction, which limits the AP inclination angle; in contrast, the base of support is smaller in the ML direction. When the roll-out exercise is performed on an unstable support base, there is slightly higher activation of the abdominal and back extensor muscles. Our findings agree with the previous literature: several studies have investigated the muscle activation underlying core stability, but core stability during ball exercise with different ball sizes was unclear. Fig. 3b indicates that the load ratio of the hands to the legs decreased with decreasing ball size. Considering the whole body/ball system in force equilibrium, with both the hands and the Swiss ball as supports, the downward force of body weight must be counteracted by the contraction forces of the trunk muscles. This means the trunk had to recruit more muscle to maintain body balance, so the muscle activation of the trunk increased.

V. CONCLUSIONS
In this study, the effect of ball size in the core stability ball exercise was quantitatively investigated. The displacements of the COM and COP and the COM-COP inclination angles were larger with the bigger ball. In contrast, the activation of the abdominal and back extensor muscles was higher with the smaller ball during exercise. The findings indicate that exercising on a big ball makes it more difficult to maintain body balance, with greater force on both hands and smaller force on the Swiss ball. When the load applied to the ball by the legs is smaller, the required trunk muscle forces decrease, with smaller recruitment of muscle fibers; therefore, the muscle activation of the trunk is lower. For training the abdominal and back extensor muscles, a small ball could be chosen. Finally, ball size may be used as a way of varying the challenge of the core stability ball exercise.
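The repeated-measures ANOVA used in the statistical analysis (Section II.D) can be illustrated with a hand computation of the F statistic for a subjects-by-conditions table. This one-way sketch is for illustration only; the authors presumably used standard statistical software, and the p-value lookup against the F distribution is omitted here.

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.
    data[s][c] = measurement of subject s under condition c.
    Subject variability is removed from the error term, which is the
    point of the repeated-measures design.
    Returns (F, df_treatment, df_error)."""
    n = len(data)                       # number of subjects
    k = len(data[0])                    # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_tot - ss_cond - ss_subj  # residual (condition x subject)
    df_c, df_e = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_c) / (ss_err / df_e), df_c, df_e
```

For a toy table of 3 subjects under 2 conditions, the partitioning of the sums of squares can be verified by hand against any statistics text.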
REFERENCES
1. Souza GM, Baker LL, Powers CM. Electromyographic activity of selected trunk muscles during dynamic spine stabilization exercises. Arch Phys Med Rehabil. 2001;82:1551-7.
2. Hicks GE, Fritz JM, Delitto A, McGill SM. Preliminary development of a clinical prediction rule for determining which patients with low back pain will respond to a stabilization exercise program. Arch Phys Med Rehabil. 2005;86:1753-62.
3. Stevens VK, Coorevits PL, Bouche KG, Mahieu NN, Vanderstraeten GG, Danneels LA. The influence of specific training on trunk muscle recruitment patterns in healthy subjects during stabilization exercises. Man Ther. 2007;12:271-9.
4. Marshall PW, Murphy BA. Core stability exercises on and off a Swiss ball. Arch Phys Med Rehabil. 2005;86:242-9.
5. Hsue BJ, Miller F, Su FC. The dynamic balance of the children with cerebral palsy and typical developing during gait. Part I: Spatial relationship between COM and COP trajectories. Gait Posture. 2009;29:465-70.
6. Bauby CE, Kuo AD. Active control of lateral balance in human walking. J Biomech. 2000;33:1433-40.

Author: Fong-Chin Su
Institute: Institute of Biomedical Engineering, National Cheng Kung University
Street: 1 University Road
City: Tainan 701
Country: Taiwan
Email: [email protected]
Preliminary Findings on Anthropometric Data of 19-25 Year Old Malaysian University Students Y.Z. Chong and X.J. Leong Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Kuala Lumpur, Malaysia
[email protected]
Abstract— Studies have indicated that body dimensions differ across populations. To determine whether there are differences between the anthropometric data of the population in Malaysia and those of other countries, anthropometric data of Malaysian adults were collected. This paper describes anthropometric data of 19-25 year old university students. One hundred individuals were measured (50 female and 50 male), with ages ranging from 18 to 35 years. A set of 27 static anthropometric measurements was obtained for each individual. Additionally, the statistical analysis carried out on the data shows that some statistical parameters, such as the variation and correlation coefficients, behave as expected and as observed in other populations. Keywords— Anthropometric, Malaysian university student.
I. INTRODUCTION
The word anthropometry is derived from the Greek words 'anthros', meaning man, and 'metrein', meaning to measure; anthropometry literally means the measurement of the human [1]. Anthropometry is a branch of human science that deals with human body measurement, particularly measurement of body size, shape, strength, and working capacity [2]. This paper describes preliminary findings on anthropometric measurements of individuals between 19 and 25 years old. Such a database will be useful as a source of reference in the fields of human factors engineering and product development in Malaysia. The main objective of the anthropometric database is to describe the physical measurements of students in a higher education institution in Kuala Lumpur. The anthropometric characteristics of a population vary with ethnicity, demographic characteristics, and other associated factors. For example, the Japanese are generally smaller than Americans in terms of body dimensions; therefore, an anthropometric database built on American body measurements is not suitable for use by the Japanese [3]. The objectives of the study are:
1. To develop a reliable anthropometric database of Malaysian university students in the 19-25 age range.
2. To collect body dimension data which can provide parameters for the ergonomic design of industrial products that help prevent strain injuries and provide a safer, more productive, and user-friendly design.
II. METHODS
A. Methodology
The study was conducted over a period of 28 weeks in a university environment. Protocols were developed and the necessary ethical clearance was obtained through the university ethics committee. In addition, each participant gave consent confirming that he or she joined the study voluntarily. Figure 1 shows the methodology of the study.

Fig. 1 The methodology of the study: protocol development, subject selection, and development of a preliminary database; measurements made on selected subjects; data analysis

The following subsections describe the methodology adopted in the study.

B. Protocol Development
A total of 27 anthropometric parameters were measured in this study. Five dimensions were measured
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 193–196, 2011. www.springerlink.com
while the individual remained standing, and the remaining dimensions were taken while the individual remained seated. This is useful in providing basic information for product development involving sitting and standing postures [1]. An anthropometer, a stadiometer, and a segmometer were utilized to measure the anthropometric data in the study [1]. Table 1 shows the parameters measured in the study.
Table 1 Anthropometric Parameters [1]

No   Anthropometric Dimension
1    Stature
2    Eye Height
3    Shoulder Height
4    Elbow Height
5    Hip Height
7    Finger Tip Height
8    Sitting Height
9    Sitting Eye Height
10   Sitting Shoulder Height
11   Sitting Elbow Height
13   Buttock-knee length
14   Buttock-popliteal length
15   Knee height
16   Popliteal height
17   Shoulder breadth (bideltoid)
19   Hip breadth
21   Abdominal depth
22   Shoulder-elbow length
23   Elbow-fingertip length
24   Upper limb length
28   Hand length (Midstylion-Dactylion)
29   Hand breadth
32   Span
33   Elbow Span
34   Vertical Grip Reach (standing)
35   Vertical Grip Reach (sitting)
36   Forward grip reach

The anthropometric measurement techniques were standardized. All measurements were taken with shoes and clothes on unless otherwise noted. Linear measurements are in millimeters. The protocol used in the study is in accordance with the standard proposed in [1].

C. Database Development
Microsoft® Access™ 2003 was utilized to create the anthropometric database for the study. Tables and forms were created, and the respective measurement data were entered into the database after each measurement was made. Figure 2 shows a screenshot of the form created.

Fig. 2 Anthropometry Database Form

From the database, the data collected were transferred to statistical analysis software; the data analysis was performed with SPSS 15.0.1™.

D. Subject Description
A total of 100 subjects participated in the study (n = 100, 50 males, 50 females), with an average age of 22.1 years, drawn from various ethnicities within the university environment.

III. RESULTS
This section presents the outcomes of the study. From the selected 22 anthropometric measurements, it was found that the average stature of males is 1,711.83 mm (±53.56) and of females 1,577.11 mm (±45.46). Table 2 shows the average anthropometric data collected in the study. The data have been compared with published anthropometric data from the United Kingdom, Poland, Japan, the Netherlands, and Malaysia, as shown in Table 3 [1]. Correlation analysis was performed to determine relationships between different anthropometric measurements. For stature, the largest differences found are those between the Malaysian and Dutch populations: 84 and 73 mm for the male and female populations, respectively.
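The male-female comparisons in Table 2 are independent two-sample t-tests, which can be reproduced from the reported summary statistics alone. The pooled (equal-variance) form is assumed in this sketch; the paper does not state which variant was used.

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent two-sample t statistic with pooled variance,
    computed from group means, standard deviations, and sizes."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# Stature values reported in the text (50 males, 50 females):
t = pooled_t(1711.83, 53.56, 50, 1577.11, 45.46, 50)   # ≈ 13.56
```

The result agrees closely with the t value of 13.562 reported for average stature in Table 2 (the small discrepancy is consistent with the slightly different means listed there).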
Table 2 Anthropometric data for the males and females who participated in the study (means and standard deviations in mm; independent t-test)

Dimension     Female Mean (SD)    Male Mean (SD)     t        p
Ave Stature   1560.71 (60.99)     1713.22 (47.82)    13.562   0.000
Ave Span      715.33 (61.34)      1702.57 (59.69)    12.389   0.000

For the remaining dimensions, the t-tests showed significant male-female differences (p < 0.05) in all but two cases (p = 0.122 and p = 0.464, marked * in the original table).

Table 3 Mean values (in mm) for the anthropometric dimensions of different adult male populations (Pheasant, 1998) and the data obtained in this study

Dimension                        UK     Poland   Holland   Japanese   Current Study
Abdominal depth                  270    265      310       220        228
Buttock-knee length              595    585      620       550        564
Buttock-popliteal length         495    455      520       470        451
Elbow height                     1090   1065     1135      1105       1073
Eye height                       1630   1600     1670      1540       1588
Forward grip reach               780    795      745       690        757
Hip breadth                      360    345      375       305        354
Knee height                      545    530      656       490        539
Popliteal height                 440    445      455       400        431
Shoulder breadth                 400    390      410       380        436
Shoulder height                  1425   1365     1670      1340       1407
Sitting elbow height             245    240      240       260        234
Sitting eye height               790    780      820       785        788
Sitting height                   910    885      940       900        891
Sitting shoulder height          595    605      620       590        607
Stature                          1740   1695     1795      1655       1711
Vertical grip reach (sitting)    1245   1290     1280      1185       1278
Vertical grip reach (standing)   2060   2205     2125      1940       2133

IV. DISCUSSION
The purpose of this project to provide a reference point for industrial product designers in product development tailoring the Malaysian 19-25 year old market. Besides that, such study could aid in comparing anthropometric differences between different ethnicity. It will indirectly help the designer to decide on what are the criteria or parameter to use when designing effectively [12]. The well-being of people is greatly dependent on their geometrical relationship with various factors such as clothing, places of work, transportation, homes and recreational activities. To ensure harmony between people and their environments, it is necessary to quantify the size and shape of people for optimization of the technological design of the workplace and the home environment [11]. It could also be used for the ergonomic design of working and living environment products. It is important that the industrial product provide a safer, more productive and user-friendly design aid in preventing strain injuries [7].
Ave Upper 601.76 24.37 723.90 62.23 Limb Length Ave Finger 901.75 48.06 628.05 25.90 Tip Height Ave Hip 979.61 44.71 958.80 57.63 Height Ave Elbow 1301.32 45.71 1069.23 44.27 Height Ave Shoulder 1473.02 47.99 1410.95 46.34 Height Ave Eye 1587.75 49.85 1588.98 52.07 Height Ave Forward 701.47 45.50 764.25 56.74 Grip Reach Ave Vertical 1913.21 78.76 2122.05 69.92 Grip Reach Ave Elbow 774.19 53.34 844.74 32.12 Span Ave Sitting 828.49 32.19 873.53 111.61 Height Ave Sitting 720.74 40.18 786.16 36.65 Eye Height Ave Sitting Shoulder 560.58 30.35 603.13 34.22 Height Ave Sitting 217.07 29.86 228.65 27.55 Elbow Height Ave Buttock528.66 25.93 561.34 21.15 knee length Ave Buttockpopliteal 433.88 28.41 450.26 22.82 Length Ave Knee 494.50 22.02 542.97 54.87 Height Ave Popliteal 401.75 30.73 430.66 16.31 Height Ave Shoulder 383.21 18.83 435.26 31.07 Breadth Ave Hip 350.61 22.42 353.90 28.83 Breadth Ave Abdomi195.44 21.20 227.34 26.87 nal Depth Ave Shoulder330.49 16.66 344.59 26.82 elbow Length Ave Elbowfingertip 415.26 19.15 452.85 18.73 length Ave Hand 168.15 8.24 184.37 7.32 Length Ave Hand 72.09 5.64 84.09 4.14 Breadth Ave Vertical 1145.45 62.66 1291.18 83.25 Grip Reach1 *Bold indicates that value is significant at the 0.05 level.
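To make the male-female comparisons of Table 2 concrete, the sketch below computes an independent-samples (Welch) t statistic. Note the assumptions: the original analysis was done in SPSS and the raw data are not available, so the samples here are synthetic stand-ins generated around the reported stature means and standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stature samples (mm) generated around the means/SDs
# reported in Table 2 -- stand-ins for the study's unavailable raw data.
female = rng.normal(1560.71, 60.99, size=50)
male = rng.normal(1713.22, 47.82, size=50)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va = a.var(ddof=1) / a.size
    vb = b.var(ddof=1) / b.size
    return float((a.mean() - b.mean()) / np.sqrt(va + vb))

t_stat = welch_t(male, female)
print(f"t = {t_stat:.3f}")
```

A large positive t, as reported for stature in Table 2, indicates that the male mean is well separated from the female mean relative to the sampling error.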
Y.Z. Chong and X.J. Leong
With all the important values in the database, such as the mean, standard deviation, and 5th, 50th and 95th percentile values, it is easy for a designer to design a part or a product using information from the database [9]. For example, the designer can apply the classic principles of anthropometric design: design for the extreme individual, design for an adjustable range, and design for the average [4]. The minimum dimension of a facility or product is usually based on an upper percentile value of the relevant anthropometric feature of the sample, such as the 95th percentile; examples include doors, escape hatches and passageways [5], [10]. Conversely, the maximum dimension of some facilities is predicated on a lower percentile, e.g. the 5th percentile of the distribution of the relevant anthropometric feature, for example the distance of a control device from an operator [6], [8]. Table 4 depicts the differences between the current study and various literatures.

Table 4 Mean values (in mm) for the anthropometric dimensions of adult male populations from different countries [2] and the data obtained in this study

Dimension                          UK   Poland   Holland   Japan   Current Study
Abdominal depth                   270      265       310     220             228
Buttock-knee length               595      585       620     550             564
Buttock popliteal length          495      455       520     470             451
Elbow height                     1090     1065      1135    1105            1073
Eye height                       1630     1600      1670    1540            1588
Forward grip reach                780      795       745     690             757
Hip breadth                       360      345       375     305             354
Knee height                       545      530       656     490             539
Popliteal height                  440      445       455     400             431
Shoulder breadth                  400      390       410     380             436
Shoulder height                  1425     1365      1670    1340            1407
Sitting elbow height              245      240       240     260             234
Sitting eye height                790      780       820     785             788
Sitting height                    910      885       940     900             891
Sitting shoulder height           595      605       620     590             607
Stature                          1740     1695      1795    1655            1711
Vertical grip reach (sitting)    1245     1290      1280    1185            1278
Vertical grip reach (standing)   2060     2205      2125    1940            2133
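The percentile-based design rules described above can be sketched as follows. This is an illustration only: the sample is synthetic, generated around the study's reported male stature mean and SD, and the variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical male stature sample (mm) around the study's reported mean/SD.
stature = rng.normal(1711.83, 53.56, size=50)

# Design-for-extremes rule discussed in the text: size clearances (doors,
# hatches, passageways) from a high percentile, and reaches (e.g. control
# placement) from a low percentile, so that most users are accommodated.
p5, p50, p95 = np.percentile(stature, [5, 50, 95])
min_clearance = p95   # minimum dimension driven by the larger users
max_reach = p5        # maximum dimension driven by the smaller users

print(p5, p50, p95)
```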
V. CONCLUSION AND RECOMMENDATIONS

This study provides anthropometric and physiological information for the future development of a Malaysian anthropometry database. An anthropometric database of Malaysian adults in the age range of 18-25 has been developed. From the results obtained, the average stature of the Malaysian male and female populations is 1711 mm and 1577 mm respectively. The coefficient of variation (CV) for each anthropometric dimension indicates that circa 70% of the CV values obtained are within the range proposed by Pheasant. The t-test results showed that there were significant differences between the male and female anthropometric dimensions. The ANOVA test results showed no significant differences in the anthropometric dimensions between the age groups 18-23, 24-28 and 29-35, except for vertical grip reach (standing), popliteal height, buttock-popliteal height and buttock-knee length. The correlation analysis, a matrix of Pearson correlation coefficients, shows that high correlations exist between most of the anthropometric dimensions. This information could be used for the ergonomic design of working environments, industrial products, clothing, hand tools and furniture in order to achieve human-friendly products and workplaces with no or limited negative effects on human health. Others can use this information as general guidance on ergonomics and design. The dataset is especially useful when designing for the younger youth group, since it mostly comprises college and university students. The data are comparable with the previous local study.
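As a minimal illustration of the coefficient of variation used in the conclusion (CV = standard deviation divided by the mean), computed here on a synthetic stature sample rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stature sample (mm) around the study's reported male mean/SD.
stature = rng.normal(1711.83, 53.56, size=50)

# Coefficient of variation: a dimensionless spread measure that allows
# variability to be compared across dimensions with different magnitudes.
cv = stature.std(ddof=1) / stature.mean()
print(f"CV = {cv:.3%}")
```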
REFERENCES

[1] Carter, J.K. (2007). Applied Science and Engineering: Work Physiology. Retrieved July 27, 2009 from http://www.asse.org/sphandbook/docs/Work%20Physiology.pdf
[2] Pheasant, S. (1996). Bodyspace: Anthropometry, Ergonomics, and the Design of Work (2nd ed.), 6-7. CRC Press.
[3] Baba, M.D., Darliana, M., Ahmad, R.I., Owi, W.S., Kek, C.L., Nordin, S. (2009). Recommended Chair and Work Surfaces Dimensions of VDT Tasks for Malaysian Citizens. European Journal of Scientific Research, Vol. 34, No. 2, pp. 156-167. Retrieved July 27, 2009 from http://www.eurojournals.com/ejsr_34_2_02.pdf
[4] Heron, R.E., Karwowski, W. (2006). International Encyclopedia of Ergonomics and Human Factors (2nd ed.).
[5] Kroemer, K.H.E., Kroemer, H.J., Kroemer-Elbert, K.E. (1997). Engineering Physiology: Bases of Human Factors/Ergonomics. John Wiley and Sons Press.
[6] Zandin, K.B. (2001). Maynard's Industrial Engineering Handbook (5th ed.). McGraw-Hill.
[7] Langford, J.W. (2006). Logistics: Principles and Applications (2nd ed.). McGraw-Hill Professional.
[8] National Aeronautics and Space Administration. NASA Man-Systems Integration Standards, Anthropometry and Biomechanics, Volume 1, Section 3. Retrieved 5th August, 2009, from http://msis.jsc.nasa.gov/sections/section03.htm#3.2.1
[9] Sanders, M.S., McCormick, E.J. (1993). Human Factors in Engineering and Design (7th ed.). McGraw-Hill International Edition.
[10] Jalkanen, V.J. (2005). New Information About Sitting. Retrieved April 1, 2010, from http://www.ergoweb.com/forum/upload/New_Information_About_Sitting_05 09_467Kb.pdf
[11] Pheasant, S. & Haslegrave, C.M. (2006). Bodyspace: Anthropometry, Ergonomics, and the Design of Work (3rd ed.). CRC Press.
[12] Zandin, K.B. (2001). Maynard's Industrial Engineering Handbook (5th ed.). McGraw-Hill.
Quantification of Patellar Tendon Reflex by Motion Analysis

L.K. Tham1, N.A. Abu Osman1, K.S. Lim2, B. Pingguan-Murphy1, and W.A.B. Wan Abas1

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Division of Neurology, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— This study collected quantitative measurements of the patellar tendon reflex using three-dimensional motion analysis. The study was conducted in the Motion Analysis Laboratory, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia. The patellar tendon reflexes of a group of healthy subjects, 28 males and 22 females, were examined. The left and right patellar tendons were tapped with a Queen Square reflex hammer released from three different angles. The Jendrassik maneuver was tested by tapping the tendon with the hammer raised to an angle of 60⁰, in order to investigate the effect of this reinforcement method on reflex responses. The results showed that the patellar tendon reflex responses of male subjects were greater at higher tapping forces. The reflex responses of the left and right sides did not show any significant difference. The study of tapping force on reflex response showed greater responses with increasing tapping force. The reinforcement method, the Jendrassik maneuver, increased the patellar tendon reflex response for both legs.

Keywords— Patellar tendon reflex, Motion analysis.
I. INTRODUCTION

The deep tendon reflexes (DTRs) are automatic responses of muscle spindles when a tendon is stretched suddenly [1]. Such reflexes can be obtained at different parts of the body, such as the biceps, triceps, knee or ankle. DTRs usually appear as an immediate muscle contraction once a tap is applied to the tendon [1]. Reflex assessment is a basic test performed in a neurologic examination and is an important diagnostic tool for identifying diseases or lesions of the nervous system [2], because abnormal reflex responses are generally signs of disease in the nervous or muscular system [3]. During an assessment, the examiner observes the reflex of the responding muscle and then grades the response based on the examiner's experience [4]. This in turn leads to variation when a patient is examined by different examiners. Different grading scales are currently in use [5], leading to a high probability that judgments are conveyed wrongly among medical practitioners.
This study investigated the ability of motion analysis to obtain quantitative measurements for one of the DTRs, the patellar tendon reflex, commonly known as the knee jerk. Motion analysis is widely applied in areas such as sports, rehabilitation, gait analysis, prosthetics and orthotics [6], [7]. Cameras capturing the motion of reflective markers attached to human joints produce 3D motion data, allowing detailed analysis of the motion at a specific joint. It is hoped that this method will address the problems of reflex assessment by providing objective grounds for the judgments.
II. METHODS

A. Subjects

In this study, 50 subjects, 28 males and 22 females, took part in the patellar tendon reflex tests. The subjects were aged 20 to 25 years. Male subjects had an average height and weight of 1.72 ± 0.07 m and 69.75 ± 12.18 kg; the mean height and weight of female subjects were 1.57 ± 0.06 m and 52.45 ± 9.29 kg respectively. In general, all subjects fell within the normal range of the Body Mass Index (BMI). All subjects were normal and healthy, without any history of neurologic disease, in order to avoid the possibility of obtaining abnormal reflex responses.

B. Patellar Tendon Reflex Test

The patellar tendon reflex tests were carried out in the Motion Analysis Laboratory of the Department of Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia. All tests were recorded and processed using the Vicon Nexus 1.4 motion analysis system. Prior to any test, body measurements such as height and weight were taken from the subject. The subject was then set up by attaching sixteen reflective markers to the lower limbs, following the positions stated in the Plug-in-Gait marker placement [8]. The subject was seated on a stool with both legs relaxed and not touching the ground. The positions of the patellar tendons of both legs were identified and marks were made on the patellar tendons.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 197–199, 2011. www.springerlink.com
Patellar tendon reflexes were examined by tapping the tendon with a Queen Square reflex hammer. To study the effect of different tapping forces on the patellar tendon reflex, the hammer was released from three different angles: a release at 30⁰ was named Force 1, a release at 60⁰ Force 2, and a release at 90⁰ Force 3. The hammer was first placed perpendicular to the left patellar tendon, then raised to the angle representing Force 1, measured with a goniometer, and released to tap the patellar tendon. Tapping at a particular angle was repeated five times. The test was then done with taps of Force 2 and Force 3, and the same steps were repeated for the right patellar tendon. The Jendrassik maneuver was tested by tapping the tendons with the reflex hammer released at 60⁰; during this test the subject was asked to clench the fingers and concentrate on pulling the fingers [4].

C. Analysis

The motion of the knee was analyzed by obtaining the knee joint angles from the tests. Knee angles were compared statistically between genders and between sides of the body. Independent t-tests with statistical significance of P < 0.05 were carried out for these comparisons. The effect of different tapping forces on reflex responses was studied by analyzing the data with one-way analysis of variance (ANOVA) and Tukey's HSD (P < 0.05) as the post hoc analysis.

Fig. 1 Knee angles for males and females at different tapping force

III. RESULTS AND DISCUSSION

Analysis of the patellar tendon reflex responses of males and females, represented in Fig. 1, did not show a constant trend. Female subjects showed a significantly higher knee angle when tapped with Force 1; on the other hand, male subjects showed a significantly greater knee angle than females at Force 2. The results for Force 2 and Force 3 agree with an earlier study which found that males exhibit greater reflex responses at higher tapping strengths [9]. However, the comparison between males and females at Force 3 was not statistically significant, owing to the small sample size of the study. The test on the Jendrassik maneuver showed a non-significant comparison between genders for taps applied with Force 2; this again was due to the small number of subjects.

Analysis of the reflex responses of the two sides of the body is shown in Fig. 2. Non-significant results were found for all pairs of comparisons at all tapping angles and for the Jendrassik maneuver. These observations reflect the clinical fact that reflexes must be symmetrical in a physiologically normal human [4]. Normal subjects involved in the study must show no significant difference in reflex response between the left and the right knee; asymmetry would be a sign of abnormality and neurologic disorder [10].

Fig. 2 Knee angles for left and right knee at different tapping force

Tapping on the patellar tendon with Force 2 corresponds to the normal tapping range applied by physicians in the clinical examination of reflexes. In order to study the effect of tapping force on reflex response, the data obtained for Force 1 and Force 3 were compared to the data for Force 2, which was the
standard in the analysis. In Fig. 3, the knee angle obtained by tapping with Force 1 was lower than that for Force 2, but the comparison was not significant. Tapping with Force 3 produced a significantly greater reflex response than Force 2, showing that tapping with a higher force leads to a greater reflex response. Referring again to Fig. 3, tapping with the Jendrassik maneuver was compared against tapping with Force 2. The subjects generally showed greater reflex responses when the Jendrassik maneuver was applied during tendon taps. The actual mechanism behind the Jendrassik maneuver is still unknown, but the effect of the procedure is well known, and the observations in this study agree with it. However, the comparisons with Force 2 were not statistically significant due to the small number of subjects.
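The one-way ANOVA described in the analysis section can be sketched as follows. The study used commercial statistics software and its raw knee-angle data are not available, so the samples below are invented stand-ins; the F statistic is computed from first principles (between-group versus within-group variance).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical peak knee angles (degrees) for the three hammer-release
# angles; these values are invented for illustration, not the study's data.
force1 = rng.normal(20.0, 5.0, size=50)   # 30-degree release
force2 = rng.normal(30.0, 5.0, size=50)   # 60-degree release
force3 = rng.normal(38.0, 5.0, size=50)   # 90-degree release

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_data = np.concatenate(groups)
    grand_mean = all_data.mean()
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = all_data.size - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(force1, force2, force3)
print(f"F = {f_stat:.2f}")
```

A significant F only says that at least one group differs; the pairwise comparisons are what the post hoc Tukey HSD test in the paper provides.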
REFERENCES

1. Walker HK (1990) Deep tendon reflexes. In Walker HK, Hall WD, Hurst JW (Eds.), Clinical methods: The history, physical and laboratory examinations (3rd ed.). Butterworth Publisher, London
2. Schwartzman RJ (2006) Neurologic examination (1st ed.). Blackwell Publishing, Massachusetts
3. Burke JR, Schutten MC, Koceja DM et al. (1996) Age-dependent effects of muscle vibration and the Jendrassik maneuver on the patellar tendon reflex. Arch Phys Med Rehabil 77 (6): 600-604
4. Campbell WW (2005) DeJong's the neurologic examination (6th ed.). Lippincott Williams & Wilkins, Philadelphia
5. Manschot S, van Passel L, Buskens E et al. (1998) Mayo and NINDS scales for assessment of tendon reflexes: between observer agreement and implications for communication. J Neurol Neurosur Ps 64: 253-255
6. Aggarwal JK, Cai Q (1997) Human motion analysis: a review. IEEE Proc., Nonrigid and Articulated Motion Workshop, San Juan, Puerto Rico, 1997, pp 90-102
7. Griffiths IW (2006) Principles of biomechanics and motion analysis. Lippincott Williams & Wilkins, New York
8. Oxford Metrics Ltd. (1999) Vicon 512 users manual
9. Lim KS, Bong YZ, Chaw YL et al. (2009) Wide range of normality in deep tendon reflexes in the normal population. Neurol Asia 14: 21-25
10. Corey-Bloom J (Ed.) (2005) Adult Neurology. Wiley-Blackwell, Massachusetts

Address of the corresponding author:
Author: Tham Lai Kuan
Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email:
[email protected]
Fig. 3 Comparison of knee angle between Force 2 and the other tapping conditions
IV. CONCLUSION

Quantification of the patellar tendon reflex by motion analysis produced significant results. The motion analysis technique has great potential for further quantification of deep tendon reflexes. It is hoped that this objective method of assessing deep tendon reflexes will solve the current problems in reflex assessment.
Quantitative Analysis of the Human Ankle Viscoelastic Behavior at Different Gait Speeds

Z. Safaeepour1, A. Esteki2, M.E. Mousavi3, and F. Tabatabaei3

1 Ph.D. Student in Biomechanics, Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
2 Department of Medical Physics and Engineering, Shahid Beheshti University (Medical Campus), Tehran, Iran
3 Department of Orthotics and Prosthetics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
Abstract— The human ankle joint exhibits viscoelastic behavior. Two components of viscoelastic stiffness have been defined: quasi-stiffness and hysteresis. Quasi-stiffness is the slope of the ankle moment-angle curve during walking; hysteresis is the area between the loading and unloading curves. Numerous studies have shown that the viscoelastic components vary as gait speed changes. The aim of this study was to quantify the quasi-stiffness and hysteresis components of viscoelastic stiffness at different gait velocities during walking. Ankle kinetic and kinematic data of a healthy young male were collected at three walking speeds using a VICON motion analysis system and Kistler force plates. Ankle angles, moments and powers were computed using the VICON Workstation software, and quasi-stiffness and hysteresis were quantified from the moment-angle curves. At the slow walking speed the hysteresis loop was clockwise; at the normal speed the hysteresis was nearly zero; and at the fast velocity the hysteresis loop changed to a counter-clockwise direction. The Spearman correlation coefficient showed a significant positive correlation between gait velocity and quasi-stiffness (p<0.005). The correlation between walking speed and hysteresis was also significant (p<0.005). Therefore, in designing prosthetic ankles, the effect of walking speed on quasi-stiffness and hysteresis should be considered.

Keywords— quasi-stiffness, ankle joint, viscoelastic stiffness, hysteresis.
I. INTRODUCTION

The biomechanical behavior of the normal human ankle joint during walking has been examined in several studies. It has been shown that the ankle joint has viscoelastic behavior with two distinct components of stiffness: elastic and viscous [1, 2]. In some studies the slopes of the moment-angle loop in the loading and unloading curves have been defined as quasi-stiffness [3, 4]; the other component, "viscous hysteresis", has been defined as the area between the loading and unloading curves [1, 5]. Investigations have revealed that ankle quasi-stiffness and hysteresis change as gait speed changes. Moment-angle curves of the ankle joint have a clockwise hysteresis loop at low speeds and a counter-clockwise hysteresis loop at normal to fast speeds [3, 6-8]. Esteki [2] showed that there is a nonlinear relation between the viscous component and velocity in finger joints. Palmer [9] examined the function of the ankle joint at different gait speeds in the sagittal plane and divided the moment-angle curve into three sub-phases: Controlled Plantarflexion (CP), Controlled Dorsiflexion (CD) and Powered Plantarflexion (PP). He concluded that ankle function can be mimicked by a linear spring in CP and by a nonlinear spring in CD; in PP, a torque actuator should be added to the nonlinear spring. Hansen et al. [3, 8] showed that the human ankle-foot system changes from a passive mechanism to an active mechanism as walking speed increases from slow to fast. Recently, researchers have focused on ankle biomechanics at different gait velocities. However, the ankle viscoelastic components at different gait speeds have not yet been clearly quantified. The aim of the present study was to quantify ankle joint quasi-stiffness and hysteresis based on the moment-angle curve at different gait velocities. The results of this study can be used to design a powered ankle prosthesis that mimics human ankle dynamics over the measured range of walking speeds.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 200–202, 2011. www.springerlink.com

II. MATERIALS AND METHODS

Kinetic and kinematic data were collected from a healthy 24-year-old male weighing 81 kg, with a height of 176 cm. Informed consent, approved by the university ethics committee, was signed by the subject. Data were collected in a gait analysis lab equipped with six infra-red motion analysis cameras (Vicon 460, Vicon Motion System Ltd., Oxford, UK) and two force plates (Kistler Instrument AG, Switzerland). Data were collected at a rate of 200 Hz. Markers were placed on anatomical landmarks according to the modified Helen Hayes marker set, including the Toe, Heel, Ankle, Tibia, Knee, Thigh and Hip
markers. A marker was placed on the sacrum (the midpoint between the posterior superior iliac spines) for computing gait velocity. The subject was initially instructed to walk at three self-selected speeds: normal, slow and fast. After getting used to the test, three trials were performed at each speed. Ankle angles (degrees), moments (N.m/kg) and powers (W) were processed using the Vicon Motion Analysis software (Workstation version 4.6). Quasi-stiffness and hysteresis were computed from the moment-angle and power-time data. The stance phase was divided into three sub-phases, Controlled Plantarflexion (CP), Controlled Dorsiflexion (CD) and Powered Plantarflexion (PP), followed by a swing phase. Loading and unloading quasi-stiffness were defined as the slopes of the lines fitted to the moment-angle curve in the loading and unloading paths, respectively. Hysteresis was defined as the area between the loading and unloading curves and was calculated by the trapezoidal approximation approach. Positive values of hysteresis indicate a counter-clockwise loop and negative values a clockwise loop. Walking speed was estimated as the total displacement of the sacrum marker divided by time over a defined distance. Statistical analysis was performed using Microsoft Office Excel 2007 and SPSS version 16. For each trial the mean and standard deviation of the variables were estimated. The Spearman non-parametric coefficient was used to determine the correlation between the variables and gait speed. The significance level was set at P < 0.05.
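The two quantities defined above can be sketched numerically as follows. The moment-angle loop here is synthetic (the real curves come from the Vicon/Kistler measurements, which are not reproduced in the paper), and the 0.09 N.m/deg.kg slope is chosen only to resemble the magnitudes in Table 1.

```python
import numpy as np

# Synthetic moment-angle loop standing in for one stance phase.
theta_load = np.linspace(0.0, 10.0, 200)        # dorsiflexion angle (deg)
m_load = 0.09 * theta_load                      # loading path (N.m/kg)
theta_unload = theta_load[::-1]
m_unload = 0.09 * theta_unload - 0.01 * np.sin(np.pi * theta_unload / 10.0)

# Quasi-stiffness: slope of a straight line fitted to the loading path.
quasi_stiffness = np.polyfit(theta_load, m_load, 1)[0]

# Hysteresis: signed area enclosed by the closed moment-angle loop
# (shoelace formula, equivalent to trapezoidal integration around the
# loop): positive for a counter-clockwise loop, negative for clockwise.
theta = np.concatenate([theta_load, theta_unload])
m = np.concatenate([m_load, m_unload])
hysteresis = 0.5 * np.sum(theta * np.roll(m, -1) - np.roll(theta, -1) * m)

print(quasi_stiffness, hysteresis)
```

For this synthetic loop the unloading path lies below the loading path, so the loop is traversed clockwise and the signed area is negative, as reported for slow walking.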
Table 2 Hysteresis in sub-phases at 3 gait speeds

                      Gait speed (m/s)
Hysteresis         Slow      Normal      Fast
CP                1.338       1.720     2.528
CD               21.321      20.009    17.943
PP               18.045      23.524    23.108
Stance phase     -3.275       3.424     5.165
Averaged ankle power was negative in the CP and CD phases and positive in the PP phase. Average ankle power increased as gait velocity increased (Fig. 1).
III. RESULTS

The mean and standard deviation of walking speed were found to be 0.98 ± 0.12 at the slow, 1.30 ± 0.12 at the normal and 1.61 ± 0.12 at the fast speed. The slope of the ankle moment-angle curve at the different gait speeds is shown in Table 1.

Table 1 Quasi-stiffness in 4 sub-phases at 3 gait speeds

Quasi-stiffness       Gait speed (m/s)
(N.m/deg.kg)       Slow    Normal    Fast
CP                0.003     0.009   0.021
CD                0.068     0.090   0.100
PP                0.078     0.087   0.091
Swing             0.000     0.000   0.000

Fig. 1 Average ankle power for different walking speeds
The Spearman correlation coefficient showed a significant positive correlation between gait velocity and the estimated variables in the stance phase of walking: for quasi-stiffness r = 0.901 and p = 0.006, for hysteresis r = 0.929 and p = 0.003, and for power r = 0.893 and p = 0.007.
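For illustration, Spearman's rank correlation can be computed directly as the Pearson correlation of the rank vectors. The (speed, quasi-stiffness) pairs below are invented stand-ins chosen to increase monotonically, not the study's trial data.

```python
import numpy as np

# Hypothetical (speed, quasi-stiffness) trial pairs, invented to
# illustrate the rank correlation reported in the text.
speed = np.array([0.95, 1.02, 0.99, 1.28, 1.31, 1.33, 1.58, 1.62, 1.65])
stiffness = np.array([0.065, 0.070, 0.066, 0.088, 0.092, 0.090,
                      0.098, 0.100, 0.101])

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors
    (assumes no ties, which holds for these synthetic values)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

rho = spearman_rho(speed, stiffness)
print(f"rho = {rho:.3f}")
```

Because it operates on ranks, the coefficient captures any monotonic speed-stiffness relation, linear or not, which suits the nonlinear velocity dependence discussed in the introduction.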
IV. DISCUSSION Table 2 presents hysteresis values for different gait phases in different velocities. In slow walking, moment-angle curve displayed clockwise loop. As speed changed to normal, hysteresis was nearly zero. At fast speeds, hysteresis loop was changed to counter-clockwise direction.
In this study, two distinct components of ankle viscoelastic stiffness were evaluated based on the moment-angle curves at different walking velocities. The ankle kinetic and kinematic data and the alterations of the hysteresis loops, power
and quasi-stiffness with different walking speeds are in agreement with previous studies [3, 8-9]. The ankle moment-angle curve was divided into three sub-phases: Controlled Plantarflexion, Controlled Dorsiflexion and Powered Plantarflexion [9]. Average power was negative at the slow speed in the stance phase of gait, but with increasing walking velocity the power turned to positive values, indicating a switch from a power consumption to a power generation stage, which is in agreement with other studies [3, 8]. As walking velocity increases, more power is generated in the ankle joint. The slope of the moment-angle curve, the quasi-stiffness, varied at the different sub-phases and also at the different speeds. It has already been proposed that the ankle-foot system can be modeled as a spring and torque generator, with the spring stiffness changing at different sub-phases of walking [3, 8, 9-11]. Based on our study, however, the quasi-stiffness is also a function of walking speed. Ankle hysteresis was likewise found to be highly affected by gait speed; therefore, velocity-dependent effects in ankle viscoelastic stiffness caused by damping elements are not negligible. A similar effect was found in the work done by Esteki et al. [2], who suggested a velocity-dependent nonlinear viscoelastic model for the finger joints. This velocity-dependent hysteresis could be taken into account in designing prosthetic ankles, too.
V. CONCLUSIONS

In conclusion, if a prosthetic ankle-foot is to act like a normal ankle, its viscoelastic components should be adjusted for different gait velocities at the different sub-phases of the joint movement.
ACKNOWLEDGMENT

The authors would like to acknowledge the ergonomy gait analysis laboratory of the University of Social Welfare and Rehabilitation Sciences, Tehran, Iran.
REFERENCES

1. Hajrasouliha AR, Tavakoli S, Esteki A et al. (2005) Abnormal viscoelastic behaviour of passive ankle joint movement in diabetic patients: an early or a late complication? Diabetologia 48:1225-8
2. Esteki A, Mansour JM (1996) An experimentally based nonlinear viscoelastic model of joint passive movement. Journal of Biomechanics 29(4):443-50
3. Hansen AH, Childress DS, Miff SC et al. (2004) The human ankle during walking: implications for design of biomimetic ankle prostheses. Journal of Biomechanics 37(10):1467-74
4. Latash ML, Zatsiorsky VM (1993) Joint stiffness: Myth or reality? Human Movement Science 12:653-92
5. Sepehri B, Esteki A, Ebrahimi-Takamjani E et al. (2007) Quantification of rigidity in Parkinson's disease. Annals of Biomedical Engineering 35(12):2196-203
6. Davis RB, DeLuca PA (1996) Gait characterization via dynamic joint stiffness. Gait & Posture 4:224-31
7. Frigo C, Crenna P, Jensen LM (1996) Moment-angle relationship at lower limb joints during human walking at different velocities. Journal of Electromyography and Kinesiology 6(3):177-90
8. Hansen AH, Miff SC, Childress DS et al. (2010) Net external energy of the biologic and prosthetic ankle during gait initiation. Gait & Posture 31:13-7
9. Palmer ML (2002) Sagittal plane characterization of normal human ankle function across a range of walking gait speeds. MS Thesis, Massachusetts Institute of Technology
10. Samuel KA, Hugg MH (2008) Powered ankle-foot prosthesis. IEEE Robotics and Automation Magazine
11. Samuel KA, Weber J, Herr MH (2009) Powered ankle-foot prosthesis improves walking metabolic economy. IEEE Transactions on Robotics 25(1):51-66
Response of the Human Spinal Column to Loading and Its Time Dependent Characteristics

H.S. Ranu1, A.S. Bhullar2, and A. Zakaria3

1 College of Applied Medical Sciences, King Saud University, Riyadh 11433, Saudi Arabia
2 American Orthopaedic Biomechanics Research Institute, Atlanta, GA 31139-1441, U.S.A.
3 Department of Biophysics, Panjab University, Chandigarh, 160014, India
Abstract–– An intervertebral pressure transducer (IVPT) and an intradiscal pressure transducer (IDPT) were developed to measure the pressures in the nucleus pulposus and around the annulus. Human lumbar spinal segments were loaded up to 2.0 kN and the following parameters were recorded: applied compressive load as a function of time, intradiscal pressure as a function of time, intervertebral pressure as a function of time for the anterior and lateral edges of the vertebra, strain as a function of time for the anterior and the lateral right and left sites of the vertebra, and stress-relaxation of the complete segment as a function of time. All show very similar responses to loading and stress-relaxation. However, the annulus pressure did not respond to stress-relaxation, owing to the functional role of the annulus.

Keywords–– Human Spine, Time Dependent, Stress-Relaxation, Spinal Column, Loading.
I. INTRODUCTION

The earliest time dependent response of the human spinal column was noted by Scotland Yard in the United Kingdom: recruits who did not meet the height requirement for enlisting in the police force were told to come back in the morning, when an increase in their height could be recorded, a clear record of the time dependent behavior of the human spinal column. Another observation of the time dependent behavior of the spinal column [1] is that there is an overall increase in body height overnight, so that one has to adjust the car mirror in the morning. Also, in the absence of a gravitational effect, as is the case in outer space, one would expect an increase in intervertebral disc volume. Therefore, there is a need to measure the changes in intervertebral disc volume in humans both in the absence of gravity and on earth [2], [3]. It is known that the nucleus pulposus of the spinal disc of young human beings has gel-like characteristics. To investigate this phenomenon ex vivo, young human spinal columns were subjected to the loading expected in vivo, and the non-linear, time-dependent response of the disc and the vertebral column to loading was studied.
II. MATERIALS AND METHODS
An intervertebral pressure transducer (IVPT) (Figs. 1 and 2) and an intradiscal pressure transducer (IDPT) (Fig. 3) were developed to measure the pressures in the nucleus pulposus and around the annulus. They were capable of measuring pressures up to 3450 kN.m-2 and 4830 kN.m-2, respectively. The IVPTs were inserted to a controlled depth of penetration; the instrumented needle was inserted with a specially developed probe. Ten spinal specimens from the lumbar region were obtained from unembalmed bodies within a short period after death. Their suitability for experimentation was determined from radiographs. After this, the specimens were stripped of all muscles and ligaments, taking care that none of the bones or discs was damaged.
Fig. 1 Underside of the intervertebral pressure transducer (IVPT)
Fig. 2 The intervertebral pressure transducers assembled in a partial circular ring
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 203–206, 2011. www.springerlink.com
Fig. 3 Close up of the ultra-miniature pressure transducer

The spinal segments were prepared by moulding the top and bottom ends of the segments in an alloy (Ostalloy, m.p. 47.3°C), thus providing an adequate load bearing surface for the specimen. During the moulding process, two conditions were satisfied: (a) the natural curvature of the spinal segment was kept the same as in situ; (b) the top and bottom surfaces of the fixture were kept parallel so that a normal load could be applied to the top surface. In vitro testing was done by applying a quasi-static compressive load to the specimen in a universal testing machine. A three-dimensional loading platform equipped with a six-axis load cell, attached to the base of the universal testing machine, was incorporated into the test set-up (Figs. 5 and 6). Eccentric loading could be applied by moving the platform in controlled increments along the two orthogonal axes (X and Y), but in the present case the eccentricity was kept at zero. The loading rate of 0.13 mm.s-1 is quite slow compared with some of the tasks to which the human spinal column is subjected in day-to-day activities. The segment was loaded up to 2.0 kN and the following data were recorded:
1) Applied compressive load as a function of time.
2) Intradiscal pressure as a function of time.
3) Intervertebral pressure as a function of time for anterior and lateral edges of the vertebra.
4) Strain as a function of time for anterior and lateral right and left sites of the vertebra.
5) Stress-relaxation of the complete segment as a function of time.

Fig. 4 The needle, the intervertebral pressure transducers and strain gauges in use around a spinal segment

Key to Fig. 5: a. Moment applicator in antero-posterior direction (X). b. Moment applicator in lateral direction (Y). c. Bottom vertebral body holder. d. Intradiscal pressure transducer (IDPT). e. Intervertebral pressure transducers (IVPTs). f. Partial ring holder for IVPTs. g. Strain gauge rosettes used to measure strains on the vertebral body. h. Specimen (spinal unit). i. Upper vertebral body-holder and loader in (Z) direction. j. Special loading head. k. Cross-head of universal testing machine.
Fig. 5 Schematic arrangement of the test set-up
III. RESULTS

Typical results for the ten cadavers illustrate the loading behaviour of the lumbar spinal segment. The variation of load, intradiscal pressure, strains, intervertebral pressures and stress-relaxation with time for a 26-year-old female cadaver with no disc degeneration, loaded to a compressive load of over 2.0 kN, is shown in the present study; all show a similar response to loading and to stress-relaxation (Figs. 6, 7 and 8).
Fig. 9 Annulus fibrosus pressure versus time curve for (•) IVP3 lateral left ; (o) IVP5 lateral right; (□) IVP7 anterior
Fig. 6 Load versus time curve for a normal disc
Fig. 7 Nucleus pressure versus time curve for a normal disc
However, the annulus pressure does not respond to stress-relaxation (Fig. 9). This is due to the functional role of the annulus: it distributes the pressures evenly in all directions when the non-degenerated nucleus of the intervertebral disc is loaded in compression. It is further observed that the annulus responds to stress-relaxation only in the direction in which bulging of the disc takes place; the annulus fibres also respond to stress-relaxation. The stress-relaxation results show that within just over 150 s the load relaxed by more than 800 N (42%) (Fig. 6), the nucleus pressure by 0.5 MN.m-2 (50%) (Fig. 7) and the average strain by 0.3 mm.m-1 (30%) (Fig. 8). The specimens were allowed to relax for more than 40 min, but stress-relaxation of the spinal segment was recorded only up to 150 seconds, as there was no change in this phenomenon after that time period. These are typical findings for a young female cadaver. It is further suggested that this stress-relaxation phenomenon of the spinal disc can be used for monitoring low back pain and disc degeneration.
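The relaxation described above is broadly consistent with a single-exponential (standard-linear-solid) decay. The sketch below is an illustrative check only: the initial load (about 1900 N, so that an 800 N drop corresponds to 42%), the plateau load and the time constant are assumed values chosen to match the quoted percentages, not data taken from the paper.

```python
import math

# Illustrative standard-linear-solid relaxation: F(t) = F_inf + (F0 - F_inf) * exp(-t / tau).
# F0 is chosen so that an 800 N drop equals 42% (800 / 0.42 ~ 1900 N); tau is a guess.
F0, F_INF, TAU = 1900.0, 1100.0, 40.0  # N, N, s (assumed)

def load(t):
    """Relaxing compressive load at time t (seconds)."""
    return F_INF + (F0 - F_INF) * math.exp(-t / TAU)

relaxed_fraction = (F0 - load(150.0)) / F0
print(f"load at 150 s: {load(150.0):.0f} N, relaxed by {relaxed_fraction:.0%}")
```

With these assumed parameters the load has essentially reached its plateau by 150 s, matching the observation that no further relaxation was recorded after that time.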
IV. CONCLUSIONS
Fig. 8 Strain versus time curve for (•) lateral left, (o) lateral right and (■) anterior sites of vertebra
It is concluded that: 1) the response of the normal young lumbar spine to loading is time dependent; 2) the normal disc and the vertebral body show a significant amount of stress-relaxation; 3) the nucleus and the annulus pressures have a linear relationship with the applied compressive load. It was postulated earlier that the pressure in the nucleus is three times the annulus pressure [4]. However, these findings suggest that there is a one-to-one relationship between the annulus and the nucleus pressures.
REFERENCES

[1] Ranu, H.S. (1970) Personal observation.
[2] Ranu, H.S. (1985) Time dependent response of the human intervertebral disc to loading. Engineering in Medicine 14:43-45.
[3] Ranu, H.S. (1993) Multipoint determination of pressure-volume curves in human intervertebral discs. Annals of the Rheumatic Diseases 52:142-146.
[4] Nachemson, A. (1960) Lumbar intradiscal pressure. Acta Orthop. Scand. 43:1-104.
Shoulder’s Modeling via Kane’s Method: Determination of Torques in Smash Activity

F.H.M. Ariff1, A.S. Rambely2, and N.A.A. Ghani2

1 Fundamental Engineering Unit, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi Selangor, Malaysia
2 Centre of Mathematical Sciences, Faculty of Science & Technology, Universiti Kebangsaan Malaysia, 43600 Bangi Selangor, Malaysia
Abstract— A 3-D model of the shoulder segment is developed using Kane’s method. Through the inverse dynamics approach, the unknown torque at the segment is found using kinematic data from an observation of a professional badminton player in a Thomas Cup tournament. The subject is 1.80 m in height and weighs 80 kg. The results show that the highest value of torque is produced during contact while performing the jumping smash activity.
Keywords— shoulder, Kane’s method, torques.
I. INTRODUCTION

The shoulder is one of the most mobile joints in the human body and moves in a complex 3-dimensional pattern. The shoulder comprises four separate articulations, namely the glenohumeral, sternoclavicular, acromioclavicular and coracoclavicular joints. The glenohumeral joint is the articulation between the humeral head and the glenoid fossa of the scapula and is typically considered the major shoulder joint. It is a ball-and-socket joint with three rotations and no translation. Clinically, the shoulder motions are defined as flexion/extension, abduction/adduction, horizontal abduction/adduction and external/internal rotation. The first three rotations are not independent, as only two of them are needed to determine the rotation of the humerus about its long axis. Shoulder abduction can be defined as the angle between the humerus and the inferior direction of the trunk in the trunk’s frontal plane. Shoulder horizontal abduction can be defined as the angle between the humerus and a line connecting the two shoulder markers in the trunk’s transverse plane, while shoulder external rotation can be described as rotation of the upper arm about its own long axis (1). Many movements of the body involve turning and rotating motions, and all linear movement originates from the lever actions of joint articulations. Whenever a joint rotates, torque occurs; torque is defined as the turning effect produced by a force (2).
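The clinical angle definitions above reduce to plain vector geometry. The sketch below, with made-up marker coordinates and an assumed body-fixed frame (nothing here comes from the paper's data), computes shoulder abduction as the angle between a humerus vector projected onto the frontal plane and the trunk's inferior direction:

```python
import math

def angle_deg(u, v):
    """Angle between two 3-D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Assumed frame: X anterior, Y lateral, Z superior.
# Frontal plane is the Y-Z plane, so the X component is projected out first.
humerus = (0.05, 0.25, -0.20)   # shoulder-to-elbow vector, metres (made up)
inferior = (0.0, 0.0, -1.0)     # trunk's inferior direction

humerus_frontal = (0.0, humerus[1], humerus[2])  # projection onto frontal plane
abduction = angle_deg(humerus_frontal, inferior)
print(f"shoulder abduction ~ {abduction:.1f} degrees")
```

The same dot-product construction, with the transverse plane and a shoulder-to-shoulder line, gives horizontal abduction.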
The smash is known as the most powerful stroke because of its speed and steep trajectory, which contribute to a winning point. It is also the most common killing shot, accounting for 53.9% of the distribution of killing shots (5). The smash can be divided into two types, the standing smash and the jump smash; according to Rambely et al. (2005), jumping while performing a smash is the technique most often chosen by world top-ranking badminton players. When a player performs a smash, the arm movement pattern plays an important role in the execution of the stroke. The pattern is an overarm one, with flexion of the elbow and medial rotation of the humerus during the forward or force-producing phase of the arm action (2). Biomechanical analysis of the badminton smash has revealed that during this phase there is a powerful inward rotation of the arm, followed by inward rotation of the forearm and lastly a flexion of the hand. Therefore, the objective of the current study is to determine the torque at the shoulder joint during the smash by obtaining the inverse dynamics matrix through the development of the kinematic and dynamic equations using Kane’s method (6).
II. MODEL

A biomechanical model of the movements of the shoulder was constructed using Kane’s method (Fig. 1, (6)). Kane’s method is a vector-based approach which uses vector cross and dot products, rather than calculus, to determine velocities and accelerations (6). It creates auxiliary quantities called partial angular velocities and partial velocities, and uses them to form dot products with the forces and torques arising from external and inertial loads. These dot products form quantities called the generalized active forces and the generalized inertia forces, which are simplified forms of the forces and moments used to write the dynamic equations of motion (6).
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 207–209, 2011. www.springerlink.com
There are three degrees of freedom in this model, represented by three angles q1, q2 and q3. Each angle represents one of the three movements of the shoulder, namely flexion/extension, adduction/abduction and external/internal rotation.

Fig. 1 Model of an arm

First, the kinematic equation is constructed by obtaining the direction cosine tables. The direction cosines are obtained using an Euler rotational series, a series of three rotations used to define uniquely the orientation of a rigid body in 3-dimensional space. Table 1 summarizes the direction cosines for the reference frame of the scapula (N) acting on the reference frame of the humerus (A).

Table 1 Direction cosines table for reference frame of scapula (N) with respect to reference frame of humerus (A)

Next, using all the quantities from the kinematic equation, the dynamic equation is constructed; its end result is the generalized active and inertia forces. In Kane’s approach, quantities called partial velocities and partial angular velocities are produced directly from the linear and angular velocities obtained from the kinematic equation by factoring these velocities. Generalized speeds ui ≡ q̇i (i = 1, 2, 3) are introduced to factorize these velocities.

Generalized active forces and generalized inertia forces are then formulated for the shoulder segment. To form the generalized active forces, vector dot products between the partial velocities of points and the forces acting at those points are computed and added together; dot products between partial angular velocities and torques are then added to this result. After computing the generalized active forces, the generalized inertia forces are calculated: these are composed of the dot products between the partial velocities of the mass centre and the inertial forces there, together with the dot products between the partial angular velocities and the inertial torques. The generalized active and inertia forces are summarized as

F1 + F1* = 0,  F2 + F2* = 0,  F3 + F3* = 0,  i.e.  Fi = -Fi*   (1)

where Fi are the generalized active forces and Fi* are the generalized inertia forces.
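Kane's recipe above (partial velocities dotted with the active and inertial forces, then Fi + Fi* = 0) can be verified on a minimal example. The sketch below applies it to a planar pendulum rather than the three-DOF shoulder model of this paper, with made-up parameters, and checks numerically that the generalized active and inertia forces cancel when the known equation of motion holds:

```python
import math

# One-DOF pendulum: point mass m on a massless rod of length l, angle q from vertical.
m, l, g = 1.2, 0.8, 9.81     # assumed parameters
q, qd = 0.6, 1.5             # arbitrary state (rad, rad/s)

# Velocity of the mass: v = l*qd*(cos q, sin q) in the fixed frame.
# Partial velocity with respect to the generalized speed u = qd:
v_partial = (l * math.cos(q), l * math.sin(q))

# Generalized active force F = v_partial . (gravity force):
gravity = (0.0, -m * g)
F = v_partial[0] * gravity[0] + v_partial[1] * gravity[1]

# Known pendulum equation of motion gives the angular acceleration:
qdd = -(g / l) * math.sin(q)

# Acceleration of the mass, and generalized inertia force F* = v_partial . (-m*a):
a = (l * qdd * math.cos(q) - l * qd**2 * math.sin(q),
     l * qdd * math.sin(q) + l * qd**2 * math.cos(q))
F_star = -(v_partial[0] * m * a[0] + v_partial[1] * m * a[1])

print(F + F_star)  # Kane's equation F + F* = 0 is satisfied
```

The shoulder model follows the same pattern, only with three generalized speeds and additional angular-velocity terms.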
III. RESULT AND DISCUSSION

The joint torque is obtained from the model developed above using kinematic data recorded from a world-class badminton player performing a jumping badminton smash. To describe the movement, four significant points are identified, each representing a point in the phases of the jumping smash: the planting of the foot, taking off, contact and landing. These four points occur within the five phases of the execution of the smash: the getting into position, back swing, forward swing, contact and follow-through phases. The first event identified is the planting of the foot, which occurs in the getting into position phase. The player then lowers his body before the taking-off event, pushes himself upward and jumps. During this airborne phase the arm segment is in the force-producing phase, and at contact the racket touches the shuttlecock. The player then enters the follow-through phase, his body moves downward and he lands. Table 2 shows the values of torque obtained at the shoulder joint while the subject performs the smash stroke, and Fig. 2 illustrates the changes of torque at the joint in each position.
Table 2 Torques of the joints produced for the shoulder segment at the specific joint

Joint      Planting of foot   Take off     Contact      Landing
           (Frame 7)          (Frame 18)   (Frame 24)   (Frame 28)
Shoulder   3.32E+03           1.17E+06     -5.74E+05    8.89E+05
During the execution of the smash, the shoulder is in its extension movement in the force-producing phase and reaches its maximum before contact. After impact the torque at the shoulder joint falls away, and this continues into the follow-through phase.

Fig. 2 Torques of joints at the shoulder from planting of foot, taking-off, contact during airborne and landing events

As the values of torque in Fig. 2 show, the torque changes as the movement changes. In a study of momentum transfer between segments in a throwing motion, Shih et al. (1990) reported that momentum transfers from the proximal segment to the distal segment. When the player is in the position of planting his foot, the forces for the next movement are generated by lowering the body to produce the upward vertical component of force (2). The force gained is transferred from the ground to the lower limb and then sequentially to the trunk. As the trunk rotates, the force is transmitted to the shoulder, the proximal limb. When the force at the proximal limb achieves its maximum, the distal limb (hand and racket segment) starts to accelerate. At this point, the torque at the shoulder joint is greater than during the other movements, and this continues until the racket makes contact with the shuttlecock. The greater the torque at the shoulder, the greater the speed of the racket; the transfer of force from the racket to the shuttlecock then accelerates the shuttlecock in the opposite direction. Finally, in the follow-through phase, the torque at the shoulder joint decreases as the player lands on the ground.

IV. CONCLUSION

The shoulder joint starts in lateral rotation and changes to medial rotation after the take-off phase. The shoulder continues to rotate medially during the airborne phase and at contact. During take-off, the shoulder gains its maximum force and hence produces the highest torque, ensuring the highest racket speed to accelerate the shuttlecock.

ACKNOWLEDGMENT

The research is supported by grants from MOHE (UKMKK-07-FRGS0004-2008).

REFERENCES

1. Hung, G.K. & Pallis, J.M. (2004) Biomedical Engineering Principles in Sports. New York: Springer.
2. Piscopo, J. & Baley, J.A. (1981) Kinesiology: The Science of Movement. Canada: John Wiley & Sons, Inc.
3. Rambely, A.S., Wan Abas, W.A.B. & Yusof, M.S. (2005b) The analysis of the jumping smash in the game of badminton. In Proceedings of XXIII International Symposium on Biomechanics in Sports, pp. 671-674. Beijing, China.
4. Shih, J.P. et al. (1990) Transfer of momentum between segments in underhand throwing motion. In Proceedings of Beijing Asian Games Scientific Congress, pp. 579-580. Beijing, China.
5. Tong, Y.M. & Hong, Y. (2000) The playing pattern of world’s top single badminton players. In Proceedings of XVIII International Symposium on Biomechanics in Sports, pp. 825-830. Hong Kong: The Chinese University of Hong Kong.
6. Yamaguchi, G.T. (2001) Dynamic Modelling of Musculoskeletal Motion: A Vectorized Approach for Biomechanical Analysis in Three Dimensions. New York: Springer.
Simulation of Brittle Damage for Fracture Process of Endodontically Treated Tooth

S.S.R. Koloor1, J. Kashani1, and M.R. Abdul Kadir2

1 Computational Solid Mechanics Laboratory, Faculty of Mechanical Engineering, Universiti Teknologi Malaysia
2 Medical Implant Technology Group, Faculty of Biomedical & Health Science Engineering, Universiti Teknologi Malaysia
Abstract— The mechanics of brittle damage in the porcelain of an endodontically treated maxillary incisor was simulated using the finite element method (FEM). For this purpose, the complex composite structure of an endodontically treated tooth was simulated under transverse loading. A three-dimensional (3D) model of the root of a human maxillary incisor was developed from Computed Tomography (CT) scan images. The crown, core cement, resin core, dental post, post cement and dentin were created using SolidWorks software, and the model was then imported into ABAQUS 6.9EF for nonlinear analysis. The finite element method was used to simulate the onset and propagation of cracks in the ceramic layer (porcelain) under both tension and compression loading, arising from the complex geometry of the tooth implant. The simulation used the brittle damage model available in ABAQUS/Explicit under quasi-static load conditions. The load-displacement response of the whole structure was measured at the top of the porcelain through displacement control of a rigid rod. A crack initiated at the top of the porcelain, below the location of the rod, caused by tension damage at an equivalent load of 590 N. Damage in the porcelain accounts for up to a 63% reduction of the whole-structure stiffness from the undamaged state. The failure process in the porcelain layer can be described by an exponential rate of fracture energy dissipation. This study demonstrates that the proposed finite element model and analysis procedure can be used to predict the nonlinear behavior of a tooth implant.
Keywords— Endodontically Treated Tooth, Damage Mechanics, Brittle Cracking, Explicit Dynamics Procedure, Nonlinear Finite Element Analysis.
I. INTRODUCTION

Conservation of the natural tooth from loss or damage is one of the important aims of endodontic dentistry. Damage to, or changes in, the mechanical characteristics of a tooth can appear as degradation of the elastic modulus, which reduces strength and increases fracture susceptibility; in the worst case the original tooth must be replaced with an Endodontically Treated Tooth (ETT). The components of an ETT, which acts as a composite structure, include the crown, metal crown base, crown cement, composite core, post, post cement and gutta percha, all bonded together and fixed on the dentin (root), as shown in Fig. 4. The crown is the upper part of an ETT, which is in contact with other teeth and with other materials during mastication; it must therefore be strong enough to resist fracture under any pressure or load. Nowadays, manufacturers prefer to produce the crown from ceramic materials such as porcelain, which is very strong and also keeps the aesthetics of the tooth close to the original colour. One of the significant problems in ETTs is damage of the ceramic part under high transverse loads; because of the complex geometry of the crown, prediction of damage evolution (initiation and propagation of cracks) is complicated, and such prediction is in high demand from industry for this type of structure. This research concentrates on the damage mechanics of a crown made of porcelain and proposes a model for the nonlinear mechanical analysis of this type of composite structure.
II. DAMAGE MODEL DESCRIPTION

The theory for modelling the mechanical behaviour of brittle materials under monotonic loading, known as “brittle damage plasticity”, was proposed by Lubliner et al. (1989) and Lee and Fenves (1998) and is described here. First, a description needs to be given of the mechanical behaviour of brittle materials, which differs between tension and compression. Under compression, the initial response is elastic until some inelastic straining appears, followed by softening; after the stress reaches the ultimate stress, the material softens until it can no longer carry any load. If the load is removed at any point in the inelastic region, the unloading response is important to consider because it is softer than the initial elastic response. Under tension, the response is elastic up to a stress that is approximately 10% of the ultimate compressive stress; micro-cracks then occur very quickly and the material loses its strength through a softening mechanism. The cracking and compression responses of brittle materials incorporated in the model are illustrated by the uniaxial response of a specimen shown in Fig. 1. The most important aspect of the mechanical analysis
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 210–214, 2011. www.springerlink.com
of brittle materials is to predict the evolution of cracks, which must be considered in the proposed model. Cracks are assumed to occur when the effective stress reaches the strength of the material on the crack detection surface, as shown in Fig. 2.

The damage model uses the classical concepts of plasticity theory to analyse brittle materials, and the strain decomposition of the theory can be formulated as

ε = ε^el + ε_t^pl + ε_c^pl   (1)

where ε is the total mechanical strain, ε^el is the elastic strain, and ε_t^pl and ε_c^pl are the plastic strains associated with the tension and compression surfaces. The two major failure mechanisms are assumed to be tensile cracking and compressive crushing. Damage evolution is controlled by hardening variables, the tensile and compressive equivalent plastic strains ε̃_t^pl and ε̃_c^pl, linked to the failure mechanisms under tension and compression loads. Fig. 3 illustrates the response of brittle materials under uniaxial tensile and compressive loads, where the stress-strain response consists of linear elastic and plastic regions.
Fig. 1 Uniaxial behavior of plain concrete

The crack opening and softening process is based on Hillerborg's (1976) approach, which defines the energy required to open a unit area of crack as a material parameter, using brittle fracture concepts. The prediction of the crack growth direction is based on the work of Rots and Blaauwendraad (1989). The influence of cracks normally appears as a large stiffness reduction of the structure, which is one of the important factors in applications of this type of material and must be considered in the mechanical model.
Fig. 3 Response of brittle materials to uniaxial loading in tension (a) and compression (b)
Fig. 2 Brittle failure surfaces in plane stress
The parameter σ_t0 represents the failure stress in tension, and σ_c0 and σ_cu are the initial yield and ultimate stresses in compression. As shown in Fig. 3, when the brittle specimen is unloaded from any point on the strain-softening branch of the stress-strain curves, the unloading response is weakened and the elastic stiffness of the material appears to be damaged (or degraded). The degradation
of the elastic stiffness is characterized by two damage variables, d_t and d_c, which are functions of the plastic strains:

d_t = d_t(ε̃_t^pl), 0 ≤ d_t ≤ 1;   d_c = d_c(ε̃_c^pl), 0 ≤ d_c ≤ 1   (2)

The damage variables range from zero, representing the undamaged state of the material, to one, representing total loss of strength at a point of the structure. The reduction of the elastic modulus for brittle damaged plasticity is defined in terms of a scalar degradation variable d as

E = (1 − d) E_0   (3)

where E_0 is the initial elastic stiffness and d is a function of the stress state and the uniaxial damage variables d_t and d_c. In the general 3D condition, the stress-strain relation is given by the scalar damage elasticity equation

σ = (1 − d) D_0^el : (ε − ε^pl)   (4)

where D_0^el is the initial (undamaged) elasticity matrix.

Fig. 4 The components of an Endodontically Treated Tooth

Fig. 5 shows the model setup, loading and boundary conditions.
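A toy one-dimensional reading of Eqs. (2)–(4) can make the degradation mechanism concrete. In the sketch below, E_0 and the tensile failure stress are the porcelain values from Table 1, but the exponential damage evolution is an illustrative assumption of ours, not the softening law used in the paper:

```python
import math

# Toy 1-D version of the scalar damage law: sigma = (1 - d) * E0 * (eps - eps_pl).
E0 = 61e9          # initial elastic stiffness, Pa (porcelain, Table 1)
SIGMA_T0 = 24.8e6  # tensile cracking failure stress, Pa (Table 1)
EPS_T0 = SIGMA_T0 / E0  # strain at crack initiation

def damage(eps):
    """Tension damage variable d in [0, 1]: zero below the failure strain,
    then an assumed exponential evolution toward 1."""
    if eps <= EPS_T0:
        return 0.0
    return 1.0 - math.exp(-(eps - EPS_T0) / EPS_T0)

def stress(eps, eps_pl=0.0):
    """Degraded uniaxial stress, Eq. (4) reduced to one dimension."""
    return (1.0 - damage(eps)) * E0 * (eps - eps_pl)

# Elastic below the failure strain, softening toward zero beyond it.
print(stress(0.5 * EPS_T0), stress(1.5 * EPS_T0), stress(5.0 * EPS_T0))
```

The exponential form echoes the paper's observation that the failure process dissipates fracture energy at an exponential rate, but the specific decay constant here is arbitrary.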
III. STRUCTURAL MODEL, MATERIAL AND METHOD

A 3D model of the root of an adult maxillary incisor was developed from a Computed Tomography (CT) scan image set of 33 slices with 1 mm slice thickness. Using an image-processing software package (Mimics, Ver. 10, Materialise NV, Leuven, Belgium), the maxillary incisor was separated and modeled. A 3D model of a tapered passive post (D.T. LIGHT-POST® ILLUSIONTM X-RO®, BISCO Inc., USA) was created in commercial 3D modeling software (SolidWorks 2009, Dassault Systèmes, USA). The dental post geometry was measured with a profile projector (PJ-A3000, Mitutoyo, Japan), and the post was designed with 10 mm length inside the root canal. The other elements of the endodontically treated tooth, such as the composite core, core cement, base metal and porcelain, were modeled in SolidWorks on the basis of the root geometry. Commercial FEM software (Abaqus 6.9EF, Dassault Systèmes, USA) was used for analysis of the model. The model was meshed with C3D4 tetrahedral elements, 838326 elements in total. The major concern in this model was the damage behavior of the crown; as can be seen in Fig. 4, the mesh in part (1) was most refined (616153 elements), and the element size increases towards part (8).
Fig. 5 Loading and boundary conditions

Table 1 Mechanical properties of porcelain [5, 6]

Property                                   Value
Density                                    2400 kg/m3
Elasticity modulus                         61 GPa
Poisson ratio                              0.19
Cracking failure stress                    24.8 MPa
Direct cracking failure displacement       0.01x10-3 m
Tension stiffening (2 segments)            24.8 MPa at 0.0 m / 1.0 MPa at 0.00001 m (remaining direct stress, direct cracking displacement)
Inelasticity in compression (2 segments)   75 MPa at 0.0 m / 1.0 MPa at 0.00001 m (remaining direct stress, direct cracking displacement)
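The two-segment tension-stiffening definition in Table 1 is simply a piecewise-linear curve of remaining direct stress versus direct cracking displacement. A minimal sketch of how such a table is evaluated (plain linear interpolation for illustration, not the ABAQUS internals):

```python
# Tension-stiffening points from Table 1:
# (direct cracking displacement [m], remaining direct stress [MPa]).
SEGMENTS = [(0.0, 24.8), (1e-5, 1.0)]

def remaining_stress(u_crack):
    """Linearly interpolate the remaining direct stress at crack displacement u_crack."""
    (u0, s0), (u1, s1) = SEGMENTS
    if u_crack <= u0:
        return s0
    if u_crack >= u1:
        return s1  # beyond the last point, hold the residual stress
    return s0 + (s1 - s0) * (u_crack - u0) / (u1 - u0)

print(remaining_stress(0.5e-5))  # halfway between the two points
```

So at half the failure displacement the crack still transmits (24.8 + 1.0)/2 = 12.9 MPa, which is what makes the post-cracking response softening rather than an instantaneous loss of strength.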
Loading was controlled by displacement at a rate of 14 mm/s, applied at an angle of 45 degrees at the top of the tooth. The load cell was modeled as a cylinder of rigid material and meshed with 1520 R3D4 quadrilateral
Table 2 Mechanical properties of endodontically treated tooth components Components Properties Metal Crown [7] Crown Cement [9] Composite Core [8] Post [10] Cement Post [9] Gutta Percha [7] Dentin [8]
Density (Kg/m3) 7800 2000 2000 1500 2000 200 2100
Elastic Modulus (GPa) 180.5 8.1 15 83.7 8.1 0.0069 18.3
Poisson Ratio 0.27 0.3 0.24 0.28 0.3 0.45 0.31
damage evolution in the crown part for two different times, first a time near the initiation of crack and second at maximum loading, therefore the propagation of cracks can be observed. 4000
Force (N)
elements. Brittle damage theory is defined in porcelain part of the to represent the inelastic behavior of the brittle part under transverse loading. The other parts of the model assumed to be homogenous, isotropic and they are treated as linear behavior. The connection between all parts assumed to be prefect bond during the loading. The mechanical properties of porcelain and other components were assigned according to tables 1 and 2 that were used in this study.
213
3000 2700 N Linear
2000
NonLinear
1000 1010 N 0 0
0.1
0.2 0.3 Displacement (mm)
0.4
Fig. 6 Load-Displacement curves for linear and nonlinear behavior of structure
The explicit dynamics procedure was operated as the most suitable process for damage analysis of quasi-brittle material. The reaction force on the load cell was computed for both linear and nonlinear mechanical behaviour of whole structure, to compare stiffness reduction from the prefect linear condition.
IV. RESULTS AND DISCUSSION
Fig. 7 Contour of tension damage at crown (cracking)
Model development was through some steps, at first the model meshed with coarse elements and it refined till 5% finite element error in the response of whole structure. Also in porcelain part tried to create more refined mesh, smaller than the crack size (0.01mm) to observe the better contour of damage and result because once the cracks occur stiffness of the part reduce rapidly. The loading was controlled by displacement with rate of 14mm/s, and the reaction force of whole structure on the loadcell was recorded based on the movment of loadcell (Fig. 6). The curve is showing the stifness changing of whole structure in elastic and plastic regions. Once the cracks occure in tension or compration parts of porcelain, suddenly the load drops. Also the reaction force of linear behavior of structure (all components act elastically) has plotted to compare with nonlinear condition, which is showing that stiffeness of structure redused up to 63% from the undamaged state. As it mentioned before, brittle cracks occur mostly in tension condition which the strength of the material is low, and Fig. 7 is illustrating the
Fig. 8 shows brittle crushing caused by compressive loads on the porcelain material. The first crack initiates at a load of about 590 N, at which point the structure can be considered failed and no longer usable. This study focused only on the nonlinear behaviour of the crown; the other components act linearly and are attached to each other perfectly, which is why the load carried by the whole structure keeps increasing with time (Fig. 6).
Fig. 8 Contour of compression damage at crown (crushing)
IFMBE Proceedings Vol. 35
S.S.R. Koloor, J. Kashani, and M.R. Abdul Kadir
V. CONCLUSIONS The total stiffness reduction caused by damage in the ceramic part of an ETT was computed relative to the undamaged condition. The details of the fracture and damage model, including brittle cracking and crushing, were described to introduce it as a tool for nonlinear analysis of ETTs.
REFERENCES
1. J. Lubliner, J. Oliver, S. Oller and E. Oñate. (1989) A plastic-damage model for concrete, International Journal of Solids and Structures; 25: 299–329.
2. J. Lee and G. L. Fenves. (1998) Plastic-damage model for cyclic loading of concrete structures, Journal of Engineering Mechanics; 124(8): 892–900.
3. A. Hillerborg, M. Modeer and P. E. Petersson. (1976) Analysis of crack formation and crack growth in concrete by means of fracture mechanics and finite elements, Cement and Concrete Research; 6: 773–782.
4. J. G. Rots and J. Blaauwendraad. (1989) Crack models for concrete: discrete or smeared? Fixed, multi-directional or rotating?, HERON; 34(1): 1–59.
5. A. S. Rizkalla and D. W. Jones. (2004) Indentation fracture toughness and dynamic elastic moduli for commercial feldspathic dental porcelain materials, Dent Mater; 20: 198–206.
6. W. J. O'Brien. (1997) Dental materials: properties and selection, 2nd ed. Chicago (IL): Quintessence, pp. 339–398.
7. C. Ko, C. Chu, K. Chung and M. Lee. (1992) Effects of posts on dentin stress distribution in pulpless teeth, J Prosthet Dent; 68: 421–427.
8. T. Papadopoulos, D. Papadogiannis, D. E. Mouzakis, K. Gianadakis and G. Papanicolaou. (2010) Experimental and numerical determination of the mechanical response of teeth with reinforced posts, Biomed Mater; 5(3): 035009.
9. L. Ceballos, M. A. Garrido, V. Fuentes and J. Rodriguez. (2007) Mechanical characterization of resin cements used for luting fiber posts by nanoindentation, Dental Materials; 23: 100–105.
10. A. Torbjörner, S. Karlsson, M. Syverud and A. Hensten-Pettersson. (1996) Carbon fiber reinforced root canal posts: mechanical and cytotoxic properties, Eur J Oral Sci; 104: 605–611.
Author: Seyed Saeid Rahimian Koloor
Institute: Universiti Teknologi Malaysia
City: 81310 UTM, Skudai, Johor
Country: Malaysia
Email: [email protected]
Stress Distribution Analysis on Semi Constrained Elbow Prosthesis during Flexion and Extension Motion
M. Heidari1, M. Rafiq Bin Dato Abdul Kadir2, A. Fallahiarezoodar1, and M. Alizadeh1
1 Department of Mechanical Engineering, UTM, Johor, Malaysia
2 Department of Health Science and Biomedical Engineering, UTM, Johor, Malaysia
Abstract— Total elbow arthroplasty can be the best option for patients with advanced elbow dysfunction, both to relieve pain and to restore normal physiological function. As reported in the literature, the most common complications of linked prostheses are loosening and mechanical failure. In this study the stress distribution of a total elbow replacement was analyzed at three different flexion angles during both flexion and extension motion. According to the results, most stress concentration occurred near the articulation surface and the pin, and the maximum stress occurred at 90 degrees of flexion in both flexion and extension motion. The study was validated against experimental results reported in a previous study. Keywords— Elbow joint, Total elbow arthroplasty, Stress distribution.
I. INTRODUCTION The elbow has one of the most complex articular anatomies among human joints. The distal humerus, proximal radius and proximal ulna are the three elbow bones, connected by tendons, ligaments and muscles [1]. The most common disease affecting normal elbow function is rheumatoid arthritis, which weakens the patient's functional status. Total elbow arthroplasty can be the best option for patients with advanced elbow dysfunction, both to relieve pain and to restore normal physiological function [2]. As with all other joint replacements, complications such as instability, loosening, dislocation, polyethylene wear and infection have restricted the long-term survivorship of total elbow arthroplasty [3]. Of the two prosthesis types, semi constrained and unconstrained, the semi constrained prosthesis is a polyethylene-metal loose-hinged device with intrinsic stability. Although this stability prevents unfavorable dislocation of the bones, it permits valgus-varus motion. As reported in the literature, loosening and mechanical failure are the most common complications of linked prostheses [6,7]. The purpose of this study is to analyze the stress distribution along a semi constrained total elbow prosthesis using computer modeling and the finite element method, during flexion and extension motion at three different flexion angles (0°, 30° and 90°). Since a high rate of loosening is considered the most important complication of the elbow joint, stress distribution analysis can be useful for predicting the most critical motion after total elbow replacement. In this study, a semi constrained 3D model of an elbow prosthesis was designed using CAD modeling software, and a 3D human bone model was reconstructed from CT data. For stress analysis of elbow arthroplasty, it is necessary to calculate the active muscle forces at each relevant angle [1]. According to a previous study, the biceps brachii and brachialis act as flexors and the triceps brachii acts as an extensor during elbow motion. Comparison with experimental results obtained by other studies served as the validation method for this study.
II. MATERIALS AND METHODS A 3D computer model of a left elbow (humerus and ulna scanned separately) was constructed using computed tomography (CT). This model, shown in Figure 1, specifies the geometric shape of the bony segments. The model was arranged manually at three different flexion angles (0°, 30° and 90°), and a smoothing operation was performed to remove sharp edges and avoid artificial stress concentrations. A simple 3D model of a current-generation elbow prosthesis (Coonrad-Morrey), consisting of a humeral component, an ulnar component and a pin, was designed using CAD modeling software, as shown in Figure 1. Both the reconstructed bones and the designed prosthesis were meshed according to finite element analysis procedures. The designed implant was placed in the appropriate position in the bone, and the extra parts of the bone were removed. The meshes of all articulating surfaces were recreated manually to match all interface points and remove mesh interferences. All material properties were chosen according to a previous study, as shown in Table 1. All materials were assumed homogeneous, elastic and isotropic according to previous reports [8,10].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 215–218, 2011. www.springerlink.com
extension motion, a bone weight force of 16 N was applied at the centre of the ulna in the Z direction. As shown in Figure 2, all muscle forces were applied with insertion points and directions estimated from a previously published anatomical study [9]. The proximal top surface of the humerus was fully constrained, not allowed to translate or rotate [9]. A finite element analysis software package was used to analyze the model. Table 2 Three calculated effective muscle forces during flexion and extension at three different flexion angles (N)
Fig. 1 Meshed reconstructed bones and designed prosthesis in three different flexion angle
Table 1 Material properties applied for prosthesis and bone

Item        Material                                Elastic modulus E (MPa)   Poisson ratio
Prosthesis  Titanium                                110000                    0.3
Bone        Bone (cancellous part not considered)   15000                     0.3
Item   0 degree   30 degree   90 degree
BIC    0          180         300
BRA    0          160         270
TRI    0          250         400
All required muscle forces were taken from the report of Raikova (2009), as shown in Figure 2 [9]. According to that investigation, the elbow angle displacement was assumed to follow the equation:
wherein the first symbol is the flexion angle of the elbow and T = 0.4. In this study the remaining variable was treated as the unknown, given a known elbow angle. To calculate its magnitude at each flexion angle, the three values 0, 30 and 90 degrees of flexion were substituted into the above function and the equation was solved. Table 2 lists all calculated muscle forces during flexion and extension for the corresponding magnitudes of this variable. There was no muscle force at 0 degrees; the bone weight was the only external force acting on the elbow joint in the Z direction, as shown in Figure 2. In all three positions, and in both flexion and
Fig. 2 Schematic of elbow joint in (a) flexion (b) extension at the 0°, 30° and 90° positions. Point O was considered as the center of rotation of the ulna about the humerus. G = 16 N is the gravity force of the forearm
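The inversion step described in the text, finding the unknown value that reproduces a prescribed flexion angle, can be sketched with a simple bisection root-finder. The profile phi(t) below is a hypothetical stand-in (the paper's actual angle-displacement equation is not reproduced in the text); only the solution procedure is illustrated.

```python
import math

T = 0.4  # s, the period constant used in the study

def phi(t):
    # Hypothetical smooth flexion profile: 0 deg at t=0, 90 deg at t=T.
    # A stand-in for the study's angle-displacement equation.
    return 90.0 * (1.0 - math.cos(math.pi * t / T)) / 2.0

def solve_for_t(target_deg, lo=0.0, hi=T, tol=1e-9):
    # Bisection: phi is monotonic on [0, T], so the solution is unique.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < target_deg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Substitute the three flexion angles of interest, as in the study.
for angle in (0.0, 30.0, 90.0):
    t = solve_for_t(angle)
    print(f"{angle:5.1f} deg  ->  t = {t:.4f} s")
```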
III. RESULTS Stress plots were created for the designed elbow prosthesis. Figure 3 shows an example of the stress plots created during this study. As shown in Figure 3, the stress concentration at the articulation surface is higher than in other areas. In the humeral component, a significant stress concentration appeared in the upper part, while moderate stress was observed in the middle. In contrast, in the ulnar component, elements in the middle or farther from the articulation surface carry lower stress than elements closer to it. Figure 4 shows the stress plot for the pin at 90 degrees of flexion during flexion motion. According to the results, the most
critical part of the prosthesis is the pin, which carries high stress compared with the ulnar and humeral components. Figure 5 shows the maximum stress calculated in each component and in the connecting pin, at three different flexion angles during flexion motion. The highest maximum stresses for the ulnar component, humeral component and pin were calculated as 150, 193 and 214 MPa respectively. The maximum stress occurred at 90 degrees of flexion. As shown in Figure 5, stress increased significantly from 30 to 90 degrees of flexion. The results also show lower stress at 30 degrees than at 0 degrees in the ulnar component.
Fig. 5 Maximum stress comparison in three different flexion angle during flexion. UC: Ulnar component, HC: Humeral component, p: pin
Figure 6 shows the maximum stress calculated in each component and in the pin, at three different flexion angles during extension motion. The maximum stresses for the ulnar component, humeral component and pin were calculated as 113, 159 and 208 MPa respectively.
Fig. 3 Von Mises stress plots for the designed implant at 90 degrees of flexion during flexion motion
Fig. 4 Von Mises stress plots for pin in 90 degree of flexion during flexion motion
Fig. 6 Maximum stress comparison in three different flexion angle during extension. UC: Ulnar component, HC: Humeral component, p: pin
IV. DISCUSSION According to the results, most stress concentration occurred near the articulation surface, and the maximum stress occurred at 90 degrees of flexion in both flexion and extension motion. At 0 and 30 degrees of flexion the maximum stress in the ulnar component was higher than in the humeral component and pin, showing that at small flexion angles the stress in the ulnar component is more critical. With increasing flexion angle the stress magnitude increased in the humeral component and pin, whereas in the ulnar component the stress decreased from 0 to 30 degrees of flexion and then increased up to 90 degrees of flexion, where the overall maximum of 214 MPa was reached. During extension, the maximum stress decreased as the flexion angle decreased from 90 to 0 degrees. As shown in Figures 5 and 6, the maximum stresses at high flexion angles are significantly greater than those at lower flexion angles. According to the results, the most critical part of the semi constrained elbow prosthesis is the pin, which carries high stress, especially in the elements connected to the humeral and ulnar components. This high stress would increase pin wear and the possibility of pin failure, which could cause serious problems in prosthesis function and stability. Previous experimental studies reported that most total elbow replacement failures occur at the articulation surface and at the bone-implant interface; the stress plots obtained in this study likewise show that the upper part of the humeral component, the articulating surface and the connecting pin carry high stress, which may cause both kinds of loosening. This agreement supports the validity of the results.
V. CONCLUSION In this investigation a stress analysis of a semi constrained elbow implant was performed. All muscle forces were calculated according to a previous study and applied at each position. Bones, muscle forces, bone gravity force and the complete elbow prosthesis were all included in the model to provide the most accurate estimates of the stress distribution. Patients who have undergone total elbow replacement surgery can be advised to avoid high flexion angles in order to decrease the risk of implant failure. The results of this study can
be a reliable suggestion, especially for prosthesis designers and manufacturers. Further work could address stress analysis in the bone, as well as comparison of the stress distribution and micromotion between semi constrained and unconstrained elbow prostheses.
ACKNOWLEDGMENT This study was supported by Universiti Teknologi Malaysia.
REFERENCES
1. Judd S. (2009) The clinical performance of UHMWPE in elbow replacements, UHMWPE Biomaterials Handbook, chapter 10, Academic Press
2. Lee K T, Singh S, Lai C H. (2005) Semi-constrained total elbow arthroplasty for the treatment of rheumatoid arthritis of the elbow, Singapore Med Journal; 46(12): 718-722
3. Shih S, Lu, Fu Y-C, Hou S-M, Sun J-S, Cheng C-Y. (2008) Biomechanical analysis of unconstrained and semi constrained total elbow replacement: primary report, Journal of Mechanics, Vol. 24, No. 1
4. Gregory J J, Ennis O, Hay S M. (2008) Total elbow arthroplasty, Journal of Elbow and Shoulder Surgery, Vol. 22, Issue 2, pp. 80-89
5. Morrey B F, Adams R A. (1992) Semi constrained arthroplasty for the treatment of rheumatoid arthritis of the elbow, Journal of Bone and Joint Surgery Am; 74: 479-490
6. Szekeres M, King G J W. (2006) Total elbow arthroplasty, Journal of Hand Therapy; 19: 245-254
7. Herren D B, O'Driscoll S W, An K-N. (2001) Role of collateral ligaments in the GSB-linked total elbow prosthesis, Journal of Shoulder and Elbow Surgery, Vol. 10, No. 3, pp. 260-264
8. Thillemann T M, Olsen B S, Johannsen H V, Sjbjerg J O. (2006) Long-term results with the Kudo type 3 total elbow arthroplasty, Journal of Shoulder and Elbow Surgery; 15: 495-499
9. Raikova R T. (2009) Investigation of the influence of the elbow joint reaction on the predicted muscle forces using different optimization functions, Journal of Musculoskeletal Research, Vol. 12, No. 1, pp. 31-43
10. Zander T, Rohlmann A, Burra N K, Bergmann G. (2006) Effect of a posterior dynamic implant adjacent to a rigid spinal fixator, Journal of Clinical Biomechanics, Vol. 21, pp. 767-774

Author: Ali Fallahiarezoodar
Institute: Universiti Teknologi Malaysia
Street: Taman University
City: Johor Bahru
Country: Malaysia
Email: [email protected]
Stress Distribution of Dental Posts by Finite Element Analysis
P.H. Liu and G.H. Jhong
Department of Biomedical Engineering, I-Shou University, Kaohsiung, Taiwan, R.O.C.
Abstract— The purpose of this study was to investigate the stress distribution in three types of dental posts, to understand their biomechanical effects, using three-dimensional finite element analysis (FEA). The FEA models consisted of a post, a Co-Cr crown, a premolar without crown and a regional bone section of the mandible. The three post types were a split-shank post, a core crown post and a non split-shank post. The boundary condition was fixed at the cross sections of the regional mandible. Occlusal forces of 150 N were applied at the buccal cusp along a 45-degree inclination and at the central fossa parallel to the longitudinal axis of the premolar. The stress values at the core region of the traditional core crown post were two and three times greater than in the non split-shank and split-shank posts respectively. The stress distributions of the split-shank and non split-shank posts showed a similar pattern, but the peak stress in the non split-shank post was smaller than in the split-shank post. In the two shank post systems, the relationship between Young's modulus and stress concentration was an important factor influencing root fracture. In conclusion, stress concentration and magnitude were more pronounced in the core crown post than in the other posts, while shank posts with and without the split design showed no noteworthy difference for the same materials, such as titanium and glass fiber.
II. MATERIALS AND METHODS The FEA models consisted of a post, a Co-Cr crown, a premolar without crown and a regional bone section of the mandible. The premolar model was reconstructed by 3D laser scanning (NEXTEngine) and divided into two parts, crown and root. The core crown, non split-shank and split-shank posts were constructed using CAD software (SolidWorks 2009) (Fig. 1); the same procedure was applied to construct the regional mandible, including the cortical and cancellous bone structures. Three materials, titanium and glass fiber for the shank posts and Co-Cr alloy for the core crown post, in combination with a 1.2 mm post diameter, were investigated for each post type; hence a total of five FEA post models were studied. The material properties of the FE models are shown in Table 1 and were assumed linear elastic, homogeneous and isotropic [4-5].
Keywords— Dental post, Finite element analysis, Stress concentration.
I. INTRODUCTION Fracture of an endodontically treated tooth is an important issue in restoration, because the brittleness of the tooth increases as moisture is removed from it. Hence, tooth fracture can occur under larger occlusal forces or during file treatment [1]. Many posts have been developed in various shapes, diameters and materials to reinforce the tooth [2]. An excellent post can reduce stress concentration and tooth damage, but a good dental post design is still unclear. Furthermore, stress distributions in the post and dentin have been shown to be a factor in post-system failure [3]. The purpose of this study was to investigate the stress distribution in three post systems, with core crown, non split-shank and split-shank designs, by three-dimensional finite element analysis (FEA). We hope that the FEA results can provide a suggestion for the choice of posts.
Fig. 1 Three types of posts in the FE mesh models

Table 1 The material properties of the FE models

Material           Young's modulus E (MPa)   Poisson's ratio υ
Dentin             18600                     0.31
Co-Cr Crown        120000                    0.28
Titanium Post      112000                    0.33
Cortical bone      13700                     0.3
Cancellous bone    1370                      0.3
Glass Fiber Post   45000                     0.28
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 219–221, 2011. www.springerlink.com
The boundary condition was fixed at the cross sections of the regional mandible. Occlusal forces of 150 N were applied at the buccal cusp along a 45-degree inclination and at the central fossa parallel to the longitudinal axis of the premolar model (Fig. 2) [5-6]. The numbers of nodes and elements of the FE models were 41435 and 30580 for the split-shank post, 33287 and 23925 for the core crown post, and 49432 and 42086 for the non split-shank post, respectively. Ten-node tetrahedral elements were used for meshing in this study.
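The 45-degree occlusal load can be resolved into components along and across the premolar's long axis; a small sketch (the component names are illustrative, not taken from the paper):

```python
import math

# Decompose the 150 N occlusal force applied at 45 degrees to the
# premolar's longitudinal axis (loading scheme of Fig. 2).
F = 150.0                   # N, applied occlusal force
theta = math.radians(45.0)  # inclination to the longitudinal axis

F_axial = F * math.cos(theta)       # component along the long axis
F_transverse = F * math.sin(theta)  # bending component across the axis

print(f"axial: {F_axial:.1f} N, transverse: {F_transverse:.1f} N")
```

At 45 degrees both components are equal, about 106.1 N each, so nearly three quarters of the applied load acts as a bending component on the post.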
Fig. 2 The boundary and loading conditions of the FE post model

III. RESULTS AND DISCUSSION The FEA results revealed that the core crown post of Co-Cr alloy showed its maximum Von Mises stress (109.279 MPa) at the fossa area of the crown, and a secondary maximum Von Mises stress was detected at the post shaft (Fig. 3). Comparison between the split-shank and non split-shank titanium posts showed that the stress concentration at the split-shank post was more pronounced than at the non split-shank post (Fig. 4). Hence, the split-shank post seems to transfer forces into the tooth efficiently. The stress values at the core region of the traditional core crown post were two and three times greater than in the non split-shank and split-shank posts respectively. Therefore, the structure of the core crown post could induce stress concentration in the root around the end of the post. In addition, the Co-Cr crown base margin on the buccal side of the traditional core crown post showed a stress concentration caused by pressing contact on the dentin margin of the root (Fig. 5). This also implies an increased possibility of root fracture. The maximum Von Mises stresses in the split-shank and non split-shank posts with glass fiber are shown in Fig. 6. The stress distributions of the split-shank and non split-shank posts showed a similar pattern,
Fig. 3 The result of Von Mises stress in the core crown post
Fig. 4 The result of Von Mises stress in the split-shank (left) and non split-shank (right) titanium post
Fig. 5 Stress concentration at the buccal side of dentin margin in the core crown post
but the peak stress in the non split-shank post was smaller than in the split-shank post. Moreover, in this study the titanium posts, with their higher Young's modulus, concentrated larger stresses in the post. On the other hand, the FEA showed no strong difference in stress distribution between the non split-shank and split-shank posts when the same material was used, but the split-shank post deformed more easily than the non split-shank one. This could decrease the stress concentration around the end of the post and help avoid root fracture. In the two shank post systems, the relationship between Young's modulus and stress concentration was an important factor influencing root fracture. The effects of combining various diameters and materials in split-shank and non split-shank posts will be investigated by FEA in a future study.
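The modulus-stress link noted above can be made concrete: under a displacement-controlled (roughly equal-strain) view, the stress a post attracts scales with its Young's modulus, sigma = E * epsilon. Using the moduli from Table 1 (the strain value is an arbitrary illustration, not a computed result):

```python
# Stress attracted by each post material at the same imposed strain,
# sigma = E * epsilon, with moduli taken from Table 1.
E_titanium = 112000.0    # MPa
E_glass_fiber = 45000.0  # MPa
epsilon = 0.001          # illustrative strain, not a value from the study

sigma_ti = E_titanium * epsilon
sigma_gf = E_glass_fiber * epsilon
ratio = sigma_ti / sigma_gf
print(f"titanium/glass-fiber stress ratio: {ratio:.2f}")  # ~2.49
```

This simple scaling is consistent with the observation that the stiffer titanium posts concentrate larger stresses than the glass fiber ones.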
ACKNOWLEDGMENT This research was supported by a grant from National Science Council of Taiwan, NSC 99-2221-E-214-042.
REFERENCES
1. Carter JM, Sorensen SE, Johnson RR, Teitelbaum RL, Levine MS. (1983) Punch shear testing of extracted vital and endodontically treated teeth. J Biomech 16: 841-848
2. Coelho CSM, Biffi JCG, Silva GR, Soares CJ. (2009) Finite element analysis of weakened roots restored with composite resin and posts. J Dental Materials 28: 671-678
3. Fernandes AS, Shetty S, Coutinho I. (2003) Factors determining post selection: a literature review. J Prosthet Dent 90: 556-562
4. Adanir N, Belli S. (2007) Stress analysis of a maxillary central incisor restored with different posts. Eur J Dent 1: 67-71
5. Pegoretti A, Fambri L, Zappini G, Bianchetti M. (2002) Finite element analysis of a glass fibre reinforced composite endodontic post. Biomaterials 23: 2667-2682
6. Adanir N, Belli S. (2007) Stress analysis of a maxillary central incisor restored with different posts. Euro J Dentis 2: 67-71
Fig. 6 The result of Von Mises stress in the split-shank (left) and non split-shank (right) glass fiber post
IV. CONCLUSIONS The stress concentration and magnitude in the core crown post were shown to be more significant than in the other posts. The FEA showed no noteworthy difference in stress concentration between the shank posts with and without the split design. The glass fiber shank post could provide a capability for decreasing stress concentration in the post.
Author: Pao-Hsin Liu
Institute: I-Shou University
Street: No.8, Yida Rd., Jiaosu Village
City: Yanchao District, Kaohsiung
Country: Taiwan, R.O.C.
Email: [email protected]
Temporal Characteristics of the Final Delivery Phase and Its Relation to Tenpin Bowling Performance
R. Razman1, W.A.B. Wan Abas2, N.A. Abu Osman2, and J.P.G. Cheong3
1 Sports Centre, University of Malaya, Kuala Lumpur, Malaysia
2 Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia
3 School of Sport Science, Exercise and Health, The University of Western Australia, Perth, Australia
Abstract— Spatial and temporal variability in the execution of skills has been analyzed in many sports. An expert performer is commonly described as more consistent in skill execution than a novice. The purpose of this study was to analyze the temporal characteristics and variability of the final delivery phase and examine how they relate to bowling level and performance. Two levels of bowlers took part in this study: 18 elite (Male=10, Female=8; Bave 213.2±6.80; BRvel 17.66±0.85 mph) and 12 semi-elite bowlers (Male=7, Female=5; Bave 181.3±9.36; BRvel 16.90±1.46 mph). The final delivery phase consisted of three major events: the arm swing, the front foot slide and the ball release. The temporal variables measured were execution time and between-trial temporal variability, while average bowling score and ball release velocity represented the performance criteria. In general, the results indicate that the temporal characteristics of the two groups were quite similar, but in terms of relationship to bowling performance, front foot slide time was correlated with bowling average. Variability-wise, the elite group was less consistent in front foot slide execution time. There were no significant differences or correlations for the other variables. It was concluded that lower temporal variability was not indicative of higher playing level or better bowling performance. Keywords— Biomechanics, Variability, Consistency, Arm Swing, Foot Slide.
I. INTRODUCTION The objective of tenpin bowling is to try to knock down as many pins as possible within the allotted number of tries. In the modern game, bowlers achieve this by generating a lot of momentum using heavy balls that are released accurately and consistently at great velocities. Spatial variability in performing sport skills has been studied in various disciplines such as javelin and basketball [1]. In bowling, it was revealed that reduced variability of the medio-lateral foot path and anterior-posterior foot placement during the slide correlated to better bowling scores [2].
Meanwhile, in terms of temporal variability, it has been demonstrated that expert performers in a number of games (namely - baseball, table tennis and field hockey) execute their drives with more consistent movement times. It appeared that the time between the first forward motion of the implement and the moment of ball contact varied little between trials [3]. Such consistency in expert performers has been suggested by some as a motor program theory of control - a program which is thought to be a set of instructions for movement, organized ahead of its execution [3]. The consistency of movement therefore can be argued to be the result of consistent motor programming. Currently, there are no published works on the temporal variability of the delivery phase in tenpin bowling. The final delivery phase in bowling comprised of three major events which are the arm swing, front foot slide and ball release. The temporal variable that was measured was the execution time and the between-trial temporal variability while the measured performance criteria were average bowling score (Bave) and ball release velocity (BRvel). The purpose of this study was to examine how temporal variability of the events in the delivery phase was related to the Bave and BRvel. In addition, the temporal characteristics and variability of the delivery phase between elite and semielite bowlers were also compared. Consistent execution times in the delivery phase were believed to be related to higher playing level and better bowling performance.
II. METHODOLOGY Participants were assigned to two groups based on their Bave, recorded over three tournaments. Those averaging above 200 pin falls were placed in the elite group. There were 18 elite (Male=10, Female=8; Bave 213.2±6.80; BRvel 17.66±0.85 mph) and 12 semi-elite bowlers (Male=7, Female=5; Bave 181.3±9.36; BRvel 16.90±1.46 mph). Temporal data were derived from the Kwon3D system, while BRvel was measured using timing gates and recorded in
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 222–224, 2011. www.springerlink.com
miles per hour (mph) in accordance with common bowling literature. Four Basler (100Hz) cameras were used for motion capture at the bowling alley. The participants aimed for a strike at each delivery, with pins reset after each trial irrespective of whether there were any pins left standing. There were seven trials in total and bowlers were instructed to use similar delivery methods for every trial. However, only trials 3 to 6 were used in the analysis. To assist in identifying events, reflective markers were placed at the wrist and metacarpal of the bowling arm as well as under the heel counter of the sliding foot. Event markers used in the delivery phase were:
Table 2 Correlation Scores

Time Variable           Bave         BRvel
TBS to BR               r = 0.115    r = -0.092
FFS to BR               r = -0.023   r = -0.414+
Mean SD for TBS to BR   r = -0.003   r = 0.162
Mean SD for FFS to BR   r = 0.304    r = 0.095
+ Significant correlation.
• Top of back swing (TBS) – the start of the downward swing of the arm
• Front Foot Strike (FFS) – the point at which the front foot touched the ground
• Ball Release (BR) – the point at which the ball separates from the hand

The mean time over the four trials was calculated, while the standard deviation (SD) of each participant was used as the variability indicator. Lower mean SD values indicated less variability between trials. Independent samples t-tests and Pearson product moment correlations were used to compare group scores and identify relationships, respectively. Significance level was set at p<.05.

Fig. 1 Scatter plot of FFS to BR and BRvel
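The two statistics named above can be sketched with standard-library Python. The Pearson r and pooled-variance t formulas are the standard ones; the data arrays below are made-up placeholders, not the study's measurements.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_independent(a, b):
    # Independent-samples t statistic with pooled variance,
    # df = len(a) + len(b) - 2.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Placeholder data: slide times (s) vs release velocities (mph).
x = [0.33, 0.35, 0.36, 0.38, 0.40]
y = [18.1, 17.6, 17.2, 16.8, 16.3]
print(round(pearson_r(x, y), 3))  # strongly negative, as for FFS-BRvel
```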
III. RESULTS
Mean times and mean SDs of the groups are presented in Table 1. The elite and semi-elite bowlers were only significantly different in their mean SD for FFS to BR (t28 = 2.138).

Table 1 Mean Group Scores

Variable                  Elite           Semi-Elite
TBS to BR time (sec)      0.539±0.047     0.526±0.097
FFS to BR time (sec)      0.356±0.051     0.355±0.068
Mean SD for TBS to BR     0.030±0.014     0.030±0.014
Mean SD for FFS to BR*    0.014±0.005     0.010±0.005
BRvel (mph)               17.661±0.852    16.897±1.465
* Significant difference between the groups.
Fig. 2 Scatter plot of Mean SD of FFS to BR and Bave
The relationships of the temporal and variability variables to bowling performance are summarized in Table 2. Scatter plots of the variables with strong correlations to bowling performance are presented in Figures 1 and 2.
IV. DISCUSSION From the results, it appeared that temporal characteristics of the arm swing were not major determinants of playing level and bowling performance. The execution times for the
arm swing of the elite and semi-elite groups were quite similar. Considering that swing time is a function of swing velocity, this result is plausible, as the ball release velocities of the two groups were also not significantly different. Furthermore, arm swing times were not significantly correlated with Bave or BRvel. The same applies to the variability of arm swing execution time: the elite and semi-elite groups were quite similar in this aspect, while its relationship to Bave or BRvel was also very low. In terms of foot slide execution time, it appears that quicker execution times lead to higher BRvel. A bowler who releases at a higher velocity would invariably have the whole body moving at a higher initial velocity, which means the foot needs to slide faster – hence the quicker slide time. Interestingly, the foot slide temporal variability was rather unexpected. The elite group had higher temporal variability of the foot slide (mean SD of FFS to BR) than the semi-elite bowlers. In other words, the expert performers were less consistent in the temporal aspects of the foot slide. It also appeared that higher FFS execution time variability was correlated with better bowling average, albeit not significantly. The results of this study contrast with the previous findings of Bootsma and van Wieringen [3], which suggested that expert performers have more consistent skill execution times. It is possible that those results were limited to ball-implement impact sports such as table tennis, baseball and hockey, and do not necessarily apply to a sport like bowling, where the ultimate aim is accuracy rather than maximal velocity or distance. This is supported by a bowling study on spatial variability that found that expert bowlers had less consistent anterior-posterior foot placement at FFS [2].
In impact sports, a performer often consistently tries to minimize execution time in order to increase implement velocity and thereby achieve maximum ball velocity and/or distance. In bowling, it is likely that expert bowlers vary their temporal and spatial parameters, in this instance especially the foot slide, to adjust their final body segments into the correct positions prior to releasing the ball. These adjustments are necessary to attain maximum accuracy.
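The temporal-variability index discussed above (the standard deviation of a phase's execution time across repeated deliveries) can be sketched in a few lines. The durations below are hypothetical illustrations, not the study's data.

```python
import statistics

def temporal_variability(durations_s):
    """Mean and SD of repeated phase durations (s); the SD across
    deliveries is the temporal-variability index discussed above."""
    return statistics.mean(durations_s), statistics.stdev(durations_s)

# Hypothetical FFS-to-BR durations over five deliveries (illustration only):
elite = [0.42, 0.47, 0.39, 0.45, 0.41]
semi_elite = [0.43, 0.44, 0.42, 0.44, 0.43]

for label, t in (("elite", elite), ("semi-elite", semi_elite)):
    m, sd = temporal_variability(t)
    print(f"{label}: mean = {m:.3f} s, SD = {sd:.3f} s")
```

A higher SD for the elite sample, as in this toy data, would reproduce the pattern reported here: less temporal consistency at the higher playing level.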
V. CONCLUSION
Consistent temporal and spatial skill execution in sports has commonly been linked to expert performers. However, the results of this study indicated that the temporal characteristics and variability of the arm swing were not determinants of playing level, nor did they relate to bowling performance. Furthermore, the foot slide temporal data showed that the elite bowlers were less consistent than the semi-elite bowlers. Overall, it is concluded that in the final delivery phase of tenpin bowling, a more consistent execution time did not relate to higher playing level or better bowling performance.
ACKNOWLEDGMENT
The authors would like to acknowledge the logistical and technical support provided by the Malaysian Tenpin Bowling Congress. The study was funded by the University of Malaya UMRG research grant.
REFERENCES
1. Bartlett R.M. (2008) Movement variability and its implications for sports scientists and practitioners: An overview. Int J of Sports Science & Coaching 3:113-123.
2. Razman R., Abas W. and Osman N. (2010) Front foot slide variability and its relation to tenpin bowling performance. Proceedings of the 28th International Conference on Biomechanics in Sports, Northern Michigan University, Marquette, Michigan, USA, 2010.
3. Bootsma R.J. and van Wieringen P.C.W. (1990) Timing an attacking forehand drive in table tennis. J of Exp Psych: Human Perception and Performance 50:197-209.
Corresponding author:
Author: Rizal Razman
Institute: Sports Centre, University of Malaya
Street: Lembah Pantai
City: 50603 Kuala Lumpur
Country: Malaysia
Email: [email protected]
The Biomechanics Analysis for Dynamic Hip Screw on Osteoporotic and Unstable Femoral Fracture H.F. Wu1, K.A. Lai2, M.T. Huang2, H.S. Chen1,3, K.C. Chung1, and F.S. Yang4 1
Institute of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan 2 Dept. of Orthopedics, National Cheng Kung University Hospital, Tainan, Taiwan 3 Department of Anesthesiology, E-DA Hospital, Kaohsiung, Taiwan, R.O.C. 4 Global Alliance Technology Co., Ltd, Kaohsiung, Taiwan
Abstract— Osteoporotic patients have low bone quality and often sustain unstable fractures. The conventional dynamic hip screw (DHS) method provides good results for stable femoral fractures, but it has a high failure rate in unstable femoral fractures. The purpose of this study was to investigate the biomechanics of the dynamic hip screw in osteoporotic, unstable proximal femoral fractures. The specific aims were to: (1) develop a model of the osteoporotic, unstable proximal femoral fracture, and (2) investigate the influence of lesser trochanter fixation on the stress and deformation distribution. This study reviewed a total of 130 patients treated by dynamic hip screw surgery and classified the fractures according to the AO system. The statistics showed that the most common unstable proximal femoral fracture type is A2.1 (AO classification). An osteoporotic femur model for the finite element method (FEM) was developed from the CT images of an 80-year-old patient. The loading condition was single-leg stance. The FEM results show that, with dynamic fixation and no wire fixation, the lesser trochanter has a rotational moment. Using wires to fix the lesser trochanter can enhance the structural stiffness and reduce the stress in the unstable intertrochanteric fracture (type A2.1).
Keywords— Osteoporosis, Unstable fracture, DHS, wire.
I. INTRODUCTION
Osteoporosis is a major public health problem. It is a skeletal disease characterized by low bone mass and microarchitectural deterioration. Osteoporotic patients frequently sustain fragility fractures, most commonly of the vertebrae, hip and wrist. There were 1.66 million hip fractures worldwide [1]: 1,197,000 in women and 463,000 in men. The dynamic hip screw is the standard treatment for stable proximal femoral fractures, but it has a high failure rate in unstable fractures. An unstable fracture has a weak structure, which causes the load to concentrate on the femoral head; the cut-out complication can then occur, especially in osteoporotic patients. Hip biomechanics can help us design new devices and develop new techniques to solve this clinical problem. From the diagram of the lines of stress in the upper femur, the lesser trochanter carries the compression loading, so it plays an important role in dynamic hip screw fixation. If the lesser trochanter is lost, force cannot be transferred through the strong cortical bone. This study investigated the biomechanics of the unstable proximal femoral fracture and examined whether the lesser trochanter is an important factor.
II. MATERIALS AND METHODS
A. Unstable Proximal Femoral Fracture Model
Radiographic images provide information on a patient's bone fracture. From the radiographic images of DHS surgeries, the most common unstable femoral fracture can be identified. The AO femur fracture classification is based on the fracture position: intertrochanteric, neck and head. Type A fractures are fractures of the trochanteric area and are divided into three groups. Group A1 contains the simple (two-fragment) pertrochanteric fractures whose fracture line runs from the greater trochanter to the medial cortex; this cortex is interrupted in only one place. There are three subgroups, reflecting the pattern of the medial fracture line: in A1.1 the fracture runs above the lesser trochanter; A1.2 has calcar impaction in the metaphysis; and A1.3 is a trochantero-diaphyseal fracture that finishes distal to the lesser trochanter. Type A2 fractures have a fracture line pattern identical to Type A1; however, the medial cortex is multifragmentary. The type is subdivided into A2.1, with one intermediate fragment (the lesser trochanter); A2.2, with two intermediate fragments; and A2.3, with more than two intermediate fragments. Clinically, A2.2 and A2.3 are difficult to distinguish, so in this study they were counted together. Type A3 fractures are characterized by a line that passes from the lateral femoral cortex below the greater trochanter to the proximal border of the lesser trochanter; often there is also an undisplaced fracture separating the greater trochanter. A3.1 is the reverse intertrochanteric fracture (with an oblique fracture line), while A3.2 is transverse (intertrochanteric). A3.3 involves the detachment of the lesser trochanter and is notoriously difficult to reduce and stabilize.
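The Type A taxonomy above can be summarized as a small lookup table. This is an illustrative sketch of the classification described in the text, not part of the study's software.

```python
# Sketch of the AO trochanteric (Type A) subgroups described above;
# keys are AO codes, values are short descriptions.
AO_TYPE_A = {
    "A1.1": "simple pertrochanteric, fracture line above the lesser trochanter",
    "A1.2": "simple pertrochanteric, calcar impaction in the metaphysis",
    "A1.3": "trochantero-diaphyseal, ending distal to the lesser trochanter",
    "A2.1": "multifragmentary medial cortex, one intermediate fragment (lesser trochanter)",
    "A2.2": "multifragmentary medial cortex, two intermediate fragments",
    "A2.3": "multifragmentary medial cortex, more than two intermediate fragments",
    "A3.1": "reverse obliquity intertrochanteric",
    "A3.2": "transverse intertrochanteric",
    "A3.3": "intertrochanteric with detached lesser trochanter",
}

def group(code):
    """Return the AO group ('A1', 'A2' or 'A3') for a subgroup code such as 'A2.1'."""
    return code.split(".")[0]
```

Grouping by the prefix mirrors how the study pools A1 with A3 and counts A2.2 with A2.3.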
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 225–228, 2011. www.springerlink.com
This study collected the radiographic images of patients operated on with the dynamic hip screw at the orthopedic department of National Cheng Kung University Medical Center. For the statistics, the fractures were first divided into intertrochanteric, neck and head, and the most frequent type was then classified into subgroups. The most common type was used for the FEM model.
B. Investigation of Unstable Proximal Femoral Fracture Biomechanics
Finite element analysis (FEA) is a common method for investigating orthopedic biomechanics. It can determine the stress and strain in a complex geometric model in order to evaluate a fixation. Femur FEA requires a femur geometric model, force boundary conditions and material properties. The femur model was reconstructed from the computed tomography images of a 69-year-old patient; the CT scan resolution was 0.6 mm. The femur reconstruction had four stages. First, a 3D solid model was reconstructed with the Mimics software and saved as an STL file. Then, the Solidworks software was used to smooth the femur model. The muscle, DHS and wire models were also established in Solidworks. In all simulation models, the lag screw was fixed in the femoral head with a tip-to-apex distance (TAD) of 20 mm. The wire model, created along the femur surface, was described by an irregular curve, and depressions were filled with straight lines. The wire fixation position was defined according to the lesser trochanter migration. In this study, three wire fixation methods were used (Fig. 1). The material properties for osteoporosis were assigned according to reference data [2, 3]. The dynamic hip screw and wires are stainless steel. The difference between models is the muscle boundary assigned on the outside of the femur. Linear elastic isotropic material properties were assigned to all the materials in the model (Table 1). The loading boundary condition followed the one-leg stance (Fig. 2). The major forces included a hip reaction force of 2,997 N applied on the femoral head and an abductor muscle force of 122 N applied on the greater trochanter, while the distal femur was fixed [4].
Table 1 FEM material property parameters
Item                  Elastic modulus, E (MPa)   Poisson's ratio
Cortical bone         14,217                     0.32
Cancellous bone       100                        0.2
Head cancellous bone  13,000                     0.3
Muscle                0.5                        0.35
Stainless steel       19,000                     0.3
Fig. 2 FEM boundary conditions: (A) the distal femur was fixed, (B) hip reaction force and (C) muscle force
III. RESULTS
A. Unstable Proximal Femoral Fracture Model
Fig. 1 Three wire fixation methods. Fixation 1 has two wires: the upper wire passes from the greater trochanter to above the lesser trochanter, and the lower wire from the bottom of the greater trochanter to below the lesser trochanter. Fixation 2 also has two wires: the upper wire passes from the bottom of the greater trochanter to above the lesser trochanter, and the lower wire from the bottom of the greater trochanter to below the lesser trochanter. Fixation 3 has one wire, which passes from the bottom of the greater trochanter to below the lesser trochanter
This study collected 130 patients' radiographic images. There was no head fracture type, but there were many combined fractures that were difficult to identify, counted as "others". The statistics show that Type A has 95 cases (40 cases of Types A1 and A3, 41 cases of Type A2.1, and 14 cases of Types A2.2 and A2.3), Type B has 10 cases, and "others" account for 25 cases. The Type A intertrochanteric fracture was the most common fracture type, especially Type A2.1, so the FEM analysis model was the Type A2.1 unstable proximal fracture. The model has three fragments: the first fracture line runs from the greater trochanter to the lesser trochanter, and the second fracture line cuts off the lesser trochanter.
Table 2 Statistics of the 130 patients' fractures
Item                               Counts
Type A intertrochanteric fracture  95
  A1 & A3                          40
  A2.1                             41
  A2.2 & A2.3                      14
Type B neck fracture               10
Others                             25
Total                              130
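The counts in Table 2 can be turned into proportions with a few lines; this is an illustrative check, reproducing the reported distribution (Type A2.1 is the largest single subgroup, at roughly 31.5% of the 130 cases).

```python
# Fracture counts from Table 2 (Type A subgroups listed separately).
counts = {
    "A1 & A3": 40,
    "A2.1": 41,
    "A2.2 & A2.3": 14,
    "Type B (neck)": 10,
    "Others": 25,
}
total = sum(counts.values())
assert total == 130  # matches the reviewed patient count

for item, n in counts.items():
    print(f"{item}: {n} ({100 * n / total:.1f}%)")
```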
fixing the lesser trochanter with wires can increase the structural stability. The FEM von Mises stress results (Fig. 4) showed that the maximum stress in the femoral head appears superior to the lag screw. With DHS fixation plus wires, all three methods reduced the maximum stress by 60% (Fig. 5).
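The von Mises equivalent stress reported by the FEM solver is derived from the six Cauchy stress components. A minimal sketch of the standard formula (not the authors' code) is:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress (same units as the inputs, e.g. MPa)
    from the three normal and three shear stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Sanity check: pure uniaxial stress maps to itself.
print(von_mises(100.0, 0, 0, 0, 0, 0))  # → 100.0
```

A solver's per-element von Mises field is exactly this scalar evaluated at each integration point, which is why it is the natural quantity for comparing the fixation methods.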
B. Investigation of Unstable Proximal Femoral Fracture Biomechanics
In the total deformation results, the arrows show the direction of movement under the applied load and the colors show the magnitude. The total deformation result for DHS fixation without wires showed that the migration of the lesser trochanter was downward (Fig. 3), and the femoral head had a clockwise rotational moment. Lesser trochanter fixation may maintain the contact of the fracture surfaces and resist the migration of the head fragment. The wire fixation methods were therefore defined according to the observed lesser trochanter movement and clinical experience.
Fig. 5 von Mises stress of femur head
Fig. 6 Comparison of the maximum femoral head stress: wire fixation decreases the femoral head stress by 60%
IV. DISCUSSION
Fig. 3 FEM result for the Type A2.1 femur with dynamic hip screw fixation and no wires. Arrows show the migration of the femoral head and lesser trochanter
In this study, the statistics showed that the most common unstable fracture was Type A2.1. This fracture type has three fragments; the lesser trochanter fragment makes the structure unstable. The conventional treatment is DHS fixation alone. In this unstable condition, the femoral head has to bear a larger force, and with lower bone mass, "cut-out" occurs easily [5]. The migration of the femoral head indicated the direction of lesser trochanter movement, and the FEM von Mises results showed that the maximum stress in the femoral head appears superior to the lag screw.
Fig. 4 Total deformation of femur head
In the total deformation results of the femoral head, the head part still had a rotational movement, but the colors of the arrows show that the movement decreased significantly, indicating that
V. CONCLUSIONS
From the FEM analytic results, DHS fixation with a wire has the advantage of decreasing the femoral head stress and reducing femoral head migration. The added wire fixation can raise the
stability of unstable fracture fixation in the biomechanical analysis. A further advantage is that the wire fixation contact surface is the stronger cortical bone, so wire fixation gives better fixation than screw fixation in cancellous bone. This provides a new method for fixation of the simple unstable proximal femoral fracture.
ACKNOWLEDGMENT
This study was supported by a grant from the National Science Council, Taiwan, Republic of China (Grant No. BY 05-0730-98).
REFERENCES
1. Lane N.E., Sambrook P.N. (2006) A companion to rheumatology: osteoporosis and the osteoporosis of rheumatic diseases. Mosby Elsevier.
2. Tencer A.F., Johnson K.D. (1994) Biomechanics in orthopedic trauma: bone fracture and fixation. M. Dunitz.
3. Wang C.J., Yettram A.L., Yao M.S., Procter P. (1998) Finite element analysis of a gamma nail within a fractured femur. Med Eng Phys 20:677-683.
4. Taylor M.E., Tanner K.E., Freeman M.A.R. et al. (1996) Stress and strain distribution within the intact femur: compression or bending? Med Eng Phys 18:122-131.
5. Kim W.-Y., Han C.-H., Park J.-I., Kim J.-Y. (2001) International Orthopaedics (SICOT) 25:360-362.
Author: Hsu-Fu Wu
Institute: Institute of Biomedical Engineering, National Cheng Kung University
Street: No. 1, Daxue Rd.
City: Tainan
Country: Taiwan
Email: [email protected]
Hemodynamic Activities of Motor Cortex Related to Jaw and Arm Muscles Determined by Near Infrared Spectroscopy L.M. Hoa, Đ.N. Huan, N.V. Hoa, D.D. Thien, T.Q.D. Khoa, and V.V. Toi Biomedical Engineering Department, International University of Vietnam National University, Ho Chi Minh City, Viet Nam
Abstract— It has been reported that imbalance in the jaw can cause a loss in arm strength. In order to investigate this phenomenon through the relationship between bite and neural activity, near-infrared spectroscopy (NIRS) was used. The purpose of this study was to determine the influence of biting on the concentrations of oxygenated hemoglobin over the motor area. NIRS was administered to three healthy subjects. Subjects were asked to perform 3 different tasks: biting on a spacer using the right jaw, lifting a weight with the right arm, and combining the two. The NIRS probes were positioned contralaterally, over the left region. The results suggested that using the jaw muscles and the arm muscles influenced the neural activity in the motor area. Moreover, the neural activity increased significantly when the arm muscles were used instead of the jaw muscles, as shown by the levels of OxyHemoglobin in the motor area.
Keywords— Motor cortex, biomechanics and Near-Infrared Spectroscopy.
I. INTRODUCTION
It has been reported that there is a strong relationship between the oral maxillofacial muscles, the temporomandibular joint, and the neck and arm muscles. Linderholm et al. [1], who investigated the relationship between bite force magnitude and the forces produced by several skeletal muscle groups in children, suggested that there was a relationship between bite force and other forces, and that maximal bite force correlated with elbow flexion force and hand grip force. Raadsheer et al. [2] found that the size of the jaw muscles was significantly related to the size of the limb muscles; however, maximal voluntary bite force moments were not significantly related to the moments of the arm flexion and leg extension forces. Various investigations have shown effects of dental protective devices on different muscle groups and on the second stage of labor [3, 4, 5, 6]. Smith [7] performed a study to examine the effect of an increased vertical dimension of occlusion on isometric deltoid strength in 25 members of a professional football team with a variety of temporomandibular joint dysfunctions, stomatognathic muscle disorders and bite abnormalities. He concluded that there was a relationship between the jaws, posture, and
the ability of the arm muscles to contract strongly. Forgione et al. [8] analyzed the effect on isometric strength of biting on three intraoral devices and of habitual occlusion, using a Nautilus lateral raise exercise device. They found that the average strength obtained with the elevated bite set to the functional criterion was significantly greater than in all other bite conditions, and concluded that a relationship does exist between bite and isometric strength. In a study of the relationship between the height of bite plates and the strength of the deltoid muscles, Chakfa et al. [9] performed experiments with 20 female subjects using bite plates of adjusted heights of 2, 4, 6 and 12 mm. Each subject was seated on a dental chair with the arms extended to the side at shoulder level and parallel to the floor. With or without the bite plate in the subject's mouth, the examiner applied a downward force on the wrist of the extended arm with a strain gauge, while applying a stabilizing force on the contralateral shoulder, until the subject's arm could no longer resist the downward force. These authors found that, as the height of the plate increased, the arm strength increased (from 6.9 kg on the right arm and 6.4 kg on the left arm without the plate) to a maximum (of 8.6 kg on either arm) and then decreased to 6.5 kg on the right and 6.3 kg on the left arm. The authors did not indicate the height of the plate at which the maximum strength occurred, reporting only that it depended on the subject; however, we deemed that it was either 4 or 8 mm. In a study to determine the effects of imbalance in the jaw on the strength of the arms, we [10] conducted experiments with a pool of 34 young (age: 19.8±0.9 years) and healthy subjects of both genders. The subjects stood straight with both arms extended and feet apart.
They resisted a pull-down force applied on one hand at a time while firmly holding a firm spacer on one side of the jaw, essentially using their molar and premolar teeth, until they could no longer resist the force. We found that when the subjects held the spacer on one side of the jaw, the contralateral arm lost strength. This loss varied linearly with the thickness of the spacer and stopped when the thickness of the spacer reached about 2 mm.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 229–232, 2011. www.springerlink.com
[Figure: resistive force (N) of the right hand plotted against spacer thickness (mm)]
Fig. 1 (Excerpted from [10]): The variation of the arm strength with respect to the thickness of a bite plate for one subject
There is a relationship between the jaw muscles and other skeletal muscles, including the arm muscles. However, the neural basis of this phenomenon is unclear. In order to study it, near infrared spectroscopy (NIRS) was used. It is a powerful, non-invasive imaging technique. It offers many advantages, including compact size, no need for specially-equipped facilities, the potential for real-time measurement, and measurement in a natural posture and condition [11]. Using NIRS, previous studies suggested that the premotor area is involved in bite force control [12] and that the intensity of bite and finger clenching influences activation levels in the primary motor (MI) and somatosensory cortices; activation was more obvious with biting than with finger clenching [13]. In Onozuka's experiments [14], biting significantly activated the oral region of the primary sensorimotor cortex, the supplementary motor area, the insula, the thalamus, and the cerebellum. These regions are believed to receive sensory information from the lips, tongue, oral mucosa, gingivae, teeth, mandibles, and TMJ and to control masticatory movement and the lingual and facial muscles, and therefore may be called the masticatory center. The purpose of this study was to investigate the relation between bite and arm strength through oxygenated hemoglobin (OxyHb) and deoxygenated hemoglobin (deOxyHb) levels in the motor area using NIRS. Corresponding to the Brodmann areas, we used the NIRS system to measure a square region covering areas 4 and 6. We determined the relationship between bite and arm strength and neural activity in the brain. NIRS serves as a new tool for testing hypotheses about the anatomical regions involved in processing sensory and motor information in the human brain.
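The OxyHb and deOxyHb levels that NIRS reports are obtained from optical density changes at two wavelengths via the modified Beer-Lambert law. The sketch below illustrates the two-wavelength unmixing only; the extinction coefficients, path length and differential pathlength factor (DPF) are placeholder values, not the instrument's calibration.

```python
# Modified Beer-Lambert law: dOD(lambda) = (e_HbO2*dHbO2 + e_HbR*dHbR) * d * DPF.
# Measurements at two wavelengths give a 2x2 linear system in (dHbO2, dHbR).
# All coefficients below are illustrative placeholders.

def unmix(dod1, dod2, e_hbo=(1.4, 0.6), e_hbr=(0.8, 1.8), d=3.0, dpf=6.0):
    """Solve for (dHbO2, dHbR) from optical density changes at two wavelengths."""
    a11, a12 = e_hbo[0] * d * dpf, e_hbr[0] * d * dpf
    a21, a22 = e_hbo[1] * d * dpf, e_hbr[1] * d * dpf
    det = a11 * a22 - a12 * a21  # Cramer's rule for the 2x2 system
    return ((dod1 * a22 - dod2 * a12) / det,
            (a11 * dod2 - a21 * dod1) / det)
```

An increase concentrated in dHbO2 with little change in dHbR is the activation signature discussed in the Results.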
II. MATERIALS AND METHODS
A. Subjects
Three male subjects (average age: 21.67±1.53 years) participated in this investigation. All participants were healthy and showed no musculoskeletal or neurological restrictions or diseases. They had a complete dentition, i.e. no premolar or molar was missing in any quadrant. There were no signs of severe malocclusion and no facial malformations. All subjects were free of any medication and did not complain of any kind of muscle pain at the time of the experiments. Before the experiments each subject filled out a questionnaire, which was kept confidential and included the participant's identification, age and gender. The tenets of the Declaration of Helsinki were followed; the local Institutional Review Board approved the study and informed consent was obtained from all subjects.
B. Protocols
A NIRS system (FOIRE-3000, Shimadzu Co. Ltd., Japan) was used to determine regional cerebral blood flow (rCBF) in the motor cortex during biting and lifting a weight. Subjects were fitted with one 3 x 3 array of optodes on the left region, allowing for 12 channels of measurement. The measurement protocol consisted of several steps:
Step 1: Measure the concentration of OxyHemoglobin (OxyHb) over the motor cortex while the subject bites on a spacer introduced on one side of the jaw (here, the right). The spacer's thickness was 3.3 mm.
Step 2: The subject lifts a weight with the right arm without the spacer while the OxyHb concentration is measured.
Step 3: As in step 2, but the subject lifts the weight with the right arm while the spacer is introduced.
All tasks were performed using a block design: rest (20 seconds) - task (20 seconds) - rest (20 seconds).
Fig. 2 & 3 Photo example of a volunteer's position during the experiments and the optode positions on a volunteer
III. RESULTS
The results obtained from one male subject while biting on the spacer, lifting a weight, and combining the two
simultaneously are shown in Figure 4. They show a significant correlation between bite and arm strength and the levels of OxyHemoglobin in the motor cortex. In other words, the concentrations of OxyHemoglobin increased significantly when the subjects bit on the spacer with the right jaw and lifted a weight with the right arm. It is possible that neural activity in the motor cortex is involved in muscle function. Some studies have also suggested that neuronal activity in the motor cortex is tightly coupled to muscle output. For example, during voluntary movement, the activity of motor cortex neurons is correlated with muscle force and muscle activity [15-17]. Neurons in the motor cortex are active in correlation with a range of movement parameters, including the direction of hand movement through space, velocity, force, joint angle, and arm posture [15-26].
The amount of brain matter devoted to any particular body part represents the amount of control that the motor cortex has over that body part. In order to observe this phenomenon clearly in terms of neural activity, the NIRS Fusion technique was used to map the brain in a 3D image (Figure 5). At the 31st second, the concentrations of OxyHemoglobin in the three different tasks differ. The amount of brain matter devoted to biting is smaller, and in a lower position, than that devoted to lifting a weight.
Fig. 5 The brain mapping of one subject during biting on the spacer with the right jaw (task 1), lifting a weight with the right arm (task 2), and combining the two simultaneously (task 3), using the NIRS Fusion technique
A previous study [13] suggested that the concentrations of OxyHemoglobin were higher during biting than during finger clenching. In their experiment only hand muscles were involved in the measurement, whereas in our experiment subjects used both hand and arm muscles, which may be the reason why the levels of OxyHemoglobin were higher during the use of arm muscles than jaw muscles. In particular, the levels of OxyHemoglobin increased considerably when subjects simultaneously used both jaw and arm muscles.
Subject A
Subject B
Subject C
Fig. 4 The motor cortex signals of the 3 subjects during biting on the spacer with the right jaw (task 1), lifting a weight with the right arm (task 2), and combining the two simultaneously (task 3)
Moreover, the concentrations of OxyHemoglobin were higher during the use of arm muscles than during the use of jaw muscles, so it is clear that the levels of OxyHemoglobin in the motor cortex depend on the applied force. This is also the reason why the levels of OxyHemoglobin of the third subject were somewhat smaller than those of the others. Muscle activity requires an inflow of oxygen to the active cells; oxygen is carried by hemoglobin, so when a task is performed oxygen rushes to the muscles in use. The arm muscles require a larger oxygen supply because arm muscles are generally bigger than jaw muscles.
Fig. 6 The average levels of OxyHemoglobin in 3 different tasks of 3 subjects
IV. CONCLUSION
It is obvious that skeletal muscle activities, such as those of the jaw and arm muscles, can cause hemodynamic responses which
can be measured by NIRS. This information helps us understand the mechanism of the relation between the jaw and arm muscles and neural activity in the motor cortex. Neural activity increased significantly with the use of arm muscles compared to the use of jaw muscles, as indicated by the levels of OxyHemoglobin in the motor area. In particular, the levels of OxyHemoglobin increased considerably when subjects simultaneously used both jaw and arm muscles.
ACKNOWLEDGMENT
We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting this research (grant No. 106.99-2010.11) as well as attendance and presentation. This research was partly supported by a grant from Shimadzu Asia-Pacific Pte. Ltd. and by a research fund from Vietnam National University in Ho Chi Minh City. We also thank Associate Professor Nguyen Thi Doan Huong, Ho Chi Minh City Medicine and Pharmacy University, for her valuable information about human anatomy and physiology. Finally, an honorable mention goes to our volunteers and friends for their support in completing this project.
REFERENCES
1. Linderholm H, Lindqvist B, Ringqvist M, Wennström A. (1971) Isometric bite force and its relation to body build and general muscle force. Acta Odont Scand; 29: 563-568.
2. Raadsheer MC, van Eijden TMGJ, van Ginkel FC, Prahl-Andersen B. (2004) Human jaw muscle strength and size in relation to limb muscle strength and size. Eur J Oral Sci; 112: 398-405.
3. Gelb H, Mehta NR, Forgione AG. (1996) The relationship between jaw posture and muscular strength in sports dentistry: a reappraisal. Cranio; 14(4):320-325.
4. Kaufman RS, Kaufman A. (1984) An experimental study on the effects of the MORA on football players. Basal Facts; 6(4):119-126.
5. Matsuo K, Mudd JV, Kopelman JN, Atlas RO. (2009) Duration of the second stage of labor while wearing a dental support device: A pilot study. J Obstet Gynaecol Res; 35(4): 672-678.
6. Garner DP, McDivitt E. (2009) Effects of mouthpiece use on airway openings and lactate level in healthy college males. Compendium; 30(2):9-13.
7. Smith S. (1978) Muscular strength correlated to jaw posture and the temporomandibular joint. N Y State Dent J; 44(7):278-285.
8. Forgione AG, Mehta NR, Westcott WL. (1991) Strength and bite, Part 1: An analytical review. Cranio; 9(4):305-315.
9. Abdallah EF, Mehta NR, Forgione AG. (2004) Affecting upper extremity strength by changing maxilla-mandibular vertical dimension in deep bite subjects. Cranio; 22(4):268-275.
10. Hoa LM, Huan DN, Thao NH, Hoan NT, Khoa TQ, Tam NH and Vo Van Toi (2010) Relationship between dental occlusion and arm strength. IFMBE Proceedings Series, Springer, Vol. 27, pp. 265-268.
11. Wolf M, Morren G, Haensse D, Karen T, Wolf U, Fauchere JC, Bucher HU. Near infrared spectroscopy to study the brain: an overview. Opto-Electronics Review; 16(4): 413-419.
12. Takeda T, Shibusawa M, Sudal O, Nakajima K, Ishigami K, Sakatani K. (2010) Activity in the premotor area related to bite force control: a functional near-infrared spectroscopy study. Adv Exp Med Biol; 662:479-484.
13. Shibusawa M, Takeda T, Nakajima K, Handa J, Sekiguchi S, Ishigami K, Sakatani K. (2010) Functional near-infrared spectroscopy study on primary motor and somatosensory cortex response to biting and finger clenching. Adv Exp Med Biol; 662:485-490.
14. Onozuka M, Fujita M, Watanabe K, Hirano Y, Niwa M, Nishiyama K, Saito S. (2003) Age-related changes in brain regional activity during chewing: a functional magnetic resonance imaging study. Journal of Dental Research; 82(8): 657-660.
15. Evarts (1968) Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophys; 31:14-27.
16. Holdefer RN, Miller LE (2002) Primary motor cortical neurons encode functional muscle synergies. Exp Brain Res; 146:233-243.
17. Morrow MM, Miller LE (2003) Prediction of muscle activity by populations of sequentially recorded primary motor cortex neurons. J Neurophys; 89:2279-2288.
18. Caminiti R, Johnson PB, Urbano A (1990) Making arm movements within different parts of space: dynamic aspects in the primate motor cortex. J Neurosci; 10: 2039-2058.
19. Georgopoulos AP, Schwartz AB, Kettner RE (1986) Neuronal population coding of movement direction. Science; 233:1416-1419.
20. Georgopoulos AP, Lurito JT, Petrides M, Schwartz AB, Massey JT (1989) Mental rotation of the neuronal population vector. Science; 243: 234-236.
21. Kakei S, Hoffman D, Strick P (1999) Muscle and movement representations in the primary motor cortex. Science; 285: 2136-2139.
22. Kalaska JF, Cohen DA, Hyde ML, Prud'homme MA (1989)
Comparison of movement direction-related versus load direction-related activity in primate motor cortex, using a two-dimensional reaching task. J Neurosci; 9:2080-2102.
23. Reina GA, Moran DW, Schwartz AB (2001) On the relationship between joint angular velocity and motor cortical discharge during reaching. J Neurophys; 85: 2576-2589.
24. Scott SH, Kalaska JF (1995) Changes in motor cortex activity during reaching movements with similar hand paths but different arm postures. J Neurophys; 73: 2563-2567.
25. Scott SH, Kalaska JF (1997) Reaching movements with similar hand paths but different arm orientations. I. Activity of individual cells in motor cortex. J Neurophys; 77:826-852.
26. Sergio LE, Kalaska JF (2003) Systematic changes in motor cortex cell activity with arm posture during directional isometric force generation. J Neurophys; 89: 212-228.
Author: Vo Van Toi
Institute: Biomedical Engineering Department, Ho Chi Minh City International University
Street: Quarter 6, Linh Trung, Thu Duc Dist.
City: Ho Chi Minh City
Country: Viet Nam
Email:
[email protected]
IFMBE Proceedings Vol. 35
Time-Dependent EMG Power Spectrum Parameters of Biceps Brachii during Cyclic Dynamic Contraction S. Thongpanja, A. Phinyomark, P. Phukpattaranont, and C. Limsakul Department of Electrical Engineering, Prince of Songkla University, Songkhla, Thailand
[email protected],
[email protected],
[email protected],
[email protected]
Abstract— Mean frequency (MNF) and median frequency (MDF) are widely used parameters of the EMG power spectrum for determining muscle fatigue. A major problem with these parameters is the non-linear relationship between muscle load and feature value, especially in large muscles and during cyclic dynamic contraction. To analyze the EMG signal with respect to both muscle fatigue and muscle load, we propose time-dependent versions of MNF and MDF (TD-MNF and TD-MDF) computed over time-sequential data. The surface EMG signals used in this study were acquired from the biceps brachii muscle during round-trip dynamic contraction. TD-MNF and TD-MDF were then calculated and compared with the standard MNF and MDF features, which were computed from the whole data. Three statistical parameters, namely the mean, median, and variance, were applied to a selected efficient range of TD-MNF and TD-MDF so that the features are easy to observe and use in applications. The results show that the mean of the selected TD-MNF has a more linear relationship with muscle load than the other parameters and differs significantly (p < 0.005) between loading conditions. In addition, a consistent pattern of TD-MNF and TD-MDF was found for each trial and each subject that does not appear in the traditional MNF and MDF features. The TD-MNF and TD-MDF methods performed best when overlapping consecutive windows were used with a 512-sample window size and a 64-sample window increment. In conclusion, the mean of the selected TD-MNF band can be used as an index of both muscle load and muscle fatigue. Keywords— Mean frequency, Median frequency, Power spectrum, Electromyography (EMG) signal, Isotonic contraction.
I. INTRODUCTION The surface electromyography (EMG) signal is one of the most significant electro-physiological signals and is commonly used to measure the electrical activity of muscles. Many phenomena can be detected and analyzed from EMG signals, such as muscle load, muscle activity, muscle fatigue, and muscle disease. To detect them, a feature extraction method is usually applied: it retains the important EMG information and discards the unwanted parts. Root mean square, mean absolute value, integrated EMG, and zero crossings per second are commonly used quantitative methods for detecting muscle load; these are time-domain features. However, the performance of time-domain features in detecting muscle fatigue is a drawback. Effective EMG features for detecting muscle fatigue are usually based on frequency information, with parameters of the EMG power spectrum used as features. Two are ordinarily used, namely the mean frequency (MNF) and the median frequency (MDF). However, the use of MNF and MDF to determine muscle force has produced contradictory findings [1-8]. Some studies [1-2] showed that the MNF and MDF values increase with force level, while others [3-4] found that they decrease with force level. Moreover, in some experiments the MNF and MDF values were independent of the contraction level [5-6]. All of these studies analyzed EMG signals recorded from the biceps brachii (BB) muscle during static contraction. In addition, a non-linear relationship between muscle load and the MNF and MDF features has been observed in dynamic muscle contraction [7] and in large muscles [8] as well. In this study, we propose a modification of the traditional MNF and MDF methods to resolve the non-linear relationship between feature values and muscle load. Instead of applying a single fast Fourier transform (FFT) to the whole signal as in the standard features, we use consecutive FFTs. As a result, the proposed methods can be used to detect both muscle fatigue and muscle load without resorting to multiple features.
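As a sketch of the time-domain features named above (illustrative function and variable names, not any particular study's code; definitions vary slightly between papers):

```python
import numpy as np

def time_domain_features(x, fs):
    """RMS, MAV, integrated EMG, and zero crossings per second (a sketch;
    amplitude thresholds for zero crossings vary between studies)."""
    rms = np.sqrt(np.mean(x ** 2))                # root mean square
    mav = np.mean(np.abs(x))                      # mean absolute value
    iemg = np.sum(np.abs(x))                      # integrated EMG
    zc = np.count_nonzero(np.diff(np.sign(x)))    # sign changes = zero crossings
    return rms, mav, iemg, zc * fs / len(x)       # last value: crossings per second

# Synthetic check: a 50 Hz sine sampled at 1024 Hz for 1 s
fs = 1024
t = np.arange(fs) / fs
rms, mav, iemg, zps = time_domain_features(np.sin(2 * np.pi * 50 * t), fs)
```

For the pure sine, the RMS is close to 1/√2, the MAV close to 2/π, and a 50 Hz tone crosses zero about 100 times per second.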
II. METHODS AND MATERIALS A. Time-Dependent MNF and MDF Methods The traditional MNF and MDF features of the EMG power spectrum are defined as follows. MNF is the average frequency, calculated as the sum of the product of the EMG power spectrum and frequency divided by the total sum of the spectrum intensity. It can be expressed as:
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 233–236, 2011. www.springerlink.com
MNF = \sum_{j=1}^{M} f_j P_j \bigg/ \sum_{j=1}^{M} P_j    (1)
where f_j is the frequency of the EMG power spectrum at frequency bin j, P_j is the EMG power spectrum at frequency bin j, and M is the number of frequency bins. MDF is the frequency at which the EMG power spectrum is divided into two regions of equal integrated power. It can be expressed as:

\sum_{j=1}^{MDF} P_j = \sum_{j=MDF}^{M} P_j = \frac{1}{2} \sum_{j=1}^{M} P_j    (2)
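Eqs. (1) and (2) can be computed directly from a discrete power spectrum. A minimal sketch (illustrative names; MDF is taken here as the first bin at which cumulative power reaches half the total):

```python
import numpy as np

def mnf_mdf(power, freqs):
    """Mean and median frequency of an EMG power spectrum (Eqs. 1 and 2)."""
    mnf = np.sum(freqs * power) / np.sum(power)
    cumulative = np.cumsum(power)
    # MDF: first frequency where cumulative power reaches half the total
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return mnf, mdf

# Example: power concentrated equally at 50 and 150 Hz
freqs = np.arange(0, 512.0)          # 1 Hz bins
power = np.zeros_like(freqs)
power[50] = power[150] = 1.0
mnf, mdf = mnf_mdf(power, freqs)     # MNF = 100 Hz; MDF lands at the 50 Hz bin
```

With equal power in two bins, any frequency between them satisfies Eq. (2); this implementation returns the lowest such bin.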
In order to analyze the EMG power spectrum with respect to both muscle fatigue and muscle load, we investigated the TD-MNF and TD-MDF methods. The proposed methods use a sliding-window technique to extract features instead of using the whole data, which suits the non-stationary EMG signal recorded during cyclic dynamic contraction. The concept of consecutive FFTs is shown in Fig. 1. In this figure, a window size of 512 samples is used with a window increment of 64 samples. A series of MNF and MDF features is derived from the successive FFTs over the whole length of the EMG data; a total of 9 segment FFTs per second is obtained in this case. In this study, the performance of the TD-MNF and TD-MDF methods in determining muscle load and muscle fatigue was proposed and evaluated. First, the effects of the window size and window increment were optimized. The window size was set to 512 samples in both the traditional and time-dependent methods, while six window increments were evaluated: 32 samples (6.25%), 64 samples (12.5%), 128 samples (25%), 256 samples (50%), 384 samples (75%), and 512 samples (100%, disjoint windows). A suitable range of the TD-MNF and TD-MDF series was then observed and selected. Second, three statistical parameters were calculated over that range. Analysis of variance (ANOVA) was performed on these parameters to measure the relationship between parameter value and muscle load and to assess the significance of differences between loading conditions. B. Data Acquisition and Experiments EMG signals were recorded by two surface electrodes on the BB muscle, with a reference electrode placed on the wrist as shown in Fig. 2(a). The EMG signals were acquired from four normal subjects with different loads (2, 4, 6, and 8 kg).
The subjects were asked to perform a round-trip flexion-extension contraction of 3 seconds over the range of 0-180°, as shown in Fig. 2(b). All EMG recordings were carried out using a Mobi6-6b (TMS International BV, Netherlands). The signals were sampled at a rate of 1024 Hz with 24-bit A/D resolution and were band-limited from 20 to 500 Hz; an amplifier gain of 19.5 was used.
Fig. 1 Concept of consecutive overlapping FFTs with a window size of 512 samples and a window increment of 64 samples
Fig. 2 (a) Electrode locations: (left) BB muscle, (right) common ground. (b) Apparatus used to apply constant force at various joint angles: (I) flexion, (II) extension
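A minimal sketch of the consecutive-FFT idea described in Section II.A: slide a 512-sample window in 64-sample steps over the signal and compute the MNF of each windowed power spectrum (synthetic data; names are illustrative, not the authors' implementation):

```python
import numpy as np

def td_mnf(x, fs, win=512, step=64):
    """Time series of MNF values from consecutive overlapping FFTs.
    win/step follow the 512/64-sample setting used in this study."""
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    series = []
    for start in range(0, len(x) - win + 1, step):
        power = np.abs(np.fft.rfft(x[start:start + win])) ** 2
        series.append(np.sum(freqs * power) / np.sum(power))
    return np.array(series)

# Synthetic 80 Hz tone, 3 s at 1024 Hz (the study's sampling rate)
fs = 1024
t = np.arange(3 * fs) / fs
series = td_mnf(np.sin(2 * np.pi * 80 * t), fs)   # each windowed MNF is near 80 Hz
```

Three seconds of data yield (3072 − 512)/64 + 1 = 41 windows; for a stationary tone every windowed MNF sits at the tone frequency, whereas a real cyclic contraction produces the time-varying patterns discussed in the Results.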
III. RESULTS AND DISCUSSION As noted in the introduction, many discrepancies exist among reported findings on the relationship between the MDF (and MNF) parameters and the muscle loading level. This was confirmed in our results, as shown in Figs. 3(a) and 3(b), respectively. The average MDF and MNF values show that the relationship differs between subjects. For instance, in Fig. 3(b), the MNF values are independent of the muscle loading level in subject 1, increase with the loading level in subject 2, and decrease with the loading level in subjects 3 and 4. A similar trend can be observed for the MDF values, as shown in Fig. 3(a). Consequently, the TD-MDF and TD-MNF were calculated, as shown in Figs. 4(a) and 4(b), respectively. From the experimental results, we found that the TD-MDF and TD-MNF of EMG data recorded during cyclic dynamic contraction changed dynamically with time, and that there were consistent patterns in the TD-MDFs and TD-MNFs for each trial. Selecting a suitable range of the TD-MDF and TD-MNF feature vectors offers better separation between muscle contraction levels. A consistent relationship between the proposed features and the muscle loading conditions was found at some consecutive-window positions, as shown in Figs. 4(a) and 4(b). The solid-line boxes mark regions of features with an inverse relationship between feature values and loading conditions (a decrease of the feature values with an increase of the force level) for each subject; such a relationship is not found with the traditional MDF and MNF features presented in Figs. 3(a) and 3(b). Mostly, these solid-line boxes are located in the middle of the EMG sequences. To confirm that this finding was not an effect of muscle fatigue, linear regression analysis was employed to observe fatigue behavior. For instance, the series of MDF values from subject 3 at different loading levels are shown in Fig. 5; they confirm that the decreased MDF values are not caused by muscle fatigue. Moreover, in some subjects a positive relationship was found at the beginning and end of the sequence, as marked by the dashed-line boxes in Figs. 4(a) and 4(b). In this study, we selected the middle TD-MNF and TD-MDF ranges to calculate the statistical parameters. These parameters can be seen as a dimensionality reduction step that makes the features easy to use in applications. For automatic assessment, we used the same range
Fig. 4 (a) TD-MDF (b) TD-MNF of BB during cyclic dynamic contraction with 512-sample window size and 64-sample window increment
Fig. 5 Series of the MDF feature of subject 3 at different loading conditions, showing that the muscle is not fatigued
Fig. 3 (a) Mean MDF of EMG for different loads from four subjects (b) Mean MNF of EMG for different loads from four subjects
of TD-MNF and TD-MDF for each subject: windows 19 to 22 (about 10% of the whole window sequence). For the first optimization issue, the results show that the 64-sample window increment gave better separation ability than the other consecutive-window settings, including the disjoint windows (512-sample increment). Moreover, with disjoint windows no consistent relationship was found for each subject, as was previously the case with the traditional methods. Second, the statistical parameters calculated from the selected TD-MNF (and TD-MDF) were evaluated. The p-values obtained from the ANOVA analysis are reported in Table 1. Significant differences (p < 0.005) were found between feature values for different loading conditions. The performance of TD-MNF is better than that of TD-MDF: the TD-MNF p-values are much lower than those of TD-MDF. Moreover, the mean shows better separation performance than the other two parameters, the median and the variance. Figs. 6(a) and 6(b) show the relationship between the mean TD-MNF (and mean TD-MDF) parameter and the muscle loading level, respectively. There was a consistent relationship of mean TD-MDF and mean TD-MNF for each trial and each subject that was not found with the traditional methods. Thus, we conclude that the proposed method can be used as an index for detecting both muscle load and muscle fatigue.
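The summarization and ANOVA step above can be sketched as follows. The TD-MNF values, the decreasing trend, and the random seed here are invented purely for illustration; `scipy.stats.f_oneway` is assumed available:

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical TD-MNF values for the selected windows (19-22) of one subject
# under the four loads; data and trend are invented for illustration only.
rng = np.random.default_rng(0)
td_mnf_by_load = {load: 90.0 - 2.0 * load + rng.normal(0, 0.5, size=4)
                  for load in (2, 4, 6, 8)}

# Mean, median, and variance over the selected range, per loading condition
summaries = {load: (v.mean(), np.median(v), v.var())
             for load, v in td_mnf_by_load.items()}

# One-way ANOVA across the four loading conditions
F, p = f_oneway(*td_mnf_by_load.values())   # p falls far below 0.005 here
```

A small p-value indicates that the summarized feature separates the loading conditions, mirroring the Table 1 comparison of mean, median, and variance.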
IV. CONCLUSIONS The idea of using consecutive FFTs has been proposed for extracting MNF and MDF features that can detect both muscle load and muscle fatigue. The experiments demonstrated the improved separation and reliability of the proposed method. Although the processing cost increases, no additional hardware is required. In future work, further efficient statistical parameters should be evaluated over the selected suitable ranges.
ACKNOWLEDGMENT This work was supported in part by the Thailand Research Fund (TRF) through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0110/2550), and in part by the NECTEC-PSU Center of Excellence for Rehabilitation Engineering, Faculty of Engineering, Prince of Songkla University.
REFERENCES
Fig. 6 (a) Mean of selected TD-MDF (b) Mean of selected TD-MNF of BB during cyclic dynamic contraction with 512-sample window size and 64-sample window increment

Table 1 ANOVA analysis (average p-value from 4 subjects) of three statistical parameters extracted from selected TD-MDF and TD-MNF (window size 512 samples, window increment 64 samples)

Feature extraction   TD-MNF    TD-MDF
Mean                 0.00305   0.01768
Median               0.00340   0.02387
Variance             0.09327   0.27870
1. Muro M, Nagata A, Murakami K et al. (1982) SEMG power spectral analysis of neuromuscular disorders during isometric and isotonic contraction. Am J Phys Med 61:244–254
2. Gander RE, Hudgins RE (1985) Power spectral density of the surface myoelectric signal of the biceps brachii as a function of static load. Electroen Clin Neuro 25:469–478
3. Kaplanis PA, Pattichis CS, Hadjileontiadis LJ et al. (2009) Surface EMG analysis on normal subjects based on isometric voluntary contraction. J Electromyogr Kines 19:157–171
4. Rainoldi A, Galardi G, Maderna L et al. (1999) Repeatability of surface EMG variables during voluntary isometric contraction of the biceps brachii muscle. J Electromyogr Kines 9:105–119
5. Farina D, Fosci M, Merletti R (2002) Motor unit recruitment strategies investigated by surface EMG variables. J Appl Physiol 92:235–247
6. Hagberg M, Ericsson B-E (1982) Myoelectric power spectrum dependence on muscular contraction level of elbow flexors. Eur J Physiol 48:147–156
7. Potvin JR (1997) Effects of muscle kinematics on surface EMG amplitude and frequency during fatiguing dynamic contractions. J Appl Physiol 82:144–151
8. Zhou P, Rymer WZ (2004) Factors governing the form of the relation between muscle force and the EMG: A simulation study. J Neurophysiol 92:2878–2886

Address of the corresponding author:
Author: Sirinee Thongpanja, Angkoon Phinyomark
Institute: Biomedical Engineering and Assistive Technology Laboratory, Department of Electrical Engineering, Faculty of Engineering, Prince of Songkla University
Street: 110/5 Kanjanavanid Road, Kho Hong, Hat Yai
City: Songkhla
Country: Thailand
Email:
[email protected],
[email protected] Website: http://saturn.ee.psu.ac.th/~beatlab/
Transcutaneous Viscoelastic Properties of Brain through Cranial Defects H. Nagai, D. Takada, M. Daisu, K. Sugimoto, T. Miyazaki, and Y. Akiyama Department of Neurosurgery, Shimane University Faculty of Medicine, Izumo, Japan
Abstract— Decompressive craniectomy is a procedure necessary for preventing brain herniation due to brain edema in patients with acute intracranial hypertension. We transcutaneously palpated brain stiffness through cranial defects in order to evaluate brain edema of such patients for one month until cranioplasty. However, there are no instruments designed to quantitatively evaluate brain stiffness in the decompressive cranial defect. This study attempted to evaluate brain stiffness under normal conditions by using a palpation-like tactile resonance sensor that measured penetrating depth (Dp), pressure (Pr), and change in frequency (dF). For this purpose, we studied seven patients, who had undergone unilateral decompressive craniectomy, and measured the viscoelastic properties of cranial defects using the sensor. The patients meeting our inclusion criteria underwent haptic measurements and head CT scans within three days before cranioplasty. We compared the data sets obtained from the seven subjects with clinical factors including physical data, neurological parameters, and CT images. We found that under stable conditions, the clinical factors did not have an impact on brain stiffness. Then, we selected data sets from six patients with the same maximum depth (h = 3.0 mm) and treated them as one group. In addition, we analyzed the hysteresis graphs obtained. The estimated mean values of the transcutaneous viscoelastic properties in the cranial defects were as follows: stiffness = 28.37 gf/mm and shear modulus = 1.94 kPa. The correlation between Pr and dF indicated linear elasticity in a loading state (Pr = 0.853*dF + 2.729, R2 = 0.997). Keywords— brain, decompressive craniotomy, stiffness, tactile, viscoelasticity.
I. INTRODUCTION Using intraoperative digital brain palpation to assess brain stiffness, neurosurgeons can obtain significant information for evaluating brain edema and tumor location. Postoperative palpation through the window of a cranial defect is a routine part of the physical examination of patients who have undergone decompressive craniectomy. Knowledge of brain stiffness is important for neurosurgical management. However, in clinical practice, it is difficult to quantitatively evaluate brain stiffness. The stiffness of living tissues is determined based on biomechanics of the viscoelastic properties of the tissues. Previous studies have explained viscoelasticity of brain
specimens by using phantoms, animal or human cadavers, or living animal brains [1–5]. Although brain stiffness has been clinically evaluated by elastography, which includes data obtained by ultrasound and magnetic resonance imaging [6–8], most studies on the viscoelastic properties of brain still employ stress–strain sensors and indentation methods [9–12]. Recently, a tactile resonance sensor with a phase-shift circuit system has been applied for the measurement of the viscoelastic properties of living tissues. This type of sensor, with palpation-like behavior, has a utilitarian design for easy and convenient use [13–15]. In the present study, as a first step toward the evaluation of human brain stiffness using a tactile resonance sensor, we determined the standard values of the transcutaneous viscoelastic properties near cranial defects under stable conditions before cranioplasty.
II. MATERIALS AND METHODS A. Subjects This study was approved by the ethics committee of Shimane University Hospital (IRB #475). The background populations were inpatients admitted for unilateral decompressive craniectomy (DC), which prevents brain herniation due to acute intracranial hypertension, at our institution between 2006 and 2010. In this study, seven subjects underwent transcutaneous measurements of viscoelastic properties through cranial defects by a tactile resonance sensor and CT scans within three days before cranioplasty, which was most often done around one month after DC. Their transcutaneous viscoelastic properties via the cranial defects were determined under normal conditions. B. Tactile Resonance Sensor The tactile resonance sensor employed was Venustron®II, ver. 2.5 (Axiom Inc., Fukushima, Japan) (Fig. 1). The principle of this sensor is based on the fact that each object has a characteristic resonance frequency that depends on its stiffness. The difference between the frequencies under nontouching and touching conditions depends on the stiffness of the object, which can be estimated by monitoring change
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 237–240, 2011. www.springerlink.com
H. Nagai et al.
in the frequency (dF) [13–15]. The control system of the device enables the sensor probe to automatically reciprocate via a stepping motor. This sensor simultaneously measures both the pressure (Pr: gf) and change in frequency (dF: Hz) for a given depth (Dp: mm). dF is the frequency change detected by the tactile resonance of the phase-shift circuit system and Pr is the stress detected by an indentation method. The depth of indentation and dF were simultaneously measured by the counterweight and depression method [13– 15]. This implies that the stiffness (stress/deformity: gf/mm) is calculated as the reference of viscoelasticity for dF. When the sensor is pressed on and released from an object at a constant rate, it can determine dF at certain intervals. C. Measurement Protocol The patients were kept supine in a relaxed condition. The probe position of the handheld device was adjusted to come in contact with skin surface using a guide attachment set on the scalp with three-point fixation. Measurements were conducted around the center of the cranial window, distant from the bone edge. The measurement protocol involved a loading-unloading cycle. The interval of each cycle was 1 min and at least three cycles were employed. The Pr and dF values obtained at each depth were individually averaged. The movement of the probe was set within a safe range. The settings of the tactile resonance sensor remained constant at the following values: probe frequency = 57 kHz, contact pressure = 1.0 gf/mm2, contact speed = 1.5 mm/s, diameter of the probe = 5.0 mm, and sphere radius = 2.5 mm. The probe was applied to a maximum depth of 3.0 mm, except in case #1 (4.0 mm), and then released from the object.
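The stiffness readout described above (stress/deformity, gf/mm) amounts to a per-depth division of the averaged pressure by the indentation depth. A trivial sketch with invented loading-phase samples:

```python
import numpy as np

# Stiffness as stress/deformity (gf/mm) at each sampled depth; the loading
# samples below are invented purely to illustrate the arithmetic.
depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])            # Dp (mm)
pressure = np.array([20.0, 35.0, 50.0, 62.0, 74.0, 85.0])   # averaged Pr (gf)
stiffness = pressure / depth                                # gf/mm at each depth
```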
Fig. 1 Photograph of tactile resonance sensor
D. Factors Used for Comparison with Stiffness We considered that the factors affecting the stiffness were the temperature and thickness of the object. The following factors were compared with the stiffness parameters: the body temperature and consciousness level on the day of measurement, and the thickness parameters measured by CT scan within three days before cranioplasty, namely the midline shift (CT shift), width of the cranial window (CT width), distance of scalp protrusion (CT scalp), distance of dural protrusion (CT dura), and scalp thickness.

III. RESULTS A. Young's Modulus Data of the Seven Subjects We plotted scatter graphs for Dp, Pr, and dF from the mean values measured in the seven patients who fulfilled the study criteria. The hysteresis curve of indentation loading/unloading against depth was then obtained. The value for case #1 (depth = 4.0 mm) was lower than that of the other six cases, in which the depth was 3.0 mm, in both the Pr-depth graph and the stiffness-depth graph. This suggested that stress was proportional to the maximum Dp, whereas dF was independent of it. Next, we estimated Young's modulus (E) on the basis of indentation theory. Young's modulus is related to the slope of the Pr-h^{3/2} graph by the following equation:

E = \frac{3(1-\nu^2)}{4\sqrt{R}} \cdot \frac{P}{\delta^{3/2}} \approx 0.356 \times \text{slope}    (1)

With the assumption that δ = h, ν = 0.5, and R = 2.5 mm in equation (1), the slope of the Pr-h^{3/2} graph represents Young's modulus in the loaded state (Fig. 2).

Fig. 2 The Pr-h^{3/2} loading graph. The slope of the graph represents Young's modulus in the loaded state

B. Results of Averaged Graphs for Six Subjects In this study, depth was common to both the Pr and dF values. Because we intended to analyze the data for both Pr
and dF in relation to depth, we selected the Pr and stiffness data from the six patients with a 3.0-mm depth, omitting case #1, and treated them as one group. We plotted averaged scatter graphs, including the averaged Dp-Pr graph, the averaged Dp-dF graph, and the averaged Dp-stiffness graph, the last plotted in relation to stiffness defined as Pr/depth (gf/mm). The averaged Dp-Pr graph showed a linear correlation in the loaded state but a nonlinear one in the unloaded state; we assumed that a "dash-pot effect" was apparent in the unloaded state. The averaged Dp-dF graph exhibited a nonlinear correlation in both the loaded and unloaded states; dF was still 40 Hz at the end of the protocol. The averaged Dp-stiffness graph also exhibited a nonlinear correlation in both states and approximated a logarithmic equation in the loaded state: the stiffness at first increased rapidly, subsequently reached a plateau, and converged at 28.37 ± 7.09 gf/mm. C. Results for the Shear Modulus We analyzed the relationship between Pr and stiffness on the basis of dF. We plotted dF-Pr and stiffness-dF scatter graphs using the data set obtained from the six patients. The averaged dF-Pr scatter graph for the six subjects is shown in Fig. 3; it approximated a highly linear correlation in the loaded state but a nonlinear correlation in the unloaded state. In contrast, the averaged stiffness-dF scatter graph for the six subjects (Fig. 4) approximated a nonlinear exponential correlation in the loaded state and a linear correlation in the unloaded state.
Fig. 3 The averaged dF-Pr scatter graph
Fig. 4 The averaged stiffness-dF scatter graph
Because the averaged Dp-Pr scatter graph for the six subjects exhibits a linear correlation under loaded conditions, we assumed linear elasticity. We calculated the shear modulus (G) on the basis of Lee and Radok's correspondence principle [16]. The value of G converged to 1.94 ± 0.49 kPa in the shear modulus-depth graph.

G = \frac{3P}{16 h \sqrt{R h}}    (2)
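A numerical cross-check of Eqs. (1) and (2) under the stated assumptions (ν = 0.5, R = 2.5 mm; the loading data and modulus value below are synthetic). With ν = 0.5, the relation E = 2G(1 + ν) = 3G makes the two equations agree exactly:

```python
import numpy as np

nu, R = 0.5, 2.5
factor = 3 * (1 - nu ** 2) / (4 * np.sqrt(R))   # ~0.356, as quoted in Eq. (1)

# Eq. (1): Young's modulus from the slope of a synthetic Pr-h^{3/2} loading curve
E_true = 5.0
h = np.linspace(0.5, 3.0, 10)                   # indentation depth (mm)
P = (E_true / factor) * h ** 1.5                # ideal Hertzian loading data (gf)
slope = np.polyfit(h ** 1.5, P, 1)[0]           # slope of the P vs h^{3/2} line
E_est = factor * slope                          # recovers E_true

# Eq. (2): shear modulus G = 3P / (16 h sqrt(R h)); multiplying by 3 should
# reproduce the Young's modulus above, since E = 3G for nu = 0.5
G = 3.0 * P[-1] / (16.0 * h[-1] * np.sqrt(R * h[-1]))
E_from_G = 3.0 * G
```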
D. Comparison with Clinical Factors We compared the data set obtained from the seven subjects with clinical factors including body temperature, consciousness level, and CT imaging parameters. The haptic measurement parameters of the seven subjects were the maximum values of Pr, dF, and stiffness; Young’s elastic modulus upon loading; and the value of G. The clinical factors included the day of measurement from onset of the study; body temperature; Japan Coma Scale; CT shift, width, scalp, and dura; and skin thickness, which was calculated as CT scalp minus CT dura. Spearman’s test revealed no significant correlations. Therefore, it seems that under physiological conditions, the clinical factors do not affect the transcutaneous local viscoelasticity before cranioplasty.
IV. DISCUSSION A. Viscoelasticity under Stable Conditions Our goal was to measure the viscoelastic properties of the human brain under practical conditions. Therefore, we used the tactile resonance sensor with a stress-strain function that simulated manual palpation. In this study, the stiffness was 2.837 ± 0.709 N, Young's modulus was E = 5.08 ± 1.31, and the shear modulus was G = 1.94 ± 0.49 for a depth of 3.0 mm. Poisson's ratio (ν) was calculated as 0.31-0.62 using the equation E = 2G(1 + ν). These values are approximately equal to those previously reported for the viscoelastic properties of the brain in vivo [1-7]. The results of indentation fitted the Maxwell model expressed by the equation G = Ge − G1 exp(−t/τ), where Ge is the instantaneous modulus in shear, G1 is the relaxation in the shear modulus, t is time, and τ is the relaxation time. Thus, G = 1.94 + 3.3 exp(−t/0.5) under the assumptions that Ge = 1.94, G1 = 3.3, t = h/1.5, and τ = 0.5. The results obtained in this study by an indentation method reflected those of previous models [9-12]. However, this measurement method evaluated brain viscoelasticity through multiple structural layers, including the skin, subcutaneous tissues, muscle fascia, and dura. Moreover, some assumptions had to be made to approximate the expression for elasticity.
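The Poisson's ratio calculation from E = 2G(1 + ν) can be checked with the central values reported above (a sketch; units follow the text):

```python
# Back out Poisson's ratio from E = 2G(1 + nu), as used in this Discussion.
def poisson_ratio(E, G):
    return E / (2.0 * G) - 1.0

nu = poisson_ratio(5.08, 1.94)   # ~0.31, the lower end of the reported 0.31-0.62 range
```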
B. Correlation between dF and Pr, dF and Stiffness The scatter graph of dF-Pr shows a linear correlation between dF and Pr in the loaded state (Fig. 3). Therefore, the Pr-value was approximated by the dF-value as Pr = 0.853dF + 2.729. On the other hand, in the unloaded state, it shows a nonlinear correlation and the dF value still continued to exhibit a frequency change of 40 Hz at the release point, that is, when the probe pressure against the object was zero. In this context, the contact area, surface conditions, and bulk flow of water content were considered relevant [17,18]. In this study, the correlation between dF and stiffness was demonstrated by an exponentially fitted curve (Fig. 4). However, our previous study obtained a semilogarithmic curve for a gelatin phantom with brain-like behavior [19]. This discrepancy was due to the differences in the measured objects. Our previous study involved the direct measurement of gelatin with an even structure. On the other hand, in the present study, the measured object had uneven and multiple-layer structures and was investigated via a decompressive cranial window. Although this study was limited because of the obstructions due to the multiple structural layers including skin, subcutaneous tissues, muscle fascia, and dura, it was shown that the viscoelastic properties of the brain could be evaluated by tactile resonance instead of stress–strain methods.
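As an illustration of the loaded-state fit, a least-squares line recovers the reported coefficients from noiseless synthetic points (the data here are generated from the published relation, not measured):

```python
import numpy as np

# Recover the loaded-state relation Pr = 0.853*dF + 2.729 reported in this
# paper from noiseless synthetic points via least squares (illustration only).
dF = np.linspace(0.0, 40.0, 20)
Pr = 0.853 * dF + 2.729
slope, intercept = np.polyfit(dF, Pr, 1)

# Coefficient of determination (R^2) of the fit
Pr_hat = slope * dF + intercept
r2 = 1.0 - np.sum((Pr - Pr_hat) ** 2) / np.sum((Pr - Pr.mean()) ** 2)
```

For noiseless data the fit is exact, matching the R² = 0.997 reported for the measured loading curves.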
V. CONCLUSIONS In conclusion, the measurement of transcutaneous viscoelasticity of the brain via a cranial defect yielded the following values: stiffness = 28.37 ± 7.09 gf/mm, E = 5.08 ± 1.31, and G = 1.94 ± 0.49. Under stable conditions before cranioplasty, clinical factors were found to have no impact on brain stiffness. The correlation between Pr and dF demonstrated linear elasticity in the loaded state (Pr = 0.853dF + 2.729, R2 = 0.997).
ACKNOWLEDGMENT This study received financial support from the alumni society of Shimane University Faculty of Medicine.
Author: Hidemasa Nagai Institute: Department of Neurosurgery, Shimane University Faculty of Medicine Street: Enya 89-1 City: Izumo city, Shimane 693-8501 Country: Japan Email:
[email protected]
IFMBE Proceedings Vol. 35
Two Practical Strategies for Developing Resultant Muscle Torque Production Using Elastic Resistance Device
S.J. Aboodarda1, A. Yusof1, N.A. Abu Osman2, and F. Ibrahim2
1 Sports Center, University of Malaya, Malaysia
2 Dept of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia
Abstract— This investigation was conducted to quantify and compare the Resultant Muscle Torque (RMT) pattern in the quadriceps muscle during training with an elastic resistance device. Sixteen subjects completed 8-RM seated knee extensions with elastic tubing at its original length (E0) and with elastic tubing shortened by 30% of its original length (E30). Every repetition was partitioned into 6 phases (3 concentric and 3 eccentric), and the mean external force and acceleration were calculated and synchronized for every phase of the lift. The magnitude of the RMT was calculated with the equation recommended by Enoka (2002) for dynamic motion. The RMT values across the exercise phases were significantly higher for E30 than for E0 (all P < 0.05), except in the 3rd and 4th phases of contraction, where no significant difference was observed. The increases in RMT in phase 1 (2.6%) and phase 6 (3.98%) for E30 compared with E0 support the effectiveness of the two applied strategies for developing muscle torque production in ER exercises.
Keywords— strength training, variable external resistance training, elastic tubing, rehabilitation.
I. INTRODUCTION
Elastic resistance exercise (ER) is well established as an effective mode of training in rehabilitation and fitness settings [1, 2]. Numerous studies have evaluated the clinical features of ER and recommended it for patients and older adults for strength and pain impairment, balance and proprioception enhancement, and increasing range of motion after trauma [3-7]. Furthermore, a few investigators have suggested ER for injury prevention and muscle strength development, rather than treatment, particularly among healthy individuals [8]. However, the evidence regarding the application of ER for improving muscle strength in healthy individuals is controversial [9]. Reportedly, ER cannot drive muscle to its maximal activation level because it provides inadequate external force [4, 7]. On this basis, the use of ER has been confined to the initial stages of rehabilitation protocols [1, 3]. In the current study, therefore, two strategies are proposed to enhance the magnitude of the elastic force. First, the initial length of the elastic material is reduced by 30% [9, 10]. Second, additional elastic bands are used in parallel with the current unit [7]. The aims of the first and second strategies were to enhance the force provided by the ER device at the beginning of the lifting motion and throughout the whole range of motion (ROM), respectively. We hypothesized that applying these strategies can improve muscle torque production during intensive knee extension exercise. The purpose of this study, therefore, was to quantify and compare Resultant Muscle Torque (RMT) production during 8-RM seated knee extension with two types of ER training.
II. MATERIALS AND METHOD
A. Experimental Approach to the Problem
All data were collected within one testing session. Subjects completed 8-RM seated knee extensions with elastic tubing at its original length (E0) and with elastic tubing shortened by 30% of its original length (E30). The external force, linear acceleration and range of motion data were collected and synchronized using 16-bit acquisition with an eight-channel TeleMyo™ 2400T G2 system (Noraxon, Scottsdale, Arizona, USA).
B. Subjects
Seven female (mean ± SD: 22.4 ± 4.7 years, 60.05 ± 6.17 kg, 158 ± 3 cm) and 9 male (24.0 ± 3.6 years, 78.14 ± 7.2 kg, 174 ± 7 cm) healthy volunteers were recruited for this study. None of the subjects had participated in any resistance training program in the past 12 months. This study was approved by the ethics committee of the Sports Center, University of Malaya, and all participants signed informed consent forms.
C. Instrumentation
Range of motion of the dominant knee was monitored using a 2D electrogoniometer (200 μV), and a two-dimensional
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 241–244, 2011. www.springerlink.com
accelerometer (10 g) was placed at the lateral side of the shank to measure linear acceleration of the shank and foot (Noraxon, Scottsdale, Arizona, USA). The Myoresearch-XP data acquisition package, Master Edition (Noraxon, Scottsdale, Arizona, USA) was used to collect data from all 8-RM trials, and the 1st, 5th and 8th repetitions were selected for data analysis. Various color codes of Thera-Band elastic tubing (Hygienic Corporation, Akron, OH) were used to provide elastic resistance. In order to avoid any biomechanical differences in subject position, movement pattern and range of motion, the Nautilus machine chair (Nautilus, Vancouver, WA) was used for the elastic resistance exercises as well. The original length of the elastic tubing was measured from the point where the tubing was anchored to the base of the Nautilus machine (underneath the seat) to a load cell (Noraxon, Scottsdale, Arizona, USA) connected to a custom-made leather shin pad of the elastic device.
D. Experimental Measurement
All data were collected within one testing session and the order of measurement was randomized across the two exercise modalities. Following warm-up, the electrogoniometer and accelerometer were strapped to the subject's dominant knee and ankle, respectively. Then, subjects were seated on the Nautilus leg extension chair. They completed 8-RM tests with the 2 modes of resistance exercise within 80º to 180º of knee extension. If a subject could lift more than 8 repetitions with the current resistance in the assigned range of motion, the next trial was executed with a heavier load, the next color code of elastic tubing, or an added tube, until the subject was no longer able to perform more than 8 repetitions. The 1st, 5th and 8th repetitions were divided into concentric and eccentric segments, and each segment was divided into 3 equal phases (6 phases in total: 3 concentric and 3 eccentric).
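As a minimal sketch of this partitioning step (function and argument names are our own illustrative assumptions, not the authors' software), the per-phase averaging of a sampled signal can be written as:

```python
import numpy as np

def phase_means(signal, con_start, con_end, ecc_end):
    """Mean signal value in each of the 6 phases of one repetition:
    the concentric segment [con_start, con_end) and the eccentric
    segment [con_end, ecc_end) are each split into 3 equal phases.

    signal : 1-D array of samples (e.g. external force in N, or
             acceleration in g), already synchronized.
    """
    means = []
    for lo, hi in ((con_start, con_end), (con_end, ecc_end)):
        edges = np.linspace(lo, hi, 4).astype(int)  # 3 equal sub-ranges
        for a, b in zip(edges[:-1], edges[1:]):
            means.append(float(signal[a:b].mean()))
    return means  # phases 1-3 concentric, 4-6 eccentric
```

In practice the segment boundaries would come from the electrogoniometer trace (knee angle increasing toward full extension for the concentric segment, decreasing for the eccentric segment).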
The acceleration (g) and the average external force (N) were calculated for each phase of movement. The magnitude of the Resultant Muscle Torque was then calculated according to Enoka [11].
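A numerical sketch of this torque calculation follows; the exact form of the dynamic center-of-mass term, the symbol interpretations and all numeric values are our reconstruction and illustrative assumptions, not the authors' code.

```python
G = 9.81  # gravitational acceleration (m/s^2)

def segment_inertia(mass, k):
    """I = M * K^2, with K the radius of gyration (m)."""
    return mass * k ** 2

def angular_acceleration(a_linear, r):
    """Convert a linear accelerometer reading (m/s^2) taken at
    distance r (m) from the axis of rotation to rad/s^2."""
    return a_linear / r

def resultant_muscle_torque(F, D1, mass, D2, I, alpha, ax, ay, dx, dy):
    """Resultant muscle torque (Nm): static load terms (external
    force F at moment arm D1, segment weight mass*G at moment arm D2)
    plus dynamic terms (I*alpha and a reconstructed center-of-mass
    acceleration moment mass*(ax*dy + ay*dx))."""
    static = F * D1 + mass * G * D2
    dynamic = I * alpha + mass * (ax * dy + ay * dx)
    return static + dynamic
```

For example, with an assumed shank-plus-foot mass of 4 kg and a radius of gyration of 0.3 m, `segment_inertia(4.0, 0.3)` gives the I fed into the dynamic term.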
τm − (F·D1 + m·D2) = I·α                      (1)

Where: τm = Resultant Muscle Torque, F = external force, D1 = distance from the point of applied force to the axis of rotation, m = mass of the segment × g, D2 = distance from the center of mass to the axis of rotation, a = acceleration measured by the accelerometer, dx = distance traveled by the center of mass along the x axis, and dy = distance traveled by the center of mass along the y axis. In this equation, the left-hand side represents the static value of the torque and the right-hand side quantifies the dynamic part. In the equation recommended by Enoka, however, the contribution of the moment of force associated with the movement of the center of mass along the X and Y axes is ignored on the dynamic (right-hand) side. Taking this movement into consideration results in the following formula:

τm − (F·D1 + m·D2) = I·α + M·(ax·dy + ay·dx)                      (2)

The masses of the shank and of the shank plus foot (M) were calculated using Zatsiorsky's table [12] and Winter [13], respectively. To determine D2, the perpendicular distance from the knee joint to the center of mass, the location of the Center of Mass (CoM) was determined using table C9.1 [13]. The moment of inertia (I) of the shank was calculated using the following equation recommended by Grimshaw et al. [14]:

I = M·K²

where I = moment of inertia of the segment about the axis of rotation, M = mass of the segment, and K = radius of gyration. Angular acceleration (α) was then determined by dividing the linear acceleration measured by the accelerometer (m/s²) by the distance from the accelerometer to the axis of rotation, converting it to angular form (rad/s²).
E. Statistical Analyses
Differences in RMT values were examined within the various phases (1 to 6), repetitions (1, 5 and 8) and modalities of exercise (E0 and E30) using a 6 × 3 × 2 Repeated Measures Analysis of Variance (ANOVA). Where significant results were obtained from the ANOVA, a series of paired-sample t-tests was used to compare analogous phases and repetitions between the modalities of exercise. Significance was defined as P < .05.
III. RESULTS
The data addressing RMT for the main effects (phases, repetitions and training modes) are listed in Table 1. Analysis of variance demonstrated a significant difference between the two modalities of exercise (p = .00) and among the six phases of contraction (p = .00). In addition, the interactions of the main effects were statistically significant for phases × training mode (p = .044) and for phases × repetitions × training
modes (p = .045). Total RMT production was significantly higher for E30 than for E0. Furthermore, examination of the RMT values across the exercise phases demonstrated higher values for E30 compared with E0 (all P < 0.05) across the whole range of motion, though no significant differences were observed in the 3rd and 4th phases of contraction.
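The post-hoc step described in the statistical analysis (paired-sample t-tests between analogous phases of E0 and E30) can be sketched as below; the example data are hypothetical, and in practice `scipy.stats.ttest_rel` would also supply the p-value.

```python
import math

def paired_t(x, y):
    """Paired-sample t statistic for two equal-length measurement
    lists (e.g. per-subject RMT for one phase under E30 and E0)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-subject RMT (Nm) for one phase:
e30 = [222.0, 225.5, 219.8, 230.1]
e0 = [216.5, 218.0, 217.9, 224.3]
t = paired_t(e30, e0)
```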
IV. DISCUSSION
This study addressed the issue of whether manipulating the initial length and color code of elastic materials can effectively enhance the magnitude of muscle torque production with an elastic resistance device. To the best of our knowledge, this is also the first investigation to measure RMT [11] for ER devices during dynamic contractions rather than the static angle-torque relationship [15, 16]. The present findings indicate that reducing the initial length by 30% and matching various color codes of elastic tubing increased total torque production (1.41%) in E30 compared with E0. Examination of the individual phases of contraction also demonstrated higher RMT values during E30 than E0 in the 1st, 2nd, 5th and 6th phases.

Table 1 RMT values within various levels of exercise modalities

RMT (Nm)                        E0               E30
Total (average of 6 phases)     556.87 (121.01)  565.10 (121.64)‡
Phase 1                         216.83 (44.07)   223.35 (45.77)‡
Phase 2                         657.16 (143.49)  665.27 (149.21)‡
Phase 3                         826.62 (183.22)  839.96 (189.70)
Phase 4                         818.86 (177.91)  820.61 (172.66)
Phase 5                         589.04 (129.19)  605.06 (127.65)‡
Phase 6                         216.92 (43.95)   225.96 (45.35)‡

NOTE. Mean (SD) values for RMT in the various phases of contraction (1-6), N = 16; the value of every phase is the average of the 1st + 5th + 8th repetitions. ‡ = E30 is significantly higher than E0.
In the 1st and 6th segments, E30 generated significantly higher muscle activation than E0. Accordingly, the increases in RMT in phase 1 (2.6%) and phase 6 (3.98%) for E30 compared with E0 support the effectiveness of the two applied strategies for developing muscle torque production in ER exercises. In the mid-concentric and
mid-eccentric phases, E30 also demonstrated significantly higher values than E0. However, in the late concentric and early eccentric phases (the 3rd and 4th phases) no statistical difference was observed between the two modes of training. Consistent with this finding, Hodges [10] concluded that after reducing the initial length of the elastic material, the distribution of the provided external force, as well as the muscle tension, shifts from late concentric to early concentric and from early eccentric to late eccentric range of motion.
V. CONCLUSION
Our results support the idea that these two strategies can potentially eliminate the drastic gap in provided external force, and probably in the level of muscle activation and torque production, between ER and conventional exercise training apparatuses. Nonetheless, the present results point to the need for further studies elucidating the magnitude and pattern of RMT in response to reducing the elastic material length by different percentages.
REFERENCES
1. Enoka, R. M. (2002). Neuromechanics of Human Movement (3rd ed.). Champaign, IL: Human Kinetics.
2. Grimshaw, P., Fowler, N., Lees, A., & Burden, A. (2006). Sport and Exercise Biomechanics. Taylor and Francis.
3. Hintermeister, R. A., Bey, M. J., Lange, G. W., Steadman, J. R., & Dillman, C. J. (1998). Quantification of elastic resistance knee rehabilitation exercises. Journal of Orthopaedic and Sports Physical Therapy, 28(1), 40-50.
4. Hodges, G. N. (2006). The effect of movement strategy and elastic starting strain on shoulder resultant joint moment during elastic resistance exercise. University of Manitoba.
5. Hopkins, J. T., Christopher, D. I., Michelle, A. S., & Susan, D. B. (1999). An electromyographic comparison of 4 closed chain exercises. Journal of Athletic Training, 34(4), 353-357.
6. Hughes, C. J., Hurd, K., Jones, A., & Sprigle, S. (1999). Resistance properties of Thera-Band® tubing during shoulder abduction exercise. Journal of Orthopaedic and Sports Physical Therapy, 29(7), 413-420.
7. Matheson, J. W., Kernozek, T. W., Fater, D. C. W., & Davies, G. J. (2001). Electromyographic activity and applied load during seated quadriceps exercises. Medicine & Science in Sports & Exercise, 33(10), 1713-1725.
8. Mikesky, A. E., Topp, R., Wigglesworth, J. K., Harsha, D. M., & Edwards, J. E. (1994). Efficacy of a home-based training program for older adults using elastic tubing. European Journal of Applied Physiology and Occupational Physiology, 69(4), 316-320.
9. Myers, J. B., Pasquale, M. R., Laudner, K. G., Sell, T. C., Bradley, J. P., & Lephart, S. M. (2005). On-the-field resistance-tubing exercises for throwers: an electromyographic analysis. Journal of Athletic Training, 40(1), 15-22.
10. Page, J. L., Ben A., Robert B., Robert C., & Robert C. (1993). Posterior rotator cuff strengthening using Theraband® in a functional diagonal pattern in collegiate baseball pitchers. Journal of Athletic Training, 28(4), 346-354.
11. Schulthies, S. S., Ricard, M. D., Alexander, K. J., & Myrer, J. W. (1998). An electromyographic investigation of 4 elastic-tubing closed kinetic chain exercises after anterior cruciate ligament reconstruction. Journal of Athletic Training, 33(4), 328-335.
12. Simoneau, G. G., Bereda, S. M., Sobush, D. C., & Starsky, A. J. (2001). Biomechanics of elastic resistance in therapeutic exercise programs. Journal of Orthopaedic and Sports Physical Therapy, 31(1), 16-24.
13. Swanik, A. K., Swanik, C. B., Lephart, S. M., & Huxel, K. (2002). The effect of functional training on the incidence of shoulder pain and strength in intercollegiate swimmers. Journal of Sport Rehabilitation, 11(2).
14. Treiber, F. A., Lott, J., Duncan, J., Slavens, G., & Davis, H. (1998). Effects of Theraband and Lightweight Dumbbell Training on Shoulder Rotation Torque and Serve Performance in College Tennis Players. The American Journal of Sports Medicine, 26(4), 510-515. 15. Winter, D. A. (2004). Biomechanics and Motor Control of Human Movement. Wiley and Sons, Inc. 16. Zatsiorsky, V. M. (2002). Kinematics of Human Motion. Champaign, IL: Human Kinetics.
A Quantitative Study of Gastric Activity on Feeding Low and High Viscosity Meals
K. Takahashi1, A. Kobayashi2, and H. Inoue1
1 Graduate School of Engineering and Resource Science, Department of Electrical and Electric Engineering, Akita University, 1-1 Tegata-gakuenmachi, Akita 010-8502, Japan
2 The Health Care Facility for the Elderly, Honobono-en, Seiwa-kai Medical Corporation, 92-1 Kaidoshita Showa-Okubo, Katagami 018-1401, Japan
Abstract–– The purpose of this study is to clarify the relationship between the viscosity of a fluid diet and the mechanism of gastric myoelectrical activity through simultaneous measurement of electrogastrography (EGG) and heart rate variability (HRV). In 8 healthy male volunteers, EGG and the electrocardiogram (ECG) were measured simultaneously, and indexes of EGG and HRV for the fasting and postprandial periods were obtained by analysis using the short-time Fourier transform (STFT) to assess gastric myoelectrical activity and autonomic nervous activity, respectively, on the intake of three test meals of different viscosities. Among the EGG indexes, the dominant power (DP) of CH3 and CH4 changed significantly (p < 0.05, 0.01) on intake of the low or high viscosity meal. In addition, the dominant frequency deviation (DFD) of CH1 and CH2 changed significantly (p < 0.05, 0.01) only on intake of the low viscosity meal. Among the HRV indexes, LF/HF changed significantly (p < 0.05) only at 15-20 min of the postprandial period on intake of the high viscosity meal. These results may indicate that the distal stomach has the main role in gastric emptying of a high viscosity meal, in line with the known mechanism of gastric emptying, whereas intake of a low viscosity meal leads to irregular activity of the proximal stomach. Keywords–– Gastrostomy, Gastroesophageal reflux disease (GERD), Electrogastrography (EGG), Heart rate variability (HRV), Short-time Fourier transform (STFT).
I. INTRODUCTION
Patients with gastrostomy show delayed gastric emptying, gastric hypokinesis and gastroesophageal reflux disease (GERD), which can cause aspiration pneumonia, and not a few patients die as a result [1]. Conventional enteral nutrition using a liquid diet has not overcome these serious complications, and it has caused a decline in quality of life (QOL). Recently, enteral nutrition using a semisolid diet has been introduced and the utility of this method has been shown. However, few previous studies have quantitatively examined the relationship between the viscosity of a fluid diet and gastric activity in man [2], partly because earlier measurements of gastric activity were invasive [3, 4]. The purpose of this study is to clarify the relationship between the viscosity of
a fluid diet and the mechanism of gastric myoelectrical activity in healthy subjects through simultaneous measurement of EGG and HRV derived from the ECG.
II. MATERIAL AND METHODS
A. Subjects
The subjects of this study were 8 healthy male volunteers from our laboratory, without any history of significant gastrointestinal, neurological or psychiatric disease, and none was taking any medication affecting gastrointestinal function at the time of the study. The mean age was 23.4 ± 2.4 (SD) years (range 22-29 years) and the mean body mass index (BMI) was 21.2 ± 1.8 SD kg/m2 (range 19.1-24.4 kg/m2). All subjects gave informed consent.
B. Test Meals
We prepared three kinds of fluid diets of different viscosity grades in order to clarify the relationship between the viscosity of a fluid diet and gastric myoelectrical activity. The first was a low viscosity meal, 410 g, 600 kcal, consisting of a high-calorie diet (Teru-meal-mini, TERUMO, Tokyo, Japan); its viscosity, measured by a B-type viscometer, was 1 cP. The second was a middle viscosity meal, 410 g, 569 kcal, consisting of the first meal mixed with a food additive to render it semisolid (Reflunon, Healthy Food, Tokyo, Japan); its viscosity was 3,000 cP. The third was a high viscosity meal, 410 g, 503 kcal, consisting of the first meal mixed with the same additive; its viscosity was 20,000 cP.
C. Experimental Procedures
The experiments were performed in a quiet room after an overnight fast of at least 9 h. All subjects were instructed not to speak and to move their bodies as little as possible. The abdominal surface at the recording sites was prepared as follows: cleaned with tissues moistened with ethanol, then carefully abraded with SkinPure (Nihon Kohden, Tokyo,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 249–252, 2011. www.springerlink.com
Japan) to enhance electrical conduction, and finally cleaned with tissues moistened with saline solution. In all measurements, EGG and HRV were recorded simultaneously for a total of 48 min. First, fasting EGG and HRV were recorded for 13 min in the supine position; subjects were then instructed to sit up and ingest one of the test meals over 5 min. After the meal, they lay in the supine position again and postprandial EGG and HRV were recorded for 30 min. This series of measurements was performed with each of the three test meals for each subject on separate days.
Recordings and analysis of EGG: After the preparatory procedure for the abdominal skin, five Ag/AgCl adhesive electrodes (Vitrode M, Nihon Kohden, Tokyo, Japan) were attached to the abdominal surface (Fig. 1). The third electrode (CH3) was positioned at the midpoint between the xiphoid process and the umbilicus. The second (CH2) and fourth (CH4) were positioned 5-6 cm to the subject's left and right, respectively, in a horizontal line with CH3, and the first (CH1) was positioned 5-6 cm above CH2 in a vertical line. The reference electrode was positioned on the right ankle [5]. These electrodes were connected to a portable recording device (Nipro EG, Nipro, Osaka, Japan); the raw EGG signals were band-pass filtered from 0.035 to 0.2 Hz and digitally recorded at a sampling rate of 1 Hz. Recorded EGG data were transferred to a personal computer using EGS1 software (Gram, Urawa, Japan), and EGG signal processing was performed with the short-time Fourier transform (STFT) using a Hamming window; the frame length and frame rate were 120 s and 20 s, respectively. The indexes of EGG were defined as follows. The dominant power (DP) was defined as the peak voltage in each frame of the frequency spectrum from 0.035 to 0.2 Hz, and reflects the intensity of gastric myoelectrical activity.
The dominant frequency (DF) was defined as the frequency at the DP in each frame of the frequency spectrum, and reflects the frequency of gastric myoelectrical activity. The obtained time series of DP and DF were divided into sub-periods: 3 to 8 min for the fasting period, and 5 to 10, 10 to 15, 15 to 20 and 20 to 25 min for the postprandial period. The mean ± SD of DP and DF were calculated for each period, and the SD of DP and DF for each period were defined as the dominant power deviation (DPD) and the dominant frequency deviation (DFD), respectively.
Recordings and analysis of HRV: After the preparatory skin procedure, two electrodes of the same type used for the EGG recording were positioned on the right infraclavicular region and on the abdomen below the left costal margin at its intersection with the left anterior axillary line. The reference electrode was positioned on the right ankle, as for the EGG recording (Fig. 1). These electrodes were connected to a portable recording device (Pocket ECG monitor, Nihon Kohden, Tokyo, Japan), and lead II of the ECG was derived.
The raw ECG signals were band-pass filtered from 0.4 to 20 Hz, digitized online at a sampling frequency of 2000 Hz using an analogue-to-digital converter (DAQCard-6036E and BNC-2110, 200 kS/s, 16-bit, National Instruments, Texas, USA), and stored on the hard disk of a personal computer. From the recorded ECG signal, R-R intervals (RRI) were calculated using a cross-correlation function to detect R waves, and the time series of RRI was analyzed with the STFT, as for the EGG, to estimate heart rate variability (HRV). The indexes of HRV were defined as follows. The low frequency (LF) and high frequency (HF) components were given as the area of the frequency spectrum from 0.05 to 0.15 Hz and from 0.15 to 0.4 Hz, respectively; the ratio of LF to HF (LF/HF) reflects the activity of the sympathetic nervous system and HF reflects the activity of the parasympathetic nervous system. The obtained time series of HF and LF/HF were divided into sub-periods in the same way as DP and DF.
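The frame-wise DP/DF extraction described above can be sketched as follows; this is a simplified illustration with numpy's FFT, since the actual EGS1 processing details are not specified beyond the window, frame length and frame rate.

```python
import numpy as np

FS = 1.0              # EGG sampling rate (Hz)
FRAME = 120           # frame length: 120 s = 120 samples at 1 Hz
STEP = 20             # frame rate: a new frame every 20 s
BAND = (0.035, 0.2)   # gastric band of interest (Hz)

def dominant_power_and_freq(egg):
    """Per-frame dominant power (DP) and dominant frequency (DF) of
    an EGG trace via a Hamming-windowed short-time Fourier transform."""
    win = np.hamming(FRAME)
    freqs = np.fft.rfftfreq(FRAME, d=1.0 / FS)
    mask = (freqs >= BAND[0]) & (freqs <= BAND[1])
    dp, df = [], []
    for start in range(0, len(egg) - FRAME + 1, STEP):
        spec = np.abs(np.fft.rfft(egg[start:start + FRAME] * win))
        k = int(np.argmax(spec[mask]))
        dp.append(spec[mask][k])   # peak magnitude within the band
        df.append(freqs[mask][k])  # frequency at that peak
    return np.array(dp), np.array(df)

# DPD / DFD for a sub-period are then simply the standard deviations
# of dp / df over the frames falling in that sub-period.
```

The same STFT machinery, applied to the R-R interval series with LF (0.05-0.15 Hz) and HF (0.15-0.4 Hz) band areas in place of a single peak, yields the HRV indexes.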
Fig. 1 Position of electrodes
D. Statistical Analysis
Two-way factorial analysis of variance (ANOVA) without replication and Tukey's method for multiple comparisons were used to examine significant changes in the postprandial periods compared with the fasting period for the three test meals. All variables are described as mean ± SD, and p < 0.05 or p < 0.01 was considered statistically significant.
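A minimal sketch of the two-way ANOVA without replication used here (one cell per row-column combination, e.g. subjects as rows and periods as columns; only the F statistics are returned, and Tukey's post-hoc step is omitted):

```python
import numpy as np

def two_way_anova_no_rep(x):
    """Two-way factorial ANOVA without replication on an (r x c)
    table; returns (F_rows, F_cols)."""
    x = np.asarray(x, dtype=float)
    r, c = x.shape
    grand = x.mean()
    ss_rows = c * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = r * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_err = ss_err / ((r - 1) * (c - 1))  # residual mean square
    return (ss_rows / (r - 1)) / ms_err, (ss_cols / (c - 1)) / ms_err
```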
III. RESULTS
A. Electrogastrography
Examples of the EGG, DP and DF for the high viscosity meal at CH1 and CH4 in the same subject are shown in Fig. 2 and Fig. 3, respectively. In this subject, the EGG amplitude and DP at CH4 increased significantly and the DF at CH4 became stable after intake of the meal compared
with the fasting period. On the other hand, the EGG amplitude and DP at CH1 did not increase significantly after intake compared with the fasting period, and the DF at CH1 remained unstable even after the intake was finished. Data for all subjects, expressed as mean ± SD, are shown for DP, DPD, DF and DFD for each test meal at each channel (Table 1). For DP at CH1 and CH2, no significant changes were found between the fasting and postprandial periods for any test meal. On the other hand, for DP at CH3 and CH4, significant changes were found between the fasting and postprandial periods for the low and high viscosity meals. Additionally, for DFD at CH1 and CH2, significant changes were found between the fasting and postprandial periods only for the low viscosity meal, whereas for DFD at CH3 and CH4, no significant changes were found for any test meal.
B. Heart Rate Variability
An example of the RRI, LF, HF and LF/HF for the high viscosity meal in the same subject is shown in Fig. 4. Data for all subjects are shown as mean ± SD together with the EGG indexes (Table 1). For HF, no significant changes were found between the fasting and postprandial periods for any of the meals, whereas for LF/HF, a significant change was found between the fasting period and the 15-20 min postprandial period, and only for the high viscosity meal.
Fig. 4 An example of HRV on high viscosity meal
Fig. 2 An example of EGG waveforms at CH1 on the high viscosity meal

Fig. 3 An example of EGG waveforms at CH4 on the high viscosity meal

IV. DISCUSSION
The indexes of EGG at each channel showed distinct results for the three test meals. The absence of a significant change in DP at CH1 and the significant change in DP at CH3 and CH4 for the low and high viscosity meals after intake may be explained by a mechanism in which the distal stomach, i.e. the pylorus and pyloric antrum, has the main role in gastric emptying of a liquid or semisolid meal [4]. The positions of CH1 and CH2 are close to the proximal stomach (the gastric fundus and corpus), whereas the positions of CH3 and CH4 are close to the distal stomach. In addition, the significant change in DFD at CH1 and CH2 for the low viscosity meal and the absence of significant changes in DFD at CH3 and CH4 for the high viscosity meal after intake suggest that the activity of the proximal stomach is irregular. Since some reports show that the proximal stomach has the main role in gastric emptying of a liquid meal [3, 4], our results may indicate that, because receptive relaxation does not occur on intake of a liquid meal, physiological gastric
Table 1 Indexes of electrogastrography and heart rate variability in the fasting and postprandial periods

Index          Channel  Meal    Fasting      5-10 min     10-15 min    15-20 min    20-25 min
Electrogastrography
DP [μV]        CH1      low     49.3±12.0    48.7±12.7    55.8±26.6    59.3±26.1    48.1±9.4
                        middle  55.2±35.2    54.3±16.1    54.9±15.5    60.7±15.7    57.4±13.1
                        high    47.4±10.5    58.1±11.3    56.2±19.9    52.5±14.5    58.5±26.5
               CH3      low     37.0±18.9    62.4±26.1    79.4±57.6**  60.5±27.2    55.7±21.7
                        middle  48.1±35.9    68.6±28.3    61.2±26.7    61.8±19.7    63.9±23.3
                        high    32.8±10.6    72.0±40.2**  78.1±42.3**  70.2±46.4**  68.0±42.1**
               CH4      low     32.6±17.1    59.4±15.0**  64.6±28.6**  59.3±25.3**  51.7±13.4
                        middle  36.1±26.1    66.9±41.5*   65.5±30.0    54.6±17.8    59.4±19.4
                        high    29.4±11.5    71.6±31.6**  71.1±38.5**  61.7±22.5*   55.2±16.1
DFD*103 [Hz]   CH1      low     7.5±4.5      25.5±11.7**  23.3±14.0*   16.0±9.2     19.2±7.8
                        middle  13.3±12.3    21.6±12.6    18.2±11.9    16.5±9.0     16.7±8.3
                        high    10.3±10.8    14.7±7.1     11.4±4.5     19.6±8.3     16.0±7.4
               CH2      low     9.1±6.3      29.3±14.6**  30.9±10.6**  31.5±13.7**  18.0±6.6
                        middle  13.2±10.1    24.1±15.4    27.2±12.0    16.2±7.0     20.7±10.8
                        high    13.1±15.5    18.7±12.3    20.2±18.8    22.3±12.5    17.7±7.0
               CH3      low     23.5±18.5    19.9±18.2    16.1±14.1    26.7±16.6    19.5±15.9
                        middle  14.9±22.7    16.6±14.2    17.2±11.5    21.0±13.5    15.3±8.6
                        high    15.6±12.9    14.0±13.2    9.8±13.4     16.4±15.6    22.4±12.3
Heart rate variability
HF*103                  low     1.11±0.47    0.96±0.28    0.90±0.21    0.93±0.24    0.98±0.32
                        middle  0.99±0.15    0.94±0.16    0.92±0.15    0.94±0.16    0.96±0.10
                        high    1.01±0.40    0.99±0.33    0.98±0.34    0.96±0.23    0.98±0.16
LF/HF                   low     0.99±0.26    1.00±0.22    1.05±0.32    1.05±0.30    1.10±0.41
                        middle  0.95±0.18    1.01±0.19    1.03±0.22    1.08±0.23    1.06±0.27
                        high    0.97±0.20    0.98±0.15    1.05±0.22    1.19±0.15*   1.16±0.30

All variables are described as mean ± SD; the 5-10 to 20-25 min columns are postprandial periods. * p < 0.05, ** p < 0.01, postprandial periods compared with the fasting period.
emptying was not promoted. This may then easily lead to complications such as GERD. The autonomic nervous system, consisting of the sympathetic and parasympathetic nerves, controls gastric activity. In this study, there was a significant change in LF/HF only at 15-20 min of the postprandial period for the high viscosity meal, and this result may suggest a mechanism contrary to the accepted view that gastric activity is controlled by the parasympathetic nerve [6, 7]. In conclusion, these results suggest that distal gastric activity may be promoted by intake of a high viscosity meal, whereas proximal gastric activity may easily become irregular on intake of a low viscosity meal. Thus, in this study, the behavior of gastric myoelectrical activity in response to differences in meal viscosity was detected quantitatively with the non-invasive measurement of EGG. In future work, we will investigate patients with gastrostomy and simulate gastric activity with a numerical analysis.
REFERENCES
1. R. M. Corben, A. Weintraub, A. J. DiMarino Jr et al. (1994) Gastroesophageal Reflux during Gastrostomy. J Parenter Enteral Nutr, 106, pp. 13-18.
2. Kenneth L. Koch, William R. Stewart, and Robert M. Stern (1987) Effect of Barium Meals on Gastric Electromechanical Activity in Man. Dig Dis Sci, 32, 11, pp. 1217-1222.
3. J Prove, H J Ehrlein (1982) Motor Function of Gastric Antrum and Pylorus for Evacuation of Low and High Viscosity Meals in Dogs. Gut, 23, pp. 150-156. 4. Kelly, Keith A (1980) Gastric Emptying of Liquids and Solids roles of proximal and distal stomach. Am J Physiol, 239, pp. G71-G76. 5. B. Krusiec-Swidergol and K. Jonderko (2008) Multichannel Electrogastrography under a Magnifying Glass – An In-Depth Study on Reproducibility of Fed. Neurogastroenterol Motil, 20, pp. 625-634. 6. C. A. Friesen, Z. Lin, J. V. Schurman, L. Andre and R. W. Mccallum (2007) Autonomic Nervous System Response to a Solid Meal and Water Loading in Healthy Children : Its Relation to Gastric Myoelectrical Activity. Neurogastroenterol Motil, 19, pp. 376-382. 7. Michiko Watanabe et al. (1996) Effects of Water Ingestion on Gastric Electrical Activity and Heart-rate Variability in Healthy Human Subjects. J Auton Nerv Syst, 58, pp. 44-50. Author: Keita Takahashi Institute: Graduate school of Engineering and Resource Science Department of Electrical and Electric Engineering, Akita University Street: 1-1 Tegata-gakuenmachi City: Akita Country: Japan Email:
[email protected]
IFMBE Proceedings Vol. 35
A Study of Extremely Low Frequency Electromagnetic Field (ELF EMF) Exposure Levels at Multi Storey Apartment R. Tukimin1 and W.N.L. Mahadi2 1
Radiation Health and Safety Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang, Selangor 2 Department of Electrical Engineering, University Malaya, 50603 Kuala Lumpur
Abstract— People living near transmission lines are exposed to extremely low frequency electromagnetic fields (ELF EMF). Concern over electromagnetic field (EMF) exposure is growing even though it is categorized as non-ionising radiation (NIR) in the electromagnetic spectrum. Scientific reports have suggested that excessive exposure to EMF emitted from various electrical sources could lead to adverse health effects. This paper focuses on the ELF EMF exposure level at a multi-storey apartment located adjacent to 275 kV transmission lines in the Klang Valley area. The objective of this study is to determine the ELF EMF radiated from the 275 kV transmission line as well as to assess the potential EMF exposure received by the residents living in the apartment. The ELF EMF was assessed using an electromagnetic probe for low frequency. The electric and magnetic field levels were evaluated and compared to the exposure limits recommended by the International Commission on Non-Ionising Radiation Protection (ICNIRP). It is observed that the magnetic fields near the transmission lines are higher than in typical situations but still below the permissible exposure limit.
I. INTRODUCTION

The electricity system produces extremely low frequency electromagnetic field (ELF EMF) radiation, which falls under the category of non-ionising radiation (NIR). Scientific reports have suggested that excessive exposure to high electromagnetic field (EMF) radiation could lead to adverse health effects. As EMF from the electricity system has become a major source of NIR exposure in the environment, it has resulted in significant opposition to the construction and upgrading of power transmission facilities in some areas. The issue of EMF needs to be addressed clearly to alleviate misunderstanding of perceived health effects, especially for those living adjacent to overhead transmission lines. In Malaysia, electricity is supplied at 50 cycles per second (50 Hz), and the supply involves three main processes, namely generation, transmission and distribution. For electrical transmission, the system used by Tenaga Nasional Berhad (TNB) has main voltage levels of 500 kV (backbone national grid), 275 kV, 132 kV and 66 kV. Electricity is distributed via an extensive distribution system carrying voltages of 33 kV, 22 kV and 11 kV before finally being stepped down by various substations to 415 V and 240 V. To establish the potential hazard of ELF EMF exposure, a study needs to be carried out to assess the level of ELF EMF radiation emitted from sources such as transmission lines. The objective of this study is to determine the electric and magnetic field levels radiated by the 275 kV transmission line as well as to assess the potential EMF exposure received by the residents living at the selected study sites. The electric and magnetic field levels were evaluated and compared to the exposure limits recommended by the International Commission on Non-Ionising Radiation Protection (ICNIRP).
II. HAZARD AND RECOMMENDED PERMISSIBLE LIMITS
It is widely reported that the emission of ELF EMF is hazardous to human health. The claimed hazard of ELF EMF to human beings has previously been associated with the occurrence of cancers among children living close to overhead high-tension cables and among adults working in the electricity industry, attributed to prolonged ELF EMF exposure [2]. The magnetic and electric fields are normally generated in a very weak form; they are unable to break the bonds of the basic biological structures of the human body or to render its tissues and organs functionless, but they are known to be capable of inducing currents in the human body upon direct exposure (Tenforde T.S. 1996). Electric fields are produced by electric charges, while magnetic fields are produced by the flow of current through wires or electrical appliances [1]. The strength of the magnetic field varies with the power consumption, as does the electric field. The critical field in causing problems is the magnetic field, since electric fields are easily absorbed by objects such as the walls and floors of buildings and by trees. A large number of research studies have been carried out by scientists to confirm the occurrence of such harmful health effects and their association with exposure to ELF
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 253–257, 2011. www.springerlink.com
EMFs. However, up to now, none of them has provided convincing evidence to establish a link between the health effects and the exposure. As there is no complete, consistent picture or conclusive finding on the association between ELF EMF exposure and cancer risk from the outcomes of the research studies so far, scientists and prominent national responsible agencies unanimously agree that more conclusive evidence is needed before a definite conclusion can be made on this issue [3]. Based on the existing evidence of effects resulting from exposure at higher emission levels, international organizations such as the International Commission on Non-Ionising Radiation Protection (ICNIRP), the World Health Organization (WHO), the Institute of Electrical and Electronics Engineers (IEEE) and the National Council on Radiation Protection and Measurements (NCRP) have established permissible exposure limits for workers and the public which are deemed acceptable for the prevention of serious health effects from exposure to such fields. The limits were established based on confirmed scientific findings of observable health effects and were set in terms of relevant measurable quantities [4]. Many countries and international organizations have adopted the ICNIRP limits into their respective national and international safety standards and legislation; examples include the World Health Organization (WHO/182 1998), the International Labour Organization (ILO) and the European Committee for Electrotechnical Standardization (CENELEC 1995). Based on these guidelines, the occupational and public exposure limits are 10,000 V/m and 5,000 V/m respectively for the electric field, and 5000 mGauss and 1000 mGauss respectively for the magnetic field [5].
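The limits quoted above can be restated compactly in code; the sketch below (Python, with an assumed helper name) collects them and adds the milligauss-to-microtesla conversion that is often needed when comparing against ICNIRP tables expressed in µT.

```python
# ICNIRP (1998) power-frequency reference levels as quoted in the text.
# Note: 1 mGauss = 0.1 microtesla.
LIMITS = {
    "electric_V_per_m": {"occupational": 10_000, "public": 5_000},
    "magnetic_mGauss": {"occupational": 5_000, "public": 1_000},
}

def mgauss_to_microtesla(mgauss):
    """Convert a magnetic flux density from milligauss to microtesla."""
    return mgauss * 0.1
```

For example, the public magnetic-field limit of 1000 mGauss corresponds to 100 µT, the value usually quoted in the ICNIRP guidelines for 50 Hz fields.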
III. MEASUREMENT METHOD

Measurements of the field strengths were carried out using a PMM instrument Model 8053 with an attached probe Model PMM EHP-50A (5 Hz-100 kHz) and an Emdex II together with a data logger and a reader (see Figure 1). The instruments can measure both electric and magnetic fields simultaneously. They draw signals from three sensors positioned orthogonally to each other in order to minimize geometrical error during the measurement. Readings were taken in milligauss (mGauss) for the magnetic fields and volts per meter (V/m) for the electric fields. Instrument setup and measurement protocols were adopted from the standard measurement methods and procedures of US IEEE/ANSI Standard C644-1987 and the ICNIRP/IRPA Guidelines 1997 [6]. The lowest detection limit of the PMM and Emdex II instruments is 0.10 V/m and 0.10 mGauss respectively. The instruments were calibrated by the manufacturer with an accuracy of ±2% and ±0.8 dB and tested against the standard fields at the NIR Laboratory of the Malaysian Nuclear Agency to ensure performance consistency and acceptability before use. For the purpose of this study, two sites were selected, located in the Seri Kembangan and Petaling Jaya areas (Figure 2 and Figure 3). The measurement points at the selected study sites are indicated in the layout plans shown in Figure 4 to Figure 7.

Fig. 1 PMM instrument with EHP-50A probe

Fig. 2 Photo of study location (site A) in Petaling Jaya

Fig. 3 Photo of study location (site B) in Seri Kembangan
Fig. 4 Layout of the measurement points on the ground around the site A
Fig. 7 Indication of measurement at every level of the apartment at site B
Fig. 5 Indication of measurement inside one of the residences at level 10 of site A

Fig. 6 Layout of the measurement points on the ground around site B

IV. RESULTS AND DISCUSSION
Results obtained from the study are shown in Figure 8 to Figure 14. The results confirm the presence of both electric and magnetic fields at the selected assessment locations. The levels were found to vary across measurement locations and were strongly influenced by the vertical and horizontal distance from the transmission line. At all of the locations selected in the study, the average magnetic field strengths varied from a low of 1.24 mGauss to a high of 38.34 mGauss (Figure 8 and Figure 10). The electric field strengths captured at all measurement points of both study locations varied from 0.10 V/m to a high of 247.06 V/m (Figure 9 and Figure 11). From the results obtained, we can compare the magnetic field levels outside and inside the apartment. The highest magnetic field recorded outside site A was 10.45 mGauss, which is 1.05% of the exposure limit for the public. Site B had slightly higher magnetic fields, at 20.46 mGauss (2.05% of the exposure limit recommended for the public). Inside the apartment, the highest magnetic field captured at site A was 38.34 mGauss, while inside site B the highest field captured was only 6.63 mGauss. The highest magnetic field strength inside the apartment was therefore 3.83% of the exposure limit (1000 mGauss) and was measured inside the apartment unit located at level 10. Compared to other measurement points, the magnetic field at this unit was higher due to its closer distance to the transmission cables and the magnetic field contribution from the electrical appliances inside the unit.
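The percentage figures in this paragraph follow directly from the public magnetic-field limit; a minimal sketch of the calculation (Python, helper names assumed):

```python
PUBLIC_LIMIT_MGAUSS = 1000.0  # ICNIRP public reference level at 50 Hz

def percent_of_limit(field_mgauss):
    """Express a measured magnetic field as a percentage of the public limit."""
    return 100.0 * field_mgauss / PUBLIC_LIMIT_MGAUSS

# Peak readings reported in the text
readings = {
    "site A outdoor": 10.45,
    "site B outdoor": 20.46,
    "site A indoor, level 10": 38.34,
}
percentages = {site: percent_of_limit(b) for site, b in readings.items()}
```

Rounded to two decimals, these give the 1.05%, 2.05% and 3.83% quoted above.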
The spatial distribution of the field strengths inside the building was also captured using the LINDA system and presented in isocontour plots (Figure 12). The plots clearly indicate the spatial variation of the field strengths. The magnetic field level inside the apartment was slightly higher because the field was emitted not only by the transmission line but also contributed by the electrical appliances in the study location.

Fig. 8 A plot of ELF magnetic field strength, mGauss, against location around the site (and comparison with the exposure limits) for site A

Fig. 9 A plot of ELF electric field strength, V/m, against location around the site (and comparison with the exposure limits) for site A

Measurements made underneath the cables indeed showed very significant field strengths for both the electric and magnetic fields. This was understandable since the distance involved was very close to the cable. Prolonged measurements over 24 hours were made at site A and site B. The field strengths were expected to be influenced by the amount of electricity carried by the cables, which, in turn, depends on the usage of electricity during the period of the survey. Higher voltage and current would produce higher electric and magnetic fields around the cables. However, such influence could not be observed from the actual spectrum of ELF electric and magnetic fields measured over 24 hours because the field strengths had dropped very significantly to a level well below the exposure limit and the detection capability of the measuring instrument.

Fig. 10 Magnetic field strengths against floor level at site B

Fig. 11 Electric field strengths against floor level at site B

Fig. 12 Isocontour plot of magnetic field strengths inside the house at site A
V. CONCLUSION

The study and evaluation of electric and magnetic fields has identified the presence of extremely low frequency EMF radiation at the study sites. The results obtained show that magnetic field levels vary with distance from the transmission line. At some measurement points at the apartment, the magnetic fields were higher than those in a typical situation but still far below the permissible exposure limit. The highest magnetic field captured was 38.34 mGauss, which was only 3.83% of the exposure limit recommended by ICNIRP for members of the public. The EMF levels were very much lower than the exposure limit recommended by ICNIRP and are considered safe, but evaluation is still important in determining possible long-term effects of prolonged ELF EMF exposure.

Fig. 13 Plot of magnetic field against time measured over 24 hours (site A)

Fig. 14 Plot of magnetic field against time measured over 24 hours (site B)
REFERENCES
[1] Washburn E.P., Orza M.J., Berlin J.A., Nicholson W.J., Todd A.C., Frumkin H. and Chalmers T.C.; Residential proximity to electricity
[2] transmission and distribution equipment and risk of childhood leukaemia, childhood lymphoma, and childhood nervous system tumors: systematic review, evaluation and meta-analysis, Cancer Causes and Control 5, 1994, pp. 299-309
[3] Tenforde T.S.; Interaction of extremely-low frequency electromagnetic fields with living systems, Proceedings of Third International Non-Ionising Radiation Workshop, Baden, Austria, 1996, edited by R. Matthes
[4] WHO/182 1998; Electromagnetic Fields and Public Health, WHO fact sheet No. 182, May 1998
[5] ICNIRP Guidelines; Guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields (up to 300 GHz), Health Physics Vol. 74 No. 4, April 1998
[6] IEEE; IEEE Standard Procedures for Measurement of Power Frequency Electric and Magnetic Fields from AC Power Lines, 1987
[7] National Radiological Protection Board; Electromagnetic Fields and the Risk of Cancer, Documents of the NRPB Vol. 3 No. 1, 1992
[8] Australian National Health and Medical Research Council; Interim guidelines on limits of exposure to 50/60 Hz electric and magnetic fields, Radiation Health Series No. 30
[9] VitaTech 2000; Guide to Solving AC Power EMF Problems in Commercial Buildings, VitaTech Engineering, January 2000
Accuracy Improvement for Low Perfusion Pulse Oximetry J.M. Cho1,3, N.H. Kim1, H.S. Seong1, and Y.S. Kim2 1
Department of Biomedical Engineering, Inje University, Gimhae, Korea 2 Research Labs, Medical Supply Co. Ltd., Wonju, Korea 3 Institute of Aged Life Redesign, Inje University, Gimhae, Republic of Korea
Abstract— This paper describes feasible methods which can improve the accuracy of SpO2 values at low perfusion indices of 1 % or lower. The suggested methods were reviewed and analyzed thoroughly. Time-domain methods, including digital filters and common-mode noise reduction based on a differential amplifier, were considered. It is shown that the suggested system could distinguish SpO2 signals up to 1 % when PI is 1 %. When PI is 0.1 %, it could still classify the signal; however, the standard deviation was too wide to be accepted in clinical practice. Keywords— Pulse oximetry, Digital filter, Common-mode noise reduction, Low perfusion index.
I. INTRODUCTION

Blood oxygen concentration is very important in clinical practice, and the saturation level is a good indication of the performance of the cardio-respiratory functions. The main application areas of oximetry are the diagnosis of cardiac and vascular abnormalities. A major concern during anesthesia is the prevention of tissue hypoxia, which necessitates immediate and direct information about the level of tissue oxygenation. Oximetry is regarded as a standard of care in anesthesiology and has significantly reduced anesthesia-related cardiac deaths [1][2]. It is already used in operating rooms, recovery rooms and most intensive care units. However, it is not easy to obtain an accurate blood oxygen concentration when the pulsing signal in pulse oximetry is extremely low. This paper focuses on methods that can mitigate this problem.
II. METHODS AND MATERIALS

A. Pulse Oximetry

Oximetry refers to the determination of the percentage of oxygen saturation of circulating arterial blood. Oxygen saturation is defined as follows:

Oxygen saturation = HbO2 / (HbO2 + Hb)    (1)

where HbO2 is the concentration of oxygenated haemoglobin and Hb is the concentration of deoxygenated haemoglobin. Pulse oximetry is based on the concept that arterial oxygen saturation determinations can be made using two wavelengths, provided the measurements are made on the pulsatile part of the waveform. The two-wavelength method assumes that only two absorbers are present, namely oxyhemoglobin (HbO2) and reduced hemoglobin (Hb) [1]. A typical fingertip oximeter probe has two LEDs, one emitting infra-red light with a wavelength of 940 nm and the other emitting red light with a wavelength of 660 nm, and one photo detector. Pulse oximetry relies on the fact that the degree of absorption of each of these two lights by living tissue is significantly different for oxyhemoglobin (HbO2) and reduced hemoglobin (Hb). In this study, the red and infra-red LEDs are driven alternately, as shown in Figure 1. As shown in Figure 2, the output of the photo detector contains a DC component and an AC component. The pulsing of arterial blood generates the AC signal, while all constant absorbers generate the DC component.

Fig. 1 Driving red LED and infrared LED alternately
Fig. 2 The raw signal containing DC and AC components caused by pulsing of arterial blood
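Equation (1) is simple enough to restate directly in code; this sketch (Python, function name assumed) just encodes the definition:

```python
def oxygen_saturation(hbo2, hb):
    """Eq. (1): fractional oxygen saturation from the concentrations of
    oxygenated (HbO2) and deoxygenated (Hb) haemoglobin (same units)."""
    return hbo2 / (hbo2 + hb)
```

For example, concentrations of 98 and 2 (arbitrary units) give a saturation of 0.98, i.e. 98 %.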
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 258–261, 2011. www.springerlink.com
Fig. 3 Signal flow diagram of the setup prepared for this study

Peripheral arterial oxygen saturation can be obtained from the ratio of absorbances as shown in Eq. (2):

R = (RED_AC / RED_DC) / (IR_AC / IR_DC)    (2)

Once the ratio R is calculated, the peripheral arterial oxygen saturation, SpO2, can be obtained from a look-up table that has been established empirically. In Eq. (2), the perfusion index (PI) is defined as the ratio of the AC component to the DC component, and it depends on the individual and the measuring site, even in the same person. Generally, the calculated value R contains little error when PI is 3 % or higher, but the error grows when PI is 1 % or lower. This inaccurate R value then results in an error when finding the SpO2 value from the look-up table.

B. Error Sources for the Case of Low PI

Many issues cause error in the calculation of R, but through analysis of experimental results, the following conditions were identified as major error sources for low PI. The experimental setup used in this study is depicted in Figure 3.

1) Ambient Light

Light emitted from the LED is absorbed through skin and vascular tissues and the remaining light is received by the photo detector. However, it was proved through experiment that ambient light is also received by the photo detector, and this causes a large error in the calculation of the R value. The strength of the ambient light was measured while both LEDs were turned off (see Figure 1), low-pass filtered, and applied to the common-mode input terminal of a differential amplifier to remove the error caused by the ambient light, as shown in Figure 3.

2) Design of Digital Filters

First of all, it is necessary to separate the AC component and DC component of the photo detector output signal in order to calculate the R value. The output signal from the photo detector contains a very small AC signal in amplitude compared to the large DC signal, as shown in Figure 2. A band-pass filter (BPF) and a low-pass filter (LPF) are used to extract the AC and DC signals respectively from this composite signal. Digital filters were examined and used for the BPF and LPF to realize this function.

3) Design of LPF

The main role of the LPF in this study is to extract the DC component only, and the cut-off frequency of the LPF is decided on this basis. The amplitude ratio of the AC component to the DC component is 1/1000 (-60 dB) when PI is 0.1 %, thus the LPF should satisfy the following conditions: the maximum ripple in the pass band is less than 0.001 (0.00868 dB) and the attenuation in the stop band should be greater than 60 dB. Since the phase response of an IIR filter is non-linear, only finite impulse response (FIR) filters were examined, using the Filter Design and Analysis tool of MATLAB [3][4].

4) Design of BPF

The main role of the BPF in this study is to remove the DC component and unnecessary higher-frequency AC signals and to extract the necessary AC component, whose frequency range is decided by the heart rates (HR) of interest. In this study, we considered HR from 30 to 300 beats per minute (BPM) only, thus the pass band of the BPF falls on 0.5 - 5 Hz. As for the LPF, the amplitude ratio of the AC component to the DC component is 1/1000 (-60 dB) when PI is 0.1 %, thus the BPF should satisfy the same conditions: pass band ripple
of voltage should be less than 0.001 (0.00868 dB) and stop band attenuation should be greater than 60 dB. In addition, the phase response of the BPF should be linear so that all the AC components keep their phases. Therefore, only finite impulse response (FIR) filters were examined, using the Filter Design and Analysis tool of MATLAB [3][4].
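The paper used MATLAB's FDA tool with an equiripple design; as an illustration of the same linear-phase FIR idea (not the authors' actual filter), here is a NumPy sketch of a 0.5-5 Hz band-pass built by the windowed-sinc method. The sampling rate is an assumption, since the paper does not state one.

```python
import numpy as np

def bandpass_fir(numtaps, f1, f2, fs):
    """Linear-phase band-pass FIR via the windowed-sinc method
    (Blackman window, roughly 74 dB stop-band attenuation)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0

    def lowpass(fc):
        # Ideal low-pass impulse response with cutoff fc (Hz)
        return (2.0 * fc / fs) * np.sinc(2.0 * fc / fs * n)

    # Band-pass = difference of two low-pass responses, then windowed
    return (lowpass(f2) - lowpass(f1)) * np.blackman(numtaps)

fs = 100.0                                # assumed sampling rate, Hz
taps = bandpass_fir(1001, 0.5, 5.0, fs)   # 0.5-5 Hz covers 30-300 BPM
dc_gain = abs(taps.sum())                 # frequency response at 0 Hz
```

Symmetric taps guarantee the linear phase required above, and the residual DC gain (the sum of the taps) shows how strongly the dominant DC term is rejected; the attenuation budget here differs from the equiripple specification in the paper.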
B. Design of BPF

A BPF which satisfies the required conditions was designed with the equiripple method using the FDA tool of MATLAB; its magnitude response, phase response, and pass band ripple are shown in Figure 7 - Figure 9 respectively.
III. RESULTS

A. Design of LPF

An LPF which satisfies the required conditions was designed with the equiripple method using the FDA tool of MATLAB; its magnitude response, phase response, and pass band ripple are shown in Figure 4 - Figure 6 respectively.
Fig. 7 Magnitude response for the designed BPF filter
Fig. 4 Magnitude response for the designed LPF filter
Fig. 8 Phase response for the designed BPF filter
Fig. 5 Phase response for the designed LPF filter
Fig. 9 Pass band ripple of the designed BPF filter

Fig. 6 Pass band ripple of the designed LPF filter

C. Simulation of the Performance of the Suggested Pulse Oximetry System
The suggested pulse oximetry system was constructed using SIMULINK (Mathworks) and the output signal from
an Index 2 pulse oximeter simulator (FLUKE) was used as the input signal to the system. The performance of the whole system was examined for PI values of 1 % and 0.1 %, and the results are depicted in Figure 10 and Figure 11 respectively. In addition, the mean values and standard deviations of R for the various SpO2 values are listed in Table 1.
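The ratio-of-ratios and table look-up described in Section II can be sketched as follows; the calibration pairs here are illustrative values taken loosely from Table 1 (PI = 1 %), not the empirically established clinical table the authors used.

```python
import numpy as np

def ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    """Eq. (2): R = (RED_AC / RED_DC) / (IR_AC / IR_DC)."""
    return (red_ac / red_dc) / (ir_ac / ir_dc)

# Illustrative (R, SpO2) calibration points -- NOT a clinical table.
R_POINTS = [0.460, 0.610, 0.781, 1.004, 1.236]
SPO2_POINTS = [100.0, 96.0, 90.0, 80.0, 70.0]

def spo2_from_r(r):
    """Piecewise-linear interpolation into the monotonic look-up table."""
    return float(np.interp(r, R_POINTS, SPO2_POINTS))
```

Because R increases monotonically as SpO2 falls, a simple interpolated look-up is sufficient once the AC/DC components have been extracted by the filters above.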
IV. CONCLUSIONS

We could increase the accuracy of SpO2 for low PI values by implementing two methods. The error in SpO2 caused by ambient light could be reduced sufficiently by applying it to the common-mode input of a differential amplifier. The overall system, including digital filters satisfying the required conditions, was designed, implemented and tested with SIMULINK using a pulse oximeter simulator signal. It shows that the suggested system can distinguish SpO2 signals up to 1 % when PI is 1 %. When PI is 0.1 %, it can still classify the signal; however, the standard deviation is too wide to be accepted in clinical practice. It appears that the LPF and BPF cannot reduce the unnecessary signal effectively. This could be solved with longer digital filters, but they would produce an extremely long time delay and would not be practical. Different approaches are required to separate the AC and DC components from the composite input signal.
Fig. 10 Simulation result of R value when PI is 1 %
Fig. 11 Simulation result of R value when PI is 0.1 %
Table 1 Mean values and standard deviations of R for various SpO2 values

        PI = 0.1 %          PI = 1 %
SpO2    Mean      SD        Mean      SD
100     0.484     0.074     0.460     0.004
98      0.562     0.055     0.551     0.004
96      0.617     0.064     0.610     0.004
94      0.681     0.077     0.671     0.003
92      0.734     0.066     0.731     0.003
90      0.786     0.084     0.781     0.004
88      0.861     0.086     0.842     0.004
86      0.886     0.073     0.883     0.005
84      0.934     0.063     0.922     0.005
82      0.972     0.066     0.964     0.006
80      1.019     0.077     1.004     0.005
78      1.055     0.087     1.044     0.006
76      1.083     0.088     1.085     0.005
74      1.131     0.109     1.116     0.007
72      1.161     0.126     1.144     0.007
70      1.208     0.116     1.236     0.006

REFERENCES
1. Khandpur R (2003) Handbook of biomedical instrumentation, Tata McGraw-Hill Publishing Co. Ltd., New Delhi
2. http://en.wikipedia.org/wiki/Photoplethysmograph
3. Ingle V and Proakis J (2007) Digital signal processing using MATLAB, Thomson, U.S.A.
4. Reddy D (2005) Biomedical signal processing: principles and techniques, Tata McGraw-Hill Publishing Co. Ltd., New Delhi
Author: Jongman Cho
Institute: Inje University
Street: 607, Obang-dong
City: Gimhae
Country: Korea
Email: [email protected]
Application of a Manometric Technique to Verify Nasogastric Tube Placement in Intubated, Mechanically Ventilated Patients H.S. Chen1,2, K.C. Chung1, S.H. Yang1, T.H. Li1, and H.F. Wu1 1 2
Institute of Biomedical Engineering, National Cheng-Kung University, Tainan, Taiwan, R.O.C. Department of Anesthesiology, E-DA Hospital, I-Shou University, Kaohsiung, Taiwan, R.O.C.
Abstract— Objectives: Confirmation of nasogastric tube (NGT) placement is sometimes difficult in clinical practice, especially in intubated, mechanically ventilated patients. The purpose of this study was to validate the accuracy of a manometric technique for confirming intragastric NGT placement in intubated, mechanically ventilated patients. Methods: A total of 100 adult patients who underwent elective open abdominal surgery and required NGT placement were enrolled in this prospective descriptive study. After patients were anesthetized and intubated, an NGT was inserted to reach a predetermined depth according to the nose-ear-xiphoid method. The NGT position was verified by two blinded investigators: one using the manometric technique and the other using fiberscopic examination. The manometric technique involved using a cuff pressure manometer to verify NGT placement. The primary measurements, sensitivity and specificity of the manometric technique for verifying NGT placement, were calculated according to the standard findings of fiberscopic examination. Results: In 81 of 100 NGT placements, intragastric placement was interpreted by the manometric technique. All of these 81 placements were confirmed by fiberscopic examination. The manometric technique was therefore 100% sensitive. The other 19 placements interpreted as extragastric placement by the manometric technique were confirmed by fiberscopic examination as being either in the oral cavity, trachea, or esophagus, indicating 100% specificity. These results revealed a 100% accuracy of the manometric technique in verifying intragastric NGT placement. Conclusion: The manometric technique may be used to verify correct NGT placement for the purpose of gastric decompression and in those environments where a roentgenogram is not available. Keywords— manometric technique, nasogastric tube placement, mechanically ventilated patients, manometer, accuracy.
I. INTRODUCTION Placement of a nasogastric tube (NGT) is a common procedure in the intensive care unit and operating room for reasons of gastric decompression, nutrition, and drug administration. However, verification of correct NGT
placement is sometimes difficult in intubated patients [1]. Malposition of an NGT may lead to lethal problems, such as aspiration pneumonia. Many current methods to verify NGT placement have been reported to be unreliable [2]. Accordingly, the development of cost-effective and reliable techniques to confirm NGT placement is imperative in the present healthcare environment. The purpose of this study was to validate the efficacy of a manometric technique for verifying NGT placement in intubated, mechanically ventilated patients. The accuracy of the manometric technique in verifying NGT position was compared with the standard findings of fiberscopic examination.
II. MATERIALS AND METHODS

This study was approved by the Research Ethics Committee of E-DA Hospital. Written informed consent was obtained from each patient prior to all procedures. The study was conducted in the operating room of E-DA Hospital. The study participants comprised 100 consecutive adult patients (age > 18 y) who required general anesthesia, endotracheal intubation, and NGT placement prior to undergoing elective open abdominal surgery. Patients were excluded if they had evidence of nasal, pharyngeal, esophageal, or gastric disease; extreme hemodynamic instability; morbid obesity (body mass index > 40 kg·m-2); or any contraindication to nasal insertion of an NGT. A prospective descriptive trial was performed. After the patients were anesthetized and intubated, a fully lubricated 16 Fr. NGT (Pacific Hospital Supply Co. Ltd., Miaoli, Taiwan) was inserted by nurse anesthetists to reach a predetermined depth according to the nose-ear-xiphoid method [3]. Two blinded investigators then verified the NGT location: one using the manometric technique and the other using fiberscopic examination. The manometric technique involved using a cuff pressure manometer (Mallinckrodt Medical GmbH, Hennef, Germany) attached to the proximal end of the NGT. The proximal end of a disposable suction catheter (Sigma Medical Supplies Co, Taipei, Taiwan) was used as an adaptor to connect the manometer to the NGT (Fig. 1). After ensuring
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 262–265, 2011. www.springerlink.com
no air leaks from the manometric device, the first observer applied the manometric technique to verify NGT position according to the following protocol (Fig. 2):

1. The air pump was first inflated to clear secretions and rule out any obstruction of the tube that may interfere with pressure transmission. Upward pressure while inflating the air pump indicated cases of tube obstruction or intraesophageal placement. When an NGT was placed in the stomach, the changes in pressure upon air inflation were insignificant (< 2 cmH2O) due to the high compliance of the abdominal compartment (Fig. 2A).

2. The baseline pressure readings on the manometer were obtained. If the pressure reading was zero (equal to the atmospheric pressure), the location of the tube was thought to be intraoral. If the pressure reading was above zero, either an intragastric, intraesophageal, or intratracheal placement was presumed. If the pressure readings showed synchronous changes of pressure along with mechanical ventilation, this finding was additionally recorded.

3. Pressure changes under gentle epigastric palpation were observed. If pressure swings under epigastric palpation were noted, an intragastric placement was considered (Fig. 2B); otherwise, an extragastric placement was considered.

4. Finally, an intragastric placement was interpreted only in the event of insignificant changes in pressure while inflating the air pump, positive baseline pressure readings, and presence of pressure swings during epigastric palpations; if any one of the above criteria was not fulfilled, an extragastric placement was interpreted.
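The interpretation rule in the protocol above can be written as a small decision function; this sketch (Python, hypothetical names) encodes the three criteria whose conjunction defines an intragastric reading.

```python
def interpret_placement(inflation_rise_cmh2o, baseline_cmh2o, palpation_swings):
    """Decision rule of the manometric protocol (steps 1-4), as a sketch.

    inflation_rise_cmh2o: pressure rise seen while inflating the air pump
    baseline_cmh2o: baseline manometer reading (0 = atmospheric pressure)
    palpation_swings: True if pressure swings appear on epigastric palpation
    """
    insignificant_rise = inflation_rise_cmh2o < 2.0   # step 1: < 2 cmH2O
    positive_baseline = baseline_cmh2o > 0.0          # step 2: above zero
    # steps 3-4: all three criteria must hold for an intragastric reading
    if insignificant_rise and positive_baseline and palpation_swings:
        return "intragastric"
    return "extragastric"
```

For instance, a baseline of 8 cmH2O with an insignificant inflation rise and palpation swings (the situation in Fig. 2) is interpreted as intragastric, while a zero baseline forces an extragastric interpretation regardless of the other findings.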
Fig. 2 The manometric technique for verification of nasogastric tube placement (baseline pressure reading: 8 cmH2O). (A) Air inflation test: the intragastric placement of a nasogastric tube shows insignificant changes in pressure (< 2 cmH2O) upon air inflation (arrow). (B) Epigastric palpation test: pressure swings during manual compression of the upper abdomen (arrow) are observed when a nasogastric tube is placed in the stomach

After the first observer had finished the verification of NGT placement using the manometric technique, the second observer entered the operating room and used a fiberscope (Olympus LF-TP tracheal intubation fiberscope; Olympus Optical Co. Ltd., Tokyo, Japan) to confirm NGT placement. The fiberscope was inserted through the patient's mouth to verify the NGT position, which was noted as intragastric, intraoral, intratracheal, or intraesophageal. The findings of the manometric technique and the fiberscopic examination were recorded in every trial. On the basis of the fiberscopic findings, the NGT was further advanced to reach the stomach if the tube was placed in the esophagus, or was reinserted if the tube was in the mouth or tracheobronchial tree. The procedures conducted following the fiberscopic examination were not included in the data analysis. Each intragastric NGT placement was finally confirmed by direct palpation of the tube during open abdominal surgery. The primary outcome measurements in this study were the sensitivity and specificity of the manometric technique for verifying NGT placement. Additionally, adequacy of NGT drainage and occurrences of nasal, oropharyngeal, or esophageal trauma were recorded in the post-anesthesia care unit. Characteristics of patients and specific events are expressed as mean ± SD (range) unless otherwise noted. The sensitivity and specificity of the manometric technique for verifying NGT placement were calculated according to the standard findings of the fiberscopic examination.
SPSS 13.0 software (SPSS Inc., Chicago, IL) was used for data analysis.

Fig. 1 Setting of the manometric technique with a cuff pressure manometer and a nasogastric tube. Before each measurement, the nasogastric tube must be occluded while inflating the air pump (arrow) to ensure no air leak from the manometric instrument

Table 1 Characteristics of patients and specific events (n = 100)

Sex (male/female)                    71/29
Age (yr)                             59 ± 14 (28-84)
Weight (kg)                          64 ± 10 (38-95)
Height (cm)                          162 ± 9 (134-179)
Body mass index (kg·m⁻²)             24 ± 3 (17-36)
Success rate on the first attempt    81 (81%)

Values are expressed as mean ± SD (range) or number (percent).
IFMBE Proceedings Vol. 35
H.S. Chen et al.
Table 2 The results of manometry and fiberscopy for verification of nasogastric tube placement

                                              Verification by manometry (n = 100)    Verification by fiberscopy (n = 100)
Findings of the manometric technique          Intragastric    Extragastric           Intragastric   Intraoral   Intratracheal   Intraesophageal
                                              (n = 81)        (n = 19)               (n = 81)       (n = 12)    (n = 3)         (n = 4)

Upward pressure while inflating the air pump
  Presence                                    0               13                     0              9           0               4
  Absence                                     81              6                      81             3           3               0
Baseline pressure
  Zero                                        0               12                     0              12          0               0
  Above zero                                  81              7                      81             0           3               4
Synchronous pressure change with mechanical ventilation
  Presence                                    0               3                      0              0           3               0
  Absence                                     81              16                     81             12          0               4
Pressure swings during epigastric palpation
  Presence                                    81              1                      81             0           0               1
  Absence                                     0               18                     0              12          3               3
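Taking fiberscopy as the reference standard, the counts in Table 2 reduce to a confusion matrix: 81 intragastric placements correctly identified, 19 extragastric placements correctly identified, and no misclassifications. A quick check of the resulting sensitivity and specificity (helper name hypothetical):

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts from Table 2: manometry versus the fiberscopic reference standard.
sensitivity, specificity = sens_spec(tp=81, fn=0, tn=19, fp=0)
print(sensitivity, specificity)  # 1.0 1.0
```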
III. RESULTS

One hundred patients met the eligibility criteria and were enrolled in this study. The trial protocol was completed in all patients; thus, a total of 100 NGT placements were analyzed. Characteristics of patients and specific events are summarized in Table 1. Intragastric placement of the NGT at first attempt was successful in 81 of the 100 patients. The results of the manometric technique and fiberscopic examination for verifying NGT placement are shown in Table 2. All 81 intragastric placements interpreted by the manometric technique were confirmed by fiberscopic examination, indicating 100% sensitivity and no false negatives. The remaining 19 placements, which did not fulfill the criteria for intragastric placement, were verified by fiberscopic examination as being in the oral cavity, trachea, or esophagus. The manometric technique therefore produced no false positives and was 100% specific. These results indicate 100% accuracy of the manometric technique for verifying intragastric NGT placement in intubated, mechanically ventilated patients. In the 19 patients in whom NGT placement failed at the first attempt, successful intragastric placement was accomplished within 3 subsequent attempts; two of these placements required the use of Magill forceps under direct laryngoscopy. Correct placement of the NGT was confirmed by direct palpation during surgery in all patients. NGT drainage was adequate in all patients, and none showed significant nasal, oropharyngeal, or esophageal complications in the post-anesthesia care unit.

IV. DISCUSSION
In this prospective descriptive study, we found that the manometric technique is highly accurate for verifying intragastric placement of NGTs in intubated, mechanically ventilated patients. The criteria used to verify intragastric NGT placement were unequivocal and based on physical characteristics of the upper gastrointestinal tract. The positive baseline pressure readings seen with intragastric NGT placement derive mainly from intrinsic intra-abdominal pressure and positive pressure ventilation [4]. Although both intragastric and intraesophageal placements show positive baseline pressure readings, their opposite pressure responses to air inflation allow operators to easily differentiate the two. The high compliance of the abdominal compartment accounts for the insignificant pressure changes observed upon inflation of a small amount of air when the NGT is correctly placed in the stomach. In contrast, the small diameter and lower compliance of the esophageal lumen lead to a build-up of upward pressure upon air inflation when the NGT lies in the esophagus. Furthermore, manual compression of the upper abdomen increases intragastric pressure, so pressure swings are observed in cases of intragastric placement. When all the criteria listed above are fulfilled, intragastric placement of an NGT can be accurately verified. Auscultation for an audible gurgle during air inflation and visual inspection of tube aspirates are common methods of evaluating NGT placement in many institutions. While these methods are convenient and less costly, they have been
Application of a Manometric Technique to Verify Nasogastric Tube Placement in Intubated, Mechanically Ventilated Patients
reported unreliable for confirmation of NGT placement [2]. Auscultation over the stomach can detect sounds transmitted through a tube that is inadvertently located in the bronchial tree, esophagus, or pharynx. Aspiration of gastric content is not always possible even when the tube is correctly positioned in the stomach. Capnometry is another bedside method that can quickly and accurately detect intratracheal feeding tube placement [5]; however, it cannot distinguish between placement in the mouth, esophagus, or stomach, and it may give incorrect results if the tube lumen is not fully patent. The cuff pressure manometer used in this study was originally designed to check the cuff pressure of endotracheal tubes and is widely available in the intensive care unit and operating room. The manometer can be connected to a disposable suction catheter and then used to verify correct NGT placement with very high accuracy; the only cost of this technique is an existing manometer and a disposable suction tube. Use of the manometric technique to verify NGT placement could be considered for gastric decompression and in cases where a roentgenogram is not available or not deemed necessary. However, if an NGT is placed for administration of medications or feedings, we still recommend radiographic confirmation of correct positioning prior to use, since the added money and time protect patient safety [1]. A potential advantage of the manometric technique is that it can not only verify intragastric NGT placement but may also identify the tube's actual position. When an NGT remains in the oral cavity, the pressure reading is zero (equal to atmospheric pressure). Once the NGT enters the esophagus, positive pressure readings and upward pressure upon air pump inflation are noted.
If the tube is misplaced into the trachea, it can be easily recognized by observing pressure changes synchronous with mechanical ventilation. Therefore, if operators continuously observe the pressure readings during NGT insertion, they can instantly identify the position of the NGT and adjust its advancement until correct intragastric placement is achieved. Manometric guidance may thus change NGT insertion from a blind maneuver into a perceivable procedure and potentially improve the quality of NGT placement. Further studies are required to validate these clinical findings. This study has several limitations. First, the manometric technique cannot verify whether an NGT is placed in the stomach or in the more distal gastrointestinal tract, a critical consideration for enteral nutrition. In this study, the desired insertion depth was determined so that the NGT just reached the stomach. Therefore, when a nasointestinal tube is placed for administering jejunal feedings, the manometric technique may not be useful for verifying correct placement. Second,
this study did not validate the influence of negative pressure on pressure readings in spontaneously breathing patients. The negative pulmonary pressure elicited by inspiration may affect the obtained pressure readings. Third, small-bore nasoenteral tubes, whose placement verification was considered more difficult due to their small lumen and flexibility, were not evaluated in this study. Accordingly, the results of this study may not be extrapolated to these types of feeding tube until further studies have been conducted.
V. CONCLUSIONS

This study has demonstrated that the manometric technique using a cuff pressure manometer can accurately verify NGT placement in intubated, mechanically ventilated patients. For gastric decompression, or in settings where roentgenograms are not available, the manometric technique is a convenient, inexpensive, and highly accurate method of verifying NGT placement. It may have the potential to reduce the complications of NGT placement and improve patient safety.
ACKNOWLEDGMENT

The authors are grateful to the nurse anesthetists in E-DA Hospital for their assistance in nasogastric tube placement in this study.
REFERENCES

1. Carey TS, Holcombe BJ (1991) Endotracheal intubation as a risk factor for complications of nasoenteric tube insertion. Crit Care Med 19:427-429
2. Seguin P, Le Bouquin V, Aguillon D et al. (2005) Testing nasogastric tube placement: evaluation of three different methods in intensive care unit. Ann Fr Anesth Reanim 24:594-599
3. Hanson RL (1979) Predictive criteria for length of nasogastric tube insertion for tube feeding. JPEN J Parenter Enteral Nutr 3:160-163
4. Turnbull D, Webber S, Hamnegard CH et al. (2007) Intra-abdominal pressure measurement: validation of intragastric pressure as a measure of intra-abdominal pressure. Br J Anaesth 98:628-634
5. Araujo-Preza CE, Melhado ME, Gutierrez FJ et al. (2002) Use of capnometry to verify feeding tube placement. Crit Care Med 30:2255-2259
Author: Hung-Shu Chen
Institute: Institute of Biomedical Engineering, National Cheng Kung University / Department of Anesthesiology, E-DA Hospital, I-Shou University
Street: No. 1, University Road
City: Tainan City
Country: Taiwan (R.O.C.)
Email: [email protected]
Assessment of Diabetics with Various Degrees of Autonomic Neuropathy Based on Cross-Approximate Entropy

C.C. Chiu1, S.J. Yeh2, and T.Y. Li1

1 Department of Automatic Control Engineering, Feng Chia University, Taichung, Taiwan, R.O.C.
2 Section of Neurology and Neurophysiology, Cheng-Ching General Hospital, Taichung, Taiwan, R.O.C.
Abstract— Noninvasive assessment of the severity of autonomic neuropathy in diabetic patients is an important theme for discussion. In this study, we investigate the feasibility of using the nonlinear cross-approximate entropy (Co-ApEn) between mean arterial blood pressure (MABP) and mean cerebral blood flow velocity (MCBFV) as a signature for assessing diabetics with various degrees of autonomic neuropathy. To this end, 69 subjects were recruited: 11 healthy adults (normal subjects), 15 diabetics with no autonomic neuropathy symptoms, 25 diabetics with mild autonomic neuropathy symptoms, and 18 diabetics with severe autonomic neuropathy symptoms. For each subject, continuous CBFV was measured using transcranial Doppler ultrasound (TCD, EME TC2020), and continuous ABP was recorded using a Finapres (Ohmeda 2300) device in the supine position. Co-ApEn was calculated from MABP and MCBFV series 200 data points in length; each series was normalized by subtracting its mean and dividing by its standard deviation. ANOVA and the Scheffe method were applied for statistical analysis between the four groups. The four groups show statistically significant differences; in particular, there is statistical significance between normal subjects and diabetics with autonomic neuropathy. These results indicate that Co-ApEn analysis of the similarity between MABP and MCBFV can serve as a noninvasive preliminary screening test for diabetics with or without neuropathy. Keywords— Diabetics, autonomic neuropathy, nonlinear, cross-approximate entropy (Co-ApEn).
I. INTRODUCTION

The cerebral autoregulation (CA) mechanism refers to the tendency of cerebral blood flow (CBF) to remain relatively constant in the brain despite changes in mean arterial blood pressure (MABP) over the interval of 50-170 mmHg [1]. Over the last decade, considerable advances have been made in the safety and accessibility of non-invasive equipment. A technique using transcranial Doppler (TCD) was introduced to evaluate the dynamic response of CA in humans [2]. Rapid drops in arterial blood pressure (ABP) caused by the release of thigh blood pressure cuffs were
used as an autoregulatory stimulus, and the ABP and CBF velocity (CBFV) were compared during the autoregulatory process. The use of CBFV instead of CBF was questioned at that time, but investigators using the same paradigm [3] validated this approach: they demonstrated that relative changes in CBFV correlated extremely closely with relative changes in CBF during autoregulation testing. ABP can also be acquired non-invasively using a finger cuff device (Finapres BP monitor); a high-speed servo system in the Finapres inflates and deflates the cuff rapidly to keep the photoplethysmographic output constant at the unloaded state. Although the autoregulatory curve constructed from ABP and CBFV can serve as a model of whether regulation is normal or impaired in humans, CA remains more a concept than a physically measurable entity [4]. Noninvasive CA assessment has been developed and studied using either static or dynamic methods [5]. Most tests require the introduction of variations in ABP using traditional physiologic or pharmacological manipulation; it is a challenge to find methods that assess CA noninvasively and reliably with simple and acceptable procedures. Recent investigations have shown that the autoregulatory dynamic response can be identified from spontaneous fluctuations in MABP and CBFV [6]. Some investigators assessed the dynamic relationship between spontaneous MABP and CBFV using transfer function analysis in normal subjects [7][8] or in autonomic failure patients [9]. Others used spontaneous blood pressure changes as input signals to test CA [10][11]. Spectral and transfer function analyses of CBFV and ABP were performed using the fast Fourier transform (FFT) in their experiments; however, stationarity and time resolution are two critical problems for spectral analysis.
Another study explored spontaneous beat-to-beat fluctuations in MABP and breath-by-breath variability in end-tidal CO2 (EtCO2) in continuous recordings obtained from normal subjects at rest, to estimate the dynamic influences of arterial blood pressure and CO2 on CBFV [12]. In the present work, chaotic analysis is implemented to assess dynamic CA in diabetics by a nonlinear measure, cross-approximate entropy
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 266–269, 2011. www.springerlink.com
(Co-ApEn). Co-ApEn focuses on the similarity between MABP and MCBFV. Because the autonomic nervous system is impaired in diabetics, vasomotor control of peripheral and cerebral vessels is affected and baroreflex sensitivity is reduced. By quantifying CA with parameters of chaotic analysis, we expect a normal autoregulation system to show lower similarity between MABP and MCBFV, and an impaired autoregulation system to show higher similarity.
II. MATERIALS AND METHODS

A. Subjects and Measurements

Four groups of subjects were recruited in this study: 11 healthy adults as normal subjects (group 1, 3 men, 8 women) with a mean age of 56.4±8.6 years, 15 diabetics without autonomic neuropathy (group 2) with a mean age of 52.5±16.27 years, 25 diabetics with mild autonomic neuropathy (group 3, 15 men, 10 women) with a mean age of 67.5±8.8 years, and 18 diabetics with severe autonomic neuropathy (group 4, 12 men, 6 women) with a mean age of 61.6±10.9 years. Subjects in the healthy group were included only if they had no history of vascular disease, heart problems, hypertension, migraine, epilepsy, cerebral aneurysm, intra-cerebral bleeding, or other pre-existing neurological conditions. None of the subjects were receiving any medication during the time of the study. Abnormalities in autonomic function were determined using Low's non-invasive techniques for the assessment of autonomic function [13,14], with a minor modification replacing the sudomotor examination with the sympathetic skin response (SSR) test. Low's criteria incorporate a composite score (maximum score 10) that gives an overall indication of the severity of autonomic abnormalities [14]. The three components that make up the composite score are the adrenergic index (AI, an indicator of sympathetic function), maximum score 4; the cardiovascular heart rate index (CVHRI, an indicator of parasympathetic function), maximum score 3; and the sudomotor index, maximum score 3. In our study, a patient with a composite score of 0-3 is classified as without autonomic failure, 3-6 as mild autonomic failure, and 6-10 as severe autonomic failure. CBFV was measured in the right middle cerebral artery using TCD (transcranial Doppler ultrasound, EME TC2020) in conjunction with a 5-MHz transducer fixed over the temporal bone by an elastic headband. Continuous ABP recordings were obtained with the Finapres (Ohmeda 2300) device, with the cuff attached to the middle finger of the right hand. Data acquisition was started after a 10-min relaxation in the supine position. Spontaneous ABP and CBFV were recorded simultaneously to a PC for off-line analysis. The acquisition periods were approximately 5 minutes in both supine and tilt-up positions, using a custom-developed data acquisition system. A personal-computer system combining a general-purpose data acquisition board with the LabVIEW environment, developed in our previous study [15], was used to acquire the signals correctly. The sampling rate for the analog data from the TCD and Finapres is adjustable in this system; in our experiments, it was set to 60 Hz.

B. Preprocessing

The Finapres device was fully automated. The blood volume under an inflatable finger cuff was measured with an infrared plethysmograph and kept constant at a set-point value by controlling the cuff pressure in response to volume changes in the finger artery. Using a built-in servo adjustment mechanism, a proper volume-clamp set point was established and adjusted at regular intervals. However, this procedure interrupted the blood pressure recording (usually for 2-3 beats every 70 beats). Because both ABP and CBFV signals were simultaneously acquired, displayed, and stored on a PC, we called the artifacts caused by the regular servo adjustment in the continuously acquired pulse signal "servo components". Direct removal of the servo artifacts from the ABP was not appropriate because it would result in a different time duration compared with the CBFV signal. Therefore, a signal relaxation algorithm was presented [16] to compensate for the artifacts caused by the servo components. An example of servo component signal relaxation is illustrated in Fig. 1. After extracting the servo component Si and identifying the Pi and Pk waveforms (i.e., the complete front and rear pulse beat waveforms closest to Si), the Si segment is substituted by a proper number of Pi's from the front part of Si and Pk's from the rear part of Si, alternately. In the final stage, the residual duration of the Si segment, which is not long enough to be substituted by either Pi or Pk, is interpolated linearly in order to maintain the signal duration of the original Si segment.
Fig. 1 An example of a continuously acquired ABP signal from Finapres with the servo artifact Si; Pi and Pk represent the complete front and rear pulse beat waveforms closest to Si, respectively
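The substitution stage of the relaxation algorithm can be sketched as follows. This is a sketch under assumptions (pure-Python lists, the servo segment given as an index range, the neighboring complete beats given as sample lists, non-empty beats; all names are hypothetical):

```python
def relax_servo_segment(signal, s_start, s_end, p_front, p_rear):
    """Replace the servo segment signal[s_start:s_end] by alternating copies
    of the nearest complete front (p_front) and rear (p_rear) pulse beats,
    linearly interpolating the residual samples so the total duration of the
    original segment is preserved.
    """
    out = list(signal)
    gap = s_end - s_start
    beats = [p_front, p_rear]
    fill, i = [], 0
    # Alternate whole-beat copies while a full beat still fits in the gap.
    while len(fill) + len(beats[i % 2]) <= gap:
        fill.extend(beats[i % 2])
        i += 1
    # Linearly interpolate the residual duration toward the next real sample.
    residual = gap - len(fill)
    if residual > 0:
        start_val = fill[-1] if fill else signal[s_start - 1]
        end_val = signal[s_end] if s_end < len(signal) else start_val
        for k in range(1, residual + 1):
            fill.append(start_val + (end_val - start_val) * k / (residual + 1))
    out[s_start:s_end] = fill
    return out
```

The key design point, matching the text, is that the patched record keeps exactly the same number of samples as the original, so the ABP series stays time-aligned with the simultaneously acquired CBFV series.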
The mean ABP value was calculated for each heart beat as follows:

    MABP_i = \frac{1}{V_i - V_{i-1}} \sum_{k=V_{i-1}}^{V_i - 1} x(k)        (1)

where x(·) is the ABP pulse signal acquired continuously from the Finapres analog output port, V_{i-1} is the time index of the wave-trough in the (i-1)th pulse beat, and V_i is the time index of the wave-trough in the ith pulse beat. MABP_i is thus the calculated mean ABP value for the ith pulse beat. Similarly, the mean CBFV value was calculated for each heart beat as follows:

    MCBFV_i = \frac{1}{D_i - D_{i-1}} \sum_{k=D_{i-1}}^{D_i - 1} y(k)        (2)

where y(·) is the CBFV signal continuously acquired from the analog output port of the TCD, D_{i-1} is the time index of the wave-trough in the CBFV signal corresponding to the (i-1)th pulse beat, and D_i is the time index of the wave-trough in the CBFV signal corresponding to the ith pulse beat. MCBFV_i is the mean CBFV value for the ith pulse beat. The wave-peak and wave-trough of each cardiac cycle can be marked in the ABP and CBFV signals using the approach proposed in the previous study [15]. Afterward, the MABP and MCBFV time series calculated using Equations (1) and (2) are placed at regular intervals equal to their mean heart period.

C. Cross-Approximate Entropy (Co-ApEn)

Cross-approximate entropy (Co-ApEn) is a measure of the pattern similarity between two time series, regardless of the distribution of the signal patterns [17]. A low Co-ApEn value indicates that the two signals are similar; a high value indicates that they are not. Co-ApEn can be estimated as follows. First construct two sets of m-vectors:

    X(i) = [x(i), …, x(i+m-1)],  i = 1, …, N-m+1
    Y(j) = [y(j), …, y(j+m-1)],  j = 1, …, N-m+1        (3)

where x(i) and y(j) are the two time series signals, N is the data length, and m is the vector dimension. The distance between the vectors X(i) and Y(j) is defined as the maximum absolute difference between their corresponding elements:

    d[X(i), Y(j)] = \max_{k=0,…,m-1} |x(i+k) - y(j+k)|        (4)

For a given X(i), let N^m_xy(i) be the number of vectors Y(j) whose distance (4) is ≤ r, and take the ratio of this number to the total number of m-vectors (N-m+1):

    C^m_xy(i) = N^m_xy(i) / (N-m+1)        (5)

Then take the logarithm of (5) and average it over i:

    \varphi^m_xy(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} \ln C^m_xy(i)        (6)

Increase m by 1 and repeat (4)-(6) to find C^{m+1}_xy(i) and \varphi^{m+1}_xy(r). Co-ApEn is then the difference between \varphi^m_xy(r) and \varphi^{m+1}_xy(r):

    Co-ApEn(m, r, N) = \varphi^m_xy(r) - \varphi^{m+1}_xy(r)        (7)
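A direct transcription of Eqs. (3)-(7) in Python may make the procedure concrete. This is a sketch (pure Python, no dependencies); the paper's normalization, subtracting the mean and dividing by the standard deviation, is applied first, and the default m is an assumption since the text states only r:

```python
import math

def normalize(series):
    """Zero-mean, unit-SD normalization, as used in the paper."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    return [(v - mean) / sd for v in series]

def co_apen(x, y, m=2, r=0.1):
    """Cross-approximate entropy of two equal-length series, Eqs. (3)-(7).

    Note: ln C is undefined if some template X(i) matches no Y(j) within
    tolerance r; this sketch assumes the series are similar enough that
    every template has at least one match.
    """
    assert len(x) == len(y)
    x, y = normalize(x), normalize(y)
    N = len(x)

    def phi(m):
        X = [x[i:i + m] for i in range(N - m + 1)]   # m-vectors, Eq. (3)
        Y = [y[j:j + m] for j in range(N - m + 1)]
        total = 0.0
        for Xi in X:
            # Eqs. (4)-(5): fraction of Y(j) within Chebyshev distance r
            count = sum(1 for Yj in Y
                        if max(abs(a - b) for a, b in zip(Xi, Yj)) <= r)
            total += math.log(count / (N - m + 1))   # Eq. (6)
        return total / (N - m + 1)

    return phi(m) - phi(m + 1)                        # Eq. (7)
```

With r = 0.1 and 200-point MABP/MCBFV series as in the paper, lower values indicate higher similarity between the two signals.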
III. RESULTS AND DISCUSSION

In this study, cross-approximate entropy (Co-ApEn) analysis is applied. Co-ApEn is calculated from MABP and MCBFV series 200 data points in length, with r set to 0.1. Before Co-ApEn is calculated, each signal is normalized by subtracting its mean and dividing by its standard deviation. The results of the Co-ApEn analysis between MABP and MCBFV are listed in Table 1. They are consistent with the expectation that normal subjects show low similarity between MABP and MCBFV, while patients with autonomic neuropathy show higher similarity.

Table 1 The Co-ApEn analysis of similarity between the MABP and MCBFV results

Subjects       n     Mean     SD
Normal         11    0.5520   0.08006
Without DAN    15    0.4852   0.06363
Mild DAN       25    0.4627   0.09283
Severe DAN     18    0.4549   0.06920
ANOVA among these four groups is applied for the statistical analysis. The ANOVA results show statistically significant differences in Co-ApEn values among the four groups. The Scheffe method is then adopted for further statistical analysis of multiple comparisons; the results are listed in Table 2. They show statistical significance between normal subjects and diabetics with autonomic neuropathy, either mild or severe, but no significant difference between diabetics without autonomic neuropathy and those with autonomic neuropathy.
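The one-way ANOVA F statistic can be recomputed from the summary statistics in Table 1 alone (group sizes, means, SDs). A quick check in pure Python; the ≈2.75 critical value quoted in the comment is an approximate tabulated value for F(3, 65) at α = 0.05, not a figure from the paper:

```python
def anova_f_from_summary(groups):
    """One-way ANOVA F from per-group (n, mean, sd) summary statistics."""
    N = sum(n for n, _, _ in groups)
    k = len(groups)
    grand_mean = sum(n * m for n, m, _ in groups) / N
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * sd ** 2 for n, _, sd in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# (n, mean, SD) for Normal, Without DAN, Mild DAN, Severe DAN from Table 1.
groups = [(11, 0.5520, 0.08006), (15, 0.4852, 0.06363),
          (25, 0.4627, 0.09283), (18, 0.4549, 0.06920)]
f = anova_f_from_summary(groups)
print(round(f, 2))  # ≈ 4.04, above the ≈2.75 critical value for F(3, 65)
```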
Table 2 The statistical results of multiple comparisons by the Scheffe method

              Normal    Without DAN   Mild DAN   Severe DAN
Normal        —         0.309         0.041*     0.033*
Without DAN   0.309     —             0.935      0.861
Mild DAN      0.041*    0.935         —          0.999
Severe DAN    0.033*    0.861         0.999      —
Note: * indicates p < 0.05 (significant difference).
IV. CONCLUSION

This study focused on using the nonlinear cross-approximate entropy (Co-ApEn) feature to assess the effects of changes in MABP and MCBFV, without any special maneuvers needed for the measurement. The results showed statistically significant differences in Co-ApEn among the four groups, and in particular statistical significance of Co-ApEn between normal subjects and diabetics with autonomic neuropathy. These results indicate that Co-ApEn analysis of the similarity between MABP and MCBFV can serve as a noninvasive preliminary screening test for diabetics with or without autonomic neuropathy.
ACKNOWLEDGMENT

The authors would like to thank the National Science Council, Taiwan, ROC, for supporting this research under Contract NSC 99-2221-E-035-054-MY3.

REFERENCES

[1] Lassen, N.A. (1959) Cerebral blood flow and oxygen consumption in man. Physiological Reviews 39:183-238.
[2] Aaslid, R., Lindegaard, K.F., Sorteberg, W., Nornes, H. (1989) Cerebral autoregulation dynamics in humans. Stroke 20:45-52.
[3] Newell, D.W., Aaslid, R., Lam, A., Mayberg, T.S., Winn, H.R. (1994) Comparison of flow and velocity during dynamic autoregulation testing in humans. Stroke 25:793-797.
[4] Panerai, R.B. (1998) Assessment of cerebral pressure autoregulation in humans - a review of measurement methods. Physiological Measurement 19:305-338.
[5] Tiecks, F.P., Lam, A.M., Aaslid, R., Newell, D.W. (1995) Comparison of static and dynamic cerebral autoregulation measurements. Stroke 26:1014-1019.
[6] van Beek, A.H., Claassen, J.A., Rikkert, M.G., Jansen, R.W. (2008) Cerebral autoregulation: an overview of current concepts and methodology with special focus on the elderly. Journal of Cerebral Blood Flow & Metabolism 28:1071-1085.
[7] Diehl, R.R., Linden, D., Lücke, D., Berlit, P. (1998) Spontaneous blood pressure oscillations and cerebral autoregulation. Clinical Autonomic Research 8:7-12.
[8] Zhang, R., Zuckerman, J.H., Giller, C.A., Levine, B.D. (1998) Transfer function analysis of dynamic cerebral autoregulation in humans. American Journal of Physiology 274:H233-241.
[9] Blaber, A.P., Bondar, R.L., Stein, F., Dunphy, P.T., Moradshahi, P., Kassam, M.S., Freeman, R. (1997) Transfer function analysis of cerebral autoregulation dynamics in autonomic failure patients. Stroke 28:1686-1692.
[10] Kuo, T.B.J., Chern, C.M., Sheng, W.Y., Wong, W.J., Hu, H.H. (1998) Frequency domain analysis of cerebral blood flow velocity and its correlation with arterial blood pressure. Journal of Cerebral Blood Flow and Metabolism 18:311-318.
[11] Chern, C.M., Kuo, T.B., Sheng, W.Y., Wong, W.J., Luk, Y.O., Hsu, L.C., Hu, H.H. (1999) Spectral analysis of arterial blood pressure and cerebral blood flow velocity during supine rest and orthostasis. Journal of Cerebral Blood Flow & Metabolism 19:1136-1141.
[12] Panerai, R.B., Simpson, D.M., Deverson, S.T., Mathony, P., Hayes, P., Evans, D.H. (2000) Multivariate dynamic analysis of cerebral blood flow regulation in humans. IEEE Transactions on Biomedical Engineering 47:419-423.
[13] Low, P. (1993) Autonomic nervous system function. Journal of Clinical Neurophysiology 10:14-27.
[14] Low, P. (1993) Composite autonomic scoring scale for laboratory quantification of generalized autonomic failure. Mayo Clinic Proceedings 68:748-752.
[15] Chiu, C.C., Yeh, S.J., Lin, R.C. (1995) Data acquisition and validation analysis for Finapres signals. Chinese Journal of Medical and Biological Engineering 15:47-58.
[16] Chiu, C.C., Yeh, S.J., Liau, B.Y. (2005) Assessment of cerebral autoregulation dynamics in diabetics using time-domain cross-correlation analysis. Journal of Medical and Biological Engineering 25:53-59.
[17] Akay, M. (ed.) (2001) Nonlinear Biomedical Signal Processing, Volume II: Dynamic Analysis and Modeling, pp. 73-91. Institute of Electrical and Electronics Engineers, Inc., New York.
Author: Chuang-Chien Chiu
Institute: Feng Chia University
Street: No. 100, Wenhwa Rd.
City: Taichung
Country: Taiwan, R.O.C.
Email: [email protected]
Automated Diagnosis of Melanoma Based on Nonlinear Complexity Features

N. Karami and A. Esteki

Department of Biomedical Engineering, Shahid Beheshti University (Medical Campus), Tehran, Iran
Abstract— In recent years, several diagnostic methods have been proposed aiming at early detection of malignant melanoma, which is among the most frequent types of skin cancer. In this paper we discuss a new approach based on complexity analysis for the classification of pigmented skin lesions using dermatological images. Features that describe the structure and color of lesions and show high discriminative power are extracted using Approximate Entropy and Sample Entropy, and these features are used to construct a classification module based on support vector machines (SVM) for distinguishing malignant melanoma from benign nevus. Experimental results showed that the combination of the proposed nonlinear features led to a classification accuracy of 91.3%. Keywords— Melanoma, Nonlinear feature extraction, complexity, Support Vector Machine.
I. INTRODUCTION AND OBJECTIVES

Malignant melanoma is among the most frequent types of skin cancer, and unfortunately its incidence is increasing worldwide. Although much research has been done in the field of melanoma biology and treatment, a tumor diagnosed at an advanced stage is practically incurable [1]; diagnosed at an early stage, however, it can be cured without complications. The early diagnosis of malignant melanoma is therefore a crucial issue for dermatologists [2]. To improve the rate of accurate melanoma diagnosis, it is important to develop efficient schemes for clinical diagnosis and to support dermatologists with computer-aided diagnosis systems. A significant number of studies have shown that quantification of lesion features can be of essential importance in clinical practice, because lesions can be identified from measurable features extracted from their images [3], [4], [5], [6]. Color variation and the diversity of lesion structure are two characteristics upon which a dermatologist can categorize a lesion as malignant melanoma (Fig. 1) or benign nevus (Fig. 2); for this reason, different methods have been proposed for quantifying these features. For instance, the average intensity, hue, and saturation of the pixels within a lesion are used to represent its color features, and the structure of a lesion is judged from its dissimilarity and angular second moment parameters. These two parameters
are obtained from the gray level cooccurrence matrix (GLCM) of the lesion image. This study is a continuation of work done in our department in 2009 [7]. In this paper we propose a new approach for extracting complexity features of a lesion. There are different techniques for determining the complexity of a dataset. We have used two general measures: Approximate Entropy (ApEn) and Sample Entropy, which differs from ApEn in that its calculation does not involve counting self-matches for each pattern sequence [8]. ApEn and SampEn have been used as efficient measures in biosignal and image processing [9] [10] [11] [12], and their application to the detection of melanoma is the subject of our investigation. The features extracted using ApEn, and SampEn when needed, are used as input data to a support vector machine (SVM) for classification. Using proper features and applying an SVM kernel led to relatively promising results. In addition, the inputs to our method are simple clinical images. This fact not only differentiates our technique from similar works [13] (which use fluorescence images), but also makes it an inexpensive system for lesion detection.
Fig. 1 Malignant melanoma (Courtesy of www.dermnet.com)
Fig. 2 Benign nevus (Courtesy of www.dermnet.com)
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 270–274, 2011. www.springerlink.com
Automated Diagnosis of Melanoma Based on Nonlinear Complexity Features
II. MATERIALS AND METHODS

A. Complexity Measures
Given a finite sequence XN = [x1, x2, x3, …, xN] and proper values of the parameters m and r, ApEn and SampEn are computed as in references [9] and [11] respectively. m is a positive integer giving the length of the patterns compared with each other; r is a positive real number giving the vector-comparison distance, usually set between 10% and 25% of the standard deviation of the data set (Pincus, 1994). Although the literature is the best guideline for selecting optimum values of m and r, the choice depends on the database to which they are applied. In this study m and r are set to 5 and 0.2 respectively. If we have a two-dimensional spatial series {x(i), y(i)} (i = 1, 2, 3, …, N), its "regularity" can be evaluated in a multidimensional space in which a series of vectors is constructed as follows:
1. Construct m-dimensional vectors of the spatial series consecutively as O(i) = [(x(i), y(i)), …, (x(i+m−1), y(i+m−1))], i = 1, 2, …, N−m+1.
2. The distance between two vectors O(i) and O(j) (j ≠ i) is defined as the maximum difference in their respective corresponding elements [14]:

d[O(i), O(j)] = max_{k=0,…,m−1} max( |x(i+k) − x(j+k)|, |y(i+k) − y(j+k)| )
3. For each vector O(i), a measure describing the similarity between O(i) and all other vectors O(j) (j ≠ i) is defined as

C_i^m(r) = n_i^m(r) / (N − m),

where n_i^m(r) is the number of vectors O(j) (j ≠ i) whose distance from O(i) does not exceed r.
A. Description of Image Data Set

The image data set used in this study is an extraction from two dermatology atlases (the dataset used in Ref 20) published online at http://www.dermis.net (T. L. Diepgen, G. Yihune, et al.) and http://www.dermnet.com. The whole data set consists of 160 images: 80 of them are benign nevi and the rest are malignant melanoma cases. Leave-one-out is used as the cross-validation method: the whole dataset except one image is used for training, and this is repeated over the entire dataset. Alternatively, half of the melanoma images plus half of the nevus images were used as the training set, and the other half of each category formed the test set.

B. Preprocessing

Everything that might corrupt the image, and hence the image-processing results, must be removed. Sometimes the lesion is covered with hairs, and preprocessing is required to remove them and make image analysis more convenient. To remove the hairs from the images, an improved version of DullRazor [15] is used. The hair-removal algorithm performs the following steps: 1) it identifies dark hair locations by a generalized grayscale morphological closing operation; 2) it verifies the shape of the hair as a thin and long structure and replaces the verified pixels using bilinear interpolation; 3) it smooths the replaced hair pixels using the "roifill" function in MATLAB®. To fill a specified region, "roifill" smoothly interpolates inward from the pixel values on the boundary of the region by solving Laplace's equation. It
4. By defining

Φ^m(r) = (N − m + 1)^(−1) · Σ_{i=1}^{N−m+1} ln C_i^m(r),

the finite spatial series consisting of N couples of data points is used to estimate the two-dimensional approximate entropy (AE2D) value of the spatial series, which is defined as

AE2D(m, r) = Φ^m(r) − Φ^(m+1)(r).

The parameter "a", which scales the standard deviation of the data to obtain the tolerance r, is set to 0.2 to correspond to the one-dimensional computations.
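As an illustration, the four AE2D steps above can be sketched in Python. This is illustrative code, not the authors' implementation; the pooled standard deviation of both coordinates for the tolerance r = a·SD and the inclusion of self-matches (as in standard ApEn) are assumptions.

```python
import numpy as np

def apen_2d(x, y, m=2, a=0.2):
    """Two-dimensional approximate entropy (AE2D) of a spatial
    series {x(i), y(i)}, following steps 1-4 in the text."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    N = len(x)
    # Assumed tolerance: a times the pooled standard deviation.
    r = a * np.std(np.concatenate([x, y]))

    def phi(m):
        n_vec = N - m + 1
        # Step 1: embed m consecutive (x, y) pairs into vectors O(i).
        ox = np.array([x[i:i + m] for i in range(n_vec)])
        oy = np.array([y[i:i + m] for i in range(n_vec)])
        counts = np.zeros(n_vec)
        for i in range(n_vec):
            # Step 2: maximum difference over corresponding elements.
            d = np.maximum(np.abs(ox - ox[i]).max(axis=1),
                           np.abs(oy - oy[i]).max(axis=1))
            # Step 3: fraction of vectors within tolerance r
            # (self-match included, as in ApEn).
            counts[i] = np.mean(d <= r)
        # Step 4: average log similarity.
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)
```

A perfectly regular (constant) series yields AE2D = 0, while an irregular series yields a larger value.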
Fig. 3 Top: before hair removal (Courtesy of www.dermnet.com). Bottom: after hair removal
N. Karami and A. Esteki
should be noted that with this method most of the hairs are removed, though traces of hair remain in some cases. Fig. 3 shows an example of hair removal.

C. Segmentation

After limiting the images to the lesion area, we extracted the lesion from the skin so that lesion complexity is computed with the least percentage of noise. Note that, since the computation of ApEn and SampEn does not depend on the DC bias, there is no need for techniques such as relative color features that are used in most image-analysis methods [7], [16]. The problem of segmentation involves separating the skin lesion from the healthy skin, and several attempts at skin lesion segmentation have been reported in the literature [6], [17], [18]. Our approach is thresholding, which is based on the fact that the values of pixels belonging to a skin lesion differ from the values of the background. Image thresholding is performed on the intensity value, so images are first converted to grayscale. By choosing an upper and a lower value it is possible to isolate pixels whose values lie within the desired range; the information for the upper and lower limits can be extracted from the image histogram, where different objects are represented as peaks. After segmentation, the values of all non-lesion pixels were set to zero.
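A minimal sketch of this thresholding step is given below. The limits are hypothetical; in practice they would be read off the image histogram as described above.

```python
import numpy as np

def segment_lesion(gray, lower, upper):
    """Histogram-based threshold segmentation: pixels whose intensity
    lies inside [lower, upper] are kept as lesion; all other pixels
    are set to zero, as described in the text."""
    mask = (gray >= lower) & (gray <= upper)
    segmented = np.where(mask, gray, 0)
    return segmented, mask

# Toy example: a dark "lesion" (intensity ~50) on brighter skin (~200).
img = np.full((8, 8), 200, dtype=np.uint8)
img[2:6, 2:6] = 50
seg, mask = segment_lesion(img, lower=0, upper=120)
```

After this step, complexity features are computed only from the non-zero (lesion) pixels.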
In order to assess the significance of each feature for separating the two groups of lesions, we rank the features and then, based on the classification results, decide how many features are actually needed to achieve a desired level of separation. Several classifiers could be used to learn the categories from the produced features. Since the purpose of this study is mainly to show the potential of complexity features extracted from lesion images, the choice of classifier is not a crucial aspect of our work, and we chose to demonstrate our method using a support vector machine (SVM). Support vector classifiers [19] are commonly used because of several attractive properties, such as simplicity of implementation, a small number of free parameters to be tuned, the ability to deal with high-dimensional input data, and good generalization performance on many pattern recognition problems. Support vector classifiers with linear, polynomial and radial basis function (RBF) kernels are trained to classify the test data. To evaluate the performance of a classifier, the following parameters are calculated:

Accuracy = (TP + TN) / (P + N); Sensitivity = TP / (TP + FN); Specificity = TN / (FP + TN),

where P is the total number of positive cases, N the total number of negative cases, TP the true positives, FP the false positives, TN the true negatives, and FN the false negatives.
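The three evaluation measures follow directly from the confusion-matrix counts. The counts in the usage line below are illustrative only, not the study's actual confusion matrix.

```python
def classifier_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity exactly as defined in
    the text (P = TP + FN positives, N = TN + FP negatives)."""
    p, n = tp + fn, tn + fp
    accuracy = (tp + tn) / (p + n)
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    return accuracy, sensitivity, specificity

# Illustrative counts: 73 of 80 melanomas and 73 of 80 nevi correct.
acc, sens, spec = classifier_metrics(tp=73, fp=7, tn=73, fn=7)
```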
D. Feature Extraction

Two main types of features were extracted, both describing the structure of lesions in grayscale and in the RGB layers. The preprocessed lesion image is treated as a matrix of rows and columns. Every column and row is regarded as a signal or spatial series for which ApEn and SampEn are calculated. Then the mean and norm of the ApEn values of all rows and columns, the root mean square of the row ApEn and of the respective column ApEn, AE2D, … are used to create features that describe the texture of the lesion. Two-dimensional ApEn and SampEn are computed for all the images in the database, using the grayscale image matrix and the three RGB layers.

E. Feature Selection and Classification

Classifying the data using all the features does not necessarily yield good results. Moreover, extracting many independent features from a large database carries a high computational cost. As mentioned before, all of our features are computed from the primary computation over rows and columns, and only the addition of a new color-image layer to the features entails more computation.
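A sketch of how such row/column complexity features might be assembled is shown below. This is illustrative code rather than the authors' implementation; only the mean and root-mean-square aggregates mentioned in the text are produced, and the one-dimensional ApEn uses the paper's settings m = 5, r = 0.2·SD with self-matches counted (per Pincus).

```python
import numpy as np

def apen_1d(u, m=5, a=0.2):
    """One-dimensional approximate entropy with m = 5, r = 0.2 * SD."""
    u = np.asarray(u, dtype=float)
    N = len(u)
    r = a * np.std(u)

    def phi(m):
        # Embed m consecutive samples; count templates within r.
        emb = np.array([u[i:i + m] for i in range(N - m + 1)])
        c = [np.mean(np.abs(emb - row).max(axis=1) <= r) for row in emb]
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

def complexity_features(img):
    """Treat every row and column of the (preprocessed) lesion matrix
    as a spatial series and aggregate their ApEn values into texture
    features (mean and root mean square, as mentioned in the text)."""
    rows = np.array([apen_1d(row) for row in img])
    cols = np.array([apen_1d(col) for col in img.T])
    return {
        "row_apen_mean": rows.mean(),
        "col_apen_mean": cols.mean(),
        "row_apen_rms": np.sqrt(np.mean(rows ** 2)),
        "col_apen_rms": np.sqrt(np.mean(cols ** 2)),
    }
```

The same aggregation would be repeated for the grayscale matrix and each RGB layer to build the full feature vector.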
III. RESULTS

The results were computed in two situations. In the first, features extracted from the grayscale image and the red layer of the color image are used. In the second, the green- and blue-layer features are added to improve the classification results. Among the various kernels used for classifying the test set, the linear kernel with a boundary condition of 0.3 produced the best results. For this kernel, the classification accuracy versus the number of features is shown in Fig. 4. Classification accuracy reaches its peak (91.3%) when the first 76 features in the ranking are applied for classification; at this point sensitivity is 91.3% and specificity is 91.3%. If we use only features extracted from the grayscale image and the red layer of the color image, sensitivity and specificity are 91% and 88% respectively, with an accuracy of 89%; in this case just 37 features are used. The results show that the red layer of the image contains the most discriminative features among the three layers. Table 1 lists some of the studies in this domain. We have added our method to the bottom of this table to compare our study with those of others. In this table the main characteristics of the different
Fig. 4 Classification accuracy for the linear kernel with the boundary value of 0.3

Table 1 Comparison between main characteristics of different studies

Source                                    Sample size   Melanoma (%)   Sens. (%)   Spec. (%)
Smith et al., 2000                        60            47             86          94
Farina et al., 2000                       237           28             80          46
Faziloglu et al., 2003                    258           50             84.3        83
Chen et al., 2003                         258           50             89          83
Cheng et al., 2008                        285           56             86          70
Schmid-Saugeon, 2003                      100           50             82          80
Celebi, 2007                              564           15.6           93.3        92.3
Tabatabaee et al., 2009                   160           50             82.5        92.5
Our method (Grayscale and Red layer)      160           50             91          88
Our method (Grayscale and RGB layers)     160           50             91.3        91.3
studies, such as the number of samples, the ratio of melanoma cases to the total number of images, and the sensitivity and specificity of the classification algorithm, are reported. The sample size and the percentage of melanoma cases included in the data set are not the same across techniques, so it does not make sense to draw conclusions just by comparing the sensitivity and specificity of the methods. Considering the sample size and the percentage of melanoma cases used in our study, we have achieved a level of sensitivity and specificity that is more reliable than that of many of the studies presented in Table 1.
IV. CONCLUSIONS The technical achievements of recent years in the areas of image processing allow the improvement of image analysis systems. Such tools may serve as diagnostic adjuncts for dermatologists for the confirmation of a diagnosis; also the
introduction of melanoma diagnosis systems can enhance the quality of medical care, particularly in areas where a specialized dermatologist is not available. In such cases, an expert system may detect the possibility of a serious skin lesion and warn of the need for early treatment. We know from skin cancer research that a single feature is not sufficient to diagnose skin cancer precisely, and that the combination of different criteria is the key to the early detection of malignant melanoma and other types of skin cancer. Combining color and structure features for classification, as done in this research, shows that measuring these features can be a good guide for a diagnostic system classifying skin lesions. In this paper a new scheme for extracting the features of skin lesions from digital images has been presented. The results of the present study show that complexity features constitute a promising tool for extracting discriminative lesion structure features, which can be used together to reach a high level of classification performance.
REFERENCES

1. Hersey P., Advances in the non-surgical treatment of melanoma, Expert Opinion on Investigational Drugs, vol. 11, no. 1, 2002, pp. 75–85
2. Schmid-Saugeon P., Guillod J., Thiran J.P., Towards a computer-aided diagnosis system for pigmented skin lesions, Computerized Medical Imaging and Graphics, vol. 27, 2003, pp. 65–78
3. Green A., Martin N., Pfitzner J., O'Rourke M., Knight N., Computer image analysis in the diagnosis of melanoma, Journal of the American Academy of Dermatology, vol. 31, no. 6, 1994, pp. 958–964
4. Kjoelen A., Thompson M., Umbaugh S., Moss R., Stoecker W., Performance of artificial intelligence methods in automated detection of melanoma, IEEE Engineering in Medicine and Biology, vol. 14, no. 4, 1995, pp. 411–416
5. Ganster H., Pinz P., Rohrer R., Wildling E., Binder M., Kittler H., Automated melanoma recognition, IEEE Transactions on Medical Imaging, vol. 20, no. 3, 2001, pp. 233–239
6. Maglogiannis I., Zafiropoulos E., Kyranoudis C., Intelligent segmentation and classification of pigmented skin lesions in dermatological images, SETN 2006, LNAI 3955, Springer-Verlag Berlin Heidelberg, pp. 214–223
7. Tabatabaee K., Esteki A., Extraction of skin lesion texture features based on independent component analysis, Skin Research and Technology, Wiley, 2009, pp. 433–439
8. Pham T.D., GeoEntropy: a measure of complexity and similarity, Pattern Recognition, vol. 43, 2010, pp. 887–896
9. Pincus S.M., Approximate entropy as a measure of system complexity, Proc. Natl. Acad. Sci. USA, 88:2297–2301, 1991
10. Hu Z., Shi P., Complexity analysis of fMRI time sequences, IEEE ICIP, 2006, pp. 2861–2864
11. Richman J.S., Moorman J.R., Physiological time-series analysis using approximate entropy and sample entropy, Am J Physiol Heart Circ Physiol 278, 2000, pp. H2039–H2049
12. Pincus S.M., Approximate entropy in cardiology, Herzschr Elektrophys, 2000, pp. 11:139–150
13. Stockmeier H.G., Bäumler W., Szeimies R.M., Theis F.J., Lang E.W., Puntonet C.G., Classification of skin lesions by fluorescence diagnosis and independent component analysis, Biomedical Engineering, Austria, 2004, pp. 417-054
14. Wang W., Li Q., Zhao G., Novel approach based on chaotic oscillator for machinery fault diagnosis, Measurement, vol. 41, 2008, pp. 901–911
15. Lee T., Ng V., Gallagher R., Coldman A., McLean D., DullRazor: a software approach to hair removal, Computers in Biology and Medicine, vol. 27, no. 6, 1997, pp. 533–543
16. Cheng Y., Swamisai R., Umbaugh S.E., Moss R.H., Stoecker W.V., Teegala S., Srinivasan S.K., Skin lesion classification using relative color features, Skin Res Technol, 2008, pp. 14:53–64
17. Zhang Z., Stoecker W.V., Moss R.H., Border detection on digitized skin tumor images, IEEE Transactions on Medical Imaging, vol. 19, no. 11, 2000, pp. 1128–1143
18. Maglogiannis I., Automated segmentation and registration of dermatological images, Journal of Mathematical Modelling and Algorithms, Springer Science+Business Media, vol. 2, 2003, pp. 277–294
19. Cristianini N., Shawe-Taylor J., An Introduction to Support Vector Machines, Cambridge University Press, 2000
Author: Nazanin Karami
Institute: Department of Biomedical Engineering, Shahid Beheshti University (Medical Campus)
Street: Chamran Highway (Evin)
City: Tehran
Country: Iran
Email:
[email protected]
Bowel Ischemia Monitoring Using Rapid Sampling Microdialysis Biosensor System

E.P. Córcoles1,*, S. Deeba2,*, G.B. Hanna2, P. Paraskeva2, M.G. Boutelle1, and A. Darzi2

1 Department of Bioengineering, RSM Building, Imperial College London, South Kensington Campus, London, SW7 2AZ, UK
2 Department of Surgery and Cancer Technology, Imperial College London, St Mary's Hospital, Praed Street, London W2 1NY, UK
* Equal contribution
Abstract— Objective: Intestinal ischemia, during and after surgery, is a major cause of sepsis leading to multiple organ failure. Complications associated with gastrointestinal surgery can be severe enough to cause a high mortality rate among patients. The anastomosis site, where the transected bowel ends are joined, is prone to leakage, leading to intestinal ischemia. Monitoring metabolic changes in the bowel intraoperatively and post-surgery is therefore of great interest as an early marker of intestinal ischemia. Methods: Microdialysis has proved its efficacy in monitoring bowel ischemia. We have developed an on-line rapid sampling microdialysis (rsMD) technique for monitoring metabolites in the human bowel during gastrointestinal surgery. It electrochemically analyzes dialysate glucose and lactate at high time resolution (typically 30 s). The system consists of a flow injection analysis (FIA) system coupled to an enzyme-based amperometric detector. This work compares the metabolic changes monitored in healthy human bowel during surgery and in the compromised anastomotic site of swine models. Results: The metabolic response to ischemia in healthy human tissue showed a 20 min therapeutic window, sustained by collateral flow. In contrast, microdialysis monitoring at the anastomosis site in animal models showed a rapid response 5 min after artery transection, indicating the severe effect of ischemia when the tissue blood supply was compromised. Conclusion: The on-line rapid sampling microdialysis biosensor system can be used to monitor the transition from healthy to ischemic tissue in human bowel and to predict an early ischemic event in the anastomotic segment of porcine animal models.

Keywords— Microdialysis, biosensors, anastomosis, bowel ischemia.
I. INTRODUCTION

Intestinal ischemia can be defined as a limitation of the oxygen delivered to the tissue relative to its required metabolic rate. Anastomosis is commonly performed in gastrointestinal surgical procedures to connect the transected bowel ends. One of the most common complications of this procedure is anastomotic leak. The bowel becomes ischemic if perfusion near the site of the join fails, developing into major sepsis and increasing the length of hospital stay and the risk of mortality [1]. Postoperative mortality associated with anastomotic complications ranges from 6% to 22% and accounts for approximately one third of all deaths following colorectal cancer surgery [1]. Determinations of intestinal ischemia are at present rudimentary and rely on the skill of the clinician to make a decision based on the appearance and texture of the tissue. Current clinical procedures such as laser Doppler [2], tonometry [3], fluorescence [4], non-invasive imaging methods [5] and blood measurement [6] can detect poor blood perfusion of the tissue, but in doing so they do not reveal the signature of ischemia. Rapid detection of ischemia is necessary for an early diagnosis of anastomotic leak and for studying the physiology of the compromised bowel anastomosis.
Microdialysis is an extraction technique developed in the early seventies to sample the extracellular fluid of brain tissue [7]. It offers the possibility of metabolic monitoring of tissue, governed by a satisfactory perfusion [8]. Bowel microdialysis has been used, to date, in animal studies [9], [10], [11] and for intraperitoneal (extramural) monitoring in humans [12], [13]. While some evidence of ischemia was found, the poor time resolution of the assay, coupled to the large volume of the intraperitoneal cavity, prevented this method from being useful during human bowel surgery. Matthiessen et al. carried out a microdialysis study of the anastomosis site by inserting the probe free in the patients' peritoneal cavity [14]; metabolic changes typical of ischemia were monitored within 24–48 hours after surgery. Commercial microdialysis analyzers monitor changes every 3–4 hours, missing events that could be relevant for an early diagnosis and a better understanding of tissue metabolism. The on-line rapid sampling microdialysis biosensor system automatically samples and analyzes, typically every 30 seconds, allowing continuous 24-hour monitoring for up to five days [15], [16].
The on-line rapid sampling microdialysis biosensor system has proved to be a feasible technique for monitoring the metabolic changes that take place intramurally in a segment of human colon during and after resection in the clinical environment [17]. This communication is a comparative study of the biochemical effect of ischemia on bowel anastomosis in animal models
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 275–278, 2011. www.springerlink.com
and healthy human bowel monitored with the on-line rapid sampling microdialysis biosensor system.
II. EXPERIMENTAL SECTION

The rapid sampling on-line biosensor system was used to monitor intramural glucose and lactate concentrations from the dialysate of a microdialysis probe tunneled in the seromuscular layer of the left colon of seven patients and, in a separate study, in a peri-anastomotic segment of five colo-colic stapled anastomoses in a swine model.

A. Surgical Procedure

Seven patients undergoing laparotomy and left colectomy participated in the study. All patient procedures were carried out by a consultant colorectal surgeon. Ethical approval for this trial was obtained from St. Mary's Hospital Local Regional Ethics Committee, and informed consent was obtained from all participants. The clinical microdialysis probe (CMA 62; 3 cm membrane, CMA, Stockholm, Sweden) was tunneled in the left colon wall before any feeding vessel was transected. The probe outlet was connected to the biosensor system. Glucose and lactate in the dialysate were monitored on-line every 15 seconds until the specimen was completely resected and delivered outside the surgical field.
Three 6-month-old male Large White strain piglets weighing 25–30 kg were used for the anastomosis experiments. All animal procedures were conducted by a senior colorectal surgery fellow and carried out under a specific license in the animal facility from the French Ministry of Agriculture (license number n° A 78760). All the animals were sacrificed at the end of the data collection. A microdialysis probe (MAB 7.11, 1 cm length, 0.6 mm diameter, 15 kDa cut-off, Microbiotech se/AB, Stockholm, Sweden) was tunneled in the bowel wall of the colonic segment, next to the stapling device, to ensure a probe distance of 2 mm from the anastomosis site. The device was then fired to construct the anastomosis and the colotomies were closed. Baseline values were obtained for 20 minutes before the feeding artery of both segments of the anastomosis was clamped.
Five anastomoses were constructed in total, and microdialysis glucose and lactate ischemic values were recorded every 30 seconds for 25 minutes. A control microdialysis probe (MAB 7.11, Microbiotech se/AB, Stockholm, Sweden) tunneled in a separate segment of colon recorded glucose and lactate concentrations throughout the entire procedure in off-line mode.

B. Rapid Sampling Microdialysis (rsMD) On-Line Assay

The assay consists of a flow injection analysis (FIA) system coupled to an enzyme-based amperometric biosensor
for glucose and lactate [18], [19]. In both studies, the microdialysis probe inflow was connected to a CMA 400 microdialysis syringe pump (CMA/Microdialysis, Stockholm, Sweden) via a sterile tube extension (1.5 m, 0.4 mL, Alaris® Medical Systems, Dublin, OH). A sterile saline solution (perfusion fluid: 147 mM sodium, 4 mM potassium, 2.3 mM calcium, 156 mM chloride) was perfused at between 4 and 5.6 µl/min. This rate was chosen to ensure rapid delivery of dialysate through the outflow tube, elongated with 20 cm (animal model) or 1.5 m (human patients) of FEP tubing (Microbiotech se/AB, Stockholm, Sweden) and connected to the dual on-line assay system. The dialysate stream is sampled into a 200 nl dual-internal-loop custom-made valve (Valco Instruments, Schenkon, Switzerland). Every 30 seconds the valve switches and dialysate is sampled into an analysis flow stream that is accelerated by buffer solution (0.1 M sodium citrate, 10 mM sodium chloride, 1.5 mM ferrocene monocarboxylate and 1 mM ethylenediamine tetraacetate, EDTA) at pH 7.0 and pumped at 200 µl/min by an HPLC pump (Rheos 2000, Flux Instruments, Basel, Switzerland). The dialysate is directed to enzyme reactors containing glucose oxidase (for glucose) or lactate oxidase (for lactate) together with horseradish peroxidase (Genzyme Diagnostics, Kent, UK). The system is coupled to a radial-flow glassy carbon electrode (BASi, West Lafayette, IN) held at a potential of −100 mV (Ag/AgCl, [Cl−] = 10 mM) by a potentiostat (E-DAQ, Sydney, Australia), where electrochemical detection of the analytes occurs. The products of the enzymatic oxidation (ferrocenium ions) are reduced at the electrode, producing a current peak every 30 seconds. The rsMD data were collected at 100 Hz using a Powerlab 8 S/P analogue/digital converter and Chart® 5.6 software (ADInstruments, New South Wales, Australia) running on a Powerbook portable computer (Apple Computers, CA, USA).
The current peaks were converted to concentration on the basis of pre-, intra- and post-operative calibrations with glucose and lactate standards of different concentrations, exploiting the proportionality of peak current to substrate concentration.

C. Data Analysis

The rsMD data were analyzed using Chart 5.6. Current peak data are converted to glucose and lactate concentration traces and plotted over time for further analysis. The time lag between dialysate leaving the probe and analysis was corrected, and trials were all aligned at time zero, defined as the moment the feeding artery was clamped. Graphs and analysis were produced with Igor Pro 6 (WaveMetrics, Portland, OR, USA). Where necessary, noise-cancelling processing techniques were applied using Matlab 7.5 (MathWorks, Natick, MA, USA) [20].
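The peak-to-concentration conversion amounts to a linear calibration against the standards. The sketch below is illustrative, not the authors' analysis code, and the calibration values are hypothetical.

```python
import numpy as np

def calibrate(peak_currents, standard_conc, standard_peaks):
    """Convert rsMD current peaks to concentration by fitting a
    least-squares line to calibration standards, exploiting the
    proportionality of peak current to substrate concentration."""
    slope, intercept = np.polyfit(standard_peaks, standard_conc, 1)
    return slope * np.asarray(peak_currents) + intercept

# Hypothetical calibration: peak current (nA) vs glucose standards (mM).
standards_mM = np.array([0.0, 0.5, 1.0, 2.0])
standards_nA = np.array([0.1, 5.2, 10.1, 20.0])
conc = calibrate([7.5, 12.3], standards_mM, standards_nA)
```

In practice separate calibrations (pre-, intra- and post-operative) would be fitted for each analyte and applied to the corresponding recording period.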
Data from both studies were compared to contrast the effects of ischemia in compromised and healthy tissue. Because the concentrations and time scales differ between the studies, the percentage change in metabolite concentration was calculated. Time was scaled with the common convention that t = 0 at the moment the main feeding artery was clamped; the glucose and lactate concentrations at this time were taken as the initial concentrations from which the percentages were determined.
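The normalization described here can be sketched as follows. The trace values are hypothetical and `percent_change` is an illustrative helper, not the authors' Igor/Matlab code.

```python
import numpy as np

def percent_change(trace, t, t_clamp):
    """Express a metabolite trace as a percentage of its value at the
    moment the feeding artery was clamped, with time re-aligned so
    that clamping occurs at t = 0."""
    t = np.asarray(t, dtype=float)
    trace = np.asarray(trace, dtype=float)
    i0 = np.argmin(np.abs(t - t_clamp))   # sample closest to clamping
    baseline = trace[i0]
    return t - t_clamp, 100.0 * trace / baseline

# Hypothetical glucose trace sampled every 30 s, clamping at 600 s.
t = np.arange(0, 1200, 30.0)
glucose = np.where(t < 600, 2.0, 2.0 * np.exp(-(t - 600) / 400))
t_aligned, pct = percent_change(glucose, t, t_clamp=600)
```

Applying the same transformation to both data sets puts the human and swine traces on a common percentage/time axis, as in Figure 1.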
III. RESULTS AND DISCUSSION

Glucose and lactate dialysate data, averaged over the seven patients' healthy tissue and the five compromised anastomotic tissues, are presented as mean ± standard error. The percentage of change was plotted over a common time axis, as shown in Figure 1. In healthy human tissue monitored during surgery, glucose levels maintained a pattern similar to baseline for an average of 17 minutes following transection of the main feeding artery. A rebound can then be observed that ended approximately 5 minutes later, when the last artery had been transected; levels were nevertheless maintained for another 12 minutes before falling continuously. Lactate levels increased slowly during the first 17 minutes after arterial transection and then rose dramatically. This suggests a therapeutic window in which the surgeon could still perform other actions without compromising the tissue. From the surgical perspective, this could represent a highly important period during organ transplantation or high-risk surgical operations. From a metabolic perspective, the glucose rebound suggests a supplementary source of glucose to the tissue or, equally, a decrease in metabolic rate. We hypothesize an additional supply by collateral flow that explains the momentary increase in glucose following transection of the main feeding artery.
In the anastomotic swine tissue, a constant decrease in glucose was observed for 30 minutes following clamping of the artery feeding the anastomosis: tissue metabolic demand consumes glucose, which cannot be replenished from the compromised blood supply. Lactate levels increased during the first 20 minutes and then decreased dramatically. The inefficient oxygen delivery to the tissue, characteristic of anaerobic metabolism, explains this increase in lactate, in contrast to the systemic hyperlactatemia reported by Tenhunen et al. [9].
The flushing effect of the microdialysis probe perfusion rate, whereby perfusate is delivered faster than lactate is produced, explains the remarkable decrease in lactate. Figure 1 clearly depicts the rapid reaction of anastomotic tissue to ischemia 5 minutes after transection of the artery.
Fig. 1 Healthy human data versus compromised pig data. Percentage of metabolites levels (mean ± sem) monitored in human healthy bowel wall during surgery (diamond traces) and in the anastomosis site of porcine bowel wall models (circle traces). Glucose traces drop from baseline values, while lactate traces rise. Alignment at the transection of main feeding artery (t=0)
This result suggests that once tissue has been damaged and repaired, as when resecting and reconnecting the bowel, further damage (ischemia) will have a traumatic effect on the tissue. Bowel subjected to surgical stress and previous ischemic insults is more susceptible to damage [21], [22]. While healthy human bowel relied on collateral flow to cope with the ischemic event produced by clamping, the animal monitoring showed a rapid change in metabolism immediately after clamping. Hence, the 20-minute therapeutic window observed during human monitoring was not present in the anastomotic tissue.
IV. CONCLUSION

This study showed that the on-line rapid sampling microdialysis biosensor system can be used to monitor the transition from healthy to ischemic tissue in human bowel and to predict an early ischemic event in the anastomotic segment of porcine animal models.
Our on-line rapid sampling microdialysis system eliminates the disadvantages of the off-line sampling mode, which normally operates every 3–4 hours, introduces a long lag period, is laborious, and has a high probability of failing to reveal events owing to dilution into a pooled sample. Since our system samples every 30 seconds using an automatic mechanism, it gives immediate feedback to the clinician. Although additional physiological measurements and a greater number of animal subjects are needed to confirm the metabolic results, microdialysis monitoring has illustrated the biochemical difference in the effect of ischemia between healthy and compromised bowel.
ACKNOWLEDGEMENTS The authors would like to acknowledge the veterinary team for their assistance during the monitoring and Covidien management for their services and support throughout the experiment organization.
REFERENCES

1. Alberts J.C, Parvaiz A, and Moran B.J (2003) Predicting risk and diminishing the consequences of anastomotic dehiscence following rectal resection. Colorect Dis 5:478–482
2. Hoff D.A, Gregersen H, and Hatlebakk J.G (2009) Mucosal blood flow measurements using laser Doppler perfusion monitoring. World J Gastroenterol 15:198–203
3. Cerny V, and Cvachovec K (2000) Gastric tonometry and intramucosal pH - theoretical principles and clinical application. Physiol Res 49:289–298
4. Behrendt F.F, Tolba R.H, Overhaus M, Hirner A, Minor T, and Kalff J.C (2004) Indocyanine green fluorescence measurement of intestinal transit and gut perfusion after intestinal manipulation. Eur Surg Res 36:210–218
5. Bradbury M.S, Kavanagh P.V, Bechtold R.E, Chen M.Y, Ott D.J, Regan J.D, and Weber T.M (2002) Mesenteric venous thrombosis: diagnosis and noninvasive imaging. Radiographics 22:527–542
6. Assadian A, Assadian O, Senekowitsch C, Rotter R, Bahrami S, Furst W, Jaksch W, Hagmuller G.W, and Hubl W (2006) Plasma d-lactate as a potential early marker for colon ischaemia after open aortic reconstruction. Eur J Vasc Endovasc Surg 31:470–474
7. Hagstroem-Toft E, Enoksson S, Moberg E, Bolinder J, and Arner P (1997) Absolute concentrations of glycerol and lactate in human skeletal muscle, adipose tissue, and blood. Am J Physiol 273:E584–E592
8. Ungerstedt U, and Pycock C (1974) Functional correlates of dopamine neurotransmission. Bull Schweiz Akad Med Wiss 30:44–55
9. Tenhunen J.J, Kosunen H, Alhava E, Tuomisto L, and Takala J.A (1999) Intestinal luminal microdialysis: a new approach to assess gut mucosal ischemia. Anesthesiology 91:1807–1815
10. Sommer T, and Larsen J.F (2004) Intraperitoneal and intraluminal microdialysis in the detection of experimental regional intestinal ischaemia. Br J Surg 91:855–861
11. Solligård E, Juel I.S, Bakkelund K, Jynge P, Tvedt K.E, Johnsen H, Aadahl P, and Gronbech J.E (2005) Gut luminal microdialysis of glycerol as a marker of intestinal ischemic injury and recovery. Crit Care Med 33:2278–2285
12. Jansson K, Ungerstedt J, Jonsson T, Redler B, Andersson M, Ungerstedt U, and Norgren L (2003) Human intraperitoneal microdialysis: increased lactate/pyruvate ratio suggests early visceral ischaemia. A pilot study. Scand J Gastroenterol 38:1007–1011
13. Jansson K, Jansson M, Andersson M, Magnuson A, Ungerstedt U, and Norgren L (2005) Normal values and differences between intraperitoneal and subcutaneous microdialysis in patients after non-complicated gastrointestinal surgery. Scand J Clin Lab Invest 65:273–282
14. Matthiessen P, Strand I, Jansson K, Tårnquist C, Andersson M, Rutegård J.R, and Norgren L (2007) Is early detection of anastomotic leakage possible by intraperitoneal microdialysis and intraperitoneal cytokines after anterior resection of the rectum for cancer? Dis Colon Rectum 50:1918–1927
15. Boutelle M.G, and Fillenz M (1996) Clinical microdialysis: the role of on-line measurement and quantitative microdialysis. Acta Neurochir Suppl 67:13–20
16. Bhatia R, Hashemi P, Razzaq A, Parkin M.C, Hopwood S.E, Boutelle M.G, and Strong A.J (2006) Application of rapid-sampling, online microdialysis to the monitoring of brain metabolism during aneurysm surgery. Neurosurgery 58:313–320
17. Deeba S, Corcoles E, Hanna G, Pareskevas P, Aziz O, Boutelle M, and Darzi A (2008) Use of rapid sampling microdialysis for intraoperative monitoring of bowel ischemia. Dis Colon Rectum 51:1408–1413
18. Jones D.A, Parkin M.C, Langemann H, Landolt H, Hopwood S.E, Strong A.J, and Boutelle M.G (2002) On-line monitoring in neurointensive care - enzyme-based electrochemical assay for simultaneous, continuous monitoring of glucose and lactate from critical care patients. J Electroanal Chem 538:243–252
19. Parkin M.C, Hopwood S.E, Jones D.A, Hashemi P, Landolt H, Fabricius M, Lauritzen M, Boutelle M.G, and Strong A.J (2005) Dynamic changes in brain glucose and lactate in pericontusional areas of the human cerebral cortex, monitored with rapid sampling on-line microdialysis: relationship with depolarisation-like events. J Cereb Blood Flow Metab 25:402–413
20. Feuerstein D, Parker K.H, and Boutelle M.G (2009) Practical methods for noise removal: applications to spikes, non-stationary quasi-periodic noise and baseline drift. Anal Chem 81:4987–4994
21. Kuzu M.A, Koksoy C, and Kale I.T (1998) Reperfusion injury delays healing of intestinal anastomosis in a rat. Am J Surg 175:348–351
22. Kuzu M.A, Tanik A, Kale I.T, Aslar A.K, Koksoy C, and Terzi C (2000) Effect of ischemia/reperfusion as a systemic phenomenon on anastomotic healing in the left colon. World J Surg 24:990–994
Author: Emma P. Córcoles
Institute: Universiti Teknologi Malaysia
Street: UTM
City: Skudai
Country: Malaysia
Email: [email protected]
IFMBE Proceedings Vol. 35
Changes in Cortical Blood Oxygenation Responding to Arithmetical Tasks and Measured by Near-Infrared Spectroscopy N.N.P. Trinh, N.H. Binh, D.D. Thien, T.Q.D. Khoa, and V.V. Toi Biomedical Engineering Department, International University of Vietnam National University, Ho Chi Minh City, Viet Nam
Abstract— The performance of mental arithmetic in cognitive tasks has been widely investigated, and brain imaging techniques make it possible to examine brain function during task performance. In this study, we investigated hemodynamic changes in the concentrations of oxyhemoglobin (HbO2) and deoxyhemoglobin (HHb) in cortical regions using multi-channel Near-Infrared Spectroscopy (NIRS). The sample consisted of 10 young, healthy students of International University. The subjects solved addition and subtraction tasks presented both as numeric formulas and as numbers embedded in text. The experiments were conducted at three difficulty levels: easy, moderate and difficult. We found that the changes in oxygenation during the calculation tasks appeared in the parietal region.
Keywords— Parietal region, arithmetic, cognitive task, Near Infrared Spectroscopy (NIRS).

I. INTRODUCTION

Historical background: The ability to perform arithmetic is an early learned skill used in everyday life, and simple arithmetic is a crucial component of numerical cognition. Addition and subtraction are the two operations examined in our research.

Brain mapping studies of arithmetic: Brain imaging techniques have been used to investigate the neural activity involved in mathematical thinking. Piazza et al. [1] showed that parietal, and especially intraparietal, areas are the key regions in number processing; they also found that both symbolic and non-symbolic arithmetic are processed in the parietal region. Mathematical processing is thus carried out by a complex cerebral network; for example, Cowell et al. [2] reported additional prefrontal activations in studies examining arithmetic.

NIRS is a non-invasive technique for measuring changes of HbO2 in the cerebral cortex. In an early NIRS study, Hoshi and Tamura [3] used mental arithmetic as a task to stimulate prefrontal brain regions. Furthermore, Tanida et al. [4] discussed mental arithmetic measured by multi-channel NIRS. To our knowledge, Richter et al. [5] were the first to use NIRS to measure HbO2 changes during arithmetic tasks in the parietal regions of both hemispheres; that study provided the first NIRS evidence that the parietal regions are involved in the processing of arithmetic tasks. Moreover, in a recent functional magnetic resonance imaging (fMRI) study employing a very large sample, Pinel and Dehaene [6] confirmed a more left-sided activation of the precentral, frontal and parietal cortex in mental arithmetic.
Fig. 1 Parietal region involved in mental arithmetic

Neuroscientific studies in children and adults: Mathematics plays an important role in education from the time children start primary school. Cantlon et al. [7] found that preschool children develop significant abilities to understand numerical tasks early in life. Several studies have examined differences between children and adults in mental arithmetic. Ansari and Dhital [8] used event-related functional magnetic resonance imaging to show that activation during mental arithmetic increases in the parietal region in adults, compared with an increase in the frontal region in children. NIRS techniques offer advantages for measuring changes of HbO2 in children: the measurement does not require strict motion restriction, which makes it flexible for children as well as adults. Okamoto et al. [9] investigated the level of children's understanding using NIRS.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 279–282, 2011. www.springerlink.com
Comparison of numeric and word tasks: The presentation format of a mathematical problem affects how it is processed. In a recent study, Richter et al. [5] used numerical formulas and numbers embedded in text as two arithmetical formats to stimulate participants. Solving a problem presented as a formula mainly requires attention and memory processes, whereas word problems additionally require reading and language comprehension.

The 10-20 system [10]: The international 10-20 system of electroencephalography (EEG) is an internationally recognized method for describing and applying the locations of electrodes on the scalp, based on the relationship between an electrode's position and the underlying area of cerebral cortex. Each electrode site has a letter identifying the lobe and a number identifying the hemisphere location.
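As a small illustration of this naming convention, the helper below (our own sketch, not part of the study) parses a 10-20 electrode label into its cortical region and hemisphere, using the rule that odd indices lie over the left hemisphere, even indices over the right, and "z" over the midline:

```python
# Illustrative parser for 10-20 electrode labels (e.g. "C3", "P4", "Cz").
REGIONS = {"F": "frontal", "T": "temporal", "C": "central",
           "P": "parietal", "O": "occipital", "Fp": "frontopolar"}

def describe_electrode(label: str) -> str:
    """Return a human-readable description of a 10-20 electrode label."""
    # Split the label into its letter prefix and trailing index.
    prefix = label.rstrip("0123456789z")
    index = label[len(prefix):]
    region = REGIONS.get(prefix, "unknown region")
    if index == "z":
        side = "midline"
    elif index and int(index) % 2 == 1:
        side = "left hemisphere"
    elif index:
        side = "right hemisphere"
    else:
        side = "unspecified"
    return f"{region}, {side}"
```

For example, the C3 and C4 sites used in sensorimotor studies resolve to "central, left hemisphere" and "central, right hemisphere" respectively.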
Research questions: We hypothesized that oxygenation changes during calculation tasks would be higher for word problems than for numerical tasks, because more information has to be processed. The aim of this study is to extend the results of Richter et al. [5] by investigating the effects of tasks of different modality and complexity on students' mathematical problem solving. We examined the change in HbO2 across task difficulty levels during arithmetic problem solving in the left parietal hemisphere. We also aimed to show that NIRS can detect the cortical activation predicted by current imaging studies, with the advantages of lower cost and easier application.
II. METHODOLOGY

A. NIRS Measurement

We used a Shimadzu FOIRE-3000 NIRS system with three wavelengths (780, 805 and 830 nm) to monitor changes in oxyhemoglobin (HbO2), deoxyhemoglobin (HHb) and total hemoglobin. The distance between each emitter-detector pair is set at 3 cm, which enables cerebral blood volume measurements at a depth of 2 to 3 cm from the scalp. It is important to keep in mind that the FOIRE-3000 cannot measure the optical path length; therefore no absolute concentrations are obtained, only concentration changes.
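Since only concentration changes are available, multi-wavelength NIRS systems typically invert the modified Beer-Lambert law to convert the optical-density change at each wavelength into ΔHbO2 and ΔHHb. The sketch below illustrates the idea; the extinction coefficients and path length are placeholder values chosen for illustration only, not the FOIRE-3000's calibration constants:

```python
import numpy as np

# Modified Beer-Lambert law, per wavelength i:
#   dOD_i = (eps_HbO2_i * dHbO2 + eps_HHb_i * dHHb) * L
# Placeholder coefficients (illustrative, not calibrated values):
eps = np.array([[0.74, 1.10],   # 780 nm: [eps_HbO2, eps_HHb]
                [0.90, 0.90],   # 805 nm (near the isosbestic point)
                [1.05, 0.78]])  # 830 nm
path_length = 1.0               # effective optical path length (unknown in practice)

def concentration_changes(delta_od):
    """Least-squares estimate of [dHbO2, dHHb] from optical-density changes."""
    sol, *_ = np.linalg.lstsq(eps * path_length,
                              np.asarray(delta_od, dtype=float), rcond=None)
    return sol  # relative units, since the true path length is unknown
```

With three wavelengths and two chromophores the system is overdetermined, which is why a least-squares fit rather than a direct matrix inverse is used.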
B. Procedure

1. Participants: Ten healthy adults (5 males, 2 females, aged 19-22 years) were included in our experiment. The participants were right-handed and had no known disorders, and informed consent was obtained. During the NIRS measurement, the optode arrays were placed on the participant's head.

2. Stimulus material: Arithmetic tasks were performed at three difficulty levels: three-digit, four-digit and five-digit numbers. Problems were presented either numerically using Arabic numerals (e.g. 143 + 234 = ?) or as numbers embedded in Vietnamese text (e.g. ''Lan tiết kiệm được 26.000 vnđ trong ống heo. Mẹ cho Lan thêm 10.000 vnđ tiền tiết kiệm. Hỏi Lan có tất cả bao nhiêu tiền trong ống heo?''). Each problem was presented in a calculation condition in which the participant selected the correct result from three suggestions.
Fig. 2 Electrodes were placed over the parietal lobe, which relates to knowledge of numbers, according to the international 10-20 EEG system [10]; the panel displays the arrangement of the optodes over the parietal area (red: emitters, blue: detectors)

Fig. 3 The experimental trials: (a) numerical task; (b) numbers embedded in text

Before each experiment, the participants were instructed how to answer the questions by clicking a mouse with the right hand.
3. Protocol: The study comprised two periods.
Period 1: We measured both hemispheres to determine whether the left parietal region showed higher changes in HbO2 than
the changes in HbO2 in the right parietal area during the arithmetic tasks.
Period 2: We evaluated how the changes in HbO2 differed among the three difficulty levels.
For this comparison, the two kinds of tasks were each separated into three difficulty levels (e.g., channel 7 of one subject is shown in these figures).
III. RESULT

In the first procedure, there were 24 channels over both hemispheres. From the data analysis, channels 15, 18, 19 and 22 in the left parietal area showed a significant increase in HbO2. In other words, both kinds of mathematical task changed HbO2 in the region of interest.
Fig. 4 Numeric embedded in text

Fig. 5 The concentration of HbO2 in the left cortex rendered by the Fusion 3D technique: (a) rest state; (b) task state

Fig. 6 Easy level in the two tasks: (a) numerical formula task; (b) numeric embedded in text

Fig. 7 Moderate level in the two tasks: (a) numerical formula task; (b) numeric embedded in text

Fig. 8 Difficult level in the two tasks: (a) numerical formula task; (b) numeric embedded in text
In the second procedure, we evaluated the region of interest in the left parietal area, collecting data from the numerical formula task and the numeric-embedded-in-text task. Both tasks were presented at three difficulty levels: easy, moderate and difficult. At the easy level, three-digit numbers were used in both kinds of task, and four-digit numbers were used in the same way at the moderate level. Five-digit numbers with negative results raised the difficulty of the mental arithmetic at the difficult level.
We removed noise from the signals using a Matlab program, and then calculated the average changes in HbO2 concentration for each task. From the analyzed data, the change in HbO2 concentration increased from the easy to the moderate level in both kinds of task. Furthermore, there were more peaks in the signal at the moderate level than at the easy level. The
reason was that participants performed fewer arithmetic trials in the moderate task than in the easy task under a similar protocol. At the difficult level, however, the changes in HbO2 concentration were smaller than at the other levels; we hypothesized that the participants suffered from time pressure during the experiment.
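The trial-averaging step described above can be sketched as follows: cut the continuous HbO2 trace into task epochs, baseline-correct and smooth each epoch, then average across trials. The smoothing window and all parameters below are our own assumptions, not the study's actual analysis settings:

```python
import numpy as np

def average_epochs(signal, onsets, epoch_len, smooth_win=5):
    """Cut task epochs out of a continuous HbO2 trace and average them.

    signal    : 1-D array of HbO2 concentration changes (arbitrary units)
    onsets    : sample indices where each task epoch begins
    epoch_len : epoch length in samples
    """
    kernel = np.ones(smooth_win) / smooth_win  # moving-average kernel
    epochs = []
    for t in onsets:
        epoch = np.asarray(signal[t:t + epoch_len], dtype=float)
        epoch = epoch - epoch[0]  # baseline-correct to epoch start
        epochs.append(np.convolve(epoch, kernel, mode="same"))
    return np.mean(epochs, axis=0)  # trial-averaged response
```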
ACKNOWLEDGMENT

We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting attendance and presentation. This research was partly supported by a grant from Shimadzu Asia-Pacific Pte. Ltd. and by a research fund from International University of Vietnam National Universities in Ho Chi Minh City. We thank Dr Nguyen Thi Cam Huyen, Ho Chi Minh City Medicine and Pharmacy University, for helpful suggestions. Finally, our deepest appreciation goes to the BME staff and friends who helped us collect information.
Fig. 9 The changes in concentration of HbO2 in the numerical tasks

Fig. 10 The changes in concentration of HbO2 in the numeric-embedded-in-text tasks

The oxygenation changes during calculation tasks would be higher in word problems compared to numerical tasks, because more information had to be processed.

IV. DISCUSSION

We found evidence that parts of the parietal lobe of the left hemisphere play a crucial role in the processing and solving of arithmetic problems, for both word and numeric problems, which is in accordance with results described in the current literature, and that this activation can be measured by multi-channel NIRS. In future work we intend to examine the changes in oxygenated (HbO2) and deoxygenated hemoglobin (HHb) across different tasks, such as reading and calculation problems, and the influence of sex differences. Future studies will also aim to differentiate between difficulty and complexity. We hope this work will contribute to mathematics education in Vietnam through future studies of teaching methods.

REFERENCES

1. Piazza M, Pinel P, Le Bihan D, Dehaene S (2007) A magnitude code common to numerosities and number symbols in human intraparietal cortex. Neuron 53:293-305
2. Cowell SF, Egan GF, Code C, Harasty J, Watson JD (2000) The functional neuroanatomy of simple calculation and number repetition: a parametric PET activation study. Neuroimage 12:565-573
3. Hoshi Y, Tamura M (1993) Detection of dynamic changes in cerebral oxygenation coupled to neuronal function during mental work in man. Neurosci Lett 150:5-8
4. Tanida M, Sakatani K, Takano R, Tagai K (2004) Relation between asymmetry of prefrontal cortex activities and the autonomic nervous system during a mental arithmetic task: near infrared spectroscopy study. Neurosci Lett 369:69-74
5. Richter MM, Zierhut KC, Dresler T, Plichta MM, Ehlis AC, Reiss K, Pekrun R, Fallgatter AJ (2009) Changes in cortical blood oxygenation during arithmetical tasks measured by near-infrared spectroscopy. J Neural Transm 116:267-273
6. Pinel P, Dehaene S (2009) Beyond hemispheric dominance: brain regions underlying the joint lateralization of language and arithmetic to the left hemisphere. J Cogn Neurosci
7. Cantlon JF, Platt ML, Brannon EM (2009) Beyond the number domain. Trends Cogn Sci 13:83-91
8. Ansari D, Dhital B (2006) Age-related changes in the activation of the intraparietal sulcus during nonsymbolic magnitude processing: an event-related functional magnetic resonance imaging study. J Cogn Neurosci 18:1820-1828
9. Okamoto N, Eda H, Kuroda Y, Maesako T (2008) Effectiveness of NIRS in educational research. World Automation Congress (WAC 2008), pp 1-6
10. Wikipedia, "10-20 system (EEG)", at http://en.wikipedia.org/wiki/10-20_system_(EEG)
Author: Truong Quang Dang Khoa
Institute: Biomedical Engineering Department, Ho Chi Minh City International University
Street: Quarter 6, Linh Trung, Thu Duc Dist.
City: Ho Chi Minh
Country: Viet Nam
Email: [email protected]
Color Coded Heart Rate Monitoring System Using ANT+

F.K. Che Harun1, N. Uyop1, M.F. Ab Aziz1, N.H. Mahmood1, M.F. Kamarudin2, and A. Linoby3

1 Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310, Skudai, Johor
2 INSTEDT, Bangunan Kejora, Jalan Dato' Onn, 81930 Bandar Penawar, Johor, Malaysia
3 Universiti Teknologi MARA (UiTM) Pahang, 26400 Bandar Tun Abdul Razak Jengka, Pahang, Malaysia
Abstract— In this paper, a prototype of a portable device for monitoring athletes' heart rate during exercise is presented. A color-coded bracelet was designed as a wrist heart rate monitor that communicates with the user mainly via color coding using an RGB LED strip. This color-coding method makes the heart rate easier to monitor and enables the user to identify their heart rate level at any time; different colors signify different heart rate ranges. Heart rate information is transferred from a Garmin heart rate strap (chest strap) to the user's color-coded bracelet using ANT+ technology, and is then displayed on the bracelet as a color code. The device proved very helpful for athletes and coaches in monitoring an athlete's fitness level and regulating exercise training in a more effective and safer manner.

Keywords— Electrocardiogram, wireless heart rate, heart rate monitor, color coded, ANT+.
I. INTRODUCTION

Nowadays the use of heart rate monitors is very common, and no longer limited to patient monitoring in hospitals. In general, a heart rate monitor is used by people who care about their heart and want to ensure that their heart rate is normal; early detection of an abnormal heart rate can help prevent serious disease. In sport, a heart rate monitor is needed to determine the athlete's heart rate range, which should match the exercise being performed in order to achieve an optimal workout and prevent serious injury.

Research and development of heart rate monitors in sport began in 1977, when the first wireless electrocardiogram (ECG) heart rate monitor was invented by Polar Electro and used as a training aid for the Finnish national cross-country ski team [5,10]. The transmitter was attached to the chest by either disposable electrodes or an elastic chest belt, while the receiver was a watch worn on the wrist.

Another invention is a small, portable device that detects physiological movements, including the pulse, and may be worn on the wrist of a user. This device, by Conlan [4], is also configured to record
information and upload it to a computer for later processing as desired. However, the device provides no form of display; a person using it can determine neither their pulse or heart rate nor their exercise heart rate zone while exercising.

An ambulatory monitoring device for measuring the heart rate, step rate and respiration signals of human subjects during running exercise is described in [1]. The monitor, fixed on an elastic belt, can be worn around the subject's chest. It uses a Microchip PIC16F876 with built-in A/D converters to sample the analogue signals and transmits them wirelessly to a computer via radio frequency (RF) transceivers.

The first garment with integrated heart sensors, in the form of a sports bra, is introduced in [11]. Special material in the bra senses the heart beats and transmits the count to a wrist receiver; this garment provides a comfortable alternative to the chest strap.

Regarding the use of color to indicate different heart rate zones, current research can be found in [2,3,10]. In [10], an interactive personal coaching and training system is introduced in which the display digits on the monitor are color coded; however, the colors are difficult to interpret because of their small size. The pioneering color-coded bracelet heart rate monitor is patented in [2]. That prototype monitors the appropriate heart rate training zone during exercise by sending feedback to the user via a specific LED color. This paper extends the work in [2,3], concentrating on using ANT+ technology to transfer heart rate information to a bracelet that displays it as a color code (LED).
II. METHODOLOGY

This section describes the hardware involved in designing the color-coded bracelet heart rate monitoring system. A heart rate monitor is an important device for human health and life [6,7]; thus, an appropriate communication protocol should be applied.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 283–286, 2011. www.springerlink.com
Zigbee, ANT and Bluetooth are the communication protocols usually used. ANT is the most suitable here: it has ultra-low power consumption compared to Bluetooth and is much faster than ZigBee [8]. ANT is also easy to use, low in system cost, scalable and flexible, and it enables sensors to operate for up to three years on a coin cell battery. Therefore ANT+, which is based on ANT, was chosen for these advantages.

Nordic is a long-time partner of ANT Wireless. Nordic's nRF24L01 shares the same 2.4 GHz carrier frequency band with Bluetooth and Zigbee, so in some cases the protocol can be disturbed by them. However, the nRF24L01 can operate on 83 channels of 1 MHz bandwidth, and the protocol architecture uses frequency agility with a scheduled channel list to guarantee the robustness of the communication [9]. The chest strap worn on the athlete's chest communicates over ANT+ to transmit the heart rate data; the microcontroller then processes this data and displays the result on an LED strip.

A microcontroller in a Quad Flat No-leads package is used to control and process the heart rate data received over ANT+, chosen for its small size. This microcontroller features an internal oscillator block with resonator modes up to 32 MHz, three external clock modes up to 32 MHz, programmable code protection and self-programming under software control [13].

Initially, the microcontroller calculates the maximum heart rate (HRmax) and determines the target heart rate zone desired by the user. HRmax is obtained from the user's age and gender. The formulas used to calculate HRmax for males and females are given in equations (1) and (2), respectively:

Male:   HRmax = 206.9 - (0.67 x age)    (1)
Female: HRmax = 212.9 - (0.67 x age)    (2)

Equation (3) is then used to calculate the target heart rate (THR) zone:

THR = HRmax x % intensity               (3)
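Equations (1)-(3), together with the zone-to-color mapping of Table 3, can be sketched as follows; the function names and the handling of intensities outside the defined zones are our own:

```python
# Training zones per Table 2, with the LED colors of Table 3.
# Each entry: (lower fraction of HRmax, upper fraction, zone name, LED color)
ZONES = [
    (0.50, 0.60, "Zone1: Very light", "Blue"),
    (0.60, 0.70, "Zone2: Light",      "Yellow"),
    (0.70, 0.80, "Zone3: Moderate",   "Green"),
    (0.80, 0.90, "Zone4: High",       "Purple"),
    (0.90, 1.01, "Zone5: Maximum",    "Red"),
]

def hr_max(age, gender):
    """Maximum heart rate, equations (1) and (2)."""
    return (206.9 if gender == "male" else 212.9) - 0.67 * age

def zone_color(heart_rate, age, gender):
    """Map a measured heart rate to its training zone and LED color."""
    intensity = heart_rate / hr_max(age, gender)  # fraction of HRmax, cf. eq. (3)
    for lo, hi, name, color in ZONES:
        if lo <= intensity < hi:
            return name, color
    return "Outside defined zones", "Off"
```

For a 20-year-old male, HRmax is 206.9 - 13.4 = 193.5 bpm, so a measured rate of 145 bpm (about 75% intensity) falls in Zone 3 and lights the strip green.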
The color of the LED strip indicates the target heart rate zone desired by the user. The LED strip chosen is an RGB strip controlled via a serial peripheral interface (SPI): a 5 V multicolor strip driven by HL1606 chips, giving a total of eight possible colors [12], as shown in Table 1. The strip is very bright, so its color can be seen clearly by the user or coach. The complete block diagram of the heart rate monitoring system, from chest strap to bracelet, is shown in Figure 1. The chest strap (transmitter) detects the voltage difference on the skin during every heart beat; the nRF24L01, acting as the wireless link, communicates with the chest strap; finally, the data is processed by the microcontroller and the result is displayed on the LED strip.
Table 1 Available colors in 5V multicolor LED strip

Red   Green   Blue   Visible color
Off   Off     Off    Off
On    Off     Off    Red
Off   On      Off    Green
Off   Off     On     Blue
Off   On      On     Cyan
On    Off     On     Magenta
On    On      Off    Yellow
On    On      On     White
Fig. 1 Block diagram of the heart rate monitoring system: chest strap, nRF24L01 wireless link, microcontroller, LED strip
III. RESULT AND DISCUSSION

The prototype of the color-coded heart rate monitoring system was successfully designed. The LED strip shows different colors that indicate the range of the current heart rate. The heart rate range is classified into five zones; each zone has its own intensity, determined as a percentage of the maximum heart rate, and the zone used depends on the exercise training the user desires. Table 2 describes the target heart rate zones [2] and Table 3 shows the corresponding LED strip colors.

As can be seen from Tables 2 and 3, the color code is very helpful for training purposes. For example, if a coach wants to check an athlete's exertion, this can be detected simply by looking at the color of the LED indicator, such as red for the maximum zone. This invention gives the coach more flexibility and an easier monitoring technique than manually counting the athlete's heart beat. An example of how this portable heart rate monitoring system is worn on the wrist is shown in Figure 2, although the size still needs to be reduced to be more practical for athletes.
Table 2 Description for target heart rate zone

Zone 1: Very light (50-60% intensity). For recovery and cool-down exercises throughout the training season.
Zone 2: Light (60-70%). Everybody, for long training sessions during base training periods and for recovery exercises during the competition season.
Zone 3: Moderate (70-80%). Runners progressing towards events or looking for performance gains, particularly for half and full marathon training.
Zone 4: High (80-90%). Experienced runners, for all-year-round training of varying length; becomes more important during the pre-competition season.
Zone 5: Maximum (90-100%). Very experienced and fit runners; short intervals only, usually in final preparation for short running events.
Table 3 Different color of LED strip based on heart rate zone

Zone 1: Blue
Zone 2: Yellow
Zone 3: Green
Zone 4: Purple
Zone 5: Red

Fig. 2 Example of the device worn on the wrist

IV. CONCLUSIONS

In conclusion, the portable color-coded heart rate monitoring system is designed to be noninvasive, lightweight and easily worn, so that athletes feel comfortable and their activities are not impeded. The device also allows a coach to obtain information about an athlete's heart rate condition simply by watching the LED indicator on the bracelet worn by the athlete. Future work should focus on reducing the size of the LED strip/bracelet so it can be worn as a wristband, and on improving the data communication between transmitter and receiver.

ACKNOWLEDGMENT

The authors would like to sincerely thank the Ministry of Higher Education Malaysia (Sport Section) for its support and Universiti Teknologi Malaysia (UTM) for the facilities and equipment provided for this research.

REFERENCES
[1] Daoming Z. and Branko C., "Monitoring physiological signals during running exercise", Proc. of 23rd Annual EMBS Int. Conference, 2001, pp. 3332-3335.
[2] A. Linoby, "Color-coded Heart Rate monitoring bracelets (COBRA) to guide specific exercise training intensity", RMI-UiTM (Patent), 2009, pp. 1-12.
[3] A. Linoby, M. A. Amat, "Color-coded Heart Rate Monitoring Bracelets (COBRA)", Innovation and Invention Design (IDD), 2010, Malaysia.
[4] Conlan R.W, "Activity monitoring apparatus with configurable filters", US Patent No. 5197489, 1993.
[5] Moore J. and Zouridakis G., "Biomedical technology and devices handbook", CRC Press, 2004.
[6] Burke E., "Precision heart rate training", Human Kinetics Publishers, 1998.
[7] Mahmood N.H, Zakaria N.A, Jalaludin S.N, Sharifmudin N.B, "Relationship study of heart rate and systolic blood pressure for healthy people", Proceedings of 2nd International Conference on Mechanical and Electronics Engineering, ICMEE, 2010, Volume 2, pp. 179-182.
[8] Edwards C., "Healthcare's hi-tech lifelines", Journal of Engineering & Technology, 2008, Volume 3/14, pp. 36-39.
[9] Chen Z., Hu C., Liao J., Liu S., "Protocol architecture for wireless body area network based on nRF24L01", IEEE International Conference on Automation and Logistics, 2008, pp. 3050-3054.
[10] Adidas miCoach, "The interactive personal coaching and training system", article accessed on 10 January 2011 at http://www.adidas.com/my/micoach/.
[11] Parker S, "History of heart rate monitor", article accessed on 10 January 2011 at http://www.articlesbase.com/.
[12] Datasheet of HL1606, "Dual RGB LED controller".
[13] Datasheet of PIC12F1822/16F182X, "Microchip Controller".
Author: Fauzan Khairi Che Harun
Institute: Universiti Teknologi Malaysia
Street: UTM Johor Bahru
City: Johor Bahru, Johor
Country: Malaysia
Email: [email protected]
Control Brain Machine Interface for a Power Wheelchair

C.R. Hema1 and M.P. Paulraj2

1 Akshaya College of Engineering and Technology, Anna University of Technology Coimbatore, India
2 School of Mechatronic Engineering, University Malaysia Perlis, Perlis, Malaysia
Abstract–– Controlling a power wheelchair using a brain machine interface (BMI) requires sufficient subject training. A neural network based BMI design using motor imagery of four states is used to control the navigation of a power wheelchair. Online experiment results are presented for two indoor navigation protocols. Two trained subjects participated in the study. Performance of the real-time experiments is assessed based on the targets reached, the time taken to reach the targets, and the completion of a given navigation protocol. A performance rate of 85.7% is achievable by both subjects in the real-time experiments.

Keywords–– Brain Machine Interfaces, Neural Networks, EEG Signal Processing.
I. INTRODUCTION

A BMI is a digital link between the human brain and an external device such as a wheelchair, a prosthesis or a computer. BMIs come in handy when the peripheral nervous system and muscular system are affected by motor neuron disorders. Classification of hand motor imagery (MI) is a popular modality for designing a BMI. The ability of an individual to control his EEG through imaginary motor movements enables him to control devices: the spatiotemporal pattern changes in the EEG can be recognized and associated with the subject's imagined hand movements. A BMI acquires and decodes the MI signals and translates human thoughts into actions. The EEG electrodes are mainly placed on the scalp overlying the sensorimotor cortex, where the recorded EEG signals are sensitive to the movements.

Basic and clinical research has yielded detailed knowledge of the signals that comprise the EEG. For the major EEG rhythms, and for a variety of evoked potentials, the sites and mechanisms of origin and the relationships with specific aspects of brain function are no longer wholly obscure. Numerous studies have demonstrated correlations between EEG signals and actual or imagined movements, and between EEG signals and mental tasks [1]. MI modifies the neuronal activity in the primary sensorimotor areas in a way very similar to that observed with real executed movements [1,2]. MI is also the most popular methodology
employed by the majority of BMI researchers, for example for robot control [3], a virtual keyboard [4] and a simulated wheelchair [5]. This can be attributed primarily to the purely cognitive nature of these methods, as opposed to the stimulus required by the P300 and evoked EEG-potential methods [1]. With adequate training and motivation, the majority of subjects can learn to control the intensities of specific frequency bands, which can be used as a control signal for an external device [6]. Motor imagery has been studied as a way to translate the EEG signal into left and right movements of a computer cursor, and different methods of analyzing the EEG signals have been proposed in the literature [7,8]. Our goal is to use motor imagery to control the stop, forward, left and right movements of a power wheelchair. The BMI is calibrated using synchronous experiments [9]. Features are extracted from the Mu, Beta and Gamma rhythms of the raw MI signals, and a neural network classifier is used to identify the four task signals. Offline and real-time experiments are conducted to validate the performance of the BMI algorithm.
II. METHODS

A. Synchronous Experiments

Synchronous experiments are conducted to calibrate the BMI system, using MI signals recorded through synchronous navigation protocols, and a generalized classifier model is determined. Two healthy volunteer subjects (S1 and S2), aged 16 and 46, participated in the experiments. MI signals for the four tasks (relax, forward, left and right hand movements) are recorded as per the protocol given below:

Task 1, Relax: The subject is requested to relax as much as possible and to think of nothing in particular, similar to a meditative state. This task is considered the baseline task and is used as the stop control measure of the EEG.

Task 2, Forward: For this task an upward arrow is shown on the computer monitor. Subjects are requested to fixate on the monitor showing the upward arrow and imagine moving both arms in a forward direction, like moving a joystick with
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 287–291, 2011. www.springerlink.com
C.R. Hema and M.P. Paulraj
both arms; the subjects are requested to hold the thought for three seconds.
EEG signals are recorded using an ADInstruments PowerLab amplifier and two gold-plated cup electrodes. MI signals for the four tasks are recorded from the C3 and C4 locations as per the International 10-20 standard. The EEG signals are amplified and sampled at 200 Hz. At the time of data recording the subjects are free from illness or medication. In this experiment artifacts such as eye blinks were not removed. The EEG is recorded for 3 seconds for each task per trial. Band-pass filtering is applied to the raw signals to extract the Mu, Beta and Gamma bands, using Chebyshev IIR filters. Band power features of six frequency components from 8 Hz to 40 Hz are extracted. A dynamic Elman neural network (ENN) is trained using a back propagation (BP) training algorithm. The BP training algorithm involves three stages: the feed-forward of the input training pattern, the calculation and back propagation of the associated weight error, and the weight adjustments. The ENN is modeled using 6 input neurons, 9 hidden neurons and 4 output neurons. The input data are normalized using a binary normalization algorithm [12]. Training is conducted until the average error falls below 0.01 or a maximum iteration limit of 1000 is reached; mean square error is used as the stopping criterion. 200 data samples, chosen randomly, are used in this experiment: 75% of the samples are used to train the classifier to recognize the four motor tasks, and the network is tested with the remaining 25%. The ENN has an average classification accuracy of 94.86% and a maximum classification accuracy of 98.12%. The modeled ENN is tested on a prototype power wheelchair for real-time navigation in an indoor environment. The power wheelchair is provided with a collision avoidance system and a shared control algorithm to avoid accidents during testing.
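The band-power feature extraction described above can be sketched as follows. The paper does not list the exact six band edges within 8–40 Hz or the form of the binary normalization [12], so the ones below are illustrative assumptions, and a simple FFT periodogram stands in for the Chebyshev IIR filter bank:

```python
import numpy as np

FS = 200           # sampling rate used in the paper (Hz)
TRIAL_SECONDS = 3  # one MI trial per task

# Illustrative sub-bands spanning 8-40 Hz; the paper extracts six
# band-power features but does not state the exact band edges.
BANDS = [(8, 12), (12, 16), (16, 20), (20, 26), (26, 32), (32, 40)]

def band_power_features(x, fs=FS):
    """One band-power feature per sub-band for a single-channel trial."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)        # periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS])

def normalize(features):
    """Min-max scaling into [0, 1]; a stand-in for the paper's binary
    normalization algorithm [12], whose exact form is not given here."""
    span = features.max() - features.min()
    return (features - features.min()) / span if span > 0 else features * 0.0

# Example: a synthetic trial dominated by a 10 Hz Mu-band oscillation
t = np.arange(FS * TRIAL_SECONDS) / FS
feats = normalize(band_power_features(np.sin(2 * np.pi * 10 * t)))
print(feats.argmax())  # the 8-12 Hz (Mu) feature dominates
```

The six resulting values would form the input vector to the 6-9-4 ENN classifier.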
Task 3 – Left: The subjects are requested to fixate on the monitor showing a left arrow, to imagine moving their left hand in the direction of the arrow, and to hold the thought for three seconds. Subjects are also instructed to repeat the same process in each trial.

Task 4 – Right: The subjects are requested to fixate on the monitor showing a right arrow, to imagine moving their right hand in the direction of the arrow, and to hold the thought for three seconds.

III. ONLINE EXPERIMENTS

In the second phase of the experiments the modeled ENN is employed for real-time power wheelchair control. The output of the classifier is translated into control signals through a BMI interface to operate a power wheelchair. MI signals are sent to the interface every 3 s by the subject. The subjects are given two protocols to drive the power wheelchair using the BMI in an indoor environment. The navigation protocols are shown in Figure 1. The subjects are required to drive the wheelchair following the pre-specified path (direction indicated by arrows) from the starting point at 1 to the finish point at 8. The shared control is activated only when the subject fails to execute a desired mental state; this ensures that the subject does not get stuck in a location if unable to execute the required mental state.

Fig. 1(a) & (b) Real-time experiment trajectories: (a) protocol 1; (b) protocol 2

The protocols are designed to verify the ability of the subjects to maneuver the wheelchair around obstacles. The trajectory covers a distance of 17.5 m. The protocols involve a basic level of automatic control by the wheelchair, incorporated primarily for safety. The trajectories require the execution of all four mental states. Subjects undergo a training session to master navigation control of the power wheelchair using the BMI before attempting the protocols. The real-time navigation results are logged for the time taken to reach each target and the total distance traversed in each trial. Results of the best trials of the subjects are shown in Table 1; a hyphen indicates a missed target.
IFMBE Proceedings Vol. 35
Control Brain Machine Interface for a Power Wheelchair
Fig. 2 Trajectory achieved for Navigation Protocol 1: (a) Subject S1 on Day 2; (b) Subject S1 on Day 3; (c) Subject S2 on Day 3
Fig. 3 Trajectory achieved for Protocol 2: (a) Subject S2 on Day 2; (b) Subject S2 on Day 3; (c) Subject S1 on Day 3

Navigation performance is based on the targets reached and the time taken to reach them. The time taken to reach target points 2 to 8 and the total distance traversed are logged for each session. Additionally, the trajectory is plotted manually to study the deviation from the protocols. During the experimental sessions on day 1, the subjects had difficulty in switching between the motor tasks; this was more pronounced when they were close to an obstacle. Their performance was better when obstacles were not present in their immediate path. In most sessions the subjects were not able to complete the trajectory.
Performance of the subjects was seen to improve on the second day. Figures 2(a) and 3(a) show the trajectories (sequences of arrows) of subjects S1 and S2 for protocols 1 and 2 respectively; two sessions are indicated in each figure, distinguished by black and gray arrows. It can be observed from the figures that the trajectory does not follow the defined protocol. Subjects required more time to change state to the left or right signals. On some occasions the subjects had difficulty in switching between states, which caused the robot chair to rotate 360 degrees at a given location. On day 3 the subjects were able to complete the trajectory; however, they missed one or two targets. It was also observed that subjects preferred to avoid the obstacles and took a straighter path through the middle of the room; one of the subjects reported that he preferred to execute left or right signals prior to reaching an obstacle, rather than reach the obstacle and then turn.

Table 1 shows the time logs, distance covered and performance of the two subjects for protocol 1 and protocol 2, for one of the sessions; the third column shows the time taken by the subjects to manually drive the robot chair using the joystick. The time taken to reach each target location is also recorded; a hyphen indicates a target missed by the subject. From Table 1 it is observed that both subjects bypassed at least one target location. One challenge for the subjects was the absence of obstacles in the center, since all obstacles were placed at the sides of the room; the subjects are seen to follow a path away from the obstacles. An example is shown in Figure 2(a): on day 2, subject S1 is not able to switch immediately between the forward and left mental states. However, on day 3 the subject was able to achieve the trajectory to some extent, as shown in Figure 2(b). Both subjects were able to execute all four mental states to drive the wheelchair. The best trajectory result of subject S2 for protocol 1 is shown in Figure 2(c). Similarly, for protocol 2, the variations in the trajectory performed by the two subjects can be observed in Figure 3. As in the previous case, the performance of the subjects is seen to improve on day 3.

Table 1 Real-time navigation performance of subjects using the BMI power wheelchair

Subject  Protocol  Manual drive,        Time taken to reach target locations (s)      Total     Performance  Total distance
                   1 to 8 (s)           2    3    4    5    6    7    8               1-8 (s)   (%)          covered (m)
S1       2         36.67                -    47   28   -    60   19   65              219       71.4         17.4
S1       3         35.25                60   55   -    40   19   10   20              204       85.7         16.3
S2       2         49.26                10   5    64   -    60   11   80              230       85.7         22.3
S2       3         45.35                89   20   -    53   94   14   19              289       85.7         19.2
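The performance percentages in Table 1 follow directly from the missed-target counts; a quick check, assuming seven scored target locations (2 to 8):

```python
def navigation_performance(target_times):
    """Percentage of targets reached; None marks a missed target
    (a hyphen in Table 1)."""
    reached = sum(t is not None for t in target_times)
    return round(100.0 * reached / len(target_times), 1)

# Example rows of Table 1: two misses -> 71.4 %, one miss -> 85.7 %
s1_row = [None, 47, 28, None, 60, 19, 65]
s2_row = [10, 5, 64, None, 60, 11, 80]
print(navigation_performance(s1_row), navigation_performance(s2_row))
```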
IV. CONCLUSION

A motor imagery based BMI design to control a power wheelchair is presented. Synchronous and asynchronous experimental results are presented and discussed. Real-time
navigation results show the feasibility of designing a four-state BMI using only two electrodes. Real-time experiments validate the proposed BMI design for real-time navigation. Though the results for real-time navigation are significant, it is also observed from the experimental results that the subjects require more training sessions to execute left and right turns and to use the stop control. Results also show that the shared control plays a significant role in the completion of the navigation protocol. Future work will focus on real-time experiments with more complex protocols. EEG signals have potential applicability beyond the restoration of lost movement and rehabilitation in paraplegics, and could enable normal individuals to have direct brain control of external devices in their daily lives.
REFERENCES

[1] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller and T.M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[2] G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain-computer communication," Proceedings of the IEEE, vol. 89, no. 7, pp. 1123-1134, 2001.
[3] Jose del R. Millan, Frederic Renkens, Josep Mourino and Wulfram Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1026-1033, 2004.
[4] B. Obermaier, G.R. Muller and G. Pfurtscheller, "Virtual keyboard controlled by spontaneous EEG activity," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, pp. 422-426, 2003.
[5] Johan Philips, Jose del R. Millan, Gerolf Vanacker, Eileen Lew, Ferran Galan, Pierre W. Ferrez, Hendrik Van Brussel and Marnix Nuttin, "Adaptive shared control of a brain-actuated simulated wheelchair," Proc. of IEEE 10th Intl. Conf. on Rehabilitation Robotics, Netherlands, pp. 408-414, 2007.
[6] S. Cososchi, R. Stungaru, A. Unureanu and M. Ungureanu, "EEG features extraction for motor imagery," Proc. 28th IEEE EMBS Annual Intl. Conf., USA, pp. 1142-1145, 2006.
[7] G. Pfurtscheller, C. Neuper, A. Schlogl and K. Lugger, "Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters," IEEE Trans. Rehabilitation Engineering, vol. 6, no. 3, pp. 316-325, Sep 1998.
[8] C.W. Anderson, E.A. Stolz and S. Shamsunder, "Discriminating mental tasks using EEG represented by AR models," IEEE-EMBC and CMBEC, pp. 875-876, 1997.
[9] C.R. Hema, M.P. Paulraj, S. Yaacob, A.H. Adom and R. Nagarajan, "Motor imagery signal classification for a four state brain machine interface," International Journal of Biomedical Sciences, vol. 3, no. 1, pp. 76-81, December 2008.
[10] D.G. Domenick, "International 10-20 Electrode Placement System for Sleep," 1998. http://members.aol.com/aduial/1020sys.html
[11] A.P. Engelbrecht, Computational Intelligence: An Introduction, John Wiley and Sons Ltd., 2002.
[12] S.N. Sivanandam and M. Paulraj, Introduction to Artificial Neural Networks, Vikas Publishing House, India, 2003.
Design and Development of Microcontroller Based ECG Simulator A.D. Paul, K.R. Urzoshi, R.S. Datta, A. Arsalan, and A.M. Azad Department of Electrical and Electronic Engineering, BRAC University, Dhaka, Bangladesh
Abstract— For the training of doctors, as well as for the design, development and testing of automatic ECG machines, a subject with a known heart abnormality is essentially required. An ECG simulator is an electronic device used to simulate such a subject for the above-mentioned purposes [1]. The significance of the ECG simulator is that it replaces the human subject. The simulator is a useful tool for electrocardiograph calibration and monitoring, and can be incorporated in educational tasks and in clinical environments for early detection of faulty behavior. The device is based on a microcontroller and generates the basic ECG wave with variable BPM, and arrhythmia waves, using a lookup table. The signals can be used for testing, servicing, calibration and development of ECG monitoring instruments. This machine is much cheaper than the commercial machines available in the market.

Keywords— Lookup table, PWM, variable BPM option, arrhythmia.
I. INTRODUCTION

The electrocardiogram (ECG) is a graphic recording of the electrical potentials rhythmically produced by the heart muscle. The simulator described here produces a suitable artificial signal. Industrial simulators are rather expensive and are not manufactured in Bangladesh. The purpose of this work is to design a simulator which uses a microcontroller chip and a network of capacitors, resistors and inductors to generate the signals, and which will be very cheap and easily available. Our objective was to make the simulator generate the basic ECG wave with a variable beats per minute (BPM), and the two common arrhythmia waves.
II. THE HEART'S ELECTRICAL SYSTEM

A. Basic ECG Waves

The electric potential generated by the heart appears throughout the body and can be measured across its surface. The typical ECG waveform is shown in figure 1 [1]. The signal is characterized by peaks and valleys labeled with the successive letters P, Q, R, S, T and U [1]. In figure 2 [1] we can see the different wave shapes obtained from different parts of the heart. Adding all these waves using a Fourier series gives the basic ECG wave shown in figure 1.
Fig. 1 Basic ECG wave

The electrocardiogram is composed of waves and complexes. The waves and complexes in the normal sinus rhythm are the P wave, PR interval, PR segment, QRS complex, ST segment, QT interval and T wave; the intervals are shown in figure 1. The P wave duration is normally less than 0.12 second [2] and its amplitude is normally less than 0.25 mV [1]. The duration of the QRS complex is normally 0.06 to 0.1 second [2]. It is measured from the beginning of the Q wave to the end of the S wave. The QRS voltage can be as small as 0.5-0.7 mV. The duration of the T wave is measured from the beginning of the wave to its end. The area encompassed by the T wave may be a little smaller or larger than that encompassed by the QRS complex; it is usually about two-thirds that of the latter. The upstroke of the T wave is less steep than the downstroke. The wave following the T wave is called the U wave. It is not always seen in normal ECG waves; it is common in slow heart rates and reflects heart abnormality. The PR interval is measured from the beginning of the P wave to the beginning of the QRS complex. The difference between the intervals as measured to the beginning of the Q wave and as measured to the R wave is usually about 0.02 second, but may be as much as 0.04 second [1]. The QT interval is measured from the beginning of the Q wave of the QRS complex to the end of the T wave [2]. The duration of the QT interval varies with age, gender, and heart rate; it should not exceed 0.40 second. The duration of the ST segment represents the amount of time during which the ventricular muscle is depolarized [1]. It is determined by measuring the interval from the end of the S wave to the beginning of the T wave.
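The normal limits quoted above lend themselves to a simple range check; the sketch below encodes only the durations stated in the text (illustrative, not a clinical rule set):

```python
# Normal duration limits (seconds) taken from the text above
NORMAL_LIMITS = {
    "P":   (0.0,  0.12),   # P wave < 0.12 s
    "QRS": (0.06, 0.10),   # QRS complex 0.06-0.1 s
    "QT":  (0.0,  0.40),   # QT interval should not exceed 0.40 s
}

def interval_is_normal(name, duration_s):
    """True when the measured interval lies within the quoted limits."""
    lo, hi = NORMAL_LIMITS[name]
    return lo <= duration_s <= hi

print(interval_is_normal("QRS", 0.08))  # True
print(interval_is_normal("QT", 0.45))   # False
```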
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 292–295, 2011. www.springerlink.com
Fig. 2

Fig. 5
B. The Cardiac Arrhythmias

Only the two most common arrhythmias are discussed here: sinus tachycardia (100 to 160 BPM), shown in figure 3, and sinus bradycardia (less than 60 BPM), shown in figure 4 [3].
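Using the BPM ranges above, rhythm labeling reduces to threshold checks; a minimal sketch using only the figures quoted in the text (rates outside the quoted ranges are simply labeled "normal" here):

```python
def classify_rhythm(bpm):
    """Label a heart rate using the BPM ranges quoted above."""
    if bpm < 60:
        return "sinus bradycardia"
    if 100 <= bpm <= 160:
        return "sinus tachycardia"
    return "normal"

# The simulator's built-in rates (see Section IV)
print(classify_rhythm(38))   # sinus bradycardia
print(classify_rhythm(136))  # sinus tachycardia
print(classify_rhythm(78))   # normal
```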
A. Lookup Table

The lookup table is a mathematical table used to generate the waves. We built the lookup table by drawing each ECG wave on graph paper. The number of samples on the x-axis can be chosen between 128 and 1024, according to convenience. The y-axis resolution is 256 levels; using 256 levels on the y-axis generated a wave of amplitude 2.4 volts.

B. Generation of the Basic ECG and Arrhythmias
Fig. 3
We calculated the lookup table for the ECG wave and used a C program to generate the ECG waves. We downloaded the program into the microcontroller and, using the filter network, observed the output on a digital oscilloscope. We measured the time period of the wave shape and from it calculated the number of beats per minute (BPM). A selector switch allows choosing between three options: the normal ECG wave, sinus tachycardia of 136 BPM, and sinus bradycardia of 38 BPM. A variable BPM option (60 BPM, 78 BPM and 120 BPM) for the normal ECG wave is provided using a 10 kΩ variable resistor.
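The lookup-table construction and BPM timing described above can be sketched as follows. The pulse shape below is an illustrative stand-in, not the paper's hand-digitized ECG, and the 256-entry table size is one of the allowed choices (128–1024):

```python
import math

N_SAMPLES = 256   # x-axis resolution (the paper allows 128-1024)
LEVELS = 256      # y-axis resolution (8-bit timer compare values)

def make_lookup_table(wave, n=N_SAMPLES, levels=LEVELS):
    """Quantize one period of `wave` (a function on [0, 1)) to 8-bit codes."""
    samples = [wave(i / n) for i in range(n)]
    lo, hi = min(samples), max(samples)
    return [round((s - lo) / (hi - lo) * (levels - 1)) for s in samples]

def sample_interval_us(bpm, n=N_SAMPLES):
    """Timer interval between table entries so one full pass lasts one beat."""
    return 60_000_000 / bpm / n

# Illustrative stand-in waveform: a narrow pulse roughly mimicking an R wave
table = make_lookup_table(lambda u: math.exp(-((u - 0.5) / 0.05) ** 2))
print(len(table), min(table), max(table))   # 256 0 255
print(round(sample_interval_us(60)))        # ~3906 us per entry at 60 BPM
```

Changing the playback interval, rather than the table itself, is what gives the variable-BPM option.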
Fig. 4
III. PROJECT OVERVIEW

The microcontroller (ATmega32) is programmed to generate the normal ECG wave and the bradycardia and tachycardia waves using the TCCR1A and TCCR1B registers. TCCR1A supports four different modes; to generate PWM we use mode number 3, called fast PWM. The output compare register (OCR) holds the value against which the timer count is compared to set the output [4]. The output from the microcontroller is fed into an RC low-pass filter network, which gives curvature to the square-wave digital pulses. As the duty cycle varies, the low-pass filter charges while the PWM signal is on and discharges while it is off, generating the analog output voltage. The circuit used is shown in figure 5.
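The RC low-pass smoothing of the PWM output can be checked numerically. The sketch below is a Python simulation for illustration only (the actual firmware is C on the ATmega32); the PWM frequency and RC constant are illustrative assumptions, as neither is stated in the text:

```python
V_SUPPLY = 5.0  # assumed logic-high level (V)

def simulate_pwm_rc(duty, f_pwm=62_500.0, rc=1e-3, sim_time=0.02, dt=1e-7):
    """Final output of a first-order RC low-pass fed an ideal PWM wave.

    Forward-Euler integration of dy/dt = (x - y) / RC, where x toggles
    between V_SUPPLY and 0 according to the duty cycle.
    """
    y, t = 0.0, 0.0
    period = 1.0 / f_pwm
    while t < sim_time:
        x = V_SUPPLY if (t % period) < duty * period else 0.0
        y += dt / rc * (x - y)
        t += dt
    return y

# A 48 % duty cycle settles near 0.48 * 5 V = 2.4 V, matching the
# lookup-table amplitude mentioned in Section III.A
print(round(simulate_pwm_rc(0.48), 2))
```

The settled output tracks duty cycle times supply voltage, which is why stepping OCR through the lookup table reproduces the ECG shape at the filter output.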
IV. RESULTS

Once the power supply and the output leads are connected, the selector switch allows choosing the desired output wave shape. The simulator can generate the normal ECG wave at three BPM settings of 60, 78 and 120, shown respectively in Fig. 7, Fig. 8 and Fig. 9, tachycardia of 136 BPM (Fig. 10), and bradycardia of 38 BPM (Fig. 11). Fig. 6 gives a clear idea of the operation of the ECG simulator.
Fig. 6
Fig. 11
V. CONCLUSIONS
Fig. 7
We have successfully generated the basic ECG wave with a variable BPM option and the two arrhythmia waves. The same simulator can be further developed to have a 3-lead or a 12-lead output, so that all 12 leads of an ECG machine can be connected to the simulator. Commercially available ECG simulators cost in the range of 200 USD to 1000 USD [5]; these simulators have many more features and are far more sophisticated. Our ECG simulator fulfills the features of a low-range simulator at a far lower cost: only about 11.35 USD. A microcontroller-based ECG simulator has not been implemented in Bangladesh as yet. If it can be manufactured locally it can be used in hospitals and for medical research work. It could also open a new door for the biomedical engineering sector of Bangladesh.
ACKNOWLEDGEMENT
Fig. 8
We would like to thank BRAC University for providing the funds for this project. We would also like to thank our teachers, fellow students and staff of the department of electrical and electronic engineering for their endless support.
Fig. 9

Fig. 10

REFERENCES
1. Gurpinder Kaur, Design and Development of Dual Channel ECG Simulator and Peak Detector, Master's Thesis, Thapar Institute of Engineering & Technology, Deemed University, 2006.
2. McGill, The EKG Waveform [Accessed 2nd August 2010].
3. Edwin G. Zalis and Mary H. Conover, Understanding Electrocardiography: Physiological and Interpretive Concepts.
4. ATmega32 datasheet.
5. www.amazon.com
6. Mayank Gupta, ECG Simulator, Chameli Devi Institute of Technology & Management.
7. Paul J. Michalek, An Authentic ECG Simulator, University of Central Florida, 2006.
8. Muhammad Shah Noor bin Minir, Shamsun Nahar and Tahsina Farah, Automatic Detection of Premature Ventricular Contraction Beat in ECG Signal, Bangladesh University of Engineering & Technology, March 2009.
9. A.E. Martínez, E. Rossi and L. Nicola Siri, Microprocessor-based Simulator of Surface ECG Signals, 16th Argentine Bioengineering Congress and 5th Conference of Clinical Engineering.
10. Candan Caner, Mehmet Engin and Erkan Zeki Engin, The Programmable ECG Simulator, published online 25 March 2008, Springer Science + Business Media, LLC, 2008.
11. J.G. Webster (1998), Medical Instrumentation: Application and Design, 3rd edition, Toronto: John Wiley and Sons.
12. L. Cromwell and F.J. Weibell, Biomedical Instrumentation and Measurements.
13. M.Y. Vasilevich (2004), "A Generator for Testing Signals of Electroencephalographs and Electrocardiographs," Biomedical Engineering, 38(3), 162-4.
14. Wikipedia, 2010, The Human Heart [Accessed 2nd August 2010].
15. American Heart Association [Accessed 6th December 2010]. Web: http://www.americanheart.org/presenter.jhtml?identifier=34
16. Cardiovascular Physiology Concepts, Electrocardiogram (EKG, ECG) [Accessed 2nd August 2010].
17. Heart and Circulatory, Tachycardia, Harvard Medical School for InteliHealth. Source: Aetna InteliHealth.
18. Dale Technology, ECG/Arrhythmia Simulator - DALE13, 5200 Convair Drive, Carson City, NV 89706. Web: http://www.daletech.com/main/product_info.php
19. M311 ECG Simulator, Fogg System Company, Inc., 15592 East Batavia Drive, Aurora, CO 80011.
20. Heart Electrical Activity, HeartSite.com. Web: http://www.heartsite.com/html/electrical_activity.html
21. Lookup Table, Wikipedia. Web: http://en.wikipedia.org/wiki/Lookup_table
Development of CW CO2 Laser Percussion Technique S. Sano, Y. Hashishin, and T. Nakayama Department of Electric and Electronic Engineering, Faculty of Science and Engineering, Kinki University, 3-4-1 Kowakae, Higashi-Osaka City, Japan 577-8502
Abstract— Biological tissue has a complex structure consisting of various kinds of tissues. CO2 laser radiation is strongly absorbed by the water in biological tissue. It is thus used to make incisions in biological tissue. However, the desired incision cannot be made when the state of the biological tissue (especially, its water content) varies during irradiation. It is thus necessary to monitor the state of tissue during laser irradiation. The present study considers laser-induced sound. When a laser beam is used to irradiate biological tissue, the water in the tissue absorbs the laser energy. The temperature of the irradiated region increases, causing the tissue to expand rapidly and eventually explode. Laser-induced sound is simultaneously generated. Laser scalpels are currently used as a form of laser treatment. This laser-induced sound is thought to contain information about the kind of biological tissue being irradiated and its state. In the present study, the kind of the sample and information about its state are obtained by irradiating simulated body tissues by a CW (Continuous Wave) CO2 laser beam and analyzing the generated laser-induced sound. The results reveal that it should be possible to identify the sample being irradiated in real time by analyzing the laser-induced sound characteristics, which differ according to the kind and the state of the tissue. Keywords— Laser Percussion, CW CO2 Laser, Biological Tissue, Laser-induced Sound, Microphone.
I. INTRODUCTION

In addition to conventional scalpels, electric and laser scalpels are used for incising tissue in surgical operations [1]. CO2 lasers are widely used for laser scalpels. The energy of the irradiated CO2 laser beam is strongly absorbed by the water in biological tissue, causing the tissue temperature to rise rapidly and evaporating the water [2]. Biological tissue is removed along with the water. In addition to incising, laser scalpels can perform coagulation necrosis and hemostasis. When using a conventional scalpel, the feeling transmitted from the tissue to the surgeon's hand through the scalpel can assist the surgeon in identifying the biological tissue being incised, in addition to identification by sight. However, when using a laser scalpel no sensation is transmitted to the surgeon's hand, and thus biological tissue can only be identified by sight. Consequently, healthy tissue is sometimes accidentally irradiated by the laser rather than diseased tissue. Moreover, it is difficult to perform the desired excision because the state of the tissue changes continuously during laser irradiation. These problems can be overcome by using the sound generated by laser irradiation (hereafter, laser-induced sound). Laser-induced sound has been used to monitor the irradiated subject [3, 4]. When CO2 laser pulses are used to irradiate different samples that simulate living tissue, the characteristics of the laser-induced sound depend on the kind of sample being irradiated [5]. In the present study, the kind and state of the irradiated sample were varied, using a CW CO2 laser as the irradiation source.

II. EXPERIMENTAL SETUP

A. Irradiation Sample

The irradiation samples used were gelatin (Nitta Gelatin, G-0384K), pork, spongy bone, cortical bone, meniscus, fat, and bone marrow. These samples were used to simulate living tissue. The gelatin had dimensions 20 mm (h) × 10 mm (w) × 20 mm (t) and a concentration of 10 wt%.

B. Experimental Setup
Fig. 1 shows the experimental setup used in this study. To prevent sound from entering from outside, the irradiation area was enclosed in a sound-proof box. The CW CO2 laser beam (power: 50 W; wavelength: 10.6 μm) was focused onto the sample surface by a ZnSe lens (focal length: 10 cm). The laser irradiation time was 80 ms; it was controlled using a mechanical shutter.

C. Method for Measuring the Laser-Induced Sound Waveform

A microphone was used to measure the laser-induced sound generated by laser irradiation of a sample over a frequency range of 20-70 kHz. The microphone was set at a distance of 10 cm from the sample surface and at an angle of 45° to the surface. Frequency analysis of the generated laser-induced sound was performed using the TDS waveform utility analysis software.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 296–299, 2011. www.springerlink.com
III. ANALYSIS OF LASER-INDUCED SOUND WAVEFORMS OF SAMPLES
Fig. 2 shows the waveform of the laser-induced sound produced by laser irradiation of 10 wt% gelatin. The laser power was 10 W, the irradiated area was 1 mm in diameter, the laser intensity was 1.27 kW/cm2, and the irradiation time was 1 s. The laser-induced sound has an intermittent waveform. Fig. 3 shows expanded plots of the first laser-induced sounds for the six samples (i.e., pork, spongy bone, cortical bone, meniscus, fat, and bone marrow). The laser power was 10 W, the irradiated area was 1 mm in diameter, the laser intensity was 1.27 kW/cm2, and the irradiation time was 80 ms.
Fig. 1 Experimental set up. To prevent external sound from entering, the laser-irradiated area is enclosed in a sound-proof box. The CW CO2 laser beam is focused by a ZnSe lens (focal length: 10 cm)
The laser-induced sound waveform was characterized by (a) the pressure amplitude, (b) the attenuation time, (c) the ratio of the first and third peaks, and (d) the fall time (third peak) (see Fig. 4). These parameters are defined as follows. The pressure amplitude is defined as the difference between the maximum and minimum pressures of the laser-induced sound. The attenuation time is defined as the time for the laser-induced sound pressure to decrease from its maximum value to 1/10th of that value. The ratio of the first to third peaks is defined as the ratio of the first peak height to the third peak height. The fall time is defined as the time for the pressure to change from 0 to its value at the third induced-sound peak. Fig. 5 shows histograms of these parameters for the six samples. Considering fat and spongy bone as an example, it should be possible to distinguish fat and spongy bone
because they have different pressure amplitudes, whereas their attenuation times, ratios of the first and third peaks, and fall times of the third peak are almost the same. Thus, it should be possible to use the above analysis to identify different tissues. Frequency analysis of the induced-sound waveform of pork shown in Fig. 2 was performed between 0 and 15 ms; Fig. 6 shows the result. The frequency with the highest amplitude was 40 kHz. Fig. 7 shows the frequency characteristics of the samples shown in Fig. 3. Frequency analysis was performed between 0 and 120 µs. Spongy bone had many counts between 37 and 50 kHz. Bone marrow had a peak at a frequency of 37 kHz. Meniscus had many counts between 25 and 50 kHz. Fat had peaks at 30 and 43 kHz. Pork had many counts between 24 and 70 kHz. Cortical bone had many counts between 37 and 43 kHz.
Fig. 2 Laser-induced sound waveform by CW CO2 laser. The induced sound has an intermittent waveform. The laser-induced sound generated at a time of 0 ms is termed the first laser-induced sound Laser-induced sound pressure amplitude (V)
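The waveform parameters defined above can be computed directly from a sampled pressure record. The sketch below implements the pressure amplitude and attenuation time for a synthetic decaying tone; the peak-ratio and fall-time measures would additionally need peak picking, and all numeric values are illustrative, not measured data:

```python
import numpy as np

def sound_features(p, fs):
    """Pressure amplitude and attenuation time of a laser-induced sound.

    Amplitude: maximum minus minimum pressure. Attenuation time: from the
    sample of largest |p| until |p| last exceeds 1/10 of that value,
    following the definitions in the text.
    """
    p = np.asarray(p, dtype=float)
    amplitude = p.max() - p.min()
    mag = np.abs(p)
    i_max = int(mag.argmax())
    i_last = int(np.where(mag >= mag[i_max] / 10.0)[0][-1])
    return amplitude, (i_last - i_max) / fs

# Synthetic first induced sound: a 40 kHz tone decaying with a 20 us
# time constant, sampled at 1 MHz (all values are illustrative)
fs = 1_000_000
t = np.arange(int(fs * 100e-6)) / fs
p = np.exp(-t / 20e-6) * np.sin(2 * np.pi * 40_000 * t)
amp, att = sound_features(p, fs)
print(amp, att)  # attenuation time is on the order of 2.3 decay constants
```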
Fig. 3 First laser-induced sound waveforms of various tissues. The samples exhibit different sound amplitudes and waveforms
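The frequency analysis used for Figs. 6 and 7 amounts to locating spectral peaks; a minimal sketch, in which a synthetic 40 kHz test tone stands in for a measured pork waveform (the sampling rate is an assumption, not stated in the text):

```python
import numpy as np

def dominant_frequency(p, fs):
    """Frequency (Hz) of the largest non-DC magnitude in the spectrum."""
    spec = np.abs(np.fft.rfft(p))
    spec[0] = 0.0  # ignore the DC bin
    return float(np.fft.rfftfreq(len(p), 1.0 / fs)[spec.argmax()])

# 15 ms analysis window, as for the pork record in Fig. 6
fs = 1_000_000
t = np.arange(int(fs * 0.015)) / fs
p = np.sin(2 * np.pi * 40_000 * t)
print(dominant_frequency(p, fs))  # 40000.0
```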
Maximum pressure
15
Maximum
10
Time (μs) (a) Pressure amplitude
Count
Minimum Time (μs) (b) Attenuation time
5
0 0 0
20
40
60
80
100
Frequency (kHz)
Third peak
Fall time Time (μs)
First peak (c) Ratio of the first and third peaks
(d) Fall time (third peak)
Fig. 6 Frequency characteristics of pork. Frequency analysis was performed between 0 and 15 ms. The frequency with the highest amplitude is 40 kHz 300
Count
Fig. 4 Analysis of laser-induced sound waveforms (1) pressure amplitude,
200
20
(c) Ratio of the first and third peaks
Fat
1 Fat
Bone marrow
0 Meniscus
Fat
Bone marrow
Meniscus
Cortical bone
Pork
1
2
Cortical bone
2
3
Spongy bone
3
Pork
Falling time (μs)
4
Spongy bone
Peak ratio (V)
Bone marrow
(b) Attenuation time 4
Fat
Pork
0 20 40 60 80 100 Frequency (kHz)
Cortical bone
20 40 60 80 100 Frequency (kHz)
20 40 60 80 100 Frequency (kHz)
Fig. 7 Laser-induced sound frequency characteristics of various tissues. Frequency analysis was performed from 0 to 120 µs Thus, the various simulated tissues can be identified using the above-described frequency analysis. Fat and bone marrow had a low number of counts at the generated frequencies. Spongy bone had different peak frequencies from the other samples. Meniscus and cortical bone had few counts above a frequency of 50 kHz, although pork, meniscus, and cortical bone have similar peak frequencies. This indicates that tissue can be identified using the above-described frequency analysis.
(d) Fall time (third peak)
Fig. 5 Laser-induced sound properties of various tissues. (a) The maximum pressure amplitude, (b) attenuation time, (c) first and third peaks ratio, (d) fall time (for the third peak)
Meniscus
100 0
Meniscus
0
(a) Pressure amplitude 5
0
Count
300
60
Cortical bone
Fat
Bone marrow
Meniscus
Cortical bone
Pork
0
80 40
Bone marrow
100 0
Spongy bone
1.0
Spongy bone
200
100
Pork
2.0
Attenuation time (μs)
3.0
Spongy bone
Maximum pressure amplitude (V)
(2) attenuation time, (3) ratio of the first and third peaks, (4) fall time
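The frequency analysis described above (a transform of the laser-induced sound over a fixed window, followed by inspection of the dominant peak) can be sketched as follows. This is an illustrative sketch only: the sampling rate, window length, and synthetic 40 kHz test tone are assumptions, not the authors' actual acquisition parameters, and a real implementation would use an FFT library rather than this naive DFT.

```python
import math
import cmath

def peak_frequency(samples, sample_rate_hz):
    """Return the frequency (Hz) of the largest-magnitude DFT bin
    below the Nyquist frequency (DC bin excluded)."""
    n = len(samples)
    best_bin, best_mag = 1, 0.0
    for k in range(1, n // 2):
        # Naive DFT of bin k; an FFT would be used in practice.
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    return best_bin * sample_rate_hz / n

# Illustrative check: a 40 kHz tone sampled at 200 kHz for 200 samples
fs = 200_000
tone = [math.sin(2 * math.pi * 40_000 * t / fs) for t in range(200)]
print(peak_frequency(tone, fs))  # 40000.0
```

Applied to a recorded waveform, the returned peak would correspond to the dominant frequency used here to discriminate tissue types.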
IV. CONCLUSION

An intermittent laser-induced sound waveform was generated by each of the various samples, and the waveform varied depending on the kind of tissue. This suggests that diseased tissue and its condition could be detected in real time by analyzing the laser-induced sound. The laser irradiation parameters could also be controlled based on the laser-induced sound, making it possible to adjust the incision volume in real time. If this laser percussion technique were realized in practice, it might allow even relatively inexperienced doctors to perform laser treatment safely and reliably.
REFERENCES

1. Kubo U (1983) Weekly Igaku no Ayumi 124, Ishiyaku Pub. Inc., pp 352-357 (in Japanese)
2. Niemz M (1996) Laser-Tissue Interactions. Springer-Verlag, pp 64-65
3. Kurita T, Ono T, Morita N (2000) Journal of Materials Processing Technology 97:168-173
4. Kercel S W, Kisner R A, Klein M B, Bacher G D, Pouet B (1999) Proceedings of SPIE 3852:81-92
5. Sano S (2005) Report on the Topical Meeting of the Laser Society of Japan, RTM-338, pp 19-24 (in Japanese)
ACKNOWLEDGEMENT

This research was supported by a Grant-in-Aid for Scientific Research (C) in Japan.
Electric Field Measurement for Biomedical Application Using GNU Radio

I. Hieda1 and K.C. Nam2

1 National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba-city, Japan
2 Department of Medical Engineering, Yonsei University College of Medicine, Seoul, Korea
Abstract— The authors are studying a technique for measuring the inside of the human body using a simple electromagnetic method. An antenna a few centimeters square generates an electric field at radio frequency (RF), and an antenna of the same type measures the electric field intensity; the two antennas are located 15 to 50 cm apart. Although the method does not provide fine resolution, as a consequence of its principle, its safety and economy allow it to be applied to medical screening devices and welfare equipment. Experiments and simulations have been performed to reveal the basic characteristics of the method. One of the technical issues suggested by the experiments was the instability of the raw received signals. Because the signal changes caused by the human body were small, improving stability was essential for medical application. This paper reports a preliminary experiment in which the human body was measured by a new experimental system. The previous experimental system consisted of a general-purpose spectrum analyzer that only measured the amplitude of the input signal in a certain frequency range. The new measurement system, introduced in this paper, consists of a GNU Radio compliant peripheral and GNU Radio software. GNU Radio is an open-source project with which users can build special-purpose receivers and transmitters in conjunction with a simplified peripheral. The measurement results indicated the advantages of the new experimental system with GNU Radio.

Keywords— electric field intensity, biomedical measurement, permittivity, GNU Radio, SDR.

I. INTRODUCTION

The authors are developing an electromagnetic measurement technology that can be applied to biomedical imaging [1-4]. Technological advances have brought high-level medical equipment that contributes to medical care and welfare. On the other hand, such high-level, large-scale medical equipment is one of the financial burdens of most developed countries. In addition, those devices rarely deliver benefits to people in developing countries. From this point of view, simple and easy-to-use equipment for medical and health care is desired.

Figure 1 shows an overview of the authors' measurement system. A portion of the human body is scanned by a weak electric field at radio frequency (RF), and the distribution of moisture and fat inside the body is imaged from the measured signals. This simple, safe, and cost-effective method could be applied to medical screening and welfare applications, including abdominal fat measurement (as in abdominal fat CT), urine volume sensing, and dehydration alarms. Experiments and simulations have been performed to reveal the basic characteristics of the method [4-6]. One of the technical issues suggested by the experiments was the instability of the raw received signals. Because the signal changes caused by the human body were small, improving stability was essential for medical application. This paper reports a preliminary experiment in which the human body was measured by a new experimental system. The previous experimental system consisted of a general-purpose spectrum analyzer that only measured the amplitude of the input signal in a certain frequency range. In contrast, the new measurement system consists of a GNU Radio compliant peripheral and GNU Radio software [7]. GNU Radio is an open-source project with which users can build special-purpose receivers and transmitters with a user-specific configuration in conjunction with a simplified peripheral.

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 300–304, 2011. www.springerlink.com

Fig. 1 Overview of the experiment
Fig. 2 Measurement system built with GNU Radio Companion (the GUI of GNU Radio; upper) and FFT plot of the received signal (lower)

The advantages of the GNU Radio measurement system were discussed in comparison with the former system, which consisted of a generic spectrum analyzer.
II. METHOD

A. Overview of the Measurement

Figure 1 also describes the measurement configuration. A male subject, whose age, weight, and height were 52 years, 60 kg, and 169 cm, respectively, stood still on the moving table. The transmitting antenna was made of a pair of copper plates, 30 mm square, mounted perpendicularly 40 mm apart from each other. Radio-frequency power of 10 mW at 48 MHz was fed to the antenna so that an electric field was generated around it. The probe, a receiving antenna, detected the electric field intensity, which was influenced by the subject. The two antennas were set 350 mm apart on wooden stands at the height of the subject's breast. The moving table brought the subject,
controlled by the measurement system, so as to make his breast cross the line between the two antennas. The electric field intensity at the receiving antenna changed according to the position of the subject's body. The signal strength was measured both by the newly introduced GNU Radio system and by the spectrum analyzer that had been used in previous experiments.

B. GNU Radio System

GNU Radio is an open-source project for Software Defined Radio (SDR). It includes software drivers, libraries for C++ and Python [8], and a graphical user interface (GUI) for building user-specified SDRs. The project targets peripherals whose specifications are open to the public. The signal received by the probe was fed, via an amplifier, to a Universal Software Radio Peripheral 2 (USRP2) [7], a GNU Radio compliant peripheral. Figure 2 (upper) shows the block diagram of the measurement system in the GNU Radio GUI, which can generate SDR runtime Python scripts. The USRP2 converted the signal to digital data and sent it to the host computer, where the runtime SDR program was running. The received data were filtered and simply dumped to a binary file, as well as monitored by the FFT scope shown in Fig. 2 (lower). The data file was analyzed offline after the measurement.
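The binary dump can then be analyzed offline. Assuming the usual GNU Radio complex dump format of interleaved 32-bit little-endian floats (one I and one Q value per sample) — the filename and the synthetic data below are assumptions for illustration, not the authors' actual data — the signal amplitude can be recovered like this:

```python
import math
import struct

def read_amplitudes(path):
    """Read interleaved float32 I/Q samples (GNU Radio complex dump)
    and return the magnitude of each complex sample."""
    amps = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(8)          # 4 bytes I + 4 bytes Q
            if len(chunk) < 8:
                break
            i, q = struct.unpack("<ff", chunk)
            amps.append(math.hypot(i, q))
    return amps

# Illustrative round trip with synthetic data instead of a real dump:
samples = [(3.0, 4.0), (1.0, 0.0)]
with open("iq_dump.bin", "wb") as f:
    for i, q in samples:
        f.write(struct.pack("<ff", i, q))
print(read_amplitudes("iq_dump.bin"))  # [5.0, 1.0]
```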
Fig. 3 Relation between the amplitude of the received signal (Y) and the distance of the subject from the home position (X), where the data were averaged over the respective 10 measurements
C. Experiment

The table moved 650 mm in a straight line to scan the breast past the antenna pair one-dimensionally. A single measurement took 45 seconds, and the body was measured 10 times. As the power source of the transmitting antenna, a battery-powered oscillator was used, whose output power and frequency were 10 mW and 48 MHz, respectively. For comparison, the same subject was measured 10 times with the spectrum analyzer, using the sweep generator built into the analyzer, as in the previous studies. The other conditions were the same as for the measurement with the GNU Radio system.
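Note that at the 48 MHz operating frequency the free-space wavelength is far larger than the 350 mm antenna spacing, so the probe operates in the near field of the transmitting antenna — consistent with the coarse resolution the method provides in principle. A quick check of that wavelength:

```python
# Free-space wavelength at the 48 MHz transmit frequency from the paper.
c = 299_792_458.0        # speed of light (m/s)
f = 48e6                 # transmit frequency (Hz)
wavelength = c / f
print(round(wavelength, 3))  # 6.246 (m), vs. a 0.35 m antenna spacing
```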
III. RESULTS AND DISCUSSION

Figure 3 shows the relation between the amplitude of the received signal and the distance of the subject from the home position. The ten data sets measured under each condition were averaged and standardized (i.e., divided) by the amplitude at the home position of the moving table. Because the signal changes of the GNU Radio system were remarkably greater than those of the spectrum analyzer, the Y scales of the two measurements differ from each other.
Fig. 4 Relation between the amplitude of the signal and the subject location for each measurement, where (a) GNU Radio and (b) the spectrum analyzer were used, respectively
Although the magnitudes of the signal changes differed, the curves showed a common tendency. After the moving table started from the home position, the electric field intensity decreased. This was caused by the Faraday shielding effect of the subject's body, which was reported in a previous paper [6]. When the table position reached 200 mm, the subject's breast started crossing the centerline between the two antennas. Because the human body contains moisture whose relative permittivity, 50 to 100, is remarkably greater than that of air, the electric field intensity increased [5]. At 370 mm, when the body was located midway between the two antennas, the electric field intensity peaked. The curves were almost symmetric with respect to the 370 mm position, reflecting the symmetry of the measurement setup.

Despite this similar tendency, there were two notable differences between the two measured data sets. One was the magnitude of the signal changes. In particular, the shielding effect observed with GNU Radio was remarkably larger than that observed with the spectrum analyzer. A previous study suggests that unwanted leakage between the signal source and the measurement circuit in the spectrum analyzer weakened the signal changes [4]; because it was a generic measurement device, it was difficult to exclude this unwanted link. The peak heights of the GNU Radio and spectrum analyzer curves, measured from their respective minimum signal strengths, were 0.08 and 0.017, respectively. This difference was also thought to be caused by unwanted signal leakage. The other difference was the fluctuation along the signal curves: the GNU Radio signal was smoother than that of the spectrum analyzer. Figure 4 shows all 10 measured signal intensities under each condition. The signal strengths of the spectrum analyzer were clearly more turbulent than those of the GNU Radio system.
Table 1 shows the variances of the measured data in Fig. 4 at three positions, 220, 350, and 500 mm, corresponding to the first minimum of the signal strength, the peak, and the second minimum, respectively. At the 350 mm table position, the variance of the spectrum analyzer was smaller than that of the GNU Radio system at the 5% significance level. Because the signal changes of the spectrum analyzer were remarkably smaller than those of the GNU Radio system, its variance was smaller no matter how turbulent the signal was. Table 2 shows the variances after the signal strength was standardized by the signal change, i.e., the respective maximum value minus the respective minimum value. All GNU Radio variances were significantly smaller than those of the spectrum analyzer at the 1% significance level.

The spectrum analyzer was a generic measurement device designed to measure relatively strong signals and large signal changes, and it only output 12-bit fixed-point integer data. The method under development causes only small signal changes due to the human body, which are difficult to acquire with such a generic device; generally, specially designed equipment, costing time and money, would be necessary to acquire such data for a biomedical study. On the other hand, GNU Radio was designed for the study and development of communication systems, where precision in the amplitude and phase of the signal is essential. The USRP2, a peripheral for GNU Radio, has two 16-bit A/D converters and outputs 64-bit complex float data. The GNU Radio project offers a GUI and a Python library to build receivers and transmitters easily, according to user specifications, and the user can change the total design of the equipment quickly without additional cost. It is therefore suitable for the study of electromagnetic methods of biomedical measurement as well.
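The standardization used for Table 2 — dividing each measured curve by its signal change (maximum minus minimum) before computing the variance at a given table position — can be sketched as follows. The repeated readings below are invented for illustration; they are not the measured data.

```python
from statistics import pvariance

def standardize_by_signal_change(curve):
    """Scale a measured curve by its signal change (max - min),
    as done for the variances in Table 2."""
    change = max(curve) - min(curve)
    return [v / change for v in curve]

# Ten hypothetical repeated curves; take the value at one table
# position (index 2) from each standardized curve and compute the
# population variance across repetitions.
curves = [[0.10 + 0.01 * k, 0.90, 0.50] for k in range(10)]
values_at_position = [standardize_by_signal_change(c)[2] for c in curves]
print(pvariance(values_at_position))
```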
Table 1 Test for equal variance (standardized by respective maximum value)

X position (mm)      220      350              500
Spectrum analyzer    .0055    .0055 (p<.05)    .008
GNU Radio            .0059    .011             .011

Table 2 Test for equal variance (standardized by respective signal change)

X position (mm)      220                 350                 500
Spectrum analyzer    3.4×10⁻⁵            1.1×10⁻⁴            1.2×10⁻⁴
GNU Radio            3.0×10⁻⁵ (p<.01)    3.0×10⁻⁵ (p<.01)    6.7×10⁻⁵ (p<.01)

IV. CONCLUSION

The results of the experiment suggest that GNU Radio enabled more precise measurement than the generic measurement device. The authors plan to build a prototype measurement system utilizing GNU Radio and to scan the human body two-dimensionally. Suitable applications of the method will be surveyed as the study progresses.

REFERENCES

1. Hieda I, Nam K C, Takahashi A (2004) Basic characteristics of the radio imaging method for biomedical applications. Med Eng Phys 26:431-437
2. Hieda I, Nam K C (2005) 2D image construction from low resolution response of a new non-invasive measurement for medical application. ETRI Journal 27-4:385-393
3. Nam K C, Hieda I (2007) Dielectric measurement using radio imaging method for tomography. Proc ICEBI 2007, Graz, Austria, IFMBE 17, pp 472-475
4. Hieda I, Nam K C (2009) Improvement of the measurement quality of radio imaging method for biomedical application. Proc World Congress on Medical Physics and Biomedical Engineering (WC 2009), Munich, Germany, IFMBE 25/II:128-131
5. Hieda I, Nam K C (2008) FDTD simulation of radio imaging method for biomedical application. Proc BioMed 2008, Kuala Lumpur, Malaysia, IFMBE 21:3-2-5-2
6. Hieda I, Nam K C (2010) The frequency dependence of the effect of the human body conductivity in the radio imaging method for medical application. Proc 6th World Congress of Biomechanics (WCB 2010), Singapore, IFMBE 31, pp 1562-1565
7. GNU Radio at http://gnuradio.org/
8. Python Programming Language at http://www.python.org/

Author: I. Hieda
Institute: National Institute of Advanced Industrial Science and Technology (AIST)
Street: 1-1-1 Higashi
City: Tsukuba-city
Country: Japan
Email: [email protected]
EZ430-Chronos Watch as a Wireless Health Monitoring Device I.N.A. Mohd Nordin, P.S. Chee, M. Mohd Addi, and F.K. Che Harun Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310, Skudai, Johor
Abstract— This paper describes a wireless health monitoring system, developed in LabVIEW, which transmits a patient's body signals wirelessly to an eZ430-Chronos sport watch. Vital signs such as heart rate and percentage of oxygen saturation are wirelessly transmitted to the programmable eZ430-Chronos watch to ease patient monitoring. The watch, used as a monitoring device, is worn by the doctor or any other individual responsible for monitoring the patient's condition (e.g., a nurse or one of the patient's family members). The system alerts the user immediately should any abnormality occur in a monitored patient, so that the patient can be attended to instantly. This improves remote patient monitoring within the range of the wireless protocol used.
Ming et al. [4] proposed a portable ECG measurement device based on an MSP430 MCU, which is small, light, and low-powered and is used to analyze arrhythmia. The device uses a 16-bit MSP430 MCU integrated with amplifier circuits, a filter circuit, a wireless transmitting-receiving circuit, data management, a keyboard, and an LCD display. In this design, the electrodes used are the type that stick onto the patient's chest, making the device suitable for long-term monitoring. Even aggressive movements during surgery would not introduce much error into the measurement [4].
Keywords— wireless monitoring device, heart rate, oxygen saturation, eZ430-Chronos.
I. INTRODUCTION

Fig. 1 Texas Instruments eZ430-Chronos programmable sport watch [5]

Research in the field of telemedicine has developed and expanded vastly over the years; at present, wireless monitoring devices are widely used to enhance the provision of emergency medical care. Such applications provide increased flexibility and portability for physiological parameter measurements in areas where cables are not feasible. In 1999, Pettis et al. [1] evaluated the efficiency of handheld computer screens used by cardiologists. Electrocardiogram (ECG) signals from 12 leads attached to the patient were transmitted from remote locations to the cardiologists. They believed that if the interpretation of ECG data from the newly designed computer-based system were reliable, the treatment time for cardiac patients would be shortened. Diagnoses by cardiologists were proven to be significantly similar when viewing LCD-displayed and paper-displayed ECGs [1]. Because of patients' physical restrictions when connected to cables and machines, Taylor et al. [2] designed wireless transducers for cardiotocography via radio frequency (RF) telemetry, with no wire connection between the transmitter and the receiver; patients thus feel more comfortable and at ease to move. There are also many other existing studies on the wireless transmission of ECG signals, where the transmission is always to a laptop or a personal computer (PC) [3-4].
Figure 1 shows the eZ430-Chronos sport watch, a Texas Instruments product that has been widely used in various wireless applications. Zhou Z. [6] developed the watch into an electronic door locking device: the watch was programmed to communicate wirelessly with a device attached to the door so that the door could be locked and unlocked. This is done by tapping on the watch's 3-axis accelerometer, which is located on the watch's liquid crystal display (LCD). This proves that the wireless communication of the watch can be developed further, as long as the RF transceiver used is suitable for low-power wireless applications.

The objective of this project is to expand the application of telemedicine with a slightly different approach, by utilizing a current wireless wearable device, the eZ430-Chronos watch from Texas Instruments, as a wireless health monitoring device. A health monitoring system is developed in LabVIEW for the wireless transmission of heart rate and oxygen saturation percentage to the watch using RF telemetry.
II. METHODOLOGY

Figure 2 shows the block diagram of the proposed watch-based wireless health monitoring system, which consists of an ECG processing circuit, a pulse oximeter module,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 305–307, 2011. www.springerlink.com
a computer transmitter station, and a receiver station, which is the eZ430-Chronos watch. The ECG electrodes acquire heart signals, while the pulse oximeter module obtains the oxygen saturation signal from the patient. The analog input (ECG signal) and UART input (oxygen saturation digital signal) are then fed into a computer and processed by LabVIEW. The processing includes the calculation of the heart rate and percentage of oxygen saturation and the wireless transmission of the data to the receiver station (sport watch).
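The heart-rate calculation performed in LabVIEW is not detailed in the paper. As an illustrative sketch only — not the authors' implementation — the basic idea (detect R-peaks, average the R-R intervals, convert to beats per minute) might look like this, with the threshold and sampling rate as assumed values:

```python
def heart_rate_bpm(ecg, fs, threshold=0.5):
    """Estimate heart rate from an ECG trace by simple threshold
    crossing: each rising crossing of `threshold` is taken as one
    R-peak, and the mean R-R interval is converted to bpm."""
    peaks = [i for i in range(1, len(ecg))
             if ecg[i - 1] < threshold <= ecg[i]]
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic check: one narrow "R-peak" per second at 100 Hz sampling
fs = 100
ecg = [1.0 if i % fs == 0 else 0.0 for i in range(5 * fs)]
print(heart_rate_bpm(ecg, fs))  # 60.0
```

A production system would use a more robust detector (e.g., filtering and adaptive thresholds), but the bpm conversion from R-R intervals is the same.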
Fig. 2 Block diagram of the wireless health monitoring system

A. Transmitter Part

ECG signal processing circuit: In this system, the ECG signal is acquired from the electrodes attached to the body. The small ECG signal, usually in the range of millivolts, is amplified and connected to the computer via the input port of a Data Acquisition (DAQ) card.

Pulse oximeter module: The digital signal from the module is connected to the computer for data integration using a UART.

LabVIEW (computer): The ECG signal is processed by a system developed in LabVIEW to calculate and display the heart rate and oxygen saturation values. The program is also designed to send packets of heart rate and oxygen saturation data from the LabVIEW transmitter station via an RF module (USB access point), which is wirelessly paired with the transceiver built into the eZ430-Chronos watch.

B. Receiver Part

eZ430-Chronos watch: The eZ430-Chronos sport watch can be reprogrammed, in C, using Code Composer Studio software to display the heart rate and oxygen saturation data received from the LabVIEW transmitter station whenever required. Programming and debugging are simplified by the eZ430 USB emulator that comes with the Chronos sport watch (refer to Figure 3).

Fig. 3 USB emulator for watch programming and debugging

III. RESULTS

Fig. 4 GUI (Graphic User Interface) for the heart rate transmitter station

Figure 4 shows a sample of the amplified ECG signal from the ECG amplifier circuit, which is connected to the ECG electrodes attached to the patient's chest. To verify the accuracy of the program developed in LabVIEW, a patient
simulator, a device that mimics the human ECG signal, is used as a reference. The heart rate calculation developed in LabVIEW was proven reliable, as it gives the same heart rate value as that set on the patient simulator.

A. Transmitter Station

The developed LabVIEW program is able to send the calculated heart rate and oxygen saturation data continuously, in real time, to the eZ430-Chronos watch via a serial port through the USB RF module. The LabVIEW program (in run mode) simultaneously interprets the input data and transmits them wirelessly to the watch.
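The alarm behaviour described in the abstract — alert the wearer when heart rate or oxygen saturation leaves a configurable normal range — reduces to a simple range check. The limits below are illustrative placeholders (the actual ranges are set from the LabVIEW front panel), not clinical recommendations:

```python
def vitals_alarm(heart_rate_bpm, spo2_percent,
                 hr_range=(50, 120), spo2_min=90):
    """Return True when either vital sign is outside its configured
    normal range (illustrative limits, as set on a front panel)."""
    hr_low, hr_high = hr_range
    return not (hr_low <= heart_rate_bpm <= hr_high
                and spo2_percent >= spo2_min)

print(vitals_alarm(72, 98))   # False: both vitals normal
print(vitals_alarm(140, 98))  # True: heart rate too high
```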
B. Receiver Station

Figure 5 shows the received heart rate value after the data is transmitted from LabVIEW. One additional feature developed on the watch is the alarm mode, which alerts the user wearing the watch if the patient's heart rate or oxygen saturation data is out of the normal range. The abnormal range can be set from the LabVIEW front panel.

Fig. 5 Transmitted sample heart rates of 60 bpm and 120 bpm displayed on the watch

IV. DISCUSSION

One of the advantages of the developed wireless health monitoring system is that the person responsible for monitoring the patient's condition can do so while doing other activities and be alerted instantly when the alarm mode is activated. It would also be useful in clinics, where nurses can be alerted and attend to the patient immediately when the patient's heart rate or percentage of oxygen saturation is out of the normal range.

The disadvantage of the wireless transmission is that the RF access point only transfers data up to a distance of 20 m from the watch. Currently, the monitoring system that has been developed is restricted to patients who are critically ill and not able to walk around. The patient being monitored must be attached to the computer/machine for continuous monitoring, which is a disadvantage in terms of mobility. For future development, we are upgrading the wireless watch monitoring system to a wearable miniature processing circuit, controlled by a PIC16 microcontroller (a much smaller processor than a computer), that can transmit three inputs (heart rate, oxygen saturation, and temperature) or more to the watch wirelessly. The miniaturized circuit can be placed in a sling bag worn by the patient and is more portable.

V. CONCLUSION

Wireless communication can be established between the RF module connected to the computer (transmitter station) and the eZ430-Chronos watch (receiver station). The heart rate value calculated from the ECG signal and the percentage of oxygen saturation acquired from a patient can be wirelessly transmitted to and displayed on the watch. The wireless health monitoring system enables the user to monitor the patient's heart rate and percentage of oxygen saturation remotely within the range of the wireless protocol.

ACKNOWLEDGEMENT

The authors would like to sincerely thank the Ministry of Higher Education Malaysia for funding support and Universiti Teknologi Malaysia (UTM) for the facilities and equipment provided for this research.

REFERENCES

1. Pettis K, et al. (1999) Evaluation of the efficacy of hand-held computer screens for cardiologists' interpretations of 12-lead electrocardiograms. American Heart Journal 138(4):765-770
2. Taylor J, et al. (1999) Towards multi-patient leadless and wireless cardiotocography via RF telemetry. Medical Engineering & Physics 20(10):764-772
3. Chen F, et al. (2008) SmartPad: a wireless, adhesive-electrode-free, autonomous ECG acquisition system. 30th Annual International IEEE EMBS Conference, Vancouver, British Columbia, Canada, August 20-24, 2008
4. Ming H, Yajun Z, Xiaoping H (2008) Portable ECG measurement device based on MSP430 MCU
5. Texas Instruments Incorporated (2009) eZ430-Chronos development kit
6. Zhou Z (2010) Secure wireless door lock. eZ430-Chronos Projects. https://ziyan.info/2010/01/secure-wireless-door-lock/
Face Detection for Drivers’ Drowsiness Using Computer Vision V.V. Dixit, A.V. Deshpande, and D. Ganage Sinhgad College of Engineering, Pune, SKN College of Engineering Pune
Abstract— In the present study, a vehicle driver drowsiness warning system using an image processing technique with fuzzy logic inference is developed and investigated using MATLAB; however, the processing speed on hardware is the main constraint of this technique. The principle of the system proposed in this paper, using the OpenCV (Open Source Computer Vision) library, is based on real-time facial image analysis to warn the driver of drowsiness or inattention and thereby prevent traffic accidents. The facial images of the driver are taken by a camera installed on the dashboard in front of the driver. An algorithm and an inference are proposed to determine the level of fatigue by measuring the eyelid blinking duration, using face detection to track the eyes, and to warn the driver accordingly. If the eyes are found closed for 5 or 8 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal. The system is also able to detect when the eyes cannot be found. The present paper gives an overview of the different techniques for detecting a drowsy driver and the significance of the problem, face detection techniques, the drowsiness detection system structure, the system flowchart, and an introduction to OpenCV. The proposed system may be evaluated for the effect of drowsiness warning under various operating conditions. We are working to obtain experimental results that will allow the proposed expert system to work effectively for increasing safety in driving. The details of the image processing technique and its characteristics are also studied.

Keywords— Drowsiness Detection, Face Detection, OpenCV, Drowsiness Monitoring, Warning.
I. INTRODUCTION

Due to the increase in the number of automobiles in recent years, the problems created by accidents have become more complex as well [12]. Official investigation reports of traffic accidents point out that dangerous driving behaviors, such as drunk and drowsy driving, account for a high proportion of all accidents, so it is necessary to develop an appropriate driver drowsiness and alertness system that can directly improve driving safety [13]. However, several complicated issues are involved in keeping an eye on drivers at all times to wipe out all possible hazards. Driver fatigue is a significant factor in a large number of vehicle accidents. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be
attributed to fatigue-related crashes [12]. The development of technologies for detecting or preventing drowsiness at the wheel is a major challenge in the field of accident avoidance systems. Because of the hazard that drowsiness presents on the road, methods need to be developed for counteracting its effects.

The aim of this paper is to develop an algorithm for a drowsiness and alertness detection system. The focus is placed on designing a system that accurately monitors the open or closed state of the driver's eyes in real time [5]. By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident [3] [5] [6]. Detection of fatigue involves a sequence of images of a face and the observation of eye movements and blink patterns. The analysis of face images is a popular research area, with applications such as face recognition, virtual tools, and human identification security systems [3]. This work focuses on the localization of the eyes, which involves examining the entire image of the face and determining the position of the eyes by means of a suitable image processing algorithm. Once the position of the eyes is located, the system determines whether the eyes are open or closed, and thereby detects fatigue.

This paper proposes a computer vision based driver drowsiness system, covering the use of OpenCV in image processing, the interfacing of a web camera to OpenCV, Haar-classifier feature-based face detection methods, eyelid identification, and the development of an alertness system that uses the eyelid status to warn the driver with an alarm.

The remainder of this paper is organized as follows. Section II describes the different techniques for detecting a drowsy driver and the significance of the problem. Section III describes the proposed system structure, the system flowchart, and an introduction to OpenCV. Section IV describes the interfacing of a web camera to OpenCV. Section V describes Haar-classifier feature-based face detection techniques. Section VI presents the results, and finally we draw conclusions in Section VII.
II. DROWSINESS DETECTION TECHNIQUES

There are several types of drowsiness detection, but the possible techniques for detecting drowsiness in drivers can
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 308–311, 2011. www.springerlink.com
be generally divided into the following categories: sensing of physiological characteristics, sensing of driver operation, sensing of vehicle response, and monitoring the response of the driver [11].

A. Monitoring Physiological Characteristics

Of all the methods, the techniques that are best in terms of accuracy are those based on human physiological phenomena [12]. This technique is implemented in two ways: measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking; and measuring physical changes such as sagging posture, leaning of the driver's head, and the open/closed status of the eyes [4]. The first approach, while most accurate, is not realistic, since sensing electrodes would have to be attached directly onto the driver's body, and would hence be annoying and distracting to the driver. In addition, long periods of driving would result in perspiration on the sensors, diminishing their ability to monitor accurately. The second approach is well suited to real-world driving conditions, since it can be non-intrusive, using optical sensors or video cameras to detect changes.

B. Remaining Methods

Monitoring of driver operation and vehicle behavior can be implemented by observing the steering wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement. These too are non-intrusive ways of detecting drowsiness, but they are limited by vehicle type and driver conditions [11]. The final technique for detecting drowsiness is monitoring the response of the driver. This involves periodically requesting the driver to send a response to the system to indicate alertness. The problem with this technique is that it eventually becomes tiresome and annoying to the driver [7].
III. PROPOSED SYSTEM STRUCTURE AND SYSTEM FLOWCHART
Microprocessors have improved greatly in recent years, and a large 2D image can easily be processed by such a microprocessor. MATLAB can be used to process the image, and image-analysis techniques using MATLAB are widely accepted and applied. The problem is that when working on a real-time video bit stream, or on frames of images taken from video, the processing time for these images is very long, and the system can therefore fail to run in real time. A solution to this long processing time is proposed in this paper. Figure 1 shows the structure of the vision-based driver alertness system, and Figure 2 shows the flow chart of the entire process of analyzing whether a warning should be signaled. The proposed design is implemented with the help of OpenCV, using Microsoft Visual Studio 2008 Express Edition as the platform for the OpenCV library. First, the system obtains images of the driver's face from a video bit stream produced by an imaging system, i.e., a CCD or digital camera installed on the dashboard in front of the driver. The acquired face is then cropped to locate only the eyes. To determine the eye area in each image, the face is detected first, and then the upper and lower eyelids are extracted. Using the eyelids, the system detects the driver's eye blinks, i.e., eye opening and closing. Finally, using this eye-blink information, the system presumes how much drowsiness the driver is feeling.
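The final step, presuming drowsiness from the eye-blink information, is not spelled out in the paper. The sketch below shows one plausible rule (warn once the eyes remain closed for a chosen number of consecutive frames); the threshold value and function name are our illustrative assumptions, not the authors' implementation.

```python
def drowsiness_alarm(eye_states, closed_frames_threshold=5):
    """Return True if the eyes stay closed for at least
    `closed_frames_threshold` consecutive frames.

    eye_states: iterable of booleans, one per video frame,
    True meaning the eye was judged open in that frame."""
    consecutive_closed = 0
    for is_open in eye_states:
        if is_open:
            consecutive_closed = 0          # blink ended, reset the counter
        else:
            consecutive_closed += 1
            if consecutive_closed >= closed_frames_threshold:
                return True                 # eyes closed too long: warn
    return False
```

A real system would feed this function the per-frame open/closed decisions produced by the eyelid-extraction stage.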
C. Significance of the Problem i) A drowsy/sleepy driver is unable to determine when he/she will have an uncontrolled sleep episode. ii) Fall-asleep crashes are very serious in terms of injury severity. iii) An accident involving driver drowsiness has a high fatality rate, because perception, recognition, and vehicle-control abilities decline sharply while falling asleep. iv) Driver drowsiness detection technologies can reduce the risk of a catastrophic accident by warning the driver of his/her drowsiness.
Fig. 1 Vision Based Driver Alertness System
V.V. Dixit, A.V. Deshpande, and D. Ganage
To initialize capture from a camera, the command CvCapture* capture = cvCaptureFromCAM(0) is used; it means capture images from video device #0. A frame from the video is then retrieved into an image structure declared as IplImage* img. OpenCV also supports obtaining images from several cameras simultaneously [2][15].
V. FACE DETECTION TECHNIQUES Face detection is the process of detecting faces in images or videos. Face detection in this project is carried out using the OpenCV library [14]. A. Method of Detecting Face Objects Haar-like classifiers are used to detect the face in an image or video. The classifier is best described as a cascade of boosted classifiers working with Haar-like features.
Fig. 2 Flowchart of the System A. Introduction to OpenCV OpenCV (the Open Source Computer Vision Library) includes over 500 functions implementing computer vision, image processing, and general-purpose numeric algorithms [2]. It is portable and very efficiently implemented in C/C++, and it has a BSD-like license, i.e., it is absolutely free for academic and commercial use [15]. OpenCV matters because the computer-vision market is large and continues to grow, yet there is no standard API (unlike OpenGL and DirectX in graphics, or OpenSSL in cryptography); most CV software is of three kinds:
i) Cascade – the classifier consists of several simpler classifiers, called stages, that are applied to a region of the image until the region is either rejected by a stage or passes all stages. ii) Boosted – the classifiers at every stage are themselves complex and are built from basic classifiers using different boosting techniques. iii) Haar-like features – Haar is a wavelet transform that detects certain types of features. The OpenCV face detection algorithm uses the Haar-like features shown in Figure 3.
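The three properties (i)–(iii) can be illustrated with a toy sketch of the cascade's control flow: stages are tried in order, and a region is rejected as soon as one boosted stage scores below its threshold, which is what makes the cascade fast on the many non-face regions. The stage weights and thresholds here are invented for illustration and are not OpenCV's trained values.

```python
def cascade_classify(region_features, stages):
    """Apply boosted stages in order; reject the region as soon
    as one stage's weighted feature sum falls below its threshold.

    region_features: per-feature responses for a candidate region.
    stages: list of (weights, threshold) pairs, one per stage."""
    for weights, threshold in stages:
        score = sum(w * f for w, f in zip(weights, region_features))
        if score < threshold:
            return False    # rejected early: most non-face regions exit here
    return True             # passed every stage: likely a face

# Two toy stages with invented weights and thresholds
stages = [([1.0, 0.5], 0.8), ([0.2, 1.5], 1.0)]
```

Early stages in a real cascade are cheap and reject most regions, so the expensive later stages run only on a small fraction of the image.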
i) Research code (slow, unstable, with independent and incompatible data types for every library/toolbox); ii) very expensive commercial toolkits (like Halcon, MATLAB + Simulink, etc.); iii) specialized solutions bundled with hardware (video surveillance, manufacturing control systems, medical equipment). A standard library would make the development of new applications and solutions much simpler. OpenCV is specially optimized for Intel architectures: i) it creates new usage models by achieving real-time performance for quite “heavy” algorithms (like face detection); ii) it makes Intel platforms attractive for CV developers.
IV. INTERFACING OF WEB CAMERA TO OPENCV The OpenCV library supports capturing images from a camera or from a video file (AVI).
Fig. 3 Haar-like features used by OpenCV OpenCV provides a number of object detection functions. First, a dataset in the form of an XML file available within OpenCV, called haarcascade_frontalface_alt2.xml, is loaded into memory. This file contains information about human faces. Once the
file is loaded, a function named cvHaarDetectObjects is called. This function finds the rectangular regions that are most likely to be faces in each image frame captured by the camera in real time, and returns those regions as a sequence of rectangles. Each time, it considers overlapping regions in the image frame, and it also applies some heuristics to reduce the number of analyzed regions, such as Canny pruning. After it has finished and collected the candidate rectangles, it groups them and returns a sequence of average rectangles for each sufficiently large group. The sizes of the rectangles that represent faces are measured, and the largest rectangle is assumed to be the user's face. If the size of the face is greater than a threshold value, the system automatically logs the user in. Using this technique, OpenCV can detect the images that contain faces.
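The selection step just described (measure the returned rectangles, take the largest, and compare it to a threshold) can be sketched as follows; the (x, y, w, h) tuple format and the minimum-area value are illustrative assumptions, standing in for the detector's returned CvRect sequence.

```python
def pick_face(rectangles, min_area=40 * 40):
    """rectangles: list of (x, y, w, h) tuples as returned by the
    face detector for one frame. Return the largest rectangle if
    its area exceeds the threshold, else None (no face large
    enough, i.e., no user close enough to the camera)."""
    if not rectangles:
        return None
    largest = max(rectangles, key=lambda r: r[2] * r[3])
    return largest if largest[2] * largest[3] >= min_area else None
```

The area threshold plays the role of the "size greater than a threshold" test in the text: a distant face produces a small rectangle and is ignored.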
VI. RESULTS This section presents the results of interfacing the web camera to OpenCV, shown in Figure 4, and of the face detection algorithm using Haar-like classifier features, shown in Figure 5.
Fig. 4 Webcam interfacing to OpenCV
Fig. 5 Face detection using Haar-like features
VII. CONCLUSION In this paper, we proposed an image-processing-based system structure for detecting a driver's face, for drowsiness detection and driver alertness, using computer vision. Image processing techniques with OpenCV can achieve highly accurate and reliable face detection. Using the library's built-in functions, we first tested the initialization of the web camera. The Haar-based classifier features of OpenCV are then used to detect the face in the video bit stream. The algorithm was tested on the Microsoft Visual Studio 2008 Express Edition platform.
REFERENCES [1] Rafael C. Gonzalez and Richard E. Woods, “Digital Image Processing”, Pearson Education: Low Price Edition, Seventh Indian Reprint, 2001. [2] Gary Bradski and Adrian Kaehler, “Learning OpenCV: Computer Vision with the OpenCV Library”, O'Reilly Media, Inc., First Edition, September 2008. [3] Ilkwon Park, Jung-Ho Ahn and Hyeran Byun, “Efficient Measurement of Eye Blinking under Various Illumination Conditions for Drowsiness Detection System”, IEEE 18th International Conference on Pattern Recognition Proceedings (2006). [4] Mai Suzuki, Nozomi Yamamoto, Osami Yamamoto, Tomoaki Nakano and Shin Yamamoto, “Measurement of Driver Consciousness by Image Processing – A Method for Presuming Driver Drowsiness by Eye-Blinks Coping with Individual Differences”, IEEE International Conference on Systems, Man, and Cybernetics (2006), Taiwan, pp. 2891-2896. [5] Pooneh R. Tabrizi and Reza A. Zoroofi, “Open/Closed Eye Analysis for Drowsiness Detection”, IEEE Image Processing Theory, Tools & Applications Proceedings (2008). [6] Jian-Da Wu and Tuo-Rung Chen, “Development of a drowsiness warning system based on the fuzzy logic images analysis”, Expert Systems with Applications 34 (2008), pp. 1556-1561. [7] Helmi Adly Mohd. Noor and Rosziati Ibrahim, “Image Processing Using Eyelid Blinking and Mouth Yawning to Measure Human's Fatigue Level”, IEEE 3rd Asia International Conference on Modelling & Simulation (2009), pp. 326-331. [8] Prof. Steven Zucker and Dan Andrei Iancu, “Eye Detection Using Variants of the Hough Transform”, Computational Vision and Biological Perception Project, Spring 2004, pp. 1-12. [9] Hyun Hoi James Kim, “Survey Paper: Face Detection and Face Recognition”, Proceedings, International Conference on Computing and Video Processing, pp. 2245-2252. [10] Vladimir Vezhnevets, Vassili Sazonov and Alla Andreeva, “A Survey on Pixel-Based Skin Color Detection Techniques”, Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia. [11] Nusirwan Anwar bin Abdul Rahman, Kit Chong Wei and John See, “RGB-H-CbCr Skin Color Model for Human Face Detection”, Faculty of Information Technology, Multimedia University. [12] Drowsy Driving and Automobile Crashes: http://www.nhtsa.dot.gov/people/injury/drowsy_driving1/drowsy.html [13] Tang-Hsien Chang, Chun-Hung Lin, Chih-Sheng Hsu and Yao-Jan Wu, “A Vision Based Vehicle Behavior Monitoring and Warning System”, IEEE 2003, pp. 443-448. [14] Paul Viola and Michael J. Jones, “Robust Real-Time Face Detection”, International Journal of Computer Vision 57(2), 2004, pp. 137-154. [15] OpenCV, BSD-like license, absolutely free for academic and commercial use, available at: http://www.sourceforge.net/projects/opencvlibrary
Histological Study to Estimate Risks of Radon Inhalation Dose on a Lung Cancer: In vivo A.H. Ismail, M.S. Jaafar, and F.H. Mustafa Radiation and Medical Physics, School of Physics, Universiti Sains Malaysia, 11800 USM, Pulau Penang, Malaysia
Abstract— The aim of this study is to estimate the risks of an inhalation dose (543±26.87 Bq/m3) of indoor radon gas for lung cancer in exposed rabbits, using a histological method. For this purpose, a new exposure technique equipped with nuclear track detectors of type CR-39 (CR-39 NTDs) was designed to expose the rabbits for up to 90 days. To find the optimum exposure time for producing damage in the lungs, the exposure was divided into three durations: 30, 60, and 90 days. Four pieces of radium-226 were used as the source of radon inside the exposure chamber; to estimate a realistic value of the radon concentration, long-term (CR-39 NTD dosimeter) and short-term (RAD7) measurements were used. The results show that at 60 days of inhalation the rabbits' lungs started to develop spots of damaged (cancerous) cells, and at 90 days of exposure the cancer spots increased markedly, in agreement with the principle that longer exposure creates more damage. Keywords— Lung cancer, rabbit, CR-39 NTDs, radium-226, in vivo.
I. INTRODUCTION
When radon (222Rn) and its short-lived decay products are inhaled, the alpha particles emitted by the deposited decay products dominate the radiation dose to lung tissue; these products, especially those attached to small aerosols or remaining in an unattached form, damage sensitive lung cells, thereby increasing the probability of developing cancer. Radon acts mainly as the source of its decay products, which actually deliver the dose to the lungs; however, as a convenient abbreviation, the health effects of radon decay products are often referred to simply as the health effects of radon [1-3]. Inhalation of radon and its short-lived decay products is considered the largest source of human radiation exposure. One third of the deposited decay products from radon are transported from the lungs into the bloodstream; therefore, the blood itself and several other organs are exposed to alpha particles from the decay of radon and its daughters [3]. An alpha particle consists of two protons bound with two neutrons, making it identical to a helium nucleus. It is a heavy nuclear particle, denoted by the Greek letter α. It has a net spin of zero, is a highly ionizing form of particle radiation, and has low penetration and low velocity in matter; therefore, it loses most of its energy within a short distance [4]. The present research investigates the creation of lung cancer by an effective inhalation dose of radon gas via a histological study of a few male rabbits (an in vivo study). Nuclear track detectors of type CR-39 (CR-39 NTDs) were used as the detection technique. More details about the detection mechanism of CR-39 NTDs are given by Nikezic [5].
II. METHODOLOGY
A. Research Materials
The CR-39 detectors used in the present study were produced by INTERCAST EUROPE SRL (Parma, Italy) and cut into 1 × 1.5 cm pieces with an engraved code. Their efficiency was 80.3±1.67% at the optimum etching condition (6 N NaOH at 70 °C for 9 hours of etching) [6]. Radium (226Ra; 5 µCi) was used as the radon source. Healthy male rabbits (3 cases + 1 control) and other equipment were used in the present work.
B. Experimental Procedures
In the first stage, a suitable radon exposure chamber was modified to provide an acceptable life-support system for the rabbits. The exposure chamber, installed in the Biophysics laboratory of the School of Physics, Universiti Sains Malaysia, consists of four radium (226Ra) sources, an electric fan to simulate indoor radon-bearing air, six radon dosimeters equipped with CR-39 NTDs, one radiation dosimeter to measure the radiation dose inside the chamber, and a RAD7 for short-term measurements of the radon concentration, as shown in Fig. 1. In the second stage, three male rabbits (black, from the same family, aged 7-8 months) were placed inside the exposure chamber for different exposure times of 30, 60, and 90 days, as displayed in Fig. 1. After every 30 days, the histological process was carried out on one rabbit. In the third stage, the histological process was performed in the histology laboratory of the School of Biological Sciences, Universiti Sains Malaysia; this process was done under the recommendations of the ethics committee. It consists of:
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 312 – 314, 2011. www.springerlink.com
In addition, lung cancer was observed clearly at 90 days of exposure, as shown in Fig. 5, slide 3. This means that the exposure time, even at a low radiation dose, has an effect on the creation of cancer cells. The present results are therefore in agreement with the essentials of radiation interaction with tissue.
Fig. 1 Simplified photograph of the radon inhalation chamber
Fig. 2 Removal of the Rabbit's Lungs
The histological process consists of: anaesthetization, removal of the organ (lungs), fixation, dehydration, clearing, impregnation (infiltration), embedding, microtoming, and staining.
Fig. 3 Tissue cross-section of the Rabbit’s lung (30 day of exposure to radon gas)
III. RESULTS AND DISCUSSION
The inhalation dose of radon gas depended on the time of exposure. At 30 days of exposure to 456 Bq/m3 of indoor radon, no indication of cancerous tissue appeared in the rabbit's lung, as shown in Fig. 3. Slide number zero represents the lung tissue of the control rabbit, which was not exposed to radon, and slide number one is for the rabbit exposed to a radon concentration of 543±26.87 Bq/m3 for 30 days. Increasing the exposure time to 60 days at the same dose and concentration increased the deformation of the exposed tissue without any indication of cancer, as shown in Fig. 4, slide 2.
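As a rough sense of the exposure magnitudes involved, the cumulative exposure (concentration multiplied by time) implied by the reported concentration can be computed; the Bq·h/m³ figure below is our own arithmetic, not a value stated in the paper.

```python
def cumulative_exposure(concentration_bq_m3, days):
    """Cumulative radon exposure in Bq*h/m^3: concentration times hours."""
    return concentration_bq_m3 * days * 24.0

# 543 Bq/m^3 maintained over the full 90-day exposure
exposure_90d = cumulative_exposure(543.0, 90)  # 1,172,880 Bq*h/m^3
```

The 30- and 60-day groups accumulate one third and two thirds of this figure, which is the quantity that grows with exposure time at fixed concentration.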
IV. CONCLUSIONS
A low inhalation dose of radon gas was estimated for male rabbits (four rabbits: one control + three cases) over 90 days of exposure, using a newly fabricated exposure technique based on nuclear track detectors. A histological process was performed on the rabbits' lungs every 30 days of exposure. The results showed that 30 days of exposure did not create cancer cells in the lungs, while increasing the time to 60 days caused unspecified deformation in the lungs. Moreover, 90 days of exposure clearly created cancer cells. Thus, it can be concluded that a low inhalation dose over a long exposure will create cancer cells in the lungs.
ACKNOWLEDGMENT This study was supported by a grant from Universiti Sains Malaysia (1001/PFIZIK/843099).
REFERENCES
Fig. 4 Lung tissue (number 2) after 60 days of exposure
1. ICRP Publication 92, Relative Biological Effectiveness (RBE), Quality Factor (Q), and Radiation Weighting Factor (wR). 2. Somlai J., Szeiler G., Szabo P., et al. (2009). Radiation dose of workers originating from radon in the show cave of Tapolca, Hungary. Journal of Radioanalytical and Nuclear Chemistry 279, 219-225. 3. Geoffrey P. Jacobs (1998). A review on the effects of ionizing radiation on blood and blood components. Radiation Physics and Chemistry 53, 511-523. 4. Ismail A.H., Jaafar M.S. (2011). Interaction of low-intensity nuclear radiation dose with the human blood: Using the new technique of CR-39 NTDs for an in vitro study. Applied Radiation and Isotopes, in press. 5. Nikezic D., Yu K.N. (2004). Formation and growth of tracks in nuclear track materials. Materials Science and Engineering R46, 51. 6. Ismail A.H., Jaafar M.S. (2011). Design and Construct Optimum Dosimeter to Detect Airborne Radon and Thoron Gas: Experimental Study. Nucl. Instr. and Meth. in Phys. Res. B, in press.
Address of the corresponding author:
Fig. 5 Lung tissue (number 3) after 90 days of exposure
Author: Asaad Hamid Ismail; Institute: Universiti Sains Malaysia, School of Physics; Street: Jalan Sungai Dua; City: 11800 Pulau Penang; Country: Malaysia; Email: [email protected]
Influence of Hair Color on Photodynamic Dose Activation in PDT for Scalp Diseases
F.H. Mustafa1, M.S. Jaafar1, A.H. Ismail1, A.F. Omar1, H.A. Houssein2, and Z.A. Timimi1
1 Universiti Sains Malaysia, Department of Medical Physics, School of Physics, Penang, Malaysia
2 Universiti Sains Malaysia, Department of Engineering Physics, School of Physics, Penang, Malaysia
Abstract— In this study, an attempt has been made to differentiate the photodynamic dose for persons with various hair colors by analyzing, as a hair parameter, the light reflected from the skin surface and epidermis through absorption and scattering by chromophores. The purpose of this study is to show the effect of hair color on laser light in the skin during PDT, and to interpret the change in the light transmission ratio for blonde, light brown, and black hair. The Advanced Systems Analysis Program (ASAP), an optical-engineering software package, was used to create a realistic skin model and analyze the data. Three models of scalp skin were used: scalp skin with blonde, light brown, and black hair. The light transmitted through the skin sample and the incident light were detected with a voxel element. The effect of the optical characteristics of the hairs, such as hair color, on the PDT procedure is demonstrated. The results indicate that absorption of light by the hair reduces the fluence rate and light transmission, in relation to the characteristics of the hair and the melanin concentration in each hair type. The model predicts that black hair gives the minimum fluence rate on the skin surface, lower than light brown and blonde hair. It is concluded that hair absorption is one of the important features to take into account when evaluating the dose; therefore, a new method for dose evaluation from skin curves is proposed for the PDT procedure. Additionally, we present a general model for light scattering, showing that scattering by the skin surface at different penetration depths, together with the scattering and absorption processes in the hair's interior, affects the interaction of the laser and the photosensitizer in the tissue. Keywords— Hair color, PDT, skin and hair optics, psoriasis, actinic keratoses.
I. INTRODUCTION Recently there has been increased interest in laser administration via the skin in PDT, both for local therapeutic effects on diseased skin and for systemic dose delivery. In either case, the chromophore concentrations in the epidermis and the hair parameters need to be taken into account. Regardless of the type of laser delivery to the skin, an understanding of hair-parameter optics becomes a critical factor in developing an optimal dose injection [1]. The effects of anatomical site, gender, age, and race on skin permeability in PDT have been reported in numerous publications [2].
The most common areas for psoriasis in humans are on the scalp [3, 4]. Initially, a series of classical treatments was used to treat this disease, but those attempts failed. Today, phototherapy and PDT have become a regular and effective modality for treating psoriasis and actinic keratoses [4]. In spite of the various topical treatments available for superficial skin disease of the scalp, many patients fail to respond adequately or may develop side effects. The goal of PDT treatment of psoriasis and actinic keratoses is to cure or locally control the disease while minimizing complications in normal tissue and reducing treatment-related pain, by investigating the influence of the optical properties of skin and hair. PDT has now reached the level of being an accepted treatment for a number of diseases, and many countries have approved its use [5]. The number of papers on PDT published worldwide, both clinical and theoretical, seems to be steadily increasing. However, many investigators are evidently unaware of the early work done in this field and hence repeat many previously reported experiments, rather than building on them to enhance the PDT procedure [6]. Chromophores and hairs in the skin behave as strong scatterers of light and affect the surface reflection of the skin. The chromophore concentration of the skin and the hair parameters of human skin have become important factors that change the light fluence rate in PDT systems. There are multiple ways to study skin optics, and several studies on this topic have focused primarily on the light fluence rate in relation to the optics of skin and hair. Hair color is mostly the result of pigments: chemical compounds that reflect certain wavelengths of visible light. There are two main pigments found in human hair. Eumelanin, a dark pigment with an elliptical shape, gives color to brown or black hair; the higher the concentration of eumelanin, the darker the hair [7].
Pheomelanin produces the color in blonde or red hair; the higher the concentration of pheomelanin, the lighter the hair. Unlike eumelanin, pheomelanin is smaller, partly oval, and rod-shaped [7]. Many different research fields have conducted extensive research on the PDT procedure for psoriasis of the skin [6, 8]. The fields of photochemistry, photophysics, photobiology, computer simulation, and bio-optics have been the most active in the study of light therapy in
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 315–319, 2011. www.springerlink.com
skin [9]. Studies in each of these fields provide knowledge and insights regarding different aspects of skin optics. Currently, to obtain an efficient and fast rendering approach, and in particular to enhance PDT and develop a real system for correct dose injection by various means, ASAP software is one of the tools used for this purpose. One way our approach can be improved, in terms of both performance and quality, is by proper use of the light fluence rate, taking into account skin optics and the hair color on the skin surface. Furthermore, our goal is to show the role of hair color and to connect research from these different areas within a single unified framework.
Fig. 1 Skin with the three types of hair color (blonde, light brown, and black), illustrating the direction of the 635 nm laser beam onto the samples
II. METHODOLOGY
A. Software (ASAP) The Advanced Systems Analysis Program (ASAP) is one of the best simulation programs for applying engineering optics to the study of skin optics; it can model skin, hair, and a laser source very well, and it is sensitive enough to obtain good results. With it we could build a skin model of engineering-grade quality and show the effect of hair on tissue, under laser light, on the photodynamic dose. The setup, as shown in Fig. 1, comprises an injection laser, a skin sample, and three types of hair color.
C. Laser Source A red diode laser with a wavelength of 635 nm and a power of 5 mW was used in this research. In the simulation of the laser striking the target, 1,000,000 rays were traced and directed at the target. Inside the skin, a voxel element measured the fluence rate of the injected laser radiation in a very sensitive way.
B. Skin Layers and Hair Modeling The proposed skin tissue model is composed of three layers with distinct optical parameters: the stratum corneum (0.015 mm), the epidermis (0.0875 mm), and the dermal layer (1.8 mm). A hair model on the skin surface was built to investigate the scattering of red laser light by human hair. To study the effect of hair color on the skin in laser-skin interaction, and to show its effect on the photodynamic dose in PDT, the hair was created using the hair-modeling facilities of the Realistic Skin Model (RSM) in ASAP, using the information available in the ASAP data. The hair density was 200 hairs per cm2, the hair length was 2 mm on the skin surface, the hair diameter was 0.1 mm, and the hairs made an angle of 45° with the skin normal. This model was rebuilt with the same features but with different hair colors, changing the color each time.
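For reference, the model geometry described above can be collected in one place; the parameter names below are our own, the values are those stated in the text, and the total thickness is our arithmetic.

```python
# Skin-layer thicknesses (mm), as stated in the text
LAYERS_MM = {
    "stratum_corneum": 0.015,
    "epidermis": 0.0875,
    "dermis": 1.8,
}

# Hair-model parameters, as stated in the text
HAIR = {
    "density_per_cm2": 200,   # hairs per square centimetre
    "length_mm": 2.0,
    "diameter_mm": 0.1,
    "angle_deg": 45.0,        # inclination to the skin normal
}

# Total modelled skin thickness (our arithmetic): 1.9025 mm
total_thickness_mm = sum(LAYERS_MM.values())
```

Keeping the geometry in one structure makes it straightforward to rebuild the model for each hair color, as the text describes, by varying only the color while holding these values fixed.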
III. RESULT AND DISCUSSION This article shows the most common complications and side effects of tissue optics encountered during PDT. The hair parameter affects most people during laser therapy, regardless of ethnic background or skin color [10]. Farhad et al. (2010) found that skin type leads to significant changes in both the transmission and the photodynamic dose activation. Skin may appear with or without hair; the hair may be long or short, of high or low density, and colored anywhere from black to white. Given these limitations, care should be taken to control these variables. Because of the rate-controlling properties of the stratum corneum and epidermis, most researchers tend to use only the epidermis; but depending on the needs of the study, the epidermis layer or the dermis can be used. An attempt has been made to enhance dose injection for skin with each different hair color, which gives a clear indication of the effectiveness of the hair parameter in this subject.
T = (φ/φ0) × 100% (1)
where T is the percentage transmission, φ is the total energy transmitted through the skin sample of thickness Z, and φ0 is the incident energy at the surface of the skin.
While the hair itself stops part of the rays delivered to the target, the various hair colors present on the human body all reduce the red laser light during laser therapy. One view of light in PDT still held by many researchers is that light affects the progress of PDT and influences pain during laser therapy. For example, hair color is certainly one of the major barriers in human skin for light sources, reducing the light intensity with skin depth. The length, color, and style of the hair on the skin surface can together pose a problem for dermatologists. More importantly, the tumor location often determines how well a good dose of light can be directed to the target. The gradual loss of laser power with depth restricts how well light can be delivered through the skin. The fluence rate (φ), an important parameter in light delivery for drug activation, was obtained experimentally as a function of skin depth (Z) for the various hair colors, from the curves of Fig. 2. The differences between these results are due to the extra melanin of light brown and black hair compared to blonde hair. Several factors contribute to reducing the fluence rate from 1278.1 mW·mm-3 at 0.02 mm to 890.84 mW·mm-3 at 0.08 mm: the amount of light in the diffusion region decreases as photons are scattered and absorbed, because the transition of photons from the stratum corneum to the epidermis depends on the chromophores and on the skin depth. Fig. 2b shows the fitted curves; the relation was not linear but exponential, with correlation coefficients of 0.97 to 0.98 for all curves. Curve fitting in MATLAB was used to estimate the transmission through the epidermis and to calculate the photodynamic dose per penetration depth of skin.
The transmittance, or transmission, is the ratio of the fluence rate of light at any point at skin depth Z to the fluence rate at the skin surface. In most cases, the transmission is the primary factor in determining laser penetration across the skin. The transmittance was calculated using Eq. (1).
Fig. 2 Influence of hair color on the fluence rate (mW·mm-3) at various skin depths Z (mm): (a) experimental data, (b) cubic-fitting data
The total incident fluence rate penetrating the skin is divided into reflected, absorbed, transmitted, and scattered portions. Figure 3 shows the transmission of scalp skin for the different hair colors. For skin with blonde hair the transmittance was 92.4%, while for light brown and black hair the transmittance was 91.82% and 90.9%, respectively, at a depth of 0.02 mm. The epidermal transmittance was 67.5% for blonde hair, and 65.6% and 63% for light brown and black hair, respectively, at a depth of 0.08 mm (Fig. 3). The loss of injected dose from skin depth Z = 0.02 mm to 0.08 mm for skin with blonde, light brown, and dark hair is 25, 26, and 28%, respectively.
Table 1 Calculated transmission and photodynamic dose per penetration depth in the skin sample for different hair colors
Fig. 3 Ratio of the transmission of light in skin depth as a function of hair color on the skin
The use of photosensitizers that absorb light at a wavelength of 635 nm has been an important focus of research, since light at longer wavelengths penetrates deeper into the skin. The effective penetration depth of a given wavelength of light is a function of the optical properties of the tissue, such as absorption and scattering, and of the hair concentration. The photodynamic dose per penetration depth (light dose), D*, is the number of photons absorbed by the photosensitizing drug per gram of tissue (ph/g) in the skin. It depends on the depth and on the fluence rate, and was calculated with the following equation [11]: D* = ε D t (λ/(h c ρ)) φ (2). Typically, the administered drug dose is 5 µg/g tissue, the molecular weight of the drug is 600 g/mole, the tissue concentration of the drug is D = 5×10-6 mol/liter, the tissue density is ρ = 1 g tissue/cm3, the extinction coefficient of the drug is ε = 10^4 cm-1·mol-1·liter, the exposure time is t = 600 s, the speed of light is c = 3×10^8 m/s, Planck's constant is h = 6.626×10-34 J·s, the wavelength is λ = 635 nm, and the fluence rate of light φ takes the values shown in Table 1 [11]. Thus, the hair properties are physical phenomena that can change the amount of photodynamic activation in the epidermis and at depth in the skin. These changes are related to the number of photons lost: the reduction of the total delivered light dose comes from the chromophore concentration, the melanin of the hair color, and the skin optical parameters. Table 1 shows the absorption parameters of the laser source and of PDT in skin, including the fluence rate, the laser transmission, and the photodynamic dose per penetration depth (D*) at various skin depths for the different hair colors. It can be observed that there is a decrease in the photodynamic dose for
activation of the photosensitizer in the presence of black and light brown compared to blonde hair. This difference appears clearly in Table 1. The absorption dose of light for different hair color with skin depth 0.02 and 0.08 mm as shown in table demonstrated that skin with blonde hair have higher transmission of light and got higher photodynamic dose per penetration depth than skin of light brown and black hair. The result indicates that, the hair color on the skin surface was affected on the photosensitizer activation in PDT. When the transmission of laser 635 nm wavelength from skin depth Z=0.02 to 0.08 mm. On the other hand, found that at skin depth 0.02 mm the transmission of laser has a higher value as compared with skin depth 0.08 mm for all skin models. This was indicated that the chromophore concentration is responsible to make a barrier for laser transmission. Compared with skin of blond hair, light brown and black hair has a less transmission in skin depth. Accordingly, the photosensitizer is more activating in skin with blonde hair and considerable changing took place in different skin depth. A laser in PDT is a device that produces light of a single wavelength to activate photosensitizer in tissue. These lasers produce continues energy light that is taken up by the tissue and photosensitizer. However, both the skin and hair have a melanin which also absorbs the laser's energy, by this reason some of the amount light is reduced. So the light delivery to the main target has been minimizing. Therefore, the laser light has to be applied long enough to activate the target, but not too long to heat up the surrounding skin, causing damage. The effect factor that helps to determine a good or bad outcome and safety procedure between the laser and tissue interaction in PDT for epidermis disease treatment, is the ratio of melanin in a skin layer and hair color. 
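Equation (2) with the parameter values quoted above can be evaluated directly. This is a minimal sketch: the fluence rate φ comes from Table 1, and the 0.1 W/cm² used in the example call is only an illustrative placeholder, not a value from the paper.

```python
# Numerical evaluation of Eq. (2), D* = eps * D * t * (lambda/(h*c*rho)) * phi,
# using the parameter values quoted in the text. phi = 0.1 W/cm^2 below is an
# assumed placeholder; the real fluence rates are listed in Table 1.

H = 6.626e-34   # Planck's constant (J s)
C = 3.0e8       # speed of light (m/s)

def photodynamic_dose(phi,            # fluence rate (W/cm^2)
                      eps=1.0e4,      # extinction coefficient (L mol^-1 cm^-1)
                      conc=5.0e-6,    # tissue drug concentration D (mol/L)
                      t=600.0,        # exposure time (s)
                      lam=635e-9,     # wavelength (m)
                      rho=1.0):       # tissue density (g/cm^3)
    """Photons absorbed by the photosensitizer per gram of tissue (ph/g)."""
    photon_energy = H * C / lam              # energy of one 635 nm photon (J)
    absorbed = eps * conc * t * phi          # energy absorbed by the drug (J/cm^3)
    return absorbed / (photon_energy * rho)  # photons per gram of tissue

print(f"D* = {photodynamic_dose(0.1):.2e} ph/g")
```

With these values the dose comes out on the order of 10¹⁸–10¹⁹ ph/g, consistent with the magnitudes expected from the AAPM dosimetry formulation cited in [11].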
Influence of Hair Color on Photodynamic Dose Activation in PDT for Scalp Diseases

As shown in the figure, the change in the absorbed light dose appears clearly for each hair color type. From this result we can classify the expected outcome of PDT for a skin lesion according to hair color as follows: skin with blonde hair is a good candidate for PDT irradiation of an epidermal lesion with a red laser, because it does not absorb much light and treatment proceeds without side effects, whereas scalp skin with darker hair (black or light brown) is a poor candidate for PDT, because it absorbs more light, which creates side effects and pain.
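The transmission ordering discussed above can be sketched with a Beer-Lambert-type decay model. Note the effective attenuation coefficients below are hypothetical placeholders chosen only to reproduce the qualitative ordering blonde > light brown > black; they are not the Table 1 values.

```python
import math

# Illustrative only: depth dependence modeled as T(z) = exp(-mu_eff * z).
# The mu_eff values are ASSUMED placeholders, not fitted or tabulated data.
MU_EFF = {"blonde": 20.0, "light brown": 35.0, "black": 50.0}   # cm^-1 (assumed)

def transmission(hair_color: str, depth_cm: float) -> float:
    """Fraction of incident 635 nm light remaining at the given skin depth."""
    return math.exp(-MU_EFF[hair_color] * depth_cm)

for color in MU_EFF:
    # depths of 0.02 mm and 0.08 mm, as in the comparison in the text
    print(color,
          round(transmission(color, 0.002), 3),
          round(transmission(color, 0.008), 3))
```

Under any such model, transmission falls both with depth and with darker hair, matching the trend reported in Table 1.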
IV. CONCLUSION
Photosensitizer activation, the laser, and hair color emerged as key factors that change the photodynamic dose rate, and this is one of the most challenging problems in the field of PDT for the epidermal layer, in terms of both research and development. Taking hair color into account during therapy will increase the quality of PDT treatment in patients. Our results demonstrate that hair color, hair concentration, and penetration depth into the skin affect the absorption and scattering processes and thereby the photodynamic dose in PDT for superficial diseases. The computational simulations show that hair of different colors affects both light penetration and the delivered dose.
ACKNOWLEDGMENT
The authors are grateful to Universiti Sains Malaysia, School of Physics and Department of Medical Physics, for their technical assistance and financial support.
REFERENCES
1. Kohen E, Santus R, Hirschberg JG (1995) Optical Properties of the Skin, in Photobiology. Academic Press, San Diego, pp 303-322
2. Juzenas P, von Beckerath M et al. (2006) Influence of Physiological Parameters on the Production of Protoporphyrin in Human Skin by Topical Application of 5-Aminolevulinic Acid and its Hexylester. J Med Sci 6(4):546-553
3. Taneja A, Racette A, Gourgouliatos Z et al. (2004) Broad-band UVB fiber-optic comb for the treatment of scalp psoriasis: a pilot study. International Journal of Dermatology 43(6):462-467
4. Markham T, Collins P (2001) Topical 5-aminolaevulinic acid photodynamic therapy for extensive scalp actinic keratoses. British Journal of Dermatology 145(3):502-504
5. Sieron A, Kwiatek S (2009) Twenty years of experience with PDD and PDT in Poland: review. Photodiagnosis and Photodynamic Therapy 6(2):73-78
6. Tandon YK, Yang MF, Baron ED (2008) Role of photodynamic therapy in psoriasis: a brief review. Photodermatology, Photoimmunology & Photomedicine 24(5):222-230
7. Yan L, Lian H, Kazumasa W et al. (2005) Comparison of structural and chemical properties of black and red human hair melanosomes. Photochemistry and Photobiology 81(1):135-144
8. Gilchrest B (2010) Photodynamic therapy and selected off-label uses. Supplement proceedings from the 2010: p 10
9. Wilson BC, Patterson MS (2008) The physics, biophysics and technology of photodynamic therapy. Physics in Medicine and Biology 53:R61
10. Mustafa FH, Jaafar MS, Ismail AH et al. (2010) Red diode laser in the treatment of epidermal disease in PDT. International Conference on Biomedical Engineering Systems and Technologies Proceedings
11. AAPM (2005) Photodynamic therapy dosimetry. Journal of Medical Physics: pp 1-30

Author: Farhad Hamad Mustafa
Institute: Universiti Sains Malaysia
Street: USM-11800
City: Penang
Country: Malaysia
Email: [email protected]
Measurement and Diagnosis Assessment of Plethysmographycal Record M. Augustynek, M. Penhaker, J. Semkovic, P. Penhakerova, and M. Cerny VSB – Technical University of Ostrava, FEECS/Department of Measurement and Control, Ostrava, Czech Republic
Abstract— Evaluation of the plethysmographic record is nowadays a standard non-invasive method for assessing the condition of vessels, used in angiology and occupational therapy. The goal of this project was to realize an alternative method of telemetric data measurement from the Criticare 504 USP pulse oximeter. The pulse oximeter supports digital and analog outputs for data communication, over which the ECG, the plethysmographic curve, blood oxygenation, and alarm settings are transmitted. A high-resolution telemetric data transfer was realized for digital processing on a PC. Keywords— Pulse wave, Pulse oximeter, Plethysmography, ECG, Beat, Saturation, Communication, RS-232 interface, Microprocessor.
I. INTRODUCTION
The plethysmographic method is based on passing light beams through tissue. Light is emitted from a source with constant intensity placed on the inner side of the phalanx; on the opposite side is a sensor, a photodiode. After passing through the tissue of the phalanx, the light falls on the photodiode. Rhythmic changes of the tissue caused by heart activity produce changes of the electric current, which are displayed as a continuous wave. During systole, as the blood content rises, more light is absorbed, so less of it reaches the photodiode than in diastole, when the absorption is lower.
Fig. 1 Pulse oximeter Criticare 504 USP
The Criticare 504 USP pulse oximeter (Fig. 1) records the plethysmographic curve, the ECG curve, saturation, and beat rate. In contrast to the ECG curve, for example, there is no uniquely defined pathological waveform of the plethysmographic curve; the evaluation of findings rests solely with the doctor and is therefore very subjective. Nevertheless, this kind of examination is very important and may reveal serious health problems. It is necessary for the doctor to save the measured curves for later use or comparison. Until now, the output of the plethysmograph could only be printed on a connected printer; from this point of view, using a computer appears very effective. The result of this work is a communication interconnection built from parts that are economical in both purchase cost and operating cost. The tendency was to use as many readily available parts as possible; however, this tendency was subordinated to sufficient communication speed and precision of the transferred data.
II. THE DESIGN OF THE COMMUNICATION INTERCONNECTION
To transfer data from the pulse oximeter to a computer, the output of the pulse oximeter must first be connected to an input of the computer. The pulse oximeter has two analog outputs and one digital output with an RS-232 interface. Computers usually have two RS-232 interfaces, so the easiest realization would be to use this interface directly. Besides its simplicity and low price, this solution has the advantage that the data taken by the pulse oximeter (the plethysmographic curve, saturation, and beat) would travel over one cable. However, we would not get the ECG curve, because the device does not support it over this output. The biggest disadvantage of this solution is the low quality of the transferred curves, probably caused by an insufficient sampling rate in the device. It is therefore necessary to use the analog outputs mentioned above, to which signals can be routed via the configuration menu. The output signal is continuous and is available on BNC connectors on the rear panel of the device. All signals that the device processes can be routed to the analog outputs (Table 1). The output signal level is
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 320–323, 2011. www.springerlink.com
Table 1 Analog output modes and ranges

Output mode                   Output signal level
Test (testing signal)         0–1 V
ECG curve (ECG)               0–1 V
Plethysmographic curve        0–1 V
Alarm state                   0–1 V
O2 saturation 0–100%          0–1 V
O2 saturation 50–100%         0–1 V
Beat 0–250                    0–1 V
between 0 and 1 VDC. For exact calibration of the device's analog outputs, the testing signal can be used. The connection design is as follows: the plethysmographic curve is on the first analog output, the ECG curve on the second, and beat and saturation on the RS-232 interface. The progress of beat and saturation over time is not important, since the doctor is interested only in the steady-state value, so the sample rate is irrelevant there and these values may be transmitted over the RS-232 interface. By contrast, the sample rate is very important for the plethysmographic and ECG curves, because their waveform, and therefore their progress in time, must be known; given the insufficient sample rate of the device's internal A/D converter, these values must be sampled and converted outside the device. However, computers are not usually equipped with analog inputs, an A/D conversion card is too expensive, and such a card cannot be fitted into a notebook. It is therefore necessary to create an interlink that converts the analog signals to digital and sends them to a standard computer input. An ideal interlink is a simple microprocessor that receives the signals from both analog outputs and one RS-232 interface and then transmits all of the accepted data to the computer's RS-232 interface.
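On the computer side, the interlink's output stream has to be split back into per-conversion samples. The sketch below assumes a hypothetical 4-byte frame (plethysmographic sample, ECG sample, saturation, beat), matching the eight-bit case discussed later; the actual frame layout of the device is not specified in the text.

```python
# PC-side decoding sketch. The 4-byte frame layout (pleth, ECG, SpO2, beat)
# is an ASSUMED illustration, not the device's documented protocol.

def decode_frames(stream: bytes):
    """Split the serial byte stream into per-conversion sample dictionaries."""
    frames = []
    for i in range(0, len(stream) - len(stream) % 4, 4):
        pleth, ecg, spo2, beat = stream[i:i + 4]
        frames.append({"pleth": pleth, "ecg": ecg, "spo2": spo2, "beat": beat})
    return frames

# two example conversion cycles
samples = decode_frames(bytes([120, 64, 98, 72, 121, 66, 98, 72]))
print(samples)
```

In a real implementation a synchronization byte or control byte would mark frame boundaries; the byte budget computed in the next section leaves room for exactly that.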
III. THE SELECTION OF A SUITABLE MICROPROCESSOR
Data transfer via RS-232 is digital, so the analog signals must be converted to digital. For this purpose the microprocessor must be equipped with at least a two-channel A/D converter. For accuracy an eight-bit A/D converter would suffice, but with regard to displaying the curves on a monitor, a more accurate converter is preferable. An eight-bit converter gives 2⁸ = 256 different signal levels, but at the commonly used monitor resolution of 1024x768 there are 1024 pixels vertically. For a smoother curve it is therefore better to use a ten-bit A/D converter,
which gives exactly 1024 levels. The conversion sample rate was set to 150 Hz, which is a sufficient frequency for the plethysmographic and ECG curves. Further, the microprocessor should be equipped with two interfaces able to communicate with RS-232: one for receiving data from the device and one for transmitting data to the computer. On consideration, however, one interface is theoretically enough. We only receive data from the device and need not transmit anything to it; conversely, we only transmit data to the computer and, if the program in the microprocessor and the receiving program in the computer are written correctly, we need not receive anything from the computer. Receiving and transmitting are independent procedures and can therefore be processed simultaneously. For data transfer (without control inputs and outputs) the RS-232 interface uses a three-wire cable: one wire for transmitting, one for receiving, and one ground. It would thus be sufficient to connect the receiving wire to the device, the transmitting wire to the computer, and ground to both. The disadvantage is the transfer speed: although receiving and transmitting are processed separately, they cannot use different speeds, because the configured transfer speed applies to the whole interface, both receiving and transmitting. The maximum transfer speed of the pulse oximeter is 19200 baud, so while one interface is used, transmission from the microprocessor to the computer must use the same speed. Computers can also use transfer speeds ten times higher, so it must be considered whether 19200 baud is enough for transmission from the microprocessor to the computer. The main criteria are the data transfer requirement and real-time presentation on the computer. Transfer speed expressed in baud means the number of transferred bits per second.
At a transfer speed of 19200 baud, 19200 bits are transferred per second. Transferring one byte takes eight information bits and two control bits (start-bit and stop-bit), ten bits in total, so the microprocessor can transfer 1920 bytes per second. For our needs the microprocessor must transmit the converted values from the A/D converter (the actual values of the plethysmographic and ECG curves) and the values received over RS-232 (saturation and beat) before the next A/D conversion. At a sample rate of 150 Hz, A/D conversions are executed once per 6.66 milliseconds. If an eight-bit A/D converter is used, the microprocessor must transmit four bytes per 6.66 milliseconds. Since it can transmit 1920 bytes per second, we can easily calculate that it is able to transmit about 12.8 bytes per 6.66 milliseconds. Up to 12 bytes may therefore be transmitted per A/D conversion, which leaves room, for example, for a more precise
A/D converter or to transmit some control bytes in every transmission cycle. From this it is clear that a transfer speed of 19200 baud is sufficient and that a microprocessor with only one RS-232-compatible interface may be used.
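The byte-budget argument above can be restated numerically, using the figures given in the text (19200 baud, ten bits per byte including start and stop bits, 150 Hz sample rate):

```python
# Byte budget of the serial link: 19200 baud with start bit + 8 data bits
# + stop bit per byte, at a 150 Hz A/D conversion rate.

BAUD = 19200
BITS_PER_BYTE = 10        # start bit + 8 data bits + stop bit
SAMPLE_RATE = 150         # A/D conversions per second

bytes_per_second = BAUD // BITS_PER_BYTE               # 1920 B/s
bytes_per_conversion = bytes_per_second / SAMPLE_RATE  # budget per cycle

print(bytes_per_second, bytes_per_conversion)  # 1920 12.8
```

The 12.8-byte budget per conversion comfortably covers the four data bytes of the eight-bit case (or five to six bytes for ten-bit samples) plus framing overhead.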
IV. INTEGRATION OF THE MICROPROCESSOR
As the interlink for transferring data from the pulse oximeter to the computer, a circuit was created whose main component is the Microchip PIC16F873 microprocessor (Fig. 2). With regard to its architecture and peripherals, this microprocessor is very cheap. It is equipped with a five-channel ten-bit A/D converter and one USART interface able to communicate with an RS-232 interface. The PIC16F873 allows external maximum and minimum reference voltages to be connected for the A/D converter. Since the analog output signals from the pulse oximeter vary from 0 to 1 volt, the minimum reference input is unused (the minimum is then ground, i.e. 0 volts), while the maximum reference input is used (otherwise the maximum reference would be the supply voltage, and the range of the A/D converter would be largely wasted). Ideally, 1 volt is connected as the maximum reference voltage.
Fig. 2 Communication device with embedded microprocessor

For amplification of both input signals from 0–1 V to 0–5 V, a TL082 circuit is used. This circuit contains two operational amplifiers based on BI-FET II technology; among other things, they are suitable for applications where the input signal changes rapidly. An advantage is easy replacement, because many operational amplifiers share the same pin assignment.
To create the negative supply voltage for the operational amplifiers, a 7660 circuit is used, which converts an input voltage in the range 0 to 10 volts to exactly the opposite (negative) voltage (0 to -10 volts). Although the microprocessor's USART interface is designed for serial communication, it is not fully compatible with RS-232: while the method and options for configuring message transmission and reception are the same, the voltage levels differ. On the RS-232 interface, logic 0 corresponds to a voltage between 3 and 25 volts and logic 1 to between -3 and -25 volts; on the microprocessor's USART interface, logic 0 is 0 volts and logic 1 equals the supply voltage (5 volts), i.e. standard TTL levels. The TC232 circuit, derived from the MAX232, converts TTL voltage levels to RS-232 voltage levels and vice versa; it is designed precisely for serial communication between an RS-232 interface and another serial interface with TTL levels.
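With the 1 V maximum reference described above, a raw ten-bit A/D reading maps back to volts by simple scaling. This is an illustrative sketch; the function and constant names are not taken from the actual firmware.

```python
# Converting a raw 10-bit A/D reading back to volts on the computer side,
# assuming the 1 V external maximum reference. Names are illustrative.

ADC_BITS = 10
V_REF = 1.0                          # maximum reference voltage (V)

def counts_to_volts(count: int) -> float:
    """Map a raw A/D count (0..1023) onto the 0-1 V analog input range."""
    full_scale = (1 << ADC_BITS) - 1     # 1023
    return count * V_REF / full_scale

print(counts_to_volts(0), counts_to_volts(1023), round(counts_to_volts(512), 4))
```

This is also where the external reference pays off: with the supply voltage (5 V) as reference, the 0–1 V input would use only the bottom fifth of the converter's range.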
V. CONCLUSION
The pulse oximeter is designed for standalone operation without computer support; nevertheless, using a computer increases the efficiency of work with this device and greatly eases and accelerates the handling of measured data. We cannot say that the use of a computer brings entirely new possibilities to plethysmography. This is mainly because every doctor judges plethysmographic curves only visually, without applying any uniquely defined method. As long as no unique apparatus for judging plethysmographic curves is accepted, the computer will serve only as a terminal and storage space for measured data, and will not increase the informational value of the measured curves through additional analyses. Even so, the use of a computer is beyond controversy an irreplaceable and important (and nowadays already necessary) easement of the work.

There are certainly many schemes for connecting a pulse oximeter to a computer. One possibility is to buy a new pulse oximeter with full support for computer communication; apart from this, however, a new pulse oximeter brings little functional improvement, so with regard to price it is not advantageous. It is cheaper and more effective to create one's own interconnection of the present oximeter with the computer. The lowest possible purchase price of the parts needed for the communication interconnection was also taken into consideration, but the most important factors were the speed and quality of the data transfer. Changing the communication between the communication device and the computer from the RS-232 interface to the USB interface seems the most likely improvement in the future. Microprocessors supporting communication over USB now exist; unfortunately, they were not at our disposal at the time of realization. Nevertheless, the quality of the designed and realized communicator is fully satisfactory with regard to transfer quality as well as the purchase price of the parts.

Fig. 3 Overall wiring of the telemetric chain

For processing the data transferred from the pulse oximeter to the computer, a program was created in the Borland Delphi 5 programming environment. The program is designed for easy control and, during the measurement itself, is as automatic as possible, so the staff do not have to pay much attention to controlling the program and can devote themselves to the patient and the measurement. The program automatically finds the serial port to which the communicator is connected and, during a measurement session, automatically switches between the measurements to be done within the frame of a particular investigation. Saved curves can be examined at any later time, visualized, and explored by moving cursors over them at will, with which, for example, the time difference between two peaks of the graph can be determined; the curves can also be printed. Besides its concrete practical use directly in the hospital, this work also hints at a possible future way of interconnecting medical devices and computers. Here the interconnection of only one device with a computer was discussed, but the speed of computers, microprocessors, and communications is increasing; in the future, several medical devices might be interconnected with one computer, whether through another interlink (as in this work) or directly, once the computer and the medical devices support this option.

ACKNOWLEDGMENT
The work and the contribution were supported by the Ministry of Education of the Czech Republic under project 1M0567 "Centre of Applied Electronics", student grant agency SV 4501141 "Biomedical Engineering Systems VII", and TACR TA01010632 "SCADA system for control and measurement of processes in real time". Also supported by project MSM6198910027 Consuming Computer Simulation and Optimization.

Author: Martin Augustynek
Institute: VSB-TUO, FEECS, DMC
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
Measurement of Available Chlorine in Electrolyzed Water Using Electrical Conductivity
K. Umimoto¹, H. Kawanishi¹, Y. Shimamoto¹, M. Miyata¹, S. Nagata¹, and J. Yanagida²
¹ Department of Biomedical Engineering, Osaka Electro-Communication University, Osaka, Japan
² Department of Medical Technology, Kobe Tokiwa University, Kobe, Japan
Abstract— Strongly acidic and strongly alkaline electrolyzed water (SAcEW, SAlEW) are readily prepared from water containing NaCl by direct-current electrolysis. SAcEW has a strong bactericidal activity due to its available chlorine (AC) concentration. We developed a device for measuring the electrical conductivity of SAcEW instead of measuring the AC concentration directly. The electrical conductivity of SAcEW was significantly correlated with the AC concentration. This approach requires no analytical machine or reagents, thereby permitting simple real-time analysis. Electrical conductivity is useful for measuring the AC concentration of SAcEW in home care. Keywords— Electrolyzed water, Available chlorine, Electrical conductivity, Bactericidal activity, Disinfectant.
I. INTRODUCTION
When water containing NaCl is electrolyzed by a direct current in a container with a membrane partition, strongly acidic electrolyzed water (SAcEW) is generated on the anode side and strongly alkaline electrolyzed water (SAlEW) on the cathode side (Fig. 1). SAcEW is defined by a pH of less than 2.7, an oxidation-reduction potential (ORP) of more than 1100 mV, and an available chlorine (AC) concentration of 20~60 ppm. SAcEW has attracted considerable interest in the medical field; in particular, it has been reported that SAcEW has a strong bactericidal activity due to its AC concentration, including HClO and Cl2 [1][2][3]. We had already developed a compact machine to provide electrolyzed water for home care [4][5]; however, when this machine is used in home care, measuring the AC concentration responsible for the bactericidal activity is complicated. In the present study, we developed a device to measure the AC concentration of SAcEW via electrical conductivity and examined its efficiency.
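The defining thresholds of SAcEW quoted above (pH < 2.7, ORP > 1100 mV, AC 20–60 ppm) can be wrapped in a small check; the helper function here is purely illustrative.

```python
# Illustrative check against the stated SAcEW criteria:
# pH < 2.7, ORP > 1100 mV, available chlorine 20-60 ppm.

def is_sacew(ph: float, orp_mv: float, ac_ppm: float) -> bool:
    """True if a water sample meets the stated SAcEW criteria."""
    return ph < 2.7 and orp_mv > 1100 and 20 <= ac_ppm <= 60

print(is_sacew(2.2, 1150, 23))    # anode-side values reported in the Results
print(is_sacew(11.7, -900, 0))    # cathode-side SAlEW does not qualify
```

The example values are the ones measured later in the Results section.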
Anode side:
H2O → 1/2 O2 + 2H+ + 2e-
2Cl- → Cl2 + 2e-
Cl2 (aq) + H2O ↔ HClO + HCl

Cathode side:
H2O + e- → 1/2 H2 + OH-
Na+ + OH- ↔ NaOH

Fig. 1 Principle of electrolyzed water production
II. METHODS
Our machine for making electrolyzed water consists of a cell, electrodes, and a diaphragm. The cell was formed from acrylic resin, 12 centimeters (cm) in length, 20 cm in width, and 20 cm in height, with a capacity of 4 liters. Titanium-coated platinum was used for the electrodes (10 cm × 20 cm). Silicone-coated glassine paper was used as the diaphragm because it is very low-cost and easy to exchange.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 324–327, 2011. www.springerlink.com
Fig. 2 Device for making electrolyzed water (cell, diaphragm, electrodes)

The distance between the electrodes was 1 cm (Fig. 2).

Device for measuring electrical conductivity
The device for measuring electrical conductivity was made using the principle of the Kohlrausch bridge. Titanium was used for the electrodes, and the distance between them was 1 cm. The cell constant θ was determined with KCl solutions (0.01 mol/l, 0.02 mol/l, 0.1 mol/l); its value was 8.1 cm⁻¹. The electrical conductivity K (S/cm) is calculated from

K = θ / Rx

where θ (cm⁻¹) is the cell constant and Rx (Ω) is the resistance determined with the Kohlrausch bridge circuit; K is expressed after temperature compensation.
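The conductivity formula above is a one-line computation. This sketch uses the stated cell constant θ = 8.1 cm⁻¹; the 1 kΩ bridge resistance in the example call is only an illustrative value, not a measurement from the paper.

```python
# K = theta / Rx, with cell constant theta = 8.1 cm^-1 (from KCl standards).
# The Rx value below is an illustrative example, not measured data.

THETA = 8.1    # cell constant (cm^-1)

def conductivity(rx_ohm: float) -> float:
    """Electrical conductivity K (S/cm) from the bridge resistance Rx (ohm)."""
    return THETA / rx_ohm

print(round(conductivity(1000.0), 6))   # Rx = 1 kOhm  ->  ~0.0081 S/cm
```

Temperature compensation, applied in the real device, is omitted here.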
1. Analysis of SAcEW
SAcEW and SAlEW were produced by direct-current electrolysis (DC 20 V, 20 min) of a 0.1% NaCl solution in a cell partitioned by a membrane. The pH, ORP, and AC concentration of the SAcEW were measured. The AC concentration was measured as follows: fresh SAcEW was obtained, then 0.5 ml of phosphate buffer solution (pH 6.5) and 0.2 g of N,N-diethyl-p-phenylenediamine (DPD) were added to 10 ml of SAcEW. After mixing, 0.1 g of potassium iodide (KI) was added, and after 2 minutes the absorbance was measured with a spectrophotometer at an optical density of 511 nm.

2. Electrical conductivity and AC concentration
SAcEW with AC concentrations from 20 to 60 ppm was prepared. The AC concentration of the SAcEW was measured by the DPD method, and its electrical conductivity was measured with our device. The predicted AC concentration in SAcEW was then calculated from the electrical conductivity.

3. Bactericidal activity
Three strains of bacteria were prepared to investigate the bactericidal activity of SAcEW. Staphylococcus aureus (S. aureus) is a typical gram-positive bacterium. Pseudomonas aeruginosa (P. aeruginosa) is a gram-negative bacterium. Bacillus cereus (B. cereus) belongs to the genus Bacillus; it is an aerobic, spore-forming, gram-positive bacterium and a cause of food poisoning. The bactericidal activity was examined as follows: the three kinds of bacteria were each cultivated at 37°C for 24 hours under aerobic culture in petri dishes containing nutrient agar medium. After cultivation, one colony of each was incubated with fresh SAcEW; these solutions were then spread onto fresh petri dishes and cultivated for 48 hours. The colonies of bacteria in each petri dish were counted, and the bactericidal activity of SAcEW was judged.
III. RESULTS
1. Analysis of electrolyzed water
The pH, ORP, and AC levels were 2.2, 1150 mV, and 23 ppm for SAcEW, and 11.7, -900 mV, and 0 ppm for SAlEW, respectively.

Fig. 3 Device for measuring electrical conductivity (electrode and Kohlrausch bridge circuit)

2. Relation between electrical conductivity and AC concentration
There was a significant correlation between electrical conductivity and AC concentration in SAcEW (P<0.01) (Fig. 4).
The predicted AC concentration in SAcEW was calculated with the fitted line of Fig. 4. There was a significant correlation between the actual and predicted AC concentrations (Fig. 5).
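The fitted line of Fig. 4 is not tabulated in the paper, so this sketch fits a least-squares line to hypothetical (conductivity, AC) calibration pairs and uses it to predict the AC concentration, mirroring the procedure described above. All numeric data below are assumptions for illustration only.

```python
# Least-squares calibration line, mirroring the Fig. 4 procedure.
# The (conductivity, AC) pairs are HYPOTHETICAL placeholder data.
cond = [2.0, 2.4, 2.8, 3.2, 3.6]   # conductivity (mS/cm), assumed
ac = [20.0, 30.0, 40.0, 50.0, 60.0]  # AC by DPD method (ppm), assumed

n = len(cond)
mx, my = sum(cond) / n, sum(ac) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cond, ac))
         / sum((x - mx) ** 2 for x in cond))
intercept = my - slope * mx

def predict_ac(conductivity_ms_cm: float) -> float:
    """Predicted AC concentration (ppm) from a measured conductivity."""
    return slope * conductivity_ms_cm + intercept

print(round(predict_ac(3.0), 1))   # predicted ppm for 3.0 mS/cm
```

A device following this scheme needs only the two fitted coefficients, which is what makes the approach machineless and reagentless at the point of use.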
Table 1 Bactericidal activity of SAcEW

Bacteria         Control                SAcEW
S. aureus        54.25×10⁴ cfu/ml       0 cfu/ml
P. aeruginosa    56.5×10⁵ cfu/ml        0 cfu/ml
B. cereus        22.2×10⁵ cfu/ml        0 cfu/ml
IV. DISCUSSION
Fig. 4 Relation between electrical conductivity and AC concentration

Fig. 5 Relation between actual and predicted AC concentration

3. Bactericidal activity
The results of the bactericidal activity of SAcEW are shown in Table 1. There were many colonies of S. aureus, P. aeruginosa, and B. cereus in the control petri dishes, while there were no colonies of bacteria in any petri dish after mixing with SAcEW. A strong bactericidal activity of SAcEW was thus observed.
It is well known that SAcEW has a strong bactericidal activity due to its AC concentration. We developed a compact device to produce SAcEW and SAlEW by electrolysis at DC 10~20 V within 10~20 min. This device is easy and safe to handle because only water containing NaCl needs to be prepared. SAcEW has strong bactericidal activity against S. aureus, P. aeruginosa, and B. cereus, similar to chemical disinfectants (Table 1). SAcEW diluted with tap water reverts to ordinary water; considering economic and environmental factors, this makes SAcEW highly suitable as a disinfectant, since it reverts to normal water after use. A further advantage of SAcEW is its low toxicity compared to other disinfectants: safety tests on animals revealed no toxicity when SAcEW was ingested by mice, nor were any problems reported in accumulative irritation tests or in sensitization tests on rabbits' skin [4]. When the machine providing SAcEW is used in home care, measuring the AC concentration responsible for the bactericidal activity is complicated, and it requires chemical agents and equipment such as a UV spectrophotometer. In home care, however, the AC concentration does not need to be known exactly, so we turned to the electrical conductivity and made a conductivity device for measuring the AC concentration of SAcEW. The electrical conductivity of SAcEW was correlated with the AC concentration (Fig. 4), and we could estimate the AC level of SAcEW approximately from the electrical conductivity (Fig. 5). On the other hand, while SAcEW is suitable for use as a disinfectant, it is not physiological because of its strongly acidic pH; therefore, electrolyzed water with a neutral pH is needed for various purposes. We had already reported that neutral electrolyzed water (NEW) can be obtained by simply mixing equal amounts of SAcEW from the anode side and SAlEW from the cathode side. The pH of NEW is neutralized by mixing the cathode and anode water, while its
IFMBE Proceedings Vol. 35
Measurement of Available Chlorine in Electrolyzed Water Using Electrical Conductivity
AC level remained [5]. We also made an automatic controller for mixing SAcEW and SAlEW [6]. NEW is generally produced using a 2~3% HCl solution; our method, in contrast, is safer and more suitable for producing NEW in the home. One characteristic of NEW is that the HClO state is very stable at neutral pH, so its bactericidal activity is maintained for long periods. This study makes clear that our device is useful for measuring the AC concentration in SAcEW when electrolyzed water is used as a disinfectant in home care.
V. CONCLUSIONS We developed an electrical-conductivity device for measuring the AC concentration in SAcEW. The electrical conductivity of SAcEW was significantly correlated with the AC concentration, and the AC concentration predicted from the electrical conductivity was correlated with the actual AC concentration. Our device is useful for detecting the AC concentration responsible for the bactericidal activity of SAcEW.
REFERENCES
1. K. Hotta, K. Kawaguchi, K. Saito, K. Ochi, T. Nakayama: “Antimicrobial activity of electrolyzed NaCl solution: effect on the growth of Streptomyces spp.,” Actinomycetologica, Vol. 8, pp. 51-56, 1994
2. K. Umimoto, Y. Emori, H. Fujita, K. Jokei, “Evaluation of strong acidic electrolyzed water for the disinfection,” IEEE-EMBS, APMBE, Proc., 2003
3. K. Umimoto, Y. Kanno, T. Kawabata, K. Jokei, “Characteristics of strong acidic electrolyzed water,” IFMBE/APCMBE, Proc. Vol. 8, 2005
4. H. Matumoto, M. Kohno, Y. Ohtani, I. Nohara, K. Hotta, T. Nakayama, T. Suzuki, “The safety of ESAAS,” Tokyo: Ohmsha, pp. 7, 389, 1997
5. K. Umimoto, H. Kawanishi, K. Kobayashi and J. Yanagida: “Development of a Device to Provide Electrolyzed Water for Home Care,” Biomed 2008, IFMBE Proc. Vol. 21, pp. 738-741, 2008
6. K. Umimoto, H. Kawanishi, Y. Tachibana, N. Kawai, S. Nagata and J. Yanagida, “Development of automatic controller for providing multi electrolyzed water,” IFMBE Proc. Vol. 25.
Author: Koichi Umimoto
Institute: Osaka Electro-Communication University
Street: 1130-70, Kiyotaki
City: Shijonawate, Osaka
Country: Japan
Email: [email protected]
Measuring the Depth of Sleep by Near Infrared Spectroscopy and Polysomnography N.T.M. Thanh, L.H. Duy, L.Q. Khai, T.Q.D. Khoa, and V.V. Toi Biomedical Engineering Department, International University of Vietnam National University, Ho Chi Minh City, Viet Nam
Abstract— Sleep is very important in our lives. To investigate the basics of sleep, we used near infrared spectroscopy (NIRS) and polysomnography to measure the depth of sleep. NIRS allows us to monitor neuronal and brain activity. The purpose of this study was to distinguish the sleep stages called “rapid eye movement” (REM) and “non-rapid eye movement” (NREM). The NIRS probes were placed on the frontal region. The subject slept in a dark room while oxy-hemoglobin and deoxy-hemoglobin were recorded by the NIRS. We found a clear difference in the change of deoxy-hemoglobin between these two stages. Keywords— Brain activities; rapid eye movement (REM); non-rapid eye movement (NREM); Polysomnography; EEG.
I. INTRODUCTION Non-REM sleep is usually considered a compensatory ‘resting’ state for the brain following intense waking brain activity. Indeed, previous brain imaging studies showed that the brain is less active during periods of non-REM sleep than during wakefulness. Stage 1 can be considered a transition between arousal and sleep [1]. At this stage, we drift in and out of sleep and can be awakened easily. The eyes move slowly and muscle activity slows [2]. During this stage, many people experience sudden muscle contractions preceded by a sensation of falling. EEG analysis shows that alpha activity decreases, activation becomes scarce, and the EEG consists mostly of low-voltage, mixed-frequency activity, much of it at 3-7 Hz [3]. In stage 2, eye movements are rare and brain waves become slower, with only occasional bursts of rapid brain waves (12-14 Hz) called “sleep spindles”; the EMG is low to moderate [4]. When a person enters stage 3, high-amplitude (>75 µV) and extremely slow brain waves (0.5-2 Hz) called “delta waves” appear in the EEG. These waves are interspersed with smaller, faster waves. EOG and EMG continue as before. In stage 4, there is a quantitative increase in delta waves, so that they come to dominate the EEG tracing [5].
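The stage-typical EEG bands described above (roughly 3-7 Hz activity in stage 1, 12-14 Hz spindles in stage 2, 0.5-2 Hz delta in stages 3-4) suggest a simple band-power heuristic for labeling an epoch. The sketch below is illustrative only, uses a synthetic signal, and is not the scoring method used in this study:

```python
import numpy as np

FS = 100  # assumed EEG sampling rate, Hz

def band_power(signal, fs, lo, hi):
    """Power of `signal` within the [lo, hi] Hz band via the FFT."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum()

def dominant_band(epoch, fs=FS):
    """Crudely label a 30 s epoch by its strongest stage-typical band."""
    bands = {"delta (S3/S4)": (0.5, 2.0),
             "theta (S1)": (3.0, 7.0),
             "spindle (S2)": (12.0, 14.0)}
    powers = {name: band_power(epoch, fs, lo, hi)
              for name, (lo, hi) in bands.items()}
    return max(powers, key=powers.get)

# Synthetic 30 s epoch dominated by a 1 Hz, 75 µV delta rhythm
t = np.arange(0, 30, 1.0 / FS)
epoch = 75e-6 * np.sin(2 * np.pi * 1.0 * t) + 5e-6 * np.sin(2 * np.pi * 13.0 * t)
print(dominant_band(epoch))
```

Real scoring additionally uses EOG and EMG and visual rules, but the band-power comparison captures the frequency signatures listed above.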
Stages 3 and 4 are referred to as deep sleep, and it is very difficult to wake someone from them. In deep sleep, there is no eye movement or muscle activity [6]. This is when some children experience bedwetting, sleepwalking or night terrors. Wakefulness is a state in which the person is aware of and responds to sensory input from the environment [7]. Sleep can be defined as a state of behavioral quiescence accompanied by an elevated arousal threshold and a species-specific sleep posture [8]. Behavior characteristic of human sleep includes a typical recumbent sleep posture, closed eyes, and a decrease in or absence of movement [9]. The most common way to measure the depth of sleep is polysomnography, a medical diagnostic test that provides a large amount of information by recording the activity of various organ systems over several hours, with the aim of diagnosing pathological conditions associated with sleep [10]. The electroencephalogram (EEG) is popularly known as “brain waves”. The exact physiologic bases of the voltage variations are not entirely known, but they are believed to emanate largely from changes in the membrane voltages of nerve cells. The electrooculogram (EOG) records eye movement: since the eyeball is like a small battery, an electrode placed on the skin near the eye records a change in voltage as the eye rotates in its socket. The electromyogram (EMG) is a record of the electrical activity emanating from active muscles. It may also be recorded from electrodes on the skin surface overlying a muscle. In humans, the EMG is typically recorded from under the chin, since muscles in this area show very dramatic changes associated with the sleep stages [11]. Besides polysomnography, near-infrared spectroscopy has recently been used in research, with many salient features.
Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from about 800 nm to 2500 nm) [12]. Typical applications include pharmaceutical and medical diagnostics (including blood sugar and oximetry), food and agrochemical quality control, as well as combustion research. The advantages of NIRS are that it uses non-ionizing radiation, is safe,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 328 – 331, 2011. www.springerlink.com
and can be applied non-invasively. With all of these supporting features, we can experiment directly on humans without any harm to them. The aim of this study was to determine the transitions within the NREM state (stages 1, 2, 3 and 4) by using near infrared spectroscopy (NIRS) and polysomnography. The analyzed NIRS results give the oxygenated hemoglobin (OxyHb) and deoxygenated hemoglobin (deOxyHb) levels in the frontal area, and from these we can clearly distinguish between being awake and light sleep (stage 1).
II. MATERIALS AND METHODS
A. Materials
With an fNIRS system (FOIRE-3000, SHIMADZU Co. Ltd., Japan), we recorded the oxy- and deoxy-hemoglobin activity of the subject’s brain throughout the sleeping period. Polysomnography supplied the sleep-time information, giving the time and the sleep stage the subject was in. We also used polysomnography (EEG, EOG and EMG) to score the stage of sleep. By coordinating the two methods at the same time, we could compare the fNIRS and polysomnography results and discuss whether it is possible to identify the depth of sleep from fNIRS measurements alone.
Table 1 Relevant information on the four subjects investigated. The table gives each subject’s sleep time under normal conditions and in the experiment. None of the subjects had a history of sleep apnea.
When we performed the experiment, the subjects were in completely good condition. They did not take any kind of medicine before sleeping and were directed to keep relaxed. Relaxation time is very important because we need a balanced baseline in the recorded signal [13]. During the sessions, subjects lay on a comfortable bed in a dark, sound-isolated room. After the experiment, they had no complaints about the experiment in terms of their health or the condition of the lab room. Ten probes (arranged in a 2x5 array) were placed over the scalp on the frontal area of the subject. This setting provided 13 channels for NIRS measurement.
Fig. 3 NIRS probe setting on the forehead
The electrodes for polysomnography were placed on the head as shown in Figure 4 below; they provide the data recorded by the Alice system.
Fig. 1 Alice 5 at BMEIU
Fig. 2 NIRS at BMEIU
B. Experiment
Four male subjects (average age: 21.67 ± 1.53 years) voluntarily participated in the study. All of the subjects were healthy and free of neurological and psychological signs and symptoms. Before the experiments, each subject was informed in detail about the purpose and procedures of the study. They were then asked to fill out a questionnaire, which was kept confidential and included the subject’s identification, age and gender. Informed consent and the health-information inquiry form filled out by the volunteers were also obtained.
Fig. 4 Polysomnography for sleep study
The test started 5 minutes after the sleeper settled into position. Each test lasted about 4 hours, from 12 AM to 5 AM. The subjects were not disturbed during their sleep.
III. RESULTS AND DISCUSSION
A. Distinguishing the Difference between Oxy-Hemoglobin and Deoxy-Hemoglobin When Awake and Asleep
All of our experimental results showed that [HbO2] was lower while the subjects slept than while they were awake. From these results, we can clearly distinguish the two main phases, wake and light sleep. During sleep the subjects do nothing, so they do not need as much oxygen for activity [14]. Consequently, the deoxy-hemoglobin concentration exceeded the oxy-hemoglobin concentration: when the subjects slept, the deoxy-hemoglobin concentration increased and, vice versa, the oxy-hemoglobin concentration decreased. We recorded a sharp difference between the two states, and on this basis the two stages were distinguished. The figures below show one hour of data from one subject in our study.
Fig. 5 Result from polysomnography. The orange square, corresponding to epochs 35-59 (30 s/epoch), shows that the subject is awake
Fig. 6 Result from NIRS at channel 8. The orange square covers the same time window as Fig. 5. HbO2 clearly stays higher than Hb while awake
Fig. 7 Result from polysomnography. The green circle shows that the subject was sleeping during epochs 63-111
Fig. 8 The green circle marks the channel-8 NIRS result matching the interval marked in Fig. 7. HbO2 becomes lower than Hb while sleeping
The combination of NIRS and polysomnography gave persuasive results for detecting human sleep stages. While awake, the NIRS signal shows a concentration of HbO2 higher than Hb, and HbO2 tends to fall below Hb during sleep.
B. Detecting Four Special Waves Using Matlab Programming
We also analyzed the data collected from NIRS with the FFT method (Fast Fourier Transform) in Matlab. We treated the NIRS signal in the frequency domain and divided it into epochs (1 epoch = 60 seconds of real time). During the signal processing, we found that the NIRS signals contain several special frequencies. These frequencies appeared randomly in our subjects while they slept.
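The epoch-wise FFT analysis described above can be sketched as follows. The study used Matlab; this is an equivalent Python/NumPy illustration, and the sampling rate, synthetic channel data and the 1.5 Hz component are assumptions, not the study’s data:

```python
import numpy as np

FS = 20          # assumed NIRS sampling rate in Hz (not stated in the paper)
EPOCH_S = 60     # 1 epoch = 60 s, as in the text

def epoch_spectra(signal, fs=FS, epoch_s=EPOCH_S):
    """Split one NIRS channel into 60 s epochs and FFT each epoch."""
    n = fs * epoch_s
    epochs = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Remove each epoch's mean so the DC bin does not dominate
    spectra = np.array([np.abs(np.fft.rfft(e - e.mean())) for e in epochs])
    return freqs, spectra

def peak_frequency(freqs, spectrum):
    """Frequency of the strongest spectral component in one epoch."""
    return freqs[int(np.argmax(spectrum))]

# Synthetic 5-minute channel dominated by a 1.5 Hz component (hypothetical)
t = np.arange(0, 300, 1.0 / FS)
chan = np.sin(2 * np.pi * 1.5 * t) \
    + 0.2 * np.random.default_rng(1).normal(size=t.size)
freqs, spectra = epoch_spectra(chan)
peaks = [peak_frequency(freqs, s) for s in spectra]
```

With a 60 s epoch the frequency resolution is 1/60 Hz, comfortably fine enough to separate the 1.5, 3, 5 and 6 Hz components reported below.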
Fig. 9 Four special frequencies in Oxy-Hemoglobin during sleep
Fig. 10 Four special frequencies in deoxy-hemoglobin during sleep
Moreover, we found that these signals could become a standard for determining the stages of sleep, just as alpha, beta, delta and theta waves are used in EEG to detect the sleep stages [15]. These waves appeared randomly in different stages of sleep. Therefore, we decided to take one sample and try to find their characteristics. This sample was taken from channel 8 of one subject over 1 hour. In this calculation, we counted the presence of each separate frequency in the different stages and display it as a percentage. Figure 11 shows the statistical table of NIRS epochs we set up when we combined the NIRS data with the polysomnography result. Each NIRS epoch contains 60 seconds and is matched with the corresponding sleep stage in the polysomnography scoring. From the two results together, our research found that the rate of appearance of each special frequency differs between stages. Typically, in the WAKE condition, the presence of the frequencies
1.5 Hz and 6 Hz is higher than in the other stages (NREM). Therefore, determining the difference between WAKE and NREM is easier when we combine observation of the Hb and HbO2 concentrations with analysis of the special frequencies. This is only our first observation, but we hope it is the premise for our research to continue toward better determination of sleep stages based on these NIRS frequencies.
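The per-stage percentage statistic shown in Figs. 11 and 12 amounts to counting, for each sleep stage, the fraction of epochs in which each special frequency appears. A minimal sketch with made-up epoch labels (not the study’s data):

```python
from collections import defaultdict

# Hypothetical per-epoch annotations: (sleep stage, set of special
# frequencies detected in that 60 s epoch) -- made-up values
epochs = [
    ("WAKE", {1.5, 6.0}), ("WAKE", {1.5}), ("WAKE", {6.0}),
    ("S1", {3.0}), ("S1", {5.0}), ("S2", {3.0, 5.0}), ("S3/S4", {5.0}),
]

def presence_percent(epochs):
    """Percentage of epochs per stage in which each frequency appears."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for stage, freqs in epochs:
        totals[stage] += 1
        for f in freqs:
            counts[stage][f] += 1
    return {stage: {f: 100.0 * c / totals[stage] for f, c in fs.items()}
            for stage, fs in counts.items()}

stats = presence_percent(epochs)
print(stats["WAKE"])
```

Comparing these percentages across stages reproduces the kind of bar-chart comparison shown in Figs. 11 and 12.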
partly supported by a grant from Shimadzu Asia-Pacific Pte. Ltd. and a research fund from International University of Vietnam National Universities in Ho Chi Minh City. We would like to thank Dr Nguyen Thi Cam Huyen, Ho Chi Minh City Medicine and Pharmacy University, for helpful suggestions. Finally, our deepest appreciation goes to the BME staff and friends who helped us collect information.
Fig. 11 Statistics of sleep at channel 8 (percentage presence of the 1.5 Hz, 3 Hz, 5 Hz and 6 Hz components in the WAKE, S1, S2 and S3/S4 stages)
Fig. 12 Statistics of sleep at channel 7 (same format as Fig. 11)
IV. CONCLUSION Our research can now distinguish the two main conditions in sleep research: awake and asleep. Moreover, we also determined four types of frequencies: 1.5 Hz, 3 Hz, 5 Hz and 6 Hz. The appearance of these frequencies strengthens our capability to segregate sleep stages more deeply.
REFERENCES
1. Dang-Vu TT, Desseilles M, Peigneux P, Maquet P. A role for sleep in brain plasticity. Pediatr Rehabil 2006; 9: 98-118.
2. Steriade M, McCarley RW. Brain Control of Wakefulness and Sleep. New York: Springer, 2005.
3. Tinguely G, Finelli LA, Landolt HP, Borbely AA, Achermann P. Functional EEG topography in sleep and waking: state-dependent and state-independent features. Neuroimage 2006; 32: 283-92.
4. Massimini M, Huber R, Ferrarelli F, Hill S, Tononi G. The sleep slow oscillation as a traveling wave. J Neurosci 2004; 24: 6862-70.
5. Steriade M, Nunez A, Amzica F. A novel slow (< 1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. J Neurosci 1993; 13: 3252-65.
6. Achermann P, Borbely AA. Low-frequency (< 1 Hz) oscillations in the human sleep electroencephalogram. Neuroscience 1997; 81: 213-22.
7. Steriade M, McCarley RW. Brain Control of Wakefulness and Sleep. New York: Springer, 2005.
8. Steriade M, Timofeev I, Grenier F. Natural waking and sleep states: a view from inside neocortical neurons. J Neurophysiol 2001; 85: 1969-85.
9. Frank MG, Heller HC. Development of REM and slow wave sleep in the rat. Am J Physiol 1997; 272: R1792-9.
10. Fantini S, Aggarwal P, Chen K, Franceschini MA, Ehrenberg BL. Near infrared spectroscopy and polysomnography during all-night sleep in human subjects.
11. Timofeev I, Steriade M. Low-frequency rhythms in the thalamus of intact-cortex and decorticated cats. J Neurophysiol 1996; 76: 4152-68.
12. Fantini S, Aggarwal P, Chen K, Franceschini MA, Ehrenberg BL. Near infrared spectroscopy and polysomnography during all-night sleep in human subjects.
13. Lee-Chiong TL, Sateia M, Carskadon M, eds. Sleep Medicine. Philadelphia, PA: Hanley & Belfus (Elsevier), 2002.
14. Cauter EV, Spiegel K. Circadian and sleep control of hormonal secretions. In: Zee PC and Turek FW, editors. Regulation of sleep and circadian rhythms. Vol 133. New York: Marcel Dekker, Inc., 1999: 397-425.
15. Steriade M, Nunez A, Amzica F. Intracellular analysis of relations between the slow (< 1 Hz) neocortical oscillation and other sleep rhythms of the electroencephalogram. J Neurosci 1993; 13: 3266-83.
ACKNOWLEDGMENT We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting our attendance and presentation. This research was
Multi-frequency Microwave Radiometer System for Measuring Deep Brain Temperature in New Born Infants
T. Sugiura1, H. Hirata1, J.W. Hand2, and S. Mizushina3
1 Shizuoka University, Hamamatsu, Japan
2 Imperial College School of Medicine, London, UK
3 Enegene Co. Ltd., Hamamatsu, Japan
Abstract— Hypothermic brain treatment for newborn babies is currently hindered by the lack of appropriate techniques for continuous and non-invasive measurement of deep brain temperature. Microwave radiometry (MWR) is a promising method that is completely passive and inherently safe. A five-band microwave radiometer system and its feasibility were previously reported, with a confidence interval for the temperature estimation of about 1.6 °C at 5 cm depth from the surface. This result was not good enough for clinical application, because the clinical requirement is less than 1 °C for both accuracy and stability. This paper describes the improved temperature resolutions of the five radiometer receivers and the new confidence interval obtained from a temperature measurement experiment using an agar phantom based on a water-bath. The temperature resolutions were 0.103, 0.129, 0.138, 0.105 and 0.111 °C for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively, and the new confidence interval was 0.51 °C at 5 cm from the surface. We believe the system is a step closer to clinical hypothermic treatment. Keywords— hypothermia, brain temperature, microwave radiometry, ischemia, infants.
I. INTRODUCTION Although cooling the brain of a newborn baby can reduce neuro-developmental impairment after a hypoxic-ischemic insult [1,2], clinical trials are currently hindered by the difficulty of measuring brain temperature non-invasively and continuously. Successful clinical application of brain cooling requires that, as with any other therapy, the distribution and dose of cooling be known. This is particularly important, as studies using Magnetic Resonance Imaging (MRI) have demonstrated that in newborn infants not only are the basal ganglia significantly warmer than more superficial cerebral tissues [3-5], but injury to the deep brain structures predicts severe neurological impairment while cortical injury is relatively benign. Invasive methods for direct measurement of deep brain temperature cannot easily be justified for ethical reasons. Correlations between deep brain temperature and surrogate measures such as tympanic membrane, nasopharyngeal, esophageal or rectal temperatures are uncertain, particularly
in an infant undergoing active therapeutic cooling. Animal models may be misleading owing to differences in cerebral metabolism, blood flow or geometry. MRI and MR spectroscopy methods have been used to measure temperature changes in the brain; however, these require access to complex equipment and are not suitable for routine measurements in a hospital repeated over a prolonged period of time. Such large equipment may nevertheless have value in validating other methods. One possible alternative method for non-invasive temperature sensing and monitoring that is completely passive and inherently safe is microwave radiometry (MWR). We proposed multi-frequency microwave radiometry as a non-invasive monitoring method for deep brain temperature, fabricated a five-band receiver system, and reported its measurement performance of about 1.6 K 2σ confidence interval at 5 cm depth from the surface of a water-bath phantom with a temperature distribution similar to an infant’s brain [6]. Because the clinical requirement is less than 1 K, further improvement of the MWR system was essential for successful hypothermia treatment. We took several steps to reduce background noise in order to obtain better temperature resolutions for the five microwave receivers, and tried to retrieve the temperature profile in the phantom. This paper describes the current feasibility of the MWR system for clinical hypothermic treatment.
II. MICROWAVE RADIOMETRY
A. Brightness Temperature
MWR involves measuring the power in the microwave region of the natural thermal radiation from body tissues to obtain the so-called brightness temperature of the tissue under observation. The brightness temperature T_B is defined as

T_{B,i} = P_i / (k·Δf_i)    (1)

where P_i is the thermal radiation power received by the antenna in a bandwidth Δf_i around a frequency f_i, and k is Boltzmann’s constant. Because some of the radiation power from
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 332 – 335, 2011. www.springerlink.com
the tissues, P_tissue, reflects at the boundary between the tissue and the antenna, the brightness temperature detected by the antenna, T_{B,i}, is expressed as

T_{B,i} = P_i / (k·Δf_i) = (1 − R_i)·P_{tissue,i} / (k·Δf_i) = (1 − R_i)·T_{B,tissue,i}    (2)
where R_i is the unknown reflection coefficient at the tissue-antenna interface at frequency f_i. R_i can be cancelled by the thermal power of the reference noise source (RNS) in the radiometer in balance-mode operation [7]. According to the Rayleigh-Jeans law, at microwave frequencies the thermal radiation intensity is proportional to the absolute temperature, and so

T_{B,meas,i} = T_{B,tissue,i} = (1 / (1 − R_i)) ∫∫∫_afv W_i(r)·T(r) dv    (3)

where T(r) is the absolute temperature in an incremental volume of tissue dv located at r, W_i(r) is the radiometric weighting function (WF), and the integration is over the antenna’s field of view (afv).

B. Weighting Functions
The weighting function is the ratio of the power reaching the antenna surface to the power generated in an incremental volume of tissue (Fig. 1). From the antenna reciprocity theorem, this ratio can be expressed as

W_i(r) = (1 / (1 − R_i)) · (σ·|E(r)|²/2) / ∫∫∫_afv (σ·|E(r)|²/2) dv    (4)

where E is the electric field intensity induced in the tissue by the antenna operated in the active mode. When the geometry and dielectric properties of the tissue, the de-ionized water bolus and the dielectric filling of the antenna are given, the WF can be derived numerically by electromagnetic analysis. In the present work, a two-dimensional model is used to represent the infant’s brain.

Fig. 1 Brightness temperature and weighting function. R_i is the power reflection coefficient at the antenna-tissue interface

C. Five-Band Radiometer System
Figure 2 shows a block diagram of the five-band radiometer system. Five separate Dicke radiometers have center frequencies of 1.2, 1.65, 2.3, 3.0 and 3.6 GHz (0.4 GHz bandwidth each). The antenna is a single dual-polarized short-waveguide type which is in contact with the head surface and covers the entire frequency range (1-3.8 GHz) of our system. Figure 3 shows the appearance of the system. The five receivers are housed in a metal-lined wooden box for both thermal insulation and electromagnetic shielding.

Fig. 2 Five-band microwave radiometer. All receivers are switched at a rate of 1 kHz between the individual RNS and the antenna. A PC adjusts the RNS temperature to maintain zero output of the lock-in amplifiers

Fig. 3 Five-band microwave radiometer system. The five receivers are inside a metal-lined wooden box for both thermal insulation and electromagnetic shielding. The system operates in a normal room (not in a shielded room)
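The volume integral of eq. (3) can be illustrated with a minimal 1-D discretization. The depth grid, temperature profile and exponential weighting-function shapes below are assumptions for illustration, not the FDTD-derived WFs used by the authors; R_i is taken as cancelled by balance-mode operation:

```python
import numpy as np

# Hypothetical 1-D depth grid (mm) and temperature profile T(z):
# 15 deg C at the cooled surface rising toward ~37 deg C at depth
z = np.linspace(0.0, 60.0, 121)
T = 15.0 + 22.0 * (1.0 - np.exp(-z / 15.0))

def brightness_temperature(T, decay_mm):
    """Discrete analogue of eq. (3): a weighted average of T(z) with a
    normalized exponential weighting function (an assumed WF shape)."""
    w = np.exp(-z / decay_mm)
    w /= w.sum()                      # normalize the discrete WF
    return float(np.sum(w * T))

# Lower frequencies penetrate deeper, so they see warmer, deeper tissue
tb_low = brightness_temperature(T, decay_mm=25.0)   # e.g. the 1.2 GHz band
tb_high = brightness_temperature(T, decay_mm=8.0)   # e.g. the 3.6 GHz band
```

The frequency dependence of the penetration depth is exactly what makes a set of bands informative: each band's brightness temperature averages the profile with a different depth emphasis.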
D. Temperature Retrieval Method
Because finding an estimate of the temperature distribution from a set of measured brightness temperatures (five in our system) is an inverse problem, we have previously reported a method that uses a priori knowledge of the temperature profile in the brain in order to avoid solving the inverse problem [8]. The a priori temperature profile in the brain is expressed as

T(z) = T_0 + ΔT·SF(z)    (5)

where T_0 is the surface temperature (cooling-bolus temperature), ΔT is the temperature difference between the surface and the center of the brain, and SF(z) is the shape function, i.e. the normalized temperature profile [6]. SF(z) was calculated based on the result simulated by Van Leeuwen [9]. The estimated temperature profile was obtained by minimizing the error function

Error = Σ_{i=1}^{5} (T_{B,model,i} − T_{B,measured,i})²    (6)

where T_{B,measured,i} is the measured brightness temperature at the center frequency f_i of each receiver and T_{B,model,i} is the pseudo brightness temperature at f_i calculated using the a priori temperature profile and the radiometric weighting functions. From this minimization, one ΔT was obtained and, considering that MWR measures thermal noise, the 2σ confidence interval of the temperature estimation was calculated by a Monte Carlo technique [8].

III. EXPERIMENT AND RESULT
A. System Calibration
The five receivers were calibrated by measuring the brightness temperatures of water in a water-bath. The water temperature was increased gradually so that there was enough time to balance the power received by the antenna (T_B,i) against the RNS temperature (T_RNS), which was controlled by a PC. When balance was achieved (the output of the lock-in amplifier became zero), T_RNS was recorded in the PC as the T_B,i of the water, and calibration curves were plotted against the water temperature recorded by a platinum thermometer. The calibration curve for the 3.6 GHz receiver is shown in Fig. 4, with the water temperature along the abscissa and the RNS temperature along the ordinate. A regression line was fitted (y = 0.9934x − 1.8914) and the standard deviation was 0.111 K, which was defined as the temperature resolution, or stability, of the 3.6 GHz receiver. The other resolutions were 0.103, 0.129, 0.138 and 0.105 K for the 1.2, 1.65, 2.3 and 3.0 GHz receivers, respectively. These were much improved from those reported previously [6].

Fig. 4 Calibration curve of the 3.6 GHz receiver (σ = 0.111 K). The standard deviation was used as the temperature resolution

B. Temperature Measurement Experiment
Using the five-band radiometer system, we performed a temperature measurement experiment on a phantom. The arrangement of the phantom, which emulated the temperature profile in the brain, is illustrated in Fig. 5. Temperatures at eight different locations along the depth were monitored by thermocouples for reference.

Fig. 5 Arrangement of the measurement experiment. The temperature-profile phantom consists of water, acrylic and agar; the agar surface was cooled by water (15 °C)
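The retrieval step of Sect. II-D, fitting ΔT in eq. (5) by minimizing the error function of eq. (6), can be sketched numerically as follows. The shape function, weighting functions, noise level and grid are all assumptions for illustration, not the published ones:

```python
import numpy as np

# Assumed setup: 5 receiver bands, a hypothetical normalized shape function
z = np.linspace(0.0, 60.0, 121)            # depth from the surface, mm
SF = 1.0 - np.exp(-z / 20.0)               # assumed shape function SF(z)
T0 = 15.0                                  # surface (cooling bolus) temp, deg C

# Assumed weighting functions: lower frequencies penetrate deeper
decays = np.array([25.0, 20.0, 15.0, 11.0, 8.0])   # mm, one per band
W = np.exp(-np.outer(1.0 / decays, z))
W /= W.sum(axis=1, keepdims=True)          # each discrete WF sums to 1

def model_tb(dT):
    """Pseudo brightness temperatures for the profile T(z) = T0 + dT*SF(z)."""
    return W @ (T0 + dT * SF)

# Simulated "measurement": true dT = 22 K plus 0.1 K receiver noise
rng = np.random.default_rng(0)
measured = model_tb(22.0) + rng.normal(0.0, 0.1, size=5)

# Minimize the sum-of-squares error of eq. (6) with a simple 1-D scan
grid = np.linspace(0.0, 40.0, 4001)
errors = np.array([np.sum((model_tb(d) - measured) ** 2) for d in grid])
dT_hat = float(grid[np.argmin(errors)])
```

A 2σ confidence interval on ΔT can then be estimated by repeating this fit over many resampled noise realizations, which is the Monte Carlo approach cited in the text.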
The measured brightness temperatures were 20.7, 17.9, 12.2, 10.4 and 7.8 °C for the 1.2, 1.65, 2.3, 3.0 and 3.6 GHz receivers, respectively. The result of the measurement and the estimation of the temperature profile with its 2σ confidence interval is illustrated in Fig. 6, where the depth from the antenna surface in mm is plotted along the abscissa and the temperature in °C along the ordinate. The central smooth solid line represents the temperature-versus-depth profile estimated with the five-band MWR. The outer two solid lines indicate the 2σ confidence interval (0.51 K) of the temperature estimation. The closed circles represent the readings taken with the thermocouples.
Fig. 6 Retrieved temperature profile in the phantom (estimated profile with 2σ = 0.51 °C confidence interval; precision error = 2.2 °C; abscissa: distance from the antenna surface in mm). Measurement stability at 5 cm depth (assumed to be the center of the infant’s brain) was ±0.51 °C

IV. DISCUSSION It is to be noted that a medical thermometer must measure to an accuracy of 0.1 °C. However, non-invasive temperature measurement inside a biological body presents challenges and difficulties, especially for a microwave radiometer, which measures a thermal noise power several orders of magnitude smaller than the background noise, since the system must operate at room temperature and not at liquid-helium temperature like a radio telescope. Moreover, it is expected to operate in a normal room (not in a shielded room). Therefore, the present target value of measurement accuracy for our system is 1 °C. In measurement engineering, accuracy consists of precision and resolution. In this study, the temperature resolution, which reflects the stability of the whole system, was 0.51 K at 5 cm depth. This was much improved from the previous value. However, the precision error was about 2 °C. Though we have no clearly identified cause for this error, microbubbles formed on the antenna surface and/or unanticipated variation of the phantom temperature are suspected. A method to remove air bubbles during the measurement experiment is now under consideration. A new temperature phantom with higher stability may help to evaluate the system more accurately. Weighting functions for the five center frequencies were obtained through the calculation of the specific absorption rate (SAR) by the FDTD (finite-difference time-domain) method. Since the WFs have a direct impact on the estimation of the power received by the antenna, a more precise calculation of SAR in the target is also a cornerstone of this measurement.

ACKNOWLEDGMENT
This work was supported in part by a Grant-in-Aid for Scientific Research, No. 20560394, from the Ministry of Education, Culture, Sports, Science and Technology.
REFERENCES 1. Perlman M, Shah P (2008) Time to adopt cooling for neonatal hypoxic-ischemic encephalopathy: Response to a previous commentary. Pediatr 121: 616-618 2. Williams CE, Gunn AJ, Mallard C et al. (1992) Outcome after ischemia in the developing sheep brain: an electroencephalographic and histological study. Ann Neurol 31: 14-21 3. Cady EB, D'Souza PC, Penrice J (1995) The estimation of local brain temperature by in vivo 1H magnetic resonance spectroscopy. Magn Reson Med 33: 862-867 4. Mellergard P (1994) Monitoring of rectal, epidural, and intraventricular temperature in neurosurgical patients. Acta Neurochir Suppl Wien 60: 485-487 5. Mellergard P (1995) Intracerebral temperature in neurosurgical patients: intracerebral temperature gradients and relationships to consciousness level. Surg Neurol 43: 91-95 6. Sugiura T, Hoshino S, Sawayama Y (2006) Five-band microwave radiometer system for non-invasive measurement of deep brain temperature in newborn infants: First phantom study. Proc PIERS : 395-398 7. Lüdeke KM, Schiek B, Kohler J (1978) Radiation balance microwave thermograph for industrial and medical applications. Electronic Letters 14: 194-196 8. Maruyama K, Mizushina S, Sugiura T et al. (2000) Feasibility of Non-invasive Measurement of Deep Brain Temperature in New-born Infants by Multi-frequency Microwave Radiometry. IEEE Trans MTT 48: 2141-2147 9. Van Leeuwen GMJ, Hand JW, Edwards AD et al. (2000) Numerical modeling of temperature distributions within the neonatal head. Paediatr Res 48: 351-356
Author: Toshifumi Sugiura Institute: Research Institute of Electronics, Shizuoka University Street: 3-5-1 Johoku City: Hamamatsu Country: Japan Email: [email protected], [email protected]
IFMBE Proceedings Vol. 35
Number of Pulses of rTMS Affects the Inter-reversal Time of Perceptual Reversal K. Nojima1, S. Ge2, Y. Katayama3, and K. Iramina1,3 1
Graduate School of Life Sciences, Kyushu University, Fukuoka, Japan 2 School of Electronic Engineering and Optoelectronic Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu, China 3 Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
Abstract— In this study, we applied repetitive transcranial magnetic stimulation (rTMS) to investigate the effects of the number of rTMS pulses on perceptual reversal. The right SPL (superior parietal lobule) is known to play a role in perceptual reversal of the spinning wheel illusion. We measured the IRT (inter-reversal time) of perceptual reversal and compared the effects of 1 Hz rTMS with 60, 120, and 240 pulses. When 1 Hz 60-pulse rTMS was applied over the right SPL, the IRT was significantly shorter. On the other hand, when 1 Hz 240-pulse rTMS was applied over the right SPL, the IRT was significantly longer. When 1 Hz 120-pulse rTMS was applied over the right SPL, there were no significant differences. Moreover, to investigate whether these effects depend on the stimulus frequency of rTMS, we compared the effects of 0.25 Hz 60-pulse, 0.5 Hz 60-pulse, 0.25 Hz 120-pulse, and 0.5 Hz 120-pulse rTMS. When 0.25 Hz or 0.5 Hz 60-pulse rTMS was applied over the right SPL, the IRT was significantly shorter. When 0.25 Hz or 0.5 Hz 120-pulse rTMS was applied over the right SPL, there were no significant differences. Therefore, the IRT of perceptual reversal is primarily affected by the number of pulses and not by the stimulus frequency of rTMS. Keywords–– rTMS, perceptual reversal, superior parietal lobule.
previous study, we reported that rTMS applied over the right SPL influences the IRT of perceptual reversal [7] [8] and suggested that the IRT might be affected by the number of pulses [9] [10]. In this study, we focus on the number of rTMS pulses and investigate their effects on the IRT of perceptual reversal in more detail.
II. METHODS The spinning wheel illusion was used as the ambiguous figure in this study (Fig. 1). All stimuli were controlled by a PC and presented on a CRT monitor with the background set to gray (71.2 cd/m2). All experiments were conducted in a darkroom. The subject's head was steadied by a chinrest. The stimuli were presented in a 5.8 × 5.8 cm square (4.74° × 4.74°) at the center of the monitor at a distance of 700 mm from the subject. The TMS stimulator was a MagStim Super Rapid Stimulator (Magstim comp., Whitland, UK) with a figure-of-eight 70 mm coil. To investigate the effects of the number of pulses, 1 Hz 60-pulse, 1 Hz 120-pulse, and 1 Hz 240-pulse biphasic rTMS were used in the present study. Also, to investigate the effects of stimulus frequency, 0.25 Hz 60-pulse, 0.5 Hz 60-pulse, 0.25 Hz 120-pulse, and 0.5 Hz 120-pulse rTMS were used. Stimulus
I. INTRODUCTION rTMS is a non-invasive and painless method of modifying cortical activity. rTMS passes repetitive pulses of electricity through a coil placed on the scalp. A magnetic field is generated around the coil; this field penetrates the skull and induces eddy currents in the cortex, changing the facilitation or inhibition of neuronal tissue [1]-[4]. Ambiguous figures are visual stimuli that can be interpreted in multiple ways by the human visual system. Perceptual reversal refers to the spontaneous switches in perception between several possible interpretations of a given ambiguous figure. Past research with ambiguous figures indicated that the parietal area, especially the SPL, is involved in perceptual reversal [5] [6]. In our
Fig. 1 Time sequence of this experiment. Subjects were required to respond by clicking a mouse button to indicate the perceptual reversal in the direction of rotation. The time interval between two successive perceptual reversals was automatically recorded as the IRT
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 336 – 339, 2011. www.springerlink.com
strength was set at 90% of the subjects’ individual resting motor threshold (MT). To estimate the MT, we applied stimulation over the motor area in the right hemisphere and monitored thumb movement in the left hand. Based on previous findings that response time is more affected by rTMS over the right SPL than the left SPL [11], we decided to apply the stimuli over the right hemisphere. The coil was placed tangential to the surface of the skull, and the center of the coil was positioned over electrode site P4 of the international 10-20 system (Fig. 2). The induced current in the brain was parallel to the Oz-Pz midline, and flowed from posterior to anterior regions. To compare the rTMS effect over the right SPL with that over other cortical areas, rTMS trains were also applied over the right PTL of the subject. For this, the coil was positioned over electrode site T6 of the international 10-20 system. Thus, the induced current in the brain flowed from posterior to lateral regions. These stimulus points were confirmed using an MRI and infrared camera system (BrainSight, Rogue Research Inc., Canada). Furthermore, to incorporate a No-TMS control condition, the same procedures were followed while no rTMS trains were applied over the subject's skull. The time sequence of the experiment is shown in Fig. 1. The experiment started with rTMS stimulation, followed immediately by the presentation of the spinning wheel illusion. Continuous repetition of this sequence yields the perception of a spinning wheel moving either clockwise or counterclockwise, with switches between these two perceptions occurring spontaneously. Subjects were required to respond as quickly and accurately as possible by clicking a mouse button to indicate a perceptual reversal in the direction of rotation. The time interval between two successive perceptual reversals was automatically recorded as the IRT. The application sequences of SPL stimulation, PTL stimulation and No
Fig. 2 The stimulus point on the brain when rTMS was applied over the right superior parietal lobule (SPL). The coil was positioned over electrode site P4 of the international 10-20 system. This point was confirmed using the MRI and infrared camera system
Fig. 3 Percentage difference in normalized average inter-reversal time of the right SPL. In this experiment, rTMS with 1 Hz 60 pulses, 1 Hz 120 pulses and 1 Hz 240 pulses was applied. Baseline: No-TMS experiment. The percentage difference was significantly smaller with 1 Hz 60 pulses. In contrast, it was significantly larger with 1 Hz 240 pulses
TMS were randomized across subjects. Eleven subjects (aged 21-44 years) participated in all the tests of the present study. All were right-handed and had normal or corrected-to-normal visual acuity. All subjects participated in practice trials before the experiment in order to familiarize themselves with the perceptual reversal of the spinning wheel illusion and to learn to make efficient responses. Before the experiment, all subjects were informed of the aim of this study, the procedures, the hazards of TMS, and the management of the data. All subjects consented to participate in this experiment.
III. RESULTS Previous studies have indicated that the intervals of perceptual alternation follow a gamma distribution in ambiguous figure perception [12]. Hence, we considered that the IRT would exhibit a gamma distribution in the present study as well. In each experiment, averaged IRTs were calculated as the AIRT. To remove the influence of individual differences, the ratio of the AIRT of each stimulus condition to the AIRT of the No-TMS condition was calculated as the normalized AIRT. The mean percentage difference in normalized AIRT of the SPL is shown in Fig. 3; this experiment used rTMS with 1 Hz 60 pulses, 1 Hz 120 pulses, and 1 Hz 240 pulses. The mean percentage difference in normalized AIRT of the PTL is shown in Fig. 4. Using the Wilcoxon signed-rank test, it was found that, compared to the No-TMS condition, the AIRT of the SPL condition was significantly smaller with 1 Hz 60 pulses (SPL stimuli < No-TMS, P<0.05). In contrast, compared to the No-TMS condition, the AIRT of the SPL condition was significantly larger with 1 Hz 240 pulses (SPL stimuli > No-TMS, P<0.001). On
right PTL, the obtained P-values were: 0.25 Hz 60 pulses: P=0.922; 0.5 Hz 60 pulses: P=0.157; 0.25 Hz 120 pulses: P=0.278; and 0.5 Hz 120 pulses: P=0.232.
Fig. 4 Percentage difference in normalized average inter-reversal time of the right PTL. In this experiment, rTMS with 1 Hz 60 pulses, 1 Hz 120 pulses and 1 Hz 240 pulses was applied. Baseline: No-TMS experiment. There were no significant differences
the other hand, there were no significant differences between IRTs for rTMS with 1 Hz 120 pulses (P=0.966) over the right SPL. As for the right PTL, the obtained P-values were: 1 Hz 60 pulses: P=0.765; 1 Hz 120 pulses: P=0.577; and 1 Hz 240 pulses: P=0.240.
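The normalization and significance testing described above can be sketched as follows. The IRT values are invented for illustration, and SciPy's `wilcoxon` stands in for whatever statistics package was actually used:

```python
# Hypothetical per-subject averaged IRTs (seconds) for 11 subjects.
from scipy.stats import wilcoxon

airt_no_tms  = [4.2, 5.1, 3.8, 4.9, 5.5, 4.1, 4.7, 5.0, 4.4, 4.8, 5.2]
airt_spl_60p = [3.1, 4.0, 3.2, 3.9, 4.6, 3.3, 3.8, 4.1, 3.5, 3.9, 4.3]

# Normalize each subject's AIRT by their own No-TMS baseline to remove
# individual differences, then test whether the ratio differs from 1.
normalized = [s / b for s, b in zip(airt_spl_60p, airt_no_tms)]
stat, p = wilcoxon([n - 1.0 for n in normalized])
print(p < 0.05)  # True: every ratio in this invented data set is below 1
```

Normalizing by each subject's own baseline before the paired test is what lets the comparison ignore how long a given subject's reversals are in absolute terms.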
Fig. 6 Percentage difference in normalized average inter-reversal time of the right PTL. In this experiment, rTMS with 0.25 Hz 60 pulses, 0.5 Hz 60 pulses, 0.25 Hz 120 pulses and 0.5 Hz 120 pulses was applied. Baseline: No-TMS experiment. There were no significant differences
IV. CONCLUSION AND DISCUSSION
Fig. 5 Percentage difference in normalized average inter-reversal time of the right SPL. In this experiment, rTMS with 0.25 Hz 60 pulses, 0.5 Hz 60 pulses, 0.25 Hz 120 pulses and 0.5 Hz 120 pulses was applied. Baseline: No-TMS experiment. The percentage difference was significantly smaller with 0.25 Hz 60 pulses and 0.5 Hz 60 pulses
To investigate the effects of the stimulus frequency of rTMS, we applied 0.25 Hz 60-pulse, 0.5 Hz 60-pulse, 0.25 Hz 120-pulse, and 0.5 Hz 120-pulse rTMS. The mean percentage difference in normalized AIRT of the SPL is shown in Fig. 5; the mean percentage difference in normalized AIRT of the PTL is shown in Fig. 6. It was found that, compared to the No-TMS condition, the AIRT of the SPL condition was significantly smaller with 0.25 Hz 60 pulses and 0.5 Hz 60 pulses (SPL stimuli < No-TMS, P<0.05). There were no significant differences between IRTs for rTMS with 0.25 Hz 120 pulses (P=0.700) and 0.5 Hz 120 pulses (P=0.638) over the right SPL. As for the
When rTMS was applied over the right SPL, we confirmed that the IRT of perceptual reversal was influenced. On the other hand, when rTMS was applied over the right PTL, there were no significant differences. Therefore, the right SPL is related to perceptual reversal. This result is consistent with previous studies [5] [6] [12]. To investigate the effect of the number of pulses, experiments with 1 Hz 60-pulse, 1 Hz 120-pulse and 1 Hz 240-pulse rTMS were performed. The 60-pulse condition caused a shorter IRT, the 240-pulse condition caused a longer IRT, and the 120-pulse condition did not affect the IRT, producing intermediate levels. These results show that the IRT of perceptual reversal might be affected by the number of pulses. Also, the results of the experiments with 0.25 Hz 60-pulse, 0.5 Hz 60-pulse, 0.25 Hz 120-pulse and 0.5 Hz 120-pulse rTMS show that the stimulus frequency of rTMS did not affect the IRT of perceptual reversal. Therefore, in this study, we showed that the number of pulses primarily determines the rTMS effect on perceptual reversal of ambiguous figures. It has been reported that the number of pulses in rTMS produces stronger anti-epileptic effects than the stimulation time [13]. Our results likewise show that the number of pulses, but not the frequency, of rTMS induces the effects. Therefore, our results suggest that changing the number of pulses in rTMS produces parametric changes in the IRT.
ACKNOWLEDGMENT This work was supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan (No. 21300168).
REFERENCES [1] T. Wu, M. Sommer, F. Tergau, and W. Paulus “Lasting influence of repetitive transcranial magnetic stimulation on intracortical excitability in human subjects”, Neuroscience Letters, 287, pp.37-47, 2000. [2] W. Gerschlager, H.R. Siebner, and J. C. Rothwell “Decreased corticospinal excitability after subthreshold 1Hz rTMS over lateral premotor cortex.”, Neurology., 57, pp.449-455, 2001. [3] E. M. Robertson, H. Theoret, and A. Pascual-Leone “Studies in Cognition: The Problems Solved and Created by Transcranial Magnetic Stimulation”, Journal of Cognitive Neuroscience, 15, 7, pp.948-965, 2003. [4] F. Maeda, J. P. Keenan, J. M. Tormos, H. Topka, and A. Pascual-Leone “Interindividual variability of the modulatory effects of repetitive transcranial magnetic stimulation on cortical excitability”, Exp Brain Res, 113, pp.425-430, 2000. [5] A. Kleinschmidt, C. Buchel, S. Zeki, and R. Frackowiak “Human brain activity during spontaneously reversing perception of ambiguous figures”, Proceedings of the Royal Society of London, B.265, pp.2427-2433, 1998. [6] E. D. Lumer, K. J. Friston, and G. Rees “Neural Correlates of Perceptual Rivalry in the Human Brain”, Science, 280, pp.1930-1933, 1998.
[7] S. Ge, S. Ueno, and K. Iramina “Effects of Repetitive Transcranial Magnetic Stimulation on Perceptual Reversal”, J. Magn. Soc. Jpn., 32, pp.458-461, 2008. [8] S. Ge, S. Ueno, and K. Iramina “The rTMS Effects on Perceptual Reversal of Ambiguous Figures”, Proceedings of the 29th Annual International Conference of the IEEE EMBS, pp.4743-4746, 2007. [9] K. Nojima, S. Ge, Y. Katayama, S. Ueno and K. Iramina ”Effect of the stimulus frequency and pulse number of rTMS on the inter-reversal time of perceptual reversal on the right SPL”, Journal of Applied Physics, 107, 9, pp.1-3, 2010. [10] K. Nojima, S. Ge, Y. Katayama and K. Iramina ”Time change of perceptual reversal of ambiguous figures by rTMS”, Proceedings of the 32nd Annual International Conference of the IEEE EMBS, pp.6579-6, 2010. [11] D. M. Beck, N. Muggleton, V. Walsh, and N. Lavie: “Right Parietal Cortex Plays a Critical Role in Change Blindness”, Cerebral Cortex, Vol.16, pp.712-717, 2006. [12] P. Sterzer, M. O. Russ, C. Preibisch, and A. Kleinschmidt “Neural Correlates of Spontaneous Direction Reversals in Ambiguous Apparent Visual Motion”, NeuroImage, 15, 4, pp.908-916, 2002. [13] A. Oliviero, V. Di Lazzaro, O. Piazza, P. Profice, M. A. Pennisi, F. Della Corte and P. Tonali “Cerebral blood flow and metabolic changes produced by repetitive magnetic brain stimulation”, J Neurol, 246, pp.1164-1168, 1999.
Author: Kazuhisa Nojima
Institute: Graduate School of Life Sciences, Kyushu University
Street: 744 Motooka Nishi-ku
City: Fukuoka
Country: Japan
Email: [email protected]
Permittivity of Urine between Benign and Malignant Breast Tumour E.S. Arjmand1, H.N. Ting1, C.H. Yip2, and N.A. Mohd Taib2 1
University of Malaya/Department of Biomedical Engineering, Faculty of Engineering, 50603 Kuala Lumpur, Malaysia 2 University of Malaya/Department of Surgery, Faculty of Medicine, 50603 Kuala Lumpur, Malaysia
Abstract— Permittivity is a property of a material that characterizes its response to an external electric field. This study determines the permittivity of urine for patients with benign breast tumours and patients with malignant breast tumours. Samples of urine were collected from 20 subjects with benign breast tumours and 20 subjects with malignant breast tumours. The study determines the significant differences in permittivity between the two groups. The results showed significant differences in permittivity between these groups of patients at certain frequencies. Keywords— Permittivity, benign, malignant, breast tumour, urine, dielectric constant, loss factor, loss tangent.
I. INTRODUCTION Permittivity describes how a substance responds when an electric field is applied to it. Each material has a specific dielectric property; the permittivity of different materials varies according to the amount of energy each is able to store when subjected to an electric field. Detecting breast cancer at an early stage is very important for diagnosis and for reducing the mortality rate. Many unnecessary biopsies are carried out because of the high rate of false positives in mammography [1]. For many malignant patients, breast lesions appear negative in mammography [2]. MRI is used when other diagnostic methods fail to find a major source [3]. MRI has high sensitivity but lower specificity for breast cancer detection; because of its excellent sensitivity, MRI is used for the detection of invasive breast cancer. With dynamic contrast-enhanced MRI (DCE-MRI), sensitivities are usually greater than 90% for invasive cancers [4, 5]. 80% of patients must be localized using ultrasound after detection by MRI [3]. Using ultrasound, overall cancer detection can be increased by 17% [7] and the number of unnecessary biopsies can be decreased to 40% [6]. Sometimes ultrasound cannot determine whether a mass is malignant, and a biopsy is then needed. Biopsy is an invasive detection method; it has high accuracy, but it is uncomfortable for patients and has a high cost [8].
The purpose of this research is to analyze the permittivity of urine in order to differentiate malignant tumours from benign tumours. Three parameters of permittivity are investigated: dielectric constant, loss factor, and loss tangent.
II. METHODOLOGY In this experiment, urine samples were collected from 20 patients with benign breast tumours and 20 patients with malignant breast tumours. A vector network analyzer (model 85070E) was used to measure the permittivity. The instrument was calibrated before every measurement; calibration helped to increase measurement accuracy and reduce errors. The permittivity was measured using the Agilent vector network analyzer from 10 MHz to 20 GHz. An independent-samples t-test was used to determine the significant differences in permittivity between patients with benign breast tumours and patients with malignant breast tumours.
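An independent-samples t-test of this kind can be sketched as below; the permittivity readings are invented placeholders, not the study's data, and SciPy's `ttest_ind` stands in for whatever statistics tool was actually used:

```python
from scipy.stats import ttest_ind

# Invented dielectric-constant readings at one frequency point
# for 20 benign and 20 malignant urine samples.
benign    = [51.8, 52.2] * 10   # mean 52.0
malignant = [50.8, 51.2] * 10   # mean 51.0

# Independent (unpaired) two-sample t-test: the two groups contain
# different patients, so a paired test would not be appropriate.
t, p = ttest_ind(benign, malignant)
print(p < 0.05)  # True: the invented group means are clearly separated
```

In the study this test would be repeated at each frequency point, which is how the per-frequency significance tables below arise.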
III. RESULTS AND DISCUSSIONS The graphs were plotted in Excel over the frequency range 0.4 GHz to 20 GHz. Figure 1 shows the average dielectric constant of benign patients and breast cancer patients. The average curve for breast cancer fluctuated more; this characteristic was significant and helped distinguish it from the benign patients' curve. The minimum point of the breast cancer patients' curve was at 1.9 GHz, where the value was around 39. The benign and breast cancer curves intersected at 4 points:
1. They first intersected around 12.4 GHz at a value of 53.2.
2. The second point was between 13.6 and 14.2 GHz.
3. The third point was around 16.6 GHz, where the value was around 45.
4. The last point was between 19 and 19.6 GHz, with a value near 39.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 340–343, 2011. www.springerlink.com
for the breast cancer graphs was around 15.6 GHz. The end points of the graphs were different; the end point for benign patients was around 32 and that for breast cancer patients was around 29.
Fig. 1 The average graph of dielectric constant. In Figure 2, the differences between the average graph of benign patients and the average graph of breast cancer patients are represented. Fig. 3 The average graph of loss tangent
Fig. 2 The average graph of loss factor. All of the curves had a minimum value around the same frequency point, about 20, and they terminated around 30. The breast cancer patients' curve showed more fluctuations than the other group's. The largest differences between the graphs were observed after 1.16 GHz. The breast cancer patients' curve was unstable and showed many fluctuations, while the benign patients' curve was stable and did not show any fluctuations. The maximum point
In Figure 3, the unstable waveform (with fluctuations) of the breast cancer patients' curve discriminates it from the benign patients' curve. The most significant differences are observed after 12.4 GHz. The maximum loss tangent of breast cancer patients after 12.4 GHz occurred around 18.8 GHz, reflecting the value 0.77. The benign and breast cancer curves intersect at 17.2 GHz. The two curves are similar at 18.4 GHz, meaning they had the same values at this frequency point. The end point of both the benign and breast cancer patients' curves was at a value of 0.82. An independent-samples t-test was carried out to determine the significant differences in dielectric constant, loss factor, and loss tangent across the groups. Table 1 shows that breast cancer patients and benign patients have significant differences at some frequencies; therefore, they can be distinguished by the dielectric constant at those frequencies. Table 2 shows the frequencies with significant differences in loss factor; thus, benign patients and breast cancer patients can be distinguished at some frequencies by the loss factor. The loss factor has more significant levels than the dielectric constant.
Table 1 Significant levels in Dielectric Constant

Frequency (GHz)   Significant Level
0.4               0.032
15.6              0.027
18.2              0.048
18.4              0.041
18.6              0.021
18.8              0.021
19                0.008
19.2              0.005
19.4              0.007
19.6              0.007
19.8              0.012
20                0.01

Table 3 Significant Levels in Loss Tangent

Frequency (GHz)   Significant Level
10.8              0.044
11                0.041
11.2              0.043
11.4              0.029
11.6              0.031
11.8              0.019
12                0.01
12.2              0.007
16.4              0.024
16.6              0.016
16.8              0.012
17                0.01
17.2              0.008
17.4              0.011
17.6              0.008
17.8              0.008
Table 2 Significant levels in Loss Factor

Frequency (GHz)   Significant Level   Frequency (GHz)   Significant Level
0.4               0.001               3.8               0.003
0.6               0.001               4                 0.006
0.8               0.001               4.2               0.015
1                 0.001               4.4               0.04
1.2               0.001               16.4              0.016
1.4               0.001               16.6              0.012
1.6               0.001               16.8              0.01
1.8               0.001               17                0.017
2                 0.001               17.2              0.023
2.2               0.001               17.4              0.022
2.4               0.001               17.6              0.044
2.6               0.001               17.8              0.027
2.8               0.001               18                0.027
3                 0.001               18.4              0.021
3.2               0.001               18.6              0.28
3.4               0.001               18.8              0.035
3.6               0.001               19                0.036
12.4              0.005               18                0.009
12.6              0.004               18.2              0.006
12.8              0.004               18.4              0.009
13                0.008               18.6              0.012
13.2              0.005               18.8              0.006
13.4              0.009               19                0.017
13.6              0.019               19.2              0.012
13.8              0.011               19.4              0.011
14                0.05                19.6              0.014
16                0.038               19.8              0.015
16.2              0.026               20                0.028

Table 3 shows the significant differences between breast cancer patients and benign patients in loss tangent; the two groups can be distinguished at the frequencies that have significant levels.

IV. CONCLUSIONS The permittivity of urine has been investigated for patients with benign and malignant breast tumours. The graphs and statistical analysis showed that the permittivity of urine can be used to distinguish between benign and malignant breast tumours. Significant differences in dielectric constant, loss factor, and loss tangent were observed at certain frequencies for patients with benign and malignant breast tumours.

ACKNOWLEDGMENT

Special thanks to the University of Malaya for funding this research under the PPP grant.
REFERENCES 1. Chen, CM, Chung, YH, Hon KC, Hung GS, Tiu CM, Chiou HJ, Chiou SY (2003) Breast lesions on sonograms: Computer-aided diagnosis with nearly setting-independent features and artificial neural networks. Radiology 226 (2): 504-514 2. Burns, RP (1997) Image-guided breast biopsy. American Journal of Surgery 173 (1): 9-11 3. de Bresser, J, De Vos, B, Van der Ent, F and Hulsewe, K (2010) Breast MRI in clinically and mammographically occult breast cancer presenting with an axillary metastasis: A systematic review. Ejso 36(2): 114-119 4. Hlawatsch, A, Hlawatsch, A, Teifke, A, Schmidt, M and Thelen M (2002) Preoperative assessment of breast cancer: Sonography versus MR imaging. American Journal of Roentgenology 179(6): 1493-1501 5. Fischer, U, Kopka, L and Grabbe, E (1999) Breast carcinoma: Effect of preoperative contrast-enhanced MR imaging on the therapeutic approach. Radiology 213 (3): 881-888
6. André, M, Galperin, M, Olson LK, Richman K, Payrovi S and Phan P (2002) Improving the accuracy of diagnostic breast ultrasound 7. Drukker, K, Giger, ML, Horsch, K, Kupinski MA, Vyborny, CJ and Mendelson EB (2002) Computerized lesion detection on breast ultrasound. Medical Physics 29(7): 1438-1446 8. Eltoukhy, MM, Faye, I, & Samir, BB (2010) Breast cancer diagnosis in digital mammogram using multiscale curvelet transform. Computerized Medical Imaging and Graphics 34(4): 269-276
Author: Dr. Hua-Nong Ting
Institute: University of Malaya
Street: Jalan Pantai Baharu
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Problems and Solution When Developing Intermittent Pneumatic Compression N.H. Kim, H.S. Seong, J.W. Moon, and J.M. Cho Dept. Biomedical Engineering, Inje University, Gimhae, Republic of Korea
Abstract— This paper discusses the problems encountered in developing an intermittent pneumatic compression system using an air pump/motor assembly and a solenoid valve, without using an air pressure sensor. Each problem faced during development was analyzed through experiments, and a proper solution was provided and tested. The experience described in this paper can be applied to the development of similar equipment. Keywords— Air pressure monitoring, air pump/motor control, solenoid valve control, low power portable medical equipment.
I. INTRODUCTION Intermittent pneumatic compression (IPC) is widely used for persons who have varicose veins of the lower extremities [1]. This method helps venous blood flow by intermittently compressing the lower extremities with a pressure-controlled cuff [1] [2]. Currently, many IPCs on the market consist of an air pump/motor assembly, a pressure sensor, solenoid valves, a cuff, and a microcontroller. The basic structure of this equipment is similar to that of a blood pressure monitor or a massage device using air compression. This study focuses on the development of portable, low-power IPC equipment without an air pressure sensor and suggests solutions to the problems that can occur during development.
II. PROBLEMS AND SOLUTIONS A. System Configuration The suggested IPC design is shown in Figure 1. In this design, an MSP430F2012 microcontroller, which has 2 Kbytes + 256 bytes of code memory and 128 bytes of data memory, was chosen as the main control unit. In addition, an air pump/motor assembly, a normally-closed solenoid valve for air flow control, and an air cuff were incorporated in this design. The air pump/motor assembly and solenoid valve were connected to either end of the air tube embedded in the cuff. In addition, a push button switch was used to switch operating modes, and a tri-color LED was
Fig. 1 The block diagram of the IPC. employed to indicate the current operating mode to the user. Four issues were considered in the design of the IPC: portability, low power, ease of use, and low cost. Three predefined pressure levels were prepared, and each level can be selected with the push button: 40-60 mmHg for the lowest pressure (LP) mode, 45-65 mmHg for the medium pressure (MP) mode, and 70-100 mmHg for the strongest pressure (SP) mode. Once the pressure reaches its predefined threshold value, the current pressure is maintained for five seconds and then decreased for a predefined time by opening the solenoid valve. Figure 2 shows the state diagram of the system. Six operating modes were prepared: sleep mode (SL), standby mode (SB), air exhaust mode (AE), LP, MP, and SP. Each operating mode can be selected by pressing the button; two types of button press were provided depending on the pressing time: short-time pressing (STP) for presses shorter than 2 seconds and long-time pressing (LTP) for longer presses. SL mode is specially designed for power saving while the system is not in an active state.
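The mode-switching logic in Fig. 2 can be sketched as a small transition table. The mode names and press types follow the paper; the specific transitions below are plausible assumptions, since Fig. 2 itself is not reproduced here:

```python
SHORT, LONG = "STP", "LTP"  # short (<2 s) and long button presses

# (current_mode, press_type) -> next_mode; an assumed subset of Fig. 2.
TRANSITIONS = {
    ("SL", SHORT): "SB",  # wake from sleep
    ("SB", SHORT): "LP",  # step through the pressure levels
    ("LP", SHORT): "MP",
    ("MP", SHORT): "SP",
    ("SP", SHORT): "LP",
    ("LP", LONG):  "AE",  # long press exhausts the cuff
    ("MP", LONG):  "AE",
    ("SP", LONG):  "AE",
    ("AE", LONG):  "SL",  # back to sleep for power saving
}

def next_mode(mode, press):
    """Return the next operating mode; stay put on undefined inputs."""
    return TRANSITIONS.get((mode, press), mode)

mode = "SL"
for press in [SHORT, SHORT, SHORT, LONG]:
    mode = next_mode(mode, press)
print(mode)  # SL -> SB -> LP -> MP -> AE, so this prints "AE"
```

A table-driven state machine like this keeps the button-handling code small, which matters on a microcontroller with only 2 Kbytes of code memory.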
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 344–347, 2011. www.springerlink.com
Fig. 2 The program flow chart of the IPC. B. Problem Caused by Back-EMF A back-EMF appears when the current flowing through an inductive load is turned off; it disturbs the system supply voltage, and the resulting supply-voltage dropout causes unpredictable operation of the system. The back-EMF can be significantly reduced by connecting a diode, such as a fast-recovery or Schottky diode, in parallel with the air pump/motor coil and the solenoid valve coil. C. Measurement of Air Pressure without Using a Pressure Sensor To reduce cost, the air pressure in the cuff is measured indirectly in this development, without an expensive air pressure sensor. A current-sensing resistor is connected in series with the air pump/motor assembly drive circuit. The voltage drop across this resistor is proportional to the drive current of the air pump/motor assembly. This voltage is converted to digital by the 10-bit ADC built into the MSP430F2012 microcontroller, using the internal 1.5 V reference voltage. At the same time, the air pressure in the cuff was measured with an analog air pressure gauge (Welch Allyn CE0123) to obtain the absolute air pressure in the cuff. With
Fig. 3 Partial circuit of the system used to drive air pump-motor assembly and measure voltage at current sense resistor
this experiment we could obtain the reference table that is used to convert the voltage across the current-sensing resistor into absolute air pressure.
Fig. 4 Voltage across the current sensing resistor for the same air pressure varies depending on the supply voltage. However, because the voltage across the current-sensing resistor for the same absolute air pressure varies with the supply voltage, as shown in Figure 4, the supply voltage is also measured by the built-in analog-to-digital converter to compensate for this variation. The voltage across the current-sensing resistor is depicted in Figure 5; it shows an overshoot when the motor/pump starts up, and air pressure measurement was not carried out during this transient. D. Batteries for Power Source Two 1.5 V AA batteries were used to power the IPC equipment so it can be used portably. As the battery voltage drops, the time to reach the predefined pressure level gets longer, and the system cannot supply sufficient air pressure to compress the lower extremities. The equipment is therefore designed not to operate if the supply voltage drops below 2.6 V. Fig. 5 Upper waveform shows the voltage measured at the current sensing resistor. Lower waveform shows a magnified view of region “A” in the upper waveform
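The indirect pressure estimation with supply-voltage compensation can be sketched as a calibration-table lookup. All numbers in the tables below are invented placeholders standing in for the reference table the authors obtained with the analog gauge:

```python
# calibration[supply_V] -> sorted (sense_voltage_V, pressure_mmHg) points.
# These values are invented for illustration, not measured data.
CALIBRATION = {
    2.6: [(0.24, 0), (0.42, 40), (0.60, 70), (0.78, 100)],
    3.0: [(0.20, 0), (0.35, 40), (0.50, 70), (0.65, 100)],
}

def interp(points, x):
    """Piecewise-linear interpolation over sorted (x, y) points."""
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

def cuff_pressure(v_sense, v_supply):
    """Estimate cuff pressure from the sense-resistor voltage,
    compensating for supply voltage by blending the two tables."""
    lo, hi = 2.6, 3.0
    v_supply = min(max(v_supply, lo), hi)
    w = (v_supply - lo) / (hi - lo)
    return ((1 - w) * interp(CALIBRATION[lo], v_sense)
            + w * interp(CALIBRATION[hi], v_sense))

print(cuff_pressure(0.50, 3.0))  # 70.0
```

On the actual MSP430 firmware this would be an integer lookup table rather than floating-point interpolation, but the compensation idea is the same: the sense-voltage-to-pressure mapping must be indexed by the measured supply voltage.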
III. CONCLUSIONS
Most of the problems that occurred during the development of the intermittent pneumatic compression equipment were examined, and proper solutions were provided through experiment. The final prototype was tested by prospective users and satisfied the four conditions: portability, low power, ease of use, and low cost.
Problems and Solution When Developing Intermittent Pneumatic Compression
It could measure the air pressure approximately without using an air pressure sensor. However, it cannot measure the air pressure accurately, because the voltage across the current-sensing resistor varies with the supply voltage; the indirect air pressure measuring method suggested in this development can therefore be applied only to systems with a constant power supply voltage. The system also needs to minimize power consumption for portable use; a simple way to lower power consumption is to put the system into a low-power standby mode when it is not operating. The results of this study should be useful when developing blood pressure monitors or massage machines that use air compression.
Author: Namhoon Kim
Institute: Inje University
Street: 607, Obang-dong
City: Gimhae
Country: Korea
Email: [email protected]
Pulse Oximetry Color Coded Heart Rate Monitoring System Using ZigBee

F.K. Che Harun1, N. Zulkarnain1, M.F. Ab Aziz1, N.H. Mahmood1, M.F. Kamarudin2, and A. Linoby3

1 Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310, Skudai, Johor
2 INSTEDT, Bangunan Kejora, Jalan Dato' Onn, 81930 Bandar Penawar - Desaru, Johor Darul Takzim
3 Universiti Teknologi MARA (UiTM) Pahang, 26400 Bandar Tun Abdul Razak Jengka, Pahang, Malaysia
Abstract— This paper proposes a prototype color-coded bracelet heart rate monitoring system that communicates over a wireless sensor network. The use of an integrated design with a microcontroller and a ZigBee transceiver makes it possible to develop wireless devices with small dimensions, low power consumption, and good computing capability. A pulse oximeter is used as the sensor to detect the user's heart rate, while the ZigBee module, which contains a complete ZigBee stack, receives all commands from the microcontroller unit. The heart rate obtained from the pulse oximeter is displayed through a multicolor light emitting diode (LED) indicator on the bracelet; the LED indicator shows different colors that indicate the heart rate range. The results will help coaches as well as individuals to monitor and regulate the user's exercise training using a computer.

Keywords— Pulse oximetry, heart rate monitor, ZigBee wireless sensor technology, WSN.
I. INTRODUCTION

Recently, many heart rate monitors have been developed with numerical digital displays providing an indication of target heart rate. Research on health monitoring has been carried out for many applications such as home care, hospital patient care, and sport training. Nowadays, the heart rate monitor is a device of great importance to human health and life. The ZigBee communication protocol is applied in this paper as it provides small volume, high expandability, low power consumption, and two-way transmission. A portable real-time wireless health monitoring system for remotely monitoring patients' heart rate and blood oxygen saturation over a ZigBee wireless network was developed by Watthanawisuth et al. [1]. Their experimental results showed that the system could be installed for testing in a patient's home for health care monitoring, with the wireless sensor network operating over an area of 10-15 square meters. Huang et al. [2] presented a wireless sensor network (WSN) that observes human physiological signals via ZigBee, which offers low power consumption and small volume. They developed a home care sensor network system with three embedded sensors (temperature, humidity, and light) to observe
heart rate and blood pressure. Other works that make extensive use of the ZigBee wireless network are discussed in [4,7]. In order to reduce the size, weight, and power consumption of the system, a microcontroller-based heart rate monitor was implemented. That work explains how a single-chip microcontroller can be used to analyze heart beat rate signals in real time, and can also be used to monitor patients or athletes over a long period. Conlan R.W. [8] filed a patent describing a small, portable device that detects physiological movements, including the pulse, and may be worn on the wrist of a user. Another study used a microcontroller to develop a monitoring system for the heart rates of rehabilitation patients taking physical therapy inside a rehabilitation center, giving the physical therapist early warning if necessary [4]. Such digital displays of target heart rate are not easy to read under most conditions of use, particularly when the user is exercising vigorously. Hence, this paper proposes an innovation that responds to this problem by providing an easily viewed target heart rate monitor which displays a uniform color homogeneously across the entire surface of the monitor, enabling users to tell at a brief glance whether they are exercising at a suitable intensity. The proposed innovation is programmed to automatically suggest the optimum heart rate zones in accordance with the user's exercise goal, without requiring any complicated calculation. The work focuses on sport training applications and is inspired by COBRA [5,6]. Such a practical device would certainly help athletes, coaches, and the public to monitor and regulate their exercise training in a more effective and safer manner.
II. METHODOLOGY

Figure 1 shows the overall workflow for designing the prototype that communicates over ZigBee. The work starts by gathering all the data and information needed from the pulse oximeter. The PIC16F1827 microcontroller is used because of its low power consumption, surface-mount package,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 348–351, 2011. www.springerlink.com
and serial peripheral interface (SPI) [1]. The data processed by the microcontroller are displayed through the LED indicator on the bracelet as well as on the computer screen. This LED indicator can show different colors that indicate the heart rate range. The data from the microcontroller are also sent to the computer via the ZigBee wireless network through the Microchip MRF24J40 for remote monitoring.
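The mapping from a measured heart rate to an LED color can be sketched as a small lookup. The zone boundaries and color assignments below are illustrative assumptions, not the values defined in the COBRA design:

```python
# Hypothetical zone table: (lower %HRmax bound, zone name, LED color).
ZONES = [
    (50, "healthy heart", "blue"),
    (60, "temperate", "green"),
    (70, "aerobic", "yellow"),
    (80, "anaerobic", "orange"),
    (90, "redline", "red"),
]

def led_color(heart_rate, max_heart_rate):
    """Return the LED color for the current %HRmax, or None below zone 1."""
    pct = 100.0 * heart_rate / max_heart_rate
    color = None
    for lower, _name, zone_color in ZONES:
        if pct >= lower:
            color = zone_color  # keep the highest zone reached
    return color
```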
Fig. 1 The block diagram of the methodology

Fig. 2 The conceptual drawing of the bracelet

Generally, the pulse oximeter already has a built-in processor that computes SpO2 and heart rate internally. These data are sent digitally to the microcontroller and then to the ZigBee module through a UART interface. LabVIEW software is then used to acquire the heart rate data through a UART-to-USB converter connected to the XBee controller. Figure 2 shows the conceptual drawing of the bracelet. The bracelet uses a cloth material for a more ergonomic design, and conductive thread connects the electronic devices with the multicolor LEDs. A board consisting of the ZigBee and microcontroller modules is placed in the middle of the bracelet. In this study, the pulse oximeter is used to measure the heart beat and track beats per minute continuously. It also measures the amount of oxygen as the percentage of haemoglobin molecules that are oxygenated versus the total amount of haemoglobin molecules. The ZigBee module used in this work is the Microchip MRF24J40MA, a 2.4 GHz IEEE Std. 802.15.4™ compliant surface-mount module with an integrated crystal, internal voltage regulators, matching circuitry, and a PCB antenna [2]. The integrated module design frees the integrator from extensive radio frequency (RF) and antenna design and regulatory compliance testing, allowing a quicker time to market. An RGB multicolor LED indicator is used to display the different colors that signify the heart rate range. An RGB LED is actually three single-color LEDs (red, green, and blue) combined in a single package, whose elements can be mixed to produce additional colors for the code. Unlike many other RGB LEDs, this unit has a diffused white lens that blends the colors without any additional diffusers or reflectors.
Conductive thread is a creative way to connect electronics to clothing. The thread can carry current for power and signals, so LED displays and electronic circuits can be made on any flexible fabric. It can also replace solder in many instances and create circuits on almost any hard or flexible surface. This thread was therefore used in this work to ensure the bracelet is comfortable for the user.
III. RESULT AND DISCUSSION

The prototype bracelet, shown in Figure 3, uses the multicolor LED to show different colors that signify the heart rate range. The heart rate range can be classified into five zones: the healthy heart zone, temperate zone, aerobic zone, anaerobic zone, and redline zone [5]. Each zone has its own intensity, determined by the percentage of maximum heart rate; the ranges of the sport zones are shown in Figure 4. In the exercise mode selection view, the athlete selects the desired exercise training zone. Within each training zone, distinct physiological effects take place to enhance specific fitness components. The first zone is the most
comfortable zone, reached by walking briskly. In this zone the heart is strengthened and muscle mass improves, while body fat, cholesterol, and blood pressure are reduced. The user gets healthier in this zone, but not fitter. Training within zone 2 develops basic endurance and aerobic capacity; activities in this zone promote fat burning for weight loss, serve as a warm-up phase, and allow the muscles to re-energize with the glycogen expended during faster workouts. Training in zone 3 develops and enhances the cardiovascular system, that is, the body's ability to transport oxygen to, and carbon dioxide away from, the working muscles; it also brings greater benefits in terms of calorie burning and improved aerobic capacity. In zone 4, training develops and increases the lactic acid system, the ability of the body to remove and delay the accumulation of lactic acid. The last zone, zone 5, is only sustainable for short periods. It effectively trains fast-twitch muscle fibres and helps to develop speed; this zone is reserved for interval running, and only very fit individuals are able to train effectively within it.

Fig. 3 Bracelet prototype design

Fig. 4 Sport zones based on percentage of maximum heart rate

IV. CONCLUSIONS
As a conclusion, the color-coded bracelet heart rate monitoring system using ZigBee was built to help athletes comfortably and easily monitor their heart rate range from their own wrist via the multicolor LED displayed on the bracelet. The system also makes it easy for a coach to obtain information about an athlete's heart rate condition by monitoring it on a PC via the ZigBee wireless network. This device will therefore help athletes to increase their performance in the sport field as well as support a healthy life. Further work will focus on reducing the size of the prototype and improving the ZigBee communication between the transmitter (the chest strap) and the receiver (the bracelet). The application of this invention is not limited to sport: it can be further expanded to other settings such as a bed (to monitor a patient's heart rate) or a head band (to monitor a swimming athlete's heart rate).
ACKNOWLEDGMENT

A special thank you to the Ministry of Higher Education Malaysia (Sport Section) for the financial support, and to Universiti Teknologi Malaysia (UTM) for the facilities and equipment provided for this research.
REFERENCES

[1] N. Watthanawisuth, et al., "Wireless wearable pulse oximeter for health monitoring using ZigBee wireless sensor network," ECTI-CON, 2010, pp. 575-579.
[2] M. Huang, et al., "The wireless sensor network for home-care system using ZigBee," Proc. of Intelligent Information Hiding and Multimedia Signal Processing, 2007, pp. 643-646.
[3] L. Man, "Design of a Heart Rate Monitor Device," IFMBE Proceedings, 2009, Vol. 26, pp. 137-142.
[4] J. Park, et al., "A ZigBee network-based multi-channel heart rate monitoring system for exercising rehabilitation patients," 2008, pp. 1-4.
[5] A. Linoby, "Color-coded Heart Rate monitoring bracelets (COBRA) to guide specific exercise training intensity," RMI UiTM (Patent), 2009, pp. 1-12.
[6] A. Linoby, M.A. Amat, "Color-coded Heart Rate Monitoring Bracelets (COBRA)," Innovation and Invention Design (IDD), 2010, Malaysia.
[7] Mukhopadhyay, S., et al., "A ZigBee based wearable physiological parameters monitoring system," IEEE Sensors Journal, Vol. PP, Issue 99, 2010, accepted for publication. Date accessed: 10/1/2011.
[8] Conlan R.W., "Activity monitoring apparatus with configurable filters," US Patent No. 5197489, 1993.
Study of Electromagnetic Field Radiation on the Human Muscle Activity

M.S.F. Mansor1, W.A.B. Wan Abas1, and W.N.L. Wan Mahadi2

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia
2 Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Malaysia
Abstract— The use of electrical appliances has enhanced man's quality of life, and as a result the demand for electricity has rapidly increased. Although electricity provides advantages for daily living, people are exposed to the associated electromagnetic field radiation, generally without realizing it. This study evaluates the impact of electromagnetic field radiation on human muscles using electromyography (EMG). Radiation was measured inside an office room furnished with fluorescent lights and a computer with a liquid crystal display (LCD) or cathode ray tube (CRT) screen. 17 subjects with no history of chronic musculoskeletal or abdominal pain participated in this study. The electromyograms (EMGs) of four forearm muscles, namely the flexor carpi radialis (FCR), extensor carpi radialis longus (ECRL), flexor carpi ulnaris (FCU), and extensor carpi ulnaris (ECU), were recorded at a sampling rate of 2000 Hz. Analysis of variance (ANOVA) was performed to determine the impact of the electromagnetic field radiation on the muscles. CRT radiation with fluorescent light (Case 5) was found to have the largest effect on the root mean square (RMS) of the EMG: the RMS of the EMG increased from Case 1 (without any radiation) to Case 5 by 11.9 % for the FCR muscle, 12.0 % for the ECRL muscle, 12.1 % for the ECU muscle, and 11.8 % for the FCU muscle. Furthermore, the extensor muscles were shown to be more active than the flexor muscles in RMS of EMG by 52.6 % when exposed to the radiation. The muscles were observed to be more active as the electromagnetic field radiation increased.

Keywords— Electromagnetic Field Radiation, Electromyography (EMG), Muscle Activity, Analysis of Variance (ANOVA).
I. INTRODUCTION

An electromagnetic field is a combination of an electric field (E-field) and a magnetic field (M-field). It consists of energy moving through space at the speed of light. The E-field is produced by stationary charges, whilst the M-field is produced mainly by the motion of electric charges. The magnitude of the M-field per unit area is known as the magnetic flux density and is measured in tesla (T). Electromagnetic field radiation generally comes from natural and man-made sources. The natural electromagnetic
field radiation emanates from the earth [1]. Man-made electromagnetic fields have destructive effects on humans, living things, and the environment, owing to the heat created and the energy radiated by applications of man-made E-fields [2]. Man-made electromagnetic field radiation can be classified into several different ranges, distinguished by the frequency and wavelength of the electromagnetic waves. The electromagnetic spectrum is divided into two sections according to frequency, namely ionizing and non-ionizing radiation, Fig. 1.
Fig. 1 The electromagnetic spectrum [3] Ionizing radiation is radiation that has enough energy to remove electrons from an atom, creating what is known as free radicals (atoms with unpaired electrons that are usually highly reactive) in living matter. Non-ionizing radiation refers to any type of electromagnetic radiation that does not carry enough energy per quantum to ionize atoms or molecules.
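The ionizing/non-ionizing boundary can be illustrated by comparing the photon energy E = hf with an ionization threshold. The ~10 eV threshold below is a rough, assumed order of magnitude for illustration only:

```python
PLANCK_H = 6.626e-34           # Planck constant, J*s
EV = 1.602e-19                 # joules per electronvolt
IONIZATION_THRESHOLD_EV = 10   # rough order of magnitude, assumed

def photon_energy_ev(frequency_hz):
    """Photon energy E = h*f, expressed in electronvolts."""
    return PLANCK_H * frequency_hz / EV

def is_ionizing(frequency_hz):
    """True if a single quantum carries enough energy to ionize atoms."""
    return photon_energy_ev(frequency_hz) >= IONIZATION_THRESHOLD_EV
```

By this criterion, 50 Hz power-line fields and 2.4 GHz radio waves fall far below the threshold, while X-ray frequencies exceed it.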
II. HUMAN MUSCLE

Muscle tissue is composed of cells that have the special ability to shorten (contract) in order to produce movements of body parts. The main purpose of the muscles is to produce the force that creates motion. There are three types of muscle: skeletal muscle, smooth muscle, and cardiac muscle.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 352–355, 2011. www.springerlink.com
A. Electromyography (EMG)

EMG is a method of recording the action potentials along the muscle fiber surface that result from the electrical activity of many motor units [4, 5]. EMG is applied in medical research, sport training, rehabilitation, and ergonomics. Maximal voluntary contraction (MVC) is a method of normalizing the recorded data as a percentage in order to standardize them. It addresses the questions of how effective a muscle is in achieving a required task and at what capacity level the muscle performed it [6].

B. Muscle Activity

During stretch, active muscles produce more force, with less energy and fewer chemical changes, than when acting isometrically at the same length. In EMG, muscle activity can be investigated by analyzing the data in the amplitude domain. The amplitude provides information about the level of muscle activation, which is affected by the number of active motor units and their firing rates [5]. More specifically, amplitude-domain signal processing includes the root mean square (RMS), which provides a rigorous measure of the information content of the EMG signal because it measures the signal's energy [7]. It can be used to compare the EMG muscle activity between cases with different levels of electromagnetic field radiation and to determine the most active muscle [8].

C. Muscle Selection

Typing involves the activation of the forearm and finger muscles. A previous study by Gerard et al. [9] chose three pairs of extensor and flexor muscles to investigate the effect of typing on the muscles; these muscles are involved in the lifting and downward pressing motions of the fingers during typing. According to Gerard et al. [9], the muscles involved in typing are the FCU and FCR, and the ECU and ECRL. These muscles were therefore selected in the current research.

III. METHODOLOGY

A. Electromagnetic Field Measurement

The measurement concentrated on the strength of the electromagnetic field radiation from the fluorescent lights and the computer screen. An EMDEX Snap was used to capture the radiation strength of the fluorescent lights and the computer screen.

B. Test Procedure

The researcher explained all the procedures of the task to the subjects before every test. The subjects sat on a standard office chair, with the computer placed on a standard computer table. They carried out the typing activity for 30 seconds prior to the first measurement trial to ensure that all the signals registered correctly [9]. The subjects were instructed to type a given test paragraph at their optimal speed in order to standardize the activation of the measured forearm muscles. The full methodology flow chart is shown in Fig. 2.

Fig. 2 Flow chart of methodology for typing experiment

C. Exposure Cases

Five trials involving the typing task were carried out, with exposure to electromagnetic field radiation at five different levels:

CASE 1: Without any radiation (control group)
CASE 2: Exposure to 1 LCD screen
CASE 3: Exposure to 1 LCD screen and 4 fluorescent lights
CASE 4: Exposure to 1 CRT screen
CASE 5: Exposure to 1 CRT screen and 4 fluorescent lights

D. EMG Measurement

The four muscles of the upper extremity studied were the FCU, ECU, FCR, and ECRL. The first pair was the FCU with the ECU, and the second pair was the FCR with the ECRL. They were chosen because they are highly involved in keyboard typing. Prior to the measurement, surface electrodes were placed on the skin over each muscle under study.
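The RMS amplitude measure and the MVC normalization described above can be sketched in a few lines (the sample values in the test are hypothetical):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an EMG window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize_to_mvc(window_rms, mvc_rms):
    """Express an RMS value as a percentage of the RMS recorded at MVC."""
    return 100.0 * window_rms / mvc_rms
```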
E. Signal Processing

The EMG data were digitally filtered using a second-order Butterworth band-pass filter. The filtered EMG signal was then processed to calculate the RMS parameter in order to determine which muscles were active under exposure to the electromagnetic field radiation. There were five levels of exposure in this experiment; the whole EMG record was captured for each case, each involving a 20-minute typing task. The EMG data were analyzed using EMGworks® 3.5 software (Delsys Inc.).
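A band-pass step of this kind can be sketched with SciPy. The 20-450 Hz pass band is an assumed, typical surface-EMG band; the paper specifies only a second-order Butterworth filter and a 2000 Hz sampling rate:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # sampling rate, Hz (from the paper)

# Second-order Butterworth band-pass; the 20-450 Hz edges are assumed.
b, a = butter(2, [20, 450], btype="bandpass", fs=FS)

def bandpass_emg(raw):
    """Zero-phase band-pass filtering of a raw EMG trace."""
    return filtfilt(b, a, raw)

# A synthetic EMG-like trace: a 100 Hz tone riding on a DC offset.
t = np.arange(FS) / FS
emg_like = np.sin(2 * np.pi * 100 * t) + 0.5
filtered = bandpass_emg(emg_like)  # DC offset is removed, the tone passes
```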
IV. RESULT

A. Electromagnetic Field Radiation Strength

The measurement was made for five different exposure cases, each representing a different situation. The magnitude of the electromagnetic field radiation for the five exposure cases during the experiment is summarized in Table 1.

Table 1 Electromagnetic field radiation measured with the EMDEX Snap for the different exposure cases

Case | Electromagnetic field radiation (μT)
Case 1: Without any electromagnetic field radiation | < 0.01
Case 2: Computer on, with LCD monitor only | 0.04
Case 3: Computer on, with LCD and 4 fluorescent lights | 0.16
Case 4: Computer on, with CRT only | 0.18
Case 5: Computer on, with CRT and 4 fluorescent lights | 0.30

B. EMG Measurement

17 subjects with no history of chronic musculoskeletal pain were involved in this experiment. The quantitative parameter of interest was the normalized RMS of the EMG signal. Fig. 3 (a) shows that the muscle activities increased from Case 1 to Case 5, and Fig. 3 (b) shows that the mean of the normalized RMS of the EMG signal fluctuated for all exposure cases. All graphs exhibit the same pattern for all muscles. From time T1 to time T10, across all 17 subjects, the standard error (SE) is about 1.5 % for the FCR muscle, 3.4 % for the ECRL muscle, 2.0 % for the ECU muscle, and 1.5 % for the FCU muscle.
Fig. 3 Normalized RMS of EMG values vs. time for the FCR muscle: the 5 exposure cases plotted separately (a) and averaged (b)

In order to investigate which cases differ significantly from the control group, post-hoc testing was performed using Tukey's method with α = 0.05. Multiple comparisons of Case 1 against the other cases (Case 2 to Case 5) were carried out; the post-hoc results are summarized in Table 2. The P-values are less than 0.05 for all muscles, showing statistically that muscle activity increased as the strength of the electromagnetic field radiation increased.
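The overall comparison underlying this analysis is a one-way ANOVA. A minimal pure-Python sketch of the F statistic (Tukey's HSD post-hoc step is omitted, and the data in the test are made up for illustration):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across several groups of RMS values."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of samples around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F indicates that the exposure cases differ more between groups than the variation within each group would explain.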
C. Statistical Analysis

The EMG data (taken to represent the activeness of the muscles) were examined throughout the typing task to identify any pattern of change during the 20-minute experimental period for the five exposure cases. Case 1 represents the control group, with no electromagnetic field radiation; the level of electromagnetic field radiation increased from Case 2 to Case 5.

V. DISCUSSION

Although the RMS of EMG results demonstrated an overall significant difference between the cases in all muscles, it was of interest to investigate whether the increase or reduction of the muscle activities varied with the cases. The means and standard deviations (SD) of the EMG for all four muscles are summarized in Table 3. In addition, Fig. 4 shows a bar graph of the mean RMS of EMG values for each case.
Table 2 Summary of post-hoc analysis (Tukey's HSD) of the RMS values, with α = 0.05

Muscle | Cases (I) | Cases (J) | P-value | Conclusion
FCR | Case 1 | Case 2 | < 0.001 | Case 1 ≠ Case 2
FCR | Case 1 | Case 3 | < 0.001 | Case 1 ≠ Case 3
FCR | Case 1 | Case 4 | < 0.001 | Case 1 ≠ Case 4
FCR | Case 1 | Case 5 | < 0.001 | Case 1 ≠ Case 5
ECRL | Case 1 | Case 2 | < 0.001 | Case 1 ≠ Case 2
ECRL | Case 1 | Case 3 | < 0.001 | Case 1 ≠ Case 3
ECRL | Case 1 | Case 4 | < 0.001 | Case 1 ≠ Case 4
ECRL | Case 1 | Case 5 | < 0.001 | Case 1 ≠ Case 5
ECU | Case 1 | Case 2 | < 0.001 | Case 1 ≠ Case 2
ECU | Case 1 | Case 3 | < 0.001 | Case 1 ≠ Case 3
ECU | Case 1 | Case 4 | < 0.001 | Case 1 ≠ Case 4
ECU | Case 1 | Case 5 | < 0.001 | Case 1 ≠ Case 5
FCU | Case 1 | Case 2 | < 0.001 | Case 1 ≠ Case 2
FCU | Case 1 | Case 3 | < 0.001 | Case 1 ≠ Case 3
FCU | Case 1 | Case 4 | < 0.001 | Case 1 ≠ Case 4
FCU | Case 1 | Case 5 | < 0.001 | Case 1 ≠ Case 5

Table 3 Mean and SD of normalized RMS values (%)

Muscle | Case 1 | Case 2 | Case 3 | Case 4 | Case 5
FCR | 10.775 ± 0.389 | 13.707 ± 0.135 | 16.649 ± 0.256 | 19.568 ± 0.107 | 22.673 ± 0.197
ECRL | 49.503 ± 0.223 | 53.103 ± 1.143 | 55.076 ± 0.799 | 58.480 ± 0.233 | 61.541 ± 0.192
ECU | 26.686 ± 0.189 | 29.557 ± 0.534 | 32.589 ± 0.176 | 35.449 ± 0.199 | 38.799 ± 0.219
FCU | 16.676 ± 0.217 | 22.663 ± 0.281 | 22.663 ± 0.281 | 25.684 ± 0.175 | 28.522 ± 0.137

It is clear that the extensor muscles (ECRL and ECU) were more active than the flexor muscles (FCR and FCU), by 55 %, Fig. 4. Among the extensor muscles, the ECRL was more active than the ECU by 41 %, while among the flexor muscles, the FCU was more active than the FCR by 26 %. For all muscles, the normalized RMS of EMG value increased from Case 1 to Case 5.

VI. CONCLUSION

All data showed increases across the cases in the RMS of EMG analysis. Thus, it is concluded that the muscles are more active when the radiation is increased. CRT radiation with fluorescent light (Case 5) was found to have the largest effect on the RMS of EMG: the RMS of EMG increased from Case 1 (without any radiation) to Case 5 by 11.9 % for the FCR muscle, 12.0 % for the ECRL muscle, 12.1 % for the ECU muscle, and 11.8 % for the FCU muscle. Furthermore, the extensor muscles were shown to be more active than the flexor muscles in RMS of EMG by 52.6 % when exposed to the radiation. Muscle activity is observed to be more acutely affected as the electromagnetic field radiation increases.
REFERENCES

1. Becker RO (1990) Cross Currents: The Promise of Electromedicine, the Perils of Electropollution. New York: G.P. Putnam's Sons.
2. Miller AB, To T, Agnew DA, Wall C, and Green LM (1996) Leukemia following occupational exposure to 60-Hz electric and magnetic fields among Ontario electric utility workers. American Journal of Epidemiology, 144(2), 150-160.
3. Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) at http://www.arpansa.gov.au/radiationprotection/basics/ion_nonion.cfm
4. Basmajian JV and De Luca CJ (1985) Muscles Alive: Their Functions Revealed by Electromyography (5th ed.). Baltimore: Williams and Wilkins.
5. De Luca CJ (1997) The use of surface electromyography in biomechanics. Journal of Applied Biomechanics, 13(2), 135-163.
6. Konrad P (version 1.0, April 2005) The ABC of EMG: A practical introduction to kinesiological electromyography. Noraxon Inc. User Manual, USA.
7. De Luca CJ (2006) Electromyography. Encyclopedia of Medical Devices and Instrumentation (John G. Webster, ed.). John Wiley.
8. Delsys (2008) Myomonitor® IV EMG System: User's Guide. Boston: Delsys Inc., USA.
9. Gerard PVG, Hanneka L, and Ab DH (2007) Effects of vertical keyboard design on typing performance, user comfort and muscle tension. Applied Ergonomics, 38, 99-107.
Fig. 4 Bar chart of normalized RMS of EMG values for specific muscles

Author: Mohd Shuhaibul Fadly Mansor
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
The pH Sensitivity of the Polarization Capacitance on Stainless-Steel Electrodes

J.G. Bau and H.C. Chen

Department of Biomedical Engineering, Hungkuang University, Taichung, Taiwan
Abstract— The quantitative analysis of electrode polarization and pH measurement is of great importance for in situ and in vivo monitoring in biological applications. The cyclic voltammogram and the transient current of electrode polarization under a step potential were characterized and evaluated. Using a stainless-steel electrode, we were able to clearly differentiate KCl solutions ranging from pH 5.76 to pH 10.13. Our results indicate that the polarization current is highly sensitive to pH, and that CV at a scan rate of 1 V/s gives the best pH discrimination. The advantages of this technique include a miniaturized electrode and a short response time, making the stainless-steel electrode a practical choice for either in situ or in vivo tests.

Keywords— Double layer, Capacitance, Stainless-steel, Cyclic voltammetry, Impedance.
I. INTRODUCTION

Stainless steel has low toxicity to living tissue and is therefore widely used in a variety of electrodes for detecting bioelectric events and for stimulating tissue. Its high mechanical strength makes it possible to implement microelectrodes by mechanical sharpening. The impedance of stainless-steel electrodes has been studied intensively for several decades. Consider a stainless-steel electrode immersed in an aqueous electrolyte. If no Faradaic reaction proceeds within the potential range bordered by the hydrogen and oxygen evolution reactions, the interfacial impedance is mainly influenced by various adsorption and desorption processes. The typical equivalent circuit of the interface in the presence of adsorption, elaborated by Ershler, Frumkin and Melik-Gaykazyan, and Lorenz, is shown in Figure 1. The polarization capacitance is of the form
C (ω ) =
σ
Fig. 1 The equivalent circuit of the interfacial impedance in the present of adsorption, whereσad is the coefficient of Warburg impedance, Wad , defined as Z(Wad)= σad(iω)-1/2 and Cdl, Cad, and Rad, stand for double layer and adsorption capacitances, and adsorption resistance, respectively
II. MATERIALS AND METHODS
1 iω ( Z (ω ) − Rs )
= Cdl +
σ
applied to the measurement of the impedance spectra on platinum [2,3], gold [4,5], and mercury [6] electrodes. Consider the aqueous solution with various hydrogen ion concentration. Because of the relation between ad and the concentration of the adsorbed ion, if the electrode is in the cathodic potential range, the adsorption should contain the contribution from the hydrogen adsorption. Besides, even the hydrogen ion concentrations are usually less than that of other ions such as Na+ and Cl- in ordinary physiological condition, water molecular is abundant within the inner Helmholtz plane, nearby the electrode surface, and H3O+ has higher mobility, the interfacial impedance will be influenced by adsorption and desorption of hydrogen ion significantly. In other words, the interfacial impedance should vary with the pH, evenly, its dynamic behavior should be mainly determined by pH. In this investigation, we set ourselves the task of learning how significantly the solution pH influence the polarization impedance.
1 + σ ad Cad
A. Scheme
C ad (1) iω + Rad Cad iω
where ad is the coefficient of Warburg impedance, Wad , defined as Z(Wad)= σad(i )-1/2 and Cdl, Cad, and Rad, stand for double layer and adsorption capacitances, and adsorption resistance, respectively [1]. This model has been
ω
An electrochemical analyzer system (model 608B, CH Instruments) was used to perform the measurements. In chronoamperometry (CA), a 10-2 s potential step was applied, in which the working-electrode potential was stepped to -0.5 V against the reference electrode, and the transient current arising from the polarization of the electric double layer at the interface was measured at a 1 MHz
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 356–358, 2011. www.springerlink.com
The pH Sensitivity of the Polarization Capacitance on Stainless-Steel Electrodes
sampling rate. The polarization was also investigated by cyclic voltammetry (CV), at scan rates of 1, 10, 100, 1000 and 10000 V/s. The reference electrode was Ag/AgCl. All measurements were conducted at a room temperature of 26 ± 1 °C.
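The dispersion predicted by Equ. 1 can be checked numerically. The sketch below evaluates the equivalent-circuit capacitance over frequency, assuming the adsorption form of Equ. 1 (Cdl in parallel with an adsorption branch made of Cad, Rad and Wad); the parameter values are illustrative placeholders, not fitted values from these measurements:

```python
import numpy as np

def polarization_capacitance(omega, C_dl, C_ad, R_ad, sigma_ad):
    """Equ. 1: C(w) = Cdl + Cad / (1 + sigma_ad*Cad*sqrt(i*w) + Rad*Cad*i*w)."""
    iw = 1j * omega
    return C_dl + C_ad / (1.0 + sigma_ad * C_ad * np.sqrt(iw) + R_ad * C_ad * iw)

# Illustrative (not measured) parameter values
omega = 2 * np.pi * np.logspace(-1, 5, 7)   # angular frequency [rad/s]
C = polarization_capacitance(omega, C_dl=20e-6, C_ad=50e-6,
                             R_ad=100.0, sigma_ad=1e3)
# At low frequency C approaches Cdl + Cad (the adsorption branch contributes
# fully); at high frequency the branch is blocked and C falls back toward Cdl
```

The low- and high-frequency limits of such a sweep are what the CA and CV measurements below probe in the time domain.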
B. Electrode Preparation
Conventional stainless acupuncture needles were used as the working electrodes. The needle size was 30G before etching, and the outer surface was well insulated except for the 5 mm tip. The needles were then etched in HCl solution. All needles were cleaned in deionized water for 30 seconds after etching. The exposed area of the needles was reduced to about 3.5 mm2 after the preparation.
Fig. 2 Cyclic voltammogram at 1 V/s for solutions A, B, C, D and E
C. Sample Solutions Preparation
To study the influence of hydrogen ions, a 0.11 M KCl (pH 5.76) solution was titrated with 1 M NaOH to obtain the five solutions of Table 1.

Table 1 Compositions of the sample solutions in base 0.11 M KCl solutions

Solution   Additional 1 M NaOH   pH
A          0                     5.76
B          10 μl                 NA
C          20 μl                 NA
D          30 μl                 NA
E          40 μl                 10.13
Fig. 3 Cyclic voltammogram at 10 V/s for solutions A, B, C, D and E
III. RESULTS
The polarization i-E curves of Figs. 2, 3, 4, 5 and 6 are the CVs in solutions A, B, C, D and E at scan rates of 1, 10, 100, 1000 and 10000 V/s, respectively. The curves with higher cathodic current also show larger anodic current. At the highest scan rate, 10^4 V/s, the interface reveals cathodic current (hydrogen adsorption) over the whole cathodic potential range; on the other hand, larger hysteresis in the CV was found at lower scan rates. The solutions with different pH are clearly discriminated at a scan rate of 1 V/s. The higher the scan rate, the less distinguishable the pH difference; in other words, the lower the scan rate of the CV, the better the pH resolution. A similar result is revealed in Fig. 7: the solution with the lowest pH produced the largest transient current.
Fig. 4 Cyclic voltammogram at 100 V/s for solutions A, B, C, D and E
IFMBE Proceedings Vol. 35
J.G. Bau and H.C. Chen
Fig. 5 Cyclic voltammogram at 1000 V/s for solutions A, B, C, D and E
Fig. 6 Cyclic voltammogram at 10000 V/s for solutions A, B, C, D and E

Fig. 7 Chronoamperometry with -0.5 volts step potential for solutions A, B, C, D and E

IV. DISCUSSION AND CONCLUSIONS
The higher the H3O+ concentration, the larger the polarization currents, not only in the cyclic voltammograms but also in the chronoamperometry. Because the applied potential in both the CV and CA studies did not exceed the borders of the hydrogen and oxygen evolution reactions, no Faradaic reaction proceeds, and the response current in both studies is mainly influenced by the adsorption and desorption of hydrogen ions. In our previous studies using CA to compare the influence of HCl and KCl [7], we found that the transient current of the 0.1 M KCl solution with 10-4 M additional HCl was much larger than that of the 0.11 M KCl solution, suggesting that the interfacial polarization behavior is mainly determined by pH. The CV results of this study are consistent with that prior work. The 0.5 V potential step with a 10-2 s period corresponds to a CV scan rate of 50 V/s (0.5 V / 10-2 s = 50 V/s). The results indicate that CV with the lowest scan rate provides the best pH resolution, revealing that the dynamic behavior of the hydrogen adsorption is on the order of half a second. Solutions of different pH values can thus be discriminated by quantitative analysis of the electrode polarization current in KCl solutions: the lower the scan rate of the CV, the better the pH resolution. This technique offers the advantages of a miniaturized electrode and a short response time, and warrants further investigation to assess its applicability in in vitro and in vivo tests.

REFERENCES
T. Pajkossy and D.M. Kolb, Double layer capacitance of Pt(111) single crystal electrodes, Electrochim. Acta 46 (2001), p. 3063.
E. Sibert, R. Faure and R. Durand, Electrosorption impedance on Pt(111) in sulphuric media and nature of the 'unusual' state, Electrochem. Commun. 3 (2001), p. 181.
E. Sibert, R. Faure and R. Durand, Pt(111) electrosorption impedance in mixed electrolyte, J. Electroanal. Chem. 528 (2002), p. 39.
Z. Kerner and T. Pajkossy, Measurement of adsorption rates of anions on Au(111) electrodes by impedance spectroscopy, Electrochim. Acta 47 (2002), p. 2055.
T. Pajkossy, Th. Wandlowski and D.M. Kolb, Impedance aspects of anion adsorption on gold single crystal electrodes, J. Electroanal. Chem. 414 (1996), p. 209.
R.D. Armstrong, W.P. Race and H.R. Thirsk, J. Electroanal. Chem. 27 (1970), p. 21.
J.G. Bau, H.C. Chen, B.Y. Liau, M.T. Tsai, H.M. Chen, pH dependence of double layer capacitance of stainless steel electrodes in KCl aqueous solution, Proceedings of the 2010 International Symposium on Biomedical Engineering, Kaohsiung, Taiwan, Dec. 10-11, 2010.
Author: Bau, Jian-Guo
Institute: Department of Biomedical Engineering, Hungkuang University
Street: 34 Chung-Chie Rd, Sha Lu
City: Taichung
Country: Taiwan, Republic of China
Email: [email protected]
Ultrasound Dosimetry Using Microbubbles
E. Rezayat1, E. Zahedi2, and J. Tavakkoli3
1 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
2 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
3 Department of Physics, Ryerson University, Toronto, Ontario, Canada
Abstract— In this paper, a new technique based on nonlinear resonance of microbubbles is investigated in order to estimate the amplitude of an ultrasound wave pressure field. First, the existing theoretical model is reviewed. Then, an experimental setup consisting of a bubble generator and transmitting/receiving ultrasound transducers operating in the 1 MHz frequency range is described. The effect of background noise is also taken into account. Results show that the second harmonic oscillations are detectable, paving the way to develop a quantitative method for in vivo calibration of ultrasound waves. Keywords— Ultrasound, Calibration, Microbubbles, Nonlinear Resonance.
I. INTRODUCTION
Ultrasound waves produce a variety of thermal and cavitational effects in biological tissues [1]. Given their widespread use in medical diagnosis and treatment since the sixties [2], stringent standards have been developed to keep the intensity of these waves within safe levels [3]. Accordingly, calibration techniques have been actively developed, such as wideband hydrophone, calorimetric, optical and radiation-force measurement systems [3]. When the intensity increases, as is the case in HIFU (high-intensity focused ultrasound) and lithotripters, calibration becomes more delicate [1]. However, all of these techniques rely on in-vitro calibration, as the setup usually consists of a temperature-controlled water tank and a 3-D scanner which records the intensity of the ultrasound field. At very high intensities, the hydrophone can be placed at the focal point of the transducer with adequate protection to avoid its destruction [1]. By using "derating" formulas, the intensity of the ultrasound wave is extrapolated by assuming a certain amount of attenuation (around 0.069 Np·cm-1·MHz-1 [4]). Generally, the ISPTA (spatial-peak temporal-average intensity), ISPPA (spatial-peak pulse-average intensity), MI (mechanical index) as well as PII (pulse intensity integral) are measured. All of these measurements require an estimation of the amplitude of the pressure wave [4]. On the other hand, microbubbles have received considerable attention as agents for drug delivery, targeting
and image contrast [5]. The significant feature of microbubbles is that when they are exposed to an ultrasound field of sufficient magnitude, radial oscillations occur. Other types of oscillation may occur depending on the radius of the microbubble, its environment and the intensity of the ultrasound field. When microbubbles are used as contrast agents, linear radial oscillations, which happen at MI values of less than 0.1 [6], are useful in making the flow (into which the bubbles are inserted) more visible. For higher values of the MI (0.1 < MI < 0.5), i.e. at an acoustic pressure of 0.1 to 0.5 MPa (assuming a frequency of 1 MHz), non-linear bubble oscillations happen. As reflections from the surrounding tissues can be considered linear at these MI values, the non-linear oscillations creating harmonics can be attributed to the microbubbles, thereby enhancing the image contrast. An application of non-linear acoustic emission is the expanding field of harmonic imaging, where harmonics emitted by the tissues themselves are of interest. Numerous studies have been carried out to determine the threshold of non-linear oscillations [7-10], and theoretical as well as experimental aspects have been investigated thoroughly. An interesting aspect of these non-linear oscillations is that they can be put to use to estimate the amplitude of the acoustic pressure both in vivo and in vitro. In this paper, theoretical aspects of this non-linear resonance are first discussed. Then, experimental results showing the second harmonic using microbubbles created in the laboratory are presented. Finally, a roadmap is drawn which could eventually lead to in-vivo calibration of ultrasound wave intensity.
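The MI/pressure mapping quoted above follows from the standard definition of the mechanical index (derated peak rarefactional pressure in MPa divided by the square root of the centre frequency in MHz); a minimal sketch:

```python
import math

def mechanical_index(peak_neg_pressure_pa: float, freq_hz: float) -> float:
    """Standard MI definition: derated peak rarefactional pressure [MPa] / sqrt(f [MHz])."""
    return (peak_neg_pressure_pa / 1e6) / math.sqrt(freq_hz / 1e6)

# At 1 MHz, 0.1-0.5 MPa maps onto MI = 0.1-0.5, the non-linear
# oscillation range quoted in the text
print(mechanical_index(0.5e6, 1e6))   # 0.5
```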
II. THEORY OF MICROBUBBLE RESONANCE
The main equation which describes pure-gas bubble resonance is the well-known Rayleigh-Plesset equation [7]:

ρ(R R'' + (3/2) R'^2) = (P0 + 2σ/R0)(R0/R)^(3γ) − 2σ/R − P0 − Pac(t)   (1)

where the following parameters are defined:
• R: instantaneous radius of the bubble [m]; R' and R'': first and second time derivatives of the radius
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 359–362, 2011. www.springerlink.com
• R0: bubble radius at rest [m]
• ρ: water density [1000 kg·m-3 at 20°C]
• σ: surface tension of water [0.0725 N·m-1 at 20°C]
• c: sound celerity in water [1480 m·s-1]
• γ: polytropic exponent [usually assumed unity]
• P0: ambient pressure [about 100 kPa]
• Pac = P1cos(2πft): sinusoidal acoustic pressure (frequency f) applied to the bubble at instant t [Pa]
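Equ. 1 can be integrated numerically. The following sketch integrates the undamped Rayleigh-Plesset form with a simple fixed-step RK4 scheme; the material constants come from the parameter list above, while the drive amplitude P1 = 2 kPa is an illustrative choice (well below the 20 kPa used later) so that the small-amplitude assumption holds, and all damping terms are omitted:

```python
import numpy as np

# Water parameters from the list above (20 degC); P1 and f are
# illustrative choices, not values measured in this work.
rho, sigma, P0, gamma = 1000.0, 0.0725, 100e3, 1.0
R0, f, P1 = 3.15e-6, 1e6, 2e3   # rest radius [m], drive frequency [Hz], drive amplitude [Pa]

def deriv(t, y):
    """Right-hand side of the undamped Rayleigh-Plesset equation."""
    R, Rdot = y
    Pac = P1 * np.cos(2 * np.pi * f * t)                   # applied acoustic pressure
    gas = (P0 + 2 * sigma / R0) * (R0 / R) ** (3 * gamma)  # gas pressure inside the bubble
    Rddot = (gas - 2 * sigma / R - P0 - Pac) / (rho * R) - 1.5 * Rdot ** 2 / R
    return np.array([Rdot, Rddot])

# Fixed-step RK4 integration over 5 microseconds (5000 steps of 1 ns)
dt, t, y = 1e-9, 0.0, np.array([R0, 0.0])
radii = np.empty(5000)
for i in range(5000):
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    radii[i] = y[0]
# A 3.15 um bubble is near resonance at 1 MHz, so the radial excursion grows
# over the first cycles while staying small relative to R0 at this drive level
```

With damping included the excursion would saturate instead of growing; the undamped sketch is only meaningful for short times and small amplitudes.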
Equ. 1 being non-linear, in general the solutions are obtained using numerical approaches. However, if the amplitude of the oscillations can be assumed small compared to the bubble radius at rest, Miller [11] has developed an analytical solution of the Rayleigh-Plesset equation, giving the backscattered pressure at the first and second harmonics:

p1 = (P1 R0 Ω² / r) · [(1 − Ω²)² + (Ωδ)²]^(-1/2)   (2)

p2 = (P1² R0 Ω⁴ Y / (ρ ω² R0² r)) · {[(1 − Ω²)² + (Ωδ)²] · [(1 − 4Ω²)² + (2Ωδ)²]^(1/2)}^(-1)   (3)

Where:
• p1: backscattered pressure at the first harmonic
• p2: backscattered pressure at the second harmonic
• Ω = ω/ω0: normalized frequency (drive frequency over bubble resonance frequency)
• r: radial distance from the bubble
• δ: total damping coefficient of the bubble oscillation
• Y: dimensionless coefficient in Miller's solution [11]

In this work, Equ. 2 and Equ. 3 have been used in order to estimate the amplitude of the first two harmonics. Using these equations, the backscattered pressure has been plotted versus bubble radius for the first (Fig. 1–a) and second harmonics (Fig. 1–b) for a radiating pressure of 20 kPa, assuming an insonating frequency of 1 MHz. As can be seen, the amplitudes of both harmonics show a peak at the resonance radius (here R0 = 3.15 μm). However, the amplitude of the first harmonic rises uniformly after resonance and reaches the same amount of pressure (156 Pa) at a radius of R0 = 48.96 μm (where the radius is approximately 15 times larger). This limitation has been previously highlighted by Miller [11] as a major factor explaining why there might be some ambiguity when trying to determine the amplitude or frequency of the exciting wave using the first harmonic. Here, a similar problem arises: as the radius of the bubbles may not be controllable, an ambiguity does exist if the first harmonic is used. In other words, in the linear oscillation region, the backscattering of bigger bubbles will interfere with the pressure backscattered by bubbles at their resonance. This problem can be resolved using the second harmonic, which shows a single peak (Fig. 1–b): here, the amplitude declines inversely proportional to the radius after resonance. It can also be shown [11] that the amplitude of the second harmonic is proportional to the square of the excitation amplitude. Another limitation of using the first harmonic is that reflections from surrounding tissues occur at the same frequency as the excitation, adding to the ambiguity in amplitude determination; the information from the bubble resonance might be "lost" in the clutter emitted by the background.
Fig. 1 Amplitude of backscattered pressure (radiating pressure 20 kPa & frequency 1 MHz) versus radius at rest. a- First harmonic, b- Second harmonic

Although tissues do emit some degree of non-linear oscillations [12], their amplitude is much less than that of the resonating bubble as long as the insonating pressure remains low. In order to quantify this effect, Equ. 4 describes the amplitude of the second-harmonic component generated in water as a function of the travelled distance:

p2 = π β z p1² / (ρ c² λ)   (4)

Where:
• p1: amplitude of the pressure wave excitation [Pa]
• p2: amplitude of the second harmonic produced from the non-linearity of water [Pa]
• z: distance travelled by the acoustic wave [m]
• β: coefficient of non-linearity of water; λ: acoustic wavelength [m]
In order to lessen these non-linear effects, the bubble-receiver distance should be minimized and the wavelength maximized. Given a 1 MHz excitation, a distance of 5 cm and an insonating pressure of 20 kPa, the second harmonic intrinsically generated by water will be only 5% of the first harmonic. Although biological tissues will behave differently, these non-linear effects are ignored in this work. Another, more serious source of non-linear behavior is the second harmonic emitted by the signal generator (impurity of the excitation spectrum). A theoretical solution to mitigate this problem would be to subtract the background harmonic emission (without bubbles) from the received signal (where bubbles are present).
III. EXPERIMENT
In order to investigate the theoretical model presented in section II, an experimental setup consisting of a water tank and piezoelectric elements is used (Figure 2). The transmitter consists of a 2 cm diameter piezoelectric transducer (PZT-4) with a resonance frequency of 1 MHz, excited by a signal generator producing a sine wave at 1.1 MHz. Another piezoelectric transducer (2 cm diameter, resonance frequency of 1.7 MHz) is connected to a data-acquisition board via a preamplifier.

Fig. 2 Schematic diagram of the experimental setup

The two transducers form an angle of 90° in order to minimize direct coupling between transmitter and receiver. In order to eliminate low-frequency noise due to electromagnetic pickup in the cables connecting the receiving transducer to the preamplifier, a high-pass filter (10 kHz cutoff) is used. The analog signal is acquired at a rate of 50 MHz with 12-bit resolution. The software in charge of the user interface and data recording is entirely developed in LabView. In order to produce microbubbles, a fine metallic wire (200 µm diameter) made of Remanium is excited by monopolar pulses 2 ms wide at a repetition frequency of 100 Hz. A cloud of bubbles appears with each pulse at the cathode, where the amplitude is -2 V. However, this cloud can easily be dispersed by microcurrents inside the water tank; thus the cathode is inserted in a 5 cm tube made of 2 mm thick Plexiglas to prevent bubble drifting. This tube effectively guides the bubbles to the region of interest. The path of the microbubbles is highlighted with a red laser beam. To further reduce the effect of high-frequency noise (due to nearby emitting radio stations), the whole setup is wrapped inside a fine metallic mesh, creating a Faraday cage.

IV. RESULTS
Figure 3 shows the background noise without any bubbles being generated. The amplitude of the excitation is 5 V (emitting transducer at 1.1 MHz). The spectrogram (Fig. 3–b) shows that despite the shield and other noise-limiting precautions, many harmonics appear in the high-frequency range. Another noise factor is the direct coupling between the emitting and receiving transducers.

Fig. 3 Background noise in the absence of bubbles. a- Time domain, b- Spectrogram
Figure 4 shows the received signal when bubbles are generated (pulse generator on). As can be seen, the harmonics appear clearly when the bubbles reach the overlapping insonating regions of both transducers (Fig. 4–b).

Fig. 4 Received signal when microbubbles are generated. Second harmonic is observed when passing microbubbles reach the probed region. a- Time domain, b- Spectrogram

The large oscillations starting at around 3.8 ms and 14 ms (Fig. 4–a) are attributed to the passage of two successive clouds of bubbles. The sudden variations (spikes) are the effect of external electrical noise. The amplitude of the second harmonic oscillation is 60 mV (estimated using a band-pass filter, not shown here).

V. CONCLUSIONS
A relatively simple experimental setup has been presented which supports the theory exposed in the previous sections. The advantage of using bubble resonance to determine the intensity of ultrasound waves is the non-invasive nature of the technique, as microbubbles are considered biocompatible. Another advantage is the relative ease of producing microbubbles compared to a setup consisting of a hydrophone, a 3-D scanner and a water tank. Although one cannot expect good spatial resolution with such a setup, the peak pressure can be determined in the insonated area. The major advantage of this approach is the possibility of determining the amplitude of the ultrasound wave in vivo from the second-harmonic amplitude.

The main limitations can be summarized as follows. The model used in this paper (Miller [11]) is only valid for small amplitude variations; other models should therefore be investigated for higher amplitudes. The range of bubble radii cannot be easily controlled in an electrolysis circuit; to solve this problem, the use of calibrated lipoprotein microbubbles is being considered. The model was investigated for a single-frequency sinusoidal ultrasound wave; for other (e.g. pulsating) waves, the model should be expanded. Finally, although at low insonating pressures the non-linear effect of the surrounding medium is low (only 5%, refer to section II), at 200 kPa the second harmonic intrinsically generated by water will reach 50%. The new model should therefore take these effects into consideration as well.

ACKNOWLEDGMENTS
We thank Mr. M. Heydari and Mr. M. Shaban for their kind technical assistance in the experiments and the Medical Products Distribution Iran Co. for providing the components for the experimental setup.
REFERENCES
1. Harris G R (2005) Progress in medical ultrasound exposimetry. IEEE Trans. UFFC 52, pp 717-736
2. IEC 150 (1963) Testing and calibration of ultrasonic therapeutic equipment.
3. IEEE Standard 790-1989 (1990) IEEE guide for medical ultrasound field parameter measurements.
4. NEMA Standards Publication UD 2 (1993) Acoustic output measurement standard for diagnostic ultrasound equipment. Revision 2.
5. Vos H J (2010) Single microbubble imaging. PhD thesis, Rotterdam, the Netherlands
6. Ochoa L N et al. (2007) Noninvasive vascular diagnosis: contrast-enhanced ultrasound. Springer, Practical Guide to Therapy
7. Lauterborn W (1976) Numerical investigation of non-linear oscillation of gas bubbles in liquid. J. Acoust. Soc. Am. 59, pp 283-293
8. Haak A, Lavarello R and O'Brien W D (2007) Semiautomatic detection of microbubble ultrasound contrast agent destruction applied to Definity® using support vector machines. IEEE Ultrasonics Symposium, pp 660-663
9. Natori M, Tsuchiya T, Umemura S, Shiina T et al. (1998) Threshold of microbubble agents collapse by ultrasound irradiation. IEEE Ultrasonics Symposium, pp 1435-1438
10. Doinikov A A and Bouakaz A (2009) Theoretical model for the threshold onset of contrast microbubble oscillations. IEEE International Ultrasonics Symposium Proceedings, pp 1833-1835
11. Miller D L (1981) Ultrasonic detection of resonant cavitation bubbles in a flow tube by their second-harmonic emissions. Ultrasonics, vol 19, num 5, pp 217-224
12. Nyborg W L (1978) Physical principles of ultrasound. In: Ultrasound: its applications in medicine and biology. Elsevier, New York
Visualization of Measured Data for Wireless Devices BluesenseAD
O. Krejcar1,2, D. Janckulik1, and M. Kelnar1
1 Department of Measurement and Control, FEECS, VSB Technical University of Ostrava, Ostrava-Poruba, Czech Republic
2 Department of Information Technologies, FIM, University of Hradec Kralove, Rokitanskeho 62, Hradec Kralove, Czech Republic
Abstract— This paper deals with the visualization of data measured by the wireless Bluetooth module BluesenseAD. The module measures the input voltage signal with an A/D converter and sends it via Bluetooth to a computer for processing. BluesenseAD can be used for measurement in many applications, ranging from simple measurement of voltage or temperature up to ECG curves. Data visualization takes the form of a timing-chart display; moreover, the data can be saved and is thus available for subsequent detailed analysis. The user interface provides access to basic utilities for configuring the serial link, parameters of the measured-signal display, readout, and a control instrument for direct communication. The whole program is developed in C# in Microsoft Visual Studio 2008. Keywords— Bluetooth, visualization, A/D converter, C#.
I. INTRODUCTION
In practice, it is often necessary to measure various signals in one place and to process or visualize the data in another place. By a quite simple abstraction, intelligent sensors can be placed on the body of a moving person to visualize the motion or position of each limb. Options for implementing the data link between the sensors and the data processing are plentiful, some more suitable than others. Technologies for data transmission divide into two key groups: wired and wireless connections. Both have their advantages and disadvantages; in this example a wired connection would be very difficult to execute given the required mobility of the measuring chain, and moreover, interference increases with the length of the wire connections. In such situations, wireless connections are preferred. These days, Wi-Fi, ZigBee and Bluetooth are widely used as data-connection technologies. ZigBee is a wireless communication technology subject to the standard IEEE 802.15.4. It is designed for connecting low-powered devices in a PAN network [1]. ZigBee is intended as a simple and flexible technology for creating even larger wireless networks over short distances where no large volumes of data need to be transmitted.
Bluetooth communication technology is defined by IEEE 802.15.1 for wireless personal area networks (PAN) with a short range [2]. Its largest use is in implementing Bluetooth connections between fixed and mobile end devices, especially since it features very low power consumption, low cost and, equally important, simple configuration. However, this does not preclude using Bluetooth for networking. The standard supports connection of up to eight devices, of which only one can be the master (controlling the communication); such a network is called a piconet [3]. The subject of this article, as the name implies, is the visualization of data measured by a wireless module, in this case the Bluetooth module BluesenseAD. BluesenseAD is an OEM Bluetooth module from the Corscience company, designed for easy integration of a Bluetooth link into measurement and control chains. The module is equipped with serial-line emulation and an eight-channel, twelve-bit A/D converter. Thanks to a special antenna module it has a range of up to 25 m at very low current consumption, so it is also suitable for battery-powered applications. In a BluesenseAD network it can operate as a master or slave device [4]. The communication protocol used by BluesenseAD modules is designed for easy implementation in microprocessors, so it is very economical with memory and CPU time. The communication protocol is subject to the standard ISO 13239. The program which allows an elegant way to access the configuration options of Bluetooth modules and to receive, decode, process and display data can be developed in many programming languages; given modern trends, Java, VB, C++ or C# are the obvious candidates. The aim of the visualization part of the utility program is not only bare plotting of a graph over time, but also the possibility of adjusting the display depending on the nature of the measured data.
The point is that displaying an ECG requires a different visualization mode than displaying a temperature trend. For the user's comfort it is therefore necessary to implement program tools that allow efficient work with the measured data in all available display modes. This essentially means tools for setting special features, readout, and measured-data analysis.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 363–366, 2011. www.springerlink.com
II. CURRENT STATE
In the previous chapter, a few words were said about the aims imposed on the application, which shall facilitate communication with the wireless module as well as visualize the measured data. The application can be split into two functional units that can operate independently. The first functional unit is the communication part, which provides setting of the parameters of the Bluetooth module and the serial port, diagnostics of the communication, and management of the serial-line protocol. Connecting the Bluetooth module BluesenseAD requires a few basic steps first. The computer that is to receive data from the BluesenseAD module must also be equipped with a Bluetooth device. In the case of desktop computers a Bluetooth card or USB dongle is needed; for laptops (netbooks) the situation is much easier because they already contain Bluetooth as standard. Bluetooth driver installation and configuration follows; the driver also makes a virtual serial port accessible, which provides the communication. After these steps, pairing between the devices takes place and BluesenseAD is connected to the computer. This procedure is required the first time each device is connected; after its successful completion the application can communicate with it.

A. Bluetooth Module
Part of the first group is the management of the serial-line communication protocol. A standardized communication protocol is a precondition for stable and resilient communication between devices. Communication between BluesenseAD and the computer is executed over a serial line in both directions, that is, in duplex. This type of transmission is commonly referred to as asynchronous, but the more accurate name is arrhythmic, because some synchronization does exist between the devices in the form of special marks at the beginning and end of the frame.
[6] For the detection of communication errors a cyclic redundancy check (CRC) is used, which in some cases even allows a transmission error to be corrected [7]. The communication part of the application must correctly assemble these frames and send them via Bluetooth, then receive the device's response, decode the frame correctly, assess whether a communication error occurred, and correct the error according to the CRC or otherwise properly evaluate it. The second functional part is the visualization part of the application; it makes accessible the setting of communication parameters, the display characteristics, the features of the graphical environment, and the possibility of help for all
components; all of that should be intuitive so that an inexperienced user is able to use all the basic possibilities of the application without problems. The requirement is that the visualization is performed in two separate windows. One of them should include all elements relating to the communication settings and the parameters of the BluesenseAD module, together with access to the communication protocol in the form of direct entry of whole frames in text form, and a record of the communication. In the second window the measured-data visualization takes place. The main component for data visualization is a chart showing the waveforms of the measured signals. For optimal viewing of the waveforms it is desirable that the graph allows setting the scales of the horizontal and vertical axes, changing the way signals are displayed, and many other features. All of that should be made available through an intuitive form so that learning to work with the graphs does not require long study. Of course, it is essential that the application also includes tools linking the visualization and communication parts, which are more or less based only on the correct transmission of data between the two regions.
The only comprehensive solution for visualizing data measured directly by the wireless BluesenseAD modules is supplied by the Corscience company itself, named BlueConsole. This application provides detailed but fully manual parameter setting of the BluesenseAD module and very basic plotting of the measured data. To assess other available solutions it is necessary to divide the application again into two functional units and consider them separately, because no existing solution covers the whole problem. As for the communication unit, no other solution is available, on the grounds that the communication involves the assembly and decoding of the transmitted frames, whose structure is unique to BluesenseAD modules. The visualization, on the other hand, has several available solutions. One program for data visualization is the NIBP_panel application, which specifically serves to visualize blood pressure; its author is Martin Augustýnek and it is created in Matlab, so it can easily be adapted to other purposes. Another option is to use professional visualization tools such as Control Web or LabView; although not an entirely specific and definitive solution, for pure data visualization this is one of the easiest routes. These partial solutions for visualizing data measured by the wireless BluesenseAD module are unsatisfactory because they do not contain the communication part. The better option is the BlueConsole software, which is directly
designed specifically for the measurement modules and provides a fairly clear way to set up a wireless module. However, it is unsatisfactory in terms of data visualization in the chart. It does not permit setting the scales as needed, so the depicted signal appears confused, and it has no readout instrument, which also makes it difficult to work with the chart. For these reasons, it is necessary to develop a completely new application that better suits the imposed requirements. This mainly means ensuring that the application is able to fully communicate with the BluesenseAD module and then visualize the measured data.
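The frame handling that the communication part must implement (assembly, CRC check, decoding, as described in section II.A) can be sketched as follows. This is a hedged illustration, not the actual BluesenseAD implementation: the flag bytes 0xFC/0xFD and the leading packet-number byte follow the protocol description in this paper, while the exact CRC-16 variant (CCITT polynomial 0x1021, common on ISO 13239/HDLC links) and the absence of byte-stuffing are assumptions:

```python
# Hypothetical frame assembly/verification for a BluesenseAD-style link.
START_FLAG, END_FLAG = 0xFC, 0xFD

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (polynomial 0x1021); the real variant is assumed."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_frame(packet_no: int, payload: bytes) -> bytes:
    """Assemble: start flag, packet number, payload, CRC-16, end flag."""
    body = bytes([packet_no]) + payload
    crc = crc16_ccitt(body)
    return bytes([START_FLAG]) + body + crc.to_bytes(2, "big") + bytes([END_FLAG])

def check_frame(frame: bytes) -> bytes:
    """Return the payload if flags and CRC are valid, else raise ValueError."""
    if frame[0] != START_FLAG or frame[-1] != END_FLAG:
        raise ValueError("bad framing")
    body, received = frame[1:-3], int.from_bytes(frame[-3:-1], "big")
    if crc16_ccitt(body) != received:
        raise ValueError("CRC mismatch")
    return body[1:]  # strip the packet number

frame = build_frame(1, b"\x10\x20")
assert check_frame(frame) == b"\x10\x20"
```

A real implementation would additionally escape any 0xFC/0xFD bytes occurring inside the body, as flag bytes may appear nowhere else in the frame.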
III. NEW SOLUTION

In accordance with the previous chapter, the application development can proceed. The application is separated into two functional units which work independently; their cooperation is ensured by proper transmission of data between them. The core of the application is Bluesense Control, which manages the communication between the application and the wireless Bluetooth module BluesenseAD and also transfers the data to the visualization. The relationship between the parts of the application and the measurement module is shown in Figure 1.

Fig. 1 Basic Schema

Bluesense Control is the essential and most important part of the application. It manages communication with the BluesenseAD module over a virtual serial port and transfers the data for visualization. A simplified block diagram of this part is shown in Figure 2 (blocks: Communication Initialization, Frames Management, COM Port Control, Communication Records in Text Form, Data Flow to Visualisation). The communication initialization block makes the serial line parameters accessible and verifies their settings. The frames management block performs the compilation, decoding and evaluation of data frames. The COM port control block provides direct hardware access, i.e. sending frames in accordance with the communication protocol and reading incoming frames. The text record of communication takes two forms: the first is a listing of the communication directly in the log window of the application; the second is a record written into a text file. An integral part is the transfer of data to the visualization, along with receiving the control signals from the visualization.

Fig. 2 Bluesense Control Block Schema

The Scope part handles the visualization itself (blocks: Graph Control, Graph Painting, Data Processing, Graphical Output, Print, Data Flow to Communication).

Fig. 3 Scope Block Schema

The BluesenseAD communication protocol follows the principles of the Point-to-Point Protocol (PPP). A frame starts with the start flag symbol (0xFC) and ends with the end flag symbol (0xFD); these symbols can appear nowhere else in the frame. Immediately after the start flag a packet number follows.
O. Krejcar, D. Janckulik, and M. Kelnar
The packet number is 1 byte in size, so its value can range from 0 to 255. If the number overflows, numbering begins again from zero; incoming and outgoing packets are numbered independently. Thanks to the packet numbering it is possible to detect a missing frame and request its retransmission. After the packet number follows a 2-byte command identifier (Command), with the less significant byte transmitted first. The following data field (Payload) can be of various sizes; its content depends on the command. The checksum (Checksum) is 2 bytes long and is used to secure the communication, or even to correct errors; it follows the CRC16-CCITT standard. Data from the Bluesense Control part are transmitted to the Scope already decoded, as numbers of the integer data type. Among the control commands which Bluesense Control transmits are information about the analog-to-digital converter channels, the sampling frequency and the range of the transducer. The Scope in turn forwards requests to start or stop the measurement.
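The frame layout described above can be sketched in code. This is a minimal Python sketch under stated assumptions: the CRC initial value (0xFFFF), the little-endian placement of the checksum inside the frame, and the omission of whatever byte-stuffing keeps 0xFC/0xFD out of the frame body are all guesses, since the paper does not specify them.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC16-CCITT (polynomial 0x1021). The 0xFFFF initial
    value is an assumption; the paper only names the standard."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

START_FLAG, END_FLAG = 0xFC, 0xFD

def build_frame(packet_no: int, command: int, payload: bytes) -> bytes:
    """Assemble a frame: start flag, 1-byte packet number (wraps at 255),
    2-byte command (least significant byte first), payload, 2-byte
    CRC16-CCITT checksum, end flag. Byte-stuffing is omitted here."""
    body = bytes([packet_no & 0xFF]) + command.to_bytes(2, "little") + payload
    crc = crc16_ccitt(body)
    return bytes([START_FLAG]) + body + crc.to_bytes(2, "little") + bytes([END_FLAG])

def parse_frame(frame: bytes):
    """Validate flags and checksum; return (packet_no, command, payload)."""
    if frame[0] != START_FLAG or frame[-1] != END_FLAG:
        raise ValueError("missing start/end flag")
    body, crc = frame[1:-3], int.from_bytes(frame[-3:-1], "little")
    if crc16_ccitt(body) != crc:
        raise ValueError("checksum mismatch")
    return body[0], int.from_bytes(body[1:3], "little"), body[3:]
```

A round trip `parse_frame(build_frame(...))` recovers the packet number, command and payload, and a corrupted byte is caught by the checksum comparison.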
IV. CONCLUSIONS

The development of this application yielded a satisfactory solution to the defined problem. The application's functionality was verified in a practical case: with a signal generator connected to the A/D converter input, the application communicated reliably with the module and the graphs were portrayed correctly. The application also meets all the specified requirements. A very strong and pleasant change from the original application is the ability to change the chart axis scales in an elegant way. Thanks to the universal concept of the application, this solution is used mainly in laboratories for long-distance wireless measurement with Bluetooth modules.
ACKNOWLEDGMENT This research was supported in part by (1) "Centre for Applied Cybernetics", Ministry of Education of the Czech Republic under project 1M0567, (2) "SMEW – Smart Environments at Workplaces", Grant Agency of the Czech Republic, GACR P403/10/1310, (3) "SCADA system for control and monitoring of processes in Real Time", Technology Agency of the Czech Republic, TACR, TA01010632 and (4) "User Adaptive Systems", VSB Technical University of Ostrava under project SP/2011/22.
REFERENCES

1. Corscience OEM modules - Bluesense with Bluetooth at http://www.corscience.de/en/medical-engineering/oemodmsolutions/oem-modules/bluesense-ad-bluetooth.html
2. Zhou Y, Medidi M (2007) An Energy-aware Multi-hop Tree Scatternet for Bluetooth Networks. IEEE International Conference on Communications, 2007, Glasgow, Scotland. IEEE, pp. 5564-5569. ISBN 978-1-4244-0352-3, ISSN 1550-3607
3. Walma M (2007) Pipelined cyclic redundancy check (CRC) calculation. 16th International Conference on Computer Communications and Networks, 2007, Honolulu, Hawaii. IEEE, pp. 365-370. ISBN 978-1-4244-1250-1, ISSN 1095-2055
4. ZedGraph at zedgraph.org/wiki/index.php?title=Main_Page
5. Krejcar O (2009) Problem Solving of Low Data Throughput on Mobile Devices by Artefacts Prebuffering. EURASIP Journal on Wireless Communications and Networking, Article ID 802523, 8 pages. Hindawi, New York, USA. DOI 10.1155/2009/802523
6. Krejcar O, Janckulik D, Motalova L, Musil K, Penhaker M (2010) Real Time ECG Measurement and Visualization on Mobile Monitoring Stations of Biotelemetric System. The 2nd Asian Conference on Intelligent Information and Database Systems, ACIIDS 2010, Hue City, Vietnam. Advances in Intelligent Information and Database Systems, Studies in Computational Intelligence, Vol. 283, pp. 67-78. N.T. Nguyen et al. (Eds.). Springer-Verlag, Berlin, Heidelberg. DOI 10.1007/978-3-642-12090-9_6
7. Krejcar O, Janckulik D, Motalova L (2009) Complex Biomedical System with Biotelemetric Monitoring of Life Functions. Proceedings of the IEEE Eurocon 2009, May 18-23, 2009, St. Petersburg, Russia, pp. 138-141. DOI 10.1109/EURCON.2009.5167618
8. Prauzek M, Penhaker M, Bernabucci I, Conforto S (2009) ECG precordial leads reconstruction. Abstract Book of the 9th International Conference on Information Technology and Applications in Biomedicine. University of Cyprus, Larnaca, pp. 71
9. Tucnik P (2010) Optimization of Automated Trading System's Interaction with Market Environment. 9th International Conference on Business Informatics Research, Univ. Rostock, Rostock, Germany. Lecture Notes in Business Information Processing, Vol. 64, pp. 55-61
10. Mikulecky P (2009) Remarks on Ubiquitous Intelligent Supportive Spaces. 15th American Conference on Applied Mathematics / International Conference on Computational and Information Science, Univ. Houston, Houston, TX, pp. 523-528. ISBN 978-960-474-071-0
11. Penhaker M, Cerny M (2008) The Circadian Cycle Monitoring. 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 1-3, 2008, Hong Kong, China, pp. 41-43
12. Cerny M, Penhaker M (2008) Biotelemetry. 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Jun 16-20, 2008, Riga, Latvia, Vol. 20, pp. 405-408. ISSN 1680-0737, ISBN 978-3-540-69366-6

Author: Ondrej Krejcar
Institute: VSB TU Ostrava, FEECS
Street: 17. Listopadu 15
City: Ostrava Poruba
Country: Czech Republic
Email: [email protected]
Voltammetric Approach for In-vivo Detecting Dopamine Level of Rat’s Brain G.C. Chen1, H.Z. Han1, T.C. Tsai1, C.C. Cheng2, and J.J. Jason Chen1 1
Institute of Biomedical Engineering, National Cheng-Kung University, Tainan, Taiwan 2 National Cheng-Kung University Hospital, Tainan, Taiwan
Abstract— Parkinson’s disease (PD) is caused by insufficient release of dopamine (DA). Knowing the DA level of an animal’s brain could provide direct evidence for evaluating novel treatments for PD, especially in acute or long-term studies. In this study, we used platinum electrodes surface-modified with gold nanoparticles to measure DA, using cyclic voltammetry and amperometry for in-vitro DA sensing. For the in-vivo study, deep brain stimulation (DBS) was applied to increase DA release as a validation step for in-vivo DA detection by amperometry. For future application of in-vivo DA sensing in freely moving animals, miniaturization of the system so that it can be incorporated into a radio frequency transmitter is essential. Our ultimate goal is to apply the wireless miniature DA sensing unit for in-vivo DA recording as direct evidence and an evaluation tool for novel treatments, such as repetitive transcranial magnetic stimulation (rTMS), in PD animal studies. Keywords— Parkinson’s disease, Dopamine, Amperometry, Cyclic voltammetry, Radio frequency (RF).
I. INTRODUCTION Parkinson’s disease (PD) is caused by the loss of the neurotransmitter dopamine (DA), which is related to motion control. Dopamine is one of the important neurotransmitters in the biological system, linked to psychosis and locomotion modulation [1, 2]. DA is also electrochemically active and is easily oxidized at a suitable potential. This electrochemical property has enabled the use of electrochemical techniques for DA detection over the past decades [3]. Voltammetry and amperometry are the common methodologies suitable for real-time detection of electrochemically active compounds [4]. For wireless transmission of the DA level, an electrochemical sensing system based on direct-current amperometry, transmitted unidirectionally via an infrared (IR) link, was developed [5]. However, IR transmission is easily interrupted if the light path between TX and RX is blocked. An improved one-way telemetric system, called the optoelectronic transmission system (OPT), utilized 10 photodiodes as RX to ameliorate such interruptions [6]. In addition to IR transmission, a Bluetooth-based wireless
transmission system, which provides bi-directional communication at a faster transmission rate, was developed [7]. However, large size and heavy weight have been the tradeoff of Bluetooth wireless transmission for an experimental animal such as the rat. In this study, we integrate a micro-stimulator for inducing dopamine release and an amperometry recording circuit with a wireless transmission module, for detecting the in-vivo DA level of freely moving rats.
II. MATERIALS AND METHODS A. Sensing of Dopamine The three-electrode system used for amperometric and voltammetric DA sensing comprised the working electrode (WE), a platinum wire (Φ = 25 μm) coated with gold nanoparticles (NPs), as depicted in Fig. 1. The reference electrode (RE) was an Ag/AgCl electrode and the counter electrode (CE) was a stainless-steel or platinum wire.
Fig. 1 Processes of Au-NPs-MPA microelectrode preparation To oxidize the DA, a constant voltage of 0.28 V was provided by a DAC controlled by a PIC18F452 microprocessor. The three-electrode voltammetric circuit, which has been described in detail elsewhere, can thus be fabricated (Fig. 2) [8]. Finally, the signal is transmitted to a DAQ and recorded by LabVIEW 8.0.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 367–370, 2011. www.springerlink.com
Fig. 2 Circuit diagram of the three-electrode voltammetric circuit

B. Micro-electrical Stimulator

The DBS is performed by a micro-electrical stimulator, which generates a constant-current stimulation wave with a pulse width of 1 ms. The stimulation parameters, including frequency, intensity and on-off cycles, can be selected by DIP switches. For the experiment, two stimulation intensities (100 μA and 300 μA) and three stimulation frequencies (20 Hz, 50 Hz and 100 Hz) can be chosen. The constant stimulation time was set to 1 sec. An AD8801 was controlled by the microcontroller to generate the pulse (1.5~3.5 V) and a DC voltage (2.5 V), yielding a -1~1 V biphasic pulse (Fig. 3).

Fig. 3 The circuit diagram of the constant current stimulator. Because the AD8801 has only a positive output, the stimulus pulse is made biphasic against the DC offset

C. Evoked Dopamine Release with Electrical Stimulation

During the electrical stimulation experiment, the WE and stimulation electrode were implanted slowly into the MFB. After 20 min of restoration time for the physical damage caused by electrode insertion, the electrode was adjusted carefully for the recording process. At the beginning of the trial, high-density stimulation of 1 sec (100 Hz, 1 sec, 300 μA) was applied repeatedly to find the maximally responsive location in the MFB, from a depth of 7.0 mm to 9.4 mm beneath the dura. Each time the depth of the stimulation site was changed, stimulation was applied again 6~10 times. A 3 min baseline was recorded before and after the stimulation. The chosen site was then stimulated at 20 Hz, 50 Hz and 100 Hz at a constant current of 300 μA. Finally, stimulation intensities of 100 μA and 300 μA, with constant stimulation applied 100 times, were compared with respect to the evoked DA levels. The overall experimental design is summarized in Figure 4.

Fig. 4 The flow chart of the whole experimental process

D. Wireless DA Sensing System

The three-electrode sensing system is an improved and miniaturized circuit. A wireless RF transmitter module has been modified to accommodate our DA sensing and stimulation modules. The wireless RF module contains two main blocks: the transmission station (TX) and the receiving station (RX). TX has 5 pre-amplifiers with a gain of 10, a band-pass filter at 0.8 to 7 kHz, and a battery with a regulator. The input range of TX is 4 mVp-p and the sampling rate is 50 kHz. The transmission range is 3 meters. The overall weight of TX is only 2.7 grams, which is quite suitable for small-animal studies. RX amplifies the signal 60 times, and the center transmission frequency is 3.2 GHz. The input range of RX is -0.5 to 0.5 V.
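The offset scheme noted under Fig. 3 (a positive-only DAC output of 1.5~3.5 V centered on a 2.5 V DC reference) reduces to a simple mapping; the sketch below illustrates it, with the function name and range check being illustrative only.

```python
def dac_to_stim_voltage(v_dac: float) -> float:
    """Map the AD8801 output (1.5-3.5 V, positive only) to the biphasic
    stimulus voltage by subtracting the 2.5 V DC reference, giving the
    -1 to +1 V pulse described in the text."""
    V_REF = 2.5  # DC reference voltage from the paper
    if not 1.5 <= v_dac <= 3.5:
        raise ValueError("outside the 1.5-3.5 V DAC pulse range")
    return v_dac - V_REF
```

Driving the DAC to 3.5 V gives the +1 V phase and driving it to 1.5 V gives the -1 V phase, so a biphasic pulse is produced from a single-polarity output.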
III. RESULTS AND DISCUSSION The constant-current micro-stimulator was verified by varying the load. The relationship between the voltage output and the load is shown in Fig. 5. It can be observed that constant-current stimulation is maintained for loads below 15 kΩ.
In-vivo DA sensing was verified using drug administration as well as electrical stimulation. The responsive current of five stimulation trials is shown in Figure 6(a). The DA level rose abruptly following the high-intensity stimulation, decreased gradually over 30 sec and returned to baseline. For different stimulation frequencies, Figure 7 shows the in-vivo recording of DA during 50 and 100 Hz stimulation.
Fig. 7 Effects of different electrical stimulation frequencies of (a) 100 Hz and (b) 50 Hz; the stimulation intensity was fixed at 300 μA in the MFB

Fig. 5 (a) The biphasic square pulse of the stimulating wave (b) The high-intensity mode of 100 Hz with 300 μA intensity for 1 sec duration (c) Testing of the constant-current stimulator’s voltage output under load impedances from 1 kΩ to 25 kΩ

IV. DISCUSSION AND CONCLUSION
In the striatum, which contains the highest density of dopaminergic terminals, there are diverse afferent, efferent and internal neurons, such as 5-HT, glutamatergic (Glu) and acetylcholine (ACh) neurons [9]. In addition, the extracellular DA level is generally below 100 nM, while the interferents are present at relatively high concentrations, e.g. DOPAC at about 20 μM and ascorbic acid at about 200 μM [10]. The electrochemical approach to DA detection not only provides the high temporal resolution of amperometry but also uses the Au-NPs-MPA electrode, whose negatively charged groups repel the interferents to achieve better selectivity. DBS is an appropriate method for increasing the DA level with an immediate response and without side effects such as loss of efficacy; DBS is also a functional neurosurgical approach. This study presents the preliminary results of a pilot study that will be further developed into a platform for evaluating the relationship between behavior and treatments, such as rTMS, in freely moving PD rats.
Fig. 6 (a) The amperometric response to repetitive high-intensity stimulation (100 Hz, 300 μA) at the MFB (b) an experiment showing no response to low-intensity stimulation (100 Hz, 100 μA)

REFERENCES
1. J. R. Cooper, et al., The Biochemical Basis of Neuropharmacology, 8th ed. Oxford; New York: Oxford University Press, 2003
2. Michael and L. M. Borland, Electrochemical Methods for Neuroscience, 1st ed. Boca Raton: CRC Press, 2007
3. M. R. Li, et al., "Electrochemical quartz crystal impedance study on immobilization of glucose oxidase in a polymer grown from dopamine oxidation at an Au electrode for glucose sensing," Electrochimica Acta, vol. 51, pp. 5478-5486, Jul 28 2006
4. P. Palij and J. A. Stamford, "Real-Time Monitoring of Endogenous Noradrenaline Release in Rat-Brain Slices Using Fast Cyclic Voltammetry. 1. Characterization of Evoked Noradrenaline Efflux and Uptake from Nerve-Terminals in the Bed Nucleus of Stria Terminalis, Pars Ventralis," Brain Research, vol. 587, pp. 137-146, Jul 31 1992
5. G. Voskerician, et al., "In vivo inflammatory and wound healing effects of gold electrode voltammetry for MEMS micro-reservoir drug delivery device," IEEE Transactions on Biomedical Engineering, vol. 51, pp. 627-635, Apr 2004
6. F. Crespi, "Wireless in vivo voltammetric measurements of neurotransmitters in freely behaving rats," Biosensors & Bioelectronics, vol. 25, pp. 2425-2430, Jul 15 2010
7. P. A. Garris, et al., "Wireless transmission of fast-scan cyclic voltammetry at a carbon-fiber microelectrode: proof of principle," J Neurosci Methods, vol. 140, pp. 103-115, Dec 30 2004
8. Cheng, "Design of an Intelligent Voltammetric System and Fabrication Optimisation of Its Carbon Fiber Electrode," Ph.D. thesis, NCKU, Tainan, Taiwan, 2003
9. H. F. Urbanski, "Photoperiodic modulation of luteinizing hormone secretion in orchidectomized Syrian hamsters and the influence of excitatory amino acids," Endocrinology, vol. 131, pp. 1665-1669, Oct 1992
10. F. Gonon, et al., "Voltammetry in the Striatum of Chronic Freely Moving Rats - Detection of Catechols and Ascorbic-Acid," Brain Research, vol. 223, pp. 69-80, 1981
Author: Guan-Cheng Chen
Institute: Institute of Biomedical Engineering, National Cheng Kung University
Street: No. 1 Daxue Rd., East Dist.
City: Tainan City
Country: Taiwan (R.O.C.)
Email: [email protected]
Wearable ECG Recorder with Acceleration Sensors for Measuring Daily Stress Y. Okada1,2, T.Y. Yoto1, T.A. Suzuki1, S. Sakuragawa1, H. Mineta3, and T. Sugiura4 1
Industrial Research Institute of Shizuoka Prefecture, Shizuoka, Japan Graduate School of Science and Technology, Shizuoka University, Hamamatsu, Japan 3 Department of Otolaryngology, Hamamatsu University School of Medicine, Hamamatsu, Japan 4 Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan 2
Abstract— A small, light-weight wearable electrocardiograph (ECG) with three accelerometers (x-, y- and z-axis) was developed for prolonged monitoring of the autonomic nervous system in daily life. It consists of an amplifier, a band-pass filter, a microcomputer with an AD converter, a triaxial accelerometer, and a memory card. Four parameters can be sampled at 1 kHz (10 bits) for more than 24 hours, at maximum 27 hours, with a default battery and a memory card (1 GB). The availability of the system was tested on three subjects for three days by replacing the battery and the memory card every 24 hours in each environment. Both short-term and circadian rhythms of the autonomic nervous system were clearly observed. Changes of the autonomic nervous system due to body movement (i.e. walking or turning over) were observed by checking the acceleration data. The feasibility of application in clinical practice is also discussed. Keywords— autonomic nervous system, stress, heart rate variability, wearable monitoring, acceleration.
I. INTRODUCTION In modern urban Asia, some findings have related high socio-ecological stress to changes in industrial structure and lifestyle [1]. In Japan, according to the government's Comprehensive Survey of Living Conditions (2005), 48.2 % of those surveyed had trouble and stress caused by everyday life or work. For such chronic stresses, it is important to monitor the level of stress because it is a cause of lifestyle-related and/or mental diseases. There are two non-invasive approaches to measuring and evaluating chronic stress. One is the measurement of cortisol, a stress metabolite, as a physiological index (biomarker) [2]. Stress metabolites are secreted following stress stimulation; they pass through the endocrine system and appear in the blood and saliva. Although such biomarkers can easily be measured with a simple instrument, continuous sampling of them is not realistic. In addition, a time delay of several minutes to several hours occurs after stress stimulation, and the time resolution of the stress response is poor because it appears as a value integrated over a long time before and after the stimulation [3]. The other
non-invasive approach is the measurement of autonomic nervous system (ANS) activity, which is observed through changes in blood pressure, heartbeat, temperature and so on [4]. Among these, heart rate variability (HRV) analysis is the method most frequently used [5]. Many studies have already reported the effectiveness of HRV using ECG recording instruments, clinical Holter monitors and multi-telemeters [6]. However, when HRV is analyzed, the following issues must be considered: 1) Most Holter monitors have sampling frequencies ≤ 250 Hz (sampling interval ≥ 4 ms), so the timing accuracy of detection is not very good [7]. 2) With a normal multi-telemetry system, the measurement circumstances and the movement of the subject are limited, and the duration of operation is generally short. For accurate analysis of ANS activity (stress changes) in various situations, basic behavioral information such as walking, sitting or lying would be helpful. With these points in mind, we focused on the development of a small, light-weight ECG recorder with three acceleration sensors, which monitors and saves data for more than 24 hours with sufficient time-frequency resolution. This paper describes the experimental results of the system and demonstrates its validity for assessing the long-term stress response at work and during household duties in everyday life. A feasible clinical application is also discussed.
II. SYSTEM OF MONITORING AUTONOMIC NERVOUS SYSTEM USING HRV
The system developed in this research consists of a wearable electrocardiograph with three acceleration sensors and HRV analysis software for long-term ECG data. A. ECG Recorder Fig. 1 shows the ECG recorder with the case open (left) and closed (right). The size is W44 x D17 x H58 mm and the weight is 45 g including the battery and memory card. The body is foldable and made of plastic. The memory card and battery can easily be replaced.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 371–374, 2011. www.springerlink.com
Fig. 1 ECG recorder

Fig. 2 Block diagram of ECG recorder

Fig. 2 shows a block diagram of the electrocardiograph. It consists of an ECG amplifier (Burr-Brown INA326 and Texas Instruments OPA2335, 50 dB), a filter (0.1 - 100 Hz), a 3-axis acceleration sensor (STMicroelectronics), a microcomputer (4-channel analog-to-digital converter and memory card controller, ADuC840, Analog Devices Inc.) and a memory card. These are mounted on a printed-circuit board with a four-layer structure that sandwiches the power-source and ground layers. ECG and acceleration data are recorded at a 1 kHz sampling rate for accurate HRV analysis. The acceleration data are used to monitor the subject’s posture and/or body movement simultaneously with the ECG. The sensor has a full scale of ±2 G (gravitation). A mobile-type MMC (multi-media card) is used. The ECG and three acceleration signals are buffered as binary data in the RAM (random access memory) area of the microcomputer. Buffered data are written to the memory card by a single-block write command every 512 bytes, which is the block size of the MMC mobile card. A one gigabyte (GB) MMC mobile card records the four channels (ECG and three acceleration signals) for up to 27 h at a 1 kHz sampling rate. After measurement, the memory card is dismounted and the data are transferred to a personal computer (PC) for signal processing (off-line analysis). There are two switches (ON/OFF, START/STOP) and two light-emitting diodes (LEDs). The green LED indicates detection of the ECG and the red LED indicates the operating mode. The green LED flashes when each R wave is detected. Recorded data on the memory card are converted from binary to text data (CSV format).

B. HRV and Acceleration Analysis Software

i) HRV Analysis

For analysis of the autonomic nervous system, the ECG is processed for HRV spectrum analysis. R waves are detected automatically from the long-term ECG data and RR-interval time-series data are built. The RR-interval data are re-sampled at 4 Hz after spline interpolation, and the heart rate variability spectrum is calculated with a serial FFT using a Hanning window. The conventional LF/HF power ratio is used as the sympathetic nervous index. The frequency ranges are as follows; LF: 0.04 - 0.15 Hz, HF: 0.15 - 0.40 Hz [6]. The length of data points for the FFT can be set to any integer in this algorithm.

ii) Trajectory Monitoring

The acceleration signals are integrated twice to obtain the trajectories of subjects. A 3-dimensional (3-D) view and three 2-dimensional (2-D) views (Left - Right, Top - Bottom, Forward - Backward planes) are presented. The 3-D viewing location can be set easily with the mouse. This function is intended for monitoring the movements of vertigo patients in medical applications.
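The HRV pipeline above (RR resampling at 4 Hz, Hanning window, FFT, LF and HF band powers) can be sketched as follows. Note two assumptions: the paper uses spline interpolation, whereas this sketch substitutes linear interpolation (np.interp) to stay dependency-light, and the function name is illustrative.

```python
import numpy as np

def lf_hf_ratio(beat_times_s, rr_s, fs=4.0):
    """LF/HF sympathetic index following the pipeline in the text:
    resample RR intervals at fs = 4 Hz, apply a Hanning window, FFT,
    then sum power in LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz)."""
    t = np.arange(beat_times_s[0], beat_times_s[-1], 1.0 / fs)
    rr = np.interp(t, beat_times_s, rr_s)       # linear stand-in for spline
    rr = rr - rr.mean()                         # remove the DC component
    spec = np.abs(np.fft.rfft(rr * np.hanning(len(rr)))) ** 2
    freqs = np.fft.rfftfreq(len(rr), 1.0 / fs)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

As a sanity check, an RR series dominated by 0.1 Hz modulation yields LF/HF above 1, while one dominated by 0.3 Hz modulation yields LF/HF below 1.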
III. VERIFICATION EXPERIMENT To monitor autonomic nervous system activity in daily life using this system, verification experiments were conducted. The subjects were 3 healthy male college students (23.7 ± 0.6 yr) in an office-work simulation experiment. Their ECG and 3-axis acceleration data were recorded for 3 days, except while bathing. They were given PC-related tasks (data input) from 9:00 to 17:00 and took a 15 min rest (including 3 min with eyes closed) every hour. Memory cards and batteries were replaced every 24 h. The ECG recorder was fixed with tape to the chest (Fig. 3), and the ECG electrodes were placed on the chest wall according to the NASA induction method (Fig. 3). For analysis of the autonomic nervous system, the ECG data were processed for R-wave detection and HRV spectrum analysis, and the LF/HF power ratio was calculated. For trajectory monitoring of the subjects, the 3-axis acceleration data were integrated. The aim and details of the experiment were explained to the subjects, and written informed consent was obtained
from all subjects. This study was approved by the ethics committees of University of Shizuoka and Hamamatsu University School of Medicine.
Fig. 3 Electrode configuration (NASA induction)

IV. RESULTS

One of the results of the long-term recording is shown in Figs. 4 - 7. Fig. 4 shows the time-series data of RRI, sympathetic nervous activity (LF/HF) and the 3-axis acceleration data Gx (Left - Right), Gy (Top - Bottom), Gz (Forward - Backward) as a 24-hour trend. The conditions of the continuous FFT are as follows - data length: 128 s (Hanning window), overlap: 68 s (shift of 60 s). LF/HF was averaged every 15 minutes. The figure shows a slow change of sympathetic nervous activity with a clear circadian rhythm, lowest in the early morning and highest in the late afternoon. The acceleration data show body movement and posture changes. The LF/HF data show transient elevations of sympathetic nervous activity which, judging from the acceleration data, occur with body movements such as walking or rolling over during sleep.

Fig. 4 RRI, LF/HF and 3-axis acceleration data Gx (Left - Right), Gy (Top - Bottom), Gz (Forward - Backward) in daily life; LF/HF averaged every 15 minutes

Fig. 5 During PC-related task

Fig. 6 Three-axis acceleration data on walking

Fig. 7 Three-dimensional (3-D) and 2-D trajectories of body movements
Fig. 5 shows data for 3 h during PC-related tasks (data input) in the morning, with the same continuous-FFT conditions - data length: 128 s (Hanning window), overlap: 68 s (shift of 60 s). The LF/HF data clearly show rising tension during the task and easing tension during rest, except while walking. Fig. 6 shows data on walking. Fig. 7 shows three-dimensional (3-D) and 2-D trajectories of body movements; the data length is 20 s of walking. After low-pass filtering (cutoff 1.2 Hz), the trajectory is obtained by double integration.
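The double-integration step can be sketched for one axis as follows. This is a sketch of the idea rather than the authors' processing: the paper's 1.2 Hz low-pass filter is replaced here by simple mean subtraction, a crude drift control that is only adequate for short, roughly periodic motion.

```python
import numpy as np

def displacement(acc_g, fs=1000.0):
    """Double integration of one acceleration axis, as used for the
    2-D/3-D trajectories: convert G to m/s^2, integrate to velocity,
    re-zero the drift, and integrate again to displacement."""
    a = (acc_g - np.mean(acc_g)) * 9.81   # m/s^2, remove constant bias
    v = np.cumsum(a) / fs                 # first integral: velocity
    v -= v.mean()                         # crude drift removal
    return np.cumsum(v) / fs              # second integral: displacement
```

For a pure sinusoidal acceleration A·sin(ωt), the recovered displacement amplitude approaches A/ω², which gives a quick check of the integration.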
V. DISCUSSION It is well known that humans face various psychological, chemical and physical stresses in social life, and that these cause diseases such as insomnia, depression or autonomic imbalance [8]. Various methods have been advanced to assess these stresses in daily life [9]. Measurement of autonomic nervous activity by HRV spectrum analysis is one of the widely used methods, and it can assess chronic as well as acute stress, since heart rate can be monitored continuously, non-invasively and under unconstrained conditions. However, a normal Holter electrocardiograph generally has a sampling frequency < 125 Hz [7]; for accurate detection of R waves, and thus reliable stress evaluation, a sampling frequency > 1 kHz is desirable [8]. A multi-telemeter system requires a PC or a recorder for data recording, and the measuring system (sending and receiving) is generally large, so the subject's field of activity is limited to the range where data communication is possible. The system developed in this study is lighter and smaller than a normal Holter electrocardiograph or multi-telemeter equipment. In addition, it has three acceleration sensors (left-right, top-bottom, forward-backward) to monitor body movements, which enables us to estimate what the subject was doing at any given time. This is useful information for understanding stress variations over an extended period. Replacement of the battery and the memory card is simple and easy, and the ECG can be measured without time limitation; even year-long monitoring of ANS activity (stress) is possible. The three-axis acceleration data were used to identify the subject's behavior (walking, reclining, lying down, or sitting), so it is possible to separate psychological accentuations of sympathetic nervous activity from those induced by simple posture changes or body movements. Monitoring a vertigo patient's behavior in everyday life is one promising application in the medical field. It would be very helpful, in the understanding and treatment of vertiginous patients, to know the correlation between ANS activity and the timing of a dizzy spell, which might be estimated from postural changes (acceleration signals). Quantification of the 3-D and 2-D trajectories remains a challenge.
VI. CONCLUSIONS A wearable electrocardiograph with three acceleration sensors, together with an analysis program for monitoring ANS changes and body movements over extended periods of time, was developed. The system's usability was demonstrated in a consecutive three-day experiment, and the analyzed data showed long-term changes of ANS activity as well as short-term changes caused by tension or body movement. Clinical applications for monitoring vertigo patients are in progress.
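The behavior identification from three-axis acceleration described in the discussion can be sketched as a tilt-threshold rule on the acceleration vector. This is purely illustrative, not the authors' algorithm: the axis convention, the 0.3 g movement threshold, and the tilt cut-offs are all assumptions.

```python
import math

def classify_posture(ax, ay, az, moving_thresh=0.3):
    """Crude posture/activity guess from an acceleration sample in g.

    ax, ay, az: accelerations along the left-right, forward-backward,
    and top-bottom axes (axis naming is an assumption).
    """
    mag = math.sqrt(ax**2 + ay**2 + az**2)
    if abs(mag - 1.0) > moving_thresh:        # large deviation from 1 g
        return "walking"
    # angle between the top-bottom axis and the gravity vector
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / mag))))
    if tilt < 30:
        return "upright"                      # standing or sitting
    elif tilt < 60:
        return "reclining"
    return "lying down"
```

In practice such a rule would be applied to short-window averages of the signal rather than single samples, since momentary accelerations during movement swamp the gravity component.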
REFERENCES
1. Mercado S et al. (2007) Responding to the Health Vulnerabilities of the Urban Poor in the "New Urban Settings" of Asia, at http://www.who.or.jp/2007/Bellagio.pdf
2. Yamaguchi T, Shioji I, Sugimoto A et al. (2002) Psychological stress increases bilirubin metabolites in human urine. Biochem Biophys Res Commun 293: 517-520
3. Hellhammer DH, Wüst S, Kudielka BM (2009) Salivary cortisol as a biomarker in stress research. Psychoneuroendocrinology 34: 163-171
4. Wilczynska A, De Meester F, Singh Ram B et al. (2010) Heart rate and blood pressure in the context of nutritional and psychological analysis: a case study. Eur J Med Res 15, Suppl 2: 217-213
5. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology (1996) Heart Rate Variability: Standards of Measurement, Physiological Interpretation, and Clinical Use. Circulation 93: 1043-1065
6. Ito H, Nozaki M, Kaji Y et al. (2001) Shift work modifies the circadian patterns of heart rate variability in nurses. Int J Cardiol 79: 231-236
7. Kobayashi H, Ishibashi K, Noguchi H et al. (1999) Heart Rate Variability: An Index for monitoring and analyzing human autonomic activities. Appl Human Sci 18: 53-59
8. Nakata A, Haratani T, Takahashi et al. (2004) Job stress, social support, and prevalence of insomnia in a population of Japanese daytime workers. Soc Sci Med 59: 1719-1730
9. Pickering TG, Devereux RB, James GD et al. (1996) Environmental influences on blood pressure and the role of job strain. J Hypertens Suppl 14: S179-S185
Author: Yoshio Okada
Institute: Industrial Research Institute of Shizuoka Prefecture
Street: Aoi-ku, Makigaya 2078
City: Shizuoka
Country: Japan
Email: [email protected], [email protected]

IFMBE Proceedings Vol. 35
Wireless Sensor Network for Flexible pH Array Sensor
J.C. Chou1,2,3,*, C.C. Chen2, and M.S. Wu3
1 Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
2 Graduate School of Engineering Science and Technology, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
3 Graduate School of Optoelectronics, National Yunlin University of Science and Technology, Douliou, Yunlin, Taiwan, R.O.C.
Abstract— In this study, a wireless sensor network (WSN) for a flexible pH array sensor was developed with National Instruments (NI) Laboratory Virtual Instrument Engineering Workbench (LabVIEW) software. The measurement data were received from a measurement node and sent through the WSN to a gateway connected to a computer. The WSN was successfully combined with the flexible pH array sensor: the sensitivity and linearity of the flexible pH array sensor received through the WSN were 53.39 mV/pH and 0.990, respectively, over the concentration range pH 1 to pH 13 at room temperature (25 °C). The developed WSN can therefore be applied practically to the flexible pH array sensor. Keywords— Wireless sensor network, Flexible array sensor, Radio frequency sputtering, Screen-printing, pH sensitivity.
I. INTRODUCTION Because sensing devices should be small and inexpensive, thick-film technology has become a suitable and promising route to large-scale, cost-effective, fast and highly reproducible production of electrochemical sensors [1, 2]. Manufacturing sensors and biosensors on plastic by means of screen-printing has attracted considerable attention owing to the proliferation of handheld, portable consumer electronics. Plastic substrates offer many attractive advantages: biocompatibility, flexibility, light weight, shock resistance, softness and transparency [3]. Screen-printing is especially recommended as a simple and fast method for mass production of disposable electrochemical sensors [4]. Hence, biosensor development is tending towards low cost, small size and easy fabrication by screen printing [5]. The wireless sensor network (WSN) is an emerging technology with a wide range of potential applications, including environmental monitoring, smart spaces, medical systems, and robotic exploration. Such networks consist of large numbers of distributed nodes that organize themselves into
a multihop wireless network. Each node has one or more sensors, embedded processors, and low-power radios, and is normally battery operated. Typically, these nodes coordinate to perform a common task [6]. Given the promising applications of wireless sensor networks, global chip makers and standards organizations have established WSN technology standards such as IEEE 802.15.4, MiWi, SimpleTI, SMAC, Smart Dust [7], ZigBee, Z-Wave, etc. Applications range from environmental monitoring to personnel and product positioning, industrial control, building and home automation, health care, and other medical fields. In 2010, estimated shipments of wireless sensor network nodes exceeded 100 million, with sales of over two billion U.S. dollars. Accordingly, the present research focused on using the NI wireless sensor network module for a V-T measurement system, with the flexible array sensor applied to detect the response voltage in pH buffer solutions.
II. EXPERIMENTAL A. Fabrication of Flexible pH Array Sensor In this study, the flexible electrodes were fabricated on a polyethylene terephthalate (PET) substrate, using a radio frequency (R.F.) sputtering system to deposit ruthenium dioxide (RuO2) as the sensing membrane on the PET substrate [8, 9]. Afterward, the screen-printing technique was used to coat the conductive layer (silver leading wires) and the insulation layer (epoxy paste) onto the sensing membrane (RuO2 layer). The novel 2×4 flexible array sensor structure was improved from the prior 1×2 structure [10, 11]. The flexible array sensor was produced using a semi-automatic screen printer (Model HJ-55AD3, Taiwan, R.O.C.), and each piece of PET substrate carries three array sensors. First, the PET substrate was cleaned with deionized
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 375–379, 2011. www.springerlink.com
(D.I.) water and ethanol in an ultrasonic oscillator for 10 minutes and baked at 120 °C for 10 minutes. Then, eight RuO2 thin films were deposited on the PET substrate by the radio frequency sputtering system. Eight strips of silver paste were coated on the PET substrate to contact the eight RuO2 thin films and baked at 120 °C for 20 minutes. Finally, the insulating layer of epoxy paste was coated on top of the array sensor and baked at 140 °C for 40 minutes. The flexible array sensor electrode consists of a flexible PET substrate, eight RuO2 sensing films of 2.5 mm diameter, eight leading wires, and sensing windows of 1.5 mm diameter in the epoxy insulation layer. A diagram of the flexible array sensor is shown in Fig. 1.
Fig. 2 Diagram of array sensor measurement system
III. RESULTS AND DISCUSSION A. Function of WSN Measurement Platform
Fig. 1 Fabrication framework of flexible array sensor B. Measurement System In this study, the detection temperature was controlled at 25 °C during the measuring process. Three methods of hydrogen ion detection were used in this study: a multi-meter (Model: Agilent HP 34401A, Agilent, U.S.A.), an NI data acquisition card (DAQ card), and an NI wireless sensor network (WSN) module. The HP 34401A multi-meter has some disadvantages: it can measure only a single sensor at a time, and the instrument is stationary. In contrast, the NI DAQ card and NI WSN systems offer multi-channel measurement (up to 16 channels), portability, and low device cost. NI LabVIEW 2009 (Model: LabVIEW 2009, National Instruments Corp., U.S.A.) was selected as the analysis software to estimate the sensitivity and linearity. The voltage-time (V-T) curves were acquired with a V-T measurement system consisting of the NI DAQ card or NI WSN system, a read-out circuit, a two-electrode cell and a personal computer (PC), as shown in Fig. 2. The temperature parameter was set to room temperature and the measurement time to 5 minutes.
In this study, the operational interface of the WSN measurement system was designed with LabVIEW software. The interface is divided into three parts: Parameter Setting, Measure Curve, and Data Analysis, as shown in Fig. 3. Parameter Setting: The "Parameter Setting" interface is composed of "Selecting Instrument", "Physical Channels", "Global Calibrations (scale)" and "Time Interval (sec/scale)". The "Selecting Instrument" control of "Step 1" has three modes: "HP 34401 (RS-232)", "NI Instruments" and "Wireless Sensor Network". The "HP 34401 (RS-232)" mode uses the HP 34401A multi-meter to acquire measurement data via an RS-232 cable. The "NI Instruments" mode is based on the NI USB-6210 DAQ card, an acquisition device with 16 inputs, 16-bit resolution and 250 kS/s multifunction I/O. Finally, the "Wireless Sensor Network" mode is used for wireless measurement. The "Physical Channels" control of "Step 1" selects the number of measurement channels. The "Global Calibrations (scale)" control of "Step 2" sets the interval of each scale for system measurement, and the "Time Interval (sec/scale)" control of "Step 2" sets the number of seconds per scale. Finally, the "Step 3" button is pressed to choose the saving path of the measurement data. Measure Curve: The "Measure Curve" interface is composed of the measurement diagram and values with real-time display, plus two buttons, Pause and Stop. The coordinate units of the curve are output voltage (mV) and time (sec), and up to 16 channels of measured values can be shown. The operation interface of the "Measure Curve" is shown in Fig. 4.
Table 1 Sensitivities and linearities of flexible pH array sensor measured with I-V, WSN, and DAQ card systems

       I-V Measurement System      WSN Measurement System     DAQ Card System
 No    Sensitivity   Linearity     Sensitivity   Linearity    Sensitivity   Linearity
       (mV/pH)                     (mV/pH)                    (mV/pH)
 1     56.74         0.998         53.02         0.967        53.80         0.998
 2     54.41         0.997         51.75         0.989        52.72         0.998
 3     50.04         0.996         52.25         0.990        51.86         0.999
 4     49.77         0.994         53.39         0.988        52.54         0.998
B. Measurement Results of Current-Voltage System The current-voltage (I-V) measurement system is composed of a Keithley 236 Semiconductor Parameter Analyzer and an n-MOSFET readout circuit. The sensitivity and linearity of the flexible array sensor measured at concentrations between pH 1 and pH 13 at room temperature (25 °C) are presented in Table 1. The sensitivity ranged from 49.77 mV/pH to 56.74 mV/pH and the linearity from 0.994 to 0.998. The pH array sensors have thus been fabricated successfully and applied to the I-V measurement system. Fig. 3 Interface of "Parameter Setting" for WSN measurement system Data Analysis: The "Data Analysis" interface was designed for the analyses of sensitivity and linearity, and is shown in Fig. 4.
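The sensitivity and linearity analysis described here reduces to a least-squares line fit of output voltage against pH: the slope magnitude is the sensitivity in mV/pH and the magnitude of the correlation coefficient is the linearity. The following is a minimal sketch, not the LabVIEW implementation; the 53.4 mV/pH synthetic response is illustrative, not measured data.

```python
import numpy as np

def sensitivity_linearity(ph, mv):
    # least-squares fit of output voltage (mV) against pH value
    slope, intercept = np.polyfit(ph, mv, 1)
    r = np.corrcoef(ph, mv)[0, 1]          # Pearson correlation coefficient
    return abs(slope), abs(r)              # sensitivity (mV/pH), linearity

ph = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0])
mv = 520.0 - 53.4 * ph                     # illustrative ideal response
sens, lin = sensitivity_linearity(ph, mv)  # ~53.4 mV/pH, linearity ~1.0
```

With noisy measured voltages instead of the ideal line, the linearity falls below 1 in the same way as the 0.967-0.999 values reported in Table 1.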
C. Measurement Results of Voltage-Time System Two V-T measurement methods for hydrogen ion detection are available in our laboratory: the HP 34401A multi-meter and the DAQ card. The sensitivity and linearity measured with the HP 34401A multi-meter are 53.38 mV/pH and 0.999 (not shown in Table 1) from pH 1 to pH 13 at 25 °C. For comparison with the HP 34401A multi-meter, the DAQ card was set to the same condition, with a single sensor measured at a time. The sensitivity and linearity measured by the V-T measurement system with the DAQ card are 53.80 mV/pH and 0.998 from pH 1 to pH 13 at 25 °C, as shown in Table 1. The results of the two devices were almost the same; accordingly, the DAQ card is feasible for V-T measurement. D. Measurement Results of WSN System
Fig. 4 Interfaces of “Measure Curve” and “Data Analysis” for WSN measurement system
In this study, the characteristics of the hydrogen ion array sensor were measured using the WSN measurement system. The WSN measurement system was based on the DAQ card
voltage-time (V-T) measurement system. The voltage-time-based WSN measurement system comprised a read-out circuit, a four-channel analog input measurement node (NI WSN-3202, USA) and a gateway coordinator connected to a personal computer (PC). The measurement system was based on LabVIEW software designed for real-time, dynamic measurement. The analog signal was acquired by the measurement node WSN-3202, whose analog input range was set from -10 V to +10 V. The analog signal was transformed to a digital signal by the WSN-3202 and transmitted to the gateway coordinator (WSN-9791 Ethernet Gateway). Finally, the gateway collected all measurement data from each measurement node (WSN-3202) and sent the data to a computer via an Ethernet cable. The received digital signal was then analyzed by the LabVIEW software on the computer to calculate the sensitivity and linearity. Furthermore, the hardware architecture of the wireless sensor network module comprises the sensor device, the read-out circuit and the WSN-3202 measurement node, as shown in Fig. 5.
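For reference, mapping a raw ADC code onto a configured ±10 V input range follows directly from the converter resolution. A sketch for a 16-bit converter is shown below; the unsigned-code convention is an assumption for illustration (the actual NI driver returns already-scaled voltages).

```python
def code_to_volts(raw, bits=16, vmin=-10.0, vmax=10.0):
    # linearly map an unsigned ADC code onto the configured input range
    full_scale = 2 ** bits - 1
    return vmin + (raw / full_scale) * (vmax - vmin)

code_to_volts(0)      # -10.0 (bottom of range)
code_to_volts(65535)  # 10.0 (top of range)
```

The step size over a 20 V span at 16 bits is about 0.3 mV, comfortably below the ~53 mV/pH sensitivity of the sensor, which is why the 16-bit DAQ and WSN readings in Table 1 agree so closely.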
Fig. 6 Measurement curve and interface of WSN measurement system
IV. CONCLUSIONS A wireless sensor network measurement system with a flexible array sensor has been presented. The LabVIEW software runs on a personal computer, and the measurement data can be acquired, processed, analyzed, and presented via the wireless sensor network. According to the experimental results, the sensitivities and linearities of the flexible array sensor measured by the V-T measurement system with the DAQ card or WSN system are excellent. Hence, they can replace conventional, expensive instruments in V-T measurement systems.
ACKNOWLEDGMENT
Fig. 5 Hardware architecture of wireless sensor network Finally, the measurement curve and interface of the WSN measurement system are shown in Fig. 6. The sensitivity and linearity of the flexible array sensor at concentrations between pH 1 and pH 13 are presented in Table 1: the sensitivities ranged from 51.75 mV/pH to 53.39 mV/pH and the linearities from 0.967 to 0.990. To verify that the NI WSN device is feasible for pH measurement, the same sensor was used with the DAQ card device; its sensitivity and linearity over pH 1 to pH 13, also given in Table 1, ranged from 51.86 mV/pH to 53.80 mV/pH and from 0.998 to 0.999, respectively. According to these results, the NI WSN device is feasible for hydrogen ion detection.
The authors would like to thank the National Chip Implementation Center (CIC) for supporting the CMOS fabrication. This study was supported by the National Science Council, Republic of China, under contract NSC 97-2221-E-224-058-MY3.
REFERENCES
1. Tymecki Ł, Zwierkowska E, Koncki R (2005) Strip bioelectrochemical cell for potentiometric measurements fabricated by screen-printing. Anal. Chim. Acta 538: 251-256
2. Tymecki Ł, Zwierkowska E, Koncki R (2004) Screen-printed reference electrodes for potentiometric measurements. Anal. Chim. Acta 526: 3-11
3. Michael C M, Habib A, Wang D et al. (2007) Highly ordered nanowire arrays on plastic substrates for ultrasensitive flexible chemical sensors. Nat. Mater. 6: 379-384
4. Robert K, Marco M (1997) Screen-printed ruthenium dioxide electrodes for pH measurements. Anal. Chim. Acta 351: 143-149
5. Akyildiz I F, Su W, Sankarasubramaniam Y et al. (2002) Wireless sensor networks: a survey. Computer Networks 38: 393-422
6. Wei Y, John H, Estrin E D (2004) Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Trans. Networking 12: 493-506
7. Ilyas M, Mahgoub I (2006) Smart dust: sensor network applications, architecture, and design. CRC/Taylor & Francis, Boca Raton
8. Tsai Y H (2007) Fabrication and analysis of the ascorbic acid biosensor based on the ruthenium oxide sensing electrode. Master thesis, Graduate School of Optoelectronics, National Yunlin Institute of Technology, Yunlin, Taiwan
9. Chou J C, Tsai Y H, Chen C C (2008) Development of a disposable all-solid-state ascorbic acid biosensor and miniaturized reference electrode fabricated on single substrate. IEEE Sens. J. 8: 1571-1577
10. Chou J C, Chen W C, Chen C C (2009) Flexible sensor array with programmable measurement system, Proceedings of The International Conference on Chemical and Biomolecular Engineering (ICCBE 2009), Tokyo, Japan, 2009, pp. 340-344
11. Chou J C, Chen C C (2009) Weighted data fusion for flexible pH sensors array. Biomed. Eng. Appl. Basis Commun. 21: 365-369
Author: Jung-Chuan Chou Institute: Department of Electronic Engineering, National Yunlin University of Science and Technology, Street: 123, Sec.3, University Rd. City: Douliou, Yunlin Country: Taiwan, R.O.C. Email: [email protected]
Application of Gold Nanoparticles for Enhanced Photo-Thermal Therapy of Urothelial Carcinoma
Y.J. Wu1, C.H. Chen1,2, H.S.W. Chang3, W.C. Chen4, and J.J. Jason Chen1
1 Institute of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
2 China Medical University and Beigang Hospital, Chiayi, Taiwan
3 Institute of Biomedical Engineering, Chung Yuan Christian University, Taichung, Taiwan
4 Department of Urology, School of Medicine, China Medical University and Hospital, Taoyuan, Taiwan
Abstract— The aim of this study is to use photothermal therapy (PTT) to treat urothelial cancer, exploiting the unique optical property of gold nanoparticles (GNPs): surface plasmon resonance. The GNPs were conjugated with anti-EGFR and anti-MUC7 antibodies for tumor targeting. The conjugated GNPs were exposed to a green light laser (532 nm) to produce enough thermal energy to kill transitional cell carcinomas (TCC). Our results show that the cancer cells (MBT2, T24, 9202, 8301) were damaged at relatively lower energy (10 W/cm2, 1.6 Hz with 300 ms pulses) compared with the group without added GNPs. The damage was directly related to the applied laser energy and irradiation time. The therapeutic effect on superficial disease in situ by intravesical instillation of GNP agents will be further examined in an animal study of C3H mice. It is expected that the minimally invasive technology of PTT with GNPs will not only reduce the high recurrence of TCC but also avoid the side effects of traditional chemotherapy. Keywords— Gold nanoparticles, Photothermal therapy, Surface plasmon resonance, Transitional cell carcinoma, Adjuvant therapy.
I. INTRODUCTION Photothermal therapy (PTT) is a treatment modality based on the particular optical properties of nanoparticles, which are irradiated with laser light at a matching wavelength. Because of the unique optical properties of gold nanoparticles (GNPs), various PTT techniques have been successfully used for cancer treatment in recent years [1]. By virtue of the collective behavior of free electrons at the particle surface, called surface plasmon resonance, the absorption wavelength region of gold nanoparticles depends on their size, shape and chemical structure [2]. This gives GNPs excellent photothermal properties. Compared with other types of nanoparticles such as core-shell nanoparticles, cerium oxide (CeO2), TiO2, ZnO, magnetic nanoparticles or quantum dots, GNPs not only possess excellent chemical stability but also exhibit high affinities to biomolecules for cancer treatment or detection. In addition, GNPs show good biological compatibility and low cytotoxicity, which make them strong candidates for future clinical applications.
Cancer therapy using nanotechnology has been widely applied to breast cancer, oral cavity cancer, cervical cancer, lung carcinoma, brain cancer, etc. However, treatment of bladder cancer has rarely been investigated, especially in vivo. Bladder carcinoma is a relatively common malignancy of the urinary tract, and more than 90% of bladder cancer patients have primary transitional cell carcinoma (TCC) [3]. In addition, superficial disease represents the majority (70-85%) of all diagnosed cases [4]. Bacillus Calmette-Guerin (BCG) is used as a potent intravesical therapy for superficial (non-muscle-invasive) bladder cancer, but recurrence of superficial bladder carcinoma remains widespread. Even though intravesical adjuvant therapies such as Bacillus Calmette-Guerin, Mitomycin-C (MMC) or Epirubicin have been used clinically, the recurrence rate is still close to 50% (40% and 53% for BCG and chemotherapy, respectively) with 15% disease progression in the test groups. These results show a poor therapeutic effect, with little benefit from any instillation agent. Furthermore, these adjuvant therapies may reduce patients' inclination for treatment owing to side effects including cystitis, fever, haematuria and urinary frequency. In recent years, potential markers for urothelial carcinoma, including epidermal growth factor receptor (EGFR), mucin 7 (MUC7) and cytokeratin 20 (CK20), have been developed [5]. In our previous study, we used these targets for photothermal cancer therapeutic agents in conjunction with spherical gold nanoparticles of an average diameter of 48 nm. Research has shown that spherical particles with diameters between 30 and 50 nm have the highest uptake into mammalian cells [6]. Moreover, particles with radii below 50 nm and higher density move closer to the endothelium layer, enhancing the treatment effect.
In this study, 48 nm gold nanoparticles are used as an adjuvant therapy to assist the surgical operation in superficial bladder cancer treatment. The therapeutic effect on superficial bladder cancer in situ is evaluated. TCC cell lines are used to test the curative effect as a prerequisite for the subsequent carcinoma-in-situ animal study.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 380–383, 2011. www.springerlink.com
II. MATERIALS AND METHODS This study selected four TCC cell lines, including murine bladder cancer (MBT-2) and human bladder cancer (T24, 9202, 8301), to verify the therapeutic effect of monoclonal anti-MUC7 antibody/GNPs in comparison with the polyclonal anti-EGFR antibody/GNPs used in our previous study [7]. A. Bladder Tumor Cells Culture The malignant urothelial cell lines (UCC) MBT-2 (murine), T24, 9202 and 8301 (human) were cultured as monolayers at 37 °C at atmospheric pressure. The culture medium, Roswell Park Memorial Institute 1640 (Gibco-BRL), contains 10% fetal bovine serum and 1% penicillin-streptomycin. Fig. 1 Reduction of antibody disulfide bonds B. Preparation of Gold Nanoparticles The GNPs were prepared by chloroauric acid reduction according to the method developed by Turkevich et al. [8]. In brief, hydrogen tetrachloroaurate(III) trihydrate (Sigma) was diluted in D.D. water (Millipore) to a final concentration of 1.0×10-3 M. Then 34.6×10-3 M trisodium citrate (J.T. Baker) was added to the boiling chloroauric acid solution under vigorous stirring until well dispersed. When the color of the solution turned purplish red, the formed GNP solution was removed from heat and cooled at room temperature for at least 30 min. The particle suspension was centrifuged and diluted in 20×10-3 M HEPES buffer (pH 7.4, Sigma) to a final concentration of 0.8 optical density at 532 nm. For sterility, the particle suspension was finally filtered through a 0.22 μm Millipore filter. C. Semi anti-EGFR and anti-Mucin7 Antibody Labeled GNPs To enhance the efficiency of conjugation between GNPs and antibody compared with hydrophobic interaction, creation of half-antibody fragments was adopted [9]. In principle, because GNPs have a high affinity for sulfhydryl groups, the half-antibody produced by this reduction has a great ability to bind to them (as shown in Figure 1).
The antibodies were reduced following the work of Mahnke et al. [10]. The antibody was dissolved at a concentration of 10 mg/ml in 20 mM sodium phosphate buffer (pH 7.5) containing 150 mM NaCl and 10 mM ethylenediaminetetraacetic acid (EDTA). Then 6 mg of 2-mercaptoethylamine (2-MEA) was added to the antibody solution.
After the solution dissolved, the reaction mixture was incubated for 90 min at 37 °C. Lastly, the reduced antibody was purified on a gel filtration column equilibrated with the same buffer containing 5 mM EDTA to remove the excess 2-MEA. We further changed the dialysis buffer twice to ensure that the desalted antibody was purified. D. GNP/Cell Carcinoma Incubation and Laser Therapy Experiment The IgG-conjugated GNPs were added to the above urinary bladder carcinoma cell lines in 6-well tissue culture plates, 1 cc per well. After incubation at 37 °C for 30 min, the cell/anti-EGFR/Au preparations were washed three times to remove the unbound suspension. The 532 nm green light laser system (IDAS) provides stable and safe power for the experiment; its wavelength overlaps with the GNP absorption region to promote thermal efficiency. A pulsed mode was used to prevent overheating of the medium. After incubation for 30 min, the cell lines immersed in GNP suspension were exposed to the 532 nm laser at various power densities for 500 pulses. Cell images were taken with a Motic AE21 microscope at 40× magnification, and cell viability was tested by staining with 0.4% trypan blue (Sigma). E. Orthotopic Bladder Cancer Animal Model To verify the efficiency of the therapy, an animal study is required. The urinary bladder cancer model developed by Xiao et al. was adopted [11]. First, Credé's method was used to evacuate the urine after the C3H mouse was anaesthetized. The mouse bladder was then catheterized with a 24-gauge plastic cannula lubricated with an inert lubricant for agent
seeding. The bladder mucosa was destroyed with 0.1 N hydrochloric acid (HCl) and neutralized with 0.1 N potassium hydroxide (KOH). After 15 s, the bladder was flushed with sterile phosphate-buffered saline (PBS). Immediately after the bladder instillation, 1×106 MBT2 cells were inoculated via the urethra for tumor seeding. To evaluate tumor progression, mice were sacrificed for section examination every 14 days post-inoculation.
III. RESULTS AND DISCUSSION A. Preparation of Gold Nanoparticles From the image shown in Figure 2, the size and shape of the gold nanoparticles can be observed by atomic force microscopy (AFM). The particles were spherical, with an average diameter of 48 nm. The UV/Vis spectrophotometer shows an absorption peak at about 532 nm, overlapping the laser wavelength for better thermal efficiency.
Fig. 3 (A),(B) SEM images demonstrate the existence of semi anti-EGFR/GNPs at different magnifications (10000 x and 100000 x). (C) SEM-EDS analysis of the element Au distribution on MBT2 cells
Fig. 2 AFM and UV/Vis spectrophotometer image shows the GNPs size, shape and absorption wavelength
B. Observation of Cells/Semi Antibody/GNPs To confirm the presence of our semi anti-EGFR/GNPs, scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were used to observe the distribution of gold. As the SEM images in Figure 3 show, the semi anti-EGFR/GNPs were clearly bound to the MBT2 cells at various sites. The element Au accounted for 5.36% of the total element content by weight, which proved the presence of GNPs. The TEM images (as shown in Figure 4B) also demonstrate the presence of the semi anti-EGFR/GNPs. A small amount of cellular uptake of nanoparticles through endocytosis can be observed. The size-dependent GNPs may potentially move into the tumors, making it easier to enhance the treatment [8].
Fig. 4 (A) TEM image of MBT2 cell lines. (B) TEM image of MBT2 cell lines with added semi anti-EGFR/GNPs C. Laser Therapy Experiment For the laser experiment, the different TCC cell lines (MBT2, T24, 9202, 8301) were tested. As shown in Figure 5, our preliminary results demonstrate that the cancer cells were damaged at relatively lower energy (10 W/300 ms) compared with cells without added semi antibody/GNPs (30 W/300 ms). Furthermore, the combination of the two kinds of antibody/GNPs provided a stronger tumor-targeting ability and improved the treatment effect.
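For pulsed exposures like these, the cumulative dose can be summarized as fluence: power density × pulse width × number of pulses. A small worked sketch follows, combining the 10 W/cm² and 300 ms figures from the abstract with the 500 exposures from the methods; treating them as one condition (and assuming all pulses hit the same spot) is an illustrative simplification.

```python
def fluence(power_density_w_cm2, pulse_width_s, n_pulses):
    # cumulative energy per unit area delivered by a pulsed laser exposure
    return power_density_w_cm2 * pulse_width_s * n_pulses

fluence(10.0, 0.3, 500)  # ~1500 J/cm^2 for the GNP-assisted condition
fluence(30.0, 0.3, 500)  # ~4500 J/cm^2 without GNPs, threefold higher
```

This makes the comparison in the text concrete: adding targeted GNPs lowers the damaging dose by a factor of about three in these units.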
IV. CONCLUSION Our preliminary results indicate that the immunized GNPs can be used as a PTT agent to assist cancer therapy. The EGFR and MUC7 antibodies combined with GNPs can target specific regions of the cell membrane. Therefore, when GNPs were added, the laser power used was about half of the energy otherwise needed to injure the cancer cells. The overlap in wavelength between our GNPs and the laser produces stronger thermal energy and consequently promotes the efficiency of cancer treatment.
ACKNOWLEDGMENT This study was supported by an NSC grant in Taiwan (grant number NSC 98-2320-B-039-006).
REFERENCES Fig. 5 Observation of MBT2 cell damage after laser treatment with different conditionings D. Observation of C3H Mice Animal Model The orthotopic bladder cancer model was verified by the histological section, as shown in Figure 6. The result shows apparent tumor in the bladder lumen with EGFR over expression in the tumors. Most important of all, all of the tumors were superficial without muscle invasiveness. The superficial bladder cancer is the most suitable time for initiation of adjuvant intravesical therapies.
Fig. 6 Histological sections of C3H mice bladder tumor with different magnifications (40 x, 100 x and 100000 x). We can notice that the arrangement of epidermal cells and nuclear contours were irregular (excessive proliferation of tumor cells)
1. Lu W, Arumugam S R, Senapati D et al. (2010) Multifunctional Oval-Shaped Gold-Nanoparticle Based Selective Detection of Breast Cancer Cells Using Simple Colorimetric and Highly Sensitive Two-Photon Scattering Assay. J Am Chem Soc 4:1739-1749
2. Link S, El-Sayed M A (2003) Optical properties and ultrafast dynamics of metallic nanocrystals. Annu Rev Phys Chem 54:331-346
3. Lamm D L, Torti F M (1996) Bladder Cancer. J Clin 46:93-112
4. Puntoni M, Zanardi S, Branchi D et al. (2007) Prognostic Effect of DNA Aneuploidy from Bladder Washings in Superficial Bladder Cancer. Cancer Epidemiol Biomarkers Prev 16:979-983
5. Villares G J, Zigler M, Blehm K et al. (2007) Targeting EGFR in bladder cancer. World J Urol 25:573-579
6. Chithrani B D, Ghazani A A, Chan W C W (2006) Determining the Size and Shape Dependence of Gold Nanoparticle Uptake into Mammalian Cells. Nano Letters 6:662-668
7. Chen C H, Wu Y J, Chang H S W et al. (2010) Photothermal Therapy of Urothelial Cancer Using Anti-EGFR/au Nanoparticles, IFMBE Proc. vol 31, World Congress on Biomech. & Biomed. Eng., Singapore, 2010, pp 1185-1188
8. Kimling J, Maier M, Okenve B et al. (2006) Turkevich Method for Gold Nanoparticle Synthesis Revisited. J Phys Chem B 110:15700-15707
9. Hirsch J D, Haugland R P (2005) Conjugation of Antibodies to Biotin. Humana Press, Totowa
10. Mahnke K, Qian Y, Knop J et al. (2003) Induction of CD4+/CD25+ regulatory T cells by targeting of antigens to immature dendritic cells. Blood 101:4862-4869
11. Xiao Z, McCallum T J, Brown K M et al. (1999) Characterization of a novel transplantable orthotopic rat bladder transitional cell tumour model. Br J Cancer 81:638-646
Author: Yi-Jhen Wu Institute: Institute of Biomedical Engineering, National Cheng Kung University Street: No.1 Daxue Rd. East Dist. City: Tainan 701 Country: Taiwan Email: [email protected]
IFMBE Proceedings Vol. 35
Nanobiosensor for the Detection and Quantification of Specific DNA Sequences in Degraded Biological Samples

M.E. Ali1, U. Hashim1, S. Mustafa2, Y.B. Che Man2, and M.H.M. Yusop2

1 Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis, Kangar, Malaysia
2 Halal Products Research Institute, Universiti Putra Malaysia, Serdang, Malaysia
Abstract— A 27-nucleotide AluI fragment of the swine cytochrome b (cytb) gene was conjugated to a 3-nm diameter citrate-tannate-coated gold nanoparticle to fabricate a species-specific nanobiosensor. The biosensor was applied to authenticate pork adulteration in autoclaved pork-beef mixtures. The sensor was sensitive enough to detect 0.5% pork in raw and 1% pork in 2.5-h autoclaved mixed samples in a single step, without any separation or washing. The hybridization kinetics of the hybrid sensor was studied with synthetic targets from moderate to extreme target concentrations, and a sigmoidal relationship was found. The kinetic curve was used to develop a convenient method for quantifying target DNA and counting its copy number. The biosensor probe was hybridized with a target DNA several-fold shorter than a typical PCR template. This allows the detection and quantitation of targets in highly processed meat products or extensively degraded samples, where PCR-based identification may not work because of the degradation of the comparatively longer DNA it requires. The assay is a viable alternative to qPCR for detecting, quantifying and counting the copy number of short DNA sequences in degraded samples, with applications to a range of biological problems such as food analysis, biodiagnostics, environmental monitoring, genetic screening and forensic investigations.

Keywords— Species-specific nanobiosensor, hybrid nanobioprobe, hybridization kinetics, sigmoidal relationship, synthetic oligo-targets.
I. INTRODUCTION

Selective detection of specific DNA sequences is increasingly important for addressing a wide range of biological issues such as bio-diagnostics and genetics [1-3], food analysis [4] and forensics [5]. In recent years, a multitude of PCR assays have been developed for the detection of species-specific DNA sequences for meat and meat-product authentication [4, 6-7]. Better stability, codon degeneracy and universal tissue distribution are some of the factors that have made DNA the analyte of choice [6-7]. The key reason for choosing PCR as the preferred analytical tool is its extraordinary ability to amplify a selected segment of DNA from as little as a single copy to easily detectable quantities, and the consequent simplification of sample purification [3].
However, the PCR process itself has some difficult-to-control limitations that often produce artifacts in the final results [6]. It also frequently produces cross-species amplification when a shorter DNA target is used [6-7]. Hybrid bio-materials composed of functionalized nanoparticles covalently linked to biomolecules such as peptides, proteins and polynucleotides are spectacularly interesting for their size-dependent properties and dimensional similarities to biomacromolecules [8-9]. These nanobioconjugates are potential agents for multiplexed bioassays, materials synthesis, ultrasensitive optical detection and imaging, in vivo magnetic resonance imaging (MRI), long-circulating carriers for targeted drug release, and structural scaffolds for tissue engineering [8-10]. Thiol-capped gold nanocrystals (GNCs), covalently linked to fluorophore-labeled oligonucleotides through metal-sulfur bonds, have been shown to detect specific sequences and single-nucleotide mismatches [8-9]. However, such studies have been limited to laboratory-level model experiments with synthetic oligo-targets. No studies have so far explored the sequence- and mismatch-detecting power of fluorophore-labeled oligo-nanoparticle conjugates in heterogeneous biological samples. The hybridization kinetics of such nanobioconjugates also needs to be explored. In the current report, we structurally and functionally integrated a 27-nucleotide segment of the swine mitochondrial (mt) cytb gene with a 3-nm diameter citrate-tannate-coated gold nanocrystal to fabricate a novel class of species-specific nanobiosensor for determining pork adulteration in raw and highly processed mixed meat specimens. Verification and quantification of pork adulteration in meat and meat products are important for Halal authentication, for allergy- and cholesterol-related health issues, and for the enforcement of accurate food labeling to protect consumer interests [4, 6-7].
II. MATERIALS AND METHODS

A. Design of Swine-Specific Oligo-Probes

A 27-bp AluI fragment (429-455 bp) of the swine (Sus scrofa) cytb gene (GenBank # HM010474 in the NCBI database) was chosen
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 384–387, 2011. www.springerlink.com
as a porcine-specific marker. This fragment showed a high degree of intraspecies similarity and interspecies dissimilarity in NCBI-BLAST analysis against the non-redundant nucleotide collection and in ClustalW alignment. The probes were synthesized by IDT, USA, with the modifications shown in Table 1. The synthetic targets (complementary, non-complementary and single-mismatch) were supplied by 1st Base, Malaysia.

Table 1 Oligonucleotide sequences used in the study

Name | Sequence (5′→3′)
Probe | aTMR-A6-CTGATAGTAGATTTGTGATGACCGTAG-A6-(CH2)6-SH
Complementary target | CTACGGTCATCACAAATCTACTATCAG
Non-complementary target | ACGTAACTGCTGTGGCCTGGTCGCTGA
Single-mismatched target | CTACGGTCATCACAAATbTTACTATCAG

a 6-carboxy-tetramethylrhodamine; b marks the mismatched base
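The complementarity stated in Table 1 can be checked mechanically. A minimal sketch (sequences taken from Table 1) verifies that the probe's 27-nt recognition sequence is the reverse complement of the complementary target, and locates the mismatched base in the single-mismatched target:

```python
def revcomp(seq):
    """Return the reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

probe_core = "CTGATAGTAGATTTGTGATGACCGTAG"   # 27-nt recognition sequence
target     = "CTACGGTCATCACAAATCTACTATCAG"   # complementary target
mismatched = "CTACGGTCATCACAAATTTACTATCAG"   # single-mismatched target

print(revcomp(target) == probe_core)          # → True: the probe binds this target
# 1-based positions where the mismatched target differs from the target
print([i + 1 for i, (a, b) in enumerate(zip(target, mismatched)) if a != b])  # → [18]
```

The single check confirms that the mismatch sits at position 18 (C→T), in the interior of the duplex, which is the harder case for discrimination.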
B. Synthesis of Colloidal Gold Nanoparticles

Small gold nanoparticles (GNPs) were prepared according to ref. [11]. The colloidal sol was characterized with a Hitachi 7100 transmission electron microscope and a PerkinElmer Lambda 25 UV-vis spectrophotometer. The average particle size was determined to be 3 ± 0.2 nm in diameter by measuring 500 particles. The approximate number and concentration of the particles, calculated according to Haiss et al. [12-13], were 2.01 × 10¹¹ NPs/µl and 335 nM.

C. Preparation of Hybrid Nanobioprobes

The custom-made probes were mixed with GNPs in a ratio of 3:1 and the mixture was incubated overnight at 20°C in a shaking water bath. The oligo-conjugated particles were aged and purified according to Maxwell et al. [8]. The average number of attached oligo-probes per particle was determined by 2-mercaptoethanol digestion, following Maxwell and coworkers [8].
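The particle number and molar concentration follow from the particle diameter and the total gold content, as in the Haiss et al. approach [12-13]: the atoms per particle come from bulk gold density, and dividing the gold concentration by that count gives the particle concentration. A sketch (the 0.25 mM gold concentration below is an illustrative assumption, not a value from the paper):

```python
import math

NA = 6.022e23      # Avogadro's number, 1/mol
RHO_AU = 19.32     # gold density, g/cm^3
M_AU = 196.97      # gold molar mass, g/mol

def atoms_per_particle(d_nm):
    """Gold atoms in a spherical particle of diameter d_nm, at bulk density."""
    v_cm3 = (math.pi / 6.0) * (d_nm * 1e-7) ** 3   # sphere volume in cm^3
    return RHO_AU * v_cm3 * NA / M_AU

def particle_molar_conc(c_gold_molar, d_nm):
    """Molar particle concentration from total gold concentration."""
    return c_gold_molar / atoms_per_particle(d_nm)

print(round(atoms_per_particle(3.0)))              # ≈ 835 atoms per 3-nm particle
conc = particle_molar_conc(2.5e-4, 3.0)            # hypothetical 0.25 mM gold
```

For a 3-nm particle this gives roughly 800-850 gold atoms, so particle concentrations come out around three orders of magnitude below the gold concentration.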
E. Specificity and Sensitivity in Mixed Biological Samples

Separate 100-g aliquots of pork-beef binary admixtures were prepared by mixing fresh pork and beef in ratios of 100:0, 50:50, 25:75, 10:90, 5:95, 1:99, 0.5:99.5, 0.1:99.9 and 0:100 (w/w). The mixtures were autoclaved at 120°C for 2.5 h, and DNA was extracted from 100-mg samples in triplicate using the MasterPure™ DNA Purification Kit (Epicentre Biotechnologies, Madison, USA) following the manufacturer's protocol. The purity and concentration of the extracted DNA were checked with an Eppendorf UV-vis Biophotometer (Eppendorf, Germany). The extracted total DNA (500 µg/ml) was digested with AluI restriction enzyme (New England Biolabs, UK). Digestions were performed in a total volume of 1 ml, containing 600 µl of total DNA, 300 U of restriction enzyme and 100 µl of digestion buffer (New England Biolabs, UK), for 8 h at 37°C in a shaking water bath, and were confirmed by electrophoresis on a 3% agarose gel. The hybridization reaction was performed in a total volume of 2.5 ml, in triplicate, with 10 nM probes and 60 µg/ml of AluI-digested mixed DNA. The LOD for mixed samples was determined by incubating 10 nM hybrid probes with serially diluted AluI-digested pork-beef DNA mixtures for 60 min in a 2.5-ml reaction volume.

F. Fluorescence Measurement

Emission spectra were collected in a 10-mm cuvette with a 2-ml volume on a PerkinElmer LS55 fluorescence spectrometer with excitation at 545 nm. Each spectrum was an average of five scans at a scan speed of 200 nm/min with a 5-nm slit width. The background was subtracted by replacing the sample with a 2:1 mixture of 10 mM PBS and hybridization buffer. For determination of the LOD, a series of fluorescence spectra was obtained in triplicate and the average fluorescence intensity at 579 nm was plotted as a function of target concentration.
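The paper establishes the LOD empirically by serial dilution. A common complementary criterion (an assumption here, not a method stated in the paper) is three standard deviations of the blank fluorescence divided by the slope of the intensity-vs-concentration calibration:

```python
import statistics

def lod_3sigma(blank_intensities, calibration_slope):
    """3-sigma LOD: 3 x SD of replicate blank readings / calibration slope.

    blank_intensities: replicate fluorescence readings with no target (a.u.)
    calibration_slope: slope of intensity vs concentration (a.u. per nM)
    """
    return 3.0 * statistics.stdev(blank_intensities) / calibration_slope

# hypothetical triplicate blank readings and slope, for illustration only
print(lod_3sigma([10.0, 10.2, 9.8], 5.0))   # ≈ 0.12 nM
```

This ties the detection limit to measurement noise rather than to the lowest dilution actually tested, so the two estimates can usefully be compared.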
III. RESULTS AND DISCUSSION
D. Specificity and Sensitivity Tests

An aliquot of the purified nanoparticle probes was diluted to 10 nM with hybridization buffer (90 mM KCl, 10 mM Tris, pH 8). To determine specificity, the probes were incubated with a 4-fold excess (60 nM) of complementary, non-complementary or single-mismatch targets (Table 1) at 70°C for 5 min to allow strand separation, and then at 40°C for 30-60 min to allow hybridization. To determine the limit of detection (LOD), an 8-fold excess of complementary targets was serially diluted from 120 nM to 3.66 nM with hybridization buffer and incubated for 60 min with 10 nM nanoparticle probes.
A. Detection and Quantification Principle

Earlier studies have shown that hybrid materials composed of single-stranded DNA (ssDNA) covalently linked to a small gold nanoparticle (2-3 nm in diameter) via a sulfur-gold bond at one extremity, and to a fluorescent dye at the other, can assume two distinct conformations: (1) a constrained conformation with a stem-loop or arch-like appearance before target binding, and (2) a straight conformation with a rod-like appearance after target binding. In the closed structure, the fluorophore and the GNP are held in close proximity and the fluorescence is quenched by non-radiative energy
transfer from the dye to the metal. In the open state, on the other hand, the fluorophore is held far (>2 nm) from the metal particle and emits fluorescence [8-9]. Thus the degree of fluorescence emission can be assumed to depend on the degree of target binding: maximum fluorescence is observed when the probe is saturated with targets, and base-line fluorescence is observed in the absence of any target. Based on this assumption, a standard curve can be generated with known concentrations of probes and targets, and the concentration of an unknown can be obtained by plugging the observed fluorescence into the standard curve.

Fig. 1 Schematic presentation of the quantification and operating principles of swine nanobiosensor probes

B. Species Specificity of the Prepared Nanobiosensor

The fluorescence spectra of 10 nM porcine nanobiosensor probes with a 4-fold molar excess (60 nM) of complementary (red curve: top), single-mismatch (green curve: 2nd from the top) and non-complementary targets (pink curve: 3rd from the top) are shown in Fig. 2. Only base-line fluorescence was observed with the non-complementary targets. However, the single-mismatch targets produced 65-70% less fluorescence than the perfectly matched targets. Thus the fabricated nanobiosensor clearly discriminated complementary, non-complementary and single-mismatch sequences with high specificity.

Fig. 2 Detection of specific DNA sequences and single-nucleotide mismatches using swine-specific nanobiosensor probes. From top to bottom: perfectly complementary (red curve); single-nucleotide mismatch (green curve); non-complementary targets (pink curve); and free probe (blue curve)

Maxwell et al. achieved 55% quenching with 2.5-nm diameter gold nanoparticle probes in which the gold nanoparticles were produced by sodium borohydride reduction [8]. Dubertret et al. achieved a 75% reduction in fluorescence with a molecular beacon and 1.4-nm diameter gold nanocrystals [9]. According to the latter group, a low-ionic-strength hybridization buffer (90 mM KCl, 10 mM Tris, pH 8.0) differentiates perfectly matched and mismatched sequences more precisely at ambient temperature. Although the latter group achieved higher sensitivity, the gold particles they used were too small and unstable above 50°C. Using the relatively more stable citrate-tannate-coated GNPs of larger diameter together with the low-ionic-strength hybridization buffer [9], we achieved a specificity higher than that of Maxwell et al. [8] and close to that of Dubertret et al. [9].
C. Pork Detection in Mixed Biological Samples

The fluorescence spectra of 2.5-h autoclaved pork-beef binary admixtures at various percentages are shown in Fig. 3.
Fig. 3 Pork detection in autoclaved pork-beef binary admixtures by swine-specific nanobiosensor probes. From top to bottom: 100%, 50%, 25%, 10%, 5%, 1%, 0.5%, 0.1% and 0% pork in the pork-beef mixture. The base-line fluorescence of the free probes overlapped with that of 0% pork
The swine-specific biosensor probe clearly detected 1% pork (sky-blue curve: 6th from the top in Fig. 3) in extensively autoclaved pork-beef mixtures. This reflects the high sensitivity and specificity of the hybrid nanoparticle conjugates in tracing target DNA in food products processed under severe heat and pressure, conditions implicated in DNA degradation [6-7]. A higher sensitivity (0.5%) was achieved with the hybrid nanobioprobe in raw meat mixtures (not shown), probably because more targets are available in unprocessed samples. Real-time PCR with a DNA template of less than 120 bp has been reported to detect 0.1% adulteration in moderately sterilized game bird species [7]. However, PCR assays with short template DNA are reported to produce artifactual results and cross-species amplification [6]. Moreover, PCR cannot use a template as short as the 27 bp used in this study; the probe DNA used in this report is comparable in size to the primers used in PCR assays [4, 6-7]. Thus PCR assays cannot be applied to extremely degraded samples where the current method can.

D. Hybridization Kinetics and Target Quantification

A plot of target DNA concentration versus fluorescence intensity at a constant probe concentration yielded a hyperbolic curve (Fig. 4). This reflects that at too low a target concentration (< 1/8-fold), the probe-target interaction is too weak to open the closed structure (Fig. 1), whereas at too high a target concentration (> 6-fold), the probe is saturated with targets. Over a moderate range of target concentrations (marked by the blue circle), a linear curve was obtained with R² = 0.995. This curve was applied to quantify target DNA and calculate its copy number in mixed samples.
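The quantification step described above (a linear fit over the moderate-concentration region, then conversion of concentration to copy number via Avogadro's number and the reaction volume) can be sketched as follows. The calibration points are hypothetical illustrations, not the paper's data:

```python
NA = 6.022e23   # Avogadro's number, copies per mole

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical calibration: target concentration (nM) vs fluorescence (a.u.)
conc = [15.0, 30.0, 45.0, 60.0, 75.0]
fluo = [120.0, 245.0, 360.0, 480.0, 600.0]
slope, intercept = linear_fit(conc, fluo)

def conc_from_fluorescence(f):
    """Invert the calibration: fluorescence (a.u.) -> concentration (nM)."""
    return (f - intercept) / slope

def copy_number(conc_nM, volume_L):
    """DNA copies in a reaction of the given volume."""
    return conc_nM * 1e-9 * NA * volume_L
```

For example, a 2.5-ml reaction at 10 nM target contains about 1.5 × 10¹³ copies; the same conversion applied to an unknown read off the calibration gives its copy number.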
Fig. 4 Standard curve of the hybridization kinetics of swine nanobiosensor probes with synthetic targets. The probe-to-target copy-number ratio is shown at each point. The linear part of the curve is shown in the inset
IV. CONCLUSIONS

A convenient method for detecting, quantifying and calculating the copy number of target DNA in highly degraded mixed biological samples has been developed by combining biology and nanotechnology. We believe our approach will find application in food analysis, genetic screening, biodiagnostics and forensic investigation.
ACKNOWLEDGMENT

This research was supported by grants "RUGS No. 9031" to Prof. Y.B. Che Man and "MOSTI No. 05-01-35SF-1030" to Prof. U. Hashim.
REFERENCES

1. Rees J (2002) Complex disease and the new clinical sciences. Science 296:698-701
2. Hood L, Galas D (2003) The digital code of DNA. Nature 421:444-448
3. Li H, Rothberg LJ (2004) Label-free colorimetric detection of specific sequences in genomic DNA amplified by polymerase chain reaction. J Am Chem Soc 126:10958-10961
4. Che Man YB, Aida AA, Raha AR, Son R (2007) Identification of pork derivatives in food products using species-specific polymerase chain reaction (PCR) for halal verification. Food Control 18:885-889
5. Butler JM (2005) Forensic DNA typing: biology, technology and genetics of STR markers, 2nd edn. Elsevier Academic Press, USA
6. Hird H, Chisholm A, Sanchez A, Harnandez M, Goodier R, Schneede K, Boltz C, Popping B (2006) Effect of heat and pressure processing on DNA fragmentation and implications for the detection of meat using a real-time polymerase chain reaction. Food Addit Contam 23(7):645-650
7. Rojas M, Gonzalez I, Pavon MA, Pegels N, Logo A, Harnandez PE, Garcia T, Martin R (2010) Novel TaqMan real-time polymerase chain reaction assay for verifying the authenticity of meat and commercial meat products from game birds. Food Addit Contam Part A 27:749-763
8. Maxwell DJ, Taylor JR, Nie S (2002) Self-assembled nanoparticle probes for recognition and detection of biomolecules. J Am Chem Soc 124:9606-9612
9. Dubertret B, Calame M, Libchaber AJ (2001) Single-mismatch detection using gold-quenched fluorescent oligonucleotides. Nat Biotechnol 19(4):365-370
10. Kurtis A, Wilkinson C (2001) Nanotechniques and approaches in biotechnology. Trends Biotechnol 19:97-101
11. Preparing colloidal gold for electron microscopy. Technical data sheet #787, Polysciences Inc. Rev: 002, Active: 09/Feb/2009
12. Haiss W, Thanh NTK, Aveyard J, Fernig DG (2007) Determination of size and concentration of gold nanoparticles from UV-vis spectra. Anal Chem 79:4215-4224
13. Using UV-vis as a tool to determine size and concentration of spherical gold nanoparticles (SGNPs) (2008) Nanopartz Tech. note 801

* Corresponding author: Md. Eaqub Ali
Institute: INEE, Universiti Malaysia Perlis
Street: Kangar-Alor Star
City: Kangar
Country: Malaysia
Email: [email protected]
Polysilicon Nanogap Formation Using Size Expansion Technique for Biosensor Application

T. Nazwa, U. Hashim, and T.S. Dhahi

Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
Abstract— Nanobiosensors based on nanogap capacitors are widely used for measuring the dielectric properties of DNA, proteins and other biomolecules. The purpose of this paper is to report the fabrication and characterization of polysilicon nanogap patterns using a novel size-expansion technique. The polysilicon nanogap pattern was fabricated by conventional lithographic techniques; then, using simple dry thermal oxidation, the pair of polysilicon patterns was expanded, narrowing the gap to its lowest value. The progress of the pattern expansion was verified by SEM. Conductivity, resistivity and capacitance tests were performed to characterize the electrical behavior of the fully fabricated device. SEM characterization shows that the expansion of the polysilicon pattern increases with oxidation time, and electrical characterization shows that the nanogap enhances the sensitivity of the device to nanoampere current levels. This simple, low-cost method does not require complicated nanolithography, yet the resulting structures can still serve as biomolecular junctions. The approach can be applied extensively to different nanogap designs down to dimensions of a few nanometers. A nanogap electrode prepared according to the present technique has the advantage of providing an active surface that can easily be modified for the immobilization of biomolecules.

Keywords— Lateral nanogap, biosensor, polysilicon, oxide semiconductor, conventional lithography.
I. INTRODUCTION

The present work relates to a process for forming an electrode having a nanogap, meaning an electrode with a gap distance of about 1-100 nm. As processes for preparing nanogap electrodes have emerged in recent years, new technology has developed rapidly for measuring and exploiting the characteristics of nanotubes, nanoparticles, nanowires and the like, as well as nanometer-scale materials such as proteins and DNA. Nanogap devices can be employed to measure a whole range of biological recognition systems, including protein-biomolecule binding interactions and nucleic acid hybridization [6]. However, the capability of such devices has been hampered by the resolution limits and mechanical instabilities of commonly used electron-sensitive resists.
Measuring electrical properties is limited by the difficulty of hooking a single molecule and placing it between two electrodes. At the nanometer level, the optimum achievable gap size depends on the material selected. Most nanogap fabrication approaches use metal electrodes, which require additional processes such as annealing to obtain the desired pattern [1]; this adds constraints on material selection. For thin metallic nanogap patterns, numerous properties must be considered, including wetting behavior, recrystallization temperature and granularity. In recent years, methods of forming nanogaps or angstrom (Å) gaps by mechanical break junctions have been proposed [2]. These methods are practical only for very narrow gaps of about 1-5 nm and remain complicated for preparing nanogaps in the 4-100 nm range. Moreover, they are not easy to commercialize owing to their low reproducibility, and they cannot produce arbitrary-shaped or multiple nanogaps. A method of forming a nanogap electrode on a semiconductor substrate by wet-etching a mesa structure is also known [3]. Such methods, however, are not cost-effective, and it is very difficult to prepare gaps of 100 nm or less with conventional semiconductor process technologies. In most other methods of preparing a nanogap electrode, the metallic pattern is formed by electron-beam lithography, photolithography, X-ray lithography or printing [7], but these are complicated and hard to control. With the aid of developments in modern chemistry, the expectation of obtaining a single molecule with a desired electrical function is reasonable, and a number of methods for fabricating nanogap electrodes have already been established.
Our goal, however, is to develop a low-cost fabrication method that can be applied in batch production. The method is introduced and compared with existing approaches. The nanogap patterns were inspected by JEOL SEM observation, and further characterization of the full device in terms of resistivity, capacitance and permittivity was performed using a Semiconductor Parameter Analyzer (SPA), a spectrum analyzer and an IV-CV station.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 388–392, 2011. www.springerlink.com
II. EXPERIMENTAL

A. Fabrication Process

Four-inch (100) silicon substrates were prepared for the fabrication process. Before further processing, properties such as silicon thickness and sheet resistance were checked. The substrates were cleaned and rinsed with de-ionized water; proper wet cleaning was done with RCA1 at 75°C to remove undesirable particles, organic residues and metal-ion complexes from the wafer surface. Si3N4 was deposited by plasma-enhanced chemical vapor deposition (PECVD); plasma nitrides always contain a large amount of hydrogen, which enhances the electrical conductivity, stability and mechanical stress of the thin layer. Using a low-pressure chemical vapor deposition (LPCVD) machine, an approximately 400-nm layer of polysilicon was deposited on top of the nitride film at 400°C with 80 sccm of silane supplied. The polysilicon is made thicker to bear the compressive stress loading from the aluminium deposition and to increase the capacitance. The thickness required to support the expansion of the nanogap pattern was initially estimated from the kinetics of thermal oxidation: for every 1 µm of SiO2 grown, about 0.46 µm of silicon is consumed (Ruska 1987). The thickness of each deposited layer was checked with a spectrophotometer and a Hawk 3D Nanoprofiler. About 150 nm of aluminium was then deposited with a thermal evaporator; the aluminium acts as a hard mask to withstand the harsh ion bombardment during the polysilicon RIE step. For the conventional photolithography process, a positive photoresist was coated onto the flat surface of the substrates; three drops of photoresist (PR12000A) provide a thickness of about 1000 nm.
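The 0.46 rule gives a quick geometric estimate of how much each oxidation step narrows the gap: per thickness t_ox of oxide grown, 0.46 t_ox of silicon is consumed, so each sidewall advances outward by about 0.54 t_ox, and two facing sidewalls narrow the gap by twice that. A sketch of this estimate (it ignores two-dimensional corner effects and assumes the sidewall oxide is retained):

```python
def gap_after_oxidation(gap_nm, oxide_grown_nm):
    """Remaining gap after growing oxide_grown_nm of SiO2 on each sidewall.

    Net outward sidewall advance per unit oxide grown is (1 - 0.46) = 0.54,
    since 0.46 of the oxide thickness comes from consumed silicon.
    """
    outward_step = (1.0 - 0.46) * oxide_grown_nm
    return gap_nm - 2.0 * outward_step

print(gap_after_oxidation(1000.0, 100.0))   # 1-um gap, 100 nm oxide → ≈ 892 nm
```

Inverting the same relation shows that closing a 3.13-µm gap down to tens of nanometers requires growing on the order of a micron and a half of oxide per sidewall, hence the long cumulative oxidation times reported below.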
During photolithography, the substrates were exposed for 10 s through mask 1, shown in Fig. 1(b). After development, the aluminium layer was removed in aluminium etchant. The substrates were soft-baked for 20 s to remove the residual solvent used in development. A permanent pattern was thus created on the substrate after the photoresist was removed with acetone. The polysilicon nanogap pattern on the substrates was dry-etched using an RIE recipe of 50 sccm SF6, 10 sccm O2, 1.0 Pa pressure and a bias of 250, etched for 10 s. The aluminium was then completely removed with aluminium etchant to obtain the final nanogap structure. Dry oxidation was carried out at 1000°C for different durations, starting from 40 min, until oxide growth stopped; the polysilicon nanogap patterns were thereby expanded to the smallest gap size. The same procedure was applied to fabricate the pads using mask 2 in Fig. 2. Ti/Au was deposited to about 60 nm and 100 nm, respectively. Using another chrome pad mask, photolithography was employed to produce the Ti/Au pads for electrical characterization of the nanogap electrodes.
Fig. 1 (a) Design specification of mask 1. (b) Schematic design of mask 1, showing the actual arrangement of device designs on the chrome mask, which consists of 160 dies with 6 different designs; the design shown has no gap. (c) Design specification of mask 2. (d) Schematic design of mask 2
Fig. 2 Process flow: (a) starting material; (b) deposit Si3N4; (c) deposit polysilicon; (d) deposit Al; (e) resist coating; (f) soft bake; (g) mask exposure; (h) develop resist; (i) polysilicon RIE; (j) Al etch and resist strip; (k) dry oxidation; (l) polysilicon nanogap pattern with Ti/Au pad fabrication (electrical checking of the device can be performed on the fabricated pads). Steps (a) to (j) are repeated for mask 2

Fig. 3 shows the equivalent circuit after the series impedance is measured: a simple resistor model represents the substrate and polysilicon layers, with a capacitor in series describing the device with no liquid under test
R_TOTAL = R_SUB + R_POLYSILICON + R_TI + R_AU

Fig. 3 (a) Lumped-element model of the dry nanogap electrodes; (b) lumped-element model of the spacer and gap (C_GAP and C_SAMPLE in series)
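The lumped-element model of Fig. 3 can be evaluated directly: the series resistances simply add, and, reading Fig. 3(b) as the gap capacitance in series with the sample capacitance, the two combine reciprocally. A sketch with illustrative component values (not measured values from the paper):

```python
def series_resistance(*rs):
    """Total resistance of elements in series (ohms)."""
    return sum(rs)

def series_capacitance(c1, c2):
    """Two capacitors in series (farads)."""
    return c1 * c2 / (c1 + c2)

# illustrative values only: Rsub + Rpolysilicon + RTi + RAu
r_total = series_resistance(1e3, 5e3, 10.0, 5.0)
# illustrative: Cgap in series with Csample
c_total = series_capacitance(2e-12, 2e-12)
print(r_total, c_total)   # 6015.0 ohms, ~1e-12 F
```

The series-capacitance form explains why the smaller of the two capacitances dominates the measured value, which is relevant when interpreting the C-F data below.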
III. RESULTS

Polysilicon was chosen as the pattern material because it is known to be compatible with high-temperature processing and interfaces very well with Si3N4. Complete conversion of a microgap down to a nanogap profile width can be achieved reliably by adding a simple technique: in this project, conventional photolithography combined with dry thermal oxidation is proposed to transform a 3.13-µm gap to 42 nm. Thermal oxidation expands the polysilicon pattern so that the optimum nanometer-level gap can be achieved. The initial dimension on the chrome mask is 3.73 µm; after photolithography, this was reduced to 3.13 µm. As the oxide thickness increased, the coupling of diffusion and chemical reaction became dominant and the growth of the oxide layer became nonlinear. The polysilicon patterns expanded, reducing the gap to 2.57 µm after 2 hours of oxidation. Owing to the nature of the oxidation process, the gap keeps narrowing during dry thermal oxidation; after 7 hours of oxidation (six oxidation steps), rapid expansion of the polysilicon pattern reduced the gap to 42 nm. Further increasing the oxidation time did not lead to any further expansion, meaning that after 7 hours the polysilicon layer was completely oxidized. By tracking the trend of oxide growth with respect to time, the gap expansion can easily be controlled up to the maximum oxide growth.
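The nonlinear growth noted above is conventionally described by the Deal-Grove model, t_ox²/B + t_ox/(B/A) = t, whose solution is t_ox = (A/2)(sqrt(1 + t/(A²/4B)) − 1). The coefficients below are textbook dry-oxidation values for single-crystal Si near 1000°C, used only for illustration; polysilicon oxidizes somewhat faster:

```python
import math

def deal_grove_oxide_um(t_hours, B=0.0117, B_over_A=0.071):
    """Oxide thickness (um) after t_hours of dry oxidation (Deal-Grove).

    B: parabolic rate constant (um^2/h); B_over_A: linear rate constant (um/h).
    Illustrative values for single-crystal Si near 1000 C, not fitted to
    this paper's polysilicon data.
    """
    A = B / B_over_A
    return (A / 2.0) * (math.sqrt(1.0 + t_hours / (A * A / (4.0 * B))) - 1.0)

for t in (1, 2, 4, 7):
    print(t, "h ->", round(deal_grove_oxide_um(t), 3), "um")
```

The square-root dependence at long times is what makes the later oxidation steps contribute progressively less gap narrowing per hour, consistent with the growth saturating once the polysilicon is fully consumed.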
Fig. 4 SEM images of the polysilicon pattern after dry oxidation for (a) 1 hour, (b) 2 hours, (c) 3 hours, (d) 4 hours, (e) 5 hours, (f) 6 hours and (g) 7 hours (sample 1)
Fig. 5 C-F characterization of the dry nanogap device (sample 1)
IV. DISCUSSIONS

Fig. 5 shows the electrical characterization carried out to check the performance of the device. The capacitance-frequency (C-F) characterization of the nanogap was done with a two-point probe connection using a Keithley 4200 semiconductor characterization system. Dry capacitance measurements were performed by sweeping the frequency from 1 Hz to 1 MHz at room temperature with a 30 mV input signal (0 V DC offset). The results indicate that the device is very stable. The model is accurate from 100 Hz to 100 kHz but deviates at very low and high frequencies. Increasing the frequency did not increase the capacitance, which remained stable up to about 100 kHz and then rose rapidly; the permittivity behaved similarly, staying almost constant until 100 kHz and then increasing rapidly. From a physical viewpoint, we can confirm that conduction across the gap is almost non-existent. To understand metal-oxide-semiconductor (MOS) capacitor CV curves, a high-frequency (HF) CV curve for an n-type semiconductor substrate is illustrated in Fig. 6. A CV curve can be divided into three regions: accumulation, depletion and inversion; each is described below for an n-type complementary MOS (CMOS). The C-V measurements were made with the Keithley 4200 SPA.
Fig. 6 The CV curve of the a-Si micro-gap structure

Figure 6 shows the accumulation region for an n-type CMOS under an applied voltage. A C-V test measured the nanogap capacitance in the strong accumulation region, where for an n-type MOS capacitor the voltage is positive enough that the capacitance remains constant and the CV curve slope is flat. The oxide thickness can be extracted from the oxide capacitance; however, the CV curve for a very thin oxide often cannot "saturate" to a flat slope. As the electrode voltage moved toward negative values, the CMOS started to differ from a parallel-plate capacitor. Roughly at the point where the electrode voltage became negative, the following occurred: the negative electrode electrostatically repelled electrons from the substrate-to-oxide/well-to-oxide interface, and a carrier-depleted area formed beneath the oxide, creating an insulator (the absence of free-moving charges distinguishes an insulator from a conductor). As a result, the HF-CV analyzer measures two capacitances in series: the oxide capacitance and the depletion capacitance. As the electrode voltage becomes more negative, (i) the depletion zone penetrates more deeply into the semiconductor, and (ii) the depletion capacitance becomes smaller, so the total measured capacitance becomes smaller. Therefore, the slope of the CV curve is negative in the depletion region.
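The series combination described above, and the oxide-thickness extraction from the accumulation capacitance, amount to two parallel-plate relations. A sketch (the pad area and oxide thickness below are illustrative assumptions, not this device's values):

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_SIO2 = 3.9     # relative permittivity of SiO2

def oxide_thickness_m(c_ox_F, area_m2):
    """t_ox from the accumulation capacitance: C_ox = eps0 * eps_r * A / t_ox."""
    return EPS_SIO2 * EPS0 * area_m2 / c_ox_F

def depletion_region_capacitance(c_ox_F, c_dep_F):
    """Measured HF capacitance: oxide and depletion capacitances in series."""
    return c_ox_F * c_dep_F / (c_ox_F + c_dep_F)

# illustrative: 100 um x 100 um pad with a 10-nm oxide
area = 1e-4 * 1e-4
c_ox = EPS_SIO2 * EPS0 * area / 10e-9
print(oxide_thickness_m(c_ox, area))   # recovers ~1e-8 m (10 nm)
```

As the depletion capacitance shrinks with more negative bias, the series formula shows the measured capacitance falling toward C_dep, which is exactly the negative-slope depletion behavior described in the text.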
IFMBE Proceedings Vol. 35
T. Nazwa, U. Hashim, and T.S. Dhahi
In the third region, for an n-type CMOS, as the electrode voltage decreases beyond the threshold voltage, dynamic carrier generation and recombination shift toward net carrier generation. The negative electrode voltage both generates electron-hole pairs and attracts the minority carriers toward the electrode. Again, because the oxide is a good insulator, these minority carriers accumulate at the substrate-to-oxide/well-to-oxide interface. The accumulated minority-carrier layer is called the inversion layer because the carrier polarity is inverted. Beyond a certain negative electrode voltage, most of the available minority carriers are in the inversion layer, and further decreases in electrode voltage do not deplete the semiconductor further; that is, the depletion region reaches a maximum depth. The Keithley 4200 SPA has built-in support to control external IV analyzers, which were used to provide further clarification and a more rigorous examination of the a-Si micro-gap material. The IV resistance curve in Figure 7 shows that the derived resistance is high (RES = 4.13891e9 ohm); the structure therefore behaves as an insulator in the absence of an applied voltage.
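Deriving a resistance of this order from an I-V sweep amounts to fitting the slope of the current-voltage data. A minimal sketch, with synthetic data chosen to have a resistance of the same order as the value reported above (the numbers are illustrative, not the measured sweep):

```python
import numpy as np

def fit_resistance(voltages, currents):
    """Estimate resistance from an I-V sweep via the least-squares slope of I vs V.
    R = 1 / slope; a very large R indicates insulating behaviour."""
    slope, _intercept = np.polyfit(voltages, currents, 1)
    return 1.0 / slope

# Synthetic, noise-free I-V data for an insulating gap of ~4e9 ohm.
v = np.linspace(-1.0, 1.0, 21)
i = v / 4.0e9
print(f"R = {fit_resistance(v, i):.3e} ohm")
```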
Fig. 7 IV capacitor curve of the nanostructure For the polysilicon nanogap structure, as the electrode voltage moves toward negative values, the CMOS starts to differ from the parallel-plate capacitor. Roughly at the point where the electrode voltage is applied, a constant current passes through the unconnected nanogap. This current is held constant, forming a micro-gap owing to the isolation of one electrode from the other, and a clear linear relationship is observed between the applied voltage and the current for the capacitor.
V. CONCLUSIONS Methods to fabricate and characterize a nanogap were demonstrated. Two chrome masks were used to fabricate the micro-gap, with polysilicon used to pattern the gap. The electrical characterization in this research was carried out using a Semiconductor Parameter Analyzer (SPA), a spectrum analyzer, and an IV-CV station. Conductivity, resistivity and capacitance tests were performed to characterize and check the structure of the device, which resulted in a small micro-gap, as revealed by further I-V curve results showing currents in the nano-ampere range. These devices not only have the potential to serve as biomolecular junctions, because their size reduces electrode polarization effects regardless of frequency, but can also maintain bioactivity during characterization of the biomolecules.
ACKNOWLEDGMENT We are grateful for fruitful discussions with our collaborators at the Institute of Nano Electronic Engineering (INEE) at University Malaysia Perlis (UniMAP). This work was supported by INEE at UniMAP, through the Nano Technology project. The views expressed in this publication are those of the authors and do not necessarily reflect the official view of the funding agencies on the subject.
A Modified Beer-Lambert Model of Skin Diffuse Reflectance for the Determination of Melanin Pigments
A.F.M. Hani1, H. Nugroho1, N. Mohd Noor2, K.F. Rahim2, and R. Baba2
1 Centre for Intelligent Signal and Imaging Research, Universiti Teknologi PETRONAS, 31750 Tronoh, Malaysia
2 Dermatology Department, Hospital Kuala Lumpur, 50586 Kuala Lumpur, Malaysia
Abstract— In the skin care industry and dermatology, analysis of human skin tone is an important parameter for evaluating the current condition of the skin. Research has shown that human skin tones are due to the combination of skin chromophores (pigments) such as melanin, haemoglobin, bilirubin and beta-carotene. Several works on skin modelling have been reported, but none has been specifically developed to classify and measure types of melanin. In this research, the spectral responses of different human skin phototypes are investigated for melanin pigment (pheomelanin and eumelanin) analysis. We propose a skin pigmentation model based on a modified Beer-Lambert model of skin diffuse reflectance to measure types of melanin. A clinical study involving 118 participants with different skin phototypes (SPTs) is conducted, in which the skin reflectance data of the participants are measured using a Konica Minolta 2600c spectrophotometer. Applying the data to the proposed skin model, it was found that the pheomelanin concentration is -4.6E-5±5.4E-6 moles/l for SPT III, -5.9E-5±6.4E-6 moles/l for SPT IV, and -8.2E-5±9.8E-6 moles/l for SPT V, and the eumelanin concentration is 9.7E-5±7.3E-6 moles/l for SPT III, 1.2E-4±1.03E-5 moles/l for SPT IV, and 1.6E-4±1.7E-5 moles/l for SPT V. The results show that the proposed model can be used for pigmentation analysis to measure melanin pigment types in skin. Keywords— skin chromophores, melanin, pheomelanin, eumelanin, modified Beer-Lambert law.
I. INTRODUCTION In the skin care industry and dermatology, analysis of human skin tone is an important parameter for evaluating the current condition of the skin. Research has shown that human skin tones are due to the combination of skin chromophores (pigments) such as melanin, haemoglobin, bilirubin and beta-carotene [Buxton, 2003]. Tsumura, however, proposed that skin tone is mainly determined by melanin in the epidermal layer and haemoglobin in the dermal layer [Tsumura, 2008]. Melanin is produced by melanocytes. Melanin has two different components, namely pheomelanin (reddish skin appearance) and eumelanin (brownish skin appearance) [Diffey, 1983]. Eumelanin is more abundant in people with dark skin. Pheomelanin is found in both light and dark skins. Pheomelanin gives rise to the pinkish to reddish colours in human skin, while eumelanin gives rise to brownish and blackish colours. Skin chromophores such as haemoglobin are found in blood. There are two types of haemoglobin: oxygenated and deoxygenated. In the arteries, 90-95% of the haemoglobin is oxygenated, while in the veins only 47% is oxygenated [Meglinsky, 2003]. Several works on skin modelling have been reported, and generally they can be categorised into deterministic and non-deterministic approaches.
A. Deterministic Approaches Anderson [Anderson, 1981], Wan [Wan, 1981], Diffey [Diffey, 1983], Cotton [Cotton, 1996] and Doi [Doi, 2003] used the Kubelka-Munk (K-M) model to compute the absorption and scattering coefficients of incident light in skin tissues. Another approach is based on multispectral imaging. One such approach, called Spectrophotometric Intracutaneous Analysis (SIA), is used for early identification of malignant melanoma in human skin [Moncrieff, 2002]. Yet another approach is based on the spectrophotometer. Weather [Weather, 1989] and Dwyer [Dwyer, 1998] reported a high correlation between the spectral data of a specific light spectrum obtained from a spectrophotometer and the concentrations of skin chromophores (melanin and haemoglobin).
B. Non Deterministic Approaches One of the non-deterministic approaches is the Monte Carlo method. Prahl proposed a Monte Carlo based algorithm to model light transport in tissue during laser radiation [Prahl, 1989]. Wang extended the simulation to model light transport in multi-layered tissue [Wang, 1995]. Both studies are fundamental for the analysis of skin using Monte Carlo simulation. The other non-deterministic approach is Independent Component Analysis (ICA). Tsumura [Tsumura, 1999] separated the spatial distribution of melanin and haemoglobin by employing linear independent component analysis of a skin colour image. Ahmad Fadzil et al. [Ahmad Fadzil, 2009] reported that they were able to measure repigmentation accurately in vitiligo cases.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 393–397, 2011. www.springerlink.com
In general, the above methods can be used to characterise the structure and properties of skin. However, they neither classify nor measure in detail the types of melanin (eumelanin and pheomelanin), which is important in understanding the underlying causes of skin pigmentation disorders. In this research, the spectral reflectance of human skin is applied to a proposed pigmentation model in order to analyse and measure the melanin content (pheomelanin and eumelanin) for use in the clinical assessment of skin pigmentation disorders.
II. SKIN PIGMENTATION MODEL FOR MELANIN PIGMENT ANALYSIS
Wang and Steven L. Jacques reported a Monte Carlo simulation of light transport in multi-layered tissue (MCML) that is still used nowadays as the state of the art in skin optics simulation [Tsumura, 2008]. The MCML program performs the following operations for photons: (1) photon launching, (2) generating the propagation distance, (3) moving the photon, (4) internal reflection, (5) photon absorption, (6) changing the photon direction by scattering, and (7) calculating observable quantities such as diffuse reflectance and specular reflectance. This forward model is used iteratively in inverse optical scattering techniques to obtain quantitative skin pigmentation values from the measured diffuse reflectance. However, Monte Carlo skin analysis requires a great deal of computation time to perform the inverse optical scattering calculation. Shimada proposed a simple analytical model based on a modified Beer-Lambert law for an inhomogeneous scattering medium [Shimada, 2001]. The absorbance, A, is defined from the reflectance, R, of the skin, which is considered a semi-infinite medium:
A = -log10 R   (1)
The absorbance A of a homogeneous scattering medium, with molar absorption coefficient ε and molar concentration C, can be calculated as
A = εC l(C) + G   (2)
where G and l(C) are the scattering loss and the mean path length, respectively. For an inhomogeneous scattering medium, the absorbance A at each wavelength proposed by Shimada [13] is given as
A(λ) = Σ(i=1..m) A_i(λ) + G(λ)   (3)
A(λ) = Σ(i=1..m) ε_i(λ) C_i l_i(C_1, …, C_m, λ) + G(λ)   (4)
where the subscript i refers to the ith chromophore and l_i(λ) is the path length in the area in which the ith chromophore is distributed. l_i(λ) depends not only on C_i but also on C_1, …, C_m, but the effects of C_j (j ≠ i) are small. The path length in an area where plural chromophores exist is considered the path length of both chromophores. l_i(λ) and G(λ) vary with the wavelength λ because the scattering coefficients differ at each λ. Skin has four types of pigments: eumelanin, pheomelanin, beta-carotene and bilirubin. The skin absorbance can be formulated as follows:
A_skin(λ) = ε_eumelanin(λ) C_eumelanin l_eumelanin(λ) + ε_pheomelanin(λ) C_pheomelanin l_pheomelanin(λ) + ε_oxy-haemoglobin(λ) C_oxy-haemoglobin l_oxy-haemoglobin(λ) + ε_deoxy-haemoglobin(λ) C_deoxy-haemoglobin l_deoxy-haemoglobin(λ) + ε_beta-carotene(λ) C_beta-carotene l_beta-carotene(λ) + ε_bilirubin(λ) C_bilirubin l_bilirubin(λ) + A_0(λ)   (5)
A_0(λ) = A_0'(λ) + G(λ)
where A_0(λ) is the absorbance of the skin base.
III. CLINICAL STUDY
A baseline clinical study is conducted to analyse melanin pigmentation in participants with no skin pigmentation disorder across different skin phototypes (SPTs), using the Fitzpatrick classification shown in Table 1. Based on the Fitzpatrick skin phototype (SPT) classification, it is accepted that: i. Asians generally have skin types III, IV and V; ii. Caucasians generally have skin types I and II; and iii. Africans and South Asians generally have skin type VI.
Table 1 Fitzpatrick Skin Phototype (SPT)
SPT   Unexposed Skin Colour   Sun Response History
I     White                   Always burns, never tans
II    White                   Always burns, tans minimally
III   White                   Burns minimally, tans gradually and uniformly
IV    Light brown             Burns minimally, always tans well
V     Brown                   Rarely burns, tans darkly
VI    Dark brown              Never burns, tans darkly
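The multiple linear regression implied by Eqs. (4) and (5) of Section II can be sketched as an ordinary least-squares fit: with the products ε_i(λ)l_i(λ) known at each wavelength, the concentrations C_i and the baseline term follow from one matrix solve. The extinction-coefficient spectra, path lengths and concentrations below are synthetic placeholders, not the values used in the study:

```python
import numpy as np

# Modified Beer-Lambert: A(lambda) = sum_i eps_i(lambda) * C_i * l_i(lambda) + A0
# With eps_i(lambda) * l_i(lambda) known per wavelength, the concentrations C_i
# (plus the baseline) follow from linear least squares.
rng = np.random.default_rng(0)
n_wavelengths, n_chromophores = 50, 2                 # e.g. eumelanin, pheomelanin
eps_l = rng.uniform(0.5, 2.0, (n_wavelengths, n_chromophores))  # synthetic eps_i*l_i

c_true = np.array([1.6e-4, -8.2e-5])   # illustrative concentrations (moles/l)
a0 = 0.05                              # assumed constant skin-base absorbance
absorbance = eps_l @ c_true + a0       # forward model, noise-free for clarity

# Design matrix: one column per chromophore plus a column of ones for A0.
design = np.hstack([eps_l, np.ones((n_wavelengths, 1))])
solution, *_ = np.linalg.lstsq(design, absorbance, rcond=None)
c_est, a0_est = solution[:-1], solution[-1]
print(c_est, a0_est)
```

In practice the measured reflectance R would first be converted to absorbance via Eq. (1), A = -log10 R, before the fit.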
In the clinical study, the classification of the study population (SPT III, IV and V) is determined from the L*a*b
values of the buttock, based on Lee Yin Yin [Lee Yin Yin, 2009]. Standard classification of SPT is based on an interview and the judgment of a physician. Lee Yin Yin reported a relationship between L*a*b values and SPT classification, computing the means of the L*a*b values of SPT III, IV and V using the K-means clustering method. We classify each participant by the nearest Euclidean distance computed between their L*a*b values and Dr Lee's mean values. This approach to obtaining the SPT classification is used because of its objectivity. The process flow of the study protocol is shown in Figure 1 below.
Fig. 1 Process Flow of Study Protocol (participant → sign informed consent form → data samples of buttocks obtained using spectrophotometer → spectral analysis of skin reflectance data → melanin pigmentation analysis)
For each participant, we obtained the spectral reflectance data using the spectrophotometer Konica Minolta 2500 C and applied the proposed skin model to obtain the pheomelanin and eumelanin concentrations.
IV. RESULTS AND ANALYSIS
Spectral reflectance data of 118 participants (22 females and 96 males) were measured in the study. Figure 2 shows the participant distribution based on ethnic origin. The SPT distribution is shown in Figure 3: 21 participants with SPT III, 56 with SPT IV, and 41 with SPT V. Note that no participants in the SPT I, II and VI categories were available at the time of the study.
Fig. 2 Ethnic origin distribution (African, Arabic, Cambodian, Caucasian, Chinese, Indian, Malay)
Fig. 3 SPT distribution (SPT III: 21, SPT IV: 56, SPT V: 41)
Figure 4 shows the spectral reflectance data of the SPT III, IV and V participants.
Fig. 4 Spectral Reflectance Data of SPT III, SPT IV and SPT V Participants (intensity versus wavelength, 350-750 nm)
As seen from Figure 4, the spectral reflectance curves of SPT III, SPT IV and SPT V are clearly differentiated. Using multiple linear regression analysis (Eq. 5) obtained from the proposed skin pigment model, we can estimate the concentrations of pheomelanin and eumelanin. Figures 5 and 6 show the estimated pheomelanin and eumelanin for each participant.
Fig. 5 Estimated concentration of pheomelanin (moles/l versus SPT)
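Returning to the SPT assignment of Section III, the nearest-centroid rule over L*a*b values can be sketched as follows. The centroid values here are invented placeholders for illustration, not Dr Lee's published K-means cluster means:

```python
import math

# Hypothetical (L*, a*, b*) cluster means for SPT III, IV and V -- placeholders only.
SPT_CENTROIDS = {
    "III": (65.0, 8.0, 18.0),
    "IV":  (58.0, 10.0, 20.0),
    "V":   (50.0, 12.0, 22.0),
}

def classify_spt(lab):
    """Assign the SPT whose centroid is nearest in Euclidean distance."""
    return min(SPT_CENTROIDS, key=lambda spt: math.dist(lab, SPT_CENTROIDS[spt]))

print(classify_spt((64.0, 8.5, 18.2)))  # nearest to the SPT III centroid above
```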
Fig. 6 Estimated concentration of eumelanin (moles/l versus SPT)
Experiments performed by Vincensi et al. [Vincensi, 1998] indicated that increasing skin pigmentation (SPT I to VI) correlates with changes in the pheomelanin/eumelanin production ratio among SPTs: a lower SPT has a higher pheomelanin and lower eumelanin production ratio. From Figures 5 and 6, it can be seen that the data obtained confirm that the model can be used to estimate the concentrations of pheomelanin and eumelanin in skin. Table 2 shows the mean ± standard deviation of the pheomelanin and eumelanin concentrations for the various SPT participants.
Table 2 Mean±SD of Pheomelanin and Eumelanin Concentrations in skin phototypes
Melanin       SPT III (moles/l)   SPT IV (moles/l)    SPT V (moles/l)
Pheomelanin   -4.6E-5 ± 5.4E-6    -5.9E-5 ± 6.4E-6    -8.2E-5 ± 9.8E-6
Eumelanin     9.7E-5 ± 7.3E-6     1.2E-4 ± 1.03E-5    1.6E-4 ± 1.7E-5
The pheomelanin concentration decreases as the SPT increases, and the eumelanin concentration increases as the SPT increases. A negative value of pheomelanin indicates a negative contribution to the overall absorption process.
V. CONCLUSION
In this paper, we have developed a skin pigmentation model for melanin pigment (eumelanin/pheomelanin) analysis. The proposed model is based on a modified Beer-Lambert law of skin reflectance. A clinical study involving 118 participants with three different skin phototypes (SPTs) was conducted. In the study, it was found that the pheomelanin concentration is -4.6E-5±5.4E-6 moles/l for SPT III, -5.9E-5±6.4E-6 moles/l for SPT IV, and -8.2E-5±9.8E-6 moles/l for SPT V, and the eumelanin concentration is 9.7E-5±7.3E-6 moles/l for SPT III, 1.2E-4±1.03E-5 moles/l for SPT IV, and 1.6E-4±1.7E-5 moles/l for SPT V. As reported by other researchers, higher pheomelanin concentrations and lower eumelanin concentrations are expected at lower SPT. The melanin pigmentation analysis from this study is in concurrence with previous findings on skin melanin pigments for different skin phototypes. Thus, the proposed skin model can be used to infer melanin pigments, namely pheomelanin and eumelanin concentrations, from the spectral reflectance data of skin.
ACKNOWLEDGMENT The research work is a collaborative work between Universiti Teknologi PETRONAS and Hospital Kuala Lumpur. The authors would like to thank the assistance of Romuald Jolivot from University of Burgundy in the data collection.
REFERENCES
1. Paul Buxton (2003) ABC of Dermatology, BMJ Publishing
2. Tsumura N, Kawazoe D, Nakaguchi T et al. (2008) Regression-based model of skin diffuse reflectance for skin color analysis. Optical Review Vol. 15 No. 6:292-294
3. Diffey B (1983) A mathematical model for ultraviolet optics in skin. Physics in Medicine and Biology 28:647-657
4. Meglinsky I and Matcher S (2003) Computer simulation of the skin reflectance spectra. Computer Methods and Programs in Biomedicine:179-186
5. Anderson and Parish (1981) The optics of human skin. Journal of Investigative Dermatology 1981:13-19
6. Wan, Anderson and Parish J (1981) Analytical modeling for the optical properties of the skin with in vitro and in vivo applications. Photochemistry and Photobiology 1981:493-499
7. Cotton S and Claridge E (1996) Developing a predictive model of skin colouring. SPIE vol. 2708:814-825
8. Doi M and Tominaga S (2003) Spectral estimation of human skin color using the Kubelka-Munk theory. Proc. SPIE 5008:221. doi:10.1117/12.472026
9. Moncrieff M, Cotton S, Claridge E et al. (2002) Spectrophotometric Intracutaneous Analysis: a new technique for imaging pigmented skin lesions. British Journal of Dermatology 2002:448-457
10. Weather JW, Saffar MH, Leslie G et al. (1989) A portable scanning reflectance spectrophotometer using visible wavelengths for the rapid measurement of skin pigments. Physics in Medicine and Biology 1989
11. Dwyer T, Muller HK, Blizzard L et al. (1998) The use of spectrophotometry to estimate melanin density in Caucasians. Cancer Epidemiology, Biomarkers and Prevention 1998:203-206
12. Prahl S, Keijzer M, Jacques S et al. (1989) A Monte Carlo model of light propagation in tissue. SPIE Institute Series 5 1989:102-111
13. Wang L, Jacques S and Zheng L (1995) MCML – Monte Carlo modelling of light transport in multi-layered tissues. Computer Methods and Programs in Biomedicine 1995
14. Tsumura N, Haneishi H, Miyake Y (1999) Independent component analysis of skin colour image. J. Opt. Soc. Am. A 1999:2169-2176
15. Ahmad MH, Norashikin S, Suraiya HH and Nugroho H (2009) Independent component analysis for assessing therapeutic response in vitiligo skin disorder. Journal of Medical Engineering Technology 2009:101-109
16. Shimada M, Yamada Y, Itoh M et al. (2001) Melanin and blood concentration in human skin studied by multiple regression analysis: assessment by Monte Carlo simulation. Physics in Medicine and Biology 2001:2397-2406
17. Lee Yin Yin (2009) Measurement of Skin Photo Type, Advanced Master of Dermatology Thesis, Bangi: UKM, 2009
18. Vincensi MR, d'Ischia M, Napolitano A, Procaccini EM et al. (1998) Phaeomelanin versus eumelanin as a chemical indicator of ultraviolet sensitivity in fair-skinned subjects at high risk for melanoma: a pilot study. Melanoma Res 8:53-58
A Review of ECG Peaks Detection and Classification
T.I. Amani1, S.S.N. Alhady1, U.K. Ngah1, and A.R.W. Abdullah2
1 Imaging and Computational Intelligence Group (ICI), School of Electrical and Electronic Engineering, Universiti Sains Malaysia, 14300 Nibong Tebal, Seberang Prai Selatan, Pulau Pinang, Malaysia
2 Pediatric Department, Universiti Sains Malaysia, 16150 Kubang Kerian, Kelantan, Malaysia
Abstract— This paper describes several methods used in identifying the peaks of electrocardiogram (ECG) signals. Precise recognition of ECG peaks provides useful information for doctors to diagnose heart disorders or abnormalities, as well as for cardiac arrhythmia classification. Several methods have been applied in detecting real ECG peaks, including template matching, wavelet transform, fuzzy logic and neural networks. A review based on technical works, experimental testing and investigations by experts, researchers and professionals has been carried out to analyze the techniques in terms of accuracy and suitability for ECG analysis. In addition, this paper summarizes the details of technical works done by others using their respective methods. As a result, the neural network is proposed for future ECG implementation systems owing to its unique characteristics, even though some limitations of the network may also be inherent. Keywords— Fuzzy logic, wavelet transform, template matching and neural network.
I. INTRODUCTION Heart disease is a major killer causing mortality worldwide [1], [2]. This dire situation has drawn serious attention from scientists, technical experts and health care professionals. Much effort has been invested in implementing various technologies for heart disorder diagnosis to enable doctors to recognize early symptoms of heart problems for further assistance. An electrocardiogram (ECG) system provides signals containing useful information for doctors. Several cardiac arrhythmias can be easily identified when abnormalities of the ECG signals are observed. Generally, normal healthy ECG signals have P, Q, R, S and T waves with standard measurement values, and these can differ in terms of features or morphological attributes for abnormal ECG signals [3]. A review has been carried out summarizing the methods used in ECG analysis and peak detection. The methods involved are fuzzy logic, wavelet transform, template matching and neural networks.
II. ECG ANALYSIS TECHNIQUES A. Fuzzy Logic Fuzzy logic is a multi-valued logic, which means it is not limited to specific values or numbers. In contrast to Boolean logic, which has only the logic values 0 and 1, fuzzy logic also accommodates values between them. Using fuzzy methods, it is easy to check, modify, add or delete fuzzy variables whenever necessary to obtain better automated analysis. For this reason, it has been used extensively in cardiac arrhythmia classification, for example of left bundle branch block (LBBB), normal sinus rhythm (NSR), ventricular fibrillation (VF), and so on [4]. The fuzzy classifier implemented to discriminate these arrhythmias comprises two major functional blocks: the electrocardiogram (ECG) parametizer and the fuzzy classifier. The two blocks work together on their respective tasks. In the ECG parametizer, ECG features such as peak intervals, amplitudes and gradients are detected using the Daubechies wavelet technique. These features are then used to calculate non-linear parameters of the ECG signals, described as spectral entropy, Poincaré plot geometry, largest Lyapunov exponent and detrended fluctuation analysis. All of these non-linear parameters are fed as inputs to the fuzzy classifier for arrhythmia classification by applying the Mamdani fuzzy method.
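As a minimal illustration of the multi-valued idea (not code from [4]), a triangular membership function maps a crisp input to a degree of membership between 0 and 1; functions of this kind underlie Mamdani-style fuzzy classifiers. The 50-70-90 bpm triangle below is a hypothetical example:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a heart rate of 75 bpm is "normal" under a hypothetical
# triangle with feet at 50 and 90 bpm and peak at 70 bpm.
print(tri_membership(75, 50, 70, 90))
```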
Fig. 1 Structure of fuzzy ECG classifier [4] Figure 1 shows the algorithm of the proposed ECG fuzzy classifier, which can be summarized as follows:
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 398–402, 2011. www.springerlink.com
• The first step is initialization, whereby the ECG data are obtained from the database and feature extraction, including the pre-processing stage, is performed.
• Fuzzification – calculation of the fuzzy equivalence relation for input-output matching.
• Defuzzification – matching data from the pre-processing stage with the data from the training set.
• Classification – cardiac arrhythmias are classified according to their characteristic vectors.
Table 1 tabulates the ranges of the input parameters used in the fuzzy classification model.
Table 1 Input parameters range [4]
The cardiac arrhythmia classification results obtained are shown in Table 2; the accuracy achieved is 93.13% [4].
Table 2 Cardiac arrhythmia classification [4]
Another study revealed that fuzzy logic could be used to recognize the P and T waves of ECG signals using defined fuzzy criteria. By considering all the waves within the search interval, both of these waves can be correctly identified according to the fuzzy criteria: if there is no inter-wave segment between two candidate peaks, both peaks are selected as a real biphasic peak and the process of identifying real peaks is over; otherwise, if there are more than two candidate peaks, a fuzzy membership calculation is followed in order to find the P and T peaks accurately. There are several waves between one QRS complex and the subsequent QRS complex, and four possibilities may be encountered in this search interval:
i. If the half search interval contains more than two candidate peaks, select the two peaks having the largest average ∆YP, calculated using Eq. (1) [5]: Average ∆YP = ½ [∆YP(i) + ∆YP(i−1)]   (1)
ii. If the half search interval contains only two candidate peaks, the fuzzy criteria are applied; if it contains only one candidate peak, that peak is selected as the real peak.
iii. If the half search interval does not contain any candidate peaks, all the non-candidate peaks are treated as candidate peaks, and steps (i), (ii) and (iii) are executed to identify the real peak. If there are still no candidate peaks, the real peak is considered absent.
The real peak identified in the first half search interval is selected as the P peak, while that identified in the second half search interval is considered the T peak, as it follows in alphabetical order.
Fig. 2 ECG intervals division [5]
Once the P and T waves are identified, the remaining waveforms are considered noisy peaks and are filtered out. Finally, the attributes of the P and T waves are calculated for diagnosis purposes [5]. This method might be less accurate, since it is largely assumption based.
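Step (i) above, keeping the two candidate peaks with the largest average ∆YP, can be sketched as follows. This is one plausible reading of Eq. (1), pairing each candidate's ∆YP with its predecessor's; the ∆YP values are illustrative:

```python
def select_two_peaks(delta_yp):
    """delta_yp: amplitude differences of candidate peaks, in time order.
    Return the indices of the two candidates with the largest average
    avg_i = (delta_yp[i] + delta_yp[i-1]) / 2, per Eq. (1) (sketch)."""
    if len(delta_yp) <= 2:
        return list(range(len(delta_yp)))
    # Average each candidate (from the second onward) with its predecessor.
    avgs = [0.5 * (delta_yp[i] + delta_yp[i - 1]) for i in range(1, len(delta_yp))]
    ranked = sorted(range(1, len(delta_yp)), key=lambda i: avgs[i - 1], reverse=True)
    return sorted(ranked[:2])       # keep the top two, back in time order

print(select_two_peaks([0.1, 0.8, 0.7, 0.2]))
```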
B. Wavelet Transform Another method used for ECG peak detection is the wavelet transform. It is normally used for analyzing heart rate fluctuations owing to its ability to process data at different scales and resolutions. Besides that, wavelets are normally used to represent data and other functions whenever the equations satisfy certain mathematical expressions. Basically, a wavelet equation depends on two parameters: a scale a and a position b. These parameters vary continuously over the real numbers. If the scale takes dyadic values a = 2^j (j ∈ Z, where Z is the set of integers), then the wavelet is called a dyadic wavelet and its corresponding transform is called the Discrete Wavelet Transform (DWT). The related wavelet equation is given in [6] as Eq. (1).
The Fourier transform of ψ2j(t) must satisfy Eq. (2) in order to cover the whole frequency domain. The R wave peak is then determined by analyzing the slopes of the given ECG signals. In most cases, ECG signals are shifted because the associated iso-electric baseline shifts under various abnormalities. To resolve this problem, a threshold level is introduced: peaks exceeding the threshold level are counted as R waves, and the pre- and post-gradients of each wave are then computed. The ECG signals obtained are characterized according to their respective features, such as P wave onsets, QRS complex, T wave, QRS duration and so on. After characterization, the noise components in the ECG signals are identified and removed by filtering with the DWT method. The signals are also de-trended to avoid baseline shifting caused by the low-frequency component, by decomposing them into 5 levels as shown in Figure 3.
Fig. 3 Decomposed 5-level signal [6] Reconstruction of the approximation (A5) and detail (D5) signals at level 5 is then performed, and A5 and D5 are summed. The sum of the two represents the low-frequency baseline shifting, and this low-frequency component is removed to obtain the real ECG without baseline shift. The remaining high-frequency noise also needs to be removed, so the signal is de-noised. Discarding high-frequency noise may also lose some information from the original signals, including the sharpest features. This problem is addressed by applying thresholding, which discards only the signal parts that exceed the defined limit. The final signal (de-trended and de-noised) now contains only high-amplitude spikes, which are denoted as R waves. However, there are also some noisy spikes after the R wave with amplitudes approximately equal to the R wave. This is solved by setting a threshold voltage level (TP) based on the average amplitude of the R waves; any spikes appearing after the R waves are assumed to be noise and are ignored. The R wave is the sharpest peak in a normal ECG, with positive and negative slopes. The positive slope refers to the pre-gradient (before the R-wave peak), while the negative slope refers to the post-gradient (after the R wave). To detect the R peak, both slopes are calculated, and if the values exceed the TP limit, the peak is taken as the R wave [6].
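The de-trend-then-threshold pipeline above can be sketched with a hand-rolled Haar DWT. This is a simplified stand-in for the A5 + D5 reconstruction described in the text (it approximates the baseline with the level-5 approximation alone, and assumes the signal length is divisible by 2**levels); the drift-plus-spikes signal is synthetic:

```python
import numpy as np

def haar_level(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]                 # drop a trailing odd sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def remove_baseline(signal, levels=5):
    """Estimate the low-frequency baseline from the level-`levels` approximation
    and subtract it. Assumes len(signal) is divisible by 2**levels."""
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, _detail = haar_level(approx)
    # Rescale coarse coefficients to per-block means and upsample back.
    baseline = np.repeat(approx / np.sqrt(2.0) ** levels, 2 ** levels)[: len(signal)]
    return signal - baseline

def detect_r_peaks(signal, frac=0.6):
    """Mark local maxima above frac * max amplitude as R peaks (threshold TP)."""
    threshold = frac * signal.max()
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

# Demo: a slow linear drift with two sharp spikes; after baseline removal,
# only the spikes exceed the threshold.
sig = np.linspace(0.0, 5.0, 128)
sig[30] += 10.0
sig[90] += 10.0
print(detect_r_peaks(remove_baseline(sig)))
```

Without the baseline removal, the rising drift would push late samples toward the threshold; detrending first is what makes a single global threshold workable.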
C. Template Matching

Template matching is also frequently applied in ECG analysis. Template matching is simply defined as taking a portion of an image that is to be matched against a template (source) image [7]. Past attempts have been made to detect P and R waves in real time in exercise electrocardiograms using a cross-correlation and template matching algorithm. The cross-correlation technique generates a coefficient that quantifies the similarity between an arrhythmia waveform template and the examined ECG signal; a high cross-correlation coefficient indicates strong similarity in morphological attributes between the template signal and the examined signal [8], [9]. The P wave in ECG signals has a small amplitude and is hard to detect, so template matching is used to improve the accuracy of P wave detection; the P wave detection procedure mirrors that for the R wave. The given ECG signals are divided into segments corresponding to the respective templates. Referring to Fig. 4, RE is the R wave point estimate, RI is the interval from the previous R peak to the next R peak, RT-1 is the previous R wave time position and RT is the latest R wave time position; RA and RB are used for range estimation. The same abbreviations apply to the P wave. The point estimate for the R wave is RE = RT-1 + RI, and the range estimate is set within the limits RA ≤ RE ≤ RB, defined as RE − (RI × 0.15) and RE + (RI × 0.15). The range estimate is used to select the window that maximizes the match between template and ECG signal. When ECG signals are fed into the system, it traces the peaks that exceed the threshold limit within the estimated range and marks them as R peak candidates. Correlation is then performed between all R peak candidates and the R template signal to accurately identify the next R wave time position (RT).
A candidate whose correlation exceeds 0.85 is defined as the next R wave (RT). The R interval (RI) and the R wave template are updated during the identification process each time a new ECG signal arrives, ensuring that R waves can be distinguished from noise and artifacts.
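The range-estimation and correlation-gating steps of [8] can be sketched as follows. The sample indices and waveforms are invented for illustration; only the RE/RA/RB formulas and the 0.85 acceptance threshold come from the text.

```python
import numpy as np

def next_r_range(rt_prev, ri):
    """Point and range estimation for the next R wave (per [8]):
    RE = RT-1 + RI, searched within RA = RE - 0.15*RI .. RB = RE + 0.15*RI."""
    re = rt_prev + ri
    return re, re - 0.15 * ri, re + 0.15 * ri

def corr_coeff(x, y):
    """Normalized (zero-mean) correlation between a candidate window and the template."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

re, ra, rb = next_r_range(rt_prev=1000, ri=800)     # illustrative sample indices
template = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
candidate = np.array([0.1, 0.6, 1.1, 0.6, 0.1])      # same shape, shifted baseline
is_next_r = corr_coeff(candidate, template) > 0.85   # acceptance rule from [8]
```

Because the correlation is computed on mean-removed windows, a candidate that matches the template shape but rides on a different baseline still correlates strongly, which is exactly why the method tolerates residual baseline drift.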
IFMBE Proceedings Vol. 35
Fig. 4 Template matching diagram [8]
A Review of ECG Peaks Detection and Classification
Another study shows how template matching is used to classify ECG signals as Normal Sinus Rhythm (NSR), Ventricular Fibrillation (VF) or Ventricular Tachycardia (VT). First, templates of those signals are prepared. Templates for NSR and VT are obtained from single-beat representations of the signals; the distinguishing characteristics are easily recognized, since NSR clearly has complete PQRST peaks while the VT signal has a wide QRS complex. The VF signal exhibits noise-like behavior, so a better approach is to capture these noise-like characteristics in the VF template by using a multi-beat waveform signature. The examined ECG signals are then cross-correlated with the templates to find the correlation coefficient, calculated as follows [9]
r(l) = Σ_i (x_i − m_x)(y_{i−l} − m_y) / sqrt( Σ_i (x_i − m_x)^2 · Σ_i (y_{i−l} − m_y)^2 )    (3)

where x denotes the windowed signal, y the template, m the mean of each series and l the lag of the correlated series. If the correlation coefficient is close to 1, there is high similarity in morphological attributes between the template and the examined ECG signal, and the unknown signal is classified according to the type of that template; a coefficient near 0 means the opposite [9].

D. Neural Network

Another effort to classify ECG images uses the neural network approach. Features including the mean, minimum and maximum value, range, variance, standard deviation and mean absolute deviation were extracted using the wavelet transform and fed to an Artificial Neural Network (ANN) classifier. In this experiment, the feature vector of each original image, with its horizontal, vertical and diagonal detail data, approximation and image reconstruction, consists of 48 values, represented as a (48×1) matrix. Since 63 ECG images were used, this gives (48×63) feature data in total, which were divided into 9 batches of (48×7) each for easier processing. An empirical evaluation (EV) factor was then used to find the best ANN architecture, applying the formula given in [10].

Based on the computation, it was observed that the best ANN structure is a feed-forward backpropagation network of 3 layers, consisting of one input layer, one hidden layer and one output layer. The input layer uses 48 neurons, the hidden layer 25 neurons and the output layer 7 neurons, with a learning rate of 0.01, an error goal of 0.0001 and an optimum value of the momentum term. The simulation output of the ANN training and the relationship between the number of neurons and the evaluation factor are presented as follows
Fig. 5 Simulation of ANN training [10]
The proposed ANN was trained on 63 ECG images representing different types of heart disease, followed by another 60 ECG images to check the performance accuracy and classification of the chosen algorithm; the accuracy is computed by the expression given in [10]. The simulation showed a performance accuracy of 92% [10]. Neural networks exhibit useful characteristics: they are distributed processing systems, so degradation in one part of the network does not affect the functions of the rest. Furthermore, recent MATLAB versions provide multiple neural network functions, which facilitates faster computerized diagnosis.
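A minimal sketch of the reported 48-25-7 feed-forward network is shown below. It uses plain NumPy rather than the MATLAB toolbox the study refers to; the sigmoid activation and random weight initialization are assumptions, and only the layer sizes come from [10].

```python
import numpy as np

rng = np.random.default_rng(0)

# Architecture reported in [10]: 48 input, 25 hidden, 7 output neurons.
W1 = rng.normal(scale=0.1, size=(25, 48))
b1 = np.zeros(25)
W2 = rng.normal(scale=0.1, size=(7, 25))
b2 = np.zeros(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass: a 48-element feature vector -> 7 class scores."""
    h = sigmoid(W1 @ x + b1)       # hidden layer, 25 neurons
    return sigmoid(W2 @ h + b2)    # output layer, 7 neurons

x = rng.normal(size=48)            # one (48x1) feature vector, random stand-in
scores = forward(x)
predicted_class = int(np.argmax(scores))
```

Training (backpropagation with the stated learning rate 0.01 and error goal 0.0001) would iteratively adjust W1, b1, W2 and b2; the sketch shows only the inference structure.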
III. CONCLUSIONS

In conclusion, each of the reviewed methods processes the signal information in its own way. On balance, the neural network is recommended for further implementation of an ECG analysis system, owing to its advantages, even though the method also has some inherent limitations.
T.I. Amani et al.
REFERENCES

[1] Atul Sethi, Siddharth Arora, Abhishek Ballaney, Frequency domain analysis of ECG signals using auto-associative neural networks, International Conference on Biomedical and Pharmaceutical Engineering 2006 (ICBPE 2006), 531-536
[2] P. Sasikala, R.S.D. Wahida Banu, Extraction of P wave and T wave in electrocardiogram using wavelet transform, International Journal of Computer Science and Information Technologies, Vol. 2(1), 2011, 489-493
[3] Scheidt S., Basic Electrocardiography: Abnormalities of electrocardiographic patterns, Vol. 6/36, 32 pp., Ciba Pharmaceutical Company, Summit, N.J.
[4] B. Anuradha, V.C. Veera Reddy (2005-2008), Cardiac arrhythmia classification using fuzzy classifiers, 353-359
[5] S.S. Mehta, S.C. Saxena and H.K. Verma, Recognition of P and T waves in electrocardiograms using fuzzy theory, Proceedings RC IEEE-EMBS & 14th BMESI, 1995, 2.54-2.55
[6] M.A. Khayer and M.A. Haque, ECG peak detection using wavelet transform, 3rd International Conference on Electrical & Computer Engineering (ICECE 2004), 28-30 December 2004, Dhaka, Bangladesh, ISBN 984-32-1804-4, 518-521
[7] Template matching, http://en.wikipedia.org/wiki/Template_matching
[8] Hiroki Hasegawa, Takuya Watanabe and Takashi Uozumi, Real-time P and R wave detection in exercise electrocardiogram
[9] Foo Joo Chin, Qiang Fang, Tao Zhang, Irena Cosic, A fast critical arrhythmic ECG waveform identification method using cross-correlation and multiple template matching, 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, August 31-September 4, 2010, 1922-1925
[10] Mazhar B. Tayel, Mohamed E. El-Bouridy, ECG images classification using features extraction based on wavelet transformation and neural network, AIML 06 International Conference, 13-15 June 2006, Sharm El Sheikh, Egypt, 105-107
An Image Approach Model of RBC Flow in Microcirculation

W.C. Lin 1, H.H. Liu 1, R.S. Liu 2, and K.P. Lin 1

1 Department of Electrical Engineering, Chung Yuan Christian University, Taiwan
2 Department of Nuclear Medicine, Taipei Veterans General Hospital and National Yang-Ming University Medical School, Taiwan
Abstract— Information on blood flow in the microcirculation plays an important role in health assessment. Recently, some numerical and experimental studies of blood flow in large arteries have used complex models, and the simulations can be adapted to microcirculation flows. However, the complex formulae for blood flow in microvascular networks are difficult to employ in practical applications. Therefore, a building block approach is proposed to simplify the blood flow model. Frame-to-frame visual inspection was used as a strategy to determine the characteristics of curvature and velocity along a segment of interest in a unitary microvessel. The result is a simple method for constructing a model that helps in understanding the parameters and the relation between curvature and velocity along the RBC route.

Keywords— microcirculation, blood flow, visual inspection, building block.
I. INTRODUCTION

The relationship between blood flow in the microcirculation and the clinical physiology of the blood circulation has been the subject of wide-reaching and in-depth study. Various disease risk factors can be related to corresponding changes in the microcirculation; for instance, Raynaud's syndrome [1,2], hypertension [3,4] and diabetes [5,6] are usually accompanied by impaired microcirculation. Information on blood flow in the microcirculation is therefore essential for health assessment and angiopathy prevention, and dynamic observation of microvascular mechanisms provides a deeper understanding of diseases and their relationship to the physiological function of the microcirculation. Quantifying the red blood cell (RBC) velocity in micro-vessels is one means of such observation. However, measuring RBC flow in micro-vessels remains a challenge with current techniques. Flow in large vessels can be measured using an electromagnetic blood flowmeter or an ultrasonic Doppler flowmeter, and much useful information has been obtained from changes in blood flow and the viscous properties of blood during physiological events. A major limitation of such measurements, however, is that they cannot relate microvascular perfusion observed within individual micro-vessels, along the topographical succession of arterioles, capillaries and venules, to a specific tissue.

Recently, some numerical and experimental studies of blood flow in large arteries have attempted to accurately replicate in vivo arterial geometries, while others have utilized complex models; for example, an IB-LBM (immersed boundary lattice Boltzmann) method has been introduced for simulating RBC deformation and motion in shear and channel flow [12]. Although such simulations can be adapted to microcirculation flows, the complex formulae for blood flow in microvascular networks are difficult to employ in practical applications. The aim of this study was to develop a building block approach in order to simplify the blood flow model. It may become a useful tool for modeling RBC flow in the microcirculation, and the framework could support numerical simulation of blood flow by describing it through factual data in microvascular research.

II. METHODS
A. Microscopy and Image Acquisition

Different blood flow measurement techniques have been introduced in other studies [9-11]. The electromagnetic blood flowmeter and the ultrasonic Doppler flowmeter are useful for larger blood vessels but are not suitable for the microcirculation. In general, intravital video microscopy, which uses a fluorescence microscope in a living animal for real-time observation, monitoring and recording, can be applied in the quantitative analysis of specific variables and events. However, fluorescence microscopy can have detrimental effects on living tissues, such as the following: (1) the fluorophore itself may interfere with a signaling pathway or alter cellular function in some way; (2) the excitation light may damage the living tissue, which may affect the behavior of the sample or even cause its death; (3) phototoxic effects may arise from the combination of fluorophore and excitation light, which is clearly unacceptable in live-cell imaging. Our microscopic system, which requires no fluorescent labeling, provides precise and continuous quantitative data on the blood flow rate in individual small vessels. The microcirculation is imaged by penetrating white light from the side. The green channel is absorbed by the hemoglobin of erythrocytes
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 403–406, 2011. www.springerlink.com
W.C. Lin et al.
which can be observed as dark moving cells. A magnifying lens projects the image onto a camera. The imaging light collected by the central part of the light guide is optically isolated from the illuminating white light of the light-emitting diodes (LEDs). These LEDs are arranged in a ring at the tip of the light guide and directly illuminate the area of interest (see Fig. 1).

B. Building Blocks of RBC Flow by Visual Inspection

Cutaneous red blood cell velocity in vivo can be measured using microscopy. However, unlike simulated blood flow images, there is no standard for determining the accuracy of techniques that compute blood flow velocities. The frame-to-frame visual inspection method determines the blood velocity by directly observing the movement of cells between two consecutive frames. Many recent studies have used the visual inspection method as a reference for blood flow, and a commercial software package named "Cap-Image" (http://www.drzeintl.de/CAP_english.htm) applies the same principle in a computer-assisted method to calculate RBC velocity. Therefore, frame-to-frame visual inspection was taken as the gold standard for assisting software development for RBC velocity estimation. For our study target, the RBCs of the frog are nucleated (having a nucleus) and considerably bigger than human RBCs; as a result, their appearance is well suited to observing RBC motion in microvessels.
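The frame-to-frame principle reduces to displacement over time. A minimal sketch, assuming 30 frames per second as in the study and invented centre coordinates:

```python
import numpy as np

FRAME_INTERVAL = 1.0 / 30.0   # seconds between consecutive frames (30 fps video)

def rbc_velocity(centers_um):
    """Frame-to-frame velocity estimate: displacement of the RBC centre
    between consecutive frames divided by the frame interval.
    `centers_um` is an (N, 2) sequence of (x, y) positions in micrometres."""
    centers = np.asarray(centers_um, dtype=float)
    disp = np.linalg.norm(np.diff(centers, axis=0), axis=1)  # um per frame
    return disp / FRAME_INTERVAL                             # um/s

# Illustrative centre positions over three consecutive frames.
centers = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0)]
v = rbc_velocity(centers)   # two velocity samples, in um/s
```

A 10 µm displacement between frames thus corresponds to 300 µm/s, the kind of magnitude the velocity plots later in the paper report in units of 10^-6 m/s.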
Curvature theory (local expressions) was used to describe the route of RBC motion. The curvature is determined as

κ = y'' / (1 + y'^2)^(3/2)    (1)
where y = f(x) denotes the equation of the curve and x is the position of the RBC centre on the coordinate axis. In this study, 500 individual pictures of frog RBC motion were retrieved from the video; the interval between two neighboring pictures was one-thirtieth of a second. Fig. 2(a) shows the direction of RBC motion in the microvessels, and Fig. 2(b) displays the major microvessel selected for the statistics by random sampling. In the sharp curve of this microvessel, groups of three consecutive pictures of an RBC in flow can be found, as Fig. 2(c) shows; for accurate statistics, 88 such specimens were collected for further analysis. For each group of three consecutive pictures, the distinguishable RBC centre was located by eye. The three RBC-centre points can then be fitted with a nonlinear curve f(x) that approximates the path of the RBC motion. After the RBC centres were confirmed, 110 RBC samples were chosen to compute the diameter (18.64 μm) and its standard deviation (3.68).
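The three-point procedure above can be sketched as fitting a parabola through three consecutive centre positions and evaluating equation (1) at the middle point. A minimal NumPy illustration with invented coordinates, not the authors' code:

```python
import numpy as np

def curvature_from_three_points(p0, p1, p2):
    """Fit y = f(x) = a*x**2 + b*x + c through three RBC-centre points and
    evaluate the curvature kappa = y'' / (1 + y'**2)**1.5 (equation (1))
    at the middle point."""
    xs = np.array([p0[0], p1[0], p2[0]], dtype=float)
    ys = np.array([p0[1], p1[1], p2[1]], dtype=float)
    a, b, c = np.polyfit(xs, ys, 2)        # parabola through the three centres
    y1 = 2 * a * xs[1] + b                 # first derivative at the middle point
    y2 = 2 * a                             # second derivative (constant for a parabola)
    return y2 / (1 + y1**2) ** 1.5

# Three centre positions along a gentle curve (illustrative coordinates).
kappa = curvature_from_three_points((0.0, 0.0), (1.0, 0.1), (2.0, 0.4))
```

Applying this to each group of three frames yields one curvature sample per specimen, which is how the histogram in Fig. 3 can be built.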
III. EXPERIMENTAL RESULTS
Fig. 1 The schematic representation of the microscopic system. The microcirculation is directly penetrated and illuminated from the side by white lighting source. The green channel is absorbed by hemoglobin of erythrocytes which are observed as dark moving cells
Substituting the fitted curve into equation (1), the curvature at the second RBC centre (C1) can be calculated. Figure 3 depicts the total number of occurrences in each curvature range at the second RBC. As the RBCs transit the curve of the microvessel, the curvatures show a Gaussian-like distribution concentrated between 1.2×10^-8 and 2.9×10^-8. Note that the curvature range between 1×10^-9 and 3.5×10^-8 is divided into 6 intervals, so that, for example, the curvature value 1.5×10^-8 stands for the range between 1.2×10^-8 and 1.7×10^-8; the remaining values are defined analogously. The average velocity of the second RBC depends on the curvature, as shown in Figure 4: the velocity decreases with increasing curvature, revealing that RBC motion along larger curvatures has a slower average velocity. Finally, the nonlinear equation y = f(x) = ax^2 + bx + c was used to fit Fig. 4, and the constants a, b, c
can be determined as 3.15×10^4, −3.34×10^3 and 4.46×10^2, respectively. This finding offers a simple way to construct a model that captures the relation between curvature and velocity along the RBC route.
Fig. 3 The total number of occurrences in each curvature range at the second RBC (counts vs. curvature in 10^-8 m^-1)

Fig. 4 The average velocity (10^-6 m/s) of the second RBC as a function of the average curvature (10^-8 m^-1) of each range
IV. CONCLUSIONS
Fig. 2 (a) RBC flow directions in the microvessel network of a frog web; (b) the microvessel section selected for the statistics by random sampling; (c) the symbols C0, C1 and C2 indicate the same RBC centre at different times in successive frames.
The building block approach is a fast and effective way to model RBC flow in the microcirculation. Using the obtained parameters, namely curvature and velocity, blood flow can be rebuilt with realistic properties. In a future study, the tumor capillary bed will be modeled as a capillary tree of bifurcating segments whose geometrical construction involves deterministic and random parameters. Furthermore, the simulated blood flow data could serve as a reference in angiopathy physiology research.
ACKNOWLEDGMENT This research was granted and supported by National Research Program for the Department of Industrial Technology (DoIT) of the Ministry of Economic Affairs (MOEA), R.O.C
(99-EC-17-A-19-S1-163), Genomic Medicine, Taiwan (Molecular and Genetic Imaging Core, NSC99-3112-B-010-015) and NSC99-2221-E-033-012-MY3. We thank them for their generous financial assistance.
REFERENCES

[1] Wollersheim H., Reyenga J. and Thien T., Laser Doppler velocimetry of fingertips during heat provocation in normals and in patients with Raynaud's phenomenon, Scand. J. Clin. Lab. Invest. 48, 91-95 (1988).
[2] Bertuglia S., Leger P., Colantuoni A., Coppini G., Bendayan P. and Boccalon H., Different flowmotion patterns in healthy controls and patients with Raynaud's phenomenon, Technol. Health Care 7, 113-123 (1999).
[3] Bonacci E., Santacroce N., D'Amico N. and Mattace R., Nail-fold capillaroscopy in the study of microcirculation in elderly hypertensive patients, Arch. Gerontol. Geriatr. Suppl. 5, 79-83 (1996).
[4] Cesarone M.R., Incandela L., Ledda A., De Sanctis M.T., Steigerwalt R., Pellegrini L., Bucci M., Belcaro G. and Ciccarelli R., Pressure and microcirculatory effects of treatment with lercanidipine in hypertensive patients and in vascular patients with hypertension, Angiology 51, 53-63 (2000).
[5] Chung-Hsing Chang, Rong-Kung Tsai, Wen-Chuan Wu, Song-Ling Kuo and Hsin-Su Yu, Use of dynamic capillaroscopy for studying cutaneous microcirculation in patients with diabetes mellitus, Microvascular Research 53, 121-127 (1997).
[6] Tibiriçá E., Rodrigues E., Cobas R.A. and Gomes M.B., Endothelial function in patients with type 1 diabetes evaluated by skin capillary recruitment, Microvascular Research 73, 107-112 (2007).
[7] Jain R.K. and Ward-Hartley K., Tumor blood flow: characterization, modifications, and role in hyperthermia, IEEE Transactions on Sonics and Ultrasonics 31(5), 504-525 (1984).
[8] Hori K., Suzuki M., Tanda S. and Saito S., Characterization of heterogeneous distribution of tumor blood flow in the rat, Cancer Science 82(1), 109-117 (1991).
[9] Iga A.M., Sarkar S., Sales K.M., Winslet M.C. and Seifalian A.M., Quantitating therapeutic disruption of tumor blood flow with intravital video microscopy, Cancer Research 66(24), December 15 (2006).
[10] Sugii Y., Nishio S. and Okamoto K., In vivo PIV measurement of red blood cell velocity field in microvessels considering mesentery motion, Physiological Measurement 23(2), 403-416 (2002).
[11] Bollinger A., Butti P., Barras J.P., Trachsler H. and Siegenthaler W., Red blood cell velocity in nailfold capillaries of man measured by a television microscopy technique, Microvasc. Res. 7, 62-72 (1974).
[12] Moore J.A. et al., Computational blood flow modeling based on in vivo measurements, Annals of Biomedical Engineering 27(5), 627-640 (1999).
Author: Kang-Ping Lin
Institute: Department of Electrical Engineering, Chung Yuan Christian University
Street: 200, Chung Pei Rd.
City: Chung Li, 32023
Country: Taiwan (ROC)
Email: [email protected]
An Image-Based Anatomical Network Model and Modelling of Circulation of Mouse Retinal Vasculature

P. Ganesan 1, S. He 2, H. Xu 3, and Y.H. Yau 1

1 Faculty of Engineering, University of Malaya, Kuala Lumpur, 50603, Malaysia
2 School of Engineering, University of Aberdeen, Aberdeen, AB24 1TR, UK
3 School of Medicine, Dentistry and Biomedical Science, Queen's University Belfast, Belfast, BT12 6BA, UK
Abstract— The paper presents an image-based network model of retinal vasculature taking account of the 3D vascular distribution of the retina. Mouse retinas were prepared using the flat-mount technique and vascular images were obtained using confocal microscopy. The vascular morphometric information obtained from the confocal images was used for the model development. The network model directly represents the vascular geometry of all the large vessels of the arteriolar and venular trees and models the capillaries using uniformly distributed meshes. The vasculatures in the different layers of the retina, namely the superficial, intermediate and deep layers, were modelled separately in the network and linked through connecting vessels. The branching data of the vasculature were recorded using a network connectivity matrix (graph theory). Such an approach can take into account the detailed vasculature of the individual retina concerned. Using such a network model, circulation analyses can be carried out to predict the spatial distribution of the pressure, flow, hematocrit, apparent viscosity and wall shear stress in the entire retinal vasculature.

Keywords— Murine retina; network topology; morphological data; network model; spatial pressure; flowrate; velocity; wall shear stress.
I. INTRODUCTION

Circulation analyses using detailed image-based anatomical vascular models of physiological systems have proven useful in enhancing biologists' understanding of the hemodynamics of the system, and thereby in improving the treatment of circulation-related diseases. This is important because vascular circulation-related diseases such as hypertension, atherosclerosis and diabetes are major health problems in modern society. In this regard, the murine retina, which has a vascular distribution similar to that of the human retina (holangiotic type), is often used as a substitute for the human retina in studies of retinal vasculature, hemodynamics and blood flow regulation under both physiological and pathological conditions. Although the murine retinal vasculature has been examined for many years, the understanding of the hemodynamics of the retinal network is still incomplete. Circulation analysis using a detailed image-based
anatomical vascular model of retinal vasculature can potentially produce important information to improve our understanding. A relatively good understanding of retinal anatomy and the vascular network, including that of the human retina, has been developed through extensive studies using staining and perfusion techniques to reveal the vasculature [1]. Retinal vasculature can be described using a three-layer distribution model, with the artery, arteriolar branches and the veins in the top (superficial) layer and the capillary networks in the middle (intermediate) and bottom (deep) layers. The venular branches sit in the deep layer and run transversely to connect to the veins in the superficial layer. In addition, retinal vasculature and circulation, which can be visualized directly in vivo, have been observed using various fundus imaging techniques; for example, directional laser Doppler velocimetry (LDV) has been used for velocity and flowrate measurement, and the retinal vessel analyser (RVA) for online retinal vascular diameter measurement [2]. However, very little work has been done on circulation modelling of the retinal network based on a detailed description of the topology of the retinal vasculature [3], despite the great potential such work has for enhancing our understanding of the circulation in the retina. It is known from previous studies that circulation in the microvasculature can be modeled using Poiseuille's equation, assuming the blood to be a Newtonian fluid with constant viscosity and the blood vessels to be rigid. Such assumptions are a good approximation provided the non-dimensional Womersley and Reynolds numbers are significantly less than unity. However, blood flow, especially in microvessels of diameter <100 µm, can be significantly affected by the non-Newtonian rheological properties of blood, primarily because the length scales of the vessel diameter and of the blood cells are comparable.
Red blood cells (RBCs), which occupy a larger volume fraction than other blood cells, play a major role in determining the viscosity of the blood. The blood hematocrit (HD), which influences blood viscosity, is defined as the ratio of RBC volume to total blood volume. There are four major phenomena in the microvasculature which are known to have a significant effect on blood viscosity in
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 407–410, 2011. www.springerlink.com
P. Ganesan et al.
the microvasculature, namely the Fahraeus effect, the Fahraeus-Lindqvist effect, the plasma skimming effect (or phase separation effect) and the in vivo viscosity effect. Pries et al. [4,5] are acknowledged for their major contributions in characterizing these phenomena with mathematical models that take into account all the important blood flow parameters. These mathematical models can be incorporated into the network model developed here to predict the distribution of HD and apparent viscosity in the retina. The primary objective of this paper is to present an image-based network model of retinal vasculature, inclusive of the capillary vasculature, which can represent an individual retina or a case study of retinal vasculature. The secondary objective is to demonstrate the use of such a network model to carry out numerical simulations of the circulation, which can provide detailed information on the spatial distribution of pressure, flow, hematocrit, apparent viscosity and wall shear stress in the entire retinal vasculature.
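As an illustration of how such empirical relations enter a network model, the sketch below implements the diameter- and hematocrit-dependent relative apparent viscosity fit commonly attributed to Pries et al.; the constants are quoted from the widely cited in vitro form and should be verified against [4,5] before any real use.

```python
import math

def relative_viscosity(d_um, hd):
    """In vitro relative apparent blood viscosity as a function of vessel
    diameter D (um) and discharge hematocrit HD, capturing the
    Fahraeus-Lindqvist effect (empirical fit after Pries et al.;
    constants quoted from memory, to be checked against [4,5])."""
    # Relative viscosity at the reference hematocrit HD = 0.45:
    eta45 = 220.0 * math.exp(-1.3 * d_um) + 3.2 - 2.44 * math.exp(-0.06 * d_um**0.645)
    # Shape exponent describing the hematocrit dependence:
    s = 1.0 / (1.0 + 1e-11 * d_um**12)
    c = (0.8 + math.exp(-0.075 * d_um)) * (-1.0 + s) + s
    return 1.0 + (eta45 - 1.0) * ((1.0 - hd)**c - 1.0) / ((1.0 - 0.45)**c - 1.0)

# At HD = 0.45 the fit passes through a minimum near capillary diameters
# and rises towards the bulk value in larger vessels.
eta_small = relative_viscosity(5.0, 0.45)
eta_large = relative_viscosity(100.0, 0.45)
```

In a network solver, this function would supply the segment viscosity µ_ij from each segment's diameter and predicted hematocrit, closing the loop between the Poiseuille flow model and the rheological effects listed above.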
II. METHODOLOGY

A. Animal Preparation and Confocal Microscopy

Female C57BL/6 mice (8 to 12 weeks old, weight ~20 g) from the Medical Research Facility of the University of Aberdeen were used in this study. The animals were cared for in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and under the regulations of the UK Animal License Act 1986. Confocal images of retinal flatmounts were used to quantify the retinal microvasculature. Retinal vessels were revealed by either Evans blue injection or vascular cell adhesion molecule-1 (VCAM-1) staining. Briefly, for Evans blue injection, 100 µL of 2% (wt/vol) Evans blue dye (Sigma) was injected into a normal C57BL/6 mouse through its tail vein. The animal was killed by inhalation of CO2 10 minutes later. The eyes were removed and immediately immersed in 2% (wt/vol) paraformaldehyde (Agar Scientific Ltd., Cambridge, UK) for an hour; flatmount retinas were then prepared for confocal microscopy (LSM510 META, Carl Zeiss). For VCAM-1 staining, retinal tissues were fixed and permeabilized and then stained with anti-mouse VCAM-1 monoclonal antibody (BD Biosciences, Oxford, UK). Flatmount retinas were then examined by confocal microscopy (LSM510 META).

B. Images and Measurement

Confocal images with a matrix size of 512 x 512 pixels were obtained with either a 10x or a 20x objective lens. To image retinal vessels at different depths of the retina, Z-stack
images were taken. The 'Zeiss LSM Image Browser' software was used to analyse the images of the retinal vasculature at each layer (depth) and to construct 3D images. Vessel diameters, lengths, geometric distribution, capillary density, etc., were measured from the confocal images using the software.

C. Morphology and Topology of the Retinal Vasculature of a Mouse Retina

Fig. 1 shows topological images of a single mouse retinal vasculature, which have been used as the basis for developing a detailed network representing the mouse retina. The retina contains six arteries and six veins, labelled A1 to A6 and V1 to V6, respectively. The mainstream vessels which run from the optic disc to the periphery are referred to as arteries and veins, and vessels that branch out from the mainstream vessels are referred to as arterioles and venules, respectively, in this study. Figs. 1a and 1b are images of the superficial and deep layers of the network topology of the mouse retina. The retinal arteries and veins are sequentially distributed around the optic disc (in the centre) and are distinguished by branching pattern and vessel size. Basically, arteries give rise to side-arm branches, which then progressively divide into dichotomous arteriolar branches. As with arteries, side-arm branches also arise from veins and give rise to venule branches.

D. The Complete Retinal Network Model

A network model of the complete retinal vasculature, imitating the vascular distribution in the mouse retina, has been developed. The model is based on a direct representation of the large vessels of the arterial and venous networks in the retina and a model of the capillary networks. That is, every one of the large vessels in the vasculature, up to pre-capillary and post-capillary vessels, has been identified and quantified individually.
However, a similar quantification is not possible for the capillary plexus, since the number of vessels is huge. Without neglecting much of the detailed distribution of the capillaries in the retina, a model of the capillary vasculature using uniformly distributed meshes has been developed. The complete network model consists of three layers of vasculature, namely the superficial (arterial network), intermediate (capillary network) and deep (capillary network and venous network) layers, exactly like the vascular distribution in the retina. The vasculature in the different layers is linked through vessel segments, namely connecting vessels. The vasculature branching data have been recorded using a network connectivity matrix (graph theory).
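As a small illustration of recording branching data in a connectivity matrix, the sketch below builds an adjacency matrix for a hypothetical five-node bifurcating tree; the node and segment numbering are invented for illustration only.

```python
import numpy as np

# Hypothetical tree: node 0 is the artery root; it reaches bifurcation node 1,
# which feeds arterioles ending at nodes 2 and 3; node 3 branches on to node 4.
segments = [(0, 1), (1, 2), (1, 3), (3, 4)]

n_nodes = 5
A = np.zeros((n_nodes, n_nodes), dtype=int)   # symmetric connectivity matrix
for i, j in segments:
    A[i, j] = A[j, i] = 1

degree = A.sum(axis=0)   # node degree; degree 3 identifies the bifurcation node
```

Storing the network this way makes the topology queryable (terminal nodes have degree 1, bifurcations degree 3) and provides the structure onto which per-segment data such as diameter and length can be attached.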
Q̇_ij = π D_ij^4 ΔP_ij / (128 µ_ij L_ij)    (1)
The inlet boundaries are located at the roots of the arterial trees, where blood flow enters the network; there are six in total, corresponding to the six arterial trees. The outlet boundaries are located at the roots of the venous trees, where the blood exits the network through the mainstream veins; again, there are six in total, corresponding to the six venous trees. These boundaries are located at the centre of the network topology of the retina. The pressures at the inlet and outlet boundaries were chosen as 40 mmHg and 0 mmHg, respectively.
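Poiseuille's law plus conservation of flow at each interior node reduces the circulation problem to a linear system in the nodal pressures. A minimal sketch for a hypothetical three-segment network with the boundary pressures above; the geometry, diameters and constant viscosity are invented for illustration.

```python
import numpy as np

# Hypothetical network: inlet node 0 feeds interior node 1, which drains to
# outlet nodes 2 and 3. Boundary pressures follow the text: 40 mmHg in, 0 out.
segments = [(0, 1), (1, 2), (1, 3)]
D = np.array([20e-6, 15e-6, 8e-6])      # segment diameters in metres (illustrative)
L = np.array([200e-6, 200e-6, 100e-6])  # segment lengths in metres (illustrative)
mu = 3e-3                               # constant blood viscosity in Pa*s (assumed)

G = np.pi * D**4 / (128 * mu * L)       # Poiseuille conductance of each segment

# Assemble the nodal system K P = 0 (flow conservation at every node).
n = 4
K = np.zeros((n, n))
for (i, j), g in zip(segments, G):
    K[i, i] += g; K[j, j] += g
    K[i, j] -= g; K[j, i] -= g

mmHg = 133.322                          # Pa per mmHg
fixed = {0: 40 * mmHg, 2: 0.0, 3: 0.0}  # boundary pressures
free = [k for k in range(n) if k not in fixed]

# Reduce to the free (interior) pressures and solve.
Kff = K[np.ix_(free, free)]
rhs = -sum(K[np.ix_(free, [k])] * p for k, p in fixed.items())
P = np.zeros(n)
for k, p in fixed.items():
    P[k] = p
P[free] = np.linalg.solve(Kff, rhs.ravel())

Q = np.array([g * (P[i] - P[j]) for (i, j), g in zip(segments, G)])  # segment flows
```

The same assembly scales directly to the full retinal network: each of the six arterial roots gets the 40 mmHg condition, each venous root 0 mmHg, and the resulting pressures yield per-segment flow, velocity and wall shear stress.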
III. RESULTS AND DISCUSSION
Fig. 1 Network topologies of the mouse retinal vasculature; (a) superficial layer; (b) deep layer
E. Circulation Modeling

The circulation is modeled using Poiseuille’s law, which is based on laminar flow in a rigid vessel. The blood flow in a vessel segment with start and end nodes i and j depends on the pressure differential (ΔPij), the vessel segment’s diameter and length (Dij and Lij), and the blood viscosity (μij), as given by Eq. (1).
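A minimal sketch of how Eq. (1) and the stated boundary pressures yield the circulation: each segment contributes a Poiseuille conductance G = πD⁴/(128μL), mass conservation at interior nodes gives a linear system for the nodal pressures, and segment flows follow from the pressure drops. This is an assumed toy three-node chain, not the authors' solver or the full twelve-tree network.

```python
# Sketch of circulation modelling with Poiseuille's law (Eq. 1) on a toy
# network: node 0 (inlet, 40 mmHg) - node 1 - node 2 (outlet, 0 mmHg).
import numpy as np

segments = [(0, 1, 80e-6, 1e-3), (1, 2, 60e-6, 1e-3)]  # (i, j, D [m], L [m])
mu = 3.5e-3                                 # blood viscosity [Pa s], assumed
fixed_p = {0: 40 * 133.322, 2: 0.0}         # boundary pressures [Pa]
n = 3

# assemble the nodal conductance matrix (flow conservation at every node)
G = np.zeros((n, n))
for i, j, D, L in segments:
    g = np.pi * D**4 / (128 * mu * L)       # segment conductance from Eq. (1)
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# impose the Dirichlet boundary pressures by replacing those rows
b = np.zeros(n)
for node, pv in fixed_p.items():
    G[node, :] = 0.0
    G[node, node] = 1.0
    b[node] = pv

p = np.linalg.solve(G, b)                   # nodal pressures [Pa]
q = [np.pi * D**4 / (128 * mu * L) * (p[i] - p[j]) for i, j, D, L in segments]
print(q[0], q[1])  # equal: flow is conserved through the chain
```

For a network with thousands of segments, the same assembly would normally use a sparse matrix and a sparse solver.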
The topology of the large vessels of the arterial and venous networks is shown in Fig. 2. Vessel segments in those networks are drawn as straight lines based on the locations of their start and end nodes, neglecting the curvature of the vessels. The vascular distributions (topologies) of the arterial and venous trees are quite different. Crudely, the feeding arterioles branch out from the mainstream of a tree at an acute angle to form an equal branching, whereas the draining venules often branch out from the mainstream of a tree at a right angle. The variation in the size of the arterial and venous trees is clearly shown in those figures. The statistics of the global pressure, velocity and wall shear stress of the arterial and venous trees versus vessel order number (or vessel diameter) are shown in Fig. 3 using box plots. For example, the pressure is positively correlated with order number in the arterial trees but negatively correlated in the venous trees. The uneven distribution of network size and the irregular branching patterns of the arterial and venous trees mean that the hemodynamic parameters can vary from tree to tree. Therefore, it is useful to plot the detailed distribution of a hemodynamic parameter for each tree of the arterial and venous networks, as shown for the velocity in Fig. 2 using contour plots. The network model developed here can be used for predicting the development of the circulation in the retinal vasculature under pathological conditions over time, e.g., arteriosclerosis, hypertension, diabetes, retinal vein or venule occlusion, etc. These pathological conditions can be simulated by specifying changes in the diameters of the required vessels, or of a group of vessels, to investigate their influence on the circulation within the network.
The ability of the model to provide global and local information in response to arbitrary changes applied to the network can be of particular use to biologists in designing better treatment methods.
P. Ganesan et al.
The study has produced detailed global and local information on the hemodynamics of the mouse retinal circulation. Such information is only obtainable using an anatomical model such as the one used in this study.
Fig. 3 Predictions of hemodynamic parameters vs. order number (or diameter) in the arterial network (left) and venous network (right). From top to bottom: segmental pressure (mmHg); velocity (mm/s); wall shear stress, WSS (dyne/cm²)
Fig. 2 The distribution of the velocity (mm/s) in the arterial network (a) and venous network (b)

IV. CONCLUSIONS

A sophisticated image-based network model using the connectivity matrix approach has been developed for the mouse retinal vasculature. The model directly represents the detailed morphology and dimensions of the blood vessels of the subject retina down to the vessels preceding the capillaries (i.e., the pre-capillary and post-capillary vessels of the arterial trees and venous trees, respectively).

REFERENCES

1. Henkind P. (1967) Radial peripapillary capillaries of the retina. I. Anatomy: Human and comparative. Br. J. Ophthalmol. 51:115–123
2. Nagel E, Vilser W, Lanzl I. (2001) Retinal vessel reaction to short-term IOP elevation in ocular hypertensive and glaucoma patients. Eur. J. Ophthalmol. 11:338–344
3. Ganesan P, He S, Xu H. (2010) Development of an image-based network model of retinal vasculature. Ann. Biomed. Eng. 38(4):1566–1585
4. Pries A, Neuhaus D, Gaehtgens P. (1992) Blood viscosity in tube flow: dependence on diameter and hematocrit. Am. J. Physiol. 263:H1770–H1778
5. Pries A, Secomb T, Gaehtgens P. (1996) Biophysical aspects of blood flow in the microvasculature. Cardiovasc. Res. 32:654–667
Analysis of Normal and Atherosclerotic Blood Vessels Using 2D Finite Element Models K. Kamalanand1, S. Srinivasan1, and S. Ramakrishnan2 1
Department of Instrumentation Engineering, Madras Institute of Technology Campus, Anna University Chennai, Chennai 600044, India
2 Department of Applied Mechanics, Biomedical Engineering Group, Indian Institute of Technology Madras, Chennai 600036, India
Abstract— Analysis of blood vessel mechanics in normal and diseased conditions is essential for disease research, medical device design and treatment planning. In this work, 2D finite element models of a normal vessel and of atherosclerotic vessels with 50% and 90% plaque deposition were developed and meshed using the Delaunay triangulation method. A transient analysis was performed, and parameters such as total displacement, Von Mises stress and strain energy density were analyzed for the normal and atherosclerotic vessels. Results demonstrate that an inverse relation exists between the considered mechanical parameters over the vessel surface and the percentage of plaque deposited on the inner vessel wall. It was further observed that the total displacement and Von Mises stress decrease nonlinearly with increasing plaque percentage, whereas the strain energy density decreases almost linearly with increasing plaque deposition. In this paper, the objectives of the study, the methodology and the significant observations are presented.

Keywords— Blood vessel, atherosclerosis, plaque, finite element model.
I. INTRODUCTION

Blood vessels, whether delivering oxygenated blood (arteries, arterioles, capillaries) or returning blood rich in carbon dioxide (veins and venules), display highly nonlinear elastic and anisotropic mechanical behavior and exhibit complex material properties [1,2]. Knowledge of blood vessel mechanics is fundamental to the understanding of vascular function in health and disease. Blood vessel mechanics change from vessel to vessel [3], with ageing [4] and with pathologies [1,5,6]. Even though a large amount of information is available on the histological structure of arteries [7], relatively little attention has been paid to their mechanical properties. Atherosclerotic vascular disease is a common cause of morbidity and mortality in developed countries. As people age, they tend to develop fatty plaques within the walls of their blood vessels [8]. Sudden rupture of a plaque triggers the development of an acute coronary syndrome such as unstable angina, myocardial infarction or sudden death [9].
Understanding the mechanical response of blood vessels to physiologic loads is necessary before ideal therapeutic solutions can be realized. Analytic results can help physicians in designing and choosing appropriate therapies. For this reason, blood vessel constitutive models are needed [10]. Constitutive models provide relationships between stress and strain fields in the form of equations that cannot be summarized by a single parameter [10]. Even though simplified models cannot completely explain the actual behavior, important information is often revealed when simplifying assumptions are made [8,11-13]. In recent studies, researchers have frequently used numerical methods for analyzing the mechanics of blood vessels [14,15]. The finite element method is a powerful technique for finding approximate solutions of a partial differential equation where the domain boundaries of a given problem are complex [16]. It has become one of the fundamental numerical approaches for solving problems arising in many applications, including biomedical simulation. In the finite element method, a complex domain is discretized into a number of elements, such that a set of basis functions can be defined on the elements to approximate the solution [8]. The objective of this work is to develop finite element models of normal and atherosclerotic blood vessels and to analyze the influence of plaque deposition on arterial vessel mechanics.
II. METHODOLOGY

A. Generation of 2D Finite Element Models of Blood Vessels

The 2D cross-sectional model of the thoracic aorta was developed using Comsol 3.5a. FEM models were developed for a normal vessel and for vessels with 50% and 90% plaque deposition. The geometry and mechanical properties of the vessel were adopted from the literature [7,17]. The stiffness of the plaque component was taken to be 0.5 times the vessel stiffness. Further, the boundary conditions were applied to the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 411–414, 2011. www.springerlink.com
developed models and the developed surfaces were meshed using the Delaunay triangulation method [18]. Triangulation is a subdivision of a geometric object into simplices, or triangles. The Delaunay triangulation of a set P of points in the plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). The circumcircle always passes through all three vertices of a triangle; its center is at the point where the perpendicular bisectors of the triangle’s sides meet. Delaunay triangulations maximize the minimum angle over all the angles of the triangles in the triangulation, thus avoiding skinny triangles. The mesh quality was improved by fine-tuning the mesh. The quality of the developed mesh for the normal vessel, the vessel with 50% plaque deposition and the vessel with 90% plaque deposition is shown in Figs. 1(a), (b) and (c), respectively.
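The empty-circumcircle property quoted above can be checked numerically. The sketch below uses SciPy's Delaunay triangulation on random points as an illustration (an assumption for demonstration; the paper itself meshes in Comsol):

```python
# Verify the Delaunay property: no input point lies strictly inside the
# circumcircle of any triangle of the triangulation.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((30, 2))
tri = Delaunay(pts)

def circumcircle(a, b, c):
    """Centre and radius of the circle through triangle vertices a, b, c."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(a - centre)

ok = True
for simplex in tri.simplices:
    centre, r = circumcircle(*pts[simplex])
    dist = np.linalg.norm(pts - centre, axis=1)
    dist[simplex] = np.inf            # ignore the triangle's own vertices
    ok &= bool(dist.min() >= r - 1e-9)
print(ok)  # True: the empty-circumcircle property holds
```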
Fig. 1 Mesh quality for (a) normal vessel, (b) vessel with 50% plaque deposition and (c) vessel with 90% plaque deposition

B. Basis Function

The stress–strain relationship is developed for a general strain energy function based on strain invariants. The first and second deviatoric strain invariants, \bar{I}_1 and \bar{I}_2, and the volume ratio J are chosen as the variables in the strain energy function [19]. They are defined as

\bar{I}_1 = \bar{\lambda}_1^2 + \bar{\lambda}_2^2 + \bar{\lambda}_3^2, \qquad \bar{I}_2 = \bar{\lambda}_1^{-2} + \bar{\lambda}_2^{-2} + \bar{\lambda}_3^{-2}

where \bar{\lambda}_k = J^{-1/3}\lambda_k, the λk (k = 1, 2, 3) are the principal stretch ratios, and J is the total volume ratio. The strain energy function U can therefore be expressed in terms of \bar{I}_1, \bar{I}_2 and J.

The stress–strain behavior is defined using the following equations. Writing the current position of a material point as x and the reference position of the same point as X, the deformation gradient is defined as

F = \partial x / \partial X

Then J, the total volume change at the point, is

J = \det(F)

For simplicity, the deformation gradient with the volume change eliminated is defined as

\bar{F} = J^{-1/3} F

Then the deviatoric stretch matrix is introduced as

\bar{B} = \bar{F}\,\bar{F}^{T}

so that the first strain invariant is given by \bar{I}_1 = \mathrm{trace}(\bar{B}) and the second strain invariant is given by \bar{I}_2 = \tfrac{1}{2}\left[\bar{I}_1^2 - \mathrm{trace}(\bar{B}\bar{B})\right].

Then the stresses associated with the strain energy function are given by

\sigma = \frac{2}{J}\,\mathrm{dev}\!\left[\left(\frac{\partial U}{\partial \bar{I}_1} + \bar{I}_1 \frac{\partial U}{\partial \bar{I}_2}\right)\bar{B} - \frac{\partial U}{\partial \bar{I}_2}\,\bar{B}\bar{B}\right] + \frac{\partial U}{\partial J}\,I

where dev means deviatoric and is calculated as dev(A) = A − (1/3) trace(A) I for a matrix A, and I is the identity matrix. For an incompressible material, J = 1 and U is a function of \bar{I}_1 and \bar{I}_2 only [19]. The potential function of the arterial material is of the polynomial form

U = \sum_{i=1}^{3} C_{i0}\,(\bar{I}_1 - 3)^i + \sum_{i=1}^{3} \frac{1}{D_i}\,(J - 1)^{2i}, \qquad i = 1, 2, 3

The material constants are the shear modulus, μ1 = 2C10, and the bulk modulus, K1 = 2/D1.
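The kinematic chain above (F → J → F̄ → B̄ → Ī₁, Ī₂) can be illustrated numerically for an assumed incompressible uniaxial stretch state; the stretch value 1.2 is arbitrary:

```python
# Compute the deviatoric invariants from a deformation gradient, following
# the definitions in the text, for an incompressible uniaxial stretch
# (lambda, 1/sqrt(lambda), 1/sqrt(lambda)).
import numpy as np

lam = 1.2
F = np.diag([lam, lam**-0.5, lam**-0.5])    # deformation gradient
J = np.linalg.det(F)                        # total volume ratio
Fbar = J**(-1.0 / 3.0) * F                  # volume change eliminated
Bbar = Fbar @ Fbar.T                        # deviatoric stretch matrix
I1 = np.trace(Bbar)                                 # first invariant
I2 = 0.5 * (I1**2 - np.trace(Bbar @ Bbar))          # second invariant

print(round(J, 9), round(I1, 6), round(I2, 6))  # 1.0 3.106667 3.094444
```

For this stretch, J = 1 (incompressible), Ī₁ = λ² + 2/λ and Ī₂ = 2λ + 1/λ², which the code reproduces.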
III. RESULTS AND DISCUSSION

Figs. 2(a), (b) and (c) show the variation of Von Mises stress over the vessel surface for the normal vessel, the vessel with 50% plaque deposition and the vessel with 90% plaque deposition, respectively. The Von Mises stress in the normal vessel surface is in the range of 0.163 to 0.318 Pa, whereas the vessel with 50% plaque deposition experiences a stress in the range of 0.117 to 0.2 Pa. In the vessel with 90% plaque formation, the stress experienced by the vessel surface is further lowered to a value of 7.658e-4 Pa.

Fig. 2 Variation of Von Mises stress over the vessel surface for (a) normal vessel, (b) vessel with 50% plaque formation and (c) vessel with 90% plaque formation

The variation of total displacement with the percentage of plaque deposited on the inner wall of the vessel is shown in Fig. 3. An inverse relation exists between the total displacement and the plaque percentage: the total displacement decreases nonlinearly with increasing plaque percentage.

Fig. 3 Variation of total displacement shown as a function of the percentage of plaque deposition

Fig. 4 shows the variation of Von Mises stress as a function of the percentage of plaque deposition. The trend is similar to that of the total displacement: a nonlinear decrease in the stress is found with increasing plaque percentage.

Fig. 4 Variation of Von Mises stress shown as a function of the percentage of plaque deposition
Similarly, the variation of strain energy density (SED) with the percentage of plaque deposited on the inner wall of the vessel is shown in Fig. 5. The SED decreases almost linearly with increasing plaque percentage. Results demonstrate that normal vessels experience higher stresses under the applied internal loads; in atherosclerotic blood vessels, the stress experienced by the vessel wall is much lower due to the deposition of plaque material on the inner wall of the vessel.

Fig. 5 Variation of strain energy density shown as a function of the percentage of plaque deposition
IV. CONCLUSIONS

In this work, 2D finite element models of a normal vessel and of atherosclerotic vessels with 50% and 90% plaque deposition were developed using Comsol 3.5a. Boundary conditions were applied to the developed models, and a distributed load was applied on the inner wall of the vessel. The developed models were meshed using the Delaunay triangulation method and subjected to a transient analysis, and parameters such as total displacement, Von Mises stress and strain energy density were analyzed for the normal and atherosclerotic vessels. Results demonstrate that normal vessels experience higher stresses under the applied internal loads; in atherosclerotic blood vessels, the stress experienced by the vessel wall is much lower due to the deposition of plaque material on the inner wall. An inverse relation exists between the considered mechanical parameters and the percentage of plaque deposited on the inner vessel wall. It was further observed that the total displacement and Von Mises stress decrease nonlinearly with increasing plaque percentage, whereas the strain energy density decreases almost linearly with increasing plaque deposition.
This study is clinically relevant, since the analysis of vascular mechanics in normal and diseased states is essential for designing stents and for the planning of surgery, treatment and diagnosis of vascular diseases.
REFERENCES

1. Zulliger M.A., Fridez P., Hayashi K., Stergiopulos N. (2004) A strain energy function for arteries accounting for wall composition and structure. Journal of Biomechanics 37:989–1000
2. Spencer A.J.M. (1980) Continuum Mechanics. Longman Scientific & Technical, Essex
3. Cox R.H. (1978) Passive mechanics and connective tissue composition of canine arteries. American Journal of Physiology 234(5):H533–H541
4. Andreotti L., Bussotti A., Cammelli D., di Giovine F., Sampognaro S., Sterrantino G., Varcasia G., Arcangeli P. (1985) Aortic connective tissue in ageing—a biochemical study. Angiology 36(12):872–879
5. Kassab G.S. (2006) Biomechanics of the cardiovascular system: the aorta as an illustratory example. J. R. Soc. Interface 3:719–740
6. Wuyts F.L., Vanhuyse V.J., Langewouters G.J., Decraemer W.F., Raman E.R., Buyle S. (1995) Elastic properties of human aortas in relation to age and atherosclerosis: a structural model. Physics in Medicine & Biology 40(10):1577–1597
7. Hager A., Kaemmerer H., Rapp-Bernhardt U., Blücher S., Rapp K., Bernhardt T.M., Galanski M., Hess J. (2002) Diameters of the thoracic aorta throughout life as measured with helical computed tomography. J Thorac Cardiovasc Surg 123:1060–1066
8. Ciornei F.C., Alaci S., Ciornei M.C., Irimescu L., Cerlincă D.A. (2010) Finite element analysis for a simplified model of a blood vessel with lesion. Annals of the Oradea University, Fascicle of Management and Technological Engineering, Volume IX(XIX), NR1
9. Arroyo L.H., Lee R.T. (1999) Mechanisms of plaque rupture: mechanical and biologic interactions. Cardiovascular Research 41:369–375
10. Vito R.P., Dixon S.A. (2003) Blood vessel constitutive models, 1995–2002. Annu. Rev. Biomed. Eng. 5:413–439
11. Payne S.J. (2004) A two-layer model of the static behaviour of blood vessel walls. Proc. 26th Annual International Conference IEEE EMBS, San Francisco, CA, USA
12. VanBavel E., Siersma P., Spaan J.A.E. (2003) Elasticity of passive blood vessels: a new concept. Am. J. Physiol. Heart Circ. Physiol. 285:H1986–H2000
13. Orosz M., Molnarka G., Nadasy G., Raffai G., Kozmann G., Monos E. (1999) Validity of viscoelastic models of blood vessel wall. Acta Physiol Hung. 86(3-4):265–271
14. Rinderu P.L., Rinderu E.T., Gruionu L., Bratianu C. (2003) A FEM study of aortic haemodynamics in the case of stenosis. Acta of Bioengineering and Biomechanics 5(2)
15. Cavalcanti S. (1995) Haemodynamics of an artery with mild stenosis. 28(4):387–399
16. Yu Z., Holst M.J., McCammon J.A. (2008) High-fidelity geometric modeling for biomedical applications. Finite Elements in Analysis and Design 44:715–723
17. Fluid-Structure Interaction in a Network of Blood Vessels, solved with Comsol Multiphysics 3.5a, 2008
18. Shewchuk J.R. Lecture notes on Delaunay mesh generation. http://citeseer.ist.psu.edu/
19. Shen Y., Chandrashekhara K., Breig W.F., Oliver L.R. (2005) Finite element analysis of V-ribbed belts using neural network based hyperelastic material model. International Journal of Non-Linear Mechanics 40:875–890
Comparative Analysis of Preprocessing Techniques for Quantification of Heart Rate Variability W.M.H. Wan Mahmud and M.B. Malarvili Faculty of Biomedical & Health Science Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
Abstract— In this paper, a comparative analysis of preprocessing techniques for the quantification of heart rate variability (HRV) is performed. These preprocessing techniques are used to transform the electrocardiogram (ECG) into an HRV series appropriate for spectral and nonlinear analysis. A number of preprocessing techniques were investigated in this study. In order to evaluate the performance of the preprocessing methods, the differences between the frequency spectra of the HRV were measured by contrasting merit indices. Among the preprocessing techniques studied, the results indicate that using heart rate values instead of heart period values in the derivation of HRV results in a more accurate spectrum. Furthermore, the results support that the preprocessing techniques based on the convolution of inverse interval values with the rectangular window and on the cubic interpolation of inverse interval values are efficient methods for the quantification of HRV.

Keywords— heart period, heart rate, heart rate variability, data preprocessing.
Transform (FFT) analysis, or by modern autoregressive techniques. However, the preprocessing of HRV and its analysis in the frequency domain are not straightforward. The RR series must first be submitted to preprocessing procedures to produce a series of equidistantly sampled data suitable for spectral analysis. There are various methods to quantify HRV, and it can be derived from either the heart period or the heart rate. These signals have the same informative content, but the results obtained from each have shown considerable discrepancies, inasmuch as the relationship between them is nonlinear [1]. Different preprocessing techniques of HRV result in dissimilar spectra, and it is essential that the estimated spectrum exhibit a minimum of harmonic components and other artifacts, and therefore be as accurate as possible. Therefore, in this paper 12 different preprocessing techniques available in the literature are investigated by comparing merit indices computed from their spectra, in order to identify the optimum preprocessing technique for quantifying HRV.
I. INTRODUCTION

Heart rate variability (HRV) is a measure of the variations in heart rate. The time intervals between consecutive heart beats are measured on the electrocardiogram (ECG) from one R wave to the next, and they are conventionally named RR intervals (refer to Figure 1). These RR intervals are also known as the heart period, and the inverse of the intervals is known as the heart rate [1]. The analysis of HRV has become an important tool in the detection of cardiac and other diseases, as it is non-invasive and provides prognostic information in patients [2]. Previous studies by Akselrod et al. [3], Pomeranz et al. [4] and Pagani et al. [5] suggested that spectral analysis of HRV could be used as a tool for assessing the sympathovagal modulation of cardiovascular function. Spectral analysis of HRV has been used extensively, not only in basic research but also in clinical medicine [3-5]. The analysis of the signal in the frequency domain can be performed either by classical techniques, such as the Blackman-Tukey correlogram and Welch periodogram, which are based on Fast Fourier
Fig. 1 ECG and its components
II. METHOD The following subsections explain the steps involved in this study.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 415–419, 2011. www.springerlink.com
A. Data Acquisition

In this work, the ECG signals were recorded from 5 healthy adults lying at rest using the CleveLab ECG machine, which records a standard three-lead ECG. The data were recorded at a sampling frequency of 250 Hz, and the ECG from lead 2 is used in this study. The ECG was filtered to remove 60 Hz powerline noise.

B. QRS Detection

A QRS detection algorithm is used to extract the R points of the ECG, an important step in obtaining the RR intervals. In this work, the Pan and Tompkins algorithm [6] was used for QRS detection. This algorithm was chosen because it has a sensitivity of 99.69% and a positive predictivity of 99.77% when tested on the MIT/BIH database. The QRS detection algorithm developed by Pan and Tompkins recognizes QRS complexes based on analyses of the slope, amplitude and width [6]. Figure 2 shows the process involved in the QRS detection algorithm. Firstly, a band-pass filter consisting of cascaded low-pass and high-pass filters is used. The low-pass filter eliminates noise such as EMG and 60 Hz power line noise, while the high-pass filter eliminates motion artifacts and the P and T waves. After that, the signal is differentiated to obtain slope information, overcome the baseline drift problem and accentuate the QRS complexes relative to the P and T waves. A squaring operation is then performed to emphasize the higher frequency components and attenuate the lower frequency components. Finally, a moving average filter is used in order to obtain a smooth signal for further analysis.
Finally, an adaptive threshold is applied to detect the R waves. The intervals between consecutive detected R waves are calculated and denoted as the RR interval (RRi) series. These intervals are not equidistantly (regularly) sampled and are thus not appropriate for spectral or nonlinear analysis; they need to be preprocessed and resampled so that they can be analyzed in the spectral domain.

C. Removal of Outliers

Next, the removal of outliers is performed, i.e., the validation of the RRi data and the removal of unwanted parts of the RRi series. Any RR sequence containing artifacts due to missed or false QRS detections, or ectopic beats, is acknowledged to skew the data and adversely affect indices or parameters estimated from the HRV spectrum. Therefore, the outliers are removed from the RRi series before further analysis. The outliers are defined as:
where C is a constant with a suggested value of 1.5 [7].

D. Preprocessing Techniques

The preprocessing techniques used in this study are listed in Table 1. They are divided into two main parts: those derived from the heart period and those derived from the heart rate. The heart period is the series of RR intervals, whereas the heart rate is the inverse of the RR intervals.
Fig. 2 Filter stages of the QRS detector: band-pass filter (cascaded low-pass and high-pass filters), differentiator, squaring operation and moving average filter
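The filter stages of Fig. 2 can be sketched as follows. This is a simplified floating-point illustration, not the original integer-filter Pan–Tompkins implementation of [6]; the 5–15 Hz passband and 150 ms window are assumed typical values.

```python
# Condensed sketch of the Pan-Tompkins feature-signal stages:
# band-pass -> derivative -> squaring -> moving-window integration.
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_stages(ecg, fs=250):
    # band-pass roughly 5-15 Hz to emphasise the QRS complex
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, ecg)
    derivative = np.gradient(filtered)       # slope information
    squared = derivative ** 2                # emphasise large slopes
    win = int(0.15 * fs)                     # 150 ms moving-average window
    return np.convolve(squared, np.ones(win) / win, mode='same')

# synthetic test signal: impulses ~1 s apart riding on baseline wander
fs = 250
t = np.arange(10 * fs) / fs
ecg = 0.05 * np.sin(2 * np.pi * 0.3 * t)     # baseline wander
ecg[np.arange(1, 10) * fs] += 1.0            # crude 'R peaks'
feature = pan_tompkins_stages(ecg, fs)
print(feature[5 * fs] > 10 * np.median(feature))  # beat locations stand out
```

An adaptive threshold on the resulting feature signal, as described above, then marks the R waves.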
Instantaneous heart period signal: To obtain the instantaneous heart period f(t), the heart period is calculated as f(t) = t[n + 1] − t[n] for t[n] ≤ t < t[n + 1], where t[n] is the time of occurrence of the nth cardiac event and t[n + 1] that of the succeeding one; the signal is then resampled at 4 Hz.

Delayed heart period signal: The delayed heart period is quite similar to the instantaneous heart period, the only difference being that f(t) = t[n] − t[n − 1] for t[n] ≤ t < t[n + 1]. The resultant series is then resampled at 4 Hz.
Table 1 Preprocessing techniques investigated

Derived from heart period:
1. heart period
2. instantaneous heart period signal
3. delayed heart period signal
4. linear-interpolated heart period signal
5. cubic-interpolated heart period signal
6. convolution of heart period signal with the rectangular window

Derived from heart rate:
7. heart rate
8. instantaneous heart rate signal
9. delayed heart rate signal
10. linear-interpolated heart rate signal
11. cubic-interpolated heart rate signal
12. convolution of heart rate signal with the rectangular window
Linear-interpolated heart period signal: This preprocessing technique is obtained by linear interpolation of the heart period values before resampling at 4 Hz.

Cubic-interpolated heart period signal: This preprocessing technique is obtained by cubic interpolation of the heart period values before resampling at 4 Hz.

Convolution of the instantaneous heart period signal with a rectangular window: This method was suggested by Berger et al. [8]. In this technique, the instantaneous heart period is convolved with a rectangular window and divided by the window width. More information on the method can be found in [8].
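Techniques 5 and 11 (cubic interpolation of heart period and heart rate values, resampled at 4 Hz) might be realized as in this sketch; the beat times are made up for illustration, and tagging each value at its beat time is an assumption about the implementation:

```python
# Cubic-interpolated heart period (technique 5) and heart rate (technique 11)
# series, resampled on a uniform 4 Hz grid.
import numpy as np
from scipy.interpolate import CubicSpline

# beat times in seconds; RR intervals are the successive differences
beat_times = np.cumsum(np.array([0.8, 0.82, 0.79, 0.85, 0.81, 0.80, 0.83]))
rr = np.diff(beat_times)          # heart period series [s]
hr = 1.0 / rr                     # heart rate (inverse interval) series [1/s]

t_beat = beat_times[1:]           # tag each value at its beat occurrence
fs = 4.0                          # resampling frequency [Hz]
t_uniform = np.arange(t_beat[0], t_beat[-1], 1.0 / fs)

hrv_from_period = CubicSpline(t_beat, rr)(t_uniform)   # technique 5
hrv_from_rate = CubicSpline(t_beat, hr)(t_uniform)     # technique 11
print(t_uniform.shape == hrv_from_rate.shape)  # True
```

Either uniformly sampled series can then be fed directly to an FFT-based spectral estimator.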
III. PERFORMANCE EVALUATION

To evaluate the performance of the various preprocessing methods, the differences between the spectra of the preprocessed RRi series were assessed by contrasting the following merit indices [1]:
(a) The leakage rate, defined as a measure (in percentage) of the leakage components with respect to the whole spectral components. The leakage components are those located outside a narrow window centered at the frequency with the highest peak. It was experimentally determined that a window width of 12Δf (where Δf = fs/N) is sufficient to isolate the peak related to the harmonics and artifacts of the signals.
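One possible reading of the leakage-rate index is sketched below; the exact windowing is an assumption rather than the authors' code, taking "12Δf" as a window extending 6Δf on each side of the highest peak.

```python
# Leakage rate: spectral power outside a 12*df window centred on the highest
# peak, as a percentage of the total spectral power.
import numpy as np

def leakage_rate(x, fs):
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    df = fs / len(x)                              # frequency resolution
    peak = freqs[np.argmax(spec)]
    inside = np.abs(freqs - peak) <= 6 * df       # window of total width 12*df
    return 100.0 * spec[~inside].sum() / spec.sum()

fs, n = 4.0, 512
t = np.arange(n) / fs
pure = np.sin(2 * np.pi * 0.25 * t)               # bin-centred sinusoid
print(leakage_rate(pure, fs) < 1.0)  # True: power concentrates in the window
```

A bin-centred sinusoid gives essentially zero leakage, while a tone that falls between FFT bins, or a spectrum distorted by the preprocessing, spreads power outside the window and raises the index.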
(b) The number of leakage spectral components with amplitude greater than 1% of the total spectral components.

IV. RESULTS AND DISCUSSION

Table 2 presents the computed leakage rate for each preprocessing technique for the 5 data sets, while Table 3 exhibits the number of leakage spectral components. Figures 3 and 4 plot the values in Tables 2 and 3, respectively.

Table 2 Leakage rate (%) for the spectra of the different preprocessing techniques, for 5 data sets

Technique   DataE   DataD   DataC   DataB   DataA
1           57.56   46.45   46.99   41.76   53.66
2           46.28   34.97   33.51   33.31   37.05
3           46.28   34.97   33.51   36.71   37.05
4           34.61   11.58   31.96   17.96   30.75
5           26.51    9.52   24.97   14.41   23.53
6           25.56    9.27   22.82   15.01   23.16
7           56.82   44.89   45.46   54.30   56.08
8           42.49   31.77   31.92   39.55   35.52
9           42.49   31.77   31.92   47.00   35.52
10          34.46   11.53   25.94   17.77   30.04
11          26.29    9.05   17.41   12.82   23.78
12          25.56    8.14   18.62   13.24   22.57
Table 3 Number of leakage components for the spectra of the different preprocessing techniques, for 5 data sets

Technique   DataE   DataD   DataC   DataB   DataA
1           30      33      32      28      35
2           28      25      31      29      30
3           28      25      31      30      30
4           25      23      24      25      28
5           25      20      22      21      23
6           20      20      15      18      23
7           30      32      31      30      34
8           30      28      27      28      30
9           30      28      27      26      30
10          25      22      24      25      28
11          25      21      22      22      23
12          20      20      15      18      20
Table 2 and Figure 3 show that, for all data sets considered, among the preprocessing techniques derived from the heart period, techniques 5 and 6 gave the lowest leakage rates while technique 1 gave the highest. Among the
Fig. 3 Leakage rate for the spectra of the different preprocessing techniques, for 5 data sets
preprocessing techniques derived from heart rate, technique 11 and 12 gave the lowest leakage rate and technique 7 presented the highest leakage rate. The preprocessing techniques derived from heart rate has lower values of leakage rate compared to the techniques derived from heart period. Table 3 and Figure 4 shows the number of leakage componets at 1% for each preprocessing techniques for all 5 data sets. For the preprocessing techniques derived from heart period, technique 5 and 6 presented the lowest number of leakage components while technique 1 presented the highest one. For the preprocessing techniques derived from heart rate, technique 11 and 12 presented the lowest number of leakage components while technique 7 presented the highest one. Also, the preprocessing techniques derived from heart rate has lower numbers of leakage components compared to the techniques derived from heart period. preprocessing technique vs leakage component at 1% 50 dataE dataD dataC dataB dataA
45
number of leakage component
40 35 30 25
In this study, 12 types of preprocessing technique to quantify HRV from ECG have been investigated and the spectrum of HRV produced from each preprocessing techniques were compared using two merit indices in order to identify superior preprocessing method. The results indicate that the leakage rate when using heart rate is lower than using the heart period. This results agree with result reported by Mohn [9], which suggested that the spectrum of heart rate is superior to the spectrum of heart period. Besides, the results also shows that the preprocessing technique based on the convolution of heart rate values with the rectangular window and the cubic interpolation of heart rate values are adequate approaches for quantification of HRV. The results support the study by Guimarase et. al. [1] which suggested that these preprocessing techniques are efficient and effortless.
V. CONCLUSIONS In this study, 12 types of preprocessing technique to quantify HRV from ECG have been investigated and the spectrum of HRV produced from each preprocessing techniques were compared using two merit indices in order to identify superior preprocessing method. The preprocessing techniques investigated were the heart period, instantaneous heart period, delayed heart period, linear interpolation of heart period values, cubic interpolation of heart period values, convolution of the instantaneous heart period with the rectangular window and the other 6 techniques were obtained by using the same methods as stated before but instead of using the heart period values, heart rate values were used. Based on the results obtained, the use of heart rate instead of the heart period in the derivation of HRV results in more accurate spectrum. Furthermore, the result also indicate that preprocessing techniques based on the convolution of the inatantaneous heart rate values with the rectangular window and the cubic interpolation of heart rate values were efficient and effortless for quantifying the HRV.
ACKNOWLEDGMENT
Fig. 4 Number of leakage components in the spectra of the different preprocessing techniques for 5 data sets
The authors would like to thank the Ministry of Higher Education (MOHE) for providing a grant through the Fundamental Research Grant Scheme (FRGS) to conduct this research.
IFMBE Proceedings Vol. 35
Comparative Analysis of Preprocessing Techniques for Quantification of Heart Rate Variability
REFERENCES
[1] H.N. Guimaraes, R.A.S. Santos, "A comparative analysis of preprocessing techniques of cardiac event series for the study of heart rhythm variability using simulated signals," Brazilian Journal of Medical and Biological Research, 1998.
[2] R.U. Acharya, A. Kumar, P.S. Bhat, C.M. Lim, S.S. Iyengar, N. Kannathai, and S.M. Krishnan, "Classification of cardiac abnormalities using heart rate signals," Med. Biol. Eng. Comput., vol. 42, pp. 288-293, 2004.
[3] Akselrod S, Gordon D, Ubel FA, Shannon DC, Berger AC & Cohen RJ, "Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control," Science, 213: 220-222, 1981.
[4] Pomeranz B, Macaulay RJB, Caudill MA, Kutz I, Adam D, Gordon D, Kilborn KM, Barger AC, Shannon DC, Cohen RJ & Benson H, "Assessment of autonomic function in humans by heart rate spectral analysis," American Journal of Physiology, 248: H151-H153, 1985.
[5] Pagani M, Lombardi F, Guzzetti S, Rimoldi O, Furlan R, Pizzinelli P, Sandrone G, Malfatto G, Dell'Orto S, Piccaluga E, Turiel M, Baselli G, Cerutti S & Malliani A, "Power spectral analysis of heart rate and arterial pressure variabilities as a marker of sympatho-vagal interaction in man and conscious dog," Circulation Research, 59: 178-193, 1986.
[6] Pan J, Tompkins WJ, "A real-time QRS detection algorithm," IEEE Transactions on Biomedical Engineering, 32: 230-236, 1985.
[7] Hawkins DM, Identification of Outliers. New York: Chapman and Hall, 1980.
[8] Berger RD, Akselrod S, Gordon D & Cohen RJ, "An efficient algorithm for spectral analysis of heart rate variability," IEEE Transactions on Biomedical Engineering, 33: 900-904, 1986.
[9] Mohn RK, "Suggestions for the harmonic analysis of point process data," Computers and Biomedical Research, 9: 521-530, 1976.
Author: Malarvili Bala Krishnan
Institute: Faculty of Biomedical & Health Science Engineering, UTM
City: Johor
Country: Malaysia
Email: [email protected]
Detection of Influence of Stimuli or Services on the Physical Condition and Satisfaction with Unconscious Response Reflecting Activities of Autonomic Nervous System H. Okawai, S. Ichisawa, and K. Numata Iwate University / Dept. of Civil and Environmental Engineering, Faculty of Engineering, Morioka, Japan
Abstract— The response of a human to a received service is not always captured accurately by a questionnaire survey. This paper describes an attempt to detect the accurate response by using respiration and pulse information during sleep as a message from the autonomic nervous system, in addition to one's conscious will while awake. In the experiments, healing services were used to stimulate subjects. As a result, satisfaction was detected as an unconscious response arising from autonomic nervous system activities.
Keywords— autonomic nervous system, unconscious response, body motion wave (BMW), dynamic air pressure sensor, satisfaction.
I. INTRODUCTION
We live while accepting stimuli or services from our natural and social surroundings. In order to consider how much these stimuli or services influence us or provide satisfaction, an evaluation scale is desirable. A questionnaire survey is often used to obtain the responses of people or customers who have accepted some stimulus or service. However, the response is usually modulated by reason, knowledge, or deliberation, so it does not always express honest mental activity accurately. Do biological activities, rather than such mental activity, answer honestly? This question was investigated here. If the response or judgment to a received service can be attributed to the autonomic nervous system, which governs our living functions, a more reliable scale of response can be established. To study this response, widely recognized healing music and aroma, as well as bedding currently under development, were adopted in the present study.
II. FLOW OF A SERVICE FOR A HUMAN SYSTEM
A. Model of a Human System Receiving Stimuli or Services
A human is, in a sense, an elaborate system. The present study applies the usual idea that a system has a mechanism to produce an output modulated by its own characteristics in response to an input. In Fig. 1, the input is a signal, a stimulus, or a service; articles, information, energy, and labor are examples of services. These are input to the system either intentionally or unintentionally. The system is then influenced and produces a response, i.e., an output such as speech, action, or physical condition.
Fig. 1 Model showing the flow of a service through a human system
B. Mechanism Producing Conscious and Unconscious Responses to a Service (Hypothesis)
Figure 2 suggests a mechanism by which a conscious response and an unconscious response result from the input of a service to a human system. Articles, information, energy, and labor are again listed as example inputs. The service flows as follows. A service is input to sensors, for example the sense of sight, via sensing channel 1. From here there are two paths: one leads to a conscious response and the other to an unconscious response. The flow to the conscious response is as follows. A biological signal produced by sensing the stimulus flows along channel 2, the processes of mental activity, on one side, and along channel 3, biological activity, on the other. Channel 4, the interaction between mental and biological activities, then functions. Mental activity, such as satisfaction and emotions, probably improves biological activities, and a good biological activity in turn improves mental activity. The human will then answer that he or she is satisfied, as a conscious response.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 420–423, 2011. www.springerlink.com
On the other hand, it is also possible to answer that one is not satisfied, because the signal of mental activity is modulated by reason, deliberation, and so on, as shown by channel 5, before reaching output channel 6 for speech, action, etc. This output is also a conscious response. Thus the information obtained here is not always accurate because of the above modulation.
The pressure variation detected with the sensor (see Sec. III) was converted to electric signals, sampled at 400 Hz with 16-bit resolution, and stored in a personal computer. The signal was processed with Chart v4.2.2 (AD Instruments) and the programmable software VEE Pro ver. 6.0 (Agilent Technologies). During sleep, continuous motions are generated in the subject's body by respiration, pulse, and other unconscious actions, and these motions are detected as pressure waves. The pressure waves obtained here are neither an electrocardiogram nor a respiration gas measurement; all of them express body motions. We therefore named this wave the "body motion wave" (BMW). In this paper the wave was filtered into two waves, as shown in Fig. 4: the "respiration-origin BMW" (R-BMW) and the "pulse-origin BMW" (P-BMW). This method is applicable not only to checking the effects of services by detecting physical condition, but also to checking health state in daily life and to evaluating welfare living environments.
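The filtering step can be sketched as follows. This is a minimal sketch assuming second-order Butterworth band-pass filters; the paper specifies only the pass bands (0.15-0.47 Hz and 3-41 Hz) and the 400 Hz sampling rate, so the filter family and order are assumptions.

```python
# Sketch: separating a body motion wave (BMW) sampled at 400 Hz into
# R-BMW (0.15-0.47 Hz) and P-BMW (3-41 Hz) components.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 400.0  # sampling rate (Hz), as in the measurement system

def bandpass(x, lo, hi, fs=FS, order=2):
    # Second-order sections for numerical robustness at low cutoff frequencies.
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)  # zero-phase filtering

def split_bmw(bmw):
    r_bmw = bandpass(bmw, 0.15, 0.47)   # respiration-origin component
    p_bmw = bandpass(bmw, 3.0, 41.0)    # pulse-origin component
    return r_bmw, p_bmw

# Synthetic check: a 0.3 Hz "respiration" plus a smaller 10 Hz "pulse" mixture
# should be cleanly separated by the two pass bands.
t = np.arange(0, 30, 1 / FS)
bmw = np.sin(2 * np.pi * 0.3 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
r_bmw, p_bmw = split_bmw(bmw)
```

With real data the respiratory band would also carry slow body movements, which is why the paper treats all components as one "body motion wave" before filtering.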
Fig. 2 Conscious and unconscious response for various inputs
In contrast, there are two paths from biological activity to the unconscious response expressed as physical condition: the direct channel 3, and channel 4 via mental activity. These two signals are combined and sent through channel 7 as output. This unconscious response may be an honest answer because it is not modulated by reason or the like. It is an example of a response reflecting autonomic nervous system activities such as respiration and pulse. Daily life is roughly classified into waking and sleeping. In the former, mental activities modulated by reason, emotion, and so on are expressed by the cerebrum as a conscious response. In the latter, because of the unconscious state, autonomic nervous system activity dominates, so physical activities are expressed as an unconscious response. This is why the answer is attributed to the unconscious response. Moreover, the measurement system developed in the present study, if installed at home, would make it possible to take such data.
III. MEASUREMENT METHOD FOR UNCONSCIOUS RESPONSE (PHYSICAL CONDITION)
Fig. 3 Measurement system for body motion waves (BMW): an air pad with a dynamic air-pressure sensor feeds a detector and an A/D converter (sampling frequency 400 Hz) connected to a PC
A. Principle: Method to Take Data
In the present study, the physical condition, an unconscious response, was chosen as the quantity to be detected. The data obtained here contain respiration and pulse during sleep. A pressure sensor named a "dynamic air pressure sensor" (M.I. Labs) was used in order to build a non-restraining, non-attached measurement system. As shown in Fig. 3, it was set on a bed to measure the dynamic air pressure arising between the sensor and the body of a lying subject. The pressure variation detected with the sensor was converted to electric signals.
Fig. 4 The body motion wave (BMW) can be filtered into its respiration-origin and pulse-origin components: R-BMW (band-pass 0.15-0.47 Hz) and P-BMW (band-pass 3-41 Hz)
B. Reproducibility and Improvement of Accuracy of the Method
First, data were obtained under the following conditions: a) data were taken on two consecutive nights; b) the subject spent the two days in approximately the same state; and c) the sleeping environment was the same. As a result, the following was found (Fig. 5).
1) The shift patterns of both the respiration and pulse rates were approximately the same on the two nights. 2) The patterns showed periodicity, increasing and decreasing with a period of 60-120 minutes depending on the individual. 3) Under bad physical condition, the pattern of the same subject showed a different manner. These findings demonstrate the reproducibility of the present method and of the physical condition. The total number of subjects, normal subjects in their 20s, is already more than 200 and is still increasing. An example of bad condition was lack of sleep on the previous day. For elderly people, however, the physiological reproducibility has not yet been confirmed because of an insufficient amount of data.
Fig. 5 Example of pulse rate shift for two consecutive nights (NORMAL, 1st day and 2nd day). NORMAL means the usual bedding was used

IV. EXPERIMENT
A. Experiment Days and Conditions for the Subjects
Four consecutive days were set, and each subject was asked to spend the days as usual, in approximately the same state. On the first and second days (normal days), the physical reproducibility of the subject was checked. On the third and fourth days, stimuli were tried. Because this is an elementary study, healing, relaxing, or circumstance-improving stimuli were adopted, as follows. (a) Healing music: the subject does not listen to healing music daily. The subject listened to music of his or her own choice for approximately 30 min to 1 h, within 2 h to 0.5 h before going to bed. (b) Aroma: the subject does not use aroma daily. The subject spent 1-2 h before going to bed with an aroma of his or her own choice, and the aroma remained available for approximately 3 h after going to bed. (c) Bedding: two types of bedding under development and not yet open to the public were adopted. The subject cannot recognize their difference from normal bedding by sight or touch. The normal bedding is a commercial article whose major materials are polyester and cotton. Thus, for stimuli (a) and (b) the subject felt healing in the conscious state, whereas for stimulus (c) no difference from normal was recognized. The subjects, normal males and females aged 22-30, did not drink every day, went to school or the office every weekday, and spent four nights in the experiment house. The four consecutive test days were set from Monday to Thursday or from Tuesday to Friday.

V. RESULT
1. Healing music: Attention was focused on the first 180 minutes. As shown in Fig. 6, the respiration rate was 14-18 /min on the normal night and 13-15 /min on the music night, a decrease of 2-3 /min. The pulse rate started at 78 and fell to 65 on the normal night, whereas it stayed at 62-65 on the music night, a decrease of approximately 10. Four subjects out of four showed approximately the same decrease.
2. Aroma: The respiration rate was 12-14 on both the normal and the aroma nights, i.e., it showed no decrease. In contrast, the pulse rate started at 65 on both nights and then shifted to approximately 60 on the normal night and 50-55 on the aroma night, a decrease of approximately 7. Four subjects out of five showed approximately the same behavior, i.e., no decrease in respiration rate and a decrease in pulse rate.
3. Bedding: The respiration rate was 17-18 with the normal bedding and 15-18 with the bedding under development, a decrease of approximately 2; a large decrease was seen around 4 h. The pulse rate was 55-60 with the normal bedding and decreased, staying below 55, with the bedding under development.
VI. DISCUSSION
In the present study, a decrease in respiratory and/or pulse rate is taken to mean a healing effect, because the autonomic nervous system relaxed.
1. The healing stimuli (services) gave rise to the following:
a) Satisfaction of mental activity while awake:
The subject chose music or aroma that he or she liked. This expresses the subject's satisfaction as a mental or emotional activity, i.e., a conscious response. Judging from this experiment, the answer was an honest one. b) Satisfaction of biological activity during sleep: biological activity expressed satisfaction by a decrease in respiration and/or pulse rate, as an unconscious response. c) Relationship between the two: when the conscious response showed satisfaction, the unconscious response also showed satisfaction. d) Unconscious response only: for the bedding stimulus, the unconscious response showed satisfaction even though the conscious response showed nothing.
2. Difference in flow: Music was input and flowed through channel 2 and then channel 4 in Fig. 2, while the aroma, a chemical material, was input and flowed through channels 2 and 3 simultaneously. The bedding material worked through neither channel 2 nor channel 4, but only through channel 3.
Fig. 6 (a) Respiratory rate shift measured for a subject who enjoyed healing music: approximately 14-18 /min (NORMAL) versus approximately 13-15 /min (MUSIC)

VII. CONCLUSION
Some simple stimuli (services) have been tried. As a result, an unconscious response reflecting autonomic nervous system activities during sleep was expressed when a conscious response was expressed while awake. In addition, the unconscious response was also expressed when mental activities did not work. However, these results concern only healing or relaxing stimuli. Further study including other stimuli and services will therefore be worthwhile.
ACKNOWLEDGMENT
A grant for the research on music and aroma was provided by the NEXER project, 2009, of the Research Institute of Science and Technology for Society, Japan.
Fig. 6 (b) Pulse rate shift measured for a subject who enjoyed healing music: approximately 65-78 (NORMAL) versus approximately 62-65 (MUSIC)

Author: Hiroaki OKAWAI
Institute: Iwate University / Dept. of Civil and Environmental Engineering, Faculty of Engineering
Street: 4-3-5 Ueda
City: Morioka
Country: Japan
Email: [email protected]
Determination of Reflectance Optical Sensor Array Configuration Using 3-Layer Tissue Model and Monte Carlo Simulation
N.A. Jumadi1,2, K.B. Gan3, M.A. Mohd Ali1, and E. Zahedi4
1 Department of Electrical, Electronics and System Engineering, Universiti Kebangsaan Malaysia, Bangi, Malaysia
2 Department of Electronics Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
3 Institute of Space Science, Universiti Kebangsaan Malaysia, Bangi, Malaysia
4 Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
Abstract— A new reflectance optical sensor array for locating the fetal signal transabdominally has been determined in this study. The selection of the optical sensor array is based on the highest irradiance (μW/m2) value estimated at the respective detectors. A three-layer semi-infinite tissue model consisting of maternal tissue, the amniotic fluid sac, and fetal tissue is employed to study the optical sensor array configuration. Using a statistical error approach, the number of rays injected into the system can be set to 1 million rays with ±3.2% simulation error. The simulation results obtained with the Monte Carlo technique reveal that the diamond configuration, with 40 mm emitter-detector separation, is the most suitable configuration for the reflectance optical sensor array. The selected configuration will be useful for detecting the fetal signal independently of probe position.
Keywords— reflectance optical sensor array, transabdominal, semi-infinite tissue, optical properties, Monte Carlo simulation.
I. INTRODUCTION
The use of optical sensor methods for detecting and measuring fetal oximetry or heart rate has been adopted by many researchers [2-5]. A few commercial optical sensors for fetal oximetry are available on the market, but all share the same limitation: the technique is invasive to the mother. The reflectance optical sensor must be inserted through the uterine cervix before being attached to the head of the baby [6]. It is also reported that the sensor leaves a mark on the skin of the baby and can cause discomfort to the mother [7]. Instead of measuring transvaginally, transabdominal measurement has been proposed by Zourabian et al. and Vintzileos et al. [3, 4]. Nevertheless, neither of these works employs a reflectance optical sensor array technique, since both use bulky spectroscopy equipment to emit light. Only recently, Gan et al. [5] successfully measured the fetal heart rate signal transabdominally using a low-cost light emitting diode (LED) and a silicon photodetector. The clinical results from that study confirm that the position of the probe (the reflectance optical sensor) affects the signal-to-noise
ratio (SNR) of the fetal signal. As suggested there, implementing an optical sensor array that can automatically select the strongest signal may help to overcome this problem. This study is therefore conducted with the objective of determining the most suitable configuration of the reflectance optical sensor array based on the highest irradiance value. To achieve this objective, light propagation through a tissue model mimicking the pregnant woman's abdomen is simulated.
II. METHODOLOGY
A. Geometry of the Tissue Model and Optical Properties
Previous studies highlight the importance of understanding light behavior in tissue and show that tissue characteristics have a significant impact on instrument performance [8, 9]. As shown in Figure 1, the three-layer tissue model is comprised of maternal tissue, the amniotic fluid sac, and fetal tissue. This three-layer tissue of a pregnant woman is assumed to be a semi-infinite tissue model in order to resemble realistic conditions, in which light usually interacts with deeper tissue and experiences multiple scattering and absorption events. To mimic this condition, the fetal layer has been extended to 100 mm along the y-axis. The remaining thickness values of each layer are taken from a previous study [10]: 24 mm and 13 mm for the maternal and amniotic fluid layers, respectively.
Fig. 1 Geometry of the 3-layer tissue model (not to scale) [10]. The maternal layer thickness, dm is 24mm, while thickness for amniotic fluid layer, damn and fetal layer, df are 13mm and 100mm respectively. This semi-infinite model itself is in three-dimensional but for illustration purpose, only x-y plane is shown
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 424–427, 2011. www.springerlink.com
Meanwhile, the optical properties used in this paper are for the near-infrared (NIR) region (890 nm). The absorption coefficient µa, reduced scattering coefficient µs', anisotropy factor g, and refractive index n values are chosen from reported data [10] and are listed in Table 1. Note that the fetal layer is considered to have the same optical property values as the mother.

Table 1 Optical properties of the tissue constituents at 890 nm [11]

Layer           Description              Unit   Symbol  Value
Maternal        Absorption coefficient   mm-1   µa      0.008
                Reduced scattering       mm-1   µs'     0.89
                Anisotropy factor        NA     g       0.80
                Refractive index         NA     nr      1.30
Amniotic Fluid  Absorption coefficient   mm-1   µa      0.002
                Reduced scattering       mm-1   µs'     0.01
                Anisotropy factor        NA     g       0.85
                Refractive index         NA     nr      1.30
Fetal           Assumed same as maternal layer
B. Monte Carlo Simulation
The Monte Carlo method is used mainly in mathematical and physical problems, specifically to model the behavior of stochastic variables. The approach allows a simulation to be run repeatedly using different random numbers, and this key property has been widely used to demonstrate light behavior in tissue models. With advances in optical software, the behavior of light in a transabdominal fetal oximetry instrument can now be investigated. The TracePRO software (Lambda Research Corporation, Littleton, MA), which applies a Monte Carlo algorithm to simulate light propagation, has been used to simulate the amount of light received by the detectors. In TracePRO, three steps are required to develop an optical system. The first step is to build or import the geometrical model; the semi-infinite tissue model depicted in Figure 1 is drawn in three dimensions. The next step is to apply the defined optical, material, and surface properties to the respective components of the generated model. The final step is ray tracing, which enables the user to select the desired wavelength and offers a variety of analysis options. The rays of light are in the form of flux, not photons, and are controlled by a flux threshold as well as the number of starting rays. The graphical user interface (GUI) of the software enables the model to be viewed according to the user's interest.
One way to accurately predict the performance of the optical system is to inject a huge number of rays into the system. However, injecting a huge number of rays requires more computing time. The question that arises is how
many rays are needed for a simulation. Rather than using a trial and error method [11], an estimation of error based on ray statistics (Bernoulli trials) [12] is used to determine the required number of rays. Equation (1) [12] is used to calculate the simulation error, which is formulated as the inverse of the signal-to-noise ratio of the probability density law. The simulation error, or noise-to-signal ratio, depends on the number of rays N and the probability p of a ray arriving at the detector; the number of rays at the detector is n = Np. A small simulation error implies good simulation accuracy. Hence, a suitable sample size can be determined from the equation below.
Noise/Signal = σ/x̄ = √((1 − p)/(Np)) = √((1 − p)/n)    (1)
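Equation (1) is straightforward to evaluate numerically. The following sketch uses p = 0.1%, which is the detection probability that reproduces the simulation errors quoted in Section III.A (an inference, since p itself drops out of the two quoted error values only for this choice).

```python
# Sketch of Eq. (1): Monte Carlo simulation error (noise-to-signal ratio)
# from ray statistics (Bernoulli trials).
import math

def simulation_error(N, p):
    """sqrt((1 - p) / (N * p)) for N launched rays and hit probability p."""
    return math.sqrt((1.0 - p) / (N * p))

P = 0.001  # assumed detection probability (0.1 %)
err_1M = simulation_error(1_000_000, P)   # ~0.032, i.e. +/-3.2 %
err_6M = simulation_error(6_000_000, P)   # ~0.013, i.e. +/-1.3 %
```

Because the error falls only as 1/√N, halving it costs four times the rays, which is exactly the accuracy-versus-time trade-off discussed in Section III.A.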
C. Reflectance Optical Sensor Model and Configuration of the Sensor Array
The simplified model of the reflectance optical sensor array consists of one light source and several detectors. The light source, referred to herein as the emitter, is simulated with the characteristics of an IN6266 LED; its wavelength is 890 nm with a half angle of 20 degrees. The detector is modeled as a square with length, width, and height of 10, 10, and 0.1 mm, respectively. These dimensions are assigned so that the surface area is 100 mm2, resembling the real active area of an OEM silicon photodetector (Edmund Optics, N57-513). A previous finding [11] shows a direct relationship between the emitter-detector separation and the detected optical power contributed by the fetal layer; therefore, the initial emitter-detector separation is set to 40 mm. The reason for this value has been discussed elsewhere [5]. First of all, a square configuration is simulated using eight detectors Dn, numbered from 1 to 8. The detectors are positioned adjacent to each other, with the emitter E located at the origin (x = y = z = 0). Figure 2 shows the square configuration with dimensions of 110 mm x 140 mm.
Fig. 2 Square configuration of sensor array in TracePRO software
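Since the paper does not give explicit detector coordinates, the following sketch assumes a hypothetical 3x3 grid around the emitter, with the axis-aligned spacing equal to the 40 mm emitter-detector separation. The diamond subset D2, D4, D5, D7 follows Table 2, but the mapping of detector numbers to grid positions is an assumption.

```python
# Hypothetical layout sketch (exact coordinates are not given in the paper):
# eight detectors on a grid around the emitter E at the origin.
import math

S = 40.0  # emitter-detector separation (mm)
SQUARE = {                        # (x, y) positions in mm; numbering is assumed
    "D1": (-S,  S), "D2": (0.0,  S), "D3": ( S,  S),
    "D4": (-S, 0.0),                 "D5": ( S, 0.0),
    "D6": (-S, -S), "D7": (0.0, -S), "D8": ( S, -S),
}
DIAMOND = {k: SQUARE[k] for k in ("D2", "D4", "D5", "D7")}  # subset per Table 2

def separation(pos):
    """Euclidean distance from the emitter at the origin (mm)."""
    return math.hypot(*pos)

# Under this layout, every diamond detector sits exactly 40 mm from the emitter,
# while the square's corner detectors sit at 40*sqrt(2) ~ 56.6 mm, which is
# consistent with their much lower simulated irradiance.
```

This kind of coordinate table is also what would be exported to the ray-tracing software when placing detector objects around the source.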
D. Data Analysis
The statistical error is calculated and plotted using Microsoft® Excel (Microsoft Inc., Redmond, WA). Irradiance is one of the fundamental quantities of radiometry. It is computed by dividing the incident flux received at the respective detector by the area of the detector; the incident flux value is taken from the flux report generated by the TracePRO simulation. The irradiance calculation is also done in an Excel spreadsheet and plotted using a suitable chart type.
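The irradiance computation described above reduces to a unit conversion. In the sketch below, the flux value is hypothetical, back-calculated from the 30.1 μW/m2 reported for detector D2 in Table 2.

```python
# Sketch: irradiance = incident flux (from the TracePRO flux report) divided
# by the detector area. The detector active area is 100 mm^2 (10 mm x 10 mm).
DETECTOR_AREA_MM2 = 100.0

def irradiance_uW_per_m2(flux_uW, area_mm2=DETECTOR_AREA_MM2):
    """Convert an incident flux in microwatts to irradiance in uW/m^2."""
    area_m2 = area_mm2 * 1e-6  # mm^2 -> m^2
    return flux_uW / area_m2

# Hypothetical example: an incident flux of 3.01e-3 uW over 1e-4 m^2
# corresponds to the 30.1 uW/m^2 reported for D2.
d2 = irradiance_uW_per_m2(3.01e-3)
```

Doing this in code rather than a spreadsheet makes the mm² to m² conversion explicit, which is where such calculations most often go wrong.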
III. RESULTS/DISCUSSION
A. Estimation of Error in the Monte Carlo Simulation
To compute the simulation error, the probability p is set to 0.1%. As shown in Region A (Figure 3), the simulation error grows considerably for N less than 1 million. A different trend can be seen in Region B, which covers the sample range of interest from 1 million to 6 million rays. According to the graph, the error is 0.032, or ±3.2%, at 1 million rays and is reduced to 0.013, or ±1.3%, at 6 million rays. The simulation error tends toward zero as the number of rays is increased beyond 6 million. In ray tracing, selecting a smaller sample helps to minimize computing time [11]. Since the percentage error for N = 1 million is within the acceptable range, it is advantageous to choose a sample size of N = 1 million. Therefore, as a compromise between accuracy and the time spent tracing the light, one million rays in the form of flux are injected into the three-layer tissue model.
Fig. 3 Simulation error versus number of rays, N; Region A covers N below 1 million and Region B covers 1 million to 6 million rays

B. Configuration of the Sensor Array
Referring to the plotted irradiance values of the square configuration (Fig. 4a), it is noticeable that the irradiance values of D1, D3, D6, and D8 are too low, with D8 having zero incident flux. This is probably because these detectors are farther from the emitter, so more rays converge on the detectors closest to the emitter. The other detectors (D2, D4, D5, and D7) show strong irradiance values, with the highest value (30.1 ± 0.96 µW/m2) at D2. Based on these encouraging results, another sensor array can be established: the diamond configuration (Fig. 4b), a subset of the square configuration. Another simulation is carried out using this configuration; nevertheless, the resulting irradiance values remain the same, because the same number of rays N is used with the same settings. In other words, the probability of a ray reaching the detectors remains the same as long as the number of rays is not changed.
Despite the excellent results of the diamond configuration, a star configuration (Fig. 4c) is also simulated. This configuration gives good results too; however, its limitation is that the emitter-detector separation is not 40 mm for D4, D5, D6, and D7. Because of the shape of the star configuration, the separation of D6 and D7 from the emitter, for instance, is 30 mm. A previous finding [11] states that for separations of less than 40 mm, the total signal contributed by the fetus is smaller because of the banana-shaped pattern of the photon migration process. For this reason, the star configuration is not used. Based on the simulation results presented here, the diamond configuration has been selected. The proposed configuration will be beneficial when probing for the fetal signal. In practical implementation, the sensor array will be placed on the middle of the maternal abdomen; if, for example, the fetus is slightly off the probing area, the nearest detector will still be able to pick up the reflected signal of the fetus. Details of the calculated irradiance values and the number of rays hitting the surfaces of the detectors are given in Table 2.
Table 2 Calculated irradiance values (μW/m2) and number of rays hitting the surface of the respective detectors. The diamond configuration is not shown since it is a subset (D2, D4, D5, D7) of the square configuration

Configuration  Detector  Number of rays  Irradiance [μW/m2]
Square         D1        2               0.0424
               D2        67              30.1
               D3        3               8.33
               D4        65              28.6
               D5        48              19.8
               D6        4               10.6
               D7        58              24.5
               D8        0               0
Star           D2        67              30.1
               D4        36              30.4
               D5        28              9.72
               D6        32              19.9
               D7        29              34.1
Fig. 4 The irradiance values plotted against the respective detectors for three different configurations: a) square, b) diamond, and c) star. The initial emitter-detector separation is fixed at 40 mm. Dn (n = 1-8) denotes the numbered detectors, and E indicates the emitter (light source), centered at the origin (x = 0, y = 0, z = 0)
IV. CONCLUSIONS
In summary, the preliminary modeling in this paper shows that the reflectance optical sensor array configuration can be determined by measuring the strength of the irradiance. On this basis, the diamond configuration with an emitter-detector separation of 40 mm is chosen as the most suitable reflectance optical sensor array configuration. Using the statistical error approach, the number of rays injected into the optical system is set to 1 million, with a simulation error of ±3.2%. The possibility of employing this type of configuration for non-invasive fetal oximetry is promising. Further work will involve circuit development of the reflectance optical sensor array using multiple wavelengths of near-infrared light for the transabdominal fetal oximetry application.
ACKNOWLEDGMENT
The authors would like to thank HPI Innovation Sdn Bhd for the TracePRO software. This work has been supported by research university grant UKM-AP-TKP-07-2009.
REFERENCES 1. Jacques S.L. 1989. Time resolved reflectance spectroscopy in turbid tissue. IEEE Trans. Biomed. Eng. 36: 1155-1161 2. Ramanujam, N., Vishnoi, G., Hielscher, A.H., Rode, M.E., Forouzan, I. & Chance, B. 2000. Photon migration through the fetal head in utero using continuous wave, near infrared spectroscopy: clinical and experimental model studies. J. Biomed. Opt. pp. 163-172.
3. Zourabian, A., Chance, B., Ramanujam, N., Martha, R. & David, A.B. 2000. Trans-abdominal monitoring of fetal arterial blood oxygenation using pulse oximetry. J. Biomed. Opt. 5:391-405. 4. Vintzileos, A.M., Nioka, S., Lake, M., et al. 2005. Transabdominal fetal pulse oximetry using near-infrared spectroscopy. Am J. Obstet. Gynaecol. 192:129-133. 5. Gan, K.B., Zahedi, E. & Mohd. Ali, M.A. 2009. Transabdominal Fetal Heart Rate Detection Using NIR Photoplethysmography: Instrumentation and Clinical Results. IEEE Trans. Biomed. Eng. 56: 2075-2082 6. Reuss J.L. 2004. Factors influencing fetal pulse oximetry performance. J. Clin. Monit. 18: 13-24 7. Maesel A. et al. 1996. Fetal pulse oximetry a methodological study. Acta. Obstet. Gynecol. Scand. 75: 144-148 8. P. D. Mannheimer, J. R. Casciani, M. E. Fein, and S. L. Nierlich. 1997. Wavelength selection for low-saturation pulse oximetry. IEEE Trans. Biomed. Eng. 44: 148–158 9. Reuss J.L. 2005. Multilayer modeling of reflectance pulse oximetry. IEEE Trans. Biomed. Eng. 52: 153–159 10. Zahedi, E. & Gan. K.B. 2008. Applicability of adaptive noise cancellation to fetal heart rate detection using photoplethysmography. Comput. Biol. Med. 38: 31-41 11. Gan K.B. 2009. Non-invasive fetal heart rate detection using nearinfrared and adaptive filtering. Available online from: (http://ptsldigital.ukm.my) 12. Frieden B.R. 1991. Probability, statistical optics, and data testing. Springer-Verlag, Berlin Address of the corresponding author: Author: Institute: City: Country: Email:
IFMBE Proceedings Vol. 35
Author: Nur Anida Jumadi
Institute: Universiti Kebangsaan Malaysia
City: Selangor
Country: Malaysia
Email: [email protected]
Effects of ECM Degradation Rate, Adhesion, and Drag on Cell Migration in 3D H.C. Wong and W.C. Tang Department of Biomedical Engineering, University of California, Irvine, USA
Abstract— Receptors on the cell surface bind to specific proteins in the extracellular matrix (ECM), and intracellular forces generated by actomyosin can be transferred to these sites of contact. These cell-ECM interactions are important for cancer invasion, cell differentiation, and wound healing. In particular, cancer metastasis is a process that is highly dependent on cell migration through a 3D matrix. Cells secrete enzymes, such as matrix metalloproteinases (MMPs), in order to degrade some of the proteins in the ECM and migrate through it. Our 3D model of cell migration serves to investigate how the degradation rate of the ECM influences quantities important for cell-ECM interactions and cell motility. In our model, both the cell-ECM traction and the drag force depended on the bound receptor concentration, and time-dependent equations were used to model the reaction between the MMPs and the ECM proteins. Simulations were conducted over a range of degradation coefficient (α) values from 0 to 1 nm2/s·molec and for drag coefficients of 0.5, 1, and 2 pN·s/μm. Results show that both the cell-ECM traction and the drag force were larger in magnitude at lower values of α, where the concentrations of the bound receptors were the highest. Though the cell velocity showed a biphasic relationship with respect to α, this relationship became less pronounced for lower values of the drag coefficient, where cell-ECM traction became the most important force acting on the cell. The relative contribution of the traction and drag forces is important in determining the range of α needed for maximal cell migration speed. Keywords— ECM degradation, cell-ECM interactions, traction, mathematical model, cell migration.
I. INTRODUCTION Cells interact with the extracellular matrix (ECM), a network of fibrous proteins and soluble factors, via surface receptors that bind to specific extracellular proteins. Forces generated in the interior of the cell by actomyosin contractions can be imparted to the external environment through these sites of cell-ECM contact [1]. Cell-ECM interactions are important for a variety of biological processes, including cancer invasion [2], cell differentiation [3], and wound healing [4]. In particular, cancer metastasis is a process that is highly dependent on cell migration through a 3D matrix [5]. In fact, cell migration in a 3D environment is different from that on a 2D substrate. In
the human body, cells are surrounded by the ECM, and it is necessary for the cells to secrete enzymes, such as matrix metalloproteinases (MMPs), in order to degrade some of the proteins in the external environment before migrating through it [6]. Since it is usually difficult to perform experiments on cells in a 3D matrix, computer simulations have proven to be useful for developing a better understanding of how cells move through the tissues in our body. Our model serves to investigate how the degradation rate of the ECM influences quantities important for cell-ECM interactions and cell motility. We were also interested in how the relative contributions of cell-ECM traction and drag force impacted cell migration. In particular, the expressions used for the cell-ECM traction and viscous drag forces follow those used by Harjanto and Zaman [7]. However, the dependency of the drag force on the bound receptor concentration was also included, as others have done [8, 9, 10]. Ordinary differential equations were used to model the chemical reaction between the MMPs and the ECM proteins.
II. METHODS The cell was assumed to be a sphere with a radius of 25 µm. The Young's moduli of both the cell and the extracellular space were assumed to be 1 kPa. Due to the symmetry in the unidirectional movement of the cell, only half of the cell was explicitly modeled. Tetrahedral elements were used to discretize the cellular space, and the Solid Mechanics module of COMSOL Multiphysics 4.0a was used to solve for the displacements and velocities of the cell from t = 0 to 300 seconds with a time step of 50 seconds. In our model, the cell was assumed to exert traction on the ECM by transmitting the intracellular actomyosin forces to the cell surface receptors that are bound to specific proteins in the extracellular environment. Thus, it was assumed that the cell-ECM traction was proportional to the bound receptor concentration (Cb), which is equivalent to the bound ECM protein concentration. Since it has been observed that cells exert more traction on stiffer substrates [11], we also assumed that this traction was proportional to the Young's modulus of the ECM:
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 428–431, 2011. www.springerlink.com
Ftraction = β1Cb EECM ,
(1)
where β1 is referred to as the adhesion coefficient [7]. The range of β1 was determined by using a ratio of 0.3 of experimentally observed traction magnitudes to substrate stiffnesses [12, 13] and by assuming that the concentration of bound receptors was 10,000 molecules/µm2 [14]. Thus, β1 was 3 × 10-5 µm2 in our simulations. It has been shown that traction forces are normal to the cell boundary for migration on 2D substrates [15]. This argument was extended to our 3D model, in which the tractions were normal to the cell surface. Moreover, cells also encounter a viscous force due to the interactions between the cell surface receptors and the proteins in the ECM. In our model, the drag force per area exerted on the cell surface was given by:
Fdrag,i = β2Cb vi ,
(2)
where β2 is referred to as the drag coefficient, vi is the cell velocity [8, 9, 10], and i = 1, 2, and 3. The concentration of the bound receptors was determined using standard reaction kinetic equations, where the forward rate constant was 10-3 µm2/s [14] and the reverse rate constant was 1 s-1 [8]. It has been shown that cells have polarity, in which the anterior is more strongly attached to the matrix than the posterior. To model this, the reverse rate coefficient was increased linearly from the anterior to the posterior of the cell, where it was 100 times as large [16]. This caused the concentration of bound receptors to decrease towards the rear of the cell, which led to less cell-ECM traction at that location as well. Furthermore, it was assumed that diffusion of the cell receptors and ECM proteins was negligible. The initial concentrations of both the receptors and the ECM proteins were 10,000 molecules/µm2 in the simulations [14]. The process of ECM protein degradation was modeled by time-dependent ordinary differential equations. In the simulations, the concentration of MMPs was assumed to be at steady state, which means that the production and degradation of the MMPs themselves occurred on a much longer time scale than those of the ECM. As a result, the time-dependent differential equation for the ECM proteins reduced to
dCECM/dt = −αCECM² ,

(3)

where α is the ECM protein degradation coefficient [5, 6]. In the simulations, this reaction occurred uniformly throughout the extracellular space. After solving for CECM at each time step, it was used in the receptor-ligand kinetic equation to determine Cb. Note that Cb was also at steady state at each time step because the time constant of the receptor-ligand system was much smaller than that of the ECM protein degradation equation.

III. RESULTS As the ECM protein degradation coefficient was increased, the concentration of the ECM declined, causing the bound receptor (equivalent to the bound ECM protein) concentration on the cell surface to decrease as well. For instance, for α = 0.2, 0.4, 0.6, 0.8, and 1 nm2/s·molec, the maximum bound receptor concentration was about 3905, 2591, 1934, 1542, and 1282 molecules/μm2, respectively. Since the traction and the drag force were both proportional to Cb, these quantities also declined as α was increased. In fact, the degradation profile of Cb was similar to those observed for both types of forces. Figure 1 shows the effects of varying the degradation coefficient on (a) the maximum (anterior) and minimum (posterior) cell-ECM tractions and (b) the maximum drag force per area exerted on the cell surface at t = 300 sec for β2 = 0.5, 1, and 2 pN·s/μm. Note that varying the drag coefficient only altered the drag force and not the traction. Figure 2 shows plots of the average cell velocity in the primary direction of motion for (a) β2 = 2 pN·s/μm, (b) β2 = 1 pN·s/μm, and (c) β2 = 0.5 pN·s/μm. Note that doubling
Fig. 1 Plot of (a) maximum and minimum cell-ECM traction and (b) maximum drag force per area for β2 = 2, 1, and 0.5 pN·s/μm at t = 300 sec
the drag coefficient resulted in a reduction in cell velocity by a factor of about two as well, but the drag force did not increase as significantly. This was because the drag force was a function of both the drag coefficient and the cell velocity. As a result, increases in the former were counteracted by decreases in the latter. The percentage change of the maximum cell velocity from the minimum was also calculated for each data set, and was about 9%, 14%, and 57% for β2 = 2, 1, and 0.5 pN·s/μm, respectively. In other words, larger differences were observed for lower drag coefficient values.

Fig. 2 Plot of the average cell velocity in the direction of motion as a function of the degradation coefficient for (a) β2 = 2 pN·s/μm, (b) β2 = 1 pN·s/μm, and (c) β2 = 0.5 pN·s/μm

IV. DISCUSSION For β2 = 0.5 pN·s/μm, the cell-ECM traction was the more important force acting on the cell surface. As a consequence, both the traction and the velocity were largest at the lower degradation coefficient values and generally declined as the degradation coefficient was increased. Since the traction was dependent on the bound ECM protein concentration, increases in α caused the cell to exert less force on the ECM. The cell migrated at a lower speed as α was increased because the cell-ECM traction was the more dominant force. In particular, as α was increased from 0 to 1 nm2/s·molec, the traction decreased by approximately 82%, whereas the average cell velocity decreased by about 36%. The cell velocity did not decrease as substantially as the traction because of the presence of drag forces that resisted cell movement. It has been shown that cell migration speed exhibits a biphasic relationship with respect to the ECM protein concentration [7]. We have shown here that when the drag coefficient is low enough, the cell-ECM traction dominates and the biphasic relationship is not as pronounced. As the drag coefficient is increased, the biphasic relationship between the migration speed and α becomes more evident. In particular, as β2 was increased from 0.5 to 1 and 2 pN·s/μm, the drag force had a greater impact on the cell migration speed. As a consequence, it was no longer the case that the maximum cell velocity occurred at about α = 0.07 nm2/s·molec (as for β2 = 0.5 pN·s/μm). In fact, maximum velocities were observed at greater degradation values as β2 was increased. Specifically, maximum velocities were observed at about α = 0.2 and 0.4 nm2/s·molec for β2 = 1 and 2 pN·s/μm, respectively. However, the variations in the cell velocities became less significant as the drag coefficient was increased, as mentioned earlier.
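The qualitative trend above can be cross-checked with a standalone sketch: Eq. (3) integrates in closed form to CECM(t) = CECM(0)/(1 + αCECM(0)t), and the steady-state bound receptor concentration follows from the mass-action balance kf(R0 − Cb)(CECM − Cb) = krCb. This is a minimal single-compartment sketch using only the anterior rate constants; it ignores the spatial gradient in the reverse rate and the mechanics, so it reproduces only the monotone decline of Cb with α, not the simulated values.

```python
import math

# Single-compartment sketch of the ECM degradation and receptor-ligand
# model with the rate constants quoted in the Methods. Spatial variation
# and mechanics are ignored (assumption), so only the qualitative trend
# (Cb falls as alpha rises) is expected to match the full simulation.

KF = 1e-3      # forward rate constant, um^2/s
KR = 1.0       # reverse rate constant at the anterior, 1/s
R0 = 10_000.0  # total receptor concentration, molecules/um^2
E0 = 10_000.0  # initial ECM protein concentration, molecules/um^2

def ecm_conc(alpha_nm2, t):
    """C(t) = C0 / (1 + alpha*C0*t), the closed-form solution of
    dC/dt = -alpha*C^2 (Eq. 3)."""
    alpha = alpha_nm2 * 1e-6          # nm^2/(s*molec) -> um^2/(s*molec)
    return E0 / (1.0 + alpha * E0 * t)

def bound_receptors(ecm):
    """Physical (smaller) root of KF*(R0 - Cb)*(ecm - Cb) = KR*Cb."""
    a = KF
    b = -(KF * (R0 + ecm) + KR)
    c = KF * R0 * ecm
    return (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

for alpha in (0.0, 0.2, 1.0):
    cb = bound_receptors(ecm_conc(alpha, 300.0))
    print(f"alpha = {alpha:.1f} nm2/s*molec -> Cb ~ {cb:.0f} molecules/um2")
```

For α = 0, 0.2 and 1 nm2/s·molec at t = 300 s, the computed Cb falls monotonically, consistent with the decline of the bound receptor concentration reported in the Results.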
V. CONCLUSION For traction and drag forces that depend on the bound ECM protein concentration, we have shown how changes in the drag coefficient can influence the cell velocity degradation profile. Similar results would be expected if the adhesion coefficient were altered instead of the drag coefficient. In particular, decreases in the adhesion coefficient would cause the cell-ECM traction to decline and the drag forces to have a greater impact on the movement of the cell. In other words, the relative contributions of the adhesion and drag coefficients are important in determining what range of degradation coefficient is needed to achieve optimal cell migration speed. These simulation results should help to elucidate several biological processes that rely on ECM degradation and migration through a 3D environment, including cancer metastasis. Since only uniform ECM degradation was considered in this paper, a more realistic portrayal of local ECM protein degradation and its effects on cell migration is currently being investigated.
ACKNOWLEDGMENT The ARCS Foundation provided financial support for the main author during this study.
REFERENCES 1. Rhee S (2009) Fibroblasts in three dimensional matrices: cell migration and matrix remodeling. Exp Mol Med 41(12): 858-865 2. Ramis-Conde I, Drasdo D, Anderson A, Chaplain M (2008) Modeling the influence of the E-cadherin-β-catenin pathway in cancer cell invasion: a multiscale approach. Biophys J 95(1): 155-165 3. Pompe T, Glorius S, Bischoff T, et al. (2009) Dissecting the impact of matrix anchorage and elasticity in cell adhesion. Biophys J 97(8): 2154-2163 4. Sakamoto K, Owada Y, Shikama Y, et al. (2009) Involvement of Na+/Ca2+ exchanger in migration and contraction of rat cultured tendon fibroblasts. J Physiol 587(Pt 22): 5345-5359 5. Jeon J, Quaranta V, Cummings P (2010) An off-lattice hybrid discrete-continuum model of tumor growth and invasion. Biophys J 98(1): 37-47 6. Painter K, Armstrong N, Sherratt J (2010) The impact of adhesion on cellular invasion processes in cancer and development. J Theor Biol 264(3): 1057-1067 7. Harjanto D, Zaman M (2010) Computational study of proteolysis-driven single cell migration in a three-dimensional matrix. Ann Biomed Eng 38(5): 1815-1825
8. Gracheva M, Othmer H (2004) A continuum model of motility in ameboid cells. Bull Math Biol 66(1): 167-193 9. Larripa K, Mogilner A (2006) Transport of a 1D viscoelastic actin-myosin strip of gel as a model of a crawling cell. Physica A 372(1): 113-123 10. Sarvestani A, Jabbari E (2009) Analysis of cell locomotion on ligand gradient substrates. Biotechnol Bioeng 103(2): 424-429 11. Ghosh K, Pan Z, Guan E, et al. (2007) Cell adaptation to a physiologically relevant ECM mimic with different viscoelastic properties. Biomaterials 28(4): 671-679 12. Sen S, Kumar S (2010) Combining mechanical and optical approaches to dissect cellular mechanobiology. J Biomech 43(1): 45-54 13. Hur S, Zhao Y, Li Y, Botvinick E, Chien S (2009) Live cells exert 3-dimensional traction forces on their substrata. Cell Mol Bioeng 2(3): 425-436 14. Roy S, Qi H (2010) A computational biomimetic study of cell crawling. Biomech Model Mechanobiol 9(5): 573-581 15. Dembo M, Wang Y (1999) Stresses at the cell-to-substrate interface during locomotion of fibroblasts. Biophys J 76(4): 2307-2316 16. DiMilla P, Barbee K, Lauffenburger D (1991) Mathematical model for the effects of adhesion and mechanics on cell migration speed. Biophys J 60(1): 15-37

Corresponding Author: Henry Wong
Institute: University of California, Irvine
Street: 3120 Natural Sciences II
City: Irvine
Country: USA
Email: [email protected]
Finite Element Analysis of Different Ferrule Heights of Endodontically Treated Tooth J. Kashani, M.R. Abdul Kadir, and Z. Arabshahi Medical Implant Technology Group, Faculty of Biomedical Engineering & Health Sciences, Universiti Teknologi Malaysia, Skudai, Malaysia
Abstract— Objective: To investigate the influence of ferrule height on crown mechanical resistance and on the stress distribution through the root and luting cement, in order to explain restoration loss and root fracture patterns. Material and methods: Three-dimensional models of an adult maxilla and the root of an incisor tooth were developed from Computed Tomography scan images. The periodontal ligament, luting cement, crown and custom post were reconstructed on the computer. A static load of 50 N was applied to the crown at 70° to the occlusal plane. Results: The design with no ferrule had the most crown displacement and the 2 mm ferrule had the least. The 2 mm ferrule design also had the lowest root and luting cement stress. Conclusion: The study suggests that a ferrule increases the mechanical resistance of the crown. Furthermore, a ferrule decreases stress in the dentin and luting cement; consequently, the risks of fracture and restoration loss decline. Keywords— Ferrule effect, Post, Finite Element Analysis, Endodontically treated teeth, root fracture.
I. INTRODUCTION There is a continuously high demand for the restoration of endodontically treated teeth resulting from trauma, deep decay, or prior restoration [1]. In endodontic treatment, the pulp is removed and a post is placed inside the tooth. The purpose of the post placed in an endodontically treated tooth is to retain a core in a tooth with extensive loss of coronal tooth structure [2, 3]. Custom cast posts have been used for many decades [4], and some studies have reported a high success rate for them [5, 6, 7]. A dental ferrule is an encircling band of cast metal around the coronal surface of the tooth. It has been proposed that the use of a ferrule as part of the core or artificial crown may be of benefit in reinforcing endodontically treated teeth [8]. The “ferrule effect” is defined by a 360-degree metal crown collar surrounding parallel walls of dentine and extending coronal to the shoulder of the preparation [9]. Several studies have shown that the ferrule design affects the pattern of root fracture in restored teeth and improves fracture resistance [10, 11, 12, 13, 14, 15], and that ferrule height is a critical factor for mechanical resistance [9, 16].
The purposes of this study were to reconstruct the maxilla and an endodontically treated maxillary incisor tooth from Computed Tomography (CT) scan images of a real patient and investigate the effect of ferrule height on mechanical crown resistance and the stress distribution on the root and luting cement by Finite Element Analysis (FEA).
II. MATERIAL AND METHOD
A three-dimensional (3D) model of an adult maxilla and the root of an incisor tooth was developed from a CT scan image set of 98 slices with 1 mm slice thickness. Using an image processing software package (Mimics, Materialise NV, Leuven, Belgium) and based on the Hounsfield Unit, cortical and cancellous bone were separated and modeled. 3D models of a custom post, luting cement and crown were created in commercial 3D modeling software (SolidWorks 2009, Dassault Systèmes, USA) on the basis of the root geometry, as shown in Fig. 1. The post was designed with 10 mm length inside the root canal. The periodontal ligament (PDL) was modeled based on the root shape of the tooth with a thickness of 0.25 mm [17] and subtracted from the volume of the cortical and cancellous bone. To compare different ferrule heights, five models were developed: no ferrule and 0.5 mm, 1 mm, 1.5 mm and 2 mm ferrule heights (Fig. 2). Commercially available FEA software (CosmosWorks 2009, Dassault Systèmes, USA) was used for static analysis of the models. The models were meshed using 0.5 mm parabolic tetrahedral elements, with an average of 547500 elements and 759150 nodes per model. A 50 N static load was applied to the middle of the lingual surface of the crown at 70° to the occlusal plane (Fig. 1) to simulate masticatory force. To keep the maxilla from moving, the models were fixed on the outer surface of the maxilla with no rotation or translation allowed. Several studies have reported that dentin in endodontically treated teeth is more brittle than dentin in teeth with vital pulps [18, 19]. However, other studies showed that the mechanical properties of endodontically treated and normal dentin were comparable [20, 21]. In this study, dentin for the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 432–435, 2011. www.springerlink.com
endodontically treated model was assigned the same mechanical properties as dentin in teeth with pulp, and all materials were assumed to be homogeneous, isotropic and linear elastic. The mechanical properties of the various simulated components were assigned according to Table 1.
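As a quick sanity check on the boundary condition, the 50 N load applied at 70° to the occlusal plane can be decomposed into components normal to and within that plane (assuming, as the text states, that the angle is measured from the occlusal plane):

```python
import math

F = 50.0          # applied static load, N (from the text)
angle_deg = 70.0  # angle to the occlusal plane (from the text)

# Component perpendicular to the occlusal plane (intrusive direction)
f_normal = F * math.sin(math.radians(angle_deg))
# Component within the occlusal plane (lingual shear direction)
f_shear = F * math.cos(math.radians(angle_deg))

print(f"normal: {f_normal:.1f} N, shear: {f_shear:.1f} N")
```

The dominant component is therefore intrusive, with a smaller lingual shear, which is consistent with the load being applied nearly perpendicular to the occlusal plane.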
Fig. 1 Image of no ferrule model showing components and applied force
Fig. 2 Five models with different dentin ferrule height

Table 1 Mechanical properties of the components used in FEA models

Material                     Elastic Modulus (MPa)   Poisson's ratio
Cortical Bone [23]           13700                   0.3
Cancellous bone [23]         1370                    0.3
Dentine [22]                 18600                   0.32
Periodontal Ligament [24]    0.069                   0.45
Porcelain [25]               69000                   0.28
Gutta Percha [24]            140                     0.45
Luting cement [22]           18600                   0.28
Steel [22]                   210000                  0.3
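For scripting an FE preprocessor, the properties of Table 1 can be collected into a simple lookup. The derived shear modulus below is just the standard isotropic relation G = E/(2(1 + ν)), shown for illustration; it is not a quantity reported in the paper:

```python
# The component properties of Table 1 as a lookup table
# (values copied from the table above).
MATERIALS = {                      # name: (E in MPa, Poisson's ratio)
    "cortical_bone":        (13700.0, 0.30),
    "cancellous_bone":       (1370.0, 0.30),
    "dentine":              (18600.0, 0.32),
    "periodontal_ligament":     (0.069, 0.45),
    "porcelain":            (69000.0, 0.28),
    "gutta_percha":           (140.0, 0.45),
    "luting_cement":        (18600.0, 0.28),
    "steel":               (210000.0, 0.30),
}

def shear_modulus(name):
    """Isotropic linear-elastic shear modulus G = E / (2*(1+nu)), in MPa."""
    e, nu = MATERIALS[name]
    return e / (2.0 * (1.0 + nu))
```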
III. RESULTS The FEA method has been used and shown to be a useful tool for investigating complex systems that are difficult to standardize in in vitro and in vivo studies. Von Mises equivalent stress and strain were chosen as the critical parameters for evaluating the FEA results. Moreover, the average value, taken as the mean magnitude over all elements, was used for stress and strain. Crown displacement was also selected to demonstrate mechanical crown resistance; to calculate the average crown displacement, the mean displacement of all elements of the crown was considered. Fig. 3 shows the average displacement of the crown. It demonstrates that crown displacement was reduced by increasing ferrule height. Crown displacement has an inverse relation with mechanical resistance, meaning that the 2.0 mm ferrule is the most mechanically resistant design and no ferrule is the least. In order to read and explain stress and strain in the root and luting cement accurately, the root was divided into seven sections with a thickness of 2 mm (Fig. 4A) and the luting cement into ten sections with a thickness of 1 mm (Fig. 4B). Fig. 5 illustrates that the average strain at the cement-dentin interface for all models decreases along the root to approximately the middle of the cement length (mid-cement), after which it increases, reaching its maximum in the apical area. From the cervical towards the apical side, for the first 4 mm, the cement of the no-ferrule design tolerated the highest strain, while with increasing ferrule height the magnitude of strain was reduced. From mid-cement to the apical area, there was no significant difference between the cement strains of the models. Fig. 6 demonstrates that increasing ferrule height can slightly reduce the magnitude of the luting cement stress from the cervical area to approximately mid-cement. From mid-cement to the apical area, there is no difference between the different designs.

The stress magnitude in all models decreases slightly from the cervical margin to the mid-cement area, after which it increases remarkably up to the apical area, where its magnitude is almost twice that of the cervical area. Dentin stress at the different sections increased remarkably in all models from the cervical margin towards the apical area, except in the last section (apical area), where it decreased sharply (Fig. 7). The presence of a ferrule resulted in a marked decline in dentin stress from the cervical area to roughly the middle of the root (Fig. 7); increasing ferrule height also reduced dentin stress in the same area.
Fig. 3 The influence of ferrule height on average crown displacement
Fig. 4 Root sections (A) and luting cement sections (B)

Fig. 7 Average von Mises stress at the root in different sections and for different ferrule heights

Fig. 5 Average strain at the dentin – luting cement interface in different sections and for different ferrule heights

Fig. 6 Average von Mises stress at the dentin – luting cement interface in different sections and for different ferrule heights

IV. DISCUSSION Experimental and numerical analysis methods have both been used to analyze the effects of post-core systems. Since experimental methods may be too time consuming and expensive, and require sophisticated procedures that yield coarse results, numerical analysis provides an effective tool. For numerical analysis, the finite element method (FEM) is normally employed. Several studies have investigated stress distribution in endodontically treated teeth using two-dimensional (2D) FEM [26, 27] or three-dimensional (3D) FEM [22, 23, 25]. Since the tooth structure is not symmetrical, 3D FEM is more accurate and gives better results than the 2D method [28, 29]. In this study, the maxilla of a real patient was simulated and 3D FEM was exploited to achieve more precision. The results show that considerable differences exist with respect to root, cancellous bone and luting cement stress, luting cement strain, and crown displacement between models with different ferrule heights. The results of the crown displacement (Fig. 3) confirm that a ferrule preparation increases the mechanical resistance of the crown. There was a significant correlation between ferrule height and dentin stress: increasing ferrule height reduced the average stress in the dentin (Fig. 7), and consequently the fracture resistance is increased. These results have also been observed experimentally and clinically by other researchers [9, 10, 30, 31, 32]. The luting cement plays an important role in load transfer and stability of the crown. Stress and strain in the luting cement cause breakdown of the mechanical bonding between the cement and the other materials, resulting in movement of the crown and microleakage of oral fluids, which accelerates the mechanical failure of the restoration. The results of this study illustrate that increasing ferrule height decreases the stress and strain in the luting cement (Fig. 5 & Fig. 6); therefore, the luting cement in a restoration with a higher ferrule is more resistant to mechanical failure.
V. CONCLUSION The study suggests that a ferrule increases the mechanical resistance of the crown. Furthermore, a ferrule decreases stress in the dentin and luting cement; consequently, models with a ferrule benefited from lower risks of fracture and restoration loss.
ACKNOWLEDGMENT The authors wish to thank Dr. Abbas Azari for his constructive notes on this work and also Universiti Teknologi Malaysia for its support.
REFERENCES 1. Cheung W. (2005) A review of the management of endodontically treated teeth: Post core and the final restoration. J Am Dent Assoc 136: 611-619. 2. Robbins JW. (1990) Guidelines for the restoration of endodontically treated teeth. J Am Dent Assoc 120: 558-566. 3. Goodacre CJ, Spolink KJ. (1994) The prosthodontic management of endodontically treated teeth: a literature review. Part I. Success and failure data, treatment concepts. J Prosthodont 3: 243-250. 4. Morgano SM, Milot P. (1993) Clinical success of cast metal posts and cores. J Prosthet Dent 69:11-16 5. Weine FS, Wax AH, Wenckus CS. (1991) Retrospective study of tapered, smooth post systems in place for 10 years or more. J Endodon 17:293-297 6. Walton TR. (2003) An up to 15-year longitudinal study of 515 metal-ceramic FPDs: part 2. Modes of failure and influence of various clinical characteristics. Int J Prosthodont 16:177-182 7. Bergman B, Lundquist P, Sjogren U et al. (1989) Restorative and endodontic results after treatment with cast posts and cores. J Prosthet Dent 61:10-15 8. Stankiewicz NR, Wilson PR. (2002) The ferrule effect: a literature review. Int Endodon J 35: 575-581 9. Sorensen JA, Engelman MJ (1990) Ferrule design and fracture resistance of endodontically treated teeth. J Prosthet Dent 63:529-536 10. Barkhordar RA, Ryle R, Jan A. (1989) Effect of metal collars on resistance of endodontically treated teeth to root fracture. J Prosthet Dent 61:676–678 11. Milot P, Stein RS. (1992) Root fracture in endodontically treated teeth related to post selection and crown design. J Prosthet Dent 68: 428–435 12. Ng CC, Al-Bayat MI, Dumbrigue HB et al. (2004) Effect of no ferrule on failure of teeth restored with bonded posts and cores. Gen Dent 52:143–146 13. Rosen H, Partida-Rivera M. (1986) Iatrogenic fracture of roots reinforced with a cervical collar. Oper Dent 11:46–50 14. Gluskin AH, Radke RA, Frost SL et al. (1995) The mandibular incisor: Rethinking guidelines for post and core design.
J Endodont 21:33–37 15. Hemmings KW, King PA, Setchell DJ. (1991) Resistance to torsional forces of various post and core designs. J Prosthet Dent 66:325–329 16. Libman WJ, Nicholls JI. (1995) Load fatigue of teeth restored with cast posts and cores and complete crowns. Int J Prosthodont 8: 155–161
17. Rees JS, Jacobsen PH. (1997) Elastic modulus of the periodontal ligament. J Biomaterials 18: 995-999 18. Helfer AR, Melnick S, Schilder H. (1972) Determination of moisture content of vital and pulpless teeth. Oral Surg Oral Med Oral Pathol 34:661-670 19. Rivera EM, Yamauchi M. (1993) Site comparisons of dentine collagen cross-links from extracted human teeth. Arch Oral Biol 38: 541-546 20. Huang TJ, Schilder H, Nathanson D. (1991) Effects of moisture content and endodontic treatment on some mechanical properties of human dentine. J Endodon 18: 209-215 21. Sedgley CM, Messer HH. (1992) Are endodontically treated teeth more brittle? J Endodon 18: 332-335 22. Lanza A, Aversa R, Rengo S et al. (2005) 3D FEA of cemented steel, glass and carbon posts in a maxillary incisor. Dent Mater 21:709–715 23. Asmussen E, Peutzfeldt A, Sahafi A. (2005) Finite element analysis of stresses in endodontically treated, dowel-restored teeth. J Prosthet Dent 94:321-329 24. Ruse ND. (2008) Propagation of erroneous data for the modulus of elasticity of periodontal ligament and gutta percha in FEM/FEA papers: A story of broken links. Dental Materials 24:1717-1719 25. Yaman SD, Alacam T, Yaman Y. (1998) Analysis of stress distribution in a maxillary central incisor subjected to various post and core applications. J Endodont 24: 107-111 26. Ersoz E. (2000) Evaluation of stresses caused by dentin pin with finite element stress analysis method. J Oral Rehabil 27: 769-773 27. Holmes DC, Diaz-Arnold AM, Leary JM. (1996) Influence of post dimension on stress distribution in dentin. J Prosthet Dent 75: 140-147 28. Romeed SA, Fok SL, Wilson NH. (2006) A comparison of 2D and 3D finite element analysis of a restored tooth. J Oral Rehabil 33: 209-215 29. Langton CM, Pisharody S, Keyak JH. (2009) A comparison of 2D and 3D finite element analysis of a restored tooth. Med Eng Phys 31: 668-672 30. Pereira JR, Ornelas FD, Conti PCR et al.
(2006) Effect of a crown ferrule on the fracture resistance of endodontically treated teeth restored with prefabricated posts. J Prosthet Dent 95:50-54 31. Tjan AH, Whang SB (1985) Resistance to root fracture of dowel channels with various thicknesses of buccal dentin walls. J Prosthet Dent 53:496–500 32. Torbjorner A, Karlsson S, Odman PA. (1995) Survival rate and failure characteristics for two post designs. J Prosthet Dent 73:439-444
Author: Jamal Kashani
Institute: Universiti Teknologi Malaysia
City: Skudai
Country: Malaysia
Email: [email protected]
How to Predict the Fractures Initiation Locus in Human Vertebrae Using Quantitative Computed Tomography (QCT) Based Finite Element Method? A. Zeinali1, B. Hashemi2, and A. Razmjoo3
1 Department of Medical Physics, Urmieh University of Medical Science, Urmieh, Iran
2 Department of Medical Physics, Tarbiat Modares University, Tehran, Iran
3 Department of Mechanical Engineering, Tarbiat Modares University, Tehran, Iran
Abstract— The aim of this study was to present an effective specimen-specific approach for predicting the failure initiation load and location in cadaveric vertebrae. Nine thoraco-lumbar vertebrae excised from three cadavers were used as the samples in this study. The samples were scanned using QCT, and their sectional images were segmented and converted into 3D voxel-based finite element models. Then, a large deformation nonlinear analysis was carried out. The equivalent plastic strain was obtained and used to predict the load and locations of failure initiation in each vertebra. Subsequently, all the samples were tested under uniaxial compression and their experimental load-displacement diagrams were obtained. Results showed that failure initiated in the samples at loads below their ultimate strength. The radiographic images showed that the failure initiation occurred in the same portion of the vertebral bodies as predicted by the QCT voxel-based FEM. The method developed and verified in this study can be regarded as a valuable tool for predicting vertebral failure load. Keywords— Human Vertebra, Quantitative Computed Tomography, Compressive Failure, Finite Element Method.
I. INTRODUCTION Noninvasive prediction of strength in vertebral bodies can provide valuable information for the assessment of the risk of fracture in human vertebrae [1-6]. The aim of this study was to present an effective specimen-specific approach for predicting failure initiation locus in cadaveric vertebrae using a quantitative computed tomography (QCT) voxel-based finite element method (FEM).
II. MATERIALS AND METHOD This study was carried out on nine thoraco-lumbar vertebrae excised from three cadavers with an average age of 42 years. The samples were then scanned using the QCT technique. Figure 1 shows a transaxial CT image taken
from the calibration phantom and one of the samples embedded in the vertebral phantom designed specifically for this study.
Fig. 1 The CT image of a vertebra embedded in the phantom
Then, the segmentation technique was performed on each sectional image of the samples obtained by QCT. Finally, the segmented images were directly converted into voxel-based three-dimensional finite element models. Empirical relationships between the QCT-derived BMD and the trabecular bone elastic modulus were used for assigning the required mechanical properties to each element of the models [7-10]. Thereafter, a large deformation nonlinear analysis using a linearly elastic-linearly plastic material model with appropriate boundary conditions was carried out. The equivalent plastic strain was obtained from the nonlinear analysis and used to predict the load and location of failure initiation in each vertebra. Subsequently, all the samples were tested under uniaxial compression and their experimental load-displacement diagrams were obtained. To check the accuracy of this prediction against the experiment, the samples were loaded up to the failure initiation load predicted by the FEM. A plane radiographic image of each sample was obtained to determine its failure initiation location and compare it with that predicted by the FEM.
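The voxel-wise property assignment described above can be sketched as follows. This is a minimal illustration only: the power-law form and the coefficients below are placeholders, not the empirical BMD-modulus relations actually used in the study [7-10].

```python
import numpy as np

def assign_voxel_moduli(bmd_g_cm3, a=4730.0, b=1.56, e_floor=1.0):
    """Map QCT-derived BMD (g/cm^3) to a per-voxel elastic modulus (MPa).

    Assumes a generic power law E = a * rho^b; a, b and e_floor are
    placeholder values, not the relations reported in the paper.
    """
    rho = np.clip(np.asarray(bmd_g_cm3, dtype=np.float64), 0.0, None)
    e = a * np.power(rho, b)
    # Keep a small floor so low-density voxels still form valid elements
    return np.maximum(e, e_floor)
```

In such a pipeline each voxel of the segmented QCT volume receives its own modulus, which is what makes the resulting model specimen-specific.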
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 436–438, 2011. www.springerlink.com
III. RESULTS Our results showed that in the samples, failure initiated and occurred at load levels well below the ultimate strength of the samples. The radiographic images showed that failure initiation happens in the same portion of the vertebral bodies as predicted by the QCT voxel-based FEM (Fig. 2).
Fig. 4 The correlation between the local fracture loads predicted by the QCT voxel-based nonlinear FEM and the experimental results
IV. CONCLUSION
Fig. 2 The predicted local fractures at the posterior wall of one of the samples. Left: The failed elements are shown with red arrows. Right: The plain radiographic image of the same sample from the lateral view. The failed region at the posterior wall is marked with arrows
Similar to the FEM results, a sudden reduction was seen in the slope of the linear portion of the experimental load-displacement diagrams, which was named "the failure initiation point" (Fig. 3).
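The "failure initiation point", i.e. the sudden reduction in the slope of the linear portion of the load-displacement curve, can be located automatically. The sketch below is a hypothetical detection rule (the window size and drop threshold are illustrative), not the procedure used in the paper:

```python
import numpy as np

def failure_initiation_index(load, disp, window=5, drop=0.2):
    """Return the index of the first point whose local stiffness falls more
    than `drop` (20% by default) below the initial linear stiffness."""
    load = np.asarray(load, dtype=np.float64)
    disp = np.asarray(disp, dtype=np.float64)
    slopes = np.gradient(load, disp)            # local tangent stiffness
    k0 = np.median(slopes[:window])             # stiffness of the linear part
    below = np.nonzero(slopes < (1.0 - drop) * k0)[0]
    return int(below[0]) if below.size else -1  # -1: no initiation detected
```

Applied to a bilinear curve whose stiffness drops halfway through, the function flags the first point past the knee.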
The prediction of local failures is clinically important because these failures occur at load levels below the overall ultimate strength of vertebrae and can lead to neurological and skeletal disorders. Hence, the method developed and verified in this study can be regarded as a valuable tool for this purpose, even though more experiments may be required to test vertebrae affected by tumors. Our finite element and experimental results indicate that failure initiation may occur below the ultimate compressive strength of the vertebral body. If, after implementing this procedure on a large number of vertebral samples, it can be shown that the same scenario occurs, the failure initiation load should be regarded as an appropriate criterion for fracture risk estimation. If so, the QCT voxel-based nonlinear finite element method would be a powerful tool for the estimation of fracture risk in the human vertebral body.
V. REFERENCES
Fig. 3 Sudden reduction in the slope of the linear portion of the experimental load-displacement diagrams of one of the samples
It was noted that the load corresponding to this point is strongly correlated with that predicted by the nonlinear finite element analysis (Fig. 4).
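The strength of this agreement can be quantified with a Pearson correlation coefficient between the FE-predicted and experimentally observed failure initiation loads; a minimal sketch (the input arrays in the test are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

A value of r close to 1 for `pearson_r(predicted_loads, measured_loads)` corresponds to the strong linear agreement shown in Fig. 4.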
[1] Whealan KM, Kwak SD, Tedrow JR, Inoue K, Snyder BD. Noninvasive imaging predicts failure load of the spine with simulated osteolytic defects. J Bone Joint Surg 2000; 82:1240-1251.
[2] Snyder BD, Hipp JA, Nazarian A. Non-invasive imaging predicts fracture risk due to metastatic skeletal defects. The Orthopaedic Journal at Harvard Medical School, 2004; 87-94.
[3] Whyne CM, Hu SS, Lotz JC. Parametric finite element analysis of vertebral bodies affected by tumors. J Biomech 2001; 34: 1317–1324.
[4] Tschirhart CE, Nagpurkar A, Whyne CM. Effects of tumor location, shape and surface serration on burst fracture risk in the metastatic spine. J Biomech 2004; 37: 653–660.
[5] Tschirhart CE, Finkelstein JA, Whyne CM. Biomechanics of vertebral level, geometry, and transcortical tumors in the metastatic spine. J Biomech 2007; 40:46–54.
[6] Tschirhart CE, Roth SE, Whyne CM. Biomechanical assessment of stability in the metastatic spine following percutaneous vertebroplasty: effects of cement distribution patterns and volume. J Biomech 2005; 1582–1590.
[7] Kaneko TS, Pejcic MR, Tehranzadeh J, Keyak JH. Relationships between material properties and CT scan data of cortical bone with and without metastatic lesions. Medical Engineering & Physics 2003; 25:445-454.
[8] Kaneko TS, Bell JS, Pejcic MR, Tehranzadeh J, Keyak JH. Mechanical properties, density and quantitative CT scan data of trabecular bone with and without metastases. Journal of Biomechanics 2004; 37:523-530.
[9] Von Stechow D, Nazarian A, Zurakowski D, Muller R, Snyder BD. Metastatic cancer bone tissue behaves mechanically as a rigid porous foam. Transactions of the Orthopaedic Research Society 2003; 28:995.
[10] Kopperdahl DL, Morgan EF, Keaveny TM. Quantitative computed tomography estimates of the mechanical properties of human vertebral trabecular bone. J Orthop Res 2002; 19: 801-805.
Influence of Cancellous Bone Existence in Human Lumbar Spine: A Finite Element Analysis M. Alizadeh1, J. Kashani2, M.R. Abdul Kadir2, and A. Fallahi1
1 Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
2 Medical Implant Technology Group, Faculty of Biomedical Engineering & Health Science, Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
Abstract— The aim of this study was to develop an accurate computational finite element model (FEM) to simulate the biomechanical response of the human lumbar spine under physiological functions. In reality, the anatomically intricate structure of the spine and its complex deformation under different loading situations severely complicate the application of finite element analyses. In computational studies, the model complexity needed to acquire precise results frequently leads to analysis failure. In this study, the importance of including cancellous bone in finite element analyses of the lumbar spine was evaluated. A finite element model of the lumbar spine (L2-L3) was generated from computed tomography images. A complete discectomy was simulated, a single cage was inserted, and posterior instrumentation was added to provide sufficient stability. Stress distribution patterns on two different models of the vertebral body, one including cancellous bone and a cortical shell and the other consisting entirely of cortical bone, were discussed under various loading situations. The FEM developed in this study demonstrated that deviations from the natural spine anatomy change the results, and that accurate results are obtained only when the FEM closely matches the natural model. Therefore, the inclusion of cancellous bone appears necessary in finite element analyses of the spine. Keywords— Finite element study, cancellous bone, lumbar interbody fusion, stress distribution.
I. INTRODUCTION The biomechanical behavior of the spine can be studied through experimental approaches, such as in vitro and in vivo studies, or through numerical methods, such as finite element analyses [1,2]. Although experimental studies are the most direct method for assessing biomechanical behavior, they have limitations that restrict their application [3]. In vitro studies performed in the laboratory are restricted by specimen availability, differences between specimens, and the range of samples [4]. In vivo studies provide data from the natural situation. However, preparing human specimens is difficult, and in animal case studies anatomical differences cause
inaccurate simulation of human spine behavior [4,5]. Moreover, lack of specimens, high cost, and difficulties in simulating spine motions under different load conditions are also concerns in experimental studies [3]. Compared with experimental studies, computational studies via finite element analyses have been applied broadly for analyzing the biomechanical behavior of the intact or diseased spine because of their theoretical advantages [4-6]. Finite element models have been developed for predicting the risk of fracture in the vertebra and intervertebral disc degeneration from the stress distribution pattern over each component of the spine. Theoretical information obtained by finite element studies can be verified by cadaveric in situ testing [5]. To date, published studies have considered a cancellous structure for the vertebral body, and some articles have investigated the effects of different properties to account for the variety of patient bone construction [7]. Therefore, the biomechanical influence of including the cancellous structure has not been definitely illustrated. The posterior lumbar interbody fusion (PLIF) cage has been used after disc removal to provide neural element decompression, anterior column reconstruction, and segmental spinal stabilization, and to prevent bone graft collapse or displacement [8,9,10]. The biomechanical characteristics of numerous commercial cages of different designs have been investigated in several studies [8, 11-19]. In this study, a three-dimensional finite element model (FEM) of the L2-L3 lumbar motion segment was reconstructed and a PLIF cage with posterior instrumentation was inserted. The aim of the presented study is to evaluate the biomechanical effects of the cancellous structure of the spinal vertebra by using finite element analysis.
II. MATERIAL AND METHOD A three-dimensional (3D) model of the human lumbar spine (L2-L3) was created based on a computed tomography (CT) dataset of an adult patient. The images were obtained
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 439–442, 2011. www.springerlink.com
at 1 mm slice thickness. The images were imported into commercial image processing software (Mimics, Materialise NV, Belgium). The cortical and cancellous bone of both vertebrae were separated and converted into a 3D model by means of Mimics. 3D models of the cage and the posterior fixation (rods and screws) were developed using a 3D modeling package (SolidWorks 2009, Dassault Systemes, USA). The intervertebral disc (IVD), consisting of the annulus fibrosus and nucleus pulposus, the left facet (LF), right facet (RF), superior and inferior cartilage end plates (SCE, ICE), posterior rigid fixation (PRF) and the cage for fusion were created to match the natural geometry of L2 and L3 by means of SolidWorks (Fig. 1). The ligaments, namely the anterior longitudinal ligament (ALL), posterior longitudinal ligament (PLL), intertransverse ligament (ITL) and capsular ligament (CL), were built in SolidWorks. In this study the ligamentum flavum (LF), interspinous ligament (ISL), supraspinous ligament (SSL) and the whole IVD were omitted to simulate the actual surgery performed for spinal cord decompression by means of posterior lumbar interbody fusion (PLIF) with a cage, with spine stabilization performed by posterior pedicle-screw-based rigid fixation (Fig. 1). The generation of the finite element model and the analysis were carried out using CosmosWorks 2009 (Dassault Systemes, USA). The model was meshed using 1.0 mm parabolic tetrahedral elements, consisting of 821,439 elements and 1,170,674 nodes. To validate the model, a 10 Nm torque was applied to the model with the IVD to simulate flexion motion, and the range of motion (ROM) was calculated. A 10 Nm torque and a 150 N vertical axial static load were applied to the superior surface of L2 to simulate flexion and extension. All of the materials were assumed to be homogeneous, isotropic and linearly elastic.
Table 1 Mechanical properties of materials

Material                        Elastic modulus (MPa)   Poisson's ratio
Cortical bone [20]              12000                   0.3
Posterior bony elements [20]    3500                    0.25
Cancellous bone [20]            100                     0.3
Facet [21]                      10                      0.4
Annulus fibrosus [22]           3.4                     0.45
Nucleus pulposus [22]           1                       0.499
Endplates [20]                  24                      0.4
PEEK cage [8]                   3600                    0.25
ALL [23]                        7                       0.39
CL [23]                         4                       0.39
LF [23]                         3                       0.39
ITL [23]                        7                       0.39
PLL [23]                        7                       0.39
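For FE preprocessing, the table can be collected into a small lookup structure; the sketch below uses illustrative key names, and the isotropic shear modulus G = E / (2(1 + ν)) follows directly from the listed constants:

```python
# Mechanical properties from Table 1: (elastic modulus in MPa, Poisson's ratio)
MATERIALS = {
    "cortical_bone":      (12000.0, 0.30),
    "posterior_elements": (3500.0, 0.25),
    "cancellous_bone":    (100.0, 0.30),
    "facet":              (10.0, 0.40),
    "annulus_fibrosus":   (3.4, 0.45),
    "nucleus_pulposus":   (1.0, 0.499),
    "endplates":          (24.0, 0.40),
    "peek_cage":          (3600.0, 0.25),
    "ALL":                (7.0, 0.39),
    "CL":                 (4.0, 0.39),
    "LF":                 (3.0, 0.39),
    "ITL":                (7.0, 0.39),
    "PLL":                (7.0, 0.39),
}

def shear_modulus(name):
    """Isotropic shear modulus G = E / (2 (1 + nu)) implied by Table 1."""
    e, nu = MATERIALS[name]
    return e / (2.0 * (1.0 + nu))
```

Note the two-orders-of-magnitude gap between cortical (12000 MPa) and cancellous (100 MPa) moduli, which is why omitting the cancellous core changes the computed stress distribution.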
III. RESULTS A. Validation of the Intact Model The results obtained from the intact lumbar spine (L2-L3) finite element model were compared with and validated against an experimental study [24] with regard to ROM. A pure bending moment of 10 Nm was applied in the main motion plane (flexion). The obtained results are shown in Fig. 2. In terms of the ROM demonstrated, the FEM developed in this study behaves similarly to the real lumbar spine; therefore, the finite element model was used to predict the biomechanical behavior of the human lumbar spine.
Fig. 2 ROM Comparison Between this Study (FEA) and In Vitro Study
B. Von Mises Stress Distribution
Fig. 1 Finite Element Model of Fused Lumbar Spine with Posterior Rigid Fixation
The maximum and average von Mises stresses of the various components of the lumbar spine during two loading conditions, flexion and extension, simulated with a 10 Nm moment and a 150 N axial load, were obtained for two classifications: 1)
Fig. 3 Average Von Mises Stress during Flexion
the lumbar motion segment including a cancellous part and a cortical shell for the vertebral body and 2) assuming the whole vertebral body to be cortical bone. Fig. 3 and Fig. 4 show the average stress values, which were used to assess the effect of cancellous bone in finite element analyses of the lumbar spine. The maximum stress value for the superior endplate during flexion in the model including cancellous bone was 6.2635 MPa and the average value was 0.19142 MPa; thus, the difference between the maximum and average stress values was more than 50%. For the posterior fixation implant during extension, these values were 135.81 MPa and 4.8522 MPa when the cancellous structure was present. Consequently, the maximum stress value is not an accurate and appropriate basis for comparison; therefore, in this study the comparison has been done based on the average von Mises stress (Fig. 3 and Fig. 4).

Fig. 4 Average Von Mises Stress during Extension

B. Range of Motion Analyses
One of the most important factors in judging spine surgeries and implants is a parameter named range of motion (ROM). In this study, which focused on the biomechanical effects of a precise spine finite element model including cancellous bone, ROM was analyzed as well as the von Mises stress. Fig. 5 illustrates the differences observed during the current investigation.

IV. DISCUSSION
In these finite element analyses, the influence of the existence of a cancellous structure in the lumbar spine finite element model was investigated; stress analyses and ROM were discussed, and the difference between the data obtained during analyses of the lumbar spine motion segment in two cases, the first consisting of cancellous bone and a cortical shell and the second consisting of cortical bone only, was evaluated. In this study, the lumbar spine (L2-L3) with an interbody cage stabilized by posterior spine fixation was subjected to data acquisition. The measured ROM is an important factor in spine studies. Fig. 5 shows that the fused lumbar spine with cancellous bone can flex 1.6° and reaches 0.97° in extension. When the vertebral body was modeled as cortical bone only, these values decreased to 1.1° in flexion and 0.82° in extension. Therefore, a 49.02% decline in flexion and a 29.16% decline in extension were observed. The maximum stress values were found on the PEEK cage and the posterior rigid fixation (Fig. 4 and Fig. 5); nevertheless, the maximum difference was observed in the superior and inferior endplates. With cancellous bone present, the average endplate stress during flexion and extension reached 0.87 and 0.50 MPa, respectively, whereas without cancellous bone these values were 0.28 and 0.23 MPa. Therefore, 54% and 25.5% decreases in the average endplate stress were observed during flexion and extension.
Fig. 5 ROM Result for Flexion and Extension
V. CONCLUSION The findings of this study emphasize that accurate and reliable results in finite element analyses, particularly in biomechanical evaluations, rely on a precise FEM analogous to the natural anatomy. Therefore, the omission of cancellous bone affected the results noticeably.
ACKNOWLEDGMENT This study was supported by Universiti Teknologi Malaysia.
REFERENCES
1. Clausen JD, Goel VK, Sairyo K, Pfeiffer M. (1997) A protocol to evaluate semi-rigid pedicle screw systems. J Biomech Eng 119: 364-366
2. Goel VK, Gilbertson LG. (1997) Basic science of spinal instrumentation. Clin Orthop Relat Res 335: 10-31
3. Nabhani F, Wake M. (2002) Computer modelling and stress analysis of the lumbar spine. Journal of Materials Processing Technology 127: 40-47
4. Jones AC, Wilcox RK. (2008) Finite element analysis of the spine: Towards a framework of verification, validation and sensitivity analysis. Medical Engineering & Physics 30: 1287–1304
5. Barrey CY, Ponnappan RK, Song J et al. (2008) Biomechanical Evaluation of Pedicle Screw-Based Dynamic Stabilization Devices for the Lumbar Spine: A Systematic Review. SAS Journal 2: 159-170
6. Zhong ZC, Wei SH, Wang JP et al. (2006) Finite element analysis of the lumbar spine with a new cage using a topology optimization method. Medical Engineering & Physics 28: 90–98
7. Fan CY, Hsu CC, Chao CK et al. (2010) Biomechanical comparisons of different posterior instrumentation constructs after two-level ALIF: A finite element study. Medical Engineering & Physics 32: 203-211
8. Tsuang YH, Chiang YF, Hung CY et al. (2009) Comparison of cage application modality in posterior lumbar interbody fusion with posterior instrumentation—A finite element study. Medical Engineering & Physics 31: 565–570
9. Loguidice VA, Johnson RG, Guyer RD et al. (1988) Anterior lumbar interbody fusion. Spine 13: 366–9
10. Pfeiffer M, Griss P, Haake M et al. (1995) Standardized evaluation of long-term results after anterior lumbar interbody fusion. Eur Spine J 5: 299–307
11. Fantigrossi A, Galbusera F, Raimondi MT et al. (2007) Biomechanical analysis of cages for posterior lumbar interbody fusion. Medical Engineering & Physics 29: 101–109
12. Denozie G, David N. (2006) Biomechanical comparison between fusion of two vertebrae and implantation of an artificial intervertebral disc. Journal of Biomechanics 39: 766–775
13. Chen SH, Zhong ZC, Chen CS et al. (2009) Biomechanical comparison between lumbar disc arthroplasty and fusion. Medical Engineering & Physics 31: 244–253
14. Schleicher P, Beth P, Ottenbacher A et al. (2008) Biomechanical evaluation of different asymmetrical posterior stabilization methods for minimally invasive transforaminal lumbar interbody fusion. J Neurosurg Spine 9: 363-371
15. Grauer JN, Biyani A, Faizan A et al. (2006) Biomechanics of two-level Charité artificial disc placement in comparison to fusion plus single-level disc placement combination. The Spine Journal 6: 659–666
16. Cho W, Wub C, Mehbod AA et al. (2008) Comparison of cage designs for transforaminal lumbar interbody fusion: A biomechanical study. Clinical Biomechanics 23: 979–985
17. Zhong ZC, Wei SH, Wang JP et al. (2006) Finite element analysis of the lumbar spine with a new cage using a topology optimization method. Medical Engineering & Physics 28: 90–98
18. Shikinami Y, Okuno M. (2007) Mechanical evaluation of novel spinal interbody fusion cages made of bioactive, resorbable composites. Biomaterials 24: 3161–3170
19. Zhang QH, Teo EC. (2008) Finite element application in implant research for treatment of lumbar degenerative disc disease. Medical Engineering & Physics 30: 1246–1256
20. Wang JL, Parnianpour M, Shirazi-Adl A et al. (1997) Development and validation of a viscoelastic finite element model of an L2/L3 motion segment. Theoretical and Applied Fracture Mechanics 28: 81-93
21. Elder BD, Vigneswaran K, Athanasiou KA et al.
(2009) Biomechanical, biochemical, and histological characterization of canine lumbar facet joint cartilage. J Neurosurg Spine 10: 623–628
22. Zhang QH, Teo EC, Ng HW et al. (2006) Finite element analysis of moment-rotation relationships for human cervical spine. Journal of Biomechanics 39: 189–193
23. Eberlein R, Holzapfel GA, Frohlich M. (2004) Multi-segment FEA of the human lumbar spine including the heterogeneity of the annulus fibrosus. Computational Mechanics 34: 147–163
24. Adams MA, Dolan P. (1996) Time-dependent changes in the lumbar spine's resistance to bending. Clinical Biomechanics 11: 194-200
Author: Mina Alizadeh
Institute: Universiti Teknologi Malaysia
Street: Taman University
City: Johor
Country: Malaysia
Email: [email protected]
Laser Speckle Contrast Imaging for Perfusion Monitoring in Burn Tissue Phantoms A.K. Jayanthy, N. Sujatha and M. Ramasubba Reddy Indian Institute of Technology, Madras/Department of Applied Mechanics, Biomedical Engineering Group, Chennai, India
Abstract— Modeling and monitoring of blood perfusion in burn skin are presented in this study. A non-invasive laser speckle contrast imaging (LSCI) system is presented here for monitoring different flow conditions in a capillary phantom simulating a burn layer. Speckle contrast variations with respect to changes in burn depth and blood perfusion rate have been studied in this work.
Keywords— Laser speckle, LSCI, blood perfusion, tissue phantom, burn wounds.
I. INTRODUCTION
Approximately 23,268 people died in India in the year 2009 due to burns, according to data published by the National Crime Records Bureau (NCRB), India, which accounts for 7% of the share of various causes of accidental deaths in 2009 [1]. Twenty-one male and 43 female deaths occur every day due to fire according to the Accidental Death and Suicide Clock-2009 published by the NCRB [1]. A burn is a type of injury to the skin or deeper tissues caused by heat, electricity, chemicals, light, radiation or friction. Burn severity is assessed based on the depth, the total body surface area affected, and the age of the victim. Burns are classified according to the depth of injury as first, second and third degree injuries. First degree burns cause redness and pain in the epidermal layer of the skin. Second degree burns extend into the dermal layer of the skin and are marked by blisters. Third degree burns extend into the deep dermis and may also damage the underlying tissue. Clinical evaluation of burn wounds is usually done by visual and tactile means; it is subjective and depends on the experience of the clinician. Hence, the development of a non-invasive, accurate and early method for assessing the depth of burns is of vital importance, aiding clinicians in determining the course of treatment for burn patients. The LSCI technique has been used by many researchers around the world as a non-invasive technique for measuring blood flow. Applications in the medical field include monitoring of capillary blood flow [2], retinal blood flow [3, 4], cerebral blood flow [5, 6, 7], blood flow in a psoriatic affected palm [8], characterization of atherosclerotic plaques [9], free flap measurements [10], etc. In this paper the application of the laser speckle contrast imaging technique in monitoring blood flow in burn phantoms is described.
II. THEORY
A random intensity distribution called a speckle pattern is formed when fairly coherent light is either reflected from a rough surface or propagates through a medium with random refractive index fluctuations [11]. Goodman developed a detailed theory and explained the first order and second order statistics of speckle patterns [12]. The extremely complex speckle pattern is best described quantitatively by the methods of probability and statistics. Speckle, which was originally regarded as an unavoidable and undesired noise, has now gained importance in medical imaging. The speckle pattern obtained from a biological specimen is termed a bio-speckle pattern. The contrast C of a speckle pattern is defined as the ratio of the standard deviation (σs) of the intensity variations to the ensemble average of the intensity [12].
C = Standard Deviation / Mean Intensity = σs / ⟨I⟩    (1)
If the scattering particles producing the speckle pattern are in relative motion, the optical path differences traversed by light travelling from the various particles to the image plane will be constantly changing. This results in a constantly changing speckle pattern termed time-varying speckle [2]. The moving scatterers in blood, namely the blood cells, produce the time-varying speckle. Any variation in the velocity of the blood causes a corresponding increase or decrease in the contrast value of the speckle pattern. The decrease in blood flow in burned tissue and the corresponding increase in the contrast value of the speckle pattern can be utilised in monitoring the progress of the treatment of burn wounds. The speckle pattern is captured by a CCD camera that has a finite integration time, so some of the fluctuations will be averaged out and the net contrast value of the speckle pattern will decrease. Mathematically, the standard deviation (σs) reduces while the mean intensity remains unchanged [2]. The value of the standard deviation (σs) varies from 0 to the mean intensity, and the corresponding values of the speckle contrast C lie between 0 and 1 [2].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 443–446, 2011. www.springerlink.com
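Equation (1) can be applied locally over small pixel neighbourhoods to form a contrast map. The sketch below uses non-overlapping blocks for simplicity, whereas sliding-window implementations such as the one used later in this paper are more common; it is an illustrative numpy implementation, not the authors' software:

```python
import numpy as np

def block_speckle_contrast(img, win=5):
    """Speckle contrast C = sigma_s / <I> over non-overlapping win x win blocks.

    Static regions give C near 0; higher flow blurs the pattern during the
    camera exposure and lowers C.
    """
    img = np.asarray(img, dtype=np.float64)
    h = (img.shape[0] // win) * win
    w = (img.shape[1] // win) * win
    # Partition the image into (win x win) tiles and reduce each tile
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    mean = blocks.mean(axis=(2, 3))
    std = blocks.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-12)  # guard against dark (zero-mean) blocks
```

A uniform (static) image yields C = 0 everywhere, while fully developed speckle yields local contrasts near the theoretical maximum of 1.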
III. MATERIALS AND METHODS A. Burn Tissue Phantom Blood flow is considered to be very little or nil within burned tissue, whereas the underlying tissue retains blood perfusion. Under this assumption, the burn layer is represented by a polytetrafluoroethylene (PTFE) sheet and the underlying perfused tissue is represented by a capillary network flow phantom with blood-mimicking fluid flowing in the capillary-mimicking channels. The capillary network flow phantom was made by grooving grids in a perspex sheet as shown in Fig. 1, with a period of 5 mm, although this period is much larger than that expected in real tissue. The grid was used to mimic a capillary network and was circulated with blood-mimicking fluid (CIRS, Model 046) using a syringe pump (Ravel Hiteks Pvt. Ltd, Model RH-SY10). The PTFE sheet was chosen because its scattering properties are similar to values found for tissue [13, 14], and it is placed on the capillary phantom. PTFE sheets of thickness 0.16 mm and 0.46 mm were chosen for the study to mimic different burn depths. The flow settings in the syringe pump were varied from 30 to 70 ml/hr in increments of 20 ml/hr to mimic different capillary flow conditions that might occur during treatment progress. The phantom was illuminated by the laser source, and the speckle pattern from the target was imaged by the imaging system explained in section B. B. Experimental Setup The schematic of the experimental setup used for the analysis is shown in Fig. 2. A laser source (633 nm red laser) illuminates the burn tissue phantom and the resulting speckle pattern is imaged by a CCD camera. The speckle pattern is digitized by the frame grabber card and processed on the computer using the developed software. The number of pixels used to compute the local speckle contrast can be selected by the user.
The value is typically chosen to balance the trade-off between statistical accuracy which is improved with a larger region and spatial resolution which is improved with a smaller region. A choice of a lower number of pixels reduces the validity of the statistics
Fig. 1 Capillary network flow phantom
Fig. 2 Schematic of experimental setup
whereas the choice of a higher number limits the spatial resolution of the technique [15]. In practice, it is found that a square of 7x7 or 5x5 pixels is usually a satisfactory compromise [16]. A choice of 5x5 pixels has been used for the study in this paper. The contrast values thus calculated are then assigned a false color value according to the false color scale in use, and the resulting false color contrast map is displayed on the monitor. The false color scale is linear in equal steps of contrast. In general, the contrast range obtained in an image is much smaller than the theoretically possible range of 0 to 1 [2]. In order to make full use of the false-color scale, the user also has the option of applying a scaling factor to stretch the contrast range to fill the scale [2].
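The mapping from contrast values to linear false-color indices, including the optional stretch of the observed range, can be sketched as follows; the function name and parameters are illustrative, not the authors' software:

```python
import numpy as np

def contrast_to_color_index(contrast_map, n_colors=256, max_contrast=None):
    """Assign each contrast value a linear false-color index.

    With max_contrast=None the observed range is stretched to fill the
    scale (the optional scaling described in the text); otherwise values
    are mapped linearly on [0, max_contrast].
    """
    c = np.asarray(contrast_map, dtype=np.float64)
    if max_contrast is None:
        lo, hi = float(c.min()), float(c.max())   # stretch observed range
    else:
        lo, hi = 0.0, float(max_contrast)         # fixed absolute scale
    span = max(hi - lo, 1e-12)
    frac = np.clip((c - lo) / span, 0.0, 1.0)
    return (frac * (n_colors - 1)).astype(np.int64)
```

With the stretch enabled, even a narrow measured contrast range (e.g. 0.1 to 0.2) spans the full color scale, which is what makes small perfusion differences visible in the map.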
IV. RESULTS
Experiments were conducted on two sets of burn tissue phantoms with PTFE sheets of thickness 0.16 mm and 0.46 mm. A sample speckle image of the capillary network phantom covered with the 0.16 mm PTFE sheet and the corresponding false color contrast map are shown in Fig. 3(a) and Fig. 3(b). The flow rate was maintained at 30 ml/hr.

Fig. 3(a) Sample speckle image of the burn skin phantom

Fig. 3(b) Corresponding false colour contrast map of Fig. 3(a)

The contrast of one grid completely illuminated by the laser was computed using Eqn. (1). Three sets of images were taken for each condition with a time gap of 10 seconds, and the average speckle contrast of the three images was computed. The results are presented in Table 1 and Table 2.

Table 1 Contrast values with 0.16 mm PTFE sheet

Flow rate    Contrast value
30 ml/hr     0.12888
50 ml/hr     0.11921
70 ml/hr     0.09826

Table 2 Contrast values with 0.46 mm PTFE sheet

Flow rate    Contrast value
30 ml/hr     0.21841
50 ml/hr     0.20470
70 ml/hr     0.19226
From the values in Table 1 and Table 2 it can be inferred that with an increase in thickness of the PTFE sheet there is a corresponding increase in the contrast values, and as the perfusion rate increases there is a corresponding decrease in the contrast value. The increase in the speckle contrast value with increasing thickness of the PTFE sheet correlates well with the theory explained in [14], where an increase in the size of the speckle pattern was observed with an increase in thickness of the PTFE sheet. The decrease in speckle contrast value correlates well with the theory that the speckle contrast value decreases with increasing flow [16]. The two factors can be combined to visualize the depth of the burn and to simultaneously monitor the increase in perfusion values corresponding to the progress of the treatment given to heal the burn wound.
V. CONCLUSIONS Preliminary results on the application of the LSCI technique to analyze the depth of burns in tissue and to monitor
the effectiveness of the treatment in terms of increase in blood perfusion are presented in this paper. A more detailed analysis has to be carried out to give a quantitative measurement of burn depth as well as the flow.
ACKNOWLEDGMENT
The authors acknowledge the research grant provided by ICSR, IIT Madras for conducting this research.
REFERENCES
1. NCRB at http://ncrb.nic.in
2. David Briers J, Sian Webster (1996) Laser speckle contrast analysis (LASCA): a non-scanning full-field technique for monitoring capillary blood flow. J Biomed Opt 1:174-179
3. Haiying Cheng, Timothy Q. Duong (2007) Simplified laser speckle imaging analysis method and its application to retinal blood flow imaging. Opt Lett 32:2188-2190
4. Leonard W. Winchester, Nee Yin Chou (2006) Measurement of retinal blood velocity. SPIE Proc. vol. 6138, Ophthalmic Technologies XVI, San Jose, CA, USA, 2006, pp 61381N-1-61381N-8
5. Pencheng Li, Songlin Ni et al (2006) Imaging cerebral blood flow through the intact rat skull with temporal laser speckle imaging. Opt Lett 31:1824-1826
6. Zakharov P, Volker A C et al (2009) Dynamic laser speckle imaging of cerebral blood flow. Opt Express 17:13904-13917
7. Andrew K Dunn, Hayrunnisa Bolay et al (2001) Dynamic imaging of cerebral blood flow using laser speckle. J Cereb Blood Flow Metab 21:195-201
8. Jayanthy A K, Sujatha N, Ramasubba Reddy M (2010) Laser speckle contrast imaging based blood flow analysis in normals and in patients with skin diseases. WASET Proc. vol. 69, International Conference on Computer, Electrical, and Systems Science, and Engineering, Singapore, 2010, pp 80-82
9. Nadkarni S K et al (2005) Characterization of atherosclerotic plaques by using laser speckle imaging. Circulation 112:885-892
10. Leonard W Winchester, Nee Yin Chou (2006) Monitoring free tissue transfer using laser speckle imaging. SPIE Proc. vol. 6078, Photonic Therapeutics and Diagnostics II, San Jose, CA, USA, 2006, pp 60780G-1-60780G-8
11. Dainty J C (1975) Laser speckle and related phenomena. Springer-Verlag, New York
12. Goodman J W (1976) Some fundamental properties of speckle. J Opt Soc Am 66:1145-1150
13. Cheong W F, Prahl S A, Welch A J (1990) A review of the optical properties of biological tissues. IEEE J Quantum Electron 26:2166-2185
14. Ajay Sadhwani et al (1996) Determination of Teflon thickness with laser speckle. I. Potential for burn depth analysis. Appl Opt 35:5727-5735
15. Haiying Cheng, Qingming Luo et al (2003) Modified laser speckle imaging method with improved spatial resolution. J Biomed Opt 8:559-564
16. David Briers J (2007) Laser speckle contrast imaging for measuring blood flow. Opt Appl 37:139-152
Corresponding Author: Dr. N. Sujatha
Institute: Indian Institute of Technology, Madras
Street: IIT Madras
City: Chennai
Country: India
Email: [email protected]
Microdosimetry Modeling Technique for Spherical Cell
M. Nazib Adon1, M. Noh Dalimin2, N. Mohd Kassim3, and M.M. Abdul Jamil1
1 Modeling and Simulation Research Laboratory, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
2 Department of Science and Mathematics, Faculty of Science, Art and Heritage, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Malaysia
3 Centre of Optoelectronic Technology (COeT), Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
Abstract— Electroporation, a bio-physical effect on cells exposed to an external electric field, is gaining applications in medical treatments, especially to create pores through a cell membrane and allow uptake of DNA into a cell. The efficacy of this treatment depends on the magnitude and distribution of the applied electric field, in addition to physiological parameters such as the conductivities and relative permittivities of the cell membrane and cytoplasm. Physical parameters, such as the thickness and size of the cell, also influence the efficiency of the electroporation technique. In this research, the electric field distributions of spherical cells were studied using the Finite Integration Technique (FIT), to explicate the difference between the analytical and numerical responses of the cells for a given input voltage. For this purpose, a quasi-static approach based on the CST EM STUDIO® software was used. A comparison of the induced transmembrane potential obtained analytically against the numerical technique shows a discrepancy of no more than 2% for the spherical cell, for an applied field of 1 V/m and a 10 nm thick cell membrane.

Keywords— Electric fields, Cells, Finite Integration Technique, CST EM STUDIO®.

I. INTRODUCTION
Microdosimetry, the quantitative evaluation of the electric field on the cell membrane in the process of cell electroporation, has received considerable interest [1]. However, most of the work to date on the spherical cell shape is based on an analytical approach [2]. This has motivated us to use numerical techniques to conduct microdosimetry on spherical cells. Nevertheless, numerical microdosimetry is confronted with a major difficulty, namely the treatment of the thin cell membrane, which quite often leads to extremely large memory usage and long simulation times. In this paper, the FIT (Finite Integration Technique) for the quasi-static condition has been applied to evaluate the electromagnetic (EM) field interaction with cells under radiation at different frequencies. The influences of the cell shape and its orientation to the applied electric field have been assessed [3]. The results obtained have cast light on a number of observations in the process of electroporation.

II. MODELING AND SIMULATION
A. Model of a Cell
The real cell structure is very complex and difficult to model, so many simplified models have been used to facilitate the study of cells, such as the circuit model [4], the parallel-plates model [5] and the layered model [6]. Each model has its advantages and suitability. The layered model primarily represents the structure and dielectric properties of a membrane, so it was chosen to evaluate the EM cell reaction under external exposure, as shown in Fig. 1.

Fig. 1 Double layer spherical cell model

The initial cell model in this study is a rigid particle surrounded by a thin shell, representing a cell with membrane, with homogeneous, lossy and dispersive dielectric properties for each layer [7]. This model allows us to determine the response of electric fields on the entire cell, including the plasma and intracellular membrane.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 447–449, 2011. www.springerlink.com
B. An Analytical Approach
In this analytical approach, substructures of the cell were neglected (the cytoplasm was treated as a homogeneous medium which fills the entire cell). In the general Maxwell equations, Faraday's law and Ampere's law state that a time-changing magnetic field acts as a source of electric field and, in turn, a time-changing electric field acts as a source of magnetic field; thus, when the fields vary with time, they are coupled. With the quasi-static assumption, however, the time rates of change of both E and H are small enough to be neglected. In this case the electric and magnetic fields can exist independently (they are uncoupled):

∇ × E = 0    (1)
∇ × H = J    (2)

Under the quasi-static approximation, the governing equations of the electrostatic model are Gauss's law and the constitutive relation:

∇ · D = ρ    (3)
D = εE    (4)

where ∇ · J = 0 is the continuity condition and ρ is the volume charge density. Considering that E is irrotational, the scalar electric potential V is defined by

E = −∇V    (5)

By substituting (5) and (4) into (3), Poisson's equation is obtained as follows:

∇ · (ε∇V) = −ρ    (6)

In a source-free medium, i.e. ρ = 0, Poisson's equation changes to

∇²V = 0    (7)

which is known as Laplace's equation. By solving Laplace's equation, the internal and external potentials and electric field patterns of the object studied are determined.

C. A Numerical Approach
The spherical cell was modeled as shown in Fig. 1, using the simulation package CST EM STUDIO® (CST EMS) [8]. The CST EMS simulator is an interactive package that uses Finite Integration Technique (FIT) analysis to solve two-dimensional electrostatic problems.

Fig. 1 Spherical cell model used in this study (membrane, cytoplasm and external medium)

Two electrodes are placed on both sides of the cell. The cell radius is nominally 10 µm and the membrane thickness is 10 nm. The cell is exposed to a linearly polarized plane wave with an electric field E0 of 1 V/m in the external medium. The dielectric properties of the cytoplasm, membrane and external medium are listed in Table 1.

Table 1 Dielectric properties of cell compartments [9]

Compartment        Permittivity (ε)   Conductivity (σ, S/m)
Cytoplasm (#1)     55.6               1.43
Membrane (#2)      11.3               0
External (#3)      75.3               2.04

In the process of cell electroporation, short direct-current (DC) pulses are quite often used, so the quasi-static approximation can be applied in this study [10]. Therefore, the FIT for the quasi-static condition can be performed on the cell model.

D. Results and Discussions
Figure 2 shows the maximum electric field intensity calculated by analytically solving Laplace's equation, compared with the quasi-static FIT result shown in Figure 3.
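For orientation, the analytical solution of Laplace's equation for a spherical cell in a uniform field leads, to first order for a thin and essentially non-conducting membrane, to the classical steady-state induced transmembrane potential ΔVm = 1.5·E0·a·cosθ (the standard Schwan-type textbook relation, not an equation quoted from this paper). A minimal sketch using the field strength and cell radius stated above:

```python
import numpy as np

# First-order steady-state induced transmembrane potential of a spherical
# cell in a uniform field (classical textbook result, assumed here; not
# taken verbatim from the paper): dVm(theta) = 1.5 * E0 * a * cos(theta).
E0 = 1.0                 # applied field, V/m (value used in the paper)
a = 10e-6                # cell radius: 10 um (value used in the paper)
theta = np.linspace(0.0, np.pi, 181)
dVm = 1.5 * E0 * a * np.cos(theta)
print(dVm.max())         # 1.5e-05 V at the pole facing the field
```

This gives the angular variation over the membrane that a numerical FIT solution can be checked against, which is the kind of comparison reported in the abstract's 2% figure.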
Fig. 2 Electric field distributions along the axis parallel to the external field (analytical technique)

Fig. 3 Electric field distributions along the axis parallel to the external field (numerical technique)

The left part of the separator symbol in Figures 2 and 3 shows the electric field distribution near the membrane of the cell. The field inside the cytoplasm is slightly higher than the external field strength (1 V/m), and the field outside the cell is lower than the external field source. It is observed that the Laplace solution and the quasi-static FIT solution are closely similar, almost identical; the largest discrepancy in electric field between them is no more than 2%, which is acceptable in the estimation of the field distribution in cell modeling.

III. CONCLUSIONS
The electric field distributions in spherical cells were investigated using an analytical method based on Laplace's equation and numerical FIT analysis for plane wave propagation. The cell membrane showed higher electric field intensities compared with the cytoplasm and the external medium of the cell. The field analysis results can be used to evaluate the effectiveness of the electric field distribution on realistically shaped cells. The electrical characteristics of the membrane and the cytoplasm, such as their conductivity and permittivity, as well as the membrane thickness, also govern the response due to the intensity and distribution of the applied electric field. A comparison between the analytical solution and the numerical modeling shows close correlation, indicating the viability of the modeling, the model parameters and the chosen geometry. This indicates that electric field analyses could be used for selecting suitable parameters for an effective electroporation process. In the future, this fast numerical microdosimetry technique (FIT for the quasi-static condition) can be applied to other realistic cell models. Thus, the preliminary findings of this research should not be understated.

ACKNOWLEDGMENT
The authors would like to thank Marli Strydom and Frank Wei from CST – Computer Simulation Technology AG, Bad Nauheimer Strasse 19, 64289 Darmstadt, Germany for technical advice.

REFERENCES
1. Wachtel H (1992) Methodological approaches to EMF microdosimetry. Bioelectromagnetics Suppl 1:159-160
2. Wang Z, Alfadhl Y, Chen X (2007) Numerical microdosimetry of complex shaped cell models in electroporation. IEEE Antennas and Propagation Society International Symposium, 2007
3. Glaser R (2001) Biophysics, 4th ed. Springer-Verlag, Berlin Heidelberg New York
4. Deng J et al (2003) The effects of intense submicrosecond electrical pulses on cells. Biophys J 84(4):2709-2714
5. Gimsa J, Wachner D (1998) A unified resistor-capacitor model for impedance, dielectrophoresis, electrorotation, and induced transmembrane potential. Biophys J 75(2):1107-1116
6. Wang Z (2009) Electromagnetic field interaction with biological tissues and cells. Faculty of Engineering, University of London, London
7. Alekseev S I, Ziskin M C (2001) Millimeter wave power density in aqueous biological samples. Bioelectromagnetics 22(4):288-291
8. CST user's manual, CST EM STUDIO SUITE 2010
9. Liu L M, Cleary S F (1995) Absorbed energy distribution from radiofrequency electromagnetic radiation in a mammalian cell model: effect of membrane-bound water. Bioelectromagnetics 16(3):160-171
10. Weaver J C, Chizmadzhev Y A (1996) Theory of electroporation: a review. Bioelectrochemistry and Bioenergetics 41(2):135-160
Author: Mohamad Nazib Adon
Institute: Modeling and Simulation Research Laboratory, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia
City: Batu Pahat, Johor
Country: Malaysia
Email: [email protected]
Recurrent Breast Cancer with Proportional Homogeneous Poisson Process C.C. Chang School of Applied Information Sciences, Chung Shan Medical University and Information Technology Office, Chung Shan Medical University Hospital, Taichung, Taiwan
Abstract–– All cancers are usually classified further according to the extent or stage of disease, so that therapies may be tailored to the particular disease stage. Moreover, detection of asymptomatic recurrences is associated with prolonged overall survival and survival from the time of initial detection of recurrence. This paper applies Bayesian reference analysis, widely considered the most successful method for producing objective, model-based posterior distributions, to an inferential problem of recurrent breast cancer in survival analysis. A formulation is considered where individuals are expected to experience repeated events, along with concomitant variables. In addition, the sampling distribution of the observations is modeled through a proportional intensity homogeneous Poisson process. The medical records and pathology were accessible through the Chung Shan Medical University Hospital Tumor Registry. Finally, this study argues that improved surveillance after treatment might lead to earlier detection of relapse, and that precise assessment of recurrent status could improve outcome.

Keywords–– Reference Analysis, Recurrent Breast Cancer, Poisson Process, Bayesian Inference.
I. INTRODUCTION
The theory for survival data has been sufficiently developed to analyze the function of risk or survival of a patient. The methodology is designed to determine which variables affect the form of the risk function and to obtain estimates of these functions for each individual. This paper involves following individuals until the occurrence of some event of interest. Frequently, this event does not occur for some units during the period of observation, thus producing censored data. Another characteristic of survival data is that some events of interest are not terminal: events may occur more than once for the same individual, producing recurrent events. In fact, recurrent events pervade many studies in a variety of fields, and hence it is of paramount importance to have appropriate models and methods of statistical analysis. Lifetime data where more than one event is observed on each subject arise in areas such as manufacturing and industrial reliability, biomedical studies, criminology and demography, where the event of primary interest is recurrent, so that for a given unit the event may be observed more than once during the study (Chang and Cheng, 2007). For example, several tumors may be observed for an individual; medical settings include outbreaks of disease (e.g., encephalitis), repeated hospitalization of end-stage renal disease patients, recurrent pneumonia episodes in patients with human immunodeficiency syndrome, and angina pectoris in patients with chronic coronary disease. That is, the data on the i-th individual consist of the total number m_i of events observed over the time period (0, T_i] and the ordered epochs of the m_i events, 0 ≤ t_i1 ≤ t_i2 ≤ ... ≤ t_im_i ≤ T_i. Additionally, we may have covariate information on each subject, together with a vector of censoring indicators. In many studies, interest may lie in understanding and characterizing the event process for an individual subject, or may focus on treatment comparisons based on the time to each distinct event, the number of events, the type of events and the interdependence between events.
The development of statistical models based on counting process data was originally introduced by Aalen (1978). Several methodologies have been proposed to analyze the problem of recurrent events. Lawless and Nadeau (1995) apply the Poisson process to develop models that focus on the expected number of events occurring in a given time interval. There is an extensive literature on point process models (e.g., Cox and Isham, 1980); this approach offers tools powerful enough to generalize to several situations. In this paper the problem is treated with the focus on point counting processes. The Poisson process has been well studied, and many recent discussions of lifetime and stochastic process transition data have focused on modeling and analyzing the effects of so-called unobserved heterogeneity (e.g., Flinn and Heckman, 1982). In addition, the model presented will be studied from a Bayesian perspective. It is well known that under a Bayesian perspective the posterior distribution for the quantity of interest represents the most complete inference that can be made with respect to this quantity. The posterior distribution combines the information contained in the data with the prior information
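The data structure just described, under the proportional-intensity homogeneous Poisson model adopted later in the paper, can be illustrated with a short simulation. The baseline rate, covariate effect and covariate distribution below are hypothetical choices for illustration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (illustration only, not the paper's estimates):
lam0, beta = 0.5, 0.8           # baseline intensity (events/year), covariate effect
n = 1000                        # number of individuals
x = rng.binomial(1, 0.4, n)     # a binary covariate, e.g. a stage indicator
T = rng.uniform(1.0, 5.0, n)    # follow-up periods T_i, independent of m_i

# Proportional-intensity homogeneous Poisson process: individual i has
# intensity lam_i = lam0 * exp(beta * x_i), so the event count m_i over
# (0, T_i] is Poisson(lam_i * T_i).
lam = lam0 * np.exp(beta * x)
m = rng.poisson(lam * T)

# Given m_i, the ordered epochs t_i1 <= ... <= t_im_i are distributed as
# uniform order statistics on (0, T_i], a property of the homogeneous process.
epochs = [np.sort(rng.uniform(0.0, T[i], m[i])) for i in range(n)]
```

Each simulated record (m_i, t_i1, ..., t_im_i, T_i, x_i) has exactly the form described in the text above.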
about the quantity of interest. The use of a prior function that represents lack of prior knowledge about the quantity of interest has been a constant in the history of Bayesian inference; key pointers to the relevant literature include Bayes (1763) and Laplace (1812). Reference analysis, introduced by Bernardo (1979) and widely developed by Berger and Bernardo (1989, 1992a, 1992b), is widely considered today the most successful algorithm for deriving non-informative priors. In this project, reference analysis is developed for a survival model based on a proportional intensity Poisson process, where individuals may be expected to experience repeated events and concomitant variables are observed. The methodology is illustrated using recurrent breast cancer data, for which medical records and pathology were reviewed for all patients accessible through the Chung Shan Medical University Hospital Tumor Registry. This retrospective study was reviewed and approved by the Chung Shan Medical University Hospital Medical Institutional Review Board. Section II presents a review of the research problem relevant to the study. Section III contains an overview of reference analysis, where the definition is motivated and heuristic derivations of explicit expressions for the one-parameter, two-parameter, and multi-parameter cases are sequentially presented. In Section IV we describe the survival model. In Section V the theory is applied to an inference problem, the parameters of the survival model, for which no objective Bayesian analysis has been previously proposed. Some work items and expected contributions are presented in Section VI.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 450–457, 2011. www.springerlink.com
II. STATEMENT OF THE PROBLEM
Breast cancer remains one of the leading causes of cancer-related death among women globally (Parkin et al., 2001; Goldie et al., 2001). Even though morbidity and mortality have been decreasing in recent years, breast cancer has the second highest morbidity rate among cancers in women, and its mortality rate is the fourth highest of the top ten cancers in Taiwan. The cure rate of breast cancer is quite high if it is detected early, but approximately 30% of International Federation of Gynecology and Obstetrics (FIGO) stage IB2 to stage IV disease will ultimately recur despite modern multimodality treatment (Lai et al., 2001; Waggoner, 2003). Once the primary treatment has failed, the opportunity for a secondary cure is slim. Several factors other than clinical staging probably affect the ultimate prognosis of early-stage breast cancer. In other words, early detection of recurrence may impact survival. Moreover, detection of asymptomatic recurrences is associated with prolonged overall survival and survival from the time of initial detection of recurrence (Bodurka-Bevers et al., 2002). Therefore, this paper argues that improved surveillance after treatment might lead to earlier detection of relapse, and that precise assessment of recurrent status could improve outcome.
In Taiwan, breast cancer is the most common malignancy in women, contributing a quarter of all female cancer cases. It remains one of the most pressing medical problems for women. Cancer stage is based on the size of the tumor, whether the cancer is invasive or non-invasive, whether lymph nodes are involved, and whether the cancer has spread beyond the breast, and the disease progresses through various stages. Stage 0 describes non-invasive breast cancers, such as DCIS and LCIS; in stage 0 there is no evidence of cancer cells or non-cancerous abnormal cells breaking out of the part of the breast in which they started, or of getting through to or invading neighboring normal tissue. Stage I describes invasive breast cancer (cancer cells are breaking through to or invading neighboring normal tissue). Stage II is divided into subcategories known as IIA and IIB. Stage IIA describes invasive breast cancer in which no tumor can be found in the breast but cancer cells are found in the axillary lymph nodes, or the tumor measures 2 centimeters or less and has spread to the axillary lymph nodes, or the tumor is larger than 2 centimeters but not larger than 5 centimeters and has not spread to the axillary lymph nodes. Stage IIB describes invasive breast cancer in which the tumor is larger than 2 but no larger than 5 centimeters and has spread to the axillary lymph nodes, or the tumor is larger than 5 centimeters but has not spread to the axillary lymph nodes. Stage III is divided into subcategories known as IIIA, B, and C. Finally, Stage IV describes invasive breast cancer in which the cancer has spread to other organs of the body, usually the lungs, liver, bone, or brain.
There is a long time interval for the progression to the invasive phase. Through screening and treatment in its early phase, both the incidence and the death rate of breast cancer can be decreased significantly. Because breast cancer is a cancer which can be controlled and avoided, the causes of and treatments for breast cancer have been described sufficiently in many advanced studies. On the other hand, there is little research on the relationship between recurrent events and the mortality and incidence rates. Indeed, recurrent breast cancer is a devastating disease for those women unfortunate enough to suffer such an event. Patients with recurrent disease or pelvic metastases have a poor prognosis, with a 1-year survival rate between 15 and 20% (Berek and Hacker, 2005). Thus, the treatment of recurrent breast cancer is still a clinical challenge. When the recurrence is not
surgically resectable and/or suitable for curative radiation, therapeutic options are limited. In some advanced countries the combination of Avastin and a taxane is preferred, since this cancer immunotherapy was able to show a statistically significant improvement in overall survival (OS) without impairing quality of life through intolerable toxicity. But one has to be careful: owing to a change in primary therapy since 1999, when concomitant chemotherapy and radiotherapy became standard (Peters III et al., 2000; Morris et al., 1999; Rose et al., 1999; Pearcey et al., 2002), and owing to the current investigation of the role of neo-adjuvant chemotherapy (EORTC 55994 (Cochrane Database of Systematic Reviews, 2004)), most people with recurrent breast cancer will already have been challenged with a chemotherapeutic agent. This will influence responses in secondary treatment lines and will limit comparison of new studies with older ones that included more chemo-naive patients. Therefore, in the absence of surgical or radiotherapeutic indications, chemotherapy should be targeted at the prolongation of survival with minimum morbidity and at the improvement of subjective symptoms, thus preserving quality of life. Unfortunately, under these conditions there is no evidence of a significant impact on survival or on quality of life. For these reasons, the role of chemotherapy in recurrent disease remains to be defined, and the search for more active and less toxic agents must be continued. However, questions remain: What is the rate of progression, regression, and/or stasis from one stage to another? What is the time interval in which these changes are detected? At stake are the considerable financial cost and the cost-effectiveness of implementation; in other words, the rate of progression through all the stages of dysplasia, from normal to invasive, must be established, and the frequency of progression for the various stages of cancerous lesions must be known.
In order to examine this issue, we would like to highlight that recurrent events in breast cancer are frequently not fatal. This paper applies Bayesian reference analysis to an inferential problem of recurrent breast cancer in survival analysis. A formulation is considered where individuals are expected to experience repeated events, along with concomitant variables. In addition, the sampling distribution of the observations is modeled through a proportional intensity homogeneous Poisson process. This direction will hopefully give us some additional therapy strategies for patients with recurrent breast cancer.
III. AN OVERVIEW OF REFERENCE ANALYSIS
The notion of a non-informative prior, that is, of a prior which describes lack of prior knowledge about the quantity of interest, has been the object of many debates within the Bayesian community. What is wanted is a prior function which, by formal use of Bayes theorem, produces a posterior distribution dominated by the information provided by the data (Bernardo, 1997b; Berger et al., 2005). Objective reference priors do not depend on the data, but they do depend on the probabilistic model that is assumed to have generated the data. The basic idea is as follows: the amount of information about a quantity of interest θ that we expect a clinical record to provide is obviously a function of our prior knowledge of θ. If we already have good prior knowledge of θ, we do not expect to learn much from the clinical experience; on the other hand, if the prior knowledge of θ is scarce, then the data may be expected to provide a large amount of useful information. In other words, the larger the amount of available prior information, the smaller the quantity of information to be expected from the data. An infinitely large clinical trial would eventually supply all the remaining information about the quantity of interest; Bernardo (2005) called this quantity the missing information. Thus, it is natural to define a prior that reflects no prior knowledge, or better, that yields a posterior dominated by the data, as a prior that maximizes the missing information about the quantity of interest. However, as the missing information is defined as a limit that is not necessarily finite, the reference prior is defined as a special type of limit of a sequence of priors that maximize the expected information from successively larger clinical trials. In this section we summarize the formal construction of reference priors, following Bernardo (2005).

A. One Parameter

Definition 1: Consider a clinical record ε which consists of one observation x from p(x | φ), φ ∈ Φ ⊂ ℜ. Let z_k = {x_1, ..., x_k} be the result of k independent replications of ε. Then, under suitable regularity conditions,

π_k(φ) = exp{ ∫ p(z_k | φ) log q(φ | z_k) dz_k }    (1)

where q(φ | z_k) is an asymptotic approximation to the posterior distribution p(φ | z_k). The reference posterior distribution is a function π(φ | x) such that

lim_{k→∞} ∫_Φ π_k(φ | x) log[ π_k(φ | x) / π(φ | x) ] dφ = 0,

where

π_k(φ | x) = p(x | φ) π_k(φ) / ∫_Φ p(x | φ) π_k(φ) dφ,    k = 1, 2, ...

A reference prior for φ is a function which, for any data, provides the reference posterior π(φ | x) by formal use of Bayes theorem, i.e., a positive function π(φ) such that, for all x ∈ Χ,

π(φ | x) = p(x | φ) π(φ) / ∫_Φ p(x | φ) π(φ) dφ    (2)

Thus the reference prior π(φ) is the limit of the sequence {π_k(φ), k = 1, 2, ...} defined by (1), in the precise sense that the information-type limit of the corresponding sequence of posterior distributions {π_k(φ | x), k = 1, 2, ...} is the posterior obtained from π(φ) by formal use of Bayes theorem.
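As a concrete instance of the one-parameter case: for a homogeneous Poisson count m ~ Poisson(λT), of the kind this paper studies, the Fisher information is h(λ) = T/λ, so the reference prior coincides with the Jeffreys prior π(λ) ∝ λ^(−1/2), and the reference posterior is the conjugate Gamma(m + 1/2, T) distribution. A quick numerical check with hypothetical data values:

```python
import numpy as np

# Reference (= Jeffreys) analysis for the one-parameter Poisson model:
# m ~ Poisson(lam * T) with pi(lam) ∝ lam**(-1/2) gives the posterior
# Gamma(m + 1/2, rate=T). The data below are hypothetical.
m, T = 7, 10.0                  # e.g. 7 recurrences over 10 years of follow-up
shape, rate = m + 0.5, T        # reference posterior Gamma(shape, rate)

post_mean = shape / rate        # analytic posterior mean: 0.75 events/year
samples = np.random.default_rng(0).gamma(shape, 1.0 / rate, 200_000)
print(post_mean, round(samples.mean(), 3))
```

Monte Carlo draws from the reference posterior agree with the analytic mean, illustrating that formal use of Bayes theorem with the reference prior yields an ordinary, proper posterior.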
of interest, and suppose that the joint posterior distribution of (φ , ω ) is asymptotically normal with covariance matrix S (φˆ,ωˆ ) , where (φˆ, ωˆ ) is a consistent estimator of (φ , ω ) . Let S (φ , ω ) = H −1 (φ , ω ) is information Fisher matrix. (i). the conditional reference prior of
Proposition 1 (Reference priors under asymptotic normality): Let p(x | φ), x ∈ X, be a probability model with one real-valued parameter φ ∈ Φ ⊂ ℜ. If the asymptotic posterior distribution of φ given k replications of the clinical record is normal, with standard deviation S(φ̂_k), where φ̂_k is a consistent and asymptotically sufficient estimator of φ, then the reference prior is given by

π(φ) ∝ 1 / S(φ)

where, under regularity conditions, S(φ) = h(φ)^{−1/2} and h(⋅) is the Fisher information (please refer to Bernardo and Smith, 1994, p. 314). Notice that, in this case, the reference prior is Jeffreys' prior.

B. One Nuisance Parameter

Consider now the case where the statistical model p(x | φ, ω), (φ, ω) ∈ Φ × Ω ⊆ ℜ × ℜ, contains one nuisance parameter: the quantity of interest is φ, and the nuisance parameter is ω. We shall only consider here the regular case where joint posterior asymptotic normality may be established.

Proposition 2: Let p(x | φ, ω), (φ, ω) ∈ Φ × Ω ⊆ ℜ × ℜ, be a probability model with two real-valued parameters φ and ω, where φ is the quantity of interest. Suppose that the joint posterior distribution of (φ, ω) is asymptotically normal with covariance matrix S(φ̂, ω̂), and write H = S^{−1} for the corresponding precision matrix, with elements h_ij. Then:

(i). the conditional reference prior of ω given φ is

π(ω | φ) ∝ h22(φ, ω)^{1/2},  ω ∈ Ω(φ)

(ii). if π(ω | φ) is not proper, a compact approximation {Ω_i(φ), i = 1, 2, ...} to Ω(φ) is required, and the reference prior of ω given φ is

π_i(ω | φ) = h22(φ, ω)^{1/2} / ∫_{Ω_i(φ)} h22(φ, ω)^{1/2} dω,  ω ∈ Ω_i(φ)

(iii). within each Ω_i(φ) the marginal reference prior of φ is obtained as

π_i(φ) ∝ exp{ ∫_{Ω_i(φ)} π_i(ω | φ) log[s11(φ, ω)^{−1/2}] dω }

where s11(φ, ω)^{−1} = h_φ(φ, ω) = h11 − h12 h22^{−1} h21

(iv). the reference posterior distribution of φ given data {x1, ..., xn} is

π(φ | x1, ..., xn) ∝ π(φ) ∫_{Ω(φ)} [ ∏_{l=1}^{n} p(x_l | φ, ω) ] π(ω | φ) dω

obtained from π(φ) by formal use of Bayes theorem.

Corollary: If the nuisance parameter space Ω(φ) = Ω is independent of φ, and the functions s11(φ, ω)^{−1/2} and h22(φ, ω)^{1/2} factorize in the form

s11(φ, ω)^{−1/2} = f1(φ) g1(ω),  h22(φ, ω)^{1/2} = f2(φ) g2(ω),

then π(φ) ∝ f1(φ), π(ω | φ) ∝ g2(ω), and the reference prior relative to the ordered parametrization (φ, ω) is given by π(φ, ω) = f1(φ) g2(ω). In this case, there is no need for a compact approximation, even if the conditional reference prior is not proper (Bernardo, 2005).

C. The Multiparameter Case
IFMBE Proceedings Vol. 35

C.C. Chang

The approach to the nuisance parameter considered above was based on the use of an ordered parametrization whose first and second components, (φ, ω), are referred to, respectively, as the parameter of interest and the nuisance parameter. The reference prior for the ordered parametrization (φ, ω) was then constructed successively, to obtain π_ω(φ, ω) = π(ω | φ) π(φ). When the model parameter vector θ has more than two components, this sequential conditioning idea can obviously be extended by considering θ as an ordered parametrization, θ = (θ1, ..., θm), and generating, by successive conditioning, a reference prior relative to this ordered parametrization of the form
π(θ) = π(θm | θ1, ..., θm−1) ⋯ π(θ2 | θ1) π(θ1)    (3)
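As a concrete illustration of the one-parameter case (Proposition 1), not taken from the paper: for an exponential model p(x | φ) = φ e^{−φx}, the Fisher information is h(φ) = 1/φ², so S(φ) = h(φ)^{−1/2} = φ and the reference (Jeffreys) prior is π(φ) ∝ 1/φ. The sketch below checks h(φ) = E[score²] by Monte Carlo; the rate value and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 2.0                       # true rate parameter (illustrative value)
x = rng.exponential(1.0 / phi, size=200_000)

# score of the exponential log-likelihood: d/dphi [log phi - phi*x] = 1/phi - x
score = 1.0 / phi - x
fisher_mc = np.mean(score**2)   # Monte Carlo estimate of h(phi) = E[score^2]
fisher_exact = 1.0 / phi**2     # known closed form for the exponential model

print(fisher_mc, fisher_exact)  # the two should agree closely
```

The reference prior then follows as π(φ) ∝ h(φ)^{1/2} = 1/φ, in line with the proposition.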
Proposition 3: Let p(x | θ), θ = (θ1, ..., θm), be a probability model with m real-valued parameters, let θ1 be the quantity of interest, and suppose that the joint posterior distribution of (θ1, ..., θm) is asymptotically normal with covariance matrix S(θ̂1, ..., θ̂m). Then, if S_j is the upper j × j submatrix of S, H_j = S_j^{−1}, and h_jj(θ1, ..., θm) is the (j, j) element of H_j:

(i). the conditional reference priors are

π(θm | θ1, ..., θm−1) ∝ h_mm(θ1, ..., θm)^{1/2}

and, for i = m − 1, m − 2, ..., 2,

π(θ_i | θ1, ..., θ_{i−1}) ∝ exp{ ∫_{Λ_{i+1}} ⋯ ∫_{Λ_m} log[h_ii(θ1, ..., θm)^{1/2}] [ ∏_{j=i+1}^{m} π(θ_j | θ1, ..., θ_{j−1}) ] dθ^{(i+1)} }

where dθ^{(j)} = dθ_j × ⋯ × dθ_m;

(ii). the marginal reference prior of θ1 is

π(θ1) ∝ exp{ ∫_{Λ_2} ⋯ ∫_{Λ_m} log[s11(θ1, ..., θm)^{−1/2}] [ ∏_{j=2}^{m} π(θ_j | θ1, ..., θ_{j−1}) ] dθ^{(2)} }

(iii). after data {x1, ..., xn} have been observed, the reference posterior distribution of the parameter of interest θ1 is

π(θ1 | x1, ..., xn) ∝ π(θ1) ∫_{Λ_2} ⋯ ∫_{Λ_m} [ ∏_{l=1}^{n} p(x_l | θ1, ..., θm) ] [ ∏_{j=2}^{m} π(θ_j | θ1, ..., θ_{j−1}) ] dθ^{(2)}

For proof and details see Berger and Bernardo (1992a, 1992b, 1992c).

IV. MODEL FORMULATION

Suppose that n individuals may experience a single type of recurrent event. Let m_i denote the number of events occurring for the i-th individual, and assume that the i-th individual is observed over the interval (0, T_i], where T_i is determined independently of m_i. Besides that, we consider that each individual carries a covariate vector, represented by x, so the data from the i-th individual consist of the total number of events m_i observed over the time period (0, T_i], the ordered occurrence times t_i1, ..., t_im_i, and the covariate vector x. Let

0 ≤ t_i1 < t_i2 < ... < t_im_i ≤ T_i

where the variable of interest t_ij denotes the continuous failure time for the i-th individual and the j-th occurrence (i = 1, ..., n and j = 1, ..., m_i). It is assumed that the repeated events of an individual with k × 1 covariate vector x occur according to a nonhomogeneous Poisson process with intensity function given by

λ_x(t) = λ0(t) exp(x_i′β),  t ≥ 0,  i = 1, 2, ..., n    (4)

where λ0(t) is a baseline intensity function, x_i = (x_i1, x_i2, ..., x_ik), and β = (β1, ..., βk) is a vector of unknown parameters. The corresponding cumulative, or integrated, intensity function is

Λ_x(t) = ∫_0^t λ_x(u) du = Λ0(t) e^{x′β}    (5)

where Λ0(t) = ∫_0^t λ0(u) du.

Methods of analysis are considered semiparametric if λ0(t) is arbitrary, and fully parametric if λ0(t) is specified by a parameter vector θ. In the case where the baseline hazard function is constant, the process is a homogeneous Poisson process (see Cox and Isham, 1980). The Poisson process model (4) is often known as the Cox proportional risk model (see Cox, 1972).

Consider a parametric Poisson process where λ0(t) = λ0(t; θ). Then, the likelihood function for the
Recurrent Breast Cancer with Proportional Homogeneous Poisson Process
model (4) for θ and β is given by (see Cox and Lewis, 1966)

L(θ, β) = ∏_{i=1}^{n} { ∏_{j=1}^{m_i} λ_{x_i}(t_ij; θ) } exp{ −Λ_{x_i}(T_i; θ) }    (6)

which can be decomposed as L(θ, β) = L1(θ) L2(θ, β), where

L1(θ) = ∏_{i=1}^{n} ∏_{j=1}^{m_i} [ λ0(t_ij; θ) / Λ0(T_i; θ) ]

and

L2(θ, β) = ∏_{i=1}^{n} exp[ −Λ0(T_i; θ) e^{x_i′β} ] [ Λ0(T_i; θ) e^{x_i′β} ]^{m_i}

The first likelihood kernel, L1(θ), arises from the conditional distribution of the event times given the counts, and the second likelihood kernel, L2(θ, β), arises from the Poisson distribution of the counts m1, ..., mn, the nucleus of the regression model, for which m_i has a Poisson distribution with mean and variance

E(m_i | x_i) = Var(m_i | x_i) = Λ0(T_i; θ) e^{x_i′β}
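The decomposition L = L1 L2 can be checked numerically, since the identity is exact as the kernels are written. The sketch below uses a hypothetical power-law baseline λ0(t; θ) = θ t^{θ−1} (so Λ0(t) = t^θ) and a single scalar covariate; none of these choices come from the paper, and the event times need not even be drawn from the process for the algebraic identity to hold.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical power-law baseline (not from the paper): lam0(t) = theta * t**(theta-1)
theta, beta = 1.5, 0.8
lam0 = lambda t: theta * t ** (theta - 1.0)
Lam0 = lambda t: t ** theta                   # integrated baseline intensity

n = 5
T = rng.uniform(1.0, 3.0, size=n)             # observation windows (0, T_i]
x = rng.normal(size=n)                        # scalar covariate per individual
m = rng.poisson(Lam0(T) * np.exp(x * beta))   # event counts
times = [np.sort(rng.uniform(0, T[i], m[i])) for i in range(n)]  # event times

# full log-likelihood: sum_i [ sum_j log lam_x(t_ij) - Lam_x(T_i) ]
logL = sum(np.sum(np.log(lam0(t)) + x[i] * beta) - Lam0(T[i]) * np.exp(x[i] * beta)
           for i, t in enumerate(times))

# kernel L1: conditional density of the event times given the counts
logL1 = sum(np.sum(np.log(lam0(t) / Lam0(T[i]))) for i, t in enumerate(times))
# kernel L2: Poisson kernel of the counts
mu = Lam0(T) * np.exp(x * beta)
logL2 = np.sum(-mu + m * np.log(mu))

print(logL, logL1 + logL2)                    # identical up to rounding
```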
A. Modeling the Baseline Hazard

The exponential distribution is one of the simplest and most important probability distributions used in the modeling of lifetime data. It has been used intensively in the survival and reliability literature, for example in studies of the lifetime of manufactured items (Chang, 2008) and in research involving survival or remission times of chronic illnesses (see Feigl and Zelen, 1965). A characteristic of the exponential distribution is that values next to one extremity of the random variable present maximum probability, while values next to the other extremity present probability zero; this characteristic is associated with a probabilistic mechanism that favors events of higher or lower intensity. The exponential distribution has been extensively used to model the baseline hazard function due to its simplicity and flexibility. This is the particular case where

λ0(t) = λ0(t; θ) = v    (7)

The corresponding intensity function and integrated intensity function are

λ_x(t) = v e^{x′β}  and  Λ_x(T) = T v e^{x′β}    (8)
Considering the decomposition in (6), the likelihood function for v, β is given by

L(v, β) = ∏_{i=1}^{n} T_i^{−m_i} exp[ −v T_i e^{x_i′β} ] [ v T_i e^{x_i′β} ]^{m_i}    (9)

in which the Poisson kernel is

L2(v, β) = ∏_{i=1}^{n} exp[ −v T_i e^{x_i′β} ] [ v T_i e^{x_i′β} ]^{m_i}

The log-likelihood function from (9) is given by

l(v, β) ∝ ∑_{i=1}^{n} m_i log(v) + ∑_{i=1}^{n} m_i x_i′β − ∑_{i=1}^{n} v T_i e^{x_i′β}    (10)
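As a sanity check of (9) and (10), not part of the paper, the maximum likelihood estimates of (v, β) can be obtained by direct numerical maximization of (10); the sketch below uses a single covariate and simulated data with illustrative true values.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# simulated recurrent-event data under the exponential baseline (illustrative values)
n, v_true, b_true = 400, 0.7, 0.5
T = rng.uniform(1.0, 5.0, size=n)
x = rng.normal(size=n)
m = rng.poisson(v_true * T * np.exp(b_true * x))   # E(m_i) = v T_i exp(x_i' beta)

def neg_loglik(par):
    # negative of (10): -[ sum m_i log v + sum m_i x_i b - sum v T_i exp(x_i b) ]
    v, b = par
    if v <= 0:
        return np.inf
    return -(np.sum(m) * np.log(v) + b * np.sum(m * x) - v * np.sum(T * np.exp(b * x)))

fit = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
v_hat, b_hat = fit.x
print(v_hat, b_hat)   # should land near the simulating values (0.7, 0.5)
```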
Interval estimates and hypothesis tests for the parameters can be performed, in principle, by considering the asymptotic normal distribution of the maximum likelihood estimates and the asymptotic chi-squared distribution of the likelihood ratio statistics, respectively (Lawless, 1982).

B. The Fisher Information Matrix Associated with the Model

The posterior distribution of the parameter is often asymptotically normal (see e.g. Bernardo and Smith, 1994, Sec. 5.3). In this case the reference prior is easily derived: if the posterior distribution is asymptotically normal, then the reference prior depends only on the Fisher information matrix. Considering the log-likelihood function (10), the first and second derivatives are given by

∂l/∂v = ∑_{i=1}^{n} m_i / v − ∑_{i=1}^{n} T_i e^{x_i′β}

∂l/∂β_r = ∑_{i=1}^{n} m_i x_ir − ∑_{i=1}^{n} v T_i x_ir e^{x_i′β},  r = 0, 1, ..., k

∂²l/∂v² = −∑_{i=1}^{n} m_i / v²

∂²l/∂β_r ∂β_s = −∑_{i=1}^{n} v T_i x_ir x_is e^{x_i′β},  r, s = 0, 1, ..., k

∂²l/∂v ∂β_r = −∑_{i=1}^{n} T_i x_ir e^{x_i′β}

In this way, since E(m_i | x_i) = Var(m_i | x_i) = v T_i e^{x_i′β}, the elements of the Fisher information matrix are given by

I_vv = E[ −∂²l(v, β)/∂v² ] = ∑_{i=1}^{n} E(m_i)/v² = (1/v) ∑_{i=1}^{n} T_i e^{x_i′β}

I_{β_r β_s} = E[ −∂²l(v, β)/∂β_r ∂β_s ] = v ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β},  r, s = 0, ..., k

I_{β_r v} = E[ −∂²l(v, β)/∂β_r ∂v ] = ∑_{i=1}^{n} T_i x_ir e^{x_i′β},  r = 0, ..., k
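These expectations can be verified by simulation. The sketch below (single scalar covariate and illustrative parameter values, not from the paper) checks the I_vv entry against a Monte Carlo average of −∂²l/∂v² = Σ m_i / v² over repeated Poisson draws of the counts.

```python
import numpy as np

rng = np.random.default_rng(3)

# fixed design (illustrative): single covariate, so x_ir = x_is = x_i
v, b = 0.7, 0.5
T = rng.uniform(1.0, 5.0, size=50)
x = rng.normal(size=50)
mu = v * T * np.exp(b * x)                 # E(m_i | x_i)

# analytic Fisher entries from the text
I_vv = np.sum(T * np.exp(b * x)) / v
I_bb = v * np.sum(T * x * x * np.exp(b * x))
I_bv = np.sum(T * x * np.exp(b * x))

# Monte Carlo check of I_vv = E[-d^2 l / dv^2] = E[sum_i m_i] / v^2
m = rng.poisson(mu, size=(50_000, 50))
I_vv_mc = np.mean(np.sum(m, axis=1)) / v**2

print(I_vv, I_vv_mc)   # should agree closely
```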
Thus, the Fisher information matrix associated with the model is given by

H(θ) = H(v, β) =
⎡ (1/v) ∑_{i=1}^{n} T_i e^{x_i′β}      ∑_{i=1}^{n} T_i x_ir e^{x_i′β} ⎤
⎣ ∑_{i=1}^{n} T_i x_ir e^{x_i′β}       v ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ⎦    (11)

V. REFERENCE ANALYSIS FOR SURVIVAL MODEL PARAMETERS

Following the methodology described in Section III, we now derive the reference prior considering two potential groups (with surgical hysterectomy or not), which corresponds to the ordered partition {v, β}, where β = {β1, β2, ..., βk} and v is considered to be the quantity of interest (see Berger and Bernardo, 1992b). The joint posterior distribution of the parameters is often asymptotically normal (see e.g. Bernardo and Smith, 1994, Sec. 5.3); in this case the reference prior is easily derived and depends only on the Fisher information matrix found in (11). The reference prior relative to this ordered parametrization is of the form

π(v, β) = π(β | v) π(v)

From the Corollary of Proposition 2, where the nuisance parameter space Λ(β) = Λ is independent of v, it is easy to see that

h22(v, β)^{1/2} = v^{1/2} [ ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ]^{1/2} = f1(v) g1(β)

and

h_v(v, β)^{1/2} = [ h11 − h12 h22^{−1} h21 ]^{1/2} = v^{−1/2} [ ∑_{i=1}^{n} T_i e^{x_i′β} − ( ∑_{i=1}^{n} T_i x_ir e^{x_i′β} )² / ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ]^{1/2} = f2(v) g2(β)

This implies that the conditional reference prior of the nuisance parameter β given the parameter of interest v is

π(β | v) ∝ g1(β) = [ ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ]^{1/2}    (12)

The reference prior needed to obtain a reference posterior for the parameter of interest v is

π(v) ∝ f2(v) = v^{−1/2}    (13)

Figure 1 represents the reference prior (13).

Fig. 1 Reference prior for the parameter v

It follows that the joint reference prior for the parameters v and β is given by

π(v, β) = π(β | v) π(v) ∝ [ ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ]^{1/2} v^{−1/2}    (14)

Figure 2 represents the joint reference prior (14), in the particular case where T = 100, n = 30, x = 1.

Fig. 2 Joint reference prior for the parameters v and β

The corresponding reference posterior for v after data T = {t1, ..., tn} have been observed is

π(v | t1, ..., tn) ∝ π(v) ∫_Λ L(v, β) π(β | v) dβ
             ∝ v^{−1/2} ∫_Λ ∏_{i=1}^{n} T_i^{−m_i} exp[ −v T_i e^{x_i′β} ] [ v T_i e^{x_i′β} ]^{m_i} [ ∑_{i=1}^{n} T_i x_ir x_is e^{x_i′β} ]^{1/2} dβ    (15)
The marginal reference posterior density (15) cannot be obtained explicitly. We overcome this difficulty by making use of Markov chain Monte Carlo (MCMC) methodology to obtain approximations for such densities. In order to make Bayesian inference for the parameter of interest v, we implement the MCMC methodology considering the Metropolis-Hastings algorithm (see Hastings, 1970; Chib and Greenberg, 1995).
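A minimal random-walk Metropolis sketch of this idea is given below. It is not the authors' implementation: it assumes a single scalar covariate, simulated data, and illustrative tuning constants, and it samples (v, β) jointly from the unnormalized reference posterior, so that the retained v draws approximate the marginal posterior in (15).

```python
import numpy as np

rng = np.random.default_rng(4)

# simulated data (hypothetical, for illustration only)
n = 50
T = rng.uniform(1.0, 5.0, size=n)
x = rng.normal(size=n)
m = rng.poisson(0.7 * T * np.exp(0.5 * x))

def log_post(v, b):
    """Log reference prior (12)-(13) plus log-likelihood (10), up to a constant."""
    if v <= 0:
        return -np.inf
    log_prior = -0.5 * np.log(v) + 0.5 * np.log(np.sum(T * x * x * np.exp(b * x)))
    log_lik = np.sum(m) * np.log(v) + b * np.sum(m * x) - v * np.sum(T * np.exp(b * x))
    return log_prior + log_lik

v, b = 1.0, 0.0
lp = log_post(v, b)
draws = []
for _ in range(20_000):
    v_new, b_new = v + 0.05 * rng.normal(), b + 0.05 * rng.normal()
    lp_new = log_post(v_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:          # Metropolis accept/reject
        v, b, lp = v_new, b_new, lp_new
    draws.append(v)

post_v = np.array(draws[5_000:])                     # discard burn-in
print(post_v.mean(), post_v.std())                   # marginal posterior summary for v
```

In practice the step sizes would be tuned to a reasonable acceptance rate and convergence checked with standard MCMC diagnostics.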
VI. CONCLUSIONS

The purpose of this retrospective analysis of breast cancer is to determine the factors associated with morbidity and survival for recurrent events. In order to analyze survival, the proposed methodology is illustrated using recurrent breast cancer data for which the medical records and pathology were reviewed for all patients accessible through the Chung Shan Medical University Hospital Tumor Registry from 1995 to 2008. Approval for this retrospective study was reviewed by the Chung Shan Medical University Hospital Medical Institutional Review Board. Patient demographics evaluated were age, initial stage and site of cancer at the time of diagnosis, treatment history, time from diagnosis to relapse, and time and status from surgery until last follow-up or death. Exclusion criteria for this study included prior treatment with interferon, retinoid, or chemotherapy (except as radiation sensitization), and lactating females. Further, the goal is to afford the patient the opportunity to have a reasonable quality of life, in addition to providing the chance for a cure in the future.
REFERENCES

Berek, J. S. and Hacker, N. F. Practical gynecologic oncology. New York: Lippincott Williams & Wilkins; 2005.
Berek, J. S. Surgical techniques. In: Berek, J. S., Hacker, N. F., editors. Practical gynecologic oncology. 4th ed. Philadelphia: Lippincott Williams & Wilkins; 2005; 739-82.
Berger, J. O. and Bernardo, J. M. On the development of reference priors. Bayesian Statistics 1992c; 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 35-60 (with discussion).
Berger, J. O. and Bernardo, J. M. Ordered group reference priors with applications to a multinomial problem. Biometrika 1992a; 79:25-37.
Berger, J. O. and Bernardo, J. M. Reference priors in a variance components problem. Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.). Berlin: Springer, 1992b; 323-340.
Berger, J. O., Bernardo, J. M., and Sun, D. Reference priors from first principles: A general definition. Tech. Rep., SAMSI, NC, USA, 2005.
Bernardo, J. M. and Smith, A. F. M. Bayesian Theory. Chichester: Wiley; 1994.
Bernardo, J. M. Reference analysis. Handbook of Statistics 2005; 25 (D. K. Dey and C. R. Rao, eds.). Amsterdam: Elsevier, 17-90.
Bernardo, J. M. Reference posterior distributions for Bayesian inference. J. Roy. Statist. Soc. 1979b; B 41:113-147 (with discussion).
Chang, C. C. and Cheng, C. S. A Structural Design of Clinical Decision Support System for Chronic Diseases Risk Management. CEJ Med. 2007; 2(2):129-139.
Chang, C. C. Bayesian Value of Information Analysis with Linear, Exponential, Power Law Failure Models for Aging Chronic Diseases. JCSE 2008; 2(2):201-220.
Chib, S. and Greenberg, E. Understanding the Metropolis-Hastings Algorithm. Am. Statist. 1995; 49:327-335.
Cox, D. R. and Isham, V. Point Processes. London: Chapman & Hall; 1980.
Cox, D. R. Regression models and life tables (with discussion). J. Roy. Statist. Soc. 1972; B 34:187-220.
Curtin, J. P. and Hoskins, W. J. Pelvic exenteration for gynecologic cancers. Surg Oncol Clin North Am 1994; 3:267-276.
Feigl, P. and Zelen, M. Estimation of exponential survival probabilities with concomitant information. Biometrics 1965; 21:826-837.
Hastings, W. K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970; 57:97-109.
Hoskins, W. J. Surveying the field of gynecologic oncology. Oncol Spectrums 2001; 2:312-313.
Lai, C. H., Hong, J. H., Hsueh, S., et al. Preoperative prognostic variables and the impact of postoperative adjuvant therapy on the outcomes of stage IB or II breast cancer patients with or without pelvic lymph node metastases. Cancer 1999; 85:1537-1546.
Lawless, J. F. Statistical Models and Methods for Lifetime Data. New York: John Wiley; 1982.
National Institutes of Health. NIH Consensus Statement Online. 1996; 43(1):1-38.
Parkin, D. M., Bray, F. I. and Devesa, S. S. Cancer burden in the year 2000: the global picture. Eur J Cancer 2001; 37(suppl):S4-S66.
Simulation of the Effects of Electric and Magnetic Loadings on Internal Bone Remodeling A. Fathi Kazerooni1, M. Rabbani2, and M.R. Yazdchi3 1
Department of Biomedical Physics and Engineering, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran 2 Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran 3 Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
Abstract— Objective: During recent years, a great deal of research has been done on finding methods to prevent, diagnose and heal serious diseases of bone such as osteoporosis. Among the proposed methods, exogenous mechanical, thermal, electric and magnetic stimulation have attracted the most interest due to their noninvasiveness and efficiency, besides cost-effectiveness. In fact, it is necessary to obtain the most efficacious magnitude, frequency range and waveshape of electromagnetic fields to be used in healing bone diseases, but this matter has received little attention in previous investigations. Modeling is an important step in understanding how bone responds to external loadings. Much work has been done on developing models considering different properties of bone and its behavior under mechanical loadings; however, only a few researchers have included the effects of electric and magnetic loadings in their models. Methods: In this paper, the thermo-piezo-electro-magneto-elastic model of bone is used, and the effects of electric and magnetic fields with various magnitudes, types and frequencies on the existing models are evaluated through simulation. Result: It is shown that although the model can illustrate the effects of electric and magnetic loading magnitude alterations and various electric durations, it cannot respond appropriately to the alternating frequencies of a sinusoidal waveform. Conclusion: This model should be modified to respond to alterations of electromagnetic field frequencies. Keywords— internal bone remodeling, electric loading, magnetic loading, simulation.
I. INTRODUCTION Living bone is subjected to mechanical loadings in human daily life. Bone tissue has to remain stiff and strong in response to internal and external stresses applied on it. Bone does this task by adding bone mass, changing its geometry or altering its microstructure [1]. According to Wolff’s Law, bone adapts its histological structure in response to imposed long term loadings, and undergoes the process of "stress/strain related remodeling" [1], [2]. Through this remodeling, bone becomes stiffer and denser. For instance, in athletes who practice heavy exercises
such as gymnastics, bones are stronger and stiffer in comparison with other people. In addition, bone becomes more porous in the case that normal daily loadings are removed. For example, astronauts who travel for a long time outside Earth's gravity may experience porosity in their bones. The process through which bone cells sense mechanical loadings in their environment and transduce them into a signal interpretable by cells is called "mechano-transduction" [3], [4]. Frost (1964) proposed two distinct categories of bone remodeling: internal and surface remodeling. Surface remodeling refers to the case in which bone mass is deposited on or removed from the external surface of bone. In the internal remodeling procedure, bone material is added to or resorbed from the existing osteons [4], [2]. The overall amount of old bone removal and new bone deposition in the remodeling cycle is generally in balance; in some specific diseases such as osteoporosis, bone resorption exceeds bone substitution [4]. When bones are mechanically loaded, they generate potential differences. Fukada and Yasuda suggested that this phenomenon is due to the piezoelectric property of bone, and piezoelectricity became a candidate for the process of bone adaptation to mechanical loading [5]. In piezoelectric materials, when the mechanical stress is removed, an electric potential equal to the previous one is generated in the same place. Therefore it was proposed that external electric fields can stimulate bone remodeling. It was suggested that osteoblasts can be activated by negative charges and osteoclasts by positive charges [6]. It has also been shown that bone is a paramagnetic material and that, when a magnet is placed over a fracture site, the external magnetic field draws the two ends of the fracture site together. In addition, the negative magnet enhances cell permeability, which speeds up the rate of healing [7].
The research in this area has led to the design of electromagnetic devices for healing bone diseases. However, there is insufficient knowledge about the appropriate intensity, frequency, duration, and application methods [8]. Therefore, it is necessary to model the impact of applying electric and magnetic fields to the bone.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 458–462, 2011. www.springerlink.com
In spite of the existence of many models of bone response to mechanical stress, to our knowledge the modeling of the effect of electric and magnetic loadings on bone remodeling has been restricted to the works of Qin et al. [9,10,11,12]. There are three types of electromagnetic devices which are known to enhance bone healing: pulsed and sinusoidal, DC, and combined AC-DC electric and magnetic fields [13]. Thus we have investigated the impact of these fields on the bone model. In this paper, a model proposed by Qin et al. [11] for internal remodeling of cortical bone is used and simulated under different static and varying electric and magnetic loadings.
The model is simulated in MATLAB (Simulink), in "Normal Simulation" mode, using the "ode45 (Dormand-Prince)" solver. The simulation time is 2.5 × 10⁶ seconds (approximately 29 days). In this study, the effects of different types of electromagnetic loading, such as static, pulsed and sinusoidal fields, on bone internal remodeling are evaluated.
II. MATERIALS AND METHODS
In this loading case, the amplitude of magnetic field (ψ) is set to its threshold value, zero.
Cowin et al., based on the model proposed by Martin (1972), suggested that bone is a porous material in which the porosity changes as a function of time, and they permitted the remodeling rate to be a function of strain. This theory is called "adaptive elasticity", in which strain causes deposition or resorption of bone mass and consequently changes bone porosity [2], [14]. Based upon these models, Qin included electric and magnetic terms in the stress-strain relationships as loadings which affect bone remodeling [11]. In this model, bone is considered as a hollow cylinder consisting of thermo-piezo-electro-magneto-elastic material which is subjected to asymmetric loadings [10]. In the equations of adaptive elasticity, ζ denotes the volume fraction of bone material. By considering the bone matrix as a porous elastic material, material is added to or removed from the pores as ζ changes, maintaining the dimensions of the bone. An important assumption made here is that, at constant temperature and zero body force, for all values of ζ there exists a unique zero-strain state which is taken as the reference. The remodeling rate, at which the mass of the porous structure is increased or decreased per unit volume, is denoted by e. The stress-strain relationship and the numerical example can be found in previous papers [10]. Thermo-electro-magneto-elastic theory yields that bone undergoes remodeling in response to external temperature changes (T), semi-static axial load (P), external pressure (p), and electric (φ) and magnetic (ψ) fields. The threshold values for T, P, and p are 0 ⁰C, 1500 N, and 2 MPa, respectively. The distance to the center of the cylindrical bone is denoted by 'r', which is taken to be 35 mm. The effects of different mechanical and thermal loadings on internal remodeling are fully studied in the papers [10,11].
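The governing equations are only referenced above, so as a stand-in the sketch below integrates a hypothetical first-order remodeling law (the functional form and every constant are invented for illustration; this is not Qin's model) with an RK45 solver analogous to Simulink's ode45, over the 2.5 × 10⁶ s horizon used in the paper and with a 24 h pulsed electric loading like the one studied later.

```python
import numpy as np
from scipy.integrate import solve_ivp

# schematic stand-in dynamics (NOT Qin's model): the remodeling parameter e
# relaxes toward an equilibrium set by the applied electric loading phi(t)
DAY = 86_400.0
t_end = 2.5e6                                  # simulation horizon from the paper (~29 days)

def phi(t, amplitude=-30.0, duty=0.5):
    """Pulsed electric loading: 24 h period, given duty cycle (fraction on)."""
    return amplitude if (t % DAY) < duty * DAY else 0.0

def de_dt(t, e, gain=1e-8, sensitivity=1e-3):
    # hypothetical linear remodeling law: negative loading deposits bone mass
    return gain * (-sensitivity * phi(t) - e)

sol = solve_ivp(de_dt, (0.0, t_end), y0=[0.0], method="RK45", max_step=DAY / 48)
print(sol.y[0, -1])   # final value of the internal remodeling parameter e
```

Bounding max_step keeps the solver from stepping over the discontinuities of the pulsed waveform.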
III. RESULTS A. The Effects of Electric Loadings on Internal Remodeling
The Effects of Static Electric Loadings on Internal Bone Remodeling The variations of internal remodeling parameter (e) under the static electric loadings Φ = -60,-30, 0, 30, 60 Volts, are shown in Fig. 1.
Fig. 1 The effects of various static electric fields on parameter 'e'
Fig. 2 The effects of various pulsed electric fields on parameter 'e'
The Effects of Pulsed Electric Loadings on Internal Bone Remodeling

In this part, the effects of pulsed loadings with different duty cycles are investigated. The amplitude is -30 Volts, the period equals 24 hours (one day), and the pulse widths are chosen as 100% (constant loading), 50%, 20% and 10%. The result of applying these loadings to the model is shown in Fig. 2.

The Effects of Sine Electric Loadings on Internal Bone Remodeling

The values of amplitude and frequencies chosen for simulating the effects of sine electric loadings on internal remodeling are -30 Volts and 0 (which denotes constant loading), 0.5, 1, 10, 100 Hz, respectively. The result, obtained by running the model for 2.5 × 10⁶ seconds, is shown in Fig. 3.
Fig. 4 The effects of various static magnetic fields on parameter 'e' The effects of Pulsed Magnetic Loadings on Internal Bone Remodeling In this part, the amplitude is set to be -0.5 Amperes and the period of pulse loading equals 24 hours. The effects of different pulse durations (100% (constant loading), 50%, 20% and 10%) are obtained and shown in Fig. 5.
Fig. 3 The effects of various sine electric fields on parameter 'e' B. The effects of Magnetic Loadings on Internal Remodeling
Fig. 5 The effects of various pulsed magnetic fields on parameter 'e'
In this loading case, the amplitude of the electric field (φ) is set to its threshold value, zero.

The Effects of Static Magnetic Loadings on Internal Bone Remodeling

The amplitude of a static (constant) magnetic field can be a cause of change in the internal remodeling of bone. In order to investigate this effect, various values of amplitude were applied to the model and the model was run under these conditions. The amplitude values were chosen to be ψ = -1, -0.5, 0, 0.5, 1 Amperes (A). The result of this simulation is shown in Fig. 4.
Fig. 6 The effects of various sine magnetic fields on parameter 'e'
The Effects of Sine Magnetic Loadings on Internal Bone Remodeling

The amplitude and frequencies used for running the model are set to -0.5 Amperes and 0 (which denotes constant loading), 0.5, 1, 10, 100 Hz, respectively. The result is illustrated in Fig. 6.
IV. DISCUSSION

The results of simulating the effects of electric and magnetic fields on bone internal remodeling were demonstrated in the previous section. As illustrated in Figures 1 and 4, both negative electric and magnetic fields induce more volume fraction and therefore more bone mass. In addition, by increasing the magnitude, the volume fraction increases. As was shown before, negative electric and magnetic loadings initiate bone remodeling and formation [6], [7]. Yan et al. (1998), in a study on the efficiency of Static Magnetic Fields (SMF), showed that the bone mineral density of the bone adjacent to a magnetized implant was much higher than near the unmagnetized region in the femurs of rats [15]. Manjhi et al. (2010) demonstrated the efficacy of low-level capacitively coupled pulsed electric fields in inducing bone mineral density in an ovariectomized rat model of osteoporosis [16]. Therefore, the model used here was simulated with pulsed electric fields of various durations. As can be seen from Figure 2, as the pulse duration is raised, negative pulsed electric fields induce more bone and make it stiffer. The best duration is 100%, which yields a static field. This outcome is consistent with the clinical findings. Brighton (1981) demonstrated that static Direct Current Electrical Stimulation (DCES) results in a more effective osteogenic effect than pulsed DCES [17]. Through extensive research it has been proven that Pulsed Electro Magnetic Field (PEMF) stimulation is an effective alternative to surgery. Specific types of low-level Electro Magnetic Fields (EMF) can cause biological responses depending on the magnitude, frequency and waveform [18]. It has been shown that using an intermittent form of PEMF as bone initiation stimulation leads to better results than a continuous one [19]. However, as can be observed from Figure 5, as the duration of the magnetic loading increases, the model, in contrast to these clinical studies, predicts enhanced bone production.
As illustrated in Figures 3 and 6, sinusoidal electric and magnetic fields with various frequencies do not differ in the healing process. This outcome is not compatible with the clinical studies. For instance, Brighton et al. (1985) delivered a 60 kHz sinusoidal electric wave with a continuous current between 7-10 mA at 5 Volts (peak to peak) via
capacitively coupled electrodes to the fracture region of patients and showed that this stimulation can prevent bone loss caused by disuse [20]. Besides, McElhaney et al. used a rat model of disuse osteoporosis and concluded that capacitively coupled 30 Hz sinusoidal electric fields were effective in preventing bone loss, in contrast to a 3 Hz electric field with the same voltage [21]. In addition, sinusoidal PEMFs with a variety of frequencies are shown to produce different responses in bone [18].
V. CONCLUSIONS It seems that modifications should be made in the applied model for the case of pulsed magnetic field and sinusoidal electric and magnetic fields, in order to make this model consistent with the clinical and animal trials.
REFERENCES 1. Pearson OM, Lieberman DE (2004) The aging of Wolff's "law": ontogeny and responses to mechanical loading in cortical bone. Am J Phys Anthropol Suppl 39: 63-99. 2. Cowin S, Hegedus D (1976) Bone remodeling I: theory of adaptive elasticity. J Elasticity 6: 313-326. 3. Turner C, Pavalko F (1998) Mechanotransduction and functional response of the skeleton to physical stress: the mechanisms and mechanics of bone adaptation. J Orthop Sci 3: 346-355. 4. Robling A, Castillo A, Turner C (2006) Biomechanical and molecular regulation of bone remodeling. Ann Biomed Eng 8: 455. 5. Bassett C (1967) Biologic significance of piezoelectricity. Calcif Tissue Int 1: 252-272. 6. Spadaro J (1997) Mechanical and electrical interactions in bone remodeling. Bioelectromagnetics 18: 193-202. 7. Albin J (1999) Bone healing with medical magnets. 8. Sert C, Mustafa D, Düz MZ, Akşen F, Kaya A (2002) The preventive effect on bone loss of 50-Hz, 1-mT electromagnetic field in ovariectomized rats. J Bone Miner Res 20: 345-349. 9. He X, Qu C, Qin Q (2008) A theoretical model for surface bone remodeling under electromagnetic loads. ARCH APPL MECH 78: 163175. 10. Qin Q (2007) New Research on Biomaterials. Nova Science Publishers, Australia. 11. Qin Q, Ye J (2004) Thermoelectroelastic solutions for internal bone remodeling under axial and transverse loads. INT J SOLIDS STRUCT 41: 2447-2460. 12. Qu C, Qin Q, Kang Y (2006) A hypothetical mechanism of bone remodeling and modeling under electromagnetic loads. Biomaterials 27: 4050-4057. 13. Rubik B, Becker R, Flower R et al. (2000) Bioelectromagnetics applications in medicine. NIH Panel. 14. Hegedus D, Cowin S (1976) Bone remodeling II: small strain adaptive elasticity. J Elasticity 6: 337-352. 15. Yan QC, Tomita N, Ikada Y (1998) Effects of static magnetic field on bone formation of rat femurs. Med Eng Phys 20: 397-402. 16. 
Manjhi J, Mathur R, Behari J (2010) Effect of low level capacitivecoupled pulsed electric field stimulation on mineral profile of weightbearing bones in ovariectomized rats. J Biomed Mater Res B Appl Biomater 92: 189-195.
17. Brighton C (1981) The treatment of non-unions with electricity. J. Bone Jt. Surg. 63: 847. 18. Shupak N, Prato F, Thomas A (2003) Therapeutic uses of pulsed magnetic-field exposure: A review. Radio Sci Bull 307: 9-32. 19. Trock D (2000) Electromagnetic fields and magnets. Rheumatol Rheumatic Dis Clin NA 26: 51-62. 20. Brighton C, Pollack S (1985) Treatment of recalcitrant non-union with a capacitively coupled electrical field. A preliminary report. J. Bone Jt. Surg. (Am.) 67: 577. 21. McElhaney J, Stalnaker R, Bullard R (1968) Electric fields and bone loss of disuse. J Biomech 1: 47-50.
Author: Mohsen Rabbani
Institute: Faculty of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
Street: Hafez Street
City: Tehran
Country: Iran
Email: [email protected]
Study of Hematocrit in Relation with Age and Gender Using Low Power Helium – Neon Laser Irradiation H.A.A. Houssein1,*, M.S. Jaafar1, Z. Ali2, Z.A. Timimi1, and F.H. Mustafa1 1
School of Physics, Universiti Sains Malaysia, 11800 Penang, Malaysia School of Mathematical Science, Universiti Sains Malaysia, 11800 Penang, Malaysia * Permanent Address: Department of Physics, Faculty of Science, University of Garyounis, Benghazi – Libya 2
Abstract— The relationship of the human blood parameter hematocrit (HCT) with human age and gender, before and after irradiation with a low-power He-Ne laser (0.95 mW, λ = 632.8 nm), has been studied. The relationships were obtained by examining patterns, centroid and peak positions, and flux variations. The findings show that the beam parameter, flux peak, is strongly correlated with HCT and can become a significant indicator for blood analyses, leading the way to a vibrant diagnostic tool for clarifying diseases associated with blood.

Keywords— He-Ne laser, Hematocrit (HCT), flux peak, statistical test.

I. INTRODUCTION

He-Ne laser radiation of low power intensity has been found to have many important applications, which has led to the expanding biomedical use of laser technology, particularly in surgery [1, 2, 3, 4]. These medical applications require detailed information on the mechanisms of their biological effects [1, 3, 5]. Therefore, the study of the mechanism of interaction of low-intensity laser radiation with living organisms by various methods, for the purpose of widening the field of its medical applications, is undoubtedly a pressing problem, and its solution will aid the further development of practical medicine. Although the response of blood to the action of low-intensity laser radiation gives important information on the mechanism of interaction of laser radiation with a living organism [6, 7, 8], only a small number of works have been devoted to such investigations in living organisms.

The three important properties of laser light are coherence, collimation, and monochromaticity, although no real laser produces light having these characteristics absolutely. The power density profile in any cross-section of the beam has the characteristic bell-shaped Gaussian curve when the power density at a point within that section is plotted against the radial distance of that point from the axis of the beam. This profile is the same when measured across any diameter of the beam; it would have the shape of a three-dimensional bell if it could be seen by an observer. Figure 1 shows a perspective view of a Gaussian laser beam, with one quadrant of the beam cut away to reveal the radial profile of power density [9]. Here r is the radius of the coaxial circle, w is the effective radius of the beam, pr is the power flowing through the circle, pc is the power density at the center of the laser beam, and ε is the base of the natural logarithms (ε = 2.71828…).

Fig. 1 Gaussian laser beam [9]

The aim of the present work was to study correlations of the blood parameter and the beam parameter with human age and gender.
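The Gaussian-beam quantities defined above obey the textbook relations p(r) = pc·e^(−2r²/w²) and Pr/Ptotal = 1 − e^(−2r²/w²). A small numerical sketch (these standard formulas are assumed, not quoted from this paper):

```python
import math

def power_density(r, w, pc):
    """Power density at radius r of a Gaussian beam with effective
    radius w and centre power density pc: p(r) = pc * e^(-2 r^2 / w^2)."""
    return pc * math.exp(-2.0 * (r / w) ** 2)

def encircled_power_fraction(r, w):
    """Fraction of the total beam power flowing through a coaxial
    circle of radius r: 1 - e^(-2 r^2 / w^2)."""
    return 1.0 - math.exp(-2.0 * (r / w) ** 2)

# At r = w, the circle encloses about 86.5% of the total power,
# and the power density has fallen to pc * e^(-2).
```

Under this convention, the "effective radius" w is the radius at which the power density drops to 1/ε² of its central value.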
II. MATERIALS AND METHODS

A. Blood Parameter Measurement

Fresh human blood parameters, such as HCT before and after laser exposure, were analyzed for each blood sample using a hematology analyzer (Cell-Dyn 1700). The fresh human blood samples were collected from the Health Centre of Universiti Sains Malaysia. A total of 240 blood
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 463 –466, 2011. www.springerlink.com
samples from 120 males and 120 females were used in this study. Blood samples were obtained from patients with ages ranging from 10 to 80 years old and were labeled with a coding system according to age, gender, and disease.

B. Laser Irradiation

The subpopulations of human blood parameters included HCT for laser irradiation. A volume of 0.25 ml of each blood sample was placed on a microscope glass cavity slide (76 × 26 mm, thickness 1-1.2 mm, cut edges). The sample was irradiated for one second using the 632.8 nm visible beam from a 0.95 mW He-Ne laser. An Encircled Flux Analysis System (EFAS) model 8350 (Photon Inc.) was employed as the flux detector for the transmitted laser beam. Readings of the beam parameter, flux peak, were recorded after the irradiation. Experiments were repeated for all blood groups under the same experimental conditions: laser beam power, beam source-sample distance, sample-detector distance, beam exposure time, room temperature, and humidity.
Fig. 2 2-D contour near-field image of the beam intensity
C. Statistical Analysis

After irradiation was performed, non-irradiated and irradiated samples were compared on the blood parameter, hematocrit, and the beam parameter, flux peak. A paired test was used to determine the differences between the controlled samples and the irradiated samples. A two-factor analysis was further conducted to examine the differences in the samples according to the age and gender of the patients. SPSS version 12 was used to perform the statistical calculations and analyses of the data.
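The paired comparison described above can be sketched as follows (illustrative only; the study used SPSS, and the donor values below are hypothetical):

```python
import math

def paired_t(before, after):
    """Paired t-statistic for matched samples (e.g. the same donor's
    HCT reading before and after irradiation)."""
    d = [a - b for a, b in zip(after, before)]  # per-donor differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Hypothetical HCT readings (%) for four donors, before and after:
t = paired_t([40.1, 41.0, 39.2, 42.3], [42.1, 42.0, 41.2, 44.3])
```

The resulting t value would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p-value reported in Table 1.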
Fig. 3 3-D Profile

Table 1 Hematocrit level for controlled and irradiated blood samples from different donors

Blood Parameter | Controlled  | Irradiated  | Difference | P-value
HCT (%)         | 40.27±0.237 | 41.83±0.322 | 1.56±0.236 | 0.000***

Data are represented as mean ± standard error of the mean from 240 experiments.
III. RESULTS AND DISCUSSION

Flux Analysis and Age

The near field is imaged with a microscope objective lens and a Photon Beam Profiler Model 2320 CCD camera; the near field is the region close to a source or aperture. The 2D Contour window displays the near-field image of the beam intensity with specified contour overlays, and the 2D contour near-field image shows only a few peaks (Figure 2). The 3D Profile window displays the near-field image of the beam intensity in a three-dimensional viewing format, and the 3D profile near-field image shows that the peaks are around 60.7% to 90% of the beam intensity (Figure 3). There is a significant difference in mean hematocrit level before and after irradiation (p-value = 0.000) (Table 1): the mean hematocrit level increases significantly after laser irradiation.
The differences in mean increase of hematocrit level after irradiation (Table 2) for each age group do not depend on the gender of the patient (p-value = 0.188). Hence, the differences in mean hematocrit level before and after irradiation were examined according to age and gender of the patient separately (Table 3). There is a significant difference in mean increase of hematocrit level after irradiation among the age groups (p-value = 0.024) and between genders (p-value = 0.055).

Table 2 The effect of age and gender on the mean level of HCT (%):

Gender | Age 10-24 | Age 25-40 | Age >40  | p-value
Male   | 2.13±.74  | 1.22±.71  | .51±.43  | .188
Female | 2.47±.54  | 1.83±.79  | 1.99±.50 |

Statistical significance: * p<0.10, ** p<0.05, *** p<0.01.

The mean increase of hematocrit level after irradiation is
the highest for the age group 10 to 24 years old (Figure 4). The age groups 25 to 40 and more than 40 years old show a lower mean increase of hematocrit level after irradiation. The mean increase of hematocrit level after irradiation is higher for females than for males (Figure 5).

Table 3 Comparison of the difference in HCT (%) according to characteristics of respondents:

Characteristic | Category        | Mean ± std error mean | p-value
Age            | 10-24 years old | 2.35 ± .43            | .024**
               | 25-40 years old | 1.48 ± .53            |
               | >40 years old   | 1.15 ± .32            |
Gender         | Male            | 0.98 ± .33            | .055*
               | Female          | 2.14 ± .33            |

Statistical significance: * p<0.10, ** p<0.05, *** p<0.01.

Fig. 4 Hematocrit level of males and females with age

Fig. 5 Hematocrit level of different age groups with gender

The difference in mean flux peak after irradiation (Table 4) for each age group depends on the gender of the patient (p-value = 0.002). At ages 10 to 24 years old (p-value = 0.035) and more than 40 years old (p-value = 0.060), there is a significant difference in mean flux peak after irradiation between genders; the mean flux peak for males is higher than for females (Figure 6). At ages 25 to 40 years old, there is no significant difference in mean flux peak after irradiation between genders (p-value = 0.889).

Table 4 The effect of age and gender on the mean level of flux peak:

Gender  | Age 10-24   | Age 25-40   | Age >40     | p-value
Male    | 166.66±6.00 | 160.65±5.77 | 157.83±3.52 | .054*
Female  | 149.05±4.34 | 169.35±6.42 | 145.82±4.04 | .025**
P-value | 0.035**     | 0.889       | 0.060*      | .002***

Statistical significance: * p<0.10, ** p<0.05, *** p<0.01.

Fig. 6 Flux Peak level for male and female in different age groups

IV. CONCLUSIONS

The findings from this study indicate that low-level He-Ne laser irradiation (632.8 nm; 0.95 mW) causes a significant increase in hematocrit level. The increase is higher for patients aged 10 to 24 years old and for female patients. The beam parameter, flux peak, is affected by both the age and the gender of the patients; the flux peak level is high for males aged 10 to 24 years old and more than 40 years old. Furthermore, encircled flux analysis demonstrated good future prospects in blood research, leading the way as a vibrant diagnostic tool for clarifying diseases associated with blood cells.

ACKNOWLEDGMENT

The authors acknowledge the research grant provided by Universiti Sains Malaysia (USM), Penang, Malaysia that resulted in this article. We would also like to thank the staff of the diagnostic laboratory of the USM Wellness Centre for their assistance and support of this research.

REFERENCES
1. X. Q. Mi, J. Y. Chen, L. W. Zhou, J. Photochem. Photobiol. B: Biol. 83 (2006) 146-150.
2. S. Halevy, R. Lubart, H. Reuveni, N. Grossman, Laser Therapy 9 (1997) 159-164.
3. T. Lundeberg, M. Malen, Ann. Plast. Surg. 27 (1991) 535-537.
4. N. Kipshidze, H. Sahota, H. Wolinsky, R. A. Komorowsky, L. E. Boerboom, S. D. Keane, M. H. Keelan, J. E. Baker, Circulation 90 (1994) 327-332.
5. M. Wasik, E. Gorska, M. Modzelewska, K. Nowicki, B. Jakubczak, U. Demkow, J. Physiol. Pharmacol. 58, Suppl 5 (2007) 729-737.
6. G. A. Zalesskaya and E. G. Sambor, J. Appl. Spectrosc. 72, 2 (2005) 242-248.
7. A. N. Korolevich, T. V. Oleinik, and A. Ya. Khairullina, Zh. Prikl. Spektrosk. 57 (1992) 152-156.
8. A. Ya. Khairullina and T. V. Oleinik, Zh. Prikl. Spektrosk. 63 (1996) 316-322.
9. S. M. Shapshay, Endoscopic Laser Surgery Handbook, Marcel Dekker, New York and Basel, 1987.

Author: Hend A. A. Houssein
Institute: School of Physics, Universiti Sains Malaysia, 11800 Penang, Malaysia
Street: Minden, 11800 USM, Pulau Pinang, Malaysia
City: Penang
Country: Malaysia
Email: [email protected], [email protected]
Three-Dimensional Fluid-Structure Interaction Modeling of Expiratory Flow in the Pharyngeal Airway M.R. Rasani, K. Inthavong, and J.Y. Tu School of Aerospace, Mechanical and Manufacturing Engineering, RMIT University, Bundoora, 3083 Vic, Australia
Abstract— This article aims to simulate the interaction between a simplified tongue replica and expiratory air flow, and to compare laminar and turbulent flow in the pharyngeal airway. A three-dimensional model with laminar and low-Re SST turbulence modelling is adopted. An Arbitrary Lagrangian-Eulerian description for the fluid governing equations is coupled to the Lagrangian structural solver via a partitioned approach, allowing deformation of the fluid domain to be captured. Examining initial constriction heights ranging from 0.8 mm to 11.0 mm and tongue replica moduli from 1.25 MPa to 2.25 MPa, the influence of these parameters on the flow rate and collapsibility of the tongue is investigated and discussed. The numerical simulations confirm the expected predisposition of apneic patients with a narrower airway opening and reduced airway stiffness to flow obstruction. In addition, more severe tongue collapsibility is predicted if the pharyngeal flow regime is turbulent rather than laminar.

Keywords— Apnea, Fluid-Structure Interaction, Exhalation.

I. INTRODUCTION

Collapse of the human pharyngeal airway during sleep affects 2-4% of adults (1), with an estimated 10% of snorers being at risk of obstructive sleep apnea (2). These episodes of partial or full cessation of breathing per hour influence the quality of sleep, reduce brain oxygen saturation and have been linked to hypertension and heart failure (2). Both neurological and physiological factors have been implicated in apneic syndrome. Previous studies (e.g. (3)) have shown that apneic patients have, in general, a narrower opening in the oropharynx region (perhaps due to obesity or tissue build-up). In addition, neurological conditions controlling the tone of airway tissues and respiratory muscles during sleep have also been cited. Not surprisingly, with negative intraluminal pressure and greater wall collapse, much investigation of apnea has focused on inspiration, and not much has dealt with expiratory obstruction. The coupling between a two-dimensional, laminar expiratory flow and a shell model developed by Chouly et al. (1) was successful in obtaining efficient real-time results and was validated against pressurised latex material in their experiments. Indeed, based on the Reynolds number, the flow is expected to be within the laminar (Re < 2000) regime. However, the complex geometry of the pharyngeal airway could initiate transition and turbulence at lower Re (4). Hence, the present study instead considers a three-dimensional model of expiratory fluid flow coupled to a similar replication of the tongue, comparing different breathing regimes: both laminar and low-Re turbulent conditions. The effect of the initial airway opening and the tongue stiffness on the collapsibility and flow rate inside the pharyngeal airway is investigated parametrically.

II. COMPUTATIONAL MODEL

Following the approach of Chouly and co-workers (1), a simplified three-dimensional flow configuration representing the pharyngeal airway is illustrated in Fig. 1, with the tongue idealised as a pressurised shell. With Strouhal numbers of the order of 10^-3 (5), the flow is considered quasi-steady and is characterised by the standard continuity and Navier-Stokes equations. For laminar conditions, these read:

\[ \frac{\partial u_i}{\partial x_i} = 0 \qquad (1) \]

\[ \frac{\partial (u_j u_i)}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[ \nu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \right] \qquad (2) \]

where u_i and p are the fluid velocities and pressure (i = 1, 2, 3 for three-dimensional analysis), ρ is the fluid density and ν is the kinematic viscosity. For turbulent expiratory conditions, avoiding the intensive computational effort involved in a three-dimensional Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS), the Reynolds-Averaged Navier-Stokes (RANS) equations coupled to a Shear Stress Transport (SST) k-ω turbulence model are used to model the fluid. The governing equations are essentially similar to (1) and (2) above, but with the inclusion of Reynolds stress
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 467–471, 2011. www.springerlink.com
Fig. 1 Pharyngeal airway model (Pin: inlet pressure; Pout: outlet pressure; the base of the tongue forms the compliant wall)

terms in equation (2), thus requiring two additional equations to solve for the turbulent kinetic energy k and the dissipation rate ω, which influence these Reynolds stress terms. These fluid governing equations are solved with a commercial fluid solver, CFX. The governing equation for steady structural deformation is:
\[ \nabla \cdot \sigma_{ij} + f = 0 \qquad (3) \]

where σij is the stress tensor and f is the external forcing term (i, j = 1, 2, 3 for three-dimensional structures), which in this case includes the external pressure pe plus the internal fluid pressure and shear at the interface Γ. The actual mechanical properties of the tongue are inhomogeneous and anisotropic, more so with varying degrees of muscle activation. For simplicity, a homogeneous isotropic material undergoing linear small deformation is assumed for the tongue. The hydrostatically pressurised latex tube which replicates the tongue in Chouly et al. (1) is similarly modelled here using an elastic thin shell. A Young's modulus of 1.75 MPa is used for the shell, giving a similar response to the experiments in (1), together with a Poisson ratio of 0.499 (to account for the incompressibility of the water-filled tongue replica). The elastic wall is assumed fixed everywhere except at the face exposed to the pharyngeal flow. This simplified tongue structure is modelled and solved using a commercial finite element solver, ANSYS. In order to account for boundary deformation in the fluid mesh, the fluid governing equations are recast in an Arbitrary Lagrangian-Eulerian (ALE) description in CFX, where the fluid mesh deformation and velocity are solved based on a Laplacian diffusion model. The fluid-structure interaction is achieved by matching the forces and velocities at the common interface between the fluid and the structure via successive iterations. This overall algorithm, coupling ANSYS and CFX, is summarised in Fig. 2.

Fig. 2 Fluid-structure coupling flowchart
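The partitioned coupling summarised in Fig. 2 amounts to a fixed-point iteration between the two solvers. A minimal sketch, with hypothetical one-degree-of-freedom stubs standing in for the CFX and ANSYS solves (the stub responses and all numbers are illustrative, not from the paper):

```python
def solve_fluid(interface_position):
    """Stub for the fluid solve (CFX in the paper): returns the
    interface load; here a hypothetical linear pressure response."""
    return -1000.0 * interface_position

def solve_structure(interface_load):
    """Stub for the structural solve (ANSYS in the paper): returns the
    interface displacement under the given load (linear spring)."""
    return interface_load / 50000.0

def coupled_iteration(tol=1e-8, max_iter=100, relax=0.5):
    """Fixed-point (successive-substitution) FSI coupling with
    under-relaxation, iterated until the interface displacement
    change falls below tol."""
    x = 0.01  # initial interface position (m), assumed
    for _ in range(max_iter):
        load = solve_fluid(x)           # fluid solve on current shape
        x_new = solve_structure(load)   # structure solve under that load
        if abs(x_new - x) < tol:
            return x_new
        x = x + relax * (x_new - x)     # under-relaxed interface update
    return x
```

Under-relaxation of the interface update is a common way to stabilise such partitioned schemes when the structure is very compliant.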
III. RESULTS AND DISCUSSION

A. Parametric Investigation

For the purpose of studying the influence of the following physiological variations on the collapsibility of the pharyngeal airway, the following ranges of parameters were simulated under laminar conditions:

1. Initial constriction height (ho): 0.8-11.0 mm
2. Elastic wall modulus (E): 1.25-2.25 MPa
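The sweep over these two parameters can be laid out as a simple grid of cases. In the sketch below, the height values are those quoted in the text and annotated in Fig. 3, and run_case is a hypothetical stub standing in for one coupled CFX/ANSYS simulation:

```python
import itertools

heights_mm = [0.8, 1.2, 3.0, 5.0, 11.0]   # initial constriction height ho
moduli_mpa = [1.25, 1.75, 2.25]           # elastic wall modulus E

def run_case(ho_mm, e_mpa):
    """Stub for a single fluid-structure simulation case."""
    return {"ho_mm": ho_mm, "E_MPa": e_mpa}

# Full factorial sweep: 5 heights x 3 moduli = 15 cases.
cases = [run_case(h, e) for h, e in itertools.product(heights_mm, moduli_mpa)]
```

In the paper each parameter is actually varied with the other held fixed (E = 1.75 MPa when sweeping ho; ho = 1.2 mm when sweeping E), so only a subset of this grid is reported.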
Fig. 3 Effect of initial airway opening ho on flow rate Q and constriction height h for a fixed tongue replica modulus E = 1.75 MPa (Qo = 10 L/min, Po = 100 Pa and ho = initial constriction height)
Fig. 4 Effect of tongue modulus E on flow rate Q and constriction height h for a fixed initial opening ho = 1.2 mm (Qo = 10 L/min, Po = 100 Pa and ho = initial constriction height)

The initial constriction height is varied to simulate the effect of different degrees of opening behind the tongue within the oropharynx in apneic or non-apneic subjects. Furthermore, this height reflects variations in patient conditions influenced by, perhaps, obesity and hypertrophy of the tissues surrounding the pharyngeal airway. The influence of the modulus of the tongue, meanwhile, is intended to represent the variation in mechanical properties in the
human population and is also indicative of the degree of airway tissue compliance. In order to investigate the first parameter (initial constriction height ho), the elastic modulus E and external pressure pe were fixed at 1.75 MPa and 200 Pa respectively. The non-linear variation of the flow rate with intraluminal pressure difference Δp = pin - pout is shown in Fig. 3 for various ho. In particular, the increased flow rate and reduced
degree of flow plateauing for higher ho suggest that there is less flow obstruction with increasing ho. This is indeed the case, as shown in the same figure, where the degree of elastic wall collapse (h/ho) is less severe with increasing initial opening. In addition, the current simulations also illustrate the typical trend of flow rate limitation (i.e. the exhalation flow rate does not increase linearly with increasing expiratory effort Δp). The second parameter investigated is the effect of the tongue modulus on the expiratory flow; for this, a configuration with an initial constriction height ho of 1.2 mm and pe = 200 Pa was used. The effect of soft and stiff tongues (modulus 1.25 < E < 2.25 MPa) is summarised in Fig. 4. A reduction in modulus translates to increased compliance of the airway cross-section to the applied pressures. This is shown by the greater collapse of the softer tongue with increasing intraluminal pressure difference Δp (for example, at maximum Δp = 200 Pa, E = 1.25 MPa gives h/ho = 0.58 compared to E = 2.25 MPa, which gives h/ho = 0.71). As a result, this increased obstruction corresponds to decreased flow rates at lower modulus values. Comparing the relative influence of both parameters, ho and E, on the wall collapse and flow rate suggests that both are important. For instance, a 40% stiffening of the tongue modulus (E = 1.25 to 1.75 MPa) leads to approximately an 11% reduction in wall collapse, while a 50% increase in initial airway opening (ho = 0.8 to 1.2 mm) yields a 10% reduction in collapse. Compliance of the pharyngeal airway is a function of both its tissue constituents and its degree of muscle activation. Hence, neurological factors which influence the muscular control of the pharyngeal tissues (and hence their compliance) are also important in these apneic syndromes. As suggested by the results, lack of pharyngeal stiffness encourages greater wall collapse, thus possibly exacerbating apneic conditions.

B. Elastic Wall Deflection

The contracting flow underneath the tongue replica leads to an increase in flow velocity, which is accompanied by a corresponding drop in pressure - a 'venturi effect'. This suction pressure (negative pressures typically shown in Fig. 5 underneath the constriction) effectively 'pulls' the tongue replica downwards, implying an increasingly posterior collapse of the tongue with increasing expiratory effort. This may lead to increased airway obstruction and, perhaps, expiratory snoring. The typical deflection of the tongue replica, viewed axially towards the outlet, is shown in Fig. 6 for several intraluminal pressure differences Δp. The diverging flow downstream of the narrowing oropharynx is likely to contribute to flow instabilities that lead to transition to turbulence. These flow structures include vortices and additional diffusion that affect the fluid-structure interaction. In comparison to laminar flow, the low-Re turbulent flow predicts a more severe elastic wall collapse, as illustrated in Fig. 6.

Fig. 5 Pressure contour on the mid-sagittal plane of the simplified pharyngeal airway under laminar conditions (for case ho = 1.2 mm)
Fig. 6 Deflection profile of the tongue replica along its narrowest opening (looking axially towards the pharyngeal outlet) at intraluminal pressure differences Δp = 40, 120 and 200 Pa, for laminar and low-Re turbulent conditions. Note that the vertical axis represents the mid-sagittal plane of the simplified pharyngeal airway and the posterior pharyngeal wall is located horizontally at the bottom at y = -0.00135 m

This would suggest that a much more severe pressure drop is experienced by the elastic wall in transitional or turbulent flow as opposed to laminar flow. This is consistent with numerical findings on a rigid pharynx modelled by Shome et al. (4), where, at similar flow rates, a 40% increase in pressure drop was predicted in turbulent flow compared to laminar flow. This drop is expected to be more severe when the tongue is considered elastic and collapsing, as indicated here. Thus, the flow regime in the pharyngeal airway has significant effects on the degree of apneic obstruction and is important to consider in pre-operative treatments.
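Whether the pharyngeal flow should be treated as laminar or turbulent can be checked with a quick order-of-magnitude estimate of the non-dimensional groups used above (a sketch; the numerical values below are assumed for illustration and are not taken from the paper):

```python
def reynolds(rho, velocity, length, mu):
    """Reynolds number Re = rho * U * L / mu."""
    return rho * velocity * length / mu

def strouhal(freq, length, velocity):
    """Strouhal number St = f * L / U."""
    return freq * length / velocity

# Illustrative values (assumed): air (rho ~ 1.2 kg/m^3, mu ~ 1.8e-5 Pa.s)
# through a ~10 mm opening at ~1 m/s, with breathing at ~0.25 Hz.
re = reynolds(1.2, 1.0, 0.01, 1.8e-5)   # ~670: nominally laminar (< 2000)
st = strouhal(0.25, 0.01, 1.0)          # ~2.5e-3: supports the quasi-steady assumption
```

Even when such an estimate falls well below Re = 2000, the complex pharyngeal geometry may still trigger transition, which is exactly why the low-Re turbulent case is compared against the laminar one.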
IV. CONCLUSION

A three-dimensional rendering of the laminar airway-tongue replica investigation by Chouly and co-workers (1, 5)
is presented, and its comparison to a low-Re turbulence case has been investigated. The low-Re turbulent fluid-structure interaction simulation predicts a more severe pressure drop in the constriction compared to laminar conditions. The parametric study illustrated the tendency of a narrower pharyngeal opening and increased airway compliance towards apneic obstruction, stressing the importance of both physiological and neurological factors in the pathogenesis of apneic obstruction. Future work incorporating the unsteady inhalation and expiration cycle is intended, to look at the full breathing effect on the tongue dynamics. Ultimately, a realistic pharyngeal airway model is also intended to be used for further fluid-structure interaction investigation.
ACKNOWLEDGEMENTS
REFERENCES

1. Chouly F, Van Hirtum A, Lagree P-Y, Pelorson X, & Payan Y (2008) Numerical and Experimental Study of Expiratory Flow in the Case of Major Upper Airway Obstructions with Fluid-Structure Interactions. Journal of Fluids and Structures 24:250-269.
2. Bertram CD (2008) Flow-Induced Oscillation of Collapsed Tubes and Airway Structures. Respiratory Physiology & Neurobiology 163:256-265.
3. Brown IG, Bradley TD, Phillipson EA, Zamel N, & Hoffstein V (1985) Pharyngeal Compliance in Snoring Subjects With and Without Obstructive Sleep Apnea. American Review of Respiratory Disease 132(2):211-215.
4. Shome B, et al. (1998) Modelling of Airflow in the Pharynx With Application to Sleep Apnea. Journal of Biomechanical Engineering 120:416-422.
5. Van Hirtum A, Pelorson X, & Lagree P-Y (2005) In Vitro Validation of Some Flow Assumptions for the Prediction of the Pressure Distribution During Obstructive Sleep Apnoea. Medical & Biological Engineering & Computing 43:162-171.
The financial support provided by the Australian Research Council (project ID LP0989452) and by RMIT University through an Emerging Researcher Grant is gratefully acknowledged. In addition, many thanks to Dr. Franz Chouly for many fruitful communications.
A Hybrid Trial to Trial Wavelet Coherence and Novelty Detection Scheme for a Fast and Clear Notification of Habituation: An Objective Uncomfortable Loudness Level Measure Mai Mariam Universiti Teknikal Malaysia Melaka, Malaysia
Abstract— Objective: To determine an uncomfortable loudness (UCL) level is not an easy task, especially in children. A means to measure this level objectively is crucial, as hearing device candidates are getting younger. Previous studies have shown that the feasibility of habituation correlates in late auditory evoked potentials as a measurement technique for UCL identification is promising. Nevertheless, a scheme that could provide fast and clear notification that an UCL level has been reached is desirable. The present study introduces a hybrid trial to trial wavelet coherence and novelty detection scheme to extract, and to give objective notification of, the habituation correlates in late auditory evoked potentials. Methods: Data were obtained from 10 normal hearing subjects. The auditory stimuli were pure tones of 1 kHz with a duration of 40 ms and a constant interstimulus interval (ISI) of 1 s, presented at 50 decibel (dB) sound pressure level (SPL) and 100 dB SPL consecutively. Subjects were relaxed and awake throughout the experiment. Results: The proposed scheme is able to highlight the presence of habituation, and the number of stimuli could be reduced by up to 60% while still yielding meaningful results. Conclusions: This paper shows that the proposed approach provides an encouraging foundation for a fast and reliable scheme for developing an objective loudness scaling measurement, in particular for determining the UCL level.

Keywords— habituation, wavelet coherence, novelty detection, late auditory evoked potential
I. INTRODUCTION

Hearing devices (hearing aids and cochlear implants) bring people with anything from a minor hearing disability to profound hearing loss back to the hearing world. Hearing not only enables us to listen to sound; it also plays a huge role in oral communication skills. Hearing device candidates are now as young as 3 months old, as excellent performance in speech perception and production is found in children who receive implantation at a younger age in comparison to children implanted at an older age [1]. Fitting a hearing device to a young child is not an easy task. Behavioral observation audiometry, which estimates loudness perception from the physical or emotional responses of the child (for instance, sudden quieting or screaming, a decrease in activity level, or turning in search of the sound source), has limited application and cannot reliably estimate auditory sensitivity [2]. Simple behavioral observation audiometry in children can be misleading and ultimately lead to mismanagement [3]. The most difficult level of loudness perception is the uncomfortable loudness (UCL) level. In a subjective measure, a patient has to decide whether a particular sound is loud enough for him or her; if an adult would find this task difficult, it is impossible to perform in non-cooperative newborns. To make matters worse, surveys have shown that many hearing aid users are dissatisfied with the loudness of their hearing aids [4], especially at the UCL level. In [5], we introduced an electrophysiological technique which applies the phenomenon of response reduction under repeated stimulation to differentiate between responses to high-intensity stimulation at 100 decibel (dB) sound pressure level (SPL) and stimulation at 50 dB SPL. We found a unique signature (insignificant habituation) of large-scale neural correlates of habituation in late auditory evoked potentials (LAEPs) when stimulating with 100 dB SPL. In contrast to the 100 dB SPL stimulated responses, the 50 dB SPL stimulated responses showed clear habituation behavior. The feasibility of habituation correlates in LAEPs as an objective measurement of loudness scaling has thus been shown to be promising. However, a scheme that could provide fast and clear notification that an UCL level has been reached is desirable.

In this paper, the post-processing in [5] is improved by combining the SSTC scheme with a novelty detection approach in order to achieve a fast and reliable identification scheme for habituation extraction. The same data as in [5] are used, and a comparison between the previous and present post-processing results is made. The details of the present technique and a brief review of the data acquisition are discussed in the following sections.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 472–475, 2011. www.springerlink.com
scale, s and translation parameter, n , the wavelet coherence of two signals a and is defined as
II. METHODOLOGY A. Data acquisition Subjects: 10 normal hearing subjects (4 female and 6 male; mean age: 30 years and 11 months with a standard deviation of 3 years and 9 months) with healthy and had no history of hearing problems with a normal hearing threshold (below 15dB hearing level) were participating in this study. As the stimulation intensity level (100dB SPL) is too high each subject received an audiogram test before and immediately after the experiment to ensure post-experimental effects occurred. Materials and Experiment Paradigms: The auditory stimuli were pure tones of 1 kHz with a duration of 40 ms and a constant interstimulus interval (ISI) of 1 s. The auditory stimuli were presented monaurally to the right ear via a headphone. The electroencephalographic (EEG) recordings were performed in a sound proof room. Subjects were lying on an examination bed and relaxing. Stimuli were presented at two stimulation levels; 50dB SPL (lower sound) and 100dB SPL (higher sound) consecutively with 3 minutes break in between. The signals were recorded using surface electrodes (Ag/AgCl) which were placed at the left and right mastoid, the vertex and the upper forehead. The EEG was sampled at 512 Hz and filtered using a digital filter (bandpass 1Hz-30Hz). Trials that contained artefacts were rejected using threshold detection (amplitude larger than 50µV). At least 800 single sweep LAEPs of each subject were obtained in this way. See [5] for details explanation of data acquisition. B. Habituation We hypothesize that when a sound is too loud, it is hard to draw away one’s attention towards the sound. We have tested this hypothesis in [5] , [7] and we have found that insignificant or absence of habituation behavior responses when subjects were stimulated by a high stimulation level or UCL perceived intensity level in comparison to lower level of stimulations. 
Our results are supported by previous findings, such as more rapid habituation with stimuli of lower intensity [8] and the report that strong stimuli may yield no significant habituation [9]. Moreover, [10] and [11] observed that habituation is weaker and slower when a subject pays attention to the stimuli.
$(W_{\varphi,t})_{a,b}(s,n) = \dfrac{\left| (\Omega_{\varphi,t})_{a,b}(s,n) \right|}{\sqrt{(\Omega_{\varphi,t})_{a,a}(s,n)\,(\Omega_{\varphi,t})_{b,b}(s,n)}} \qquad (1)$

where $(\Omega_{\varphi,t})_{a,b}$ is the wavelet cross spectrum between signals $a$ and $b$, and $\varphi$ and $t$ represent the wavelet and the window used to calculate the wavelet coherence, respectively.
To capture the correlation between two consecutive trials $T$ over the experiment (which yielded $M$ trials), equation (1) is modified to
$Z_{T_m}(s,n) = W_{T_{m-1},T_m}(s,n), \qquad (2)$

and $Z$ is called the trial-to-trial wavelet coherence, with $m = 2, \ldots, M$. Habituation is defined as a reduction of response over repeated stimulation. Hence, should habituation occur at the present trial in comparison to the immediately previous trial, the coherence between the two trials tends towards zero, as the trials are no longer correlated. See [5] for a detailed explanation of trial-to-trial wavelet coherence and references therein.
D. Novelty Detection
Trial-to-trial wavelet coherence provides informative features that distinguish habituating from non-habituating responses. In order to extract the moment at which habituation appears, we further analyze the wavelet coherence results with novelty detection. We observed in [5] that the reduction of wavelet coherence (habituation) occurs several trials after the experiment begins. Therefore, we take these first $\eta$ trials (training set) to train a classifier. This classifier is generated from all the information of the $\eta$ trials and forms a hypothetical sphere with center $c$ and radius $K$. To minimize this sphere so that the learning machine is free from false negative errors, the problem is solved by
$\min_{c \in F,\; K \in \mathbb{R},\; \varphi \in \mathbb{R}^{\eta}} \; K^{2} + \alpha \sum_{i=1}^{\eta} \varphi_{i} \qquad (3)$
where $\alpha$ is a trade-off parameter that balances the sphere volume against the number of false positive errors, and the $\varphi_i$ are slack variables introduced by [13] to reduce the classifier's sensitivity. Eq. (3) has to be optimized under the constraints

$\left\| W_{i} - c \right\|^{2} \leq K^{2} + \varphi_{i}, \qquad \varphi_{i} \geq 0 \qquad (i = 1, \ldots, \eta). \qquad (4)$
The generated classifier contains all the information of the insignificantly habituating trials. Should a consecutive trial (test trial) after the $\eta$ trials habituate, the generated novelty measure is 1, as the trial carries highly different information. In other words, the more novelty measures equal to 1 in an experiment, the higher the degree of habituation.
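The coherence-plus-novelty-detection pipeline can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: PyWavelets' complex Morlet CWT stands in for the paper's wavelet transform, scikit-learn's `OneClassSVM` (a one-class classifier equivalent to the SVDD-style sphere of Eqs. (3)-(4)) replaces the sphere formulation, and the toy data, scale range and smoothing window are all invented for the example.

```python
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def _smooth(x, win):
    # moving-average smoothing over time (the window t in the coherence definition)
    k = np.ones(win) / win
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), -1, x)

def wavelet_coherence(a, b, scales, wavelet="cmor1.5-1.0", win=9):
    # Eq. (1): |smoothed cross spectrum| / sqrt(product of smoothed auto spectra)
    Wa, _ = pywt.cwt(a, scales, wavelet)
    Wb, _ = pywt.cwt(b, scales, wavelet)
    cross = Wa * np.conj(Wb)
    Sab = _smooth(cross.real, win) + 1j * _smooth(cross.imag, win)
    Saa = _smooth(np.abs(Wa) ** 2, win)
    Sbb = _smooth(np.abs(Wb) ** 2, win)
    return np.abs(Sab) / np.sqrt(Saa * Sbb + 1e-12)

rng = np.random.default_rng(0)
trials = rng.standard_normal((300, 64))     # toy single-trial responses (M x samples)
scales = np.arange(1, 17)

# Eq. (2): Z_m = coherence between trial m-1 and trial m, flattened to a feature vector
Z = np.array([wavelet_coherence(trials[m - 1], trials[m], scales).ravel()
              for m in range(1, len(trials))])

eta = 50                                    # first eta trials train the classifier
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(Z[:eta])
novelty = (clf.predict(Z[eta:]) == -1).astype(int)  # novelty measure: 1 = novel trial
print(novelty.sum(), "novel trials out of", novelty.size)
```

On real LAEP data the training trials would be the early, non-habituated responses, so a novelty measure of 1 on a later test trial flags a coherence drop, i.e. habituation.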
IFMBE Proceedings Vol. 35
M. Mariam
III. RESULTS
In previous results, as in [5], we saw that habituating responses (low sound) produce a continuous reduction of wavelet coherence compared with the responses to the higher sound (insignificant changes of wavelet coherence over time); see [5] for a detailed implementation of trial-to-trial wavelet coherence and the obtained results. To determine a suitable and reasonable number of trials for training the classifier, we applied, based on the results in [5], training sets of the first 50, 100 and 200 trials. In the following figures we illustrate all trials, including the training set. Fig. 1(a) and Fig. 1(b) show the grand average over 10 subjects of the novelty measure for low and high sound, respectively, with 50 training trials. We observe that low sound produces more novelty than high sound: novelty measures equal to 1 appeared from trial 1 to trial 17 within the first 100 test trials (the consecutive trials after the 50 training trials). In contrast, high sound produced fewer novelty measures, and its first novelty measure equal to 1 within the 100 test trials appeared only as late as trial 32.
Fig. 2(a) and Fig. 2(b) show the grand average over 10 subjects of the novelty measure for low and high sound, respectively, with 100 training trials. Novelty measures were present in 18 to 20 trials within the first 100 test trials. Interestingly, with 100 trials used to train the classifier, high sound produced zero novelty measures in the first 157 test trials and only 8 novelty measures over the whole experiment, which shows that at this point the responses clearly did not habituate in comparison to low sound.
Fig. 2 The grand average of 10 subjects' novelty measures for low sound (a) and high sound (b) with 100 trials for the training set
Fig. 1 The grand average of 10 subjects' novelty measures for low sound (a) and high sound (b) with 50 trials for the training set
Fig. 3(a) and Fig. 3(b) show the grand average over 10 subjects of the novelty measure for low and high sound, respectively, with 200 training trials. For low sound, major habituation clearly occurred approximately 220 trials after the experiment began. In contrast, high sound produced its first novelty measure after 157 trials (the same as with 100 training trials), and its number of novelty measures was 95% lower than for low sound.
V. CONCLUSIONS
In the present study, we improved the post-processing technique for extracting habituation from LAEPs by combining trial-to-trial wavelet coherence with novelty detection. The presence of habituation is identified faster and highlighted more clearly than in the previous study. These findings offer a promising technique for an objective UCL determination procedure and analysis.
ACKNOWLEDGMENT
Fig. 3 The grand average of 10 subjects' novelty measures for low sound (a) and high sound (b) with 200 trials for the training set
The author would like to thank Computational Diagnostics and Biocybernetics Unit, Saarland University of Applied Sciences and Saarland University Hospital, Homburg, Germany for the LAEP data collection.
IV. DISCUSSIONS
In the present study, we further analyzed the results in [5] in order to achieve a fast and reliable identification of the habituating response. With a combination of trial-to-trial wavelet coherence and novelty detection schemes, the moment of habituation is clearly highlighted. Trial-to-trial wavelet coherence is a reliable method for extracting habituation, and novelty detection analysis is able to highlight abnormality in a train of data, which provides a suitable method for identifying a reduction of response in real time.
Not all 512 samples from the wavelet coherence analysis are used in the novelty detection analysis. As the measurement is based on the neurophysiological effects of auditory habituation reflected in one LAEP component, the N100 wave, and as the N100 wave may jitter between 80 ms and 120 ms, we used samples 40 to 60 (the signal is sampled at 512 Hz). The N1 wave is generally assumed to reflect sensory processing, as in selective attention, as well as physical attributes of a stimulus such as intensity [12]. The LAEP can be elicited from cochlear implant patients [13] and offers technical, physiological, medical and psychological advantages in comparison to earlier potentials such as the auditory brainstem response [14].
Three training-set sizes were applied, and it was shown that 200 training trials are suitable for this purpose; both sound levels were clearly differentiated. Based on Fig. 3(b), the present approach enables us to estimate that the stimulation level is approaching the higher (UCL) level of sound when an absence of novelty measures over approximately 160 or more test trials is observed. In addition, the proposed approach reduces the number of stimuli needed for meaningful results by up to 75%, as highly habituating behavior is identified at approximately 200 trials (including the training set), as is clearly insignificant habituation, as in the case of high sound; see Fig. 3(a).
REFERENCES
1. Nikolopoulos T.P., O'Donoghue G.M. et al (1999) Age at implantation: its importance in pediatric cochlear implantation. Laryngoscope 10(4):595-599
2. Thompson G., Folsom R. (1981) Hearing assessment of at-risk infants. Current status of audiometry. Clin Pediatr (Phila) 20(4):257-261
3. Jerger J.F., Hayes D. (1976) The cross-check principle in pediatric audiometry. Arch Otolaryngol 102(10):614-620
4. Mueller H.G., Bentler R.A. (2005) Fitting hearing aids using clinical measures of loudness discomfort levels: An evidence-based review of effectiveness. J Am Acad Audiol 16(7):461-472
5. Mariam M., Delb W. et al (2009) Comparing the Habituation of Late Auditory Evoked Potentials to Loud and Soft Sound. Physiol Meas 30(2):141-153
6. Punch J., Joseph A. et al (2004) Most Comfortable and Uncomfortable Loudness Levels: Six Decades of Research. Am J Audiol 13(2):144-157
7. Busse M., Haab L. et al (2009) Assessment of aversive stimuli dependent attentional binding by the N170 VEP component. Conf Proc IEEE Eng Med Biol Soc, 2009, pp 3975-3978
8. Groves P.M., Thompson R.F. (1970) Habituation: A dual-process theory. Psychol Rev 77(5):419-450
9. Wickelgren B.G. (1967) Habituation of spinal motorneurons. J Neurophysiol 30(6):1404-1423
10. Öhman A., Lader M. (1972) Selective Attention and Habituation of the Auditory Averaged Evoked Response in Humans. Physiol Behav 8(1):79-85
11. Ritter W., Vaughan H.G. et al (1968) Orienting and habituation to auditory stimuli: a study of short term changes in average evoked responses. Electroencephalogr Clin Neurophysiol 25(6):550-556
12. Key A.P.F., Dove G.O. (2005) Linking Brainwaves to the Brain: an ERP Primer. Dev Neuropsychol 27(2):183-215
13. Hoppe U., Rosanowski F. (2001) Loudness Perception and Late Auditory Evoked Potentials in Adult Cochlear Implant Users. Scand Audiol 30(2):119-125
14. Brix R., Gedlicka W. (1991) Late Cortical Auditory Potentials Evoked by Electrostimulation in Deaf and Cochlear Implant Patients. Eur Arch Otorhinolaryngol 248(8):442-444
Application of Data Mining on Polynomial Based Approach for ECG Biometric
K.A. Sidek1 and I. Khalil2
1 Faculty of Engineering, International Islamic University Malaysia, P.O. Box 10, 50728 Kuala Lumpur, Malaysia
2 School of Computer Science and Information Technology, RMIT University, Melbourne, 3001 Victoria, Australia
Abstract— In this paper, the application of data mining techniques to a polynomial based approach for a better electrocardiogram (ECG) authentication mechanism is presented. Polynomials have been used for ECG data processing for nearly two decades and have recently brought promising solutions to the heart beat recognition problem. A general polynomial based approach is used in this research: using the polynomial coefficients extracted as unique features from the ECG signals, data mining techniques were applied for person identification. A total of 18 ECG recordings from the MIT/BIH Normal Sinus Rhythm database (NSRDB) were used for development and evaluation. The QRS complexes from each dataset were divided into two parts, the training and the testing datasets, which were used to prove the validity of the applied data mining technique. The experimental results were classified using a Multilayer Perceptron (MLP) in order to confirm the identity of an individual, and were compared with previous research using polynomials without data mining. Our experimentation on a public ECG database suggests that the proposed data mining technique on the polynomial based approach significantly improves the identification accuracy, to 96% compared with 87% in the existing study.
Keywords— ECG, biometrics, polynomial, Multilayer Perceptron, Neural Networks.
I. INTRODUCTION
Identity security is a major concern to a nation's security, law enforcement and economic interests. It is vital in protecting citizens from the theft or misuse of their identities. Failure to have effective means of identification creates opportunities for criminal activity; it also weakens border and citizenship controls and efforts to combat the financing of crime and terrorism. It has been estimated that identity-related fraud alone cost the Australian Government a total of $1.1 billion in 2002 [1]. It is essential that the identities of persons accessing government services, benefits, official documents and positions of trust can be accurately verified. Identifying people with certainty is time consuming and costly for public and private sector organizations. For these reasons, governments work hard to improve identity security, combat identity crime and protect the identities of citizens from being used for illegal purposes. Current initiatives by several
government agencies recommend the use of biometric authentication systems for security and privacy measures. Biometrics is the science of establishing the identity of an individual based on the physical, chemical or behavioural attributes of the person [2]. It provides airtight security, offers an alternative to traditional methods of establishing a person's identity (knowledge based and token based mechanisms), and has become a tool for information privacy and confidentiality in security applications and authentication devices in airports, financial institutions, government agencies, commercial buildings, etc. The primary goal in most applications is to prevent unauthorized users from accessing protected resources. By using biometrics, it is possible to establish an identity based on who you are, rather than by what you possess, such as an ID card, or what you remember, such as a password [2]. A number of biometric modalities have been investigated in the past, examples of which include physiological traits such as face, fingerprint, voice and iris, and behavioural characteristics such as gait and keystroke. However, these biometric modalities either cannot provide reliable recognition accuracy (such as gait and keystroke) or are not robust enough against falsification. For instance, the face is sensitive to artificial disguise, fingerprints can be recreated in latex, the voice is easy to mimic, and the iris can be falsified by using contact lenses with copied iris features printed on them. In recent years, the electrocardiogram (ECG) has been proposed as a new biometric modality for person identification [3, 4, 5, 6]. The validity of using the ECG for biometric recognition is supported by the fact that the physiological and geometrical differences of the heart in different individuals display certain uniqueness in their ECG signals [7]. The ECG signal is also a life indicator and thus can be used for liveness detection.
Compared with some other biometrics, an ECG based biometric system is expected to be more universal and harder to mimic. The results in [8] also suggest the distinctiveness and stability of the ECG as a biometric modality. Several methods have been suggested by researchers to improve the classification accuracy of identifying individuals. In our paper, the use of the Multilayer Perceptron as the data mining technique for classification in the polynomial based approach further diversifies the techniques for ECG biometric authentication. Polynomials being used for ECG
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 476–479, 2011. www.springerlink.com
data processing have a history of some 15 years, and have recently brought promising solutions to the heart beat recognition problem, such as in [9]. Apart from that, polynomials have long been used for biometric authentication purposes, especially in fingerprint based biometric authentication. The proposed data mining technique on the polynomial based approach is more practical and efficient in determining the identity of an individual than the polynomial approach by itself. The experimental results suggest that the classification accuracy significantly improves, to 96%, compared with the existing study in [9]. The remainder of the paper is organized as follows: the next section explains the method of the study, which includes the data acquisition process, the feature extraction method of the polynomial based approach, and the Multilayer Perceptron as the classification mechanism. Section III then discusses the performance comparison with an existing study that used the polynomial based approach without data mining on the ECG database. Finally, in Section IV, we conclude the study based on the experimentation and results of the previous sections.
understand the classification process involved. The ECG has been an indispensable clinical tool in many different contexts over the past decades, including the detection and treatment of various cardiac abnormalities. The ECG signal is the transthoracic interpretation of the electrical activity of the heart. The normal ECG is composed of a P wave, a QRS complex and a T wave as ECG features, commonly known as the PQRST morphology. Each of the waves in the ECG signal describes specific cardiac activity: the P wave represents atrial depolarization, the QRS complex represents ventricular depolarization, and the T wave reflects the phase of rapid repolarization of the ventricles. The ECG signal PQRST morphology is depicted in Fig. 2.
II. METHODOLOGY
A biometric system performs matching, as in pattern recognition problems, between the training and test datasets for unknown features, which later determines the class (identity) of those unknown features. As a result of this learning technique, individuals can be identified for security and privacy purposes. The overall architecture of our proposed system begins with the data acquisition of ECG signals, followed by the identification of the QRS complexes used for the feature extraction procedures. From the QRS waves, the coefficients of the polynomial based approach are used as the unique extracted features. Using these coefficients, classification of the features is performed with a Multilayer Perceptron network, and finally, from the classification results, the identity of unknown attributes can be determined. The proposed model is summarised in Fig. 1.
Fig. 1 The proposed model

As the features are extracted from ECG signals, it is crucial to know the general background of ECG physiology to
Fig. 2 The ECG signal

Therefore, by identifying the morphologies of the ECG signal, cardiovascular diseases can be diagnosed. Furthermore, these morphologies vary from one person to another and, according to recent research, person identification is possible with morphological pattern matching of the ECG signal [3, 4, 5, 6, 9].
A. Data Acquisition
A total of 18 ECG recordings used in this work were taken from an online physiological signal archive for biomedical research, comprising all the subjects of the MIT/BIH Normal Sinus Rhythm database (NSRDB), with a sampling rate of 128 Hz. Each recording contains 30 seconds of ECG signal. The NSRDB includes 18 ECG recordings of subjects referred to the BIH Arrhythmia Laboratory who were found to have had no significant arrhythmias. These ECG entries were obtained from the collection of databases available online from PhysioBank [10], which has been used extensively for benchmarking algorithms pertaining to ECG diagnosis, analysis, compression, biometrics and other research. From the ECG signal itself, amplitude features, analytical methods that describe the morphological shape of the signal, are commonly used to capture the most crucial part of the signal, the R wave. Since the R wave corresponds to the highest, sharpest and most obvious
peak in an ECG, it becomes the reference when selecting the ECG signal. We then select equal numbers of points from the left and the right of the identified R wave. We repeat this selection, which covers the QRS complex, 12 times until we obtain 12 QRS complexes for each subject. We used a first-derivative based technique [11] to automate this feature (i.e. an ECG PQRST signature) extraction.
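The segmentation above can be sketched as follows. This is an illustrative reimplementation of the idea, not the method of [11]: the slope threshold, refractory period, 100 ms refinement window and the toy spike-train "ECG" are all assumptions, with only the 128 Hz sampling rate taken from the NSRDB description.

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory=0.3, k=2.0):
    # the R upstroke produces the steepest positive slope in the signal, so
    # threshold the first derivative, then refine to the nearby local maximum
    d = np.diff(ecg)
    candidates = np.where(d > k * d.std())[0]
    peaks, last = [], -int(refractory * fs)
    for c in candidates:
        if c - last >= refractory * fs:
            w = ecg[c:c + int(0.1 * fs)]       # refine within 100 ms after the slope
            peaks.append(c + int(np.argmax(w)))
            last = peaks[-1]
    return np.array(peaks)

def qrs_windows(ecg, peaks, half_width):
    # equal numbers of points left and right of each R peak (the QRS window)
    return [ecg[p - half_width:p + half_width + 1]
            for p in peaks if half_width <= p < len(ecg) - half_width]

# toy ECG: sharp unit spikes once per second on low noise, fs = 128 Hz as in NSRDB
fs = 128
rng = np.random.default_rng(1)
ecg = 0.03 * rng.standard_normal(10 * fs)
ecg[fs::fs] += 1.0                             # artificial R peaks at 1 s, 2 s, ...
peaks = detect_r_peaks(ecg, fs)
windows = qrs_windows(ecg, peaks, half_width=8)
print(len(peaks), "peaks,", len(windows), "QRS windows of length", len(windows[0]))
```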
1 ≤ n ≤ N, are reached [12]. The MLP acts as a classifier: it estimates the necessary discriminant functions and assigns each input vector to a given class. The learning algorithm adapts the weights by minimizing the error between the given output and the desired output.
B. Feature Extraction
The coefficients of the polynomial equation can be used as biometric features for classification purposes. The polynomial based approach works effectively on the QRS complexes. In general, the polynomials take the form of Equation 1,
$y = C_0 + \sum_{i=1}^{N} C_i \, x^i \qquad (1)$
After the detection of the QRS complexes from the ECG morphology, fitting the general high-order polynomial yields accumulated coefficients from the set of QRS waves, as in Equation 2,
$C_{QRS} = \{C_0, C_1, \ldots, C_N\} \qquad (2)$
where $N$ is the order of the coefficient set for the QRS complex and can be obtained by $N = |C_{QRS}| - 1$. Moreover, the value of $N$ also reflects the total number of attributes used in the classification procedures. Each coefficient in the testing dataset is compared against the average value of the same coefficient in the training dataset to determine the closeness between the coefficients. In other words, we try to find the deviation in coefficient values between the coefficients to be tested.
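The coefficient extraction and the train/test comparison can be sketched with `numpy.polyfit`; this is an assumed stand-in for however the original fitting was done. The 8th order matches the fit reported in Section III, while the window length, abscissa normalisation and toy waveform are invented for the example.

```python
import numpy as np

def qrs_coefficients(qrs, order=8):
    # fit y = C0 + C1*x + ... + CN*x^N over the QRS window (Eq. 1)
    x = np.linspace(-1.0, 1.0, len(qrs))   # normalised abscissa for numerical stability
    coefs = np.polyfit(x, qrs, order)      # numpy returns the highest power first
    return coefs[::-1]                     # reorder as C0 .. CN (Eq. 2)

def coefficient_deviation(test_coefs, train_coefs):
    # compare each test coefficient against the mean of the same training coefficient
    return np.abs(test_coefs - train_coefs.mean(axis=0))

# toy example: 12 noisy copies of one QRS-like bump for a single subject
rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 21)
template = np.exp(-8 * x ** 2)             # crude R-wave shape
beats = template + 0.01 * rng.standard_normal((12, 21))

C = np.array([qrs_coefficients(b) for b in beats])   # 12 beats x 9 coefficients
dev = coefficient_deviation(C[6:], C[:6])            # testing half vs training half
print(C.shape, dev.shape)
```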
Fig. 3 Multilayer Perceptron Network

To determine the relevance of this approach, using the ECG dataset entries available in NSRDB [10], we used ten-fold cross validation to evaluate the generalization accuracy of the induction algorithm. That is, we randomly partitioned the data into ten disjoint sets, provided the algorithm with half of the data as the training set and the remainder as testing cases, repeated this process ten times using the different possible partitions, and averaged the resulting accuracies. Thus, the steps for classification using the MLP are [12]:
1. Present the component vector x = [x1, x2, …, xm] to the perceptron.
2. Compute the values of the hidden layer nodes, hj:
C. Multilayer Perceptron
MLPs have a layered, feedforward structure with an error based training algorithm. The architecture of the MLP is completely defined by an input layer (the features, in our case the polynomial coefficients), one or more hidden layers, and an output layer (the class, or ID). Each layer consists of at least one neuron. The input vector is applied to the input layer and passes through the network in a forward direction through all layers. Fig. 3 illustrates the configuration of the MLP. A neuron in a hidden layer is connected to every neuron in the layers above and below it. In Fig. 3, weight wi connects input node xm to hidden node hj, and weight vk connects hj to output node on. Classification starts by assigning the input nodes xm, 1 ≤ m ≤ l, to the corresponding data vector components. Then data propagate in a forward direction through the perceptron until the output nodes on,
$h_j = \frac{1}{1 + \exp\!\left[-\left(w_{0j} + \sum_{m=1}^{l} w_{jm}\, x_m\right)\right]}$

The activation function of all the units in the MLP is given by the sigmoid function $f(x) = \frac{1}{1 + \exp(-x)}$.
3. Calculate the values of the output nodes based on

$o_n = \frac{1}{1 + \exp\!\left[-\left(v_{0n} + \sum_{j=1}^{l} v_{nj}\, h_j\right)\right]}$
4. The class vector o = [o1, o2, …, on] that the perceptron assigns to x must be binary. Thus, each ok must be thresholded at some level τ, which depends on the application.
5. Repeat steps 1 to 4 for each given input pattern.
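The steps above can be sketched as a direct forward pass. The weights here are random stand-ins (the actual weights come from the error based training, which is omitted), and the layer sizes are illustrative assumptions (9 polynomial coefficients in, 3 identities out).

```python
import numpy as np

def sigmoid(z):
    # the activation function f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W, w0, V, v0):
    h = sigmoid(w0 + W @ x)   # step 2: hidden nodes h_j = f(w0_j + sum_m w_jm x_m)
    o = sigmoid(v0 + V @ h)   # step 3: output nodes o_n = f(v0_n + sum_j v_nj h_j)
    return o

def classify(x, W, w0, V, v0, tau=0.5):
    # step 4: threshold the outputs at level tau to obtain a binary class vector
    return (mlp_forward(x, W, w0, V, v0) >= tau).astype(int)

rng = np.random.default_rng(3)
l, J, n_out = 9, 5, 3                      # inputs, hidden nodes, output classes
W, w0 = rng.standard_normal((J, l)), rng.standard_normal(J)
V, v0 = rng.standard_normal((n_out, J)), rng.standard_normal(n_out)

x = rng.standard_normal(l)                 # step 1: present the feature vector
print(classify(x, W, w0, V, v0))
```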
III. EXPERIMENTATION & RESULTS
The ECG for each subject consists of 12 QRS complexes, so 12 instances are created for each class; in total, there are 216 instances over all classes. To perform classification for person identification, half of the QRS complexes are used as training data and the remaining QRS complexes act as testing data. QRS samples from both datasets were tested with the polynomials to check the polynomial order required for 95% (confidence interval) matching between samples. The polynomial fits shown in Fig. 4 and Fig. 5 show that subjects 16265 and 16273 obtain 99% matching between the QRS sample and the polynomial fit when using an 8th order polynomial function. This means that 99% of the coefficients lie within the coefficient boundaries provided by the 95% confidence level curve calculated from the approximated polynomials.
Table 1 Classification rate using the polynomial based approach

Polynomial based approach               Classification rate
Polynomial without data mining [9]      87%
Polynomial with data mining             96%
IV. CONCLUSION
In this paper, we have demonstrated that applying a data mining technique to the polynomial based approach on an ECG dataset can be effective for ECG based biometric authentication. The results of the experimentation on the ECG dataset suggest that the proposed method significantly improves the identification accuracy, to 96% compared with 87% in the existing study [9]. We have also shown that using the coefficients of the polynomials over the QRS complex is sufficient to act as unique features for person identification, without using the whole ECG morphology.
REFERENCES
Fig. 4 Subject 16265 matching of QRS complexes and polynomial fit
Fig. 5 Subject 16273 matching of QRS complexes and polynomial fit

When these coefficients were applied to the MLP for classification, the results showed an increase of the classification rate from 87%, as in [9], to 96%. The average false positive rate, true positive rate and area under the ROC curve were 0.2%, 96% and 99.84% respectively, which indicates that the MLP is a classifier of high precision. The results are summarised in Table 1.
1. Cuganesan S., Lacey D. (2003) Identity fraud in Australia: an evaluation of its nature, cost and extent. Sydney: SIRCA
2. Jain A.K., Flynn P., Ross A. (2008) Handbook of Biometrics, Springer
3. Biel L., Petterson O., Philipson L., Wide P. (2001) ECG Analysis: A New Approach in Human Identification. IEEE Transactions on Instrumentation and Measurement, vol. 50, no. 3
4. Israel S.A., Irvine J.M., Cheng A., Wiederhold M.D., Wiederhold B.K. (2005) ECG to Identify Individuals. Pattern Recognition, vol. 38, pp. 133-142
5. Shen T.W., Tompkins W.J., Hu Y.H. (2002) One-lead ECG for identity verification. In Proceedings of the 2nd Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 62-63
6. Wang Y., Agrafioti F., Hatzinakos D., Plataniotis K. (2008) Analysis of Human Electrocardiogram (ECG) for Biometric Recognition. EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 148658, 11 pages. DOI: 10.1155/2008/148658
7. Hoekema R., Ujien G.G.H., Van Oosterom A. (2001) Geometrical Aspect of the Inter Individual Variability of Multi Lead ECG Recordings. IEEE Transactions on Biomedical Engineering, vol. 48, pp. 551-559
8. Wuebbler G. et al (2004) Human verification by heart beat signals. Working Group 8.42, Physikalisch-Technische Bundesanstalt (PTB), Berlin, Germany. Available online at: http://www.berlin.ptb.de/8/84/842/BIOMETRIE/842biometriee.html. Retrieved on 15-12-2010
9. Sufi F., Khalil I. (2008) Polynomial Distance Measurement for ECG Based Biometric Authentication. Security and Communication Networks, Wiley Interscience. DOI: 10.1002/sec.76
10. Online (accessed December 2010) "PhysioBank: Physiologic signal archives for biomedical research", http://www.physionet.org/physiobank/
11. Sufi F., Fang Q., Cosic I. (2007) ECG R-R Peak Detection on Mobile Phones. Engineering in Medicine and Biology Society 2007 (EMBS 2007), 29th Annual International Conference of the IEEE, pp. 3697-3700
12. Theis F.J., Meyer-Base A. (2010) Biomedical Signal Analysis: Contemporary Methods and Applications, The MIT Press, pp. 164-166
Brain Waves after Short Duration Exercise Induced by Wooden Tooth Brush as a Physical Agent – A Pilot Study
F. Reza1, H. Omar2, A.L. Ahmed2, T. Begum1, M. Muzaimi1,2, and J.M. Abdullah1,2
1 Universiti Sains Malaysia, Department of Neurosciences, Laboratory for MEG and ERP studies, Kota Bharu, Malaysia
2 Hospital Universiti Sains Malaysia, Department of Neurosciences, Laboratory for MEG and ERP studies, Kota Bharu, Malaysia
Abstract— Several devices or physical agents have been used for the well-being of patients in neurorehabilitation settings; physical agents improve the quality of life. The Miswak tree twig (the Arabic term for Kayu Sugi in Malay), used for cleaning the teeth, has historical and traditional value in different parts of the world. The objective of the present study was to examine the effect of Miswak-induced short exercise on the excitement level of the brain by recording magnetic flux, which is related to intracranial electrical current. Spontaneous magnetoencephalography (MEG) was recorded before and after tooth brushing in four conditions (rest, left hand pinch grip, right hand pinch grip and both-hand pinch grip), and neural signals were recorded from eight regions of the brain (left and right frontal, parietal, temporal and occipital regions). Spectral analysis of 20 s epochs by Fast Fourier Transform (FFT) was performed to obtain the mean power of the alpha (7-13 Hz) and beta (13-30 Hz) frequency bands. The relative power of the alpha band was also measured from the eight regions. We also recorded visual evoked fields (VEF) before and after brushing and measured P100m latency and amplitude, but no significant changes were found in latency or amplitude. From the spontaneous MEG, multivariate analysis of variance showed a significant reduction of alpha power after the Miswak-induced brief exercise in the left and right frontal regions in the both-hand pinch condition, but no changes were found in the relative alpha power across regions and conditions. These results suggest that the brain excitement level increased after the short duration tooth brushing exercise, and that the power measure is more reliable than relative power for neurofeedback.
Keywords— Brain wave frequency, MEG, Physical agent, Miswak (Kayu Sugi), Neurorehabilitation.
I. INTRODUCTION
Several modern and traditional devices or physical agents have been used for many centuries and continue to be used today as components of neurorehabilitation. Physical agents can be classified as thermal (e.g. hot pack, ice pack, diathermy), mechanical (e.g. traction, elastic bandage, water whirlpool, ultrasound), or electromagnetic (e.g. ultraviolet, transcutaneous electrical nerve stimulation). These can promote the resolution of inflammation, accelerate tissue
healing, relieve pain, increase soft tissue extensibility and modify muscle tone [1]. The torpedo fish in about 400 BC and amber in the 17th century were used as sources of electricity for the treatment of headache and arthritis. Through research, some physical agents continue to be used for treatment and some are newly emerging. There are several studies on oral hygiene using wooden tooth brushes [2, 3] and one paper on surface electromyography of neck, face and head muscles during tooth brushing with Miswak [4], but to our knowledge there is no study of oscillatory brain activities before and after brushing the teeth with Miswak. A Miswak user exerts a firm horizontal force (the teeth being vertical) during brushing, which differs from other brushes since its bristles are placed parallel to the handle, in contrast to the perpendicular bristles of other brushes. The objective of this study is to determine the additional effect of Miswak-induced exercise on brain waves, beyond dental hygiene and musculoskeletal exercise.
II. METHODS

A. Spontaneous MEG Recording
Five healthy right-handed volunteers (one female; mean age ± SD, 37 ± 7.17) without any neurological disturbances or drug history were recruited for this study. The study was approved by and followed the Human Ethical Committee of Hospital Universiti Sains Malaysia. The MEG recording was obtained in a magnetically shielded room using a whole-head Elekta Neuromag 306-channel MEG system (Helsinki, Finland) in the Laboratory for MEG and ERP Studies, Department of Neurosciences, Universiti Sains Malaysia. Before recording, four small coils were attached to the left and right sides of the forehead and to the left and right mastoid processes as head position indicators, and head digitization points were taken at the nasion and the left and right preauricular positions for each subject. The sampling rate was 1000 Hz. Each subject was supplied with a Kayu Sugi (a Miswak tooth brush stick from a tree twig) and asked to brush their teeth for one minute (control) (Fig. 1). Spontaneous MEG and visual evoked fields were recorded before and after
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 480–483, 2011. www.springerlink.com
Brain Waves after Short duration Exercise Induced by Wooden Tooth Brush as a Physical Agent – A Pilot Study
the tooth-brush-induced exercise. We also recorded spontaneous MEG from the subjects during right-hand pinch grip, left-hand pinch grip and both-hand pinch grip before and after brushing. Subjects were requested to contract their thumb muscles by pinching the thumb and index fingers at small effort (about 10% of maximal voluntary contraction). In order to observe the muscle contraction, electromyography (EMG) electrodes were placed over the right and left bellies of the abductor pollicis brevis muscles and the EMG was recorded. Spontaneous MEG was recorded for two minutes for each subject, and a 20 s epoch was picked from each recording for analysis of alpha power using the Fast Fourier Transform (FFT size 1024, FFT step 512, Hanning window); the relative alpha power in percent was calculated as αp/(αp + βp) × 100, where p denotes the power value.

B. Visual Evoked Field Recording
The visual evoked field (VEF) was recorded with the same setup before and after the Miswak-induced short-duration tooth brushing exercise, using a checkerboard reversal pattern. The two-color (black and white) check images reversed their spatial positions every half second. The pattern, in horizontal orientation, was projected on a screen (L 70 cm × W 93 cm) positioned 1.5 m in front of the subjects. The individual check size was 6 cm × 4.5 cm, and the total check area on the screen was 26 cm × 35 cm. Visual evoked responses were recorded, and the averaged waveform was then filtered with a 0.2 Hz high-pass filter and a 40 Hz low-pass filter. The analysis time was 50 ms before and 500 ms after the stimulus. The baseline was set between 10 and 300 ms before stimulation. Three hundred responses were averaged. Peak latencies and amplitudes of the P100m responses were measured.
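As a rough illustration of the spectral analysis described above, the band powers and the relative alpha measure αp/(αp + βp) × 100 could be computed along these lines. This is a sketch only: the toy signal, function names and Welch-style segment averaging are our own choices, not the authors' code.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi, fft_size=1024, fft_step=512):
    """Mean spectral power in [f_lo, f_hi) Hz, averaged over
    Hanning-windowed segments (matching the paper's FFT size 1024,
    FFT step 512 and Hanning window settings)."""
    win = np.hanning(fft_size)
    freqs = np.fft.rfftfreq(fft_size, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    powers = []
    for start in range(0, len(signal) - fft_size + 1, fft_step):
        seg = signal[start:start + fft_size] * win
        spec = np.abs(np.fft.rfft(seg)) ** 2
        powers.append(spec[mask].mean())
    return float(np.mean(powers))

fs = 1000                          # sampling rate used in the study (Hz)
t = np.arange(20 * fs) / fs        # one 20 s epoch
# toy signal: 10 Hz alpha plus weaker 20 Hz beta plus noise (illustration only)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
     + 0.1 * rng.standard_normal(t.size))

alpha = band_power(x, fs, 7, 13)
beta = band_power(x, fs, 13, 30)
rel_alpha = alpha / (alpha + beta) * 100   # relative alpha power in %
```

With the stronger 10 Hz component, the absolute alpha power dominates and the relative alpha percentage lands well above 50%.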
Fig. 1 A representative subject in an experimental environment
C. Statistical Analysis
Alpha-band power from the spontaneous MEG data was analyzed using multivariate analysis of variance (SPSS 10.0 for Windows) across four conditions (rest, right pinch, left pinch and both pinch) and eight regions of the brain (left frontal, right frontal, left parietal, right parietal, left temporal, right temporal, left occipital and right occipital), before and after the tree-twig-induced short-duration tooth brushing exercise. When significant effects were identified, a Bonferroni post hoc multiple-comparison test was performed to identify the specific differences in the factors contributing to the variance found in the data. Each region of the head consists of 14 MEG channels, and the averaged values of all channels in each region are presented here. A paired t-test was done for the VEF before and after the brushing exercise. The significance level was set at P<0.05.

III. RESULTS
There was no significant difference in the latency (before 102 ms ± 7.65, after 102.8 ms ± 8.07) or amplitude (before 39.2 fT/cm ± 9.65, after 37.7 fT/cm ± 17.2) of the visual evoked fields before and after the Miswak-induced short-duration tooth brushing exercise. The equivalent current dipole (ECD) was located at the back of the head (occipital lobe, arrow, Fig. 2).
Fig. 2 P100m latency and amplitude (upper left for 'before' and lower left for 'after') and an equivalent current dipole (ECD, arrows, upper right for 'before' and lower right for 'after') computed at the peak of the P100m at the occipital area (contour map of VEF from a representative subject, right)
F. Reza et al.
There was a significant reduction of alpha-band power in the both-hand pinch grip in the left frontal (before 7.45 ± 8.25, after 2.8 ± 0.18, p = 0.022) and right frontal (before 4.65 ± 4.69, after 2.08 ± 0.33, p = 0.036) regions before versus after the Miswak-induced short-duration tooth brushing exercise. There is a clear trend of alpha power reduction in the left parietal region (before 6.18 ± 7.14, after 3.68 ± 1.9), though it does not reach significance (Fig. 3).
No significant changes were found in the other regions across conditions. On the other hand, there were no significant changes in relative alpha power (Fig. 5) in the both-hand pinch grip in the left frontal (before 70 ± 19.8, after 66.5 ± 12.9, p = 0.623), right frontal (before 70.4 ± 14.0, after 60.8 ± 3.16, p = 0.098) or left parietal (before 69.8 ± 15.5, after 69.1 ± 8.2, p = 0.918) regions before versus after the exercise.
Fig. 3 Power of the alpha frequency band (value × 10^-24 T/cm) from the total averaged data of all subjects at the left and right frontal and left parietal regions

A representative waveform between 1 and 30 Hz from one subject is presented in Fig. 4.

Fig. 4 A representative waveform between 1 and 30 Hz (x-axis; y-axis = power value × 10^-27 T/cm) from one subject, from the left frontal (upper), right frontal (middle) and left parietal (lower) areas, before (left) and after (right) the Miswak-induced exercise

Fig. 5 Relative power of the alpha frequency band (value in %) from the total averaged data of all subjects at the left and right frontal and left parietal regions
IV. DISCUSSION
In this study we investigated the effect of Miswak-induced exercise on a sensory modality, the visual evoked field, and the effect on brain oscillations after brief motor exercise. We chose the visual evoked field because it is said that after brushing the teeth with Miswak there is an improvement of vision, possibly arising from the physiological micromovement of the teeth and facial muscle exercise [4], which in turn would stimulate the extraocular muscles during brushing. We did not find any significant changes in the visual evoked field after the brushing exercise; however, this does not prove that there is no improvement in vision. Although the visual evoked field is not part of a routine clinical protocol for testing eyesight improvement, it is useful for localizing the visual cortex in patients with organic brain diseases before surgery, and our limited sample size should be considered. Another point is that the alpha frequency band is not phase-locked to the stimulus onset, so it might not be evident in the visual evoked field. The secondary goal of our study was to evaluate the alertness or arousal level of the human brain after short-duration exercise induced by the intermittent compression effect of the wooden tooth brush. We chose the wooden tooth brush because it has been used for centuries in some parts of the world with great value, it is inexpensive, and it is also
MEG/MRI compatible, and its bristle position facilitates exerting a firm force against the teeth. We found a significant reduction in the power of the alpha frequency band after the tooth brushing exercise. Kamijo et al. in 2004 documented, using spontaneous electroencephalography and contingent negative variation, that the arousal level is reduced after high-intensity exercise [5]. The arousal level at rest did not show significant changes before versus after brushing, but the subtle reduction in alertness became pronounced when the subjects provided extra effort by pinching their hands. This arousal reduction was not reflected in the relative alpha power. Noise artifacts may intermingle with the alpha power value and inflate it, but they would also affect the beta value. The Fast Fourier Transform (FFT) computes absolute power separately in each frequency band or bin, so an increase in one frequency or one-hertz bin has no consequence for any other frequency. In contrast, relative power refers to the percentage or proportion of total power within each frequency band; thus, a change in one narrow frequency band can alter the entire spectrum of relative power measures, which is less reliable in a neurofeedback arrangement [6].
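The distinction between absolute and relative power can be shown numerically: if only the alpha band changes, beta's absolute power is untouched, yet every relative-power percentage shifts. The numbers below are invented purely for illustration.

```python
# Hypothetical absolute band powers before and after alpha alone increases.
before = {"alpha": 4.0, "beta": 2.0}
after = {"alpha": 8.0, "beta": 2.0}   # only alpha changed

def relative(bands):
    """Relative power: each band as a percentage of the total."""
    total = sum(bands.values())
    return {k: 100.0 * v / total for k, v in bands.items()}

rel_before = relative(before)   # alpha ~66.7 %, beta ~33.3 %
rel_after = relative(after)     # alpha 80 %, beta 20 %
# beta's absolute power is identical before and after, but its relative
# share dropped -- the coupling that makes relative measures less
# reliable in a neurofeedback arrangement [6].
```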
V. CONCLUSIONS
The results of this pilot study suggest that the brain excitement level increased after the short-duration tooth brushing exercise and that the measurement of power, rather than relative power, is reliable in neurofeedback. This result gives some evidence of a potential effect of Miswak (Kayu Sugi) induced exercise, which might make it a reliable physical agent for the benefit of stroke and cognitive patients in a neurorehabilitation setting. Further study is needed to replicate these findings in larger healthy and neurological populations.

ACKNOWLEDGEMENT
This work was supported by an Incentive Grant of Universiti Sains Malaysia, 16150 Kubang Kerian, Kota Bharu, Kelantan, Malaysia.

REFERENCES
1. Cameron MH, Shirley K (1998) Physical agents in rehabilitation: From research to practice. WB Saunders Co, Chapters 1 & 8
2. Saito Y (1998) The origin of traditional oral hygiene practices and their modern development in Japan. Recent advances in clinical periodontology. Elsevier Science Publishers, Amsterdam
3. Mostehy EL, Hussei JM (1976) Dental caries and oral hygiene state among school children in Kuwait. Egypt Dent J 22(4):33-50
4. Faruque MR, Ikoma K, Mano Y (2004) Surface electromyography of neck, face and head muscles during alternating compression effect in brushing teeth with Miswak (a tree twig). Saudi J of Disability and Rehabilitation 10(3):214-217
5. Kamijo K, Nishihara Y, Hatta A, Kaneda T, Kida T, Higashiura T, Kuroiwa K (2004) Changes in arousal level by differential exercise intensity. Clinical Neurophysiology 115:2693-2698
6. Hammond DC (2006) Quantitative electroencephalography patterns associated with medical conditions. Biofeedback 34(3):87-94

Correspondence to:
Author: Faruque Reza
Institute: Universiti Sains Malaysia, Department of Neurosciences
Street: Jalan Sultanah Zainab 2
City: Kota Bharu
Country: Malaysia
E-mail: [email protected]
Change Point Detection of EEG Signals Based on Particle Swarm Optimization
M.F. Mohamed Saaid1, W.A.B. Wan Abas1, H. Aroff2, N. Mokhtar2, R. Ramli1, and Z. Ibrahim3
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
3 Department of Mechatronics and Robotics, Center for Artificial Intelligence and Robotics (CAIRO), Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
Abstract— This paper proposes change point detection for electroencephalogram (EEG) signal applications based on Particle Swarm Optimization (PSO). As the EEG signal is well known to be non-stationary in nature, we model the signal using a sinusoidal-Heaviside function, which is capable of representing the change in the behavior of the signal. The parameters of the model, together with the change point location, can be tuned by finding the minimum value of the sum of squared errors. It is shown that the minimum of the sum of squared errors in the parameter tuning gives the exact location of the change point. The proposed method is applied to human EEG during an eye-movement task. Keywords— Particle Swarm Optimization, EEG, sinusoidal, change point detection, non-stationary.
I. INTRODUCTION
Electroencephalography (EEG) records the variations of electrical fields in the cortex or on the surface of the scalp caused by the physiological activities of the brain. Over the years, many modern methods have emerged, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). However, the electroencephalogram (EEG) signal still plays a very important role in the diagnosis of brain function [1]. The EEG provides high temporal resolution with millisecond precision [2]. It has been implemented in diagnosing epilepsy [3], brain injury [4], and various sleep disorders [5]. Recently, Brain Computer Interfaces (BCI) have been developed, which translate the EEG signal to actuate a device [6].
Change point detection is a classical problem in signal processing; this tool helps us decide whether a change has occurred in the characteristics of the considered signal [7]. Various change point detection methods have been applied in EEG signal processing. An early-warning system for an epileptic seizure could give more time to treat the patient [8]. Detection algorithms can also be used to segment the different sleep stages measured by EEG [9]. A common practice in time series analysis is to assume that the time-varying structure of a time series can be well estimated by a piecewise stationary process [10], which can reduce the complexity of finding the change point.
In earlier work by Adak [10], the Tree-based Adaptive Segmented Spectrogram algorithm (TASS) was proposed, which recursively divides the signal into two short segments and compares the signal from the left side with the right side of the segmented signals. One limitation of the Adak method is that it is not easily extended to multivariate data. Davis et al. [11] proposed Auto-PARM, which uses the minimum description length (MDL) principle. Auto-PARM finds the "best" combination of the number of breakpoints, their locations and the AR orders of all stationary segments by minimizing the code length, which is computed according to general coding rules; a Genetic Algorithm (GA) was implemented to solve the MDL optimization problem. Ombao et al. [12] proposed smoothed localized complex exponentials (SLEX), where the time series is fitted to an orthogonal basis function. Like Adak, the SLEX model treats a series as piecewise stationary, where the pieces are determined by the choice of basis functions. In 2008, Last et al. [13] developed a method for detecting abrupt changes in the time-varying power spectrum of a series, assuming that the series is locally stationary between change points. However, they noted that the process is computationally expensive; for real-time use, they proposed that the power spectrum computation can be parallelized, independently of any window. They also showed the problem of a fixed window design, where some true change points may not be detected.
In this paper, we propose a change point detection algorithm in which the EEG signal is modeled by a sinusoidal function with a Heaviside function. The parameters of this model are estimated using the least squares approach, with Particle Swarm Optimization (PSO) [14] used to find all the parameters of the model, including the change point location. The proposed approach is used to detect only a single change in the EEG signal, in an offline manner.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 484–487, 2011. www.springerlink.com
II. PROPOSED MODEL
Referring to Figure 1, the EEG signal with a single change point is given by the parametric model

y[n] = f1[n]u[n] + (f2[n] − f1[n])u[n − τ] − f2[n]u[n − N] + ε[n]   (1)

where y[n] is the measured EEG signal, N is the length of the sampled signal, τ is the location of the change point, and ε[n] is the error. f1 and f2 are the sinusoidal functions given by

f1[n] = A1 sin(ω1 n + φ1) + B1   (2)
f2[n] = A2 sin(ω2 n + φ2) + B2   (3)

where A1 and A2 are amplitudes, B1 and B2 are DC offsets, ω1 and ω2 are angular frequencies, and φ1 and φ2 are the phases of the signals. The Heaviside unit step function u[n] is used to represent the change in the EEG signal. A smooth approximation of u[n] is given by

u[n] ≈ 1 / (1 + e^(−2kn))   (4)

where k is a constant that determines the transition speed of u[n]. To find the value of τ, all the other unknown parameters A1, A2, B1, B2, ω1, ω2, φ1 and φ2 can be estimated by minimizing the sum squared error function

SSE = Σ_{n=1}^{N} (x[n] − y[n])²   (5)

where x[n] = {x[1], x[2], x[3], …, x[N−1], x[N]} is the observation signal.
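Equations (1)-(5) can be sketched in code as follows; the parameter values are hypothetical, chosen only to exercise the model, and the function names are ours.

```python
import numpy as np

def u(n, k=1.0):
    """Smooth approximation of the Heaviside step, eq. (4).
    The argument is clipped to keep exp() from overflowing."""
    z = np.clip(2.0 * k * np.asarray(n, dtype=float), -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))

def model(n, A1, w1, p1, B1, A2, w2, p2, B2, tau, N, k=1.0):
    """Single-change-point signal of eqs. (1)-(3), without the noise term."""
    f1 = A1 * np.sin(w1 * n + p1) + B1
    f2 = A2 * np.sin(w2 * n + p2) + B2
    return f1 * u(n, k) + (f2 - f1) * u(n - tau, k) - f2 * u(n - N, k)

def sse(x, y):
    """Sum of squared errors, eq. (5)."""
    return float(np.sum((x - y) ** 2))

# Hypothetical parameters for illustration: a change at tau = 256
N = 512
n = np.arange(N)
x = model(n, A1=1.0, w1=0.2, p1=0.0, B1=0.0,
          A2=2.0, w2=0.5, p2=0.5, B2=0.1, tau=256, N=N)
# A perfect parameter fit gives zero SSE; any mismatch raises it.
```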
Fig. 1 EEG signal with a single change

III. PARTICLE SWARM OPTIMIZATION
PSO is an optimization technique developed by Kennedy and Eberhart in 1995 [17], based on the social behavior of groups such as fish schooling or bird flocking. A particle, representing a solution, remembers the best value it has found so far, called pbest. The best solution found by the group, selected from among the pbest values, is called gbest. Every solution can be described as a position in the search space, and each particle adapts its position using its current velocity and position according to:

vi^(k+1) = q·vi^k + c1·rand·(pbesti − si^k) + c2·rand·(gbest − si^k)   (6)
si^(k+1) = si^k + vi^(k+1)   (7)

where vi^k, vi^(k+1), si^k and si^(k+1) are the velocity and position vectors of particle i at generations k and k+1, respectively, q is the inertia weight, and c1 and c2 are the cognitive and social coefficients, respectively. The random parameter rand is applied to improve the convergence rate. The particle movement is illustrated in Figure 2. The implementation of PSO using m-dimensional particles can be described as follows [14]:
1. Randomly initialize the position vector si = [s1, s2, …, sm] and the velocity vector vi = [v1, v2, …, vm].
2. Calculate the velocity vector vi^(k+1) using equation (6).
3. Update the position vector si^(k+1) of particle i using equation (7).
4. If the fitness F(si^(k+1)) is better than F(pbesti), set pbesti to si^(k+1).
5. If F(pbesti) is better than F(gbest), set gbest to pbesti.
6. Stop the process if a good fitness is achieved; otherwise, go to step 2.

Fig. 2 Particle and velocity formulation [18]

IV. METHODOLOGY

A. EEG Recordings
A healthy subject (age range 24 to 25), with no known history of neurological disease, was studied. The EEG
signal for this study was recorded entirely on a g.MOBIlab+, an EEG amplifier from g.tec Guger Technologies, Austria. The signal was recorded at a sampling rate of 256 Hz. The recording session was conducted in an electrically unshielded room. The system was set up to record 4 referential channels located at C3, C4, O1, and O2 against a reference at Cz. The ground was located at the forehead of the subject. The electrodes were mounted on a g.EEGcap following the 10-20 electrode system. An abrasive skin-prepping gel and conductive gel were used to lower impedance and improve the electrical tracing. The scalp impedance was maintained under 10 kΩ. The EEG system was interfaced, via LabVIEW from National Instruments, with a computer running Windows 7 64-bit with 4 GB RAM and a 2.53 GHz Intel Core 2 Duo processor.
Fig. 3 The cue of the task, randomly given at 5 s of the trial

The subject was seated in a chair approximately 0.6 m from the monitor. The eye movement session consisted of eye-left and eye-right tasks only. Each task was recorded over a duration of 10 seconds. The cue was given to the participant at 5 seconds into the trial to start performing the assigned task (Figure 3). The participant was instructed to avoid
Fig. 4 Channel C3 for the eye-left task with the observed signal x[n], the estimated model y[n] and change point τ = 0.16

Fig. 5 Channel O2 for the eye-left task with the observed signal x[n] and the estimated model y[n]

Fig. 6 Convergence curve (SSE versus iteration k) of O2 for the eye-left task
blinking or moving their body during the recording session, to minimize the artifacts present in the signal.

B. PSO Implementation
In the proposed model, ten particles were used, randomly initialized. Each particle contains all the unknown parameters, si = {A1, A2, B1, B2, ω1, ω2, φ1, φ2, τ}. Equation (5) is used as the fitness function. The proposed model has been implemented using MATLAB R2010b, running on a 2.00 GHz Dual-Core AMD processor with 1.87 GB RAM.
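The PSO loop of Section III could be sketched as below. For self-containment a simple quadratic stand-in replaces the SSE fitness of equation (5), and the values of q, c1, c2 and the particle count are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(s):
    # Stand-in objective with optimum at (3, 3); in the paper this
    # would be the SSE of eq. (5) evaluated for the parameters in s.
    return float(np.sum((s - 3.0) ** 2))

m, n_particles, q, c1, c2 = 2, 10, 0.7, 1.5, 1.5
s = rng.uniform(-10, 10, (n_particles, m))   # step 1: random positions
v = rng.uniform(-1, 1, (n_particles, m))     # step 1: random velocities
pbest = s.copy()
pbest_f = np.array([fitness(p) for p in pbest])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1 = rng.random((n_particles, m))
    r2 = rng.random((n_particles, m))
    v = q * v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s)  # eq. (6)
    s = s + v                                                   # eq. (7)
    f = np.array([fitness(p) for p in s])
    improved = f < pbest_f                                      # step 4
    pbest[improved], pbest_f[improved] = s[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()                      # step 5
# gbest should now be close to the optimum (3, 3)
```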
V. RESULTS
Figures 4 and 5 show the EEG signals from C3 and O2, respectively. The dashed graph shows the estimation based on the proposed model, tuned by the PSO algorithm. Figure 6 shows the convergence curve of the PSO process for the O2 channel. The result for C3 is τ = 0.16, indicating that the algorithm fails to detect the change that occurred. Meanwhile, Figure 5 shows the closest estimation, recorded on the O2 channel, where τ = 1440.14. Based on the convergence curve, it takes more time to reach the gbest (minimum value).
VI. CONCLUSIONS
This paper studied a PSO-based change point detection algorithm, focusing only on a single change in the EEG signal. Based on the convergence curve, the PSO algorithm needs to be upgraded so that the detection can be fast and accurate enough to detect the change point, as in a real-time BCI application [6]. Furthermore, the proposed model can be modified to detect more change points instead of one.
ACKNOWLEDGMENT
This research is supported financially by the University of Malaya Postgraduate Research Grant (PPP PS078/2010B), the University of Malaya Research Grant (UMRG-025/09AET) and the Ministry of Science, Technology, and Innovation (ScienceFund MOSTI-13-02-03-3075). Muhammad Faiz Mohamed Saaid is indebted to the University of Malaya for financial support under the Bright Spark Scheme University of Malaya (SBSUM) and for the opportunity to do this research.

REFERENCES
1. Smith J, Jones M Jr, Houghton L et al. (1999) Future of health insurance. N Engl J Med 965:325-329
2. Tseng SY, Chen RC, Chong FC, Kuo TS et al. (1995) Evaluation of parametric methods in EEG signal analysis. Med Eng Phys 17(1):71-78
3. Debener S, Ullsperger M, Siegel M, Engel AK et al. (2006) Single-trial EEG-fMRI reveals the dynamics of cognitive function. Trends Cogn Sci 10(12):558-63
4. Subasi A (2006) Automatic detection of epileptic seizure using dynamic fuzzy neural networks. Expert Syst Appl 31:320-328
5. Shin HC, Jia X, Nickl R, Geocadin RG, Thakor NV et al. (2008) A subband-based information measure of EEG during brain injury and recovery after cardiac arrest. IEEE Trans Biomed Eng 55(8):1985-1990
6. Petit D, Gagnon JF, Fantini ML, Ferini-Strambi L, Montplaisir J et al. (2004) Sleep and quantitative EEG in neurodegenerative disorders. J Psychosom Res 56(5):487-496
7. Vaughan TM, Wolpaw JR, Donchin E et al. (1996) EEG-based communication: Prospects and problems. IEEE Transactions on Rehabilitation Engineering 4:425-430
8. Basseville M, Nikiforov IV (1993) Detection of Abrupt Change: Theory and Applications. Englewood Cliffs: Prentice Hall
9. Qin D (1995) A comparison of techniques for the prediction of epileptic seizures. 8th IEEE Symp. Computer-Based Medical Systems, Lubbock, TX, 1995
10. Piryatinska A, Terdik G, Woyczynski WA, Loparo KA, Scher MS, Zlotnik A et al. (2009) Automated detection of neonate EEG sleep stages. Computer Methods and Programs in Biomedicine 95:31-46
11. Adak S (1998) Time-dependent spectral analysis of nonstationary time series. J Amer Statist Assoc 93:1488-1501
12. Davis RA, Lee TCM, Rodriguez-Yam GA et al. (2006) Structural break estimation for nonstationary time series models. Journal of the American Statistical Association 101:223-239
13. Ombao H, Raz J, von Sachs R, Malow B et al. (2001) Automatic statistical analysis of bivariate non-stationary time series. JASA 96:543-560
14. Last M, Shumway R et al. (2008) Detecting abrupt changes in a piecewise locally stationary time series. Journal of Multivariate Analysis 99(2):191-214
15. Kennedy J, Eberhart RC (1995) Particle Swarm Optimization. Proceedings of the IEEE International Conference on Neural Networks, IV, Perth, Australia. Piscataway, NJ: IEEE Service Center, pp 1942-1948

Address of the corresponding author:
Author: Muhammad Faiz Mohamed Saaid
Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Comparative Analysis of the Optimal Performance Evaluation for Motor Imagery Based EEG-Brain Computer Interface Y.S. Ryu, Y.B. Lee, C.G. Lee, B.W. Lee, J.K. Kim, and M.H. Lee Department of Electrical & Electronic Engineering, Yonsei University, Seoul, Republic of Korea
Abstract— The purpose of this paper is to evaluate the performance of an EEG-BCI (brain-computer interface) based on motor imagery characteristics for a ubiquitous health service; brain-computer interface technology refers to techniques for controlling an external device using brain signals without other expressions. The EEG-BCI algorithm used in this paper is composed of a common spatial pattern (CSP) stage and a least-squares linear classifier. The CSP is used to obtain the characteristics of event-related desynchronization, and the least-squares linear classifier classifies the motor imagery EEG data as left hand or right hand. The effect of each performance factor is important for evaluating the optimal performance of a motor-imagery-based EEG-BCI algorithm. There are five performance factors: the EEG mode, feature calculation, selected CSP channel number, selected classifier and window size. Keywords— Comparative analysis, Optimal performance, Motor imagery, EEG, Brain computer interface.
I. INTRODUCTION
Brain-computer interface technology uses signals from the brain to control external devices. BCI consists of the invasive type and the non-invasive type. The non-invasive type has several advantages, such as its straightforward usability and the ease with which brain signals can be obtained. Non-invasive BCI measures the EEG from the scalp and evaluates the mental activity of human subjects. The key point is the connection between the characteristics of the EEG and mental activity [1].

II. METHODS

A. Experimental Setup
For an accuracy evaluation of the proposed algorithm, a BCI experiment was performed with 10 subjects. The 10 subjects in the second set of experiments were selected from the first experiments; they understood the experimental methods and process, as they had obtained some measure of BCI experience. The mental task was motor imagery of the left or right arm. The experimental protocol consisted of a black screen (2 sec), a fixation cross (2 sec), and a visual cue (left/right motor imagery) (4 sec). One session had 50 trials, and there were a total of three sessions. The EEG readout was measured using a PolyG-A (Laxtha, Daejeon, Korea) with 16 channels (F3, Fz, F4, FC1, FCz, FC2, C3, C1, Cz, C2, C4, CP1, CP2, P3, Pz and P4) plus 1 ground channel and 1 reference channel [2]. The subjects had no mental illness and were on no medications at the time of the study. The experimental environment was isolated by a curtain, and the subjects concentrated on the monitor and the speaker to detect the visual and auditory cues. Subjects did not move during the experiment. The experiment manager monitored the subject's activity and environmental changes. The motor imagery of the subjects was the imagination of moving the left arm or the right arm [3].

B. Effect of the Performance Factor
The effect of the performance factor is important in evaluations of the optimal performance of motor-imagery-based EEG-BCI algorithms. There are five types of performance factors, as shown in Table 1: the EEG mode, feature calculation, selected CSP channel number, selected classifier and the window size.

Table 1 Effect of the Performance Factor

No | Factor | Selection
1 | EEG mode | ERD:1, RP:2
2 | Feature Calculation | Log (var):1, mean:2, sum (abs):3, log (power):4
3 | Selected Channels | 2, 4, 8, 16
4 | Selected Classifier | LDA:1, LS:2, SVM:3
5 | Window Size | 3, 2, 1

The EEG mode has two selections, ERD and RP. Feature calculation has the four selections of log (var), mean, sum (abs) and log (power). The CSP channel number has the four selections of 2, 4, 8
and 16 channels. The classifier has three selections: LDA, LS and SVM. There are three window sizes: 3 sec, 2 sec and 1 sec [4].
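A minimal sketch of the CSP plus log-variance feature step named in the abstract, assuming the standard generalized-eigenvalue formulation of CSP; the toy data and function names are ours, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns for two classes.
    trials_* have shape (n_trials, n_channels, n_samples);
    returns 2*n_pairs spatial filters as rows."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized symmetric eigenproblem Ca w = lam (Ca + Cb) w,
    # eigenvalues returned in ascending order
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

# Toy two-class data: class A has more variance on channel 0, class B on channel 1
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 2, 500)) * np.array([3.0, 1.0])[None, :, None]
b = rng.standard_normal((20, 2, 500)) * np.array([1.0, 3.0])[None, :, None]
W = csp_filters(a, b)
# log-variance features of the projected trials, as in the paper's
# 'log (var)' feature calculation
feat = [np.log(np.var(W @ t, axis=1)) for t in a]
```

These features would then feed the chosen classifier (LDA, LS or SVM).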
III. RESULTS The effect of ERD/RP is shown in Fig. 1. For individual subjects, subjects 1, 2, 8 and 10 showed a considerable difference between the ERD and RP. Regarding the total average value, the ERD average value was larger than the RP average value.
Fig. 2 Effect of the Feature Calculation
Fig. 1 Effect of the ERD/RP

The effect of the feature calculation is shown in Fig. 2. For individual subjects, subjects 1, 2, 8 and 10 showed a considerable difference between each feature. In particular, the mean value is much smaller than the other features. Regarding the total average value, the mean average value is smaller than the average values of the other features. The effect of the selected CSP channel is shown in Fig. 3. For individual subjects, subjects 2, 8 and 10 showed little difference between each channel, whereas the other cases showed a specific channel value that was high or low. For the total average value, as the channel number increased, the average value decreased.
Fig. 3 Effect of the Selected CSP Channel

The effect of the classifier is shown in Fig. 4. For individual subjects, subjects 1 and 8 showed approximately 70-80% accuracy, while the other cases were generally below 60% accuracy. Regarding the total average value,
LDA showed maximum accuracy, followed by LS and lastly the SVM classifier. The effect of the window size is shown in Fig. 5. For individual subjects, subjects 1 and 8 showed on average 70-80% accuracy, subject 4 showed slightly more than 70% accuracy, and the other cases showed mainly less than 60% accuracy. For the total average value, a window size of 2 seconds showed maximum accuracy, followed by a window size of 3 seconds and lastly a window size of 1 second. In a comparative evaluation of the five performance factors, a t-test and a correlation analysis were conducted. In the EEG mode, the p-value was 0.11, but the correlation value was 0.23, which is very low. For the features, there were in general high statistical significances, but the correlation values showed some contrast. For the CSP filter coefficient, between 4 and 16 coefficients there was high statistical significance (P=0.03). For the classifier, in the cases of LDA and SVM, the p-value was 0.12, a relatively low value. For the window size, between the 2-second and 1-second windows, the p-value was 0.09 [5].
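The per-factor comparison described above amounts to a paired test (and a correlation) over per-subject accuracies under two settings. The sketch below uses invented accuracy values purely for illustration; they are not the study's data.

```python
from scipy import stats

# Hypothetical per-subject accuracies under two settings of one factor
# (e.g. ERD vs. RP); the numbers are fabricated for illustration.
acc_erd = [0.78, 0.74, 0.55, 0.71, 0.58, 0.52, 0.56, 0.80, 0.54, 0.75]
acc_rp  = [0.62, 0.60, 0.53, 0.66, 0.55, 0.50, 0.54, 0.65, 0.52, 0.60]

# Paired t-test: each subject is measured under both settings
t_stat, p_value = stats.ttest_rel(acc_erd, acc_rp)

# Pearson correlation of accuracies across subjects
r = stats.pearsonr(acc_erd, acc_rp)[0]
```

A small p-value would indicate the factor setting matters, while the correlation shows whether subjects rank similarly under both settings.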
IV. CONCLUSIONS Fig. 4 Effect of the Classifier
Through a comparative analysis using the five performance factors, the optimal performance evaluation conditions for each subject were determined and the average conditions for maximum performance were obtained. In case of the EEG mode, ERD shows a 15% increase in accuracy compared to RP. In the case of feature calculation, the three features show the same performance without a ‘mean’ feature. In the case of the selected CSP channel, ‘2 channel’ shows maximum performance meaning a 9% accuracy increase from the worst case, which was ‘16 channel’. For the classifier, the LDA algorithm shows maximum performance; the worst case was the SVM algorithm, meaning that LDA shows a 3% accuracy increase. In the case of the window size, a two-second window shows maximum performance, with the worst case being the one-second window, meaning that the two-second window shows a 5% accuracy increase. In summary, the average conditions for the maximum performance are the ERD mode, three features without a ‘mean’, the two-CSP channel, the LDA algorithm and a two-second window size [6].
ACKNOWLEDGMENT
Fig. 5 Effect of the Window Size
This work was supported by the Korea Food and Drug Administration [08142-Medical Device-370, Development of Brain-Computer Interface Medical Instrument Evaluation Technology based on EEG for a Neural Prosthesis, 2008.06 ~ 2009.04].

IFMBE Proceedings Vol. 35
Comparative Analysis of the Optimal Performance Evaluation for Motor Imagery Based EEG-Brain Computer Interface
REFERENCES 1. Dornhege G, Millán J. del R, Hinterberger T, McFarland D, and Müller K.-R, editors (2006) Towards Brain-Computer Interfacing, MIT Press 2. Pfurtscheller G, Lopes da Silva FH (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles, Clinical Neurophysiology, vol. 110, Issue 11, Nov. 1999, pp. 1842-1857. 3. Ramoser H, Muller-Gerking J, Pfurtscheller G (2000) Optimal spatial filtering of single trial EEG during imagined hand movement, IEEE Transactions on Rehabilitation Engineering, vol. 8, Issue 4, Dec. 2000, pp. 441-446. 4. Yijun Wang, Shangkai Gao, Xiaorong Gao (2005) Common Spatial Pattern Method for Channel Selection in Motor Imagery Based Brain-Computer Interface, IEEE-EMBS 2005. 27th Annual International Conference of the Engineering in Medicine and Biology Society, 17-18 Jan. 2006, pp. 5392-5395.
5. Guger C, Ramoser H, and Pfurtscheller G (2000) Real-Time EEG Analysis with Subject-Specific Spatial Patterns for a Brain-Computer Interface (BCI), IEEE Transactions on Rehabilitation Engineering, vol. 8, No. 4, Dec. 2000, pp. 447-456 6. Guger C, Edlinger G, Harkam W, Niedermayer I, Pfurtscheller G (2003) How Many People are Able to Operate an EEG-Based Brain-Computer Interface (BCI)?, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, Issue 2, Jun. 2003, pp. 145-147.
Author: Myoungho Lee Institute: Department of Electrical & Electronic Engineering, Yonsei University Street: Shinchondong, Seodaemungu City: Seoul Country: Republic of Korea Email: [email protected]
Comparison of Influences on P300 Latency in the Case of Stimulating Supramarginal Gyrus and Dorsolateral Prefrontal Cortex by rTMS

T. Torii1, K. Nojima2, A. Matsunaga2, M. Iwahashi1, and K. Iramina2

1 Tohwa University/ Department of Medical Electronics Engineering, Fukuoka, Japan
2 Kyushu University/ Graduate School of Systems Life Science, Fukuoka, Japan
Abstract— The purpose of this study is to investigate the influence on P300 latency of stimulating the SMG (supramarginal gyrus) and the DLPFC (dorsolateral prefrontal cortex). The P300 component of the ERP (event-related potential) is induced by an Odd-Ball task composed of two sounds of different frequencies. The Odd-Ball task was executed before and after magnetic stimulation by rTMS (100 stimuli at 1 Hz), and we compared the P300 latency before and after the rTMS magnetic stimulation. When the left or right SMG was stimulated, P300 appeared earlier than without stimulation; when the left or right DLPFC was stimulated, P300 appeared later than without stimulation. Keywords— rTMS, supramarginal gyrus, dorsolateral prefrontal cortex, ERP, P300.
I. INTRODUCTION

TMS (transcranial magnetic stimulation) and rTMS (repetitive transcranial magnetic stimulation) are important tools for neurological therapy [1], [2] and for the study of brain function. TMS is a noninvasive method that enables direct stimulation of the brain [3], [4]. The combination of TMS with functional brain imaging techniques such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and electroencephalography (EEG) has become an effective tool for studying the dynamics of the human brain, since it enables manipulating and measuring cortical activity simultaneously. However, it is difficult to attain high temporal resolution by combining TMS with PET or fMRI; these methods have a temporal resolution worse than 100 ms [5]-[11]. Many studies combining TMS with EEG have been reported [12]-[20], including a report on the influence of TMS on P300 [20]. In that paper, the P300 latency was delayed when TMS was applied to the left SMG 200 ms or 250 ms after the Odd-Ball sound stimulation. However, no report has investigated the effect on P300 latency of stimulating the right SMG, the left DLPFC or the right DLPFC.
Therefore, we investigated the effect on P300 latency when the Odd-Ball task was applied before and after rTMS. The rTMS stimulation points were not only the left SMG but also the left DLPFC, the right SMG and the right DLPFC.
II. MATERIALS AND METHODS

Fig. 1 shows the measurement system adopted in this study. The stimulus sounds were generated by STIM2 (Neuroscan). STIM2 outputs a trigger and a stimulation sound of 1 kHz or 2 kHz, which the subject hears through earphones. The EEG was measured by BIOAMP, which starts the measurement on a trigger signal from STIM2; the EEG is recorded on a PC. The high-pass filter was off, the low-pass filter was set to 50 Hz, and the hum (notch) filter was on. The EEG was measured at Fz, Cz and Pz of the international ten-twenty system, and each electrode contact impedance was kept below five kiloohms. Data were recorded for 500 ms from the start of the sound stimulation; the sampling frequency was 1,000 Hz and 20 trials were averaged. The data were processed by a band-pass digital filter at 0.5-50 Hz. We measured not only the EEG but also the reaction time when the target sound was presented; the reaction time was registered by a click of the left mouse button.
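The recording scheme above (0.5-50 Hz band-pass, 500 ms epochs from each trigger at 1,000 Hz, 20-trial average) can be sketched as follows. This is a hedged illustration: the filter order and array layout are assumptions, and average_erp is a hypothetical helper, not part of the authors' system.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def average_erp(eeg, trigger_idx, fs=1000, win_ms=500, band=(0.5, 50.0)):
    """Band-pass filter one EEG channel and average fixed-length epochs.

    eeg: 1-D signal; trigger_idx: sample indices of stimulus onsets.
    Returns the averaged win_ms-long epoch (the paper averages 20 trials).
    """
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg)  # zero-phase, so component latencies are preserved
    n = int(fs * win_ms / 1000)
    epochs = np.stack([filtered[t:t + n] for t in trigger_idx
                       if t + n <= len(filtered)])
    return epochs.mean(axis=0)
```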
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 492–495, 2011. www.springerlink.com
Fig. 1 Measurement system
Fig. 2 Flowchart of the experiment. The white bars signify 1 kHz (non-target stimulation sound), and the black bars signify 2 kHz (target stimulation sound).

Fig. 3 Stimulus points. The magnetic stimulation points of rTMS are four; this figure shows the positions of the coil.

Fig. 2 shows the procedure of the task. Phase I in this figure signifies the Odd-Ball task that serves as the control, and Phase II signifies the magnetic stimulation by rTMS. The magnetic stimulation device is a figure-eight coil (70 mm diameter) of the Magstim Super Rapid. In Phase III, we execute the Odd-Ball task immediately after the magnetic stimulation of Phase II. The stimulation sound of the Odd-Ball task is the same before and after the magnetic stimulation. The Odd-Ball task consists of two sounds of different frequency, 1 kHz (non-target / presentation probability 80%) and 2 kHz (target / presentation probability 20%). The stimulation sounds are presented at random. Each stimulation sound is a burst wave with a duration of 50 ms; the interval between stimulation sounds is 2500 ms, and the sound pressure is 60 dB. The stimulation points of rTMS are four (Fig. 3): the right and left DLPFC (dorsolateral prefrontal cortex) and the right and left SMG (supramarginal gyrus). The strength of the magnetic stimulation is set to 80% of the subject's motor threshold. The rTMS is low-frequency magnetic stimulation at 1 Hz and is executed after the control Odd-Ball task. The number of magnetic stimuli is 100, and the duration of each magnetic stimulus is 2 ms.
III. RESULT

We analyzed the measurement data of seven subjects. Fig. 4 shows the EEG at Cz for one subject. The P300 latency is normalized by the control P300 latency. We compared the P300 latencies induced in the Odd-Ball task before and after the magnetic stimulation: the P300 latency after magnetic stimulation of the SMG was earlier than before the stimulation, while the P300 latency after magnetic stimulation of the DLPFC was later than before the stimulation. Fig. 5 shows the significant differences between the P300 latencies induced at Cz before and after the magnetic stimulation; the latency after rTMS is normalized by the latency before rTMS. When the SMG was stimulated, the P300 latency was earlier after the magnetic stimulation; when the DLPFC was stimulated, it was later. A significant difference (p<0.05) was confirmed in all cases. When the DLPFCs were stimulated, a delay of the P300 latency was observed at Fz, Cz and Pz; conversely, the P300 latency became earlier when the SMGs were stimulated. Fig. 6 shows the reaction time for the target stimulation, and Fig. 7 shows the means of the normalized P300 latency and reaction time; the reaction time after rTMS is normalized by the reaction time before rTMS. No remarkable increase or decrease of the reaction time was observed after the magnetic stimulation, whereas the P300 latency after the magnetic stimulation depended on the stimulation point. Thus the reaction-time result differed from the P300-latency result.
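The comparison above (post-stimulation latency normalized by the control latency, significance at p<0.05) can be sketched as a paired test across subjects. The exact statistical procedure is not spelled out in the text, so the paired t-test below is an assumption, and the helper name is hypothetical.

```python
import numpy as np
from scipy.stats import ttest_rel

def latency_change(pre_ms, post_ms):
    """Mean normalized P300 latency and paired-test p-value.

    pre_ms, post_ms: per-subject latencies before/after rTMS (ms).
    A normalized mean > 1 means P300 appeared later, < 1 earlier.
    """
    pre = np.asarray(pre_ms, dtype=float)
    post = np.asarray(post_ms, dtype=float)
    normalized = post / pre
    _, p = ttest_rel(pre, post)
    return normalized.mean(), p
```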
Fig. 6 Comparison of the normalized reaction time before and after the rTMS stimulation
Fig. 4 Electroencephalogram on Cz of Subject C
Fig. 7 Means of the normalized P300 latency and reaction time *: significant (p<0.05)
IV. DISCUSSION
Fig. 5 Means and standard deviations of the normalized P300 latency. In every case a significant difference (p<0.05) is observed. *: significant (p<0.05)
The effect of rTMS is classified into two types according to the frequency of the magnetic stimulation: low-frequency stimulation of 1 Hz or less inhibits cortical excitability, while high-frequency stimulation of more than 1 Hz promotes it [21]. With low-frequency magnetic stimulation of 0.4 Hz, a delay of the P300 latency has been observed [20]. We applied low-frequency magnetic stimulation of 1 Hz, so an inhibitory effect was expected. Therefore, it was estimated that the P300 latency
after the magnetic stimulation would be delayed. In the case of stimulating the DLPFC, a delay of the P300 latency did occur. In the case of stimulating the SMG, however, the result was opposite to the previous report and to our expectation. We considered two possible reasons for this difference. First, the response may differ with the stimulation area. Second, the frequency of the magnetic stimulation may be the problem: in this study it was exactly 1 Hz, which lies at the border between low and high frequency. Therefore, we think that the frequency of the magnetic stimulation should be varied in future work.
REFERENCES 1. Siebner HR, Rossmeier C, Mentschel C, et al. (2000) Short-term motor improvement after sub-threshold 5-Hz repetitive transcranial magnetic stimulation of the primary motor hand area in Parkinson's disease. J Neurol Sci 178: 91-94 2. Pascual-Leone A, Rubio B, Pallardo F, et al. (1996) Rapid-rate transcranial magnetic stimulation of left dorsolateral prefrontal cortex in drug-resistant depression. Lancet 348: 233-237 3. S. Komssi, et al. (2002) Ipsi- and contralateral EEG reactions to transcranial magnetic stimulation. Clinical Neurophysiology 113: 175-184 4. A.T. Barker, R. Jalinous and I. L. Freeston (1985) Non-invasive magnetic stimulation of human motor cortex. Lancet: 1106-1107 5. R. J. Ilmoniemi, J. Virtanen, J. Ruohonen, et al (1997) Neuronal responses to magnetic stimulation reveal cortical reactivity and connectivity. NeuroReport 8: 3537-3540 6. H. Tiitinen, J. Virtanen, R. J. Ilmoniemi, J. Kamppuri, M. Ollikainen, J. Ruohonen and R. Näätänen (1999) Separation of contamination caused by coil clicks from responses elicited by transcranial magnetic stimulation. Clin Neurophysiol 110(5): 982-985 7. V. Nikouline, J. Ruohonen, J. Ilmoniemi (1999) The role of the coil click in TMS assessed with simultaneous EEG. Clinical Neurophysiology 110: 1325-1328 8. M. Schürmann, V. V. Nikouline, S. Soljanlahti, M. Ollikainen, E. Basar and R. J. Ilmoniemi (2001) EEG responses to combined somatosensory and transcranial magnetic stimulation. Clinical Neurophysiology 112: 19-24 9. T. Paus, R. Jech, C. J. Thompson, R. Comeau, T. Peters, and A. C. Evans (1997) Transcranial magnetic stimulation during positron emission tomography: a new method for studying connectivity of the human cerebral cortex. J. Neurosci 17: 3178-3184 10. P. Fox, R. Ingham, M. S. George, H. Mayberg, J. Ingham, J. Roby, C. Martin, and P. Jerabek (1997) Imaging human intra-cerebral connectivity by PET during TMS. NeuroReport 8: 2787-2791
11. D. E. Bohning, A. Shastri, Z. Nahas, J. P. Lorberbaum, S. W. Anderson, W. R. Dannels, E. U. Haxthause, D. J. Vincent, and M. S. George (1998) Echoplanar BOLD fMRI of brain activation induced by concurrent transcranial magnetic stimulation. Invest. Radiol 33: 336-340 12. EM. Wassermann (1998) Risk and safety of repetitive transcranial magnetic stimulation: report and suggested guidelines from the international workshop on the safety of repetitive transcranial magnetic stimulation. Electroencephalogr Clin Neurophysiol 108: 1-16 13. K. Iramina, T. Maeno, Y. Kowatari and S. Ueno (2002) Effects of Transcranial Magnetic Stimulation on EEG activity. IEEE Trans. Magn. 38: 3347-3349 14. K. Iramina, T. Maeno, Y. Nonaka and S. Ueno (2003) Measurement of evoked EEG induced by transcranial magnetic stimulation. J. Appl. Phys. 93: 6718-6720 15. T. Paus, P. K. Sipila and A. P. Strafella (2001) Synchronization of neuronal activity in the human primary cortex by transcranial magnetic stimulation: an EEG study. J Neurophysiol 86: 1983-1990 16. S. Komssi, S. Kähkönen and R. J. Ilmoniemi (2004) The effect of stimulus intensity on brain responses evoked by transcranial magnetic stimulation. Hum Brain Mapp 21: 154-164 17. S. Kähkönen, S. Komssi, J. Wilenius and R. J. Ilmoniemi (2005) Prefrontal transcranial magnetic stimulation produces intensity-dependent EEG responses in humans. Neuroimage 24: 955-960 18. G. Thut, JR. Ives, F. Kampmann, MA. Pastor, A. Pascual-Leone (2005) A new device and protocol for combining TMS and online recordings of EEG and evoked potentials. J Neurosci Methods: 141 19. M. Iwahashi, T. Arimatsu, S. Ueno and K. Iramina (2008) Differences in evoked EEG by Transcranial Magnetic Stimulation at various stimulus points on the Head. IEEE EMBS 30th: 2570-2573 20. M. Iwahashi, Y. Katayama, S. Ueno and K. Iramina (2009) Effect of transcranial magnetic stimulation on P300 event-related potential.
Annual International Conference of IEEE EMBS 31st: 1359-1362 21. A. Pascual-Leone, JM. Tormos, JP. Keenan, F. Tarazona, C. Canete and MD. Catala (1998) Study and modulation of cortical excitability with transcranial magnetic stimulation. J Clin Neurophysiol 15: 333343
Author: Tetsuya Torii Institute: Department of Medical Electronics Engineering, Tohwa University Street: 1-1-1 Chikushigaoka, Minami-ku City: Fukuoka Country: Japan Email: [email protected]
Cortical Connectivity during Isometric Contraction with Concurrent Visual Processing by Partial Directed Coherence

S.N. Ramli1, N.M. Safri1,*, R. Sudirman1, N.H. Mahmood, M.A. Othman1, and J. Yunus2

1 Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor, Malaysia
[email protected]
2 Faculty of Biomedical Engineering and Health Science, Universiti Teknologi Malaysia, 81310 Johor, Malaysia
* Corresponding author.
Abstract— Previously, the coherence function has been applied to investigate the effects of external visual stimulation on cortico-muscular synchronization. It was shown that brain regions with enhanced 8-12 Hz (alpha) / decreased 31-50 Hz (gamma) power are strongly coupled, and that this coupling serves as a code of brain functional activity. However, that cortical connectivity analysis used ordinary coherence, which does not provide the directional flow of information in the brain. In this paper, directional cortical connectivity in the brain was investigated with the partial directed coherence method. Two tasks were investigated, a Control task and a Count Visual Stimuli task. The Control task required the subject to maintain a first dorsal interosseous muscle contraction without visual stimulation. In the Count Visual task, the subject was asked to count certain stimuli in a random series of visual stimuli displayed on a screen while maintaining the muscle contraction. The results showed that in the Control task most of the sources of the information pathways were in the frontal region, whereas in the Count Visual task most of the sources were in the occipital, temporal and parietal regions, with the pathways ending at the central or frontal regions. These results provide information on the cortical functional connectivity during motor action with competing visual processing. In conclusion, the frontal region is the source of information during the motor task alone, while the occipital, temporal and parietal regions are the sources for the concurrent motor task with competing visual processing. Keywords— Partial Directed Coherence, Motor Control, Isometric Contraction, Visual Processing.
I. INTRODUCTION

The problem of determining functional connectivity in the brain has long been at the center of attention of neurophysiologists, since it is crucial for understanding the information processing mechanisms of the brain [1]. Understanding mental processes requires information not only on localization but also on the mutual relations between the activated structures. Methods based on the estimation of correlation or coherence functions between the activities of pairs of simultaneously analyzed structures have
been the most popular approaches. In fact, little attention has been given to the evolution of the concept of coherence, which spans ordinary coherence, directed coherence and partial directed coherence. Ordinary coherence focuses on the structures themselves and the mutual synchrony of their activity. Directed coherence, rather than describing mutual synchrony only, also indicates whether and how two structures under study are functionally connected in a given scenario of brain processing. Directed coherence can, moreover, be generalized to the simultaneous analysis of more than just pairs of neural structures; it distributes the signal power of one source over many possible alternative pathways. Partial directed coherence, finally, measures the relative strength of the direct pairwise interactions, providing the direct structural information flow between two structures. The effect of external visual stimulation on cortico-muscular synchronization has been studied, and it was shown that enhanced 8-12 Hz (alpha) / decreased 31-50 Hz (gamma) oscillations in the brain affect the strength of the 13-30 Hz oscillation of cortico-muscular synchronization [2]. Recent research using ordinary coherence analysis shows that regions with enhanced / decreased power are strongly coupled, and that this coupling serves as a code of brain functional activity [3]. These studies quantified the synchronization between electroencephalogram (EEG) signals as a measure of the degree of interaction between two EEG signals. However, such interactions do not indicate the direction of information flow between brain regions, and the source and the end point of the flow remain unknown. The present study is a continuation of the previous work.
In this paper, we delineate the information pathways, i.e., the direction of human brain activity, during a motor task with concurrent visual stimulation using the partial directed coherence (PDC) method.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 496–499, 2011. www.springerlink.com
II. PARTIAL DIRECTED COHERENCE

PDC introduces a frequency-domain approach to describing the relationships (direction of information flow) between multivariate time series, based on a multivariate autoregressive model. The concept reflects a frequency-domain representation of Granger causality. By definition, an observed time series x_j(n) Granger-causes another series x_i(n) if knowledge of the past of x_j(n) significantly improves the prediction of x_i(n) [4]. Granger causality can be formulated in terms of multivariate vector autoregressive processes. An M-dimensional vector autoregressive process VAR[p] of order p can be written as in equation (1) [5]:

X(t) = \sum_{n=1}^{p} a_n X(t-n) + \varepsilon(t)   (1)

with p coefficient matrices a_n, n = 1, 2, ..., p, each of dimension M x M. The term \varepsilon(t) represents an M-dimensional Gaussian white noise process with covariance matrix \Sigma, i.e., \varepsilon(t) \sim N(0, \Sigma). The off-diagonal matrix elements a_{ij,n}, i, j = 1, ..., M, of the linear VAR model contain the information about Granger-causal interactions between the components of the multivariate process. For instance, a_{ij,n} \neq 0 describes a Granger-causal influence of X_j at time lag n on the present value of X_i, i.e., X_j \to X_i. In various multivariate systems the information flow between subsystems is limited to distinct frequency bands. For such systems it is reasonable to transfer the concept of Granger causality to the frequency domain. Granger-causal relationships can be detected in the frequency domain by means of the partial directed coherence, which is based on modeling the investigated multivariate system by vector autoregressive processes [4]. PDC is therefore estimated according to (3) with the condition in (2), while the coefficient matrices a_{ij}(r) are estimated by fitting a VAR model of order p as in (1):

\bar{A}_{ij}(f) = \begin{cases} 1 - \sum_{r=1}^{p} a_{ij}(r)\, e^{-i 2\pi f r}, & i = j \\ -\sum_{r=1}^{p} a_{ij}(r)\, e^{-i 2\pi f r}, & \text{otherwise} \end{cases}   (2)

\pi_{ij}(f) = \frac{\left| \bar{A}_{ij}(f) \right|}{\sqrt{\sum_{k=1}^{M} \left| \bar{A}_{kj}(f) \right|^{2}}}   (3)
Remark 1: PDC is normalized with respect to the structure that sends the signal, whereas directed coherence is normalized to the structure that receives the signal. Remark 2: When i = j, the PDC function represents how much a structure's own past couples to its present state.
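Equations (2) and (3) translate directly into code. The sketch below computes PDC on a grid of normalized frequencies from given VAR coefficient matrices; the function name and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pdc(coeffs, n_freqs=64):
    """Partial directed coherence from VAR coefficients, eqs. (2)-(3).

    coeffs: (p, M, M) array with coeffs[r][i, j] = a_ij(r+1), the
    influence of channel j at lag r+1 on channel i. Returns an
    (n_freqs, M, M) array with pi[f, i, j] = PDC from j to i at
    normalized frequency f in [0, 0.5].
    """
    p, M, _ = coeffs.shape
    freqs = np.linspace(0.0, 0.5, n_freqs)
    A_bar = np.zeros((n_freqs, M, M), dtype=complex)
    for fi, f in enumerate(freqs):
        A_f = np.eye(M, dtype=complex)  # the "1 if i = j" part of eq. (2)
        for r in range(p):
            A_f -= coeffs[r] * np.exp(-2j * np.pi * f * (r + 1))
        A_bar[fi] = A_f
    # eq. (3): normalize each source column j by its total outflow
    denom = np.sqrt((np.abs(A_bar) ** 2).sum(axis=1, keepdims=True))
    return np.abs(A_bar) / denom
```

By construction each source column satisfies the normalization property stated in Remark 1: the squared PDC values it sends sum to one at every frequency.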
The PDC is normalized to [0, 1]. In contrast to coherence and partial coherence it is an asymmetric measure, which makes it possible to detect the direction of influences. A direct Granger-causal influence from process X_j to X_i is inferred from a nonzero value \pi_{ij}(f) > 0. Since VAR processes are by construction linear, strictly speaking, it is linear Granger causality that is inferred by PDC.

III. METHODOLOGY

The whole process of computing PDC from the EEG signals is shown in Fig. 1. Several steps are taken before the data are analyzed.

Fig. 1 The block diagram of the general processing of the EEG signals: the control-task and visual-task EEG pass through the channel function, the autoregressive coefficient function, the partial directed coherence function and the Gnuplot function

A. Data Collection
For this experiment, data were collected from a previous project. The brain signals (EEG) and the muscle signals (electromyogram, EMG) were recorded from ten normal subjects. EEG was recorded with an averaged reference from 19 surface electrodes mounted in a cap (Electro-Cap International, Inc., Eaton, OH) according to the international 10-20 electrode placement method. Two sets of experimental data were used, corresponding to the two conditions below.

Control task: A monitor screen was placed 1 meter in front of the subjects. Subjects were instructed to fix their eyes on the center of the screen and simultaneously perform an isometric contraction of the first dorsal interosseous (FDI) muscle. No image was displayed on the screen during the entire motor action (black screen). This control condition was performed before each "count visual stimulus" (Count Visual) condition.

Count condition of the visual task: Two visual stimuli, an O-mark (circle) and an X-mark (cross), were displayed randomly on the screen at 1-s intervals. Each stimulus appeared at the center of the screen for 300 ms with the same size and brightness (70-74 cd/m2). Subjects were instructed to fix their eyes on the center of the screen and silently count the occurrences of the circle stimulus while maintaining the muscle contraction. Subjects were asked to report the counted number at the end of the condition.

B. Data Analysis

The partial directed coherence method was used for the analysis of EEG-EEG coherence because it reveals the information pathways, i.e., the direction of human brain activity, during the two tasks explained in the previous section. A single input file contained 32 channels of data: 1 channel of EMG data, 19 channels of EEG data and 12 channels of other data. Only the data from the 19 EEG channels were used, averaged per channel. After averaging, the data went through the channel function, which converts the raw EEG data from binary mode to ASCII mode. The data then went through the autoregressive function, which determines the autoregressive coefficients by the matrix direct inversion method. Forming a set of direct inversions with p = 3, N = the number of data points in each channel and \Phi_N = a_N, we can establish the matrix system shown in equation (4):

B = A\Phi, \quad B = \begin{bmatrix} x_4 \\ x_5 \\ x_6 \\ \vdots \\ x_N \end{bmatrix}, \quad A = \begin{bmatrix} x_3 & x_2 & x_1 \\ x_4 & x_3 & x_2 \\ x_5 & x_4 & x_3 \\ \vdots & \vdots & \vdots \\ x_{N-1} & x_{N-2} & x_{N-3} \end{bmatrix}, \quad \Phi = \begin{bmatrix} \Phi_1 \\ \Phi_2 \\ \Phi_3 \end{bmatrix}   (4)

The solution is then expressed by the least-squares form of equation (5):

\Phi = (A^T A)^{-1} A^T B   (5)

Next, the averaged EEG signals passed through the partial directed coherence function, which calculated the value of PDC to model the information pathways between the EEG signals using equations (2) and (3). The last step was the Gnuplot function, which plotted the PDC values in 19 x 19 (channel) matrix diagrams.

IV. RESULT

The 19 x 19 matrix layouts of PDC (not shown) were summarized in a diagram according to the international 10-20 system of electrode placement (Fig. 2).

A. Control Task

During the task, the subjects maintained the contraction of the first dorsal interosseous muscle while their EEG was recorded. No visual stimuli were presented.

Fig. 2 The international 10-20 system electrode placement
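The least-squares system (4)-(5) can be sketched for a single channel as follows; the text fits coefficients per channel with p = 3, and the helper below mirrors that univariate formulation (the function name is hypothetical).

```python
import numpy as np

def fit_var_ls(x, p=3):
    """Least-squares AR(p) coefficients for one channel, eqs. (4)-(5).

    Builds the regression system B = A @ phi with rows
    [x[t-1], x[t-2], ..., x[t-p]] predicting x[t], then solves it by
    the pseudo-inverse, equivalent to (A^T A)^-1 A^T B in eq. (5).
    """
    N = len(x)
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    B = x[p:]
    phi, *_ = np.linalg.lstsq(A, B, rcond=None)
    return phi
```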
Fig. 3 The information pathways amongst 19 channels during control task for 4 subjects (S1-S4)
In the control task, the patterns of the information pathways amongst the 19 channels were almost identical across subjects (Fig. 3). The sources were located at the anterior and posterior parts of the frontal lobe, reflecting the muscle contraction and motor control by the subjects: the subjects had to plan and organize the contraction within the given duration and concentrate on the motor action. However, for some subjects there were also sources in the occipital lobe, although the task involved no visual stimulation.
B. Count Visual Task

During the task, the O-mark and X-mark visual stimuli were displayed on the screen randomly. Subjects were asked to count the O-marks within the given duration and report the counted number at the end of the task.

Fig. 4 The information pathways amongst 19 channels during the count visual task for the same subjects as in Fig. 3

In the count visual stimuli task, the patterns of the information pathways amongst the 19 channels were also almost identical (Fig. 4). Most of the sources came from the occipital, temporal and parietal lobes. These three lobes are associated with visual reception and interpretation: the occipital lobe functions as the primary visual reception and interpretation area, the temporal lobe is involved in shape recognition and judgment, and the parietal lobe is associated with sensory integration. There were also sources that came from the frontal lobe, which is involved in planning, organizing, concentrating and decision making during both the muscle contraction and count visual stimuli tasks, performed concurrently. For both tasks, connections between the left and right hemispheres were also found, indicating an inter-relationship between the two hemispheres.

V. CONCLUSION

The present results show that the partial directed coherence method provides the possible information pathways, or information flow, in the human brain during a motor task with concurrent visual stimulation. The results provide information on the cortical functional connectivity during motor action with competing visual processing. In conclusion, the frontal region is the source of information during the motor task alone, while the sources of information come from the occipital, temporal and parietal regions for the concurrent motor task with competing visual processing.

ACKNOWLEDGEMENT

The authors gratefully acknowledge the support of the Malaysian Ministry of Higher Education (MOHE). This work is sponsored by MOHE through FRGS Research Grant No. 78427.

REFERENCES
1. Katarzyna J. Blinowska (2010) Methods for Determination of Functional Connectivity Brain, IFMBE Proc. vol. 28, 17th International Conference on Biomagnetism Adv. in Biomagnetism, Dubrovnik, Croatia, 2010, pp 195-198 2. Mat Safri N., Murayama N., Hayashida Y., Igasaki T. (2007) Effects of Concurrent Visual Tasks on Cortico-Muscular Synchronization in Humans. Brain Research 1155:81-92 3. Mat Safri N., Murayama N., Igasaki T., Hayashida Y. (2006) Effects of Visual Stimulation on Cortico-Spinal Coherence During Isometric Hand Contraction in Humans. Int. J. of Psychophysiol 61:88-293 4. Luiz A. Baccala and Koichi Sameshima (2001) Partial Directed Coherence: A New Concept in Neural Structure Determination. Biol. Cybern 84:463-474 5. Helmut Lütkepohl (2005) New Introduction to Multiple Time Series Analysis. Springer, Germany
Cross Evaluation for Characteristics of Motor Imagery Using Neuro-feedback Based EEG-Brain Computer Interface

Y.S. Ryu, Y.B. Lee, W.J. Jeong, S.J. Lee, D.H. Kang, and M.H. Lee

Department of Electrical & Electronic Engineering, Yonsei University, Seoul, Republic of Korea
Abstract— In this paper, we evaluated a CSP-based BCI algorithm to explore the realistic possibility of neuro-feedback-based BCI. The BCI algorithm, comprising CSP and three kinds of classifier (a least-squares linear classifier, SVM and linear discriminant analysis), was evaluated on 10 subjects. From the experimental results, the effect of neuro-feedback was evaluated: some subjects were exceptions, but the general trend shows a performance improvement with neuro-feedback. This study motivates the adaptive evaluation of motor imagery using neuro-feedback-based EEG-BCI, which can contribute to a generic and robust system for automated subject-specific classification in EEG-based BCI and to the development of an EEG-based BCI literacy performance evaluation system. Keywords— Cross evaluation, Motor imagery, Neurofeedback, EEG, Brain computer interface.
I. INTRODUCTION

The purpose of this paper is to evaluate the performance of EEG-BCI with respect to motor imagery characteristics, for use as a ubiquitous health service, and to determine the optimal performance evaluation conditions for a range of subjects. The paper also seeks to determine the average conditions under which maximum performance can be attained, with a view to proposing an evaluation technology for EEG-BCI-based medical instruments that guarantees the safety and reliability of such devices.
II. METHODS

A. Cross-Evaluation Method

Table 1 Effect of Cross-Evaluation

No.   Training      Testing
1     No feedback   No feedback
2     No feedback   Feedback
3     Feedback      No feedback
4     Feedback      Feedback
B. Experimental Setup

This experiment evaluated the effects of neuro-feedback. The EEG data were processed using a moving window with a window size of 1 second. In motor imagery, imagining movement of the left arm activates the C3 position of the brain, while imagining movement of the right arm activates the C4 position. Accordingly, the EEG signals at the C3 and C4 positions were extracted from the total dataset. The data were then band-pass filtered at 9-13 Hz and converted using a relative band power spectrum analysis. Finally, the result showed the activation from left or right motor imagery.

C. Real-Time Neuro-feedback

In a preliminary study, feedback was not used because that study focused on an evaluation of BCI-naïve subjects. Here, however, real-time neuro-feedback was used to improve performance.
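The feature described above (band-pass selection at 9-13 Hz followed by relative band power within a 1-second moving window) can be sketched roughly as follows. This is an illustrative assumption, not the authors' code: the sampling rate, the synthetic "C3" trace, and the FFT-based band-power estimate are all stand-ins.

```python
import numpy as np

def relative_band_power(x, fs, band=(9.0, 13.0)):
    """Relative power of `band` within one window of a single channel."""
    win = np.hanning(x.size)
    spec = np.abs(np.fft.rfft(x * win)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    total = spec[freqs > 0].sum()                 # ignore the DC bin
    in_band = spec[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band / total

fs = 250                                # sampling rate (an assumption; not stated in the paper)
n = fs                                  # one 1-second moving window
rng = np.random.default_rng(0)
t = np.arange(n) / fs
# toy C3 trace: an 11 Hz mu-band oscillation buried in noise
c3_window = np.sin(2 * np.pi * 11 * t) + 0.5 * rng.standard_normal(n)

rbp = relative_band_power(c3_window, fs)
print(f"relative 9-13 Hz power in this C3 window: {rbp:.2f}")
```

Comparing this quantity between the C3 and C4 windows would then indicate left- versus right-hand motor imagery, in the spirit of the description above.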
The result of the cross-evaluation shows the effect of neuro-feedback. In the preliminary experiment, neuro-feedback was not used; in this experiment, real-time neuro-feedback was used. Cross-evaluation for neuro-feedback involved four cases, as shown in Table 1: the simple cases (cases 1 and 4) and the mutual cases (cases 2 and 3). Comparing the results of the simple cases reveals the overall effect of neuro-feedback, while the mutual cases show how neuro-feedback affects the training or testing sessions separately.

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 500–502, 2011. www.springerlink.com
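The evaluation pipeline named in this paper (CSP spatial filtering, log-variance features, and an LDA classifier) can be sketched on synthetic two-class data. Everything here is an illustrative assumption (channel count, covariance structure, toy trials); it is not the authors' implementation, only a minimal instance of the same technique.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t, n_trials = 4, 250, 30   # channels, samples/trial, trials/class (toy sizes)

def make_trials(gains):
    """Toy motor-imagery trials: per-channel noise gains mimic an ERD topography."""
    return [np.diag(gains) @ rng.standard_normal((n_ch, n_t)) for _ in range(n_trials)]

left  = make_trials([2.0, 0.5, 1.0, 1.0])   # class 1: channel 0 strong
right = make_trials([0.5, 2.0, 1.0, 1.0])   # class 2: channel 1 strong

def mean_cov(trials):
    return np.mean([np.cov(tr) for tr in trials], axis=0)

C1, C2 = mean_cov(left), mean_cov(right)
# CSP: generalized eigendecomposition of (C1, C1 + C2); the extreme eigenvectors
# maximise variance for one class while minimising it for the other
evals, W = eigh(C1, C1 + C2)
filters = W[:, [0, -1]].T                   # two CSP filter coefficients, as in the paper

def features(tr):
    return np.log(np.var(filters @ tr, axis=1))   # log(var) features

X = np.array([features(tr) for tr in left + right])
y = np.array([0] * n_trials + [1] * n_trials)

# minimal LDA: project onto inv(Sw) @ (mu1 - mu0) and threshold at the midpoint
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw, mu1 - mu0)
thr = w @ (mu0 + mu1) / 2
pred = (X @ w > thr).astype(int)
print("training accuracy:", (pred == y).mean())
```

In the cross-evaluation of Table 1, a pipeline like this would be fitted on the training session (with or without feedback) and scored on the testing session of each of the four combinations.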
Fig. 1 Concept of Neuro-feedback
III. RESULTS

For the evaluation of 10 subjects, the evaluation conditions were determined by a comparative analysis using the ERD mode, the log(var) feature, two CSP filter coefficients, the LDA classifier, and a two-second window size. In addition, the training set used a visual cue and the testing set used an auditory cue. Among the four neuro-feedback combinations, the No. 4 condition, in which both the training and testing sessions used feedback, showed relatively high accuracy compared to the other conditions. However, there were no statistically significant differences between the four conditions; the No. 3 and No. 4 combinations showed the lowest correlation, while the combinations of No. 1 and No. 2, No. 1 and No. 3, and No. 2 and No. 4 had relatively high correlations.
Fig. 2 Effect of Neuro-feedback (By subject)

Fig. 3 Effect of Neuro-feedback (By combination)

IV. CONCLUSIONS

For neuro-feedback, the performance with the LDA classifier improved on average. However, in the statistical analysis, the degree of performance improvement showed a low level of statistical significance. This finding shows that results differ according to the feedback method used. The neuro-feedback method in this research was basic-level feedback; improvements and more advanced types can also be assessed. The goal of the cross-evaluation was to find the effect of applying neuro-feedback to training or testing sessions rather than to establish statistical significance. This cross-evaluation is distinct from other related research, and additional comparative analyses are needed to determine its clinical meaning, which would be a very important next step for this line of research.

ACKNOWLEDGMENT

This work was supported by the Korea Food and Drug Administration [08142-Medical Device-370, Development of Brain-Computer Interface Medical Instrument Evaluation Technology based on EEG for a Neural Prosthesis, 2008.06 ~ 2009.04].

REFERENCES
1. G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, editors, "Towards Brain-Computer Interfacing", MIT Press, 2006.
2. Pfurtscheller G., Lopes da Silva F.H., "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clinical Neurophysiology, vol. 110, Issue 11, Nov. 1999, pp. 1842-1857.
3. Ramoser H., Muller-Gerking J., Pfurtscheller G., "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Transactions on Rehabilitation Engineering, vol. 8, Issue 4, Dec. 2000, pp. 441-446.
4. Yijun Wang, Shangkai Gao, Xiaorong Gao, "Common Spatial Pattern Method for Channel Selection in Motor Imagery Based Brain-computer Interface", IEEE-EMBS 2005, 27th Annual International Conference of the Engineering in Medicine and Biology Society, 17-18 Jan. 2006, pp. 5392-5395.
5. C. Guger, H. Ramoser, and G. Pfurtscheller, "Real-Time EEG Analysis with Subject-Specific Spatial Patterns for a Brain-Computer Interface (BCI)", IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, Dec. 2000, pp. 447-456.
6. C. Guger, G. Edlinger, W. Harkam, I. Niedermayer, G. Pfurtscheller, "How Many People are Able to Operate an EEG-Based Brain-Computer Interface (BCI)?", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, Issue 2, Jun. 2003, pp. 145-147.
7. Blankertz B., Losch F., Krauledat M., Dornhege G., Curio G., Muller K.-R., "The Berlin Brain-Computer Interface: Accurate Performance From First-Session in BCI-Naïve Subjects", IEEE Transactions on Biomedical Engineering, vol. 55, Issue 10, Oct. 2008, pp. 2452-2462.
8. Blankertz B., Tomioka R., Lemm S., Kawanabe M., Muller K.-R., "Optimizing Spatial Filters for Robust EEG Single-Trial Analysis", IEEE Signal Processing Magazine, vol. 25, Issue 1, Jan. 2008, pp. 41-56.
9. Pfurtscheller G., C. Neuper, "Motor imagery activates primary sensorimotor area in humans", Neuroscience Letters, vol. 239, Issue 2-3, Dec. 1997, pp. 65-68.
10. F. Lotte, M. Congedo, A. Lecuyer, F. Lamarche and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces", Journal of Neural Engineering, vol. 4, 2007, R1-R13.
11. Dennis J. McFarland, A. Todd Lefkowicz, and Jonathan R. Wolpaw, "Design and operation of an EEG-based brain-computer interface with digital signal processing technology", Behavior Research Methods, Instruments, & Computers, vol. 29, Issue 3, 1997, pp. 337-345.
12. Leeb R., Lee F., Keinrath C., Scherer R., Bischof H., Pfurtscheller G., "Brain-Computer Communication: Motivation, Aim, and Impact of Exploring a Virtual Apartment", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, Issue 4, Dec. 2007, pp. 473-482.

Author: Myoungho Lee
Institute: Department of Electrical & Electronic Engineering, Yonsei University
Street: Shinchondong, Seodaemungu
City: Seoul
Country: Republic of Korea
Email: [email protected]
EEG Artifact Signals Tracking and Filtering in Real Time for Command Control Application
M. Moghavvemi 1, A. Attaran 1, and M.H. Moshrefpour Esfahani 2
1 Center of Research and Applied Electronics, University of Malaya, 50603, Kuala Lumpur, Malaysia
2 Faculty of Engineering, Multimedia University, 63100, Selangor, Malaysia
Abstract— A brain machine interface (BMI) is a direct communication pathway between the human brain and an external device; in some research it is also called a brain-computer interface (BCI). There are two types of motor BMIs: invasive and non-invasive. Research on non-invasive BMIs started in the 1980s by measuring brain electrical activity over the scalp, the electroencephalogram (EEG). In this paper, an attempt is made to present the initial steps of a non-invasive BMI design based on a pattern recognition algorithm applied to EEG signals. Artifact signals are converted into command signals to control and steer an external object. The EEG signal is contaminated with numerous artifact signals, which makes the assembly of usable artifact signals very difficult. With the help of a MATLAB program, tracking and filtering of artifact signals in a real-time application is presented as well.

Keywords— EEG, BMI, Artifact, BCI.
I. INTRODUCTION

The brain uses neuromuscular channels to communicate with and control its external environment; however, many disorders can disrupt these channels. Amyotrophic lateral sclerosis is one such disorder, which impairs the neural pathways and completely paralyses the patient. This disorder affects nearly ten million people around the world. Those most severely affected may lose all voluntary muscle control and may be completely locked in to their bodies, unable to communicate in any way. These patients nevertheless have an active brain and are aware of their surroundings. One option for restoring function to these patients is to provide the brain with a system that can decode brain signals and change them into proper messages for controlling an external device [1][2]. Non-invasive BMIs that require signal amplification and classification are known as brain-computer interfaces (BCI) [3]. A BCI is a communication system that does not depend on the brain's normal output pathways, and the EEG signal is the most common signal used in BCI systems [4][5]. BCI classification algorithms combine machine
learning techniques with biomedical domain knowledge. EEG signals are non-Gaussian and non-stationary, so they are very difficult to model well; there is now an annual competition to evaluate the progress of algorithms for modeling EEG signals. High-accuracy classification of the EEG signal is required for proper functioning of the system.

The Essence of the EEG Signal and the Acquisition Method

The electroencephalogram (EEG) is a recording of electrical activity originating from the brain [6]. It is recorded on the surface of the scalp using electrodes; thus, the signal is retrievable non-invasively. The brain consists of billions of neurons making up a large, complex neural network, and all these neurons generate electrical signals during their communication [7]. The processing of information takes place through the firing, or pulsing, of many individual neurons. A pulse takes the form of a membrane depolarization travelling along the axons of neurons. A series of pulses in the neurons, also known as a spike train, can be considered the coded information processes of the neural network. The EEG is the electrical field potential that results from the spike trains of many neurons; thus, there is a relationship between the spike train and the EEG, and the latter also encodes the information processes of the neural network [8]. Measurement and analysis of the EEG can be traced back to Berger's experiments in 1929. Since then it has had wide medical applications, from studying sleep stages to diagnosing neurological irregularities and disorders. It was not until the 1970s that researchers considered using the EEG for communication. Fig. 1 shows waveforms of a 10-second EEG segment containing six recording channels, while the recording sites are illustrated in Fig. 2. In this paper, the 10-20 System of Electrode Placement, which is based on the relationship between the location of an electrode and the underlying area of cerebral cortex, is used to collect patient EEG signals.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 503–506, 2011. www.springerlink.com
II. METHODOLOGY

A. Theory

Time-frequency signal analysis concerns the processing of non-stationary signals with time-varying frequency components, such as the EEG signal. To obtain a time-frequency analysis, the signal has to be cut into slices and studied by Fourier analysis. However, the sliced segments are not periodic, and the Fourier transform will interpret the resulting jumps as discontinuities or abrupt variations in the signal. To avoid these artifacts, windowing is preferred over slicing the signal into non-periodic segments. Popular windows proposed for this purpose are the Hamming, Hanning, Kaiser and Bartlett windows. The resulting local time-frequency analysis procedure is the continuous short-time Fourier transform, or windowed Fourier transform [9]:

F(n, m, ω) = Σ_{n=−∞}^{∞} s(n) y(n − m) e^{−jωn}    (1)

y(n) = 0.53836 − 0.46164 cos(2πn / (N − 1))    (2)

where s(n) is the windowed signal, y(n) is the window function (the coefficients in Eq. 2 are those of the Hamming window), and F(n, m, ω) is the Fourier transform.

B. Experimental Setup

The subject is comfortably seated, and EEG signals are recorded using an electrode cap with 19 electrodes distributed according to the 10-20 international electrode system. The EEG signals are then digitized at a fixed rate of 250 samples/s. The subject's task is to concentrate on the task mentally. The operator communicates with the subject to mark the beginning and end of the mental concentration period, so that the subject's mind can rest for the remainder of the experiment in order to avoid fatigue.

Fig. 1 Waveforms of a 10 second EEG segment containing six recording channels

Fig. 2 Recording sites

C. Experimental Task

Although a mechanical hand has much more freedom of motion compared to cursor movement, it is much more complicated in terms of modeling and algorithms. So, as a first approach, in this paper we set the goal of achieving simple control actions such as 2-D object movement (up-down-left-right) on the computer screen using only mental activities. First we need some static training, in which the EEG signal is recorded for one minute for one of the tasks; the purpose of this step is to acquire calibration data for initialization of the feedback parameters. After the initial signal is acquired, dynamic training is performed to set the speed of the movement properly: a target is set on the screen, the subject is asked to move the cursor, and the operator takes the feedback. At the end of each session the process should be evaluated.

D. Signal Processing

We calculate the power spectral density for selected electrodes above the sensory-motor and parietal areas in both hemispheres (usually C3, C4, P3, and P4), which can be done using different methods, such as a fast Fourier transform combined with a Hanning cosine window for selecting the EEG signals. There are several methods for processing and classifying the signals, which should be studied in order to find the optimum method [10]. Based on the studied patterns, the proper feedback commands are
generated. The speed of processing and generating commands is crucial in order to achieve a real-time response [11]. Many EEG sample signals have been investigated to select usable artifact signals to use as command control signals. Fig. 3 illustrates, in the frequency domain, an EEG signal of a patient with eye blinks before and after filtering from 9.5 to 10.7 Hz.
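The windowed-FFT power spectral density described under Signal Processing can be sketched as follows. This is an illustrative assumption, not the paper's MATLAB code: the data are synthetic, the per-channel "dominant frequencies" are invented, and Welch's Hann-windowed method stands in for the FFT-plus-window combination named in the text.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate, matching the 250 samples/s above
rng = np.random.default_rng(2)
t = np.arange(5 * fs) / fs                # five seconds of synthetic data

# hypothetical dominant frequencies for the four electrodes named in the text
channels = {"C3": 10.0, "C4": 10.5, "P3": 11.0, "P4": 9.5}
peaks = {}
for name, f0 in channels.items():
    x = np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)
    # Hann-windowed, segment-averaged periodogram (Welch's method)
    freqs, psd = welch(x, fs=fs, window="hann", nperseg=fs)
    peaks[name] = freqs[np.argmax(psd)]
    print(f"{name}: spectral peak near {peaks[name]:.1f} Hz")
```

Comparing such spectra across the sensory-motor and parietal electrodes is the kind of pattern the feedback commands would be derived from.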
In the next step, the EEG signal is converted back to the time domain, containing only the selected frequency band. Fig. 4 shows a before-and-after-filtering EEG sample in the time domain.
Fig. 3 A sample EEG signal before and after filtering with a Hanning cosine window from 9.5 to 10.7 Hz
Fig. 4 Before and after filtering of a sample EEG signal in time domain
The same procedure can be applied to patients to identify other usable artifact EEG signals, in order to gather enough command signals to steer up, down, left and right on a 2-D screen to move the mouse.
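The filter-then-invert procedure described above (isolate a frequency band, then return to the time domain) can be sketched with an FFT mask. This is a minimal stand-in, not the authors' implementation; the sampling rate and the synthetic signal are assumptions.

```python
import numpy as np

def isolate_band(x, fs, f_lo, f_hi):
    """Keep only f_lo..f_hi: FFT, zero every bin outside the band, inverse FFT."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=x.size)

fs = 250                                   # sampling rate (assumed)
t = np.arange(2 * fs) / fs                 # two seconds of data
# a 10 Hz component inside the 9.5-10.7 Hz band plus a 3 Hz component outside it
x = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 3 * t)
y = isolate_band(x, fs, 9.5, 10.7)

# the out-of-band 3 Hz component is removed; the 10 Hz component survives
print("input rms :", round(float(np.sqrt(np.mean(x ** 2))), 2))
print("output rms:", round(float(np.sqrt(np.mean(y ** 2))), 2))
```

Thresholding the band-limited output against a baseline would then yield a binary command signal of the kind used here for cursor steering.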
III. CONCLUSION

In this paper, it is confirmed that by isolating artifact frequency bands in the EEG signal, enough command signals can be generated to control an external object mentally. The proposed methodology has shown excellent performance in separating the original EEG signals from heavy line artifact in EEG data of very low SNR, with fine stability and robust outcomes. This procedure is being applied to many patients and is still in progress.
REFERENCES

[1] Y. Ning, et al., "Independent Component Analysis and Time-Frequency Method for Noisy EEG Signal Analysis," in Signal Processing, 2006 8th International Conference on, 2006.
[2] X. Wenjie, et al., "High accuracy classification of EEG signal," in Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, 2004, pp. 391-394, Vol. 2.
[3] M. Moghavvemi and S. Mehrkanoon, "Detection of the onset of epileptic seizure signal from scalp EEG using blind signal separation," Biomedical Engineering - Applications, Basis and Communications, vol. 21, pp. 287-290, 2009.
[4] H. Fariborzi, et al., "Design of a low-power microcontroller-based wireless ECG monitoring system," Selangor, 2007.
[5] S. Mehrkanoon, et al., "Real time ocular and facial muscle artifacts removal from EEG signals using LMS adaptive algorithm," Kuala Lumpur, 2007, pp. 1245-1250.
[6] M. S. bin Abd Rani and W. bt. Mansor, "Detection of eye blinks from EEG signals for home lighting system activation," in Mechatronics and its Applications, 2009. ISMA '09. 6th International Symposium on, 2009, pp. 1-4.
[7] F. Oveisi, "Information spectrum and its application to EEG-based brain-computer interface," in Neural Engineering, 2009. NER '09. 4th International IEEE/EMBS Conference on, 2009, pp. 299-302.
[8] Cohen and I. Korhonen, "From the guest editors," Engineering in Medicine and Biology Magazine, IEEE, vol. 20, pp. 23-24, 2001.
[9] X. Zhaojun, et al., "Using ICA to Remove Eye Blink and Power Line Artifacts in EEG," in Innovative Computing, Information and Control, 2006. ICICIC '06. First International Conference on, 2006, pp. 107-110.
[10] H. Al-Nashash, et al., "EEG signal modeling using adaptive Markov process amplitude," Biomedical Engineering, IEEE Transactions on, vol. 51, pp. 744-751, 2004.
EEG Patterns for Driving Wireless Control Robot
H. Azmy, N. Mat Safri, F.K. Che Harun, and M.A. Othman
Department of Electronic Engineering, Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor, Malaysia
Abstract— Electroencephalogram (EEG) studies have significant uses for disabled people, since many people with severe motor disabilities require alternative methods for communication and control. Normally these people have normal brain function that can be used to control assistive devices. This study therefore presents preliminary results on EEG patterns for driving a wireless control robot. The objective was to obtain the optimum scalp location. For each task, EEG signals were recorded from 19 scalp locations. Four recorded tasks were investigated, divided into Task 1, Task 2, Task 3 and Control; all tasks were preceded by the Control task. The Fast Fourier Transform (FFT) was used to analyze the recorded signals, and the difference in power between task and control was analyzed. Results showed that Pz and P4 are the best locations for Task 1, and T4 and P3 for Task 2 and Task 3 respectively. All of these occurred in the delta frequency band.
Keywords— Brain-computer interface (BCI), electroencephalogram (EEG), Fast Fourier Transform (FFT).

I. INTRODUCTION

Electroencephalography (EEG) is a procedure to measure electrical activity via electrodes on the scalp. EEG signal analysis is closely tied to brain-computer interface (BCI) technology, a direct technological interface between a brain and a computer that does not require any motor output from the user. BCIs detect electrical activity in the brain indirectly and noninvasively through the scalp via EEG. They provide users with communication channels that do not depend on peripheral nerves and muscles [1] and can provide communication and control to people who are totally paralyzed (e.g., by amyotrophic lateral sclerosis (ALS) or brainstem stroke) or have other severe motor disabilities [2][3]. This study focuses on BCI and EEG signals for a wireless control robot, aiming to give significant results for disabled people by needing only one electrode connected to the scalp, giving the best speed and accuracy of moving the robot to the target, and not requiring subject training before beginning the task. This study can lead to applications other than moving a robot, such as control of cursor movement [4], control of a wheelchair [5] or control of other devices. It will support the building of other prototype systems for EEG-based BCI and become a foundation and contribution in the areas of cardiology, muscle physiology and neuroscience, through increasing knowledge and comprehension of motor dysfunctions [6]. A common problem nowadays is that people with motor disabilities require alternative means in order to communicate rather than just being locked inside their bodies. A common routine for a BCI system requires training every time the electrode cap is placed on subjects; usually it needs more than one electrode to read the signals on the scalp, and requires a longer processing time to get the best accuracy in controlling the device. The objective of this study is to define the best scalp location with which control of the robot reaches the target successfully.

II. METHODOLOGY

A. Data Collection

Experimental setup: Using EEG data monitoring equipment from Nihon Kohden, an EEG cap with 19 channels/electrodes was placed on the scalp of a subject based on the 10-20 electrode placement system. The robot is connected wirelessly to the EEG machine and a computer. A LabVIEW program on the computer was used to send signals to control the robot and at the same time create trigger data for the EEG machine. The signal is sent wirelessly to the robot via an XBee connection, while the connection to the EEG machine is via an NI USB-6008.
Human subjects: 5 human subjects aged 20-35 years old participated in this study. They had never been tested in any research (never had a training session) and did not have any health problems. They gave informed consent to participate in these experiments.
Condition: Subjects sat on a chair facing a robot, which was located a few centimeters away from their feet, with both facing the target. Four conditions were applied: a Control condition and Task conditions (Task 1, Task 2 and Task 3). In the Control condition, subjects were asked to relax (resting) and were instructed to fix their eyes on the target in front of them. In the Task 1 condition, subjects were asked to imagine voluntary movement, e.g. imagine moving the robot to a target location in front of them. For the Task 2 condition,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 507–510, 2011. www.springerlink.com
subjects were asked to imagine moving the robot to the target while knowing there is an obstacle in between. Then, in the Task 3 condition, subjects were asked to think of moving the robot to the right in order to avoid the obstacle.
Data acquisition: EEG signals were obtained according to the 10-20 electrode placement system shown in Fig. 1, with reference electrodes attached to the left ear (for electrodes on the left side of the scalp) and right ear (for electrodes on the right side of the scalp). Signals were recorded with a passband of 0.5-120 Hz and stored in a computer with a sampling frequency of 1 kHz. A single trial lasted for 8 seconds and was repeated three times. As this study focuses on robot movement signal
Table 1 Frequency ranges in EEG signal

Frequency band   Range (Hz)
Delta            0 ≤ δ < 4
Theta            4 ≤ θ < 8
Alpha            8 ≤ α < 13
Beta             13 ≤ β < 31
Gamma            31 ≤ γ < 51
High gamma       51 ≤ high γ < 120
analysis, the basic steps of a BCI system were followed. First, the EEG signal was recorded from the scalp with 19 electrode channels and digitized using the acquisition system. The digitized signals were then subjected to feature extraction procedures, such as spectral analysis or spatial filtering. After such feature extraction, the system has enough information to make decisions and generate the necessary control actions to be delivered to the robot to perform the desired task as the subject imagined. Afterwards, a translation algorithm converts the EEG features into robot movement commands. The robot is always in standby mode, ready to receive any command signal from the subject.

Fig. 1 Probe/electrode/channel location

Frequency classification: The EEG frequency classification follows the standard set by the Terminology Committee of the International EEG Waveform Society, as shown in Table 1.

B. Data Analysis

EEG signals were recorded and stored with a sampling frequency of 1 kHz. The signals were then converted into the frequency domain using the FFT [7], yielding power spectrum values divided into the six frequency bands of Table 1. The EEG data were divided into 8 intervals, each consisting of 1024 data points. Then the power spectrum at every frequency was compared to get the maximum difference in power (DP):

Difference power (DP) = power in task − power in baseline

where baseline is the mean of the four trials of the Control condition.

III. RESULTS AND DISCUSSION
Figures 2, 3 and 4 show the percentage of maximum DP using a 1024 ms time interval for all trials, 5 subjects and 8 intervals. These figures show the relation between the different channels and the frequency bands; the peak for each trial indicates the important frequency and channel. From Figure 2, the maximum DP for Task 1 occurred at P3 for trials 1 and 2, and at F7 and Fp2 for trial 3. The average DP was calculated over all trials, and the maximum DP for Task 1 was found at Pz and P4, both in the delta frequency band. For Task 2 (Figure 3), the maximum DP occurred at P3, P4 and T6 for trial 1, at T3 for trial 2, and at F8 and T4 for trial 3; the average DP over all trials occurred at T4. Meanwhile, for Task 3 (Figure 4), the maximum DP occurred at T3 and T4 for trials 1 and 2 respectively, whereas trial 3 showed the maximum DP at P3 and T4; the average DP over all trials for Task 3 occurred at P3. These preliminary results show that all maximum DPs occurred in the delta frequency band. This can make it difficult to differentiate the signals for each task, since this study focuses on having only one electrode attached to the scalp. Future work may need to consider the second and third points of maximum DP as additional parameters to differentiate each task. Feature extraction and selection also play an important role in classifying EEG signals
[8]. Therefore, other options from various methods of classification with extracted features, such as the wavelet transform [9] or neural networks [10], can also be considered in this study to give more accurate and precise results meeting the objective.
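The difference-power computation from the Data Analysis section (1024-point intervals at 1 kHz, band powers, DP = task power minus the mean of four Control trials) can be sketched as follows. The data here are synthetic and single-channel, purely for illustration of the procedure:

```python
import numpy as np

fs = 1000                        # sampling frequency used in the study (1 kHz)
n_pts = 1024                     # points per interval, as in the paper
rng = np.random.default_rng(3)

def band_power(x, fs, bands):
    """Power spectrum summed into the named frequency bands."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return {name: spec[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

bands = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 31), "gamma": (31, 51), "high gamma": (51, 120)}

t = np.arange(n_pts) / fs
# toy data: the task interval carries extra 2 Hz (delta-band) activity
baseline_trials = [rng.standard_normal(n_pts) for _ in range(4)]
task = 3.0 * np.sin(2 * np.pi * 2 * t) + rng.standard_normal(n_pts)

# DP = power in task - power in baseline; baseline = mean of four Control trials
base = {b: np.mean([band_power(x, fs, bands)[b] for x in baseline_trials])
        for b in bands}
task_p = band_power(task, fs, bands)
dp = {b: task_p[b] - base[b] for b in bands}

max_band = max(dp, key=dp.get)
print("band with maximum DP:", max_band)
```

In the study this is computed per channel and per trial, and the channel with the largest DP identifies the candidate scalp location.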
Fig. 2 Maximum DP for Task 1: (a) trial 1, peak at (P3, 70%, delta); (b) trial 2, peak at (P3, 75%, delta); (c) trial 3, peaks at (Fp2, 75%, delta) and (F7, 75%, delta)

Fig. 3 Maximum DP for Task 2: (a) trial 1, peaks at (P3, 70%, delta), (P4, 70%, delta) and (T6, 70%, delta); (b) trial 2, peak at (T3, 65%, delta); (c) trial 3, peaks at (T4, 75%, delta) and (F8, 75%, delta)
Fig. 4 Maximum DP for Task 3: (a) trial 1, peak at (T3, 57.5%, delta); (b) trial 2, peak at (T4, 67.5%, delta); (c) trial 3, peaks at (T4, 65%, delta) and (P3, 65%, delta)

IV. CONCLUSIONS

It can be concluded from these preliminary results that the peaks lie in the delta frequency band. However, the channel location is harder to pinpoint, since the peaks appear at more than one channel. A combination of channels might be required to perform a specific given task, such as taking the 2nd and 3rd maximum DP into consideration before generating the result. Further analysis is needed in future work to obtain a single optimum scalp location with the best accuracy in giving signals to the wireless control robot.

REFERENCES
1. Jonathan R. Wolpaw, Niels Birbaumer, William J. Heetderks, Dennis J. McFarland, P. Hunter Peckham, Gerwin Schalk, Emanuel Donchin, Louis A. Quatrano, Charles J. Robinson, and Theresa M. Vaughan (2000) Brain-Computer Interface Technology: A Review of The First International Meeting. IEEE Trans. Rehab. Eng. 8:164-173
2. Eleanor A. Curran and Maria J. Stokes (2003) Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems. Brain and Cognition 51:326-336
3. Birbaumer N., Ghanayim N., Hinterberger T., Iversen I., Kotchoubey A., and Kubler A. et al. (1999) A spelling device for the paralyzed. Nature 398:297-298
4. Siti Zuraimi Salleh, Norlaili Mat Safri & Siti Hajar Aminah Ali (2009) Signal Processing: An International Journal (SPIJ) vol. 3 (5):110-119
5. Iturrate I., Antelis J., Minguez J. (2009) Synchronous EEG brain-actuated wheelchair with automated navigation. Robotics and Automation, ICRA 2009, IEEE International Conference on, 12-17 May 2009, pp 2318-2325
6. Andre Ferreira et al. (2008) Human-machine interface based on EMG and EEG applied to robotic systems. Journal of NeuroEngineering and Rehabilitation 5:10
7. R. S. Manzoor, R. Gani, V. Jeoti, N. Kamel and M. Asif (2009) DWPT based FFT and its application to SNR estimation in FDM systems. Signal Processing: An International Journal 3(2), 2-33
8. Elif Derya Ubeyli (2009) Statistics over features: EEG signals analysis. Computers in Biology and Medicine 39:733-741
9. Wei-Yen Hsu, Yung-Nien Sun (2009) EEG-based motor imagery analysis using weighted wavelet transform features. Journal of Neuroscience Methods 176:310-318
10. Shin-ichi Ito, Yasue Mitsukura, Minoru Fukumi, Norio Akamatsu and Rajiv Khosla (2003) An EEG feature detection system using the Neural Networks based on genetic algorithms. Proceedings IEEE International Symposium on Computational Intelligence in Robotics and Automation, Kobe, Japan, 2003, pp 1196-1200
Effects of Physical Fatigue onto Brain Rhythms
S.C. Ng 1 and P. Raveendran 2
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— The present study attempts to investigate the changes in electroencephalogram (EEG) rhythms due to physical fatigue. Three different methods to enhance the EEG signals were applied: aside from the Referential method, the Common Average Reference method as well as the Current Source Density method was applied to the raw data prior to processing. Ten subjects participated in this study. They were required to grip a dynamometer for 30 sets of 30 seconds with both hands. The results indicated an increase in the θ frequency band at the posterior region, a widespread increase of the lower α frequency band, as well as an increase in the β frequency at the left motor cortex region. It should be noted that the β frequency component is probably part of the μ rhythm, which is associated with motor control.

Keywords— Electroencephalogram, Physical fatigue, Current Source Density, Common Average Reference.
the power spectrum of the EEG frequency does not require multiple trials. Studies on power spectrum during fatigue have shown that the α and β frequency bands would increase during Maximum Voluntary Contraction (MVC) when the subject is fatigued. However, it should be noted that most of the EEG studies on physical fatigue is based on the referential EEG data. It has been pointed out by [6] that the referential method may have certain drawbacks since it is dependent on the location of the reference. Thus, it has been suggested using both the Common Average Reference (CAR) method as well as the Current Source Density (CSD) method [7] hand in hand to provide a more complete view of the EEG content. This paper attempts to study the physical fatigue effects on to the EEG data from the perspective of brain rhythm using the CAR as well as the CSD to enhance the EEG data.
I. INTRODUCTION The success in sports is dependent on the proper training method. The proper training involves both the training duration as well as the recovery period. A number of studies have been concerned with the evaluation of training methods. For example, research on fatigue due to various activities such as soccer [1], running [2] and biking [3] has been carried out. When physical fatigue occurs, our muscles feel sore and we would be reluctant to continue the task. Thus, most of these studies focus on the electromyogram (EMG) of the muscle involved [1-3]. However, there are various problems associated with the usage of EMG such as changes in the relationship between the EMG variable and complex physiological phenomena that are still not well understood [4]. Furthermore, the EMG activity has to be recorded during muscle activity to determine the condition of the muscle and this may limit its application. A few groups have extended the fatigue study to include the electroencephalogram (EEG). The two main approaches that have been used to quantify fatigue are the Movement Related Cortical Potential (MRCP) and the power spectrum of the EEG frequency bands [5]. Although MRCP is very popular in fatigue analysis, it requires multiple trials for averaging to obtain the results. The second approach using
II. DATA ACQUISITION

A. Subjects

Ten healthy male subjects participated in this experiment. The experimental protocol, as well as any possible drawbacks of the experiment, was explained to the subjects before the start of the experiment. The subjects then gave their written informed consent before the experiment began. Each subject was seated comfortably while a 64-channel electrode cap corresponding to the international 10-10 system was placed on his head. However, only 55 EEG channels were recorded, as the other channels were used to record electrooculogram (EOG) and electromyogram (EMG) signals. For the EOG recordings, electrodes were placed around the eye region and were used to check for EOG artifacts. Electrodes were placed on the forearm to acquire the EMG signals during the experimental task to indicate the level of muscle activity. The sampling rate used for all the signals was 256 Hz.

B. Protocol

The experiment began with the subject seated in a relaxed manner with his eyes closed for two minutes. This is
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 511–515, 2011. www.springerlink.com
S.C. Ng and P. Raveendran
followed by another two minutes with his eyes opened. These resting conditions (eyes-closed and eyes-opened, 2 minutes each) were repeated twice before and after the experimental task. For the experimental task, the subject was required to grip a dynamometer as hard as possible for 30 seconds using his right hand. He was then allowed a 15-second break, during which he was asked to report his perception of his physical condition. After that, he changed to his left hand and gripped the dynamometer for another 30 seconds, followed by a 15-second break. The subject was required to repeat this gripping task 30 times or until he was no longer able to continue.
algorithm [9]. Then, the Stationary Wavelet Transform (SWT) algorithm [10] was applied to the independent components to decompose them into different frequency bands. The fixed-form threshold is computed, and any components above the threshold are considered artifacts and removed. Finally, the frequency components are reconstructed using the inverse wavelet transform and then recomposed using the inverse of the SOBI weights that were used to decompose the data in the first place. The algorithm for the artifact removal is obtained from [11].
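The fixed-form threshold mentioned above is conventionally the universal threshold σ·√(2 ln N), with σ estimated from the median absolute deviation of the coefficients; the following is a minimal sketch of that thresholding step (an assumed formulation for illustration, not the authors' code):

```python
import numpy as np

def fixed_form_threshold(coeffs):
    # Universal ("fixed-form") threshold: sigma * sqrt(2 ln N),
    # sigma estimated via the median absolute deviation (MAD).
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

def suppress_artifact_coeffs(coeffs):
    # Zero the wavelet coefficients whose magnitude exceeds the
    # threshold, treating them as artifact-dominated.
    coeffs = np.asarray(coeffs, dtype=float)
    thr = fixed_form_threshold(coeffs)
    cleaned = coeffs.copy()
    cleaned[np.abs(cleaned) > thr] = 0.0
    return cleaned
```

In the full pipeline this step would be applied per frequency band of each SOBI component before the inverse transforms.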
III. SIGNAL PROCESSING
In this study, only the eyes-closed data will be processed, because the eyes-closed condition occurs immediately after the experimental task has been completed. The EEG recording obtained at each location depends on the reference electrode. Montages have been used to remove this reference-electrode effect. The Referential (REF) method is compared with both the Common Average Reference (CAR) method and the Current Source Density (CSD) method. To compute the CSD, software based on the algorithm given in [7] was used. The formula to calculate the CAR is given by (4).
A. Fatigue Parameters

This study used three parameters (perceptual, physiological and performance) in the assessment of fatigue [8]. The perceptual dimension was obtained through the subjects' self-perception report (SPR). The physiological parameter was quantified using the Root Mean Square (RMS) of the EMG signal. The performance parameter was determined based on the hand grip force (HGF) of the subject. The formulas for these three parameters are shown below:

SPR_i = 1.1 − 0.1·Borg_i,   (1)

where SPR_i is the Self Perception Report for the ith trial and Borg_i is the perceived tiredness reported by the subject based on the Borg scale.

RMS_i = √( (1/n)·Σ_{k=1}^{n} EMG_i(k)² ),   (2)

where RMS_i is the Root Mean Square of the EMG for the ith trial, EMG_i(k) is the EMG value at the kth sample of the ith trial, and n is the total number of EMG samples in a trial.

HGF_i = (1/n)·Σ_{k=1}^{n} F_i(k),   (3)

where HGF_i is the Hand Grip Force for the ith trial, F_i(k) is the force value at the kth sample of the ith trial, and n is the total number of force samples in a trial.

B. Artifact Removal

The EOG and EMG artifacts were removed before the EEG data was processed further. An automatic EEG artifact removal method that combines Blind Source Separation with the Wavelet Transform is used here. The EEG signals are first decomposed into independent components using the Second Order Blind Identification (SOBI)
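The physiological and performance parameters of Eqs. (2) and (3) reduce to a per-trial root mean square and a per-trial mean; a minimal numpy sketch (illustrative array inputs assumed, not the authors' code):

```python
import numpy as np

def rms_emg(emg_trial):
    # Eq. (2): root mean square of the n EMG samples of one trial.
    emg_trial = np.asarray(emg_trial, dtype=float)
    return float(np.sqrt(np.mean(emg_trial ** 2)))

def mean_hgf(force_trial):
    # Eq. (3): mean hand grip force over the n samples of one trial.
    return float(np.mean(np.asarray(force_trial, dtype=float)))
```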
C. Enhancing EEG Signals
CAR_i = EEG_i − (1/n)·Σ_{j=1}^{n} EEG_j,   (4)

where CAR_i is the CAR of the ith electrode, EEG_i is the referential EEG recording of the ith electrode, and n is the number of electrodes.

D. Frequency Transformation

The Fourier Transform is used to obtain the frequency components of the EEG data. The EEG that has been cleaned using the artifact removal method and enhanced as stated above is segmented into one-second segments, and the Fourier Transform of each segment is computed. To stabilize the Power Spectral Density (PSD) of the EEG, the PSD is averaged over 60 seconds of data. By doing this, we obtain four time segments before the experiment (B1-B4) and four time segments after the experiment (A1-A4). Since the PSD values differ between subjects, the PSD values are normalized. For comparison across all the subjects, the mean and standard error of the PSD values across subjects are computed.

E. Statistical Method

To test for statistical significance, we compare the data of the time segment immediately after the experimental task (A1) with the four time segments before the experimental task (B1-B4).
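A short sketch of the CAR re-referencing of Eq. (4) and the one-second PSD averaging described above, assuming an EEG array of shape (channels, samples); an illustration only, not the authors' code:

```python
import numpy as np

def common_average_reference(eeg):
    # Eq. (4): subtract the instantaneous mean over all n electrodes
    # from each referential channel.
    eeg = np.asarray(eeg, dtype=float)            # (n_channels, n_samples)
    return eeg - eeg.mean(axis=0, keepdims=True)

def mean_psd_over_segments(x, fs):
    # Average the PSD of consecutive one-second segments, as done
    # over 60 s of data in the text.
    n = int(fs)
    segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    psds = [np.abs(np.fft.rfft(s)) ** 2 / n for s in segments]
    return np.mean(psds, axis=0)
```

Note that after CAR the channels sum to zero at every sample, so one channel becomes linearly dependent — a known property of this montage.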
IFMBE Proceedings Vol. 35
Effects of Physical Fatigue onto Brain Rhythms
Since there are many results, topographical plots are used to display the data more concisely. An orange colour in a topographical plot indicates that, for that frequency, the value after the experiment differs from the value before the experiment by two standard errors.
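The two-standard-error criterion used for the topographical plots can be expressed as a mask over electrodes; a sketch assuming per-subject PSD arrays of shape (subjects, electrodes):

```python
import numpy as np

def two_se_increase_mask(before, after):
    # Flag electrodes whose across-subject mean PSD after the task
    # exceeds the pre-task mean by more than 2 standard errors
    # (the "orange" criterion described above).
    diff = np.asarray(after, float) - np.asarray(before, float)
    se = diff.std(axis=0, ddof=1) / np.sqrt(diff.shape[0])
    return diff.mean(axis=0) > 2.0 * se
```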
IV. RESULTS
The value of A3 is similar to the values before the experimental task (B1 to B4). The changes in the δ frequency range are not significant and thus are not shown here. Based on Fig. 4, it can be seen that the REF method and the CAR method indicate an increase of the 7 Hz θ rhythm at the parietal and occipital regions. The CSD method indicates an increase of the θ rhythm at the left and right temporal regions.
A. Fatigue Parameters

Based on Fig. 1, it can be clearly seen that all three common indicators of fatigue correspond with one another in their reduction as the trials increase. The reduction from the initial value indicates that fatigue sets in as the trials progress. The reduction is more pronounced over the first 15 trials and seems to level off after that.
Fig. 3 The mean and standard error of 10 Hz EEG component at C3 before (B) and after (A) the experimental task
Fig. 4 Topographical changes for the theta rhythm

Fig. 1 Fatigue parameters

B. Artifact Removal

Based on Fig. 2, we can see the effect of the artifact removal method on the EEG. From 0 to 0.5 seconds the EEG did not change, since there are no EOG artifacts there. Changes occur only in the region where the EOG artifacts existed.
Based on Fig. 5, it can be seen that the increase in the α frequency range differs across frequencies. Generally, the 10 Hz component is concentrated at the motor cortex region for all three methods.
Fig. 2 Artifact removal

C. Changes in the Brain Rhythm

Fig. 3 shows that the normalized PSD of the EEG data before the experimental task is quite constant. After the experimental task, the value increased significantly, as shown in time segment A1. With more recovery time, the value decreased, as can be seen from the trend from A1 to A3.
Fig. 5 Topographical changes for Alpha rhythm
Based on Fig. 6, it can be seen that all three methods show an increase of the β frequency band around the left motor cortex region. The increase for the CSD method is more significant than for the other two methods.
Fig. 6 Topographical changes for Beta rhythm
V. DISCUSSION

This study used multiple fatigue parameters to verify that the subjects were physically fatigued. All three parameters relate to one another very well. Generally, the subjects' performance started deteriorating from the first trial and stabilized at around the 15th trial. The automatic artifact removal method performs quite well in removing artifacts while preserving the EEG signal. This is critical, as most artifact removal methods remove EEG signal along with the artifacts. The increase of the θ rhythm at the posterior region is interesting. Studies on mental fatigue commonly find that the θ rhythm increases at the frontal region of the brain; the frontal part of the brain is associated with mental tasks [12]. Since this is a physical task, different regions of the brain may be involved. The result obtained from REF and CAR is from the occipital and parietal regions. This may indicate that the dominant 7 Hz frequency is in that region and is widespread. The CSD result, on the other hand, shows that the increase is at the temporal and central regions, indicating that some 7 Hz components are localized there. Careful observation indicates that some of those regions are close to the motor cortex region of the brain. A possible explanation is that physical fatigue results in an increase of the θ rhythm at the motor cortex region, as compared to the increase at the frontal region for mental fatigue. For the α frequency band, all three methods show that the increase is rather widespread and generally covers the motor cortex region. However, the increase involves only the lower α (8-10 Hz) rhythm. An interesting observation is
that the CAR method even shows an increase at the frontal region for 8-9 Hz. A possible explanation is that the subject may not just be physically fatigued; he may also be visually fatigued, as he is required to keep his eyes open throughout the experiment. The 8-9 Hz band is within the visual α rhythm, which is widespread across the scalp. The CSD method, on the other hand, shows the increase around the central region, which is also where the motor cortex lies. For the β frequency band, although all three methods show an increase at the left motor cortex region, only the CSD method shows a significant increase. This is to be expected, as the CSD enhances local rhythms, in this case the μ rhythm. Although the μ rhythm consists mostly of the α frequency range, it also has a β frequency component (20 Hz) that gives the μ rhythm its arch shape. This increase may indicate that the μ rhythm component increased in amplitude due to physical fatigue. This observation is also supported by the fact that the 10 Hz EEG component at the motor cortex region increased when the subject was physically fatigued and decreased after 2 minutes of recovery.
ACKNOWLEDGMENT

The authors would like to thank the University of Malaya for supporting this research under Research University grant number SF037/2007A.
REFERENCES 1. Rahnama N, Lees A, and Reilly T (2006) Electromyography of selected lower-limb muscles fatigued by exercise at the intensity of soccer match-play. J EMG and Kines 16(3):257–263. 2. Saldanha A, Ekblom N, and Thorstensson A (2008) Central fatigue affects plantar flexor strength after prolonged running. Scandinavian J Med & Sci in Sports 18(3):383–388. 3. Knaflitz M, Molinari F, and Elettronica D (2003) Assessment of muscle fatigue during biking. IEEE Trans on Neural Sys and Rehab Eng 11(1):17–23. 4. Merletti R and Parker P (2004) Electromyography: Physiology, engineering, and noninvasive applications. IEEE, New York. 5. Liu J, Yao B, Siemionow V, Sahgal V, Wang X, Sun J, and Yue G (2005) Fatigue induces greater brain signal reduction during sustained than preparation phase of maximal voluntary contraction. Brain Research 1057:113–126. 6. Tong S and Thakor N (2009) Quantitative EEG Analysis Methods and Clinical Applications. Artech House, London. 7. Kayser J and Tenke CE (2006). Principal components analysis of Laplacian waveforms as a generic method for identifying ERP generator patterns: I. Evaluation with auditory oddball tasks. Clin Neurophysiol, 117(2):348-368.
8. Borg G (1998) Borg’s perceived exertion and pain scales. Human Kinetics. 9. Cichocki A and Amari S (2002) Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications. Wiley. 10. Brychta R, Tuntrakool S, Appalsamy M, Keller N, Robertson D, Shiavi R and Diedrich A (2007) Wavelet Methods for Spike Detection in Mouse Renal Sympathetic Nerve Activity. Biomed Eng, IEEE Trans 54(1): 82–93. 11. Ng SC and Raveendran P (2009) Enhanced μ rhythm extraction using Blind Source Separation and Wavelet Transform. Biomed Eng, IEEE Trans. 54(8):2024–2034.
12. Jensen O, Tesche C (2002) Frontal theta activity in humans increases with memory load in a working memory task. Euro J of Neurosci 15:1395-1399.
Author: Ng Siew Cheok
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Evaluation of Motor Imagery Using Combined Cue Based EEG-Brain Computer Interface

D.H. Choi, Y.B. Lee, W.J. Jeong, S.J. Lee, D.H. Kang, and M.H. Lee
Department of Electrical & Electronic Engineering, Yonsei University, Seoul, Republic of Korea

Abstract— In this paper, a combined cue, which differs from the use of a visual cue or an auditory cue alone, is evaluated on BCI-naïve subjects. The combined cue is the simultaneous presentation of a visual cue and an auditory cue. The effect of the combined cue is evaluated from the results of the experiment. In the case of the combined cue, the correlation between the combined cue and the visual cue was found to be higher than in the other conditions. The maximum performance increase was 6% over the worst-performing case, with an average performance increase of 3%. For the subject showing maximum performance, the accuracy was 90.3%.

Keywords— Combined cue, Motor imagery, Evaluation, EEG, Brain computer interface.
I. INTRODUCTION

Brain-computer interfaces can be used in state-of-the-art medical devices to control and communicate with neural prostheses such as artificial hands, feet and wheelchairs. The importance of BCI neuro-technology places BT-IT-NT-CT convergence technology in a position to have a considerable impact on society in the 21st century. To implement a brain-computer interface system, either a noninvasive or an invasive method can be used. With the invasive method, bio-compatibility, infection and ethical issues arise. These issues make noninvasive methods preferable to invasive methods for commercialization. Noninvasive brain-computer interface systems provide a method of control and communication for those who are severely disabled. These methods also allow the creation of more intelligent and human-friendly user interfaces for computer software [1].
When the starting sound was presented, a direction arrow pointing left or right was shown on the monitor. The visual cue was shown for four seconds, after which a blank screen was shown together with the ending sound. The subjects performed motor imagery continuously during the four seconds of the visual cue. In the EEG data analysis, the visual cue parts were extracted using a window function. In this composition, the total trial length was 8 seconds, and each experiment had 50 trials. The auditory cue had a blank screen part and a cue part. In the auditory cue part, a 'left' or 'right' sound was given from the speaker, and the monitor showed no direction. The combined cue was a combination of a visual cue and an auditory cue: the monitor showed the 'left' or 'right' direction arrow while the speaker simultaneously gave the 'left' or 'right' sound. The EEG measuring electrodes were placed densely over the motor cortex areas of the brain [2].

B. Combined Cue

Generally, a BCI experiment uses visual cues and auditory cues. A visual cue is typically a graphical symbol and/or a directional sign for motor imagery. Auditory cues use sound effects instead of a visual cue. These methods evaluate the brain's information-processing performance. In a preliminary study, the classification accuracy was tested using visual cues and auditory cues. In this study, visual cues and auditory cues were combined and presented simultaneously; thus, the subject trained using a combined cue. This method tested the performance between an individual cue and a combined cue [3].
II. METHODS

A. Experimental Setup

This experiment evaluates the effects of the combined cue. There were six types of combinations of training cues and testing cues. Each experiment had 50 trials, and each trial had a two-second blank screen, a two-second fixation cross, and a four-second visual or auditory cue. Blank screens marked the interval between trials, and the fixation cross denoted the preparation for motor imagery.

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 516–518, 2011. www.springerlink.com
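The trial timing above (2 s blank + 2 s fixation + 4 s cue = 8 s per trial) amounts to slicing a fixed window out of each trial of a continuous recording; a minimal sketch, with an illustrative sampling rate and a (channels, samples) array assumed:

```python
import numpy as np

def extract_cue_epochs(recording, fs, n_trials,
                       trial_len=8.0, cue_start=4.0, cue_len=4.0):
    # Cut the 4-s motor-imagery (cue) window out of each 8-s trial;
    # the 2-s blank and 2-s fixation precede the cue.
    step = int(trial_len * fs)
    off = int(cue_start * fs)
    n = int(cue_len * fs)
    epochs = [recording[:, t * step + off : t * step + off + n]
              for t in range(n_trials)]
    return np.stack(epochs)   # (n_trials, n_channels, n_cue_samples)
```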
Fig. 1 Concept of a Combined Cue
III. RESULTS

The combined cue consisted of a visual cue and an auditory cue. Each evaluation had six cases, as shown in Table 1. In Fig. 2, the x-axis denotes the subject number and the y-axis is the classification accuracy. Subject 1 had nearly 90% accuracy, denoting a well-trained subject. In addition, each subject had a specific condition for optimal BCI performance. Thus, subject 1 had the optimal condition in this experimental setting. If the experimental and analysis conditions change, the accuracy of each subject may change. Hence, it is important to determine the optimal condition for each person. In Fig. 3, Nos. 4 and 5 showed good performance, indicating that the relationship between the combined cue and the visual cue is optimal for training and testing BCI algorithms [4].

Table 1 Effect of the Combined Cue
No. | Training | Testing
1 | Visual Cue | Auditory Cue
2 | Auditory Cue | Visual Cue
3 | Combined Cue | Auditory Cue
4 | Combined Cue | Visual Cue
5 | Visual Cue | Combined Cue
6 | Auditory Cue | Combined Cue
Fig. 3 Effect of the Combined Cue (by combination)

For the six types of combinations, a t-test and a correlation analysis were conducted. Using LDA as a classifier, the third and fourth combinations showed a relatively low p-value (0.09) with a correlation value of 0.77. According to an analysis-of-variance test over the six types of combinations and 10 subjects, the accuracy trends of the subjects across the six combinations showed low statistical significance. However, the distributions of the six combinations for each subject showed very high statistical significance (P = 4.2E-09), indicating that the characteristics of each subject are feasible for application to biometric-method-based EEG [5].
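The paired t-test and correlation reported above can be outlined with plain formulas; this sketch computes a paired t statistic and a Pearson correlation from hypothetical per-subject accuracies (a simplified stand-in for the full analysis, not the authors' code):

```python
import numpy as np

def paired_t_statistic(a, b):
    # t statistic for paired per-subject accuracies of two
    # training/testing combinations.
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

def pearson_r(a, b):
    # Correlation of per-subject accuracies between two combinations.
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```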
CONCLUSIONS

In this research, the average performance using the combined cue for training and the visual cue for testing was higher in accuracy than that of the other combinations. In the statistical analysis of the two combinations trained using the combined cue, the statistical significance was high, especially in the case of the LDA classifier. This implies that the accuracy difference between the two combinations has statistical importance, with a correlation value of 0.77. Compared with the results determined by Tübingen University, the effect of the combined cue was relatively high, and the combination of the combined cue and the visual cue had more statistical significance than the combinations of the combined cue and the auditory cue. This result can be applied to effective training and testing methods in EEG-BCI research [6].
Fig. 2 Effect of the Combined Cue (by subject)
ACKNOWLEDGMENT

This work was supported by the Korea Food and Drug Administration. [08142-Medical Device-370, Development
D.H. Choi et al.
of Brain-Computer Interface Medical Instrument Evaluation Technology based on EEG for a Neural Prosthesis, 2008.06 ~ 2009.04].
REFERENCES 1. Blankertz B., Losch F., Krauledat M., Dornhege G., Curio G., Muller K.-R. (2008) The Berlin Brain-Computer Interface: Accurate Performance From First-Session in BCI-NaÏve Subjects, IEEE Transactions on Biomedical Engineering, Volume 55, Issue 10, Oct. 2008, pp. 2452-2462 2. Blankertz B., Tomioka R., Lemm S., Kawanabe M., Muller K.-R (2008) Optimizing Spatial filters for Robust EEG Single-Trial Analysis, IEEE Signal Processing Magazine, vol. 25, Issue 1, Jan. 2008, pp. 41-56 3. Pfurtscheller G., C. Neuper (1997) Motor imagery activates primary sensorimotor area in humans, Neuroscience Letters, vol. 239, Issue 23, Dec. 1997, pp. 65-68
4. Lotte F, Congedo M, Lecuyer A, Lamarche F and Arnaldi B (2007) A review of classification algorithms for EEG-based brain-computer interfaces, Journal of Neural Engineering, vol. 4, 2007, R1–R13.
5. McFarland D J, Lefkowicz A T, and Wolpaw J R (1997) Design and operation of an EEG-based brain-computer interface with digital signal processing technology, Behavior Research Methods, Instruments, & Computers, vol. 29, Issue 3, 1997, pp. 337–345.
6. Leeb R., Lee F., Keinrath C., Scherer R., Bischof H., Pfurtscheller G (2007) Brain–Computer Communication: Motivation, Aim, and Impact of Exploring a Virtual Apartment, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, Issue 4, Dec. 2007, pp. 473-482.

Author: Myoungho Lee
Institute: Department of Electrical & Electronic Engineering, Yonsei University
Street: Shinchondong, Seodaemungu
City: Seoul
Country: Republic of Korea
Email: [email protected]
Fitting and Eliminating to the TMS Induced Artifact on the Measured EEG by the Equivalent Circuit Simulation Improved Performance Y. Katayama and K. Iramina Department of Informatics, Kyushu University, Fukuoka, Japan
Abstract— Transcranial magnetic stimulation (TMS) is a non-invasive method of stimulating the brain by inducing an eddy current within the brain from a TMS coil placed outside the scalp. The induced eddy current stimulates the neural circuits and can suppress brain activity locally in time and space; it is applied to probe brain functions. When TMS is applied to the brain while the electroencephalogram (EEG) is being measured, an induced artifact caused by the TMS is superimposed on the EEG; this is called the TMS artifact. The amplitude of the TMS artifact is generally too large to ignore when diagnosing the EEG. Therefore, several methods have been proposed to remove the TMS artifact from the recorded EEG using the EEG alone. We have proposed a method for describing the shape of the induced artifact in the EEG under TMS using two equivalent circuit models, the TMS equipment model and the equivalent circuit model of the bioelectric measurement system, under some simplifying approximations. This paper presents several attempts to improve the performance of fitting and eliminating the TMS artifact with the proposed method. One is to derive the strict solution of the TMS artifact by omitting some simplifying approximations. Another is a countermeasure against errors in calculating the inverse matrix while estimating the parameters of the fitted shape of the TMS artifact. The first attempt shows that the strict solution improves the shape fitting, but not by much over the approximated solution. The second attempt improves the stability of the simulation. These attempts show that the time parameters that determine the state transitions, such as "Tc", should be treated separately from the other time-constant parameters, which are derived from circuit parameters. Moreover, a residual component that is not described by the circuit model should be considered.
the bioelectric and EEG measurement circuit loop. The induced electromotive force adds TMS-related noise to the EEG. This noise is called the TMS induced artifact, or simply the TMS artifact. To suppress and remove this artifact from the EEG, sample-and-hold circuits [1] and independent component analysis (ICA) [2] are mainly applied. Other methods use a Kalman filter [3], etc. We have proposed fitting a shape function to the TMS artifact by adjusting the circuit parameters [4] and have compared this approach with ICA and a Kalman filter [5]. The shape function is the approximated solution of the equivalent electronic circuit equation of Fig. 1. The objective of this paper is to improve the performance of the simulation. We have found a way to solve the equations without some negligible approximations that were applied in the past proposals [4][5]. The new solution is called the strict solution, and the conventional solution used in the past is called the approximated solution. The performance improvement of the strict solution is discussed by comparing the fitting results to the TMS artifact on the EEG. On the other hand, calculation errors frequently occur when solving the inverse matrix while estimating the parameters of the fitted shape of the TMS artifact. A pseudo-inverse matrix is applied to suppress these calculation errors, and the tendencies leading to inverse-matrix errors are observed. Furthermore, we discuss what components are needed to describe distorted TMS artifacts.
Keywords— Electroencephalography, Equivalent circuits, Function Fitting, Induced artifact, Transcranial magnetic stimulation.
II. FITTING SOLUTIONS TO THE TMS ARTIFACT A. TMS Equipment System
I. INTRODUCTION Transcranial magnetic stimulation (TMS) is a noninvasive stimulus method to the brain in which it inducing the eddy current by applying the pulse current to the coil putting on the scalp and it has many studies about TMS effects to the electroencephalography (EEG). When applying TMS to the brain while measuring EEG, the magnetic flux from the TMS coil induces the electromotive force into
Fig. 1 (left side) shows the equivalent circuit model of the TMS equipment. The inductance L and the internal resistance component r denote the TMS coil. The constant voltage source E, capacitance C, and discharge resistance R are elements of the TMS equipment circuit. The state of the TMS equipment circuit is changed by exclusively turning on switches SW1, SW2, and SW3, in this order.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 519–522, 2011. www.springerlink.com
Y. Katayama and K. Iramina
Fig. 1 Equivalent circuit of the TMS equipment (left) and biomeasurement system (right)
The TMS equipment system has three modes: a C-charge mode (SW1=ON), an LCr oscillation mode (SW2=ON), and an L-discharge mode (SW3=ON). The electromagnetic induction from the TMS coil in the LCr oscillation mode constitutes the TMS application. The shape of the TMS pulse has several oscillation types, such as mono-phase, bi-phase, and multi-phase; the bi-phase type (i.e., the oscillation is terminated after about one cycle) is assumed in this paper. Let the start time of the oscillation of the TMS circuit model in Fig. 1 be t = 0; then the circuit equation for the electrical charge q in the capacitance C and its initial conditions are as follows:

d²q/dt² + (r/L)·(dq/dt) + q/(LC) = 0,
q(t=0) = CE,   (dq/dt)|_(t=0) = 0,
i = −dq/dt,   v_L = L·(di/dt),   (1)
where i is the current from the capacitance C, and v_L is the terminal voltage of the inductance component L. The solution of this differential equation (1) is as follows:

q(t) = CE·e^(−t/τ_1)·[cos(ω_1 t) + ψ_1·sin(ω_1 t)]   (2s)
     ≈ CE·e^(−t/τ_1)·cos(ω_1 t),                      (2a)

τ_1 = 2L/r,   ω_1 = √(1/(LC) − 1/τ_1²),   ψ_1 = 1/(ω_1 τ_1),   T_1 = 2π/ω_1,
where τ_1 is the time constant of the damping factor and ω_1 is the angular frequency of the TMS oscillation. The T_T value is assumed to be one cycle of the TMS oscillation, but it is not tied to ω_1. Eq. (2s) and (2a) are the strict and approximated solutions of q; the approximated solution neglects the minor second term on the right-hand side of the strict solution. From here on, some approximations are applied in the approximated solution to make the problem easier, in order to show the general effectiveness of this method. The solution of v_L is found from Eq. (1) and (2s) or (2a). Eq. (3s) and (3a) are the strict and approximated solutions of v_L:

v_L(t) = LCE·ω_1²·(1 + ψ_1²)·e^(−t/τ_1)·[cos(ω_1 t) − ψ_1·sin(ω_1 t)]   (3s)
       ≈ LCE·ω_1²·e^(−t/τ_1)·cos(ω_1 t).                                 (3a)
The key parameter concerning the electromagnetic induction into the bioelectric measurement system is dB/dt, where B is the magnetic field generated by the TMS coil. The parameter dB/dt is proportional to di/dt; thus, the key parameter is proportional to v_L. Let the oscillation be stopped at t = T_T for the bi-phase type TMS circuit. The reason for not letting T_T = T_1 is to account for the difference between the timing (T_T) at which the oscillation is stopped and the oscillation cycle (T_1) determined by the TMS circuit parameters.

B. Equivalent Circuit of Bioelectric Measurement System

Fig. 1 (right side) shows the equivalent circuit model of the bioelectric measurement system. This circuit forms a closed loop between the bioelectric equivalent circuit and the measurement system, which is subject to the electromagnetic induction from the TMS coil. The resistance component R_b and the capacitance component C_b denote the bioelectric equivalent circuit, and the electrical potential of the EEG is assumed to be the terminal voltage of C_b, named v_b. The reason for using this simpler bioelectric equivalent circuit is to form a solvable circuit equation and to obtain a closed-form solution. The transformer describing the coupling of the electromagnetic induction is replaced by the following simple description: when the terminal voltage of the inductance component L in the TMS coil is v_L, the induced electromotive force k·v_L drives the bioelectric measurement system, as shown in Fig. 1, where k is a proportionality factor. Furthermore, it is assumed that the coupling influence from the bioelectric measurement system back to the TMS equipment is negligible, and that TMS affects the human body only in the oscillation mode. The state transition of the bioelectric measurement system depends on that of the TMS equipment system. In the LCr oscillation mode (SW2=ON), the bioelectric measurement system is driven by the induced electromotive force k·v_L from the TMS coil.
In the discharge mode (SW3=ON), the influence from the TMS coil is negligible, and an R_b C_b decay occurs, consuming the energy left in the capacitance component C_b. The circuit equation for the charge q_b in the capacitance component C_b in Fig. 1 is

dq_b/dt + q_b/τ_b = k·v_L/R_b,   τ_b = R_b C_b,   v_b = q_b/C_b,   (4)

where τ_b is the time constant of the R_b C_b decay. The electromotive force, which is proportional to Eq. (3s) or (3a),
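The coupled models of Eqs. (1) and (4) can be simulated numerically to visualize the artifact shape. This is a minimal sketch: all component values (L, C, r, k, R_b, C_b) are made-up placeholders, not the paper's, and semi-implicit Euler integration replaces the closed-form solutions:

```python
import numpy as np

def simulate_tms_artifact(L=16e-6, C=185e-6, r=30e-3, E=1.0, k=1e-3,
                          Rb=1e4, Cb=1e-6, fs=1e6, t_end=1e-3):
    # Eq. (1): LCr oscillation of the TMS circuit; Eq. (4): the
    # bioelectric RC loop driven by k*v_L. Placeholder constants.
    dt = 1.0 / fs
    n = int(round(t_end * fs))
    q, dq = C * E, 0.0            # initial conditions of Eq. (1)
    qb = 0.0                      # initial condition of Eq. (4)
    vb = np.empty(n)
    for idx in range(n):
        d2q = -(r / L) * dq - q / (L * C)          # Eq. (1)
        vL = L * (-d2q)                            # i = -dq/dt, v_L = L di/dt
        qb += dt * (k * vL / Rb - qb / (Rb * Cb))  # Eq. (4)
        vb[idx] = qb / Cb
        dq += dt * d2q                             # semi-implicit Euler step
        q += dt * dq
    return vb
```

Semi-implicit Euler keeps the damped oscillation numerically stable at this step size; the closed-form Eqs. (2)–(5) remain the reference.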
is applied to the circuit in the oscillation mode (0 ≤ t ≤ T_T), and is zero in the discharge mode (T_T ≤ t). Therefore, the initial conditions and the solution of v_b, which is based on the solution of q_b from Eq. (4), are given below.

Table 1 Experimental conditions measuring EEG applying TMS
TMS equipment: Magstim Rapid
TMS coil: Magstim Double 70mm Branding Iron 3271-00 (8-shaped)
TMS location: Between C5 and FC5, internally induced
Sampling frequency: 1450 Hz
Gap time of Eximia EEG: 2 ms
1. Oscillation mode (0 ≤ t ≤ T_T), with the initial condition q_b(t = 0) = 0:

v_b(t) = α·kE·[e^(−t/τ_1)·cos(ω_1 t − θ) − e^(−t/τ_b1)·cos θ],   (5a)

tan θ = ω_1 / (1/τ_b1 − 1/τ_1),
α = (LC·ω_1²/τ_b1) / √((1/τ_b1 − 1/τ_1)² + ω_1²);

the strict solution (5s) is obtained in the same way from Eq. (3s) and carries the additional ψ_1 terms.

2. Discharge mode (T_T ≤ t):

v_b(t) = A·e^(−(t−T_T)/τ_b2),   A: unknown constant.   (6)
offset within the estimation time area. The start time offset is prepared to adjust the offset within the sampling time around t = 0. The start time offset and T_T are also state-transition time parameters. Therefore, there are 8 unknown parameters: the start time offset, T_T, kE, T_1, τ_1, τ_b1, τ_b2, and the amplitude offset. To determine the values of the above circuit parameters, the shape function was fit to the TMS artifact. The least-mean-square method was applied to solve the parameter estimation. It minimizes
TMS intensity: 45% of motor threshold
Location of electrodes: Extended international 10-20 system with 60 electrodes
EEG recorder: Nexstim eXimia EEG
y t
t,
,
7
.
where y(t) is the EEG involving the TMS artifact from t=0. The Newton method was applied to determine the parameter update direction, and the golden section method was used to determine the parameter update length.
0,
θ 1⁄
,
e
t e
,
5s 5a
,
, A: unknown constant,
6
where Eq. (5s),(5a) are the strict and approximated solutions of , and an unknown constant A is solved by the continuity condition between Eq. (5s) or (5a) and (6) at t T . C. Parameter Estimation of the Shape Function The shape function of the TMS artifact is formed by Eq. (5s) or (5a) and (6), and unknown parameter to be determined to fit the shape function to the TMS artifact are included into the formula. There are some parameters not presented yet. One of them is that time constant of the bioelectric equivalent circuit τ would have the frequent property because of using the simpler bio equivalent circuit. in the This circuit is driven by angular frequency ω oscillation mode, and is driven by far lower frequency (R C decay) in the discharged mode. This fact means that time constant τ have to take the individual values for each mode. Other parameters to determine the shape function are the offset values of amplitude and start time, and T . The amplieliminates the 0 and 1 order voltage tude offset
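The golden section step-length search used in the fitting can be sketched as follows. The objective below is a toy stand-in for Q, not the paper's shape function:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal 1-D function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                     # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                               # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Step-length selection along a Newton direction p from point x:
# alpha* = argmin_alpha Q(x + alpha * p)
Q = lambda x: (x - 1.3) ** 2 + 0.5          # toy objective, minimum at 1.3
x, p = 0.0, 1.0                             # current point, search direction
alpha = golden_section(lambda a: Q(x + a * p), 0.0, 4.0)
assert abs(alpha - 1.3) < 1e-4
```

In the paper's scheme the Newton method supplies the direction p and this line search supplies the step length alpha.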
Fig. 2 Fitting and elimination results of (a) the approximated and (b) the strict solutions, and (c) the fitting result of the strict solution
III. EXPERIMENTAL RESULTS AND DISCUSSION

A. Recording Condition of the EEG with TMS Artifact

The experiment observed the event-related potential P300 during an auditory oddball task, and EEG data with TMS applied were measured for one subject. The experimental conditions for applying TMS and measuring EEG are listed in Table 1. The TMS artifact simulation was applied to a typical TMS artifact waveform at electrode location Cz in the measured EEG data.

B. Fitting Results of the Strict and Approximated Solutions

Fig. 2 shows the artifact fitting and elimination results and the Q values for the approximated and strict solutions. The strict solution showed improved, but not greatly superior, fitting performance relative to the approximated solution for the TMS artifact in the EEG data. In other words, the approximated solution provides a good approximation.
IV. CONCLUSIONS

Several refinements were applied to the proposed method to improve its performance. The strict solution of the TMS artifact improves the shape fitting, though not remarkably, compared with the approximated one. The workaround against errors in calculating the inverse matrix yields a more stable system. However, a fundamental solution is still required to handle the state transition parameters separately, and the residual component must be considered for highly distorted TMS artifacts.
Fig. 3 Relationship between the TMS event times and the calculation times to convergence, while re-using previously converged parameters throughout the TMS events

Fig. 4 Highly distorted TMS artifact on EEG: (upper) at C5 (circles) and Cz (line); (lower) at FC5 (circles) and the fitting result
C. Stability Results While Re-using Converged Parameters

The workaround for inverse-matrix calculation errors caused by near-zero determinants makes the system more stable. Fig. 3 shows the relation between the TMS event times and the number of continuous calculations to convergence while re-using previously converged parameters in the stabler channel set. Some channels converge within fewer continuous calculations when the previously converged values are used as the initial values of the next calculation. Other channels, however, drop out of the convergence state. This is because the state transition time parameters leave the existence range that could be observed while calculating the simulation. Therefore, a fundamental solution is to handle the state transition parameters separately.

D. Highly Distorted TMS Artifacts

Fig. 4 shows the highly distorted TMS artifacts at C5 and FC5, which are near the TMS coil. The TMS artifact at C5 shows a distorted waveform relative to the damped sine wave at Cz. The TMS artifact at FC5 implies that the residual component contains higher-order components.
ACKNOWLEDGMENT This study was partially supported by the Magnetic Health Science Foundation.
REFERENCES
1. Virtanen J, Ruohonen J, Naatanen R, Ilmoniemi RJ (1999) Instrumentation for the measurement of electric brain responses to transcranial magnetic stimulation. Med. Biol. Eng. Comput. 37: 322-326
2. Jung TP, Makeig S, McKeown MJ, Bell AJ, Lee TW, Sejnowski TJ (2001) Imaging brain dynamics using independent component analysis. Proc. IEEE vol. 89, No. 7, pp. 1107-1122
3. Morbidi F, Garulli A, Prattichizzo D, Rizzo C, Manganotti P, Rossi S (2007) Off-line removal of TMS-induced artifacts on human electroencephalography by Kalman filter. J. Neurosci. Methods 162: 293-302
4. Katayama Y, Iramina K (2009) Equivalent circuit simulation of the induced artifacts resulted from transcranial magnetic stimulation on human electroencephalography. IEEE Trans. Magnetics vol. 45, No. 10, pp. 4833-4836
5. Zilber NA, Katayama Y, Iramina K, Erich W (2010) Efficiency test of filtering methods for the removal of transcranial magnetic stimulation artifacts on human electroencephalography with artificially transcranial magnetic stimulation-corrupted signals. J. Appl. Phys. 107, 09B305 (3 pages)
IFMBE Proceedings Vol. 35
Gender Identification by Using Fundamental and Formant Frequency for Malay Children H.N. Ting and A.R. Zourmand Department of Biomedical Engineering, Faculty of Engineering, University Malaya, Jalan Pantai Baharu, 50603 Kuala Lumpur, Malaysia
Abstract— This paper investigated gender identification using the first four formant frequency values and the fundamental frequency of the vowel /a/ in 360 Malay children in age groups from 7 to 12 years. Thirty males and thirty females were selected for each age group. All 360 subjects were asked to pronounce the vowel /a/ for 5 seconds, and the fundamental and formant frequencies were extracted. The accuracy of gender identification was calculated using the different formant and fundamental frequency values. Different combinations of formant and fundamental frequencies were also examined to obtain a good and acceptable gender identification accuracy. The results showed that F2 was the best single factor for gender identification and that the combination of F2 and F0 improved the accuracy in Malay children.

Keywords— Formant Frequency, Fundamental Frequency, Malay Vowel, Gender Identification in Children, Children Speech.
I. INTRODUCTION

The formant frequencies are among the most important components of human speech and have been studied in children and adults across different languages. Formant frequencies are what make vowels identifiable to a listener. Because the formant positions of vowels differ between males and females, formant frequency can be used for gender identification, especially the first three formant frequencies of vowels [1, 2]. There is an evident need to examine the formant and fundamental frequencies of Malay vowels in Malay children, and their effect on gender identification is considerable. The aim of the present study is to find any systematic gender-related differences in the fundamental and formant frequency values of the vowel /a/ produced by Malay children between 7 and 12 years old.

Peterson and Barney [1] studied the fundamental frequency and the first three formant frequencies of ten American English vowels. The study was later extended by Hillenbrand et al. [2], who found numerous differences in the average frequencies of F1 and F2 compared with Peterson and Barney. In both of these studies, however, the age and sex of the children were not specified. Previous studies (Peterson and Barney [1], Hillenbrand et al. [2] and Bennett [3, 4]) showed that females had higher formant frequencies than males. These studies, however, did not determine the significance of the formant frequency differences across the genders. Busby and Plant [5] found gender differences in F2 for almost all 11 Australian English vowels and in F1 for low vowels. In another study, of 11-year-old Swedish children, White [6] reported that female children had significantly higher F1 and F2 than male children. Eguchi and Hirsh [7] measured the fundamental and first two formant frequency values for children between 3 and 13 years old. They reported that the F1 and F2 values for boys were lower than those for girls in the same age group, and that F0 values in 13-year-old boys were also lower than those for girls, but this did not hold for the 11- and 12-year-old children. However, the gender differences in fundamental and formant frequencies among Malaysian Malay children are yet to be established. Several studies proposed using formant frequencies for gender identification and reported that, in general, females had higher formant frequency values than males in the same age group. Their results showed that F2 values were a good gender identifier for almost all vowels, whereas F0 values were a poor factor in this respect [1-7]; it was also noted that for younger children there were no differences between the two genders, especially in fundamental frequency [8].
II. METHOD

360 normal Malaysian Malay children between 7 and 12 years old were selected for the study from primary schools around Petaling Jaya and Kuala Lumpur, Malaysia. Each age group consisted of 30 males and 30 females without any vocal pathology or voice disorder, symptoms of cold or flu, allergies, history of smoking, neurological disease or respiratory dysfunction. All subjects were asked to pronounce the sustained vowel /a/ for 5 seconds at a comfortable pitch and loudness level. The recording was done using a Shure SM58 microphone in a normal room environment. The mouth-to-microphone distance was fixed at 2-3 cm. The speech sounds were digitally recorded with the GoldWave [9] digital audio editor at a sampling rate of 20 kHz with 16-bit resolution.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 523–526, 2011. www.springerlink.com
Fundamental frequency and the first four formant frequency values were extracted for each subject using the Praat software [10]. Standard formant settings were used: a maximum formant frequency of 5500 Hz, 5 formants, a 25 ms window length and a dynamic range of 30 dB. A discrimination test was carried out to check the children's pronunciation of the vowel /a/; a pronunciation was considered correct if at least 7 of the 10 listeners judged it correct. The Euclidean minimum distance method was used for gender identification with each formant and fundamental frequency. The Euclidean formula gives the distance between two or more points,

d(x, μ) = √(Σ_i (x_i - μ_i)^2),

so the distance between each subject's frequency values and the mean values of the males and of the females was computed, and the gender with the minimum distance was assigned to the subject.
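The minimum-distance rule can be sketched as below; the mean vectors are illustrative placeholders, not the measured Malay-children means:

```python
import numpy as np

def classify_gender(x, male_mean, female_mean):
    """Assign the class whose mean feature vector is nearest in Euclidean distance.

    x, male_mean, female_mean: arrays of frequency features, e.g. [F0, F2] in Hz.
    """
    d_male = np.linalg.norm(x - male_mean)
    d_female = np.linalg.norm(x - female_mean)
    return "male" if d_male <= d_female else "female"

# Illustrative mean vectors (not the paper's measured values): [F0, F2] in Hz
male_mean = np.array([230.0, 1350.0])
female_mean = np.array([245.0, 1430.0])
print(classify_gender(np.array([233.0, 1360.0]), male_mean, female_mean))  # prints: male
```

Using a feature vector with more entries (e.g. [F0, F1, F2]) is exactly how the frequency combinations in Tables 2 and 3 are evaluated.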
This method was also applied to different combinations of formant and fundamental frequencies, and the percentage of correct gender identification was calculated to find the best frequency and combinations for this purpose.

III. RESULTS AND DISCUSSIONS

Table 1 shows the percentage of gender identification in the different age groups across the fundamental and first four formant frequencies. Overall, F2 gave the best gender identification accuracy, averaging 65.56% over all age groups. The 12-year-old children also had the best accuracy among all age groups, averaging 73.33%. The fundamental frequency, with 58.33% correct identification, ranked second, while F1, F4 and F3 achieved 56.39%, 53.33% and 53.06% correct identification, respectively.

Table 1 Percentage of gender identification across the frequencies

Age        F0     F1     F2     F3     F4
7          60     53.33  66.67  50     66.67
8          61.67  56.67  65     56.67  58.33
9          50     50     56.67  50     61.67
10         63.33  66.67  71.67  63.33  68.33
11         50     50     55     56.67  58.33
12         71.67  70     73.33  61.67  63.33
All (360)  58.33  56.39  65.56  53.06  53.33

Other previous studies confirm that F2 was the best formant-based gender identification factor in all age groups, owing to its lower standard deviation [3, 5, 11, 12]. They also noted that F0 was a poor factor in this respect. In contrast, Hasek et al. [8] found differences between the two genders using F0 values for 7- to 10-year-old children. According to previous studies, K-factors (F-scaling) were also used to describe the relationship between the formant frequency values of males and females [1, 3, 6, 7, 13]:

Kn% = ((Kn,female / Kn,male) - 1) * 100

Figure 1 shows the K0-K4 values across the age groups for the vowel /a/. Overall, the K-factor for Malaysian Malay children for the vowel /a/ was about 3.78%; in detail, K0-K4 were 6.72%, 5.38%, 5.34%, 0.85% and 0.63%, respectively. In contrast, previous investigators reported average K-factors of 19% for English [1], 18% for Swedish adults [13] and about 10% for 7- and 8-year-old American children [3]; about 9% was reported by White [6] and 12% by Eguchi and Hirsh [7].
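The K-factor formula above reduces to a one-liner; the input means here are made-up numbers chosen to reproduce a 6.72% ratio difference, not the study's data:

```python
def k_factor(female_mean, male_mean):
    """Kn% = ((Kn_female / Kn_male) - 1) * 100: percentage by which the
    female mean frequency exceeds the male mean."""
    return (female_mean / male_mean - 1.0) * 100.0

# Illustrative means in Hz (not the paper's measurements)
assert abs(k_factor(1067.2, 1000.0) - 6.72) < 1e-9
assert k_factor(1000.0, 1000.0) == 0.0
```

Computed per formant (K0 through K4), this yields the values plotted in Figure 1.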
Fig. 1 Percentage of K-factors across age groups

Table 2 Percentage of gender identification across combinations of two frequencies

Age        F1,F2  F2,F4  F0,F1  F0,F2
7          66.67  70     51.67  66.67
8          61.67  55     56.7   65
9          55     60     50     56.7
10         70     68.33  65     70
11         53.33  56.67  50     53.33
12         68.33  68.33  71.67  75
All (360)  63.06  62.78  57.5   65
Table 2 illustrates the gender identification accuracy for combinations of two fundamental and formant frequencies in the different age groups. Based on the results of Table 1, different combinations of formant and fundamental frequencies were selected and the gender identification accuracy was calculated for each. According to Gelfer and Mikos [14], combining two sets of cues improves gender identification accuracy, and our results also showed this. The combination of F0 and F2 gave the best accuracy, averaging 65%, compared with the other combinations, and the 12-year-old children were the best age group, with an average of 75%. The F1, F2 combination, averaging 63.06%, ranked second, followed by F2, F4 and F0, F1 with averages of 62.78% and 57.5%. The combinations of 3, 4 and 5 frequencies are listed in Table 3 for the different age groups. The F0, F1, F2 combination had the best gender identification accuracy, with an overall average of 63.61%. The overall averages for the combinations F1, F2, F3; F1-F4; F0-F3; and F0-F4 were 61.94%, 58.33%, 61.67% and 58.33% correct identification, respectively.
Accuracies of gender identification reported in other languages using fundamental frequency and formants were 74% by Meditch [15], 66% by Karlsson [16] for children from three different European languages, and 71% by Sergeant [11]; most of these studies, however, focused on adults. Nevertheless, our result, especially for the 12-year-old children, was comparable with them and acceptable. Another approach to improving the accuracy of gender identification is to use normalized data; normalization would decrease the dispersion of the frequency values.
IV. CONCLUSION

This study investigated the first four formant frequencies and the fundamental frequency of the sustained vowel /a/ in normal Malaysian Malay children between 7 and 12 years old, and applied them to gender identification. The results showed that the second formant frequency was the best factor, and that the fundamental frequency also played a role in gender identification. The combination of F0 and F2 was also compared with the other possible combinations. The best accuracy belonged to F2 alone and to the combination of F0 and F2.
Table 3 Percentage of gender identification across combinations of three or more frequencies

Age        F0,F1,F2  F1,F2,F3  F1,F2,F3,F4  F0,F1,F2,F3  F0,F1,F2,F3,F4
7          68.33     63.33     61.67        63.33        61.67
8          63.33     60        56.67        60           56.67
9          55        55        55           55           55
10         71.67     76.67     68.33        76.67        68.33
11         53.33     55        53.33        55           53.33
12         68.33     70        68.33        71.67        68.33
All (360)  63.61     61.94     58.33        61.67        58.33
In general, previous studies proposed fundamental frequency and formant frequencies for gender identification. They reported higher values for females than for males of the same age in both formant and fundamental frequencies. The authors noted that the differences between the mean formant frequency values were small for this identification, and that the rate of gender identification in adults was much higher than in children. They also reported that F0 was a poor predictor of male/female [3, 4, 5, 6, 7, 11]. In contrast, Gelfer and Mikos [14] and Hasek et al. [8] noted that fundamental frequency played a main role in gender identification and that formant frequencies might also play a role.

ACKNOWLEDGMENT

We would like to thank the Ministry of Science, Technology and Innovation, Malaysia (MOSTI) for supporting this research under the eScience Fund.
REFERENCES
1. Peterson, G.E. and Barney, H.L. (1952) Control methods used in a study of the vowels. J. Acoust. Soc. Am. 24(2): 175-184.
2. Hillenbrand, J., Getty, L.A., Clark, M.J. and Wheeler, K. (1995) Acoustic characteristics of American English vowels. J. Acoust. Soc. Am. 97(5): 3099-3111.
3. Bennett, S. (1981) Vowel formant frequency characteristics of preadolescent males and females. J. Acoust. Soc. Am. 69: 231-238.
4. Bennett, S. (1983) A 3-year longitudinal study of school-aged children's fundamental frequencies. J. Speech Hear. Res. 26: 137-142.
5. Busby, P.A. & Plant, G.L. (1995) Formant frequency values of vowels produced by preadolescent boys and girls. J. Acoust. Soc. Am. 97(4): 2603-2606.
6. White, P. (1999) Formant frequency analysis of children's spoken and sung vowels using sweeping fundamental frequency production. J. Voice 13(4): 570-582.
7. Eguchi, S., Hirsh, U. (1969) Development of speech sounds in children. Acta Otolaryngologica 257(Suppl): 543.
8. Hasek, C.S., Singh, S., and Murry, T. (1980) Acoustic attributes of preadolescent voices. J. Acoust. Soc. Am. 68: 1262-1265.
9. GoldWave Inc (2009) GoldWave [Computer Program].
10. Boersma, P. and Weenink, D. (2009) "Praat: doing phonetics by computer" [Computer software and manual]. Retrieved May 11, 2009 from http://www.fon.hum.uva.nl/praat/.
11. Sergeant, D. & Welch, G.F. (2009) Gender differences in long-term average spectra of children's singing voices. J. Voice 23(3): 319-336.
12. Martland, P., Whiteside, S.P., Beet, S.W., and Baghai-Ravary, L. (1996) Analysis of ten vowel sounds across gender and regional/cultural accent. Spoken Language Conference 4: 2231-2234.
13. Fant, G. (1975) Non-uniform vowel normalization. Speech Transmission Laboratory Quarterly Progress and Status Report, Royal Institute of Technology, Stockholm, Sweden, 1-19.
14. Gelfer, M.P., and Mikos, V. (2005) The relative contributions of speaking fundamental frequency and formant frequencies to gender identification based on isolated vowels. J. Voice 19(4): 544-554.
15. Meditch, A. (1975) The development of sex specific speech pattern in young children. Anthropol Linguist. 17: 421-465. 16. Karlsson, I. (1987) Sex differentiation cues in voices of young children of different language. J. Acoust. Soc. Am. 81(1): S68.
Author: Dr. Hua-Nong TING Institute: University of Malaya Street: Jalan Pantai Baharu City: Kuala Lumpur Country: Malaysia Email: [email protected]
Improving Low Pass Filtered Speech Intelligibility Using Nonlinear Frequency Compression with Cepstrum and Spectral Envelope Transformation M.H. Mohd Zaman, M.M. Mustafa, and A. Hussain Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
Abstract— A person with high frequency hearing loss cannot perceive speech components at high frequencies. However, such a hearing impaired person still has significant residual hearing in the lower frequency region. We therefore propose a method that applies nonlinear frequency compression to the estimated spectral envelope before transposing the high frequency components to the lower frequency region. The compression method is expected to improve the speech intelligibility of low pass filtered speech, which simulates the high frequency hearing loss condition.

Keywords— Frequency compression, spectral envelope, low pass filtered speech, speech intelligibility, high frequency hearing loss.
I. INTRODUCTION

In many cases, hearing loss can be compensated through amplification. In certain cases, however, amplification does not help the hearing impaired person with high frequency hearing loss, who is unable to understand speech with high frequency components [2]. Despite this, such listeners still have some residual hearing capability in the lower frequency region [3, 6, 7, 9]. By manipulating the frequency spectrum of the speech, we can help the hearing impaired person improve the intelligibility of the high frequency components [7, 9]. At the same time, we need to preserve the lower frequency components, including the formants, that are important for word classification and discrimination [5, 8]. In this paper, we describe a method to nonlinearly compress the high frequency components of speech into the lower frequency region without degrading speech intelligibility. The frequency compression is performed by transforming the estimated spectral envelope using a nonlinear frequency compression method.
II. HIGH FREQUENCY HEARING LOSS

High frequency hearing loss is a type of hearing loss in which a person is unable to hear clearly speech containing high frequency components. For example, a person with this condition will have difficulty understanding the word "IS", since the unvoiced fricative /s/ sound contains high frequency components, which in turn affects speech intelligibility for the hearing impaired individual. However, in many cases these individuals still have significant residual hearing in the lower frequency region which can be exploited [7, 9]. In terms of frequency response, the residual hearing region mimics a low pass filter. By setting the cut-off frequency of a low pass filter equal to the maximum frequency of the residual hearing, the filter can be used to process the speech signal so as to simulate the sound heard by hearing impaired individuals who suffer from high frequency hearing loss. In this work, the residual hearing region has a maximum frequency of 2 kHz, so the cut-off frequency of the low pass filter is set to 2 kHz.

III. FREQUENCY COMPRESSION

In compressing the high frequency components, speech intelligibility must be preserved. One of the speech features that needs to be preserved is the formant. Formants are the resonance frequencies of the vocal tract that shape the speech source signal [5, 8]. The first two formants, F1 and F2, are the most important for understanding speech. For a voiced speech sound, the ratio of F1 to F2 is approximately constant even when the pitch of the speech differs. Nonlinear frequency compression compresses the frequency of the input signal while preserving speech features such as the formant ratio. The compression is performed by applying a nonlinear frequency compression function to the input signal frequency [3, 4, 6]; higher frequency components require more compression than lower frequency components. The frequency compression function is given by Eq. (1), where K is the frequency compression factor, a is the warping parameter, fin is the input frequency, fout is the output frequency and fs is the sampling frequency. The warping parameter a equals (K-1)/(K+1).
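The printed form of Eq. (1) did not survive typesetting here, so the sketch below substitutes a standard first-order all-pass (bilinear) frequency warping that uses the same warping parameter a = (K-1)/(K+1); the paper's exact compression curve may differ:

```python
import numpy as np

def compress_frequency(f_in, K, fs):
    """First-order all-pass (bilinear) frequency warping with a = (K-1)/(K+1).

    A standard nonlinear compression curve used here as a stand-in for the
    paper's Eq. (1): it maps 0 -> 0 and fs/2 -> fs/2, behaves like f_in/K
    at low frequencies, and stays monotonic (hence invertible).
    """
    a = (K - 1.0) / (K + 1.0)                  # warping parameter
    w = 2.0 * np.pi * np.asarray(f_in, dtype=float) / fs
    w_out = w - 2.0 * np.arctan(a * np.sin(w) / (1.0 + a * np.cos(w)))
    return w_out * fs / (2.0 * np.pi)

f = np.linspace(0.0, 8000.0, 9)                # fs = 16 kHz -> Nyquist 8 kHz
fc = compress_frequency(f, K=4.0, fs=16000.0)
assert fc[0] == 0.0 and abs(fc[-1] - 8000.0) < 1e-6
assert np.all(np.diff(fc) > 0) and np.all(fc[1:-1] < f[1:-1])
assert abs(compress_frequency(1000.0, 4.0, 16000.0) - 250.0) < 10.0
```

Note that this particular curve fixes the Nyquist frequency, whereas the paper's Figure 4 saturates near 2 kHz for K = 4, so the published Eq. (1) is evidently a different member of this family of warping functions.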
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 527–531, 2011. www.springerlink.com
IV. SPECTRAL ENVELOPE PRODUCTION AND TRANSFORMATION
Speech production can be analyzed in terms of source-filter processing [1, 10]. To produce speech, air pressure, the source of energy, is expelled from the lungs, resulting in air flow through the vocal tract before coming out of the mouth and nose [5, 8]. The vocal tract, which can be characterized by its formants or natural frequencies, acts as a time varying filter that shapes the air flow [10]. Based on this source-filter model, the spectral envelope, which is a smooth approximation of the magnitude spectrum shape, can be estimated [1]. Many methods, such as the cepstrum, linear predictive coding (LPC) and channel vocoder methods, can be used to estimate the spectral envelope [1, 10]. In this paper, the cepstrum method is used. Figure 1 shows the block diagram for estimating the spectral envelope using the cepstrum method. First, the magnitude spectrum is obtained by applying the Fast Fourier Transform (FFT) to the input signal frame. By taking the logarithm of the magnitude spectrum and then applying the Inverse Fast Fourier Transform (IFFT), the real cepstrum is obtained. A low pass window is then applied to the cepstrum. Then the FFT is applied to the windowed cepstrum to obtain the spectral envelope.
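A minimal sketch of Figure 1's cepstrum pipeline; the number of retained cepstral coefficients is an assumed setting, not a value from the paper:

```python
import numpy as np

def spectral_envelope(frame, n_ceps=30):
    """Estimate a spectral envelope by cepstral liftering (Figure 1's pipeline).

    frame  : windowed speech samples
    n_ceps : number of low-quefrency cepstral coefficients kept (assumed value)
    """
    n = len(frame)
    log_mag = np.log(np.abs(np.fft.fft(frame)) + 1e-12)  # log magnitude spectrum
    cep = np.fft.ifft(log_mag).real                      # real cepstrum
    lifter = np.zeros(n)
    lifter[:n_ceps] = 1.0                                # low pass cepstral window
    lifter[-(n_ceps - 1):] = 1.0                         # keep the symmetric part
    env = np.exp(np.fft.fft(cep * lifter).real)          # back to the spectral domain
    return env

# A synthetic harmonic frame: the envelope should be smooth across harmonics
fs, f0 = 16000, 200
t = np.arange(2048) / fs
frame = np.hanning(2048) * sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 9))
env = spectral_envelope(frame)
assert env.shape == (2048,) and np.all(env > 0)
```

Keeping only the low-quefrency coefficients is what discards the harmonic fine structure and leaves the smooth envelope.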
Fig. 1 Spectral envelope estimation using the cepstrum method

In this paper, the spectral envelope of the original speech signal is transformed using the nonlinear frequency compression method. Typically, spectral envelope transformation involves removing the original spectral envelope and then applying a new one. Referring to Figure 2, let the original and transformed spectral envelopes be H1(f) and H2(f), respectively. The transformation is performed by dividing the original spectrum by H1(f) and multiplying it by H2(f). Since the cepstrum is in the logarithmic domain, the transformation can be achieved by subtracting the logarithm of H1(f) from the logarithm of H2(f). The exponential of the subtraction result gives a spectral transformation filter, which is used to compress the frequency content of the input spectrum.

Fig. 2 Removing the original spectral envelope and applying the transformed spectral envelope

V. MATERIALS AND METHODS

The block diagram of the nonlinear frequency compression method used in this work is shown in Figure 3.

Fig. 3 Proposed nonlinear frequency compression system block diagram
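The envelope transformation described in Section IV, dividing the spectrum by H1(f) and multiplying by H2(f), or equivalently exponentiating the difference of the log envelopes, can be sketched with toy arrays (not real speech data):

```python
import numpy as np

def transformation_filter(h1, h2):
    """Spectral transformation filter H2(f)/H1(f).

    In the log (cepstral) domain this is exp(log H2 - log H1); applying it to
    the input spectrum removes the original envelope h1 and imposes h2.
    """
    return np.exp(np.log(h2) - np.log(h1))

# Applying the filter to a magnitude spectrum X with envelope h1 yields a
# spectrum whose envelope is h2, while the fine structure X/h1 is preserved.
h1 = np.array([1.0, 2.0, 4.0, 2.0])              # original envelope
h2 = np.array([1.0, 1.5, 1.5, 1.0])              # transformed envelope
X = h1 * np.array([0.9, 1.1, 1.0, 0.8])          # envelope times fine structure
Y = X * transformation_filter(h1, h2)
assert np.allclose(Y / h2, X / h1)
```

The final assertion checks the key property: the ratio of spectrum to envelope (the harmonic fine structure) is unchanged by the transformation.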
The sampled speech used in this work was downloaded from the AT&T Natural Voices Text-to-Speech Demo program at http://www2.research.att.com/~ttsweb/tts/demo.php. The input signal is an audio wave file of a female speaker uttering the English sentence "IT'S SPEECH", sampled at a frequency of 16 kHz and processed in MATLAB R2008a. This sentence was selected because it contains both voiced and unvoiced speech segments: in the voiced segments most of the energy lies in the lower frequency region, while in the unvoiced segments the high frequency region carries more energy. The MATLAB software was used throughout the experimental session.

A. Simulating High Frequency Hearing Loss with Low Pass Filtering

To simulate the sound that can be heard by high frequency hearing loss individuals, the acquired input signal
was filtered with a low pass filter. The cut-off frequency of the low pass filter was set equal to the maximum frequency of the residual hearing region of the hearing impaired person. Next, the input signal was segmented into frames of 2048 samples with a hop size of 512 samples, each subjected to a Hamming window. The steps to generate the spectral envelope are depicted in Figure 1. Using Eq. (1), the nonlinear compression function is applied to the frequency vector of the spectral envelope. By setting the required frequency compression factor, a new compressed frequency vector is produced with a maximum frequency of 2 kHz, equal to the cut-off frequency of the low pass filter. Figure 4 shows various settings of the frequency compression factor and the corresponding maximum output frequency.
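The framing and overlap-add reconstruction with the stated parameters (2048-sample frames, 512-sample hop, Hamming window) can be sketched as follows; dividing by the accumulated window compensates the analysis window so that unprocessed frames reconstruct the input:

```python
import numpy as np

FRAME, HOP = 2048, 512                      # frame and hop sizes from the paper

def frames(x, frame=FRAME, hop=HOP):
    """Slice x into Hamming-windowed frames (zero-padded at the tail)."""
    win = np.hamming(frame)
    n = 1 + max(0, (len(x) - frame + hop - 1) // hop)
    out = np.zeros((n, frame))
    for i in range(n):
        seg = x[i * hop: i * hop + frame]
        out[i, :len(seg)] = seg
    return out * win

def overlap_add(fr, hop=HOP):
    """Reconstruct a signal from (processed) windowed frames, dividing out
    the summed window so that unmodified frames reproduce the input."""
    n, frame = fr.shape
    win = np.hamming(frame)
    y = np.zeros(frame + (n - 1) * hop)
    wsum = np.zeros_like(y)
    for i in range(n):
        y[i * hop: i * hop + frame] += fr[i]
        wsum[i * hop: i * hop + frame] += win
    return y / np.maximum(wsum, 1e-12)

x = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000.0)   # 1 s test tone
y = overlap_add(frames(x))[:len(x)]
assert np.allclose(y[FRAME:-FRAME], x[FRAME:-FRAME], atol=1e-8)
```

In the full system, each windowed frame would be spectrally transformed between the analysis and overlap-add stages.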
Fig. 4 Maximum output frequency for frequency compression factor settings from 1 to 25, showing that the maximum output frequency becomes constant as the frequency compression factor increases

B. Spectral Envelope Transformation and Output Signal Generation

Following the nonlinear compression, the spectral envelope is interpolated on the compressed frequency vector. Since the original frequencies are compressed into the lower frequency region, some frequency bins remain above the cut-off frequency of the low pass filter; their magnitudes are set equal to the last magnitude of the original spectrum so as to generate a symmetric transformed spectral envelope. Next, the computed spectral transformation filter is applied to the original input spectrum to generate the output spectrum, and the output signal frame is obtained by taking the IFFT of the output spectrum. Finally, by applying the overlap-add method to the output signal frames, the output signal is produced.

VI. RESULTS

The nonlinear frequency compression by spectral envelope transformation improves the speech intelligibility of the low pass filtered speech. To investigate the improvement, we compared the low pass filtered input signal and the frequency compressed output signal. The acquired speech sample consists of low and high frequency components, as depicted by the spectrogram in the lower part of Figure 5. We applied a low pass filter with a cut-off frequency of 2 kHz to the input signal to simulate the high frequency hearing loss condition (Figure 6). After generating the spectral envelope of the input signal spectrum, we transformed the envelope using the nonlinear frequency compression function, as described earlier, to obtain the output frequency mapping shown in Figure 8. The maximum output frequency was set equal to the cut-off frequency of the low pass filter by setting the frequency compression factor to 4. Figure 9 shows the spectral envelope and spectrum of the input and output signals. After processing all the input signal frames, the high frequency components were compressed nonlinearly into the lower frequency region, as shown in Figure 7; this is indicated by the darker region at 2 kHz in the spectrogram of Figure 7 compared with Figure 6. A listening test confirmed that the frequency compressed output signal has improved speech intelligibility compared with the low pass filtered input signal.
Fig. 5 The uttered speech signal for the sentence “IT’S SPEECH” and its spectrogram that shows both the low and high frequency components
Fig. 6 Low pass filtered input signal and its spectrogram. Magnitudes above the cut-off frequency of the low pass filter, which is 2 kHz, are attenuated. This simulates the sound heard by the hearing impaired person

Fig. 7 Low pass filtered, frequency compressed output signal and its spectrogram. The high frequency components at around 0.4 and 0.9 s are compressed to the lower frequency region below 2 kHz

Fig. 8 Input and output frequency of the nonlinear frequency compression with a frequency compression factor of 4, producing the maximum output frequency of 2 kHz

Fig. 9 Example of an original and transformed spectral envelope of a frame showing (a) the higher frequency region compressed to the lower frequency region, (b) the input magnitude spectrum and (c) the output magnitude spectrum

VII. DISCUSSION
In this work, we presented a method that uses nonlinear frequency compression to improve speech intelligibility for persons with high frequency hearing loss. After generating the spectral envelope using the cepstrum method, we applied the nonlinear frequency compression function to the estimated spectral envelope to compute the new transformed spectral envelope. We used the transformed spectral envelope to compress the frequencies to fit the residual hearing of the hearing impaired person. We also conducted an informal listening test to evaluate the speech intelligibility of the low pass filtered and the frequency compressed output signals. The listening test showed that the output speech was improved compared to the low pass filtered input signal, confirming that the speech intelligibility has been improved.
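The cepstrum-based envelope estimation mentioned above can be sketched as low-quefrency liftering; this is the generic textbook recipe, not necessarily the authors' exact implementation (the lifter order is an assumed value).

```python
import numpy as np

def cepstral_envelope(frame, n_lifter=30):
    """Estimate a log-magnitude spectral envelope by liftering.

    frame    : one windowed speech frame
    n_lifter : number of low-quefrency coefficients kept (assumed)
    """
    log_mag = np.log(np.abs(np.fft.fft(frame)) + 1e-12)   # avoid log(0)
    cepstrum = np.fft.ifft(log_mag).real
    lifter = np.zeros_like(cepstrum)
    lifter[:n_lifter] = 1.0                # keep low quefrencies ...
    lifter[-(n_lifter - 1):] = 1.0         # ... and their symmetric part
    return np.fft.fft(cepstrum * lifter).real   # smoothed log spectrum
```

Keeping only the low-quefrency part of the cepstrum removes the fine harmonic structure and leaves the smooth envelope that the compression function is applied to.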
VIII. CONCLUSION
In conclusion, the proposed method is able to improve speech intelligibility for hearing impaired persons with high frequency hearing loss.
Author: Mohd Hairi Mohd Zaman
Institute: Dept. of Electrical, Electronic & Systems Eng., Fac. of Eng. & Built Environment, Universiti Kebangsaan Malaysia
Street: 43600 Bangi
City: Selangor
Country: Malaysia
Email: [email protected]
Long-Term Heart Rate Variability Assessment M. Penhaker, T. Stula, and M. Augustynek VSB – Technical University of Ostrava, FEECS/Department of Measurement and Control, Ostrava, Czech Republic
Abstract— Heart rate variability is a good diagnostic method, and not only in cardiology. This kind of investigation is performed in long-term ECG analysis of the Holter type of examination. This work deals with the long-term processing and evaluation of qualitative changes in heart rate. Two detection methods for different kinds of signal disturbance were designed and implemented in a standalone application. The application detects the significant parts of a one-lead ECG record. Evaluation and testing were done on signals from clinical Physionet database records. The results of the evaluation are displayed as numerical values of detected R-R peaks and are also visualized in several inspection charts. A real application test was done on a record from a children's hospital clinic. The test records represent causes of sudden death complexes in newborns, with successful predictive indication.
B. Heart Tachogram
• Set of consecutive RR intervals (immediate value of heart rate)

Classification methods of HRV:
• Time domain methods (statistical and geometrical methods)
• Frequency domain methods (spectral image of RR intervals)
• Testing of RR interval variability
• Spectral analysis at spontaneous breathing
Keywords— Heart Rate, ECG, Filtration, Detector, R-R Interval, Long-Term.
I. INTRODUCTION
Heart rate variability belongs to the fundamental and most frequently evaluated physiological data in medicine. It serves as a supporting diagnostic method in cardiology and is a very good index of the activity and performance of the heart. If the heart rate drops below a definite level, we speak of bradycardia; on the other side, we speak of tachycardia. If irregularity occurs in the heart activity, we speak of arrhythmia. All these changes and irregularities indicate that some error has occurred in the function of the heart system.
Fig. 1 Spectral Image of RR Intervals – PSD Curve. MWSA – Mayer's wave (0–0.1 Hz) matches blood pressure changes; RSA – respiration wave (0.25–0.3 Hz) matches breathing frequency
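The two spectral bands named in Fig. 1 can be quantified by integrating the PSD over their frequency ranges; a sketch (the paper does not state its resampling step, so a uniform 4 Hz tachogram grid and a plain periodogram are assumed here):

```python
import numpy as np

def band_power(psd, freqs, f_lo, f_hi):
    """Sum a PSD over one band (e.g. MWSA 0-0.1 Hz, RSA 0.25-0.3 Hz)."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(np.sum(psd[mask]))

fs = 4.0                                   # assumed uniform resampling rate (Hz)
t = np.arange(0, 64, 1 / fs)
tachogram = 75 + 3 * np.sin(2 * np.pi * 0.27 * t)   # synthetic RSA-like wave
spec = np.fft.rfft(tachogram - tachogram.mean())
freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(spec) ** 2
rsa = band_power(psd, freqs, 0.25, 0.30)   # respiration band
mwsa = band_power(psd, freqs, 0.0, 0.10)   # Mayer-wave band
```

For the synthetic 0.27 Hz oscillation above, nearly all of the power falls into the RSA band, as expected.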
II. BASIC THEORY
A. Input Data
• Instantaneous value of heart rate (BPM)
• ECG, plethysmograph, phonocardiograph → ECG
• Heart rate is determined by the existence of the R wave of the QRS complex
• RR interval – time slot between two neighbouring R peaks
• Calculation of immediate and average heart rate

III. SOLVING DESIGN
The solving design of this work consists of four fundamental points.

A. Input Data Transformation
The heart rate can be evaluated from many biological signals that the heart activity directly determines or accompanies: for example, the electrical signal of the electrocardiograph (ECG) – the electrocardiogram, the acoustic signal of the phonocardiograph (FCG) – the phonocardiogram, and the plethysmograph. Most often, plotting comes from long-term 24-hour or short-term 5-minute electrocardiogram records. In this
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 532–535, 2011. www.springerlink.com
work, the determination of the pulse frequency is limited to short-term ECG records. To evaluate the designed procedures and methods, test ECG signals in digital form were used, downloaded from the Physionet medical database of clinical electrocardiography records (http://www.physionet.org/physiobank/database/). A databank of many kinds of biological signals is situated there. The advantage of these records is that they are already exactly marked according to the type of dysfunction and heart disease.
B. ECG Signal Filtration
Two types of ECG filter were used: the first was a smoothing filter and the second a frequency-selective FIR filter. In this case it is very important to choose a suitable frequency range with respect to cardiac artifact occurrence, from 1 to 46 Hz; this band is taken as the useful frequency band of the ECG signal.

Fig. 2 Amplitude Frequency Response

C. R-Wave Detection
For the analysis of the immediate and average heart rate it is very important to correctly detect the peaks of the QRS complex, the so-called R waves, and thereby determine the time distance between two successive R waves.

Fig. 3 Electrocardiogram signal (2.5 seconds)

For this purpose a dedicated method for the detection of these QRS-complex peaks was created and used.

D. Directed 1st Derivation
The input ECG signal is first differentiated and then squared; squaring is necessary because only positive values are needed. After squaring, the signal goes through a block that calculates a comparing level (using the minimal length of the QRS complex and a comparing-level multiplier, with global and adaptive averaging). The most important block of all is the final algorithm of R wave detection; its function is displayed in Fig. 7.

Fig. 4 Block Diagram of the R-Detector (signal derivation → squaring → comparing level calculation → algorithm of R wave detection)

Fig. 5 A – ECG Signal, B – ECG After 1st Derivation, C – ECG After Directed 1st Derivation

Fig. 6 ECG Signal with R-peaks Displayed
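A minimal sketch of the derivative-squaring detector described in section D; the comparing-level multiplier and the refractory period are assumed values, not the authors' exact parameters.

```python
import numpy as np

def detect_r_peaks(ecg, fs, k=4.0, refractory_s=0.2):
    """Derivative-squaring R-peak detector (illustrative sketch).

    ecg          : one-lead ECG samples
    fs           : sampling frequency (Hz)
    k            : comparing-level multiplier (assumed value)
    refractory_s : minimal spacing between R peaks (assumed value)
    """
    d = np.diff(ecg)                # 1st derivation
    s = d * d                       # squaring -> positive values only
    level = k * np.mean(s)          # comparing level by global averaging
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i, v in enumerate(s):
        if v > level and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks
```

An adaptive comparing level, as in the paper's block diagram, would recompute `level` over a sliding window instead of globally.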
Fig. 7 Algorithm of R Wave Detection

E. Directed Signal
The second designed R-detector is very similar to the first one. It is useful for ECG signals with a drifting zero isoline.

Fig. 8 ECG Signal with Drifting Zero Isoline

The main difference in comparison with the first detector is in the signal preprocessing. The input electrocardiogram signal first enters a block where its average value is calculated by the global averaging method. This average value matches the cardiac isoline curve.

Fig. 9 Block Diagram of the R-Detector with Isoline Calculation (isoline calculation → subtraction → squaring → comparing level calculation by adaptive averaging → algorithm of R-wave detection)

For further processing it is suitable to filter this curve off, which is done by simple subtraction from the original curve; the result is a record with a constant isoline. R-wave detection is done by the same algorithm as in the previous method.

F. Spectral Image of RR Intervals
Heart rate variability analysis concerns the transformation of the heart tachogram curve into the spectral domain.

Fig. 10 Heart Tachogram

The thinner curve with peaks shows the immediate heart rate and the thicker curve shows the average heart rate. The set of consecutive RR intervals is first transferred from the time domain to the frequency domain with the help of formula (1):

u_1(\omega) = \sum_{n=0}^{N-1} s(n) \exp\left(\frac{-j\omega n}{N}\right)   (1)

where u_1(\omega) is the spectrum of the heart tachogram signal and N is the number of samples of the input signal. Then the signal spectrum is recalculated to the amplitude spectrum U_1(\omega) with the help of formula (2):

U_1(\omega) = |u_1(\omega)|   (2)

The power spectral density of the RR intervals is calculated by squaring the amplitude spectrum (3):

PSD = |u_1(\omega)|^2   (3)

IV. MAIN WINDOW OF THE CREATED PROGRAMME
The main window consists of four basic small windows. The first one from the top displays the loaded and filtered ECG signal with points where R-peaks were detected.

Fig. 11 Main Programme Environment
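Formulas (1)-(3) map directly onto an FFT call; the RR series below is made-up illustrative data.

```python
import numpy as np

def rr_psd(rr_intervals):
    """Spectrum, amplitude spectrum and PSD of an RR-interval series,
    following formulas (1)-(3): u1 = DFT of the tachogram, U1 = |u1|,
    PSD = |u1|**2."""
    s = np.asarray(rr_intervals, dtype=float)
    u1 = np.fft.fft(s)     # formula (1)
    U1 = np.abs(u1)        # formula (2)
    psd = U1 ** 2          # formula (3)
    return u1, U1, psd

# Illustrative RR series (seconds) with a small oscillation every 8 beats
rr = 0.8 + 0.05 * np.sin(2 * np.pi * np.arange(64) / 8)
u1, U1, psd = rr_psd(rr)
```

For this series the PSD shows its DC term at bin 0 and the oscillation's power concentrated at bin 8 (one cycle per 8 samples over 64 samples).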
Fig. 12 ECG Curve

The second window shows the heart tachogram. Several curves are displayed here: the immediate heart rate, the average heart rate, the mean value, and the physiological and morbid bounds of the heart rate.

Fig. 13 Heart Tachogram Window

The third window shows the power spectral density curve of the RR-interval variability calculated from the heart tachogram, and the last window displays a histogram of the heart rate.

Fig. 14 Heart Rate Histogram

V. CONCLUSION
The results of this work may contribute to the recognition of the causes of sudden death complexes in newborns and small children. At the same time, the results shall serve doctors as a tool for better understanding of the given problem and for the correct determination of the diagnosis of patients with heart system disorders, leading to treatment.
• Short-term heart rhythm changes are evaluated from the ECG
• Detector No. 1 has a higher detection rate than detector No. 2
• Spectral image of RR intervals from 0–0.5 Hz
• Selection of physiological and morbid heart rate changes
• Developed for the needs of patient therapy in the Teaching Hospital with Policlinic, Ostrava-Poruba
• Serves doctors as a tool for better understanding of the causes of sudden death complexes in newborns and small children

ACKNOWLEDGMENT
The work and the contribution were supported by the following projects: Ministry of Education of the Czech Republic Project 1M0567 "Centre of Applied Electronics", student grant agency SV 4501141 "Biomedical Engineering Systems VII", and TACR TA01010632 "SCADA system for control and measurement of processes in real time". The work was also supported by project MSM6198910027 Consuming Computer Simulation and Optimization.
Author: Martin Augustynek
Institute: VSB-TUO, FEECS, DMC
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
Neural Network Classifier for Hand Motion Detection from EMG Signal Md. R. Ahsan, M.I. Ibrahimy, and O.O. Khalifa Department of Electrical & Computer Engineering, Kulliyyah of Engineering, International Islamic University Malaysia, Kuala Lumpur
Abstract— EMG signal based research is ongoing for the development of simple, robust, user friendly, and efficient interfacing devices/systems for the disabled. The advancement can be observed in the areas of robotic devices, prosthetic limbs, exoskeletons, wearable computers, I/O for virtual reality games, and physical exercise equipment. Additionally, electromyography (EMG) signals can also be applied in the field of human computer interaction (HCI) systems. This paper presents the detection of different predefined hand motions (left, right, up and down) using an artificial neural network (ANN). A backpropagation (BP) network with the Levenberg-Marquardt training algorithm has been utilized for the classification of EMG signals. Conventional and effective time and time-frequency based feature sets are utilized for the training of the neural network. The obtained results show that the designed network is able to recognize hand movements with a satisfactory average classification efficiency of 88.4%. Furthermore, when the trained network was tested on an unknown data set, it successfully identified the movement types.

Keywords— Electromyography, Human Computer Interaction, Artificial Neural Network, Discrete Wavelet Transform.
I. INTRODUCTION
In the past three decades, the development of EMG based control has gained focus in the sense that it can increase the social acceptance of disabled and aged people by improving their quality of life. In fact, an EMG based input device is a natural means of HCI because the electrical activity induced by human arm muscle movements can be interpreted and transformed into a computer's control commands. However, the most difficult part of developing myoelectric control based interfaces is the pattern recognition of EMG signals. This is because of large variations in EMG signals, which have different signatures depending on age, muscle activity, motor unit paths, skin-fat layer, and gesture style. Compared to other biosignals, the EMG signal contains complicated types of noise caused by inherent equipment and environmental noise, electromagnetic radiation, motion artifacts, and the interaction of different tissues. Sometimes it is difficult to extract useful features from the residual muscles of an amputee or disabled person. This difficulty becomes more critical when resolving a multiclass classification problem [1].
For the purpose of classification, the EMG pattern signatures are extracted for each movement and then a proper discrimination method is applied to classify the EMG signals based on the features. However, it is difficult to have a precise structural or mathematical model of EMG signals that relates the measured signals to a motion command. There are many pattern recognition techniques available to discriminate the functionality from extracted features. It has been found that most researchers have used ANNs for the processing of biosignals [2]. ANNs are particularly useful for complex pattern recognition and classification tasks. The capability of learning from examples, the ability to reproduce arbitrary non-linear functions of input, and the highly parallel and regular structure of ANNs make them especially suitable for pattern classification tasks [3]. In this regard, an integral absolute value (IAV) feature based feed-forward ANN was used by Hiraiwa et al. [4], an independent component analysis (ICA) based ANN by Naik et al. [5], and different multi-layer perceptron (MLP) based neural networks by Englehart et al. [6] and Kelly et al. [7]; prominent researchers Hudgins et al. used Hopfield and ART networks, and later a finite impulse response neural network (FIRNN) [8]. Most of the ANN based research work has been carried out with an MLP containing one hidden layer and the back-propagation (BP) algorithm for training. The varying accuracy of all the above neural network based classification techniques may be due to the quality of the collected signal and the signal conditioning, along with the physiological characteristics of the subject. This paper describes the classification of EMG signals using an ANN. Different features were calculated from the EMG signals and applied to a Levenberg-Marquardt algorithm based neural network for hand movement recognition. An acceptable classification success rate has been achieved and is reported in the results and discussion section.
II. METHODOLOGY
Signal Acquisition: The device used for the acquisition of EMG signals was a Biopac MP100 data acquisition system (EMG-100C amplifier) along with the acquisition software AcqKnowledge 3.9.1. The EMG signals were collected from three able-bodied persons aged between 27 and 32. The sampling frequency was 1000 Hz and the gain for the acquired
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 536–541, 2011. www.springerlink.com
signal was 1000. The optimal position of the electrode placement was determined by performing several trials of the acquisition experiment. The single channel differential electrodes were placed on the brachioradialis and flexor carpi ulnaris muscles and the reference electrode on the wrist. The subjects were requested to perform some predefined voluntary movements of the hand in different directions (left, right, up, down), and the EMG signals were then collected from the muscles. It was found that the average time required to perform each movement was around 500 milliseconds. Each EMG signal set was collected for 70 seconds, including 5 seconds of rest at the start and end of the signal acquisition. The EMG signals were stored on a Windows XP based personal computer (PC) for post analysis and processing.
Preprocessing of EMG Signal: Because of the very sensitive nature of EMG signals, they can be easily contaminated by different kinds of noise sources and artifacts, which contribute to a very poor classification result. Such noise can be eliminated by typical filtering procedures such as a band-pass filter or band-stop filter, or by the use of good quality equipment with proper electrode placement, whereas it is difficult to remove the effect of other noises/artifacts and random noise interference lying within the dominant frequency range of EMG [9]. A sixth-order Butterworth band-pass filter with cut-offs of 20-500 Hz was used to eliminate the first three types of noise. The 50 Hz power line noise was removed by a notch filter with a 3 dB bandwidth of 49-51 Hz, since the dominant frequency was found to be in the range of 70-300 Hz. The frequency domain presentation of the filter input and output is shown in Fig. 1. Afterwards, the EMG signals were denoised using a wavelet method. Wavelet techniques can successfully localize both time and frequency components, and signals are processed at various scales/resolutions. Moreover, wavelet processing provides good frequency resolution at high frequencies.
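The band-pass and notch stages described above can be sketched with SciPy; the filter order and band edges follow the text, while the notch quality factor is an assumed value (the paper states a 3 dB bandwidth of 49-51 Hz), and the band-pass upper edge is pulled down to 450 Hz because 500 Hz is exactly the Nyquist frequency at fs = 1000 Hz.

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfiltfilt, filtfilt

FS = 1000.0  # sampling frequency used in the paper (Hz)

def preprocess_emg(x, fs=FS):
    """Band-pass plus 50 Hz notch filtering of a raw EMG record."""
    sos = butter(6, [20.0, 450.0], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)                  # zero-phase sixth-order band-pass
    b, a = iirnotch(50.0, Q=30.0, fs=fs)     # 50 Hz power-line notch (Q assumed)
    return filtfilt(b, a, x)
```

Zero-phase filtering (`sosfiltfilt`/`filtfilt`) is used here so the segment boundaries used later for feature extraction are not shifted by filter delay.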
As a result, the noise components in the desired signal can be easily isolated while preserving important high-frequency transients [10]. The EMG signal for the four different hand movements (left, right, up, down) was segmented, and each segment consists of 500 data points. For the discrete wavelet transform (DWT), a four-level decomposition of the EMG signal was used. The Daubechies (db2) mother wavelet function was selected, with noise reduction applied to the detail wavelet coefficients. Later, the features were extracted for each type of hand movement from the denoised EMG signals.
Feature Extraction: For efficient classification of EMG signals, it is important to select proper features. The success of a pattern classification system depends completely on the choice of features used to represent the raw EMG signals [8]. It is necessary to use multiple feature parameters for EMG signal classification, since it is quite difficult to extract
a feature parameter that perfectly reflects the unique mapping of the measured EMG signals to a motion command. Many researchers have selected time domain, frequency domain, time-frequency domain and time-scale domain features for the classification of EMG signals. Various types of features have been extracted by different researchers, such as mean absolute value (MAV), root mean square (RMS), auto-regression (AR) coefficients, variance (VAR), standard deviation (SD), zero crossing (ZC), waveform length (WL), Willison amplitude (WA), mean absolute value slope (MAVS), mean frequency (MNF), median frequency (MDF), slope sign change (SSC), cepstrum coefficients (CC), fast Fourier transform (FFT) coefficients, short time Fourier transform (STFT) coefficients, integrated EMG (IEMG), wavelet transform (WT) coefficients, and wavelet packet transform (WPT) coefficients [8], [11], [12]. In this work, seven statistical time and time-frequency based features, namely MAV, RMS, VAR, SD, ZC, SSC and WL, are used.
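The definitions of the seven selected features are standard; a sketch (ZC and SSC are counted here without the small amplitude threshold that some authors add):

```python
import numpy as np

def emg_features(x):
    """Seven time / time-frequency domain features used in this work."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    return {
        "MAV": np.mean(np.abs(x)),                # mean absolute value
        "RMS": np.sqrt(np.mean(x ** 2)),          # root mean square
        "VAR": np.var(x),                         # variance
        "SD": np.std(x),                          # standard deviation
        "WL": np.sum(np.abs(d)),                  # waveform length
        "ZC": int(np.sum(x[:-1] * x[1:] < 0)),    # zero crossings
        "SSC": int(np.sum(d[:-1] * d[1:] < 0)),   # slope sign changes
    }
```

Applied to each 500-point movement segment, this yields the seven-element input vectors that are then normalized to [-1, 1] before training.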
Fig. 1 Removal of 50 Hz electrical noise

Neural Network Architecture for EMG Classification: BP is based on the generalized form of the Widrow-Hoff learning rule applied to a multiple-layer network with nonlinear differentiable transfer functions. Here, the input vectors and corresponding target vectors are used to train the neural network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way based on certain criteria. The designed network consists of three layers: an input layer, a tan-sigmoid hidden layer and a linear output layer. Each layer except the input layer has a weight matrix W, a bias vector b and an output vector a. The weight matrices connected to the inputs are called input weights (IW) and the weight matrices coming from the hidden layer outputs are called layer weights (LW). Additionally, superscripts are used to denote the source (second index) and the destination (first index) of the various weights and other elements of the network.

Fig. 2 Architecture of the Artificial Neural Network: a1 = tansig(IW1,1 p1 + b1), a2 = y = purelin(LW2,1 a1 + b2), with 7 input neurons, 10 hidden neurons and 4 output neurons

The feedforward BP network architecture is shown in Fig. 2, with seven neurons in the input layer, 10 tan-sigmoid neurons in the hidden layer and four linear neurons in the output layer. Since there is no specific way to determine the number of hidden neurons, it was found from the best classification result obtained by trying different numbers of neurons. The predefined features were extracted for the four types of hand movements from five different EMG signals. 204 sets of input feature vectors from four EMG signals and their corresponding target vectors were fed to the network for training; the feature vectors from the remaining EMG signal were used for testing the performance of the network. The input feature vectors were normalized before feeding, for efficient training of the neural network. The sample input vectors and corresponding target vectors for the four movements are shown in Table 1. The Levenberg-Marquardt (trainlm) algorithm was utilized for BP training; it is the fastest method for training moderate-sized feedforward neural networks and is based on numerical optimization techniques. The network was also generalized to avoid overfitting. This was done by dividing the training input data: 70% for training, 15% for validation and 15% for testing. Furthermore, the number of data points in the training set was more than sufficient to estimate the total number of parameters in the network. Two early stopping conditions were used to improve the generalization of the network: training stops if the total mean squared error <= 0.001 or when it reaches 1000 epochs. The weights and biases of the input layer and hidden layer were saved after each training session. When the simulation results are not satisfactory, the network is trained again with the last saved weight and bias values. This was done to improve the network performance and to reduce the training time. Another type of BP algorithm, scaled conjugate gradient (trainscg), was also used and is reported in the results section.

Table 1 Sample feature sets for different movements with target vectors of the corresponding movement

Feature (extracted from EMG signal)    Left        Right       Up          Down
MAV                                    0.11862     0.10395     0.10554     0.07303
RMS                                    0.16918     0.14866     0.14803     0.09834
VAR                                    0.02862     0.02210     0.02191     0.00967
SD                                     0.16910     0.14861     0.14811     0.09841
WL                                     72.66327    65.94147    62.51984    45.28778
SSC                                    234         214         220         230
ZC                                     207         204         191         219

After normalization in the range [-1, 1]:
MAV                                    0.62547     0.36995     0.39767     -0.16858
RMS                                    0.57261     0.33743     0.33016     -0.23938
VAR                                    0.32669     0.01320     0.00415     -0.58431
SD                                     0.57222     0.33665     0.33096     -0.24050
WL                                     0.55323     0.37111     0.27841     -0.18846
SSC                                    0.10811     -0.43243    -0.27027    0.00000
ZC                                     -0.03226    -0.12903    -0.54839    0.35484

Target vector                          1 0 0 0     0 1 0 0     0 0 1 0     0 0 0 1
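The 7-10-4 architecture with its MATLAB-style transfer functions (tansig, purelin) translates directly to NumPy; the weights below are random placeholders standing in for trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the text: 7 inputs, 10 tan-sigmoid hidden, 4 linear outputs
IW = rng.standard_normal((10, 7))   # input weights  IW{1,1}
b1 = rng.standard_normal(10)        # hidden biases
LW = rng.standard_normal((4, 10))   # layer weights  LW{2,1}
b2 = rng.standard_normal(4)         # output biases

def forward(p):
    """a1 = tansig(IW p + b1); a2 = y = purelin(LW a1 + b2)."""
    a1 = np.tanh(IW @ p + b1)       # tansig is tanh
    return LW @ a1 + b2             # purelin is the identity

p = rng.standard_normal(7)          # one normalized feature vector
y = forward(p)
movement = ["left", "right", "up", "down"][int(np.argmax(y))]
```

Training these weights with Levenberg-Marquardt, as in the paper, would replace the random matrices above; the forward pass itself is unchanged.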
III. RESULTS AND DISCUSSION
The summary of the classification performance using the back-propagation neural network is shown in Table 2. The network was trained with 204 sets of data for the different movements; each set consists of an input feature vector obtained from a specific type of hand movement and the corresponding output vector. Different numbers of hidden neurons were selected for both types of BP training, and their classification efficiencies are reported. It was found that the Levenberg-Marquardt algorithm based back-propagation neural network with 10 hidden neurons yields the best classification rate with less processing time; this network outperforms the others regarding the number of iterations required, time elapsed and classification rate. The best validation performance was achieved at 10 epochs and the training stopped at 16 epochs, as shown in Fig. 3. The number representations of the classes are: 1 for left, 2 for right, 3 for up and 4 for down. The average best overall classification rate during training was found to be 88.4%. The detailed performance of the network during
training, validation, testing and overall performance during a single trial are shown by the confusion matrices for the different classes in Fig. 4. The trained network has also been tested on completely unknown EMG signals: the feature vectors were fed to the network without the corresponding targets. The expected output was 1 at the index position of the specific type of movement. The sample input feature vectors from the test EMG signal and the corresponding outputs are shown in Table 3, where the classified movements are presented in bold numbers, which are the largest outputs and closest to 1. The test output shows that the trained network successfully classified all the movement types.
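Reading the winning class off an output vector is a simple argmax over the four network outputs, with the class order (left, right, up, down) following the target-vector encoding described above.

```python
import numpy as np

CLASSES = ["left", "right", "up", "down"]   # class order 1-4 from the text

def decode(outputs):
    """Classify a movement as the output index closest to 1 (the largest)."""
    return CLASSES[int(np.argmax(outputs))]

# Network outputs for the first test vector (p1) in Table 3
print(decode([0.0036, 0.0085, 0.9928, 0.0548]))   # prints "up"
```

This is the same decision rule implied by the bold numbers in Table 3.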
Fig. 3 The early stopping criteria

Fig. 4 Classification efficiency of the network
Table 2 Experimental results with comparison

Training   Hidden    Stop      Regression   Time          Classification rate (%)
function   neurons   epochs                 elapsed (s)   Training    Validation   Test        Overall
trainlm    10        15        0.8597       1.047         88.6        83.3         90          88
                     18        0.87251      0.921         94.3        66.7         80          88
                     16        0.87401      0.8721        88.7        90.3         90.3        89.2
                     Avg       0.86874      0.9467        90.5333     80.1         86.76667    88.4
trainlm    20        33        0.85706      2.797         91.4        70           83.3        87
                     14        0.85508      1.218         90          80           86.7        88
                     12        0.84772      1.094         92.9        76.7         83.3        89
                     Avg       0.853287     1.703         91.4333     75.5667      84.43333    88
trainlm    30        16        0.86112      2.36          92.1        80           76.7        88
                     11        0.85018      1.703         91.4        90           73.3        88.5
                     14        0.85102      2.125         89.3        76.7         83.3        86.5
                     Avg       0.854107     2.06267       90.9333     82.2333      77.76667    87.6667
trainscg   –         37        0.7839       0.703         80.7        83.3         83.3        81.5
                     31        0.80202      0.797         78.6        90           86.7        81.5
                     34        0.80767      0.859         83.6        83.3         86.7        84
                     Avg       0.797863     0.78633       80.96667    85.5333      85.56667    82.3333
Table 3 Sample test data with classification output

Feature vectors from the test EMG signal (inputs p1–p10, seven normalized features each):
p1:  -0.12863  -0.12208  -0.51192  -0.12391  -0.17617  -0.29730  -0.42105
p2:  -0.61608  -0.66515  -0.88580  -0.66707  -0.64118   0.40541  -0.55263
p3:   0.37114   0.34085  -0.00891   0.33959   0.19024   0.10811  -0.65789
p4:   0.20342   0.16422  -0.22084   0.16281   0.32336  -0.78378   0.31579
p5:  -0.13348  -0.22866  -0.60371  -0.23027  -0.13286  -0.05405  -0.23684
p6:  -0.75516  -0.77763  -0.93406  -0.77885  -0.75488   0.45946   0.02632
p7:   0.22405   0.09469  -0.29750   0.09339   0.09132   0.29730  -0.52632
p8:   0.33569   0.32470  -0.02931   0.32338   0.46075  -0.56757   0.50000
p9:  -0.24777  -0.34035  -0.69024  -0.34266  -0.34527   0.08108  -0.55263
p10: -0.79584  -0.81355  -0.94736  -0.81470  -0.78546   0.00000  -0.50000

Expected output (one row per output class, columns p1–p10):
left:   0  0  1  0  0  0  1  0  0  0
right:  0  0  0  1  0  0  0  1  0  0
up:     1  0  0  0  1  0  0  0  1  0
down:   0  1  0  0  0  1  0  0  0  1

Simulated output (columns p1–p10; the largest value in each column, shown in bold in the original, marks the classified movement):
left:   0.0036  0.0004  0.6597  0.0081  0.0447  0.0024  0.6676  0.0273  0.0063  0
right:  0.0085  0.0036  0.0032  0.955   0.0021  0       0.0017  0.8774  0.0003  0.0133
up:     0.9928  0.0001  0.3271  0.091   0.7688  0.0018  0.5993  0.1906  0.8913  0.0008
down:   0.0548  0.9991  0.02    0       0.1496  0.998   0.0093  0       0.288   0.9995
REFERENCES
IV. CONCLUSIONS The research work has been carried out for the classification of different hand movements based on EMG signals. The classified signal can be utilized for the controlling of any human-computer centered systems or devices. The experimental result shows that the Levenberg-Marquardt algorithm based neural network recognizes the desired motions efficiently and takes minimal computation time. It has been found that the designed ANN has successfully classified the EMG signals from hand movements and the average success rate is 88.4%. Whereas in a single trial the best overall performance has been found 89.2%. The performance obtained without any prior training of the subject’s hand movements. It can be conclude that the classification efficiency will increase if the network supplied with rich EMG signals. This can be done either by short training of the subject which can reproduce repeatable signals or by allowing the network to adapt with changes of feature value. In that situation, a good classification can be obtained and which will lead to the development of suitable HCI system. The classified EMG signal can be efficiently used to develop a human computer interface to help the people with disabilities who wish to interact with computer devices.
1. Kim, J, Mastnik, S, André, E. EMG-based hand gesture recognition for realtime biosignal interfacing. Proc ACM IUI '08; 30-39
2. Ahsan, M, Ibrahimy, M, Khalifa, O. Advances in electromyogram signal classification to improve the quality of life for the disabled and aged people. Journal of Computer Science 2010; 6:706-715
3. Subasi, A, Yilmaz, M, Ozcalik, HR. Classification of EMG signals using wavelet neural network. Journal of Neuroscience Methods 2006; 156:360-367
4. Hiraiwa, A, Shimohara, K, Tokunaga, Y. EMG pattern analysis and classification by neural network. Conference Proceedings, IEEE International Conference on Systems, Man and Cybernetics, 1989; 1113-1115
5. Naik, GR, Kumar, DK, Singh, VP et al. Hand gestures for HCI using ICA of EMG. Proceedings of the HCSNet Workshop on Use of Vision in Human-Computer Interaction, Volume 56, 2006; 67-72
6. Englehart, K, Hudgins, B, Stevenson, M et al. A dynamic feedforward neural network for subset classification of myoelectric signal patterns. IEEE 17th Annual Conference of the Engineering in Medicine and Biology Society, 1995; 1:819-820
7. Kelly, MF, Parker, PA, Scott, RN. The application of neural networks to myoelectric signal analysis: a preliminary study. IEEE Transactions on Biomedical Engineering 1990; 37:221-230
8. Hudgins, B, Parker, P, Scott, R. A new strategy for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering 1993; 40:82-94
9. Reaz, MB, Hussain, MS, Mohd-Yasin, F. Techniques of EMG signal analysis: detection, processing, classification and applications. Biological Procedures Online 2006; 8:11-35
IFMBE Proceedings Vol. 35
10. Hussain, MS, Reaz, MBI, Ibrahimy, MI. SEMG signal processing and analysis using wavelet transform and higher order statistics to characterize muscle force. Proceedings of the 12th WSEAS International Conference on Systems 2008; 366-371
11. Phinyomark, A, Limsakul, C, Phukpattaranont, P. A novel feature extraction for robust EMG pattern recognition. Journal of Computing 2009; 1:71-80
12. Tsenov, G, Zeghbib, A, Palis, F et al. Neural networks for online classification of hand and finger movements using surface EMG signals. 8th Seminar on Neural Network Applications in Electrical Engineering (NEUREL 2006), 2006; 167-171
The address of the corresponding author:

Author: Md. Rezwanul Ahsan
Institute: International Islamic University Malaysia
Street: Jalan Gombak
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Performance Comparison between Mutative and Constriction PSO in Optimizing MFCC for the Classification of Hypothyroid Infant Cry A. Zabidi, W. Mansor, Y.K. Lee, I.M. Yassin, and R. Sahak Faculty of Electrical Engineering, Universiti Teknologi MARA 40450, Shah Alam, Selangor, Malaysia
Abstract— This paper compares the performance of two variants of the Particle Swarm Optimization algorithm: PSO with constriction factor (referred to simply as PSO) and mutative PSO (MPSO), in optimizing Mel Frequency Cepstrum Coefficient (MFCC) parameters. The parameters were used to extract an optimal feature set for classifying healthy and hypothyroid infant cry using Multi-Layer Perceptrons (MLP). Specifically, the PSO variants optimize the number of filter banks and the number of cepstrum coefficients in MFCC. Based on the values chosen by both PSO variants, the extracted features were then fed to an MLP classifier, which was trained to discriminate between healthy and hypothyroid infant cry. Comparisons between the performance of the PSO variants showed that MPSO improved the convergence rate by 2.67% compared to PSO. Keywords— Hypothyroid, Biomedical signal processing, Artificial intelligence, Feedforward neural networks, Particle Swarm Optimization.
I. INTRODUCTION

Newborn infant cry signal analysis is an important tool to aid clinical diagnosis, since crying is the only way for infants to communicate. Experienced mothers can normally recognize their baby's needs by differentiating between types of cries. With the advancement of technology, this experiential knowledge can be transformed into algorithms using advanced signal processing and analysis techniques. Newborn infant cry is characterized by a very high fundamental frequency with abrupt changes and voiced or unvoiced features of very short duration within a single utterance [1]. Babies with hypothyroidism have significantly different cries than healthy babies. Babies born without, or with dysfunctional, thyroid glands may have few signs, such as prolonged or recurrent jaundice, delay in umbilical cord separation and umbilical hernia [2]. Hypothyroidism can also be detected from crying, where the cry sound is husky and emitted at low pitch. Other signs may be present in the first month of life, such as dry, cold and pale skin, feeding difficulty, insufficient weight gain, noisy breathing, nasal congestion, respiratory disorders, constipation and lethargy [2].
It has been proven that the different cry signals can be discriminated using Mel Frequency Cepstrum Coefficient (MFCC) features and Multi-Layer Perceptron classifiers [1]. MFCC is a feature extraction method for audio signals. It takes into account the frequency sensitivity of the human auditory system during feature extraction, which makes it ideal for voice recognition. The features are extracted as coefficients, which constitute a good representation of the dominant features in the acoustic information in a selected time window. To obtain the most representative coefficients, several parameters must be set prior to feature extraction, such as the number of mel filter banks, the number of coefficients to extract, the windowing method, and the frame length for acquisition. Most approaches tend to use parameter settings taken from the literature [1, 3], since the combinations of parameter adjustments are practically limitless. Particle Swarm Optimization (PSO) is a stochastic optimization method evolved from Swarm Theory and Evolutionary Computation [4]. It is inspired by animals' natural swarming behavior [5]. PSO has been proven to be a suitable technique for solving various optimization problems [6, 7]. Among the advantages of PSO are that it allows efficient and rapid optimization of the problem due to its parallel nature, it requires only basic mathematical operators for optimization, and it incurs low computational and memory costs per iteration [8]. Many variants of the PSO algorithm exist, such as PSO with inertia weight [9], PSO with constriction factor [10], and mutative PSO [11]. In [11], a novel mutative PSO method was presented, formed by adding a mutative function to the original PSO-with-constriction-factor (PSOCF) algorithm. The mutative operator kills off non-performing particles during each iteration and replaces them with mutated versions of the global best solution.
Tests performed on the 7 Integer Programming Problems (7IPP) showed that the mutative PSO method significantly outperformed the PSO method in terms of convergence speed. This paper describes the implementation of PSO and MPSO to optimize the MFCC analysis for accurately recognizing hypothyroid infant cry using MLP. The performance of both techniques was evaluated by computing the convergence percentage, swarm size and classification accuracy of the MLP.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 542–547, 2011. www.springerlink.com
II. THEORETICAL BACKGROUND

A. Mel Frequency Cepstrum Coefficient

MFCC analysis is an acoustic feature extraction method that takes human frequency perception sensitivity into consideration during extraction, which makes it ideal for voice recognition. The method extracts MFCC that provide a good representation of the dominant acoustic features in selected time windows. For a signal with N frames, Y(n), n = 0, 1, ..., N − 1, the MFCC are defined as:

C(n) = DCT( log( M |F{Y(n)}|^2 ) )   (1)

where: C(n) = MFC coefficients at frame n; Y(n) = original signal at frame n, after application of pre-filtering and some windowing method; F{·} denotes the short-time DFT and M the mel-spaced filter bank described below.

In MFCC feature extraction, the input signal is broken down into overlapping frame blocks to comprehensively capture the signal's temporal features and changes. An unwanted side effect of the frame blocking process is the leakage effect (high-frequency components produced at the end of each frame). To minimize the leakage effect and maintain continuity between frames, a windowing method (such as Hamming) is applied. The size of each frame is chosen as a power of two and depends on the sampling frequency, F_s (Eq. 2).

After windowing, Fourier analysis is performed on each frame, resulting in a short-time Discrete Fourier Transform (DFT). The derived values are then grouped together in critical bands and weighted by a triangular filter bank called the mel-spaced filter bank. The mel-spaced filter banks are designed based on mel-scale frequencies, which mimic the human auditory system: the human auditory system perceives frequency tones below 1 kHz on a linear scale and frequencies above 1 kHz on a logarithmic scale. Similarly, mel-scale frequencies are distributed linearly in the low frequency range but logarithmically in the high frequency range. The number of mel filter banks can be adjusted depending on the sampling frequency of the signal and the required cases. The mel-scale frequency is given by:

mel(f) = 2595 · log10(1 + f/700)   (3)

where f is the frequency of the signal in Hz. The MFCC can then be derived by taking the log of the band-passed frequency response and calculating the Discrete Cosine Transform (DCT) for each intermediate signal.

B. Particle Swarm Optimization (PSO) and Mutative Particle Swarm Optimization (MPSO) Algorithm

MPSO is a discrete variant of the PSO stochastic optimization algorithm that iteratively searches for solutions in the problem space by taking advantage of the cooperative and competitive behavior of simple agents called particles, while removing non-performing particles using a mutative function. The PSO search is directed by the particle velocity, v, which modifies the particle's position, x. The velocity can be computed using (4):

v(t+1) = χ [ v(t) + c1·r1·(pBest − x(t)) + c2·r2·(gBest − x(t)) ]   (4)

x(t+1) = x(t) + v(t+1)   (5)

where: v = particle velocity; x = particle position; pBest = particle's best position so far; gBest = best solution achieved by the swarm so far; c1 = cognition learning rate; c2 = social learning rate; r1, r2 = uniformly distributed random numbers between 0 and 1; χ = constriction factor. The constriction factor χ is calculated using:

χ = 2 / | 2 − φ − sqrt(φ² − 4φ) |   (6)

where φ must conform to:

φ = c1 + c2,  φ > 4   (7)

Subsequent to the newly calculated velocities, x is updated according to Eq. (5). Apart from the parameters presented above, several "tuning" parameters (c1, c2, χ and the swarm size) are also used to improve the convergence of the PSO algorithm. A detailed description of these parameters can be found in [12].

The particle positions in Eq. (5) generate a continuous-valued solution. The continuous-valued positions (which are real numbers) can be converted into discrete form (integer numbers), x_d, using Eq. (8) [13]:

x_d = round( d_min + (d_max − d_min) · (x − x_min) / (x_max − x_min) )   (8)
where: d_min = lower integer value for the discrete-valued position; d_max = highest integer value for the discrete-valued position; x_min = lowest range for the original continuous-valued solution; x_max = upper range for the original continuous-valued solution. To use this equation, the continuous-valued solution should be in the range [x_min, x_max], and the desired discrete-valued solution should be defined in the range [d_min, d_max]. In the MPSO, mutation of particles is used to prevent particles from being trapped in suboptimal points during the search. Consider a swarm with N particles. After the particle positions have been updated, the worst-performing N/2 particles are eliminated and replaced with mutated versions of gBest. The mutation is performed by making N/2 copies of gBest and adding a mutation factor γ (small random values) to them so that they are placed near gBest. The γ is usually 10% of the maximum particle value. Therefore, the new particle positions are:

x_new = gBest + γ · rand3   (9)

where rand3 is a uniformly distributed random value between 0 and 1. A detailed description of the MPSO algorithm can be found in [11].
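Eqs. (4)-(9) can be combined into a single MPSO iteration. The sketch below uses the paper's settings (c1 = c2 = 2.05, hence χ ≈ 0.7290, and γ = 10% of the maximum particle value); the fitness function and particle dimensions are placeholders, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = c2 = 2.05                       # cognition / social learning rates [14]
phi = c1 + c2                        # Eq. (7): phi > 4
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # Eq. (6), ~0.7290

def scale_discrete(x, x_min, x_max, d_min, d_max):
    # Eq. (8): map a continuous position in [x_min, x_max] to an
    # integer in [d_min, d_max]
    return int(round(d_min + (d_max - d_min) * (x - x_min) / (x_max - x_min)))

def mpso_step(x, v, pbest, gbest, fitness, gamma=0.1):
    """One MPSO iteration: constriction-PSO velocity/position update
    (Eqs. 4-5), then replace the worst N/2 particles with mutated
    copies of gbest (Eq. 9). `fitness` is a minimisation function."""
    n = len(x)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))  # Eq. (4)
    x = np.clip(x + v, 0.0, 1.0)                                   # Eq. (5)
    f = np.array([fitness(p) for p in x])
    worst = np.argsort(f)[n // 2:]        # highest fitness = worst half
    # Eq. (9): mutated copies of gbest; gamma = 10% of max particle value
    x[worst] = gbest + gamma * rng.random(x[worst].shape)
    return x, v
```

With x_min = 0, x_max = 1 and (d_min, d_max) = (20, 40), `scale_discrete` maps a particle dimension to the number of filter banks p, as described in Section III.C.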
III. METHODOLOGY

A. Hardware Overview

All tests were run on an HP Pavilion dv3500 computer with an Intel Centrino Core 2 Duo™ Central Processing Unit (CPU) running at 2.00 GHz and 4.00 GB of Random Access Memory (RAM). Microsoft Windows Vista Ultimate Edition™ Service Pack 1 (32-bit) was installed as the operating system. All experiments were implemented in the MATLAB© version 7.8.0.347 (R2009a) environment.

B. Pre-processing

The datasets of normal and hypothyroid cry signals were acquired from the Instituto Technologico Superior de Atlixco dataset and the Chillanto dataset. The cry signals were collected from infants ranging from newborn to 7 months of age. The cry signals were recorded in wave format and resampled to 8 kHz. Each signal was divided into multiple one-second segments, resulting in 88 samples from the original 45 samples (20 normal, 25 hypothyroid). A manual segmentation approach was used, in which zero-crossing and short-time energy analyses were performed. The resulting samples were randomly divided into training (54 samples), validation (17 samples) and testing (17 samples) sets.
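The resampling and segmentation step might look like the following sketch. The discarding of trailing partial segments is an assumption, and the paper's manual segmentation guided by zero-crossing and short-time energy analysis is not reproduced here.

```python
import numpy as np
from scipy.signal import resample_poly

def one_second_segments(signal, fs_in, fs_out=8000):
    """Resample a cry recording to 8 kHz and split it into consecutive
    one-second segments. Trailing samples shorter than one second are
    discarded (an assumption, not stated in the paper)."""
    # Polyphase resampling from fs_in to fs_out
    g = np.gcd(int(fs_in), fs_out)
    y = resample_poly(signal, fs_out // g, int(fs_in) // g)
    n = len(y) // fs_out
    return y[:n * fs_out].reshape(n, fs_out)
```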
C. Optimisation of MFCC Using PSO and MPSO

MFCC analysis was then performed to extract features from the signal. In the MFCC analysis, the resulting spectral vectors were passed through a series of mel-scale logarithmically spaced filter banks. The number of filter banks, p, was selected using PSO and MPSO. Finally, the DCT was applied to the log magnitudes of the spectral energies, and a set of MFCC was retrieved. The number of MFCC, nc, was also selected using the PSO and MPSO algorithms. This was done to get the best classification accuracy. For both algorithms, the constriction factor method was used. The values of c1 and c2 were both set to 2.05 [14], since the values cannot violate the rule set in Eq. (7). Based on the values of c1 and c2, a value of χ equal to 0.7290 was used throughout the optimization course. Next, the values of x_min and x_max were set to 0 and 1, respectively. This was done so that the particle values were always between 0 and 1; combined with the scaling equation described in Eq. (8), the particle values would then cover the entire solution space. Further, the dynamic range of the velocity was set from −1 (when x moves from 1 to 0) to +1 (when x moves from 0 to 1). In this study, p was set to within 20 to 40, while the minimum and maximum values for nc were set to 10 and p, respectively. This limitation was set based on commonly used values in the literature [1, 3]. Therefore, the values of d_min and d_max for p were set to 20 and 40, respectively. Accordingly, the values of d_min and d_max for nc were set to 10 and p, respectively. The swarm sizes tested were between 5 and 20, while the numbers of iterations tested were between 10 and 20. This choice was arbitrary; however, these ranges were sufficient to explore the solution space and yield good results, even with different particle initialization values. An important characteristic of a stochastic optimization algorithm is that it always converges to the optimal value, regardless of its initial state.
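The MFCC pipeline described above (frame blocking, Hamming windowing, DFT, mel filter bank, log, DCT) can be sketched as follows. This is a minimal illustration, not the authors' MATLAB implementation: the frame length, hop size and filter-edge placement are assumptions, while the defaults p = 36 and nc = 19 follow the optimum reported later in the paper. `hz_to_mel` implements Eq. (3).

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    # Eq. (3): mel(f) = 2595 * log10(1 + f/700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs=8000, n_filters=36, n_coeffs=19, frame_len=256, hop=128):
    """Toy MFCC: overlapping Hamming-windowed frames -> DFT -> mel
    filter bank -> log -> DCT. Defaults p = 36, nc = 19 follow the
    paper's optimum; frame_len and hop are assumptions."""
    # Frame blocking with overlap; Hamming window reduces leakage
    frames = [signal[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel-spaced filter bank (linear below ~1 kHz, log above)
    edges = mel_to_hz(np.linspace(0.0, hz_to_mel(fs / 2), n_filters + 2))
    bins = np.floor((frame_len + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filters, spectra.shape[1]))
    for k in range(1, n_filters + 1):
        l, c, r = bins[k - 1], bins[k], bins[k + 1]
        fbank[k - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[k - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(spectra @ fbank.T + 1e-10)
    # DCT of the log filter-bank energies; keep the first nc coefficients
    return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```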
To test this characteristic, the PSO and MPSO runs were repeated with 5 different random seeds: 0; 50,000; 100,000; 150,000 and 200,000. To ensure repeatability of the experiments, the generator state was set to a fixed value each time the optimization executed, so that the same set of random numbers was generated.

D. Classification of Hypothyroid Infant Cry Using MLP

After the values of p and nc were selected using both algorithms, the extracted MFCC were classified using a three-layer MLP. The three-layer architecture is capable of approximating any problem function provided there are sufficient units in its hidden layer [15].
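The classification stage, together with the fitness measure of Eqs. (10)-(11) used later to guide the optimization, can be sketched with scikit-learn. Note one substitution: scikit-learn provides no Scaled Conjugate Gradient trainer, so L-BFGS is used here; function names are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_cry_classifier(features, labels):
    # 3-layer MLP with 5 hidden units and tanh (tangent-sigmoid)
    # activation. L-BFGS replaces the paper's Scaled Conjugate
    # Gradient, which is not available in scikit-learn.
    clf = MLPClassifier(hidden_layer_sizes=(5,), activation='tanh',
                        solver='lbfgs', max_iter=200, random_state=0)
    clf.fit(features, labels)       # labels: 0 = healthy, 1 = hypothyroid
    return clf

def accuracy(tp, tn, n_test):
    # Eq. (11): classification accuracy of one trial, in percent
    return (tp + tn) / n_test * 100.0

def fitness(accuracies):
    # Eq. (10): minimisation fitness = 100 - mean accuracy over the
    # MLP training trials; 0 when every trial reaches 100% accuracy
    return 100.0 - sum(accuracies) / len(accuracies)
```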
Here, the tangent-sigmoid was used as the activation function because of its non-linearity, its suitability for gradient-based optimization algorithms, and its speed due to its antisymmetric properties. The Scaled Conjugate Gradient (SCG) algorithm was chosen as the training algorithm. Table 1 shows the MLP parameter settings used.

Table 1 MLP structure and parameter settings

Parameter: Setting
Structure: 3-layer MLP ((nc × no. of frames generated by MFCC):5:1)
Activation function: Tangent-sigmoid at hidden and output layers
Biasing: Bias connections at hidden and output layers
Training algorithm: Scaled Conjugate Gradient
Initialization algorithm: Nguyen-Widrow, random numbers taken from MT algorithm
Maximum epoch: 200
Objective criterion: Mean Squared Error (MSE)
Objective: 0
Over-fitting avoidance: Validation stop
No. of repetitions: 50
Division of training, validation and testing set (%): 60:20:20
E. Convergence Evaluation of PSO and MPSO

The convergence of PSO and MPSO with different swarm sizes, iterations and initial seeds was analyzed by computing a fitness function. This function was used to indicate the goodness of the solution and to guide both the PSO and MPSO optimizations. The fitness function, F, is a minimization function given by:

F = 100 − (1/n) Σ_{i=1}^{n} A_i   (10)

where n is the number of MLP training trials performed and A_i is the classification accuracy at trial i:

A_i = (TP_i + TN_i) / N × 100   (11)

where: TP_i = number of true positives for trial i; TN_i = number of true negatives for trial i; N = number of test cases.

Eq. (10) penalizes misclassifications of the MLP classifier by giving them a higher fitness value. The optimal value of the fitness function is 0 (when all 50 trial runs converge to 100% accuracy). In this work, the fitness was obtained by averaging the misclassifications over 50 MLP trials.

IV. RESULT AND DISCUSSION

The combination of the swarm size and number-of-iterations tests yielded 150 fitness results for each algorithm. The swarm size and number of iterations were selected based on the optimum fitness value (10.82%). Table 2 shows the best 5 results from the combinations of iterations and particles that achieved the optimal fitness for the MPSO and PSO methods. The best iteration count for MPSO is 15, whereas for PSO it is 10 (see Table 2). The optimal swarm size for both methods is 5, which shows that only 5 particles are needed to search for the optimal result in the overall solution space.

Table 2 Best 5 results obtained from combinations of particle and iteration

Method  Iteration  Particle  Seed     Fitness  p   nc
MPSO    15         5         200000   10.82    36  19
MPSO    20         5         100000   10.82    36  19
MPSO    25         5         100000   10.82    36  19
MPSO    30         5         50000    10.82    36  19
MPSO    15         10        50000    10.82    36  19
PSO     10         5         100000   10.82    36  19
PSO     15         5         100000   10.82    36  19
PSO     20         5         100000   10.82    36  19
PSO     25         5         50000    10.82    36  19
PSO     25         5         100000   10.82    36  19

Figs. 1 and 2 show the convergence of MPSO and PSO, selected based on their best convergence towards the solution. As can be seen from Fig. 1, although the MPSO run uses 15 iterations, it starts to find the optimal solution at iteration 5. Unlike MPSO, the PSO method is still searching for the optimal result at iteration 5 and starts to find it at iteration 7 (see Fig. 2).

Fig. 1 Fitness plot of MPSO (swarm size = 5, iterations = 15, seed = 100,000)

As can be seen from Figs. 1 and 2, all plots start from a high fitness value (due to the random initial particles) and converge to the minimum (optimal = 10.82%) solution as the
optimization progressed. In this investigation, the optimal value for p is 36, whereas for nc the first 19 coefficients are the significant coefficients (see Table 2).

Fig. 2 Fitness plot of PSO (swarm size = 5, iterations = 10, seed = 100,000)

In terms of convergence, MPSO managed to find the optimal values for p and nc 100 times, while PSO managed to find them 96 times out of 150 test cases. Therefore, MPSO improved the convergence rate by 2.67% compared to PSO. The percentage of test runs with optimal convergence and the average accuracy for MPSO and PSO are shown in Table 3. This result shows that the mutation operator helps to improve the convergence of the test cases.

Table 3 Summary results for MPSO and PSO

Percentage                          MPSO    PSO
Test runs with optimal convergence  66.67%  64%
Average accuracy                    89%     87.8%

When the average accuracy over all test cases for MPSO and PSO was computed and compared, it was clear that the average accuracy of MPSO is higher than that of PSO (see Table 3). Comparisons between the performance of the PSO variants showed that MPSO improved the convergence rate by 2.67% compared to PSO.

V. CONCLUSION

The performance of two PSO-based variants (MPSO and PSO) in optimizing MFCC analysis has been discussed in this paper. Based on the results, the optimal swarm size and iterations for MPSO are 5 particles and 15 iterations, respectively, and the MPSO algorithm converges faster (at iteration 5) toward the optimal solution compared to the PSO algorithm. Furthermore, the average fitness, swarm size and iterations for this combination were the lowest of all trials tested. The optimal p and nc values were 36 and 19, respectively. The results suggest that the solution produced by 36 filter banks, with the first 19 coefficients selected by MPSO and PSO from the MFCC analysis, was sufficient to produce the optimal result.

ACKNOWLEDGMENT

The authors would like to express their gratitude to the Ministry of Science, Technology and Innovation, Malaysia, and Universiti Teknologi MARA, Malaysia, for financial support and for providing equipment, as well as to Instituto Technologico Superior de Atlixco, Mexico, and Chilanto for providing the cry signals for this research.
REFERENCES

1. O. F. Reyes-Galaviz and C. A. Reyes-Garcia, "A System for the Processing of Infant Cry to Recognize Pathologies in Recently Born Babies with Neural Networks," in 9th Conference Speech and Computer (SPECOM), St. Petersburg, Russia, 2004, pp. 1-6.
2. N. Setian, "Hypothyroidism in children: diagnosis and treatment," Jornal de Pediatria, vol. 83, 2007, pp. 209-216.
3. M. Petroni, A. S. Malowany, C. C. Johnston, and B. J. Stevens, "A Comparison of Neural Network Architectures for the Classification of Three Types of Infant Cry Vocalizations," in Annual International Conference of the IEEE Engineering in Medicine and Biology, Montreal, 1995, pp. 821-822.
4. J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.
5. X. Hu, R. Eberhart, and Y. Shi, "Recent advances in particle swarm," in IEEE Congress on Evolutionary Computation, Portland, Oregon, USA, 2004, pp. 90-97.
6. M. G. H. Omran, A. P. Engelbrecht, and A. Salman, "Dynamic clustering using particle swarm optimization with application in unsupervised image classification," Trans. Engineering, Computing and Technology, vol. 9, 2005, pp. 199-204.
7. P.-Y. Yin, K.-C. Chang, G.-J. Hwang, G.-H. Hwang, and Y. Chan, "A Particle Swarm Optimization Approach to Composing Serial Test Sheets for Multiple Assessment Criteria," Educational Technology & Society, vol. 9, 2006, pp. 3-15.
8. K. E. Parsopoulos and M. N. Vrahatis, "Recent approaches to global optimization problems through Particle Swarm Optimization," Natural Computing, vol. 1, 2002, pp. 235-306.
9. Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, 1998, pp. 69-73.
10. M. Clerc, "The swarm and the queen: towards a deterministic and adaptive particle swarm optimization," presented at the Proc. 1999 Int. Conf. Evolutionary Computation, Washington, DC, 1999.
11. I. M. Yassin, N. M. Tahir, M. K. M. Salleh, H. A. Hassan, A. Zabidi, and H. Z. Abidin, "Novel Mutative Particle Swarm Optimization Algorithm for Discrete Optimization," in The 2009 International Conference on Genetic and Evolutionary Methods (GEM09), Las Vegas, NV, 2009, pp. 137-142.
12. I. M. Yassin, N. A. Rahim, and M. N. Taib, "NARMAX Identification of DC Motor Model Using Particle Swarm Optimization," in 4th Int. Colloquium on Signal Processing & Its Applications (CSPA 2008), Kuala Lumpur, Malaysia, 2008, pp. 1-7.
13. R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. Congress of Evolutionary Computation, San Diego, CA, 2000, pp. 84-88.
14. H. Demuth and M. Beale, MATLAB Neural Network Toolbox v4 User's Guide. Mathworks Inc., 2005.

Author: Azlee Zabidi
Institute: Universiti Teknologi Mara
City: Shah Alam
Country: Malaysia
Email: [email protected]
Periodic Lateralized Epileptiform Discharges (PLEDs) in Post Traumatic Epileptic Patient— Magnetoencephalographic (MEG) Study T. Begum*, F. Reza, H. Omar, A.L. Ahmed, S. Bhaskar, J.M. Abdullah, and J.T.K.J. Tharakan Laboratory for MEG and ERP study, Department of Neurosciences, School of Medical Sciences, Hospital Universiti Sains Malaysia, 16150 Kubang Kerian, Kota Bharu, Kelantan, Malaysia
Abstract–– We retrospectively reviewed the MEG data of one patient with post-traumatic epilepsy, in whom we observed, during offline analysis, unifocal periodic lateralized epileptiform discharges (PLEDs) that were evoked only by auditory stimuli. The patient underwent evaluation of her epilepsy, including medical history, long-term video-EEG monitoring and structural MRI, at Universiti Sains Malaysia Hospital (USMH), Kelantan, Malaysia. The ictal onset zone of this patient was determined clinically by considering the results of the evaluation prior to obtaining the MEG. In all routine EEGs there were neither PLEDs-like findings nor interictal spikes. We conclude that unifocal PLEDs can localize the ictal onset zone in an epilepsy patient and can be detected by MEG but not by EEG; careful observation is highly recommended during MEG recording, which might be helpful for the surgical treatment of epilepsy patients. Keywords–– PLEDs, MEG, Epilepsy, EEG, Evoked.
I. INTRODUCTION

Periodic lateralized epileptiform discharges (PLEDs) are an electroencephalographic (EEG) phenomenon consisting of high-voltage stereotyped periodic transients distributed over one hemisphere, mostly associated with acute or subacute structural brain lesions. The most typical form of PLEDs consists of sharp-contoured discharges repeating periodically or quasi-periodically at rates generally close to 1/sec and separated by intervals of apparent quiescence [1]. The term PLEDs was first established by Chatrian et al., 1964 [1]. Hughes and Schlagenhauff [2] referred to the same pattern as "periodically recurring focal discharges". PLEDs occurred over the frontal, temporal, parietal, occipital and midline regions and were maximal over the frontal and temporal regions, reflecting the site of maximal cerebral dysfunction produced by the underlying pathology. There are several studies of multiple foci of PLEDs that occurred independently and asynchronously in three or more locations over both hemispheres [2, 3]. Corda et al., 2006 [4] described a patient with the syndrome of mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) with unifocal PLEDs. Bertolucci et al., 1987 [5] reported that the main diagnosis in their group was epilepsy, with a prevalence of PLEDs of 0.45%. However, there are still no reported findings of PLEDs in post-traumatic epilepsy.
MEG and scalp EEG have different sensitivities to epileptic discharges, even when they are recorded simultaneously [6]. Most interictal spikes can be identified on both scalp EEG and MEG in cases of temporal lobe epilepsy, but some spikes are visible only on scalp EEG or only on MEG. Whether MEG has any advantages for spike detection is unclear. We describe unifocal PLEDs in a case of post-traumatic epilepsy, diagnosed as temporal lobe epilepsy after a motor vehicle accident, detected by MEG but not by simultaneously recorded scalp EEG or routine EEG. However, some neurologists and even epileptologists do not view this waveform carefully during EEG or MEG recordings. The fact that some of these patients cannot communicate at the time, or may not have obvious clinical symptoms, helps to convince some to consider the pattern as only an epileptophenomenon. The goal of this study is to emphasize the importance of careful observation during MEG and EEG recording and to explore the usefulness of noninvasive MEG study.
II. METHODS

A. Patient History

A 34-year-old female with a known history of generalized tonic-clonic seizures (GTCSz) since the age of 20 years visited our hospital for evaluation of the epileptic zone for the surgical treatment of epilepsy. The patient had a motor vehicle accident (MVA) when she was 19 years old. One year after the accident, seizures started, consisting of left eye twitching or déjà-vu (confusion about the environment) followed by generalized tonic-clonic seizures with loss of consciousness (LOC), 3-4 times per month. The seizures usually lasted more than 2 min. According to the patient, recently she has had déjà-vu only, 5-6 times, each lasting 10-15 min. In all routine EEGs, no interictal spikes were found (Fig. 1a-b). Less or no activity was detected in the T5 area, whereas 9 Hz alpha activity was clearly visible in area T6 in the bipolar longitudinal montage (Fig. 1a) and the average montage (Fig. 1b). Continuous scalp video-EEG monitoring was performed for 9 days under reduced antiepileptic medication. She had a total of 13 seizures within 2 weeks.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 548–551, 2011. www.springerlink.com
Seizure semiology was the same as her habitual seizures. The ictal EEG of all seizures initiated from the left temporal area (T5). Neurological examination detected no other abnormalities.
Fig. 1a Example of routine EEG recording in bipolar longitudinal montage. Asterisks (*) indicate the two channels T4-T6 (upper) and T3-T5 (lower); 9 Hz alpha activity is present in the T4-T6 channel, while low-amplitude or no activity is found in the T3-T5 channel

Fig. 1b Example of routine EEG recording in average montage of Fig. 1a. Markedly lower amplitude of alpha activity in the T5 channel than in the T6 channel. Asterisks (*) indicate the two channels T5 and T6 for comparing the absolute values

B. Data Acquisition

The data were recorded with a 306-channel (204 planar gradiometers and 102 magnetometers) Neuromag Vectorview system (pass-band 0.2-30 Hz, sampling rate 1000 Hz) (Elekta-Neuromag, Helsinki, Finland). The subject was seated comfortably in a magnetically shielded room (Laboratory for MEG and ERP Study, Department of Neurosciences, Hospital Universiti Sains Malaysia, Kelantan) with her head inside the helmet for 2 min with eyes closed. Vigilance of the subject was observed by on-line video monitoring during the recordings. The locations of the marker coils relative to the cardinal points of the head (nasion, left and right preauricular points) were determined with an Isotrak 3D-digitizer (Polhemus, Colchester, VT). The magnetic fields produced by the coils were used to determine the position of the subject's head in relation to the MEG sensor array. The least-squares minimization method was used to obtain the optimal solution, with a goodness-of-fit greater than 80%. The patient underwent auditory, motor, visual and somatosensory stimulation with simultaneously recorded EEG, plus 15 min of spontaneous recording.

III. RESULTS

The data were reviewed in parallel to identify spike activities. Auditory evoked 1 Hz unifocal PLEDs were found, which evolved to rhythmic MEG sharp-transient-like activity continuing for 15 min in the left temporal area (Fig. 2a), but were not found with motor, visual or somatosensory stimuli, nor in the spontaneous MEG recordings. Simultaneous EEG recording did not show any PLEDs or interictal spikes (Fig. 2b). The patient showed no subjective or objective symptoms during the recording.

Fig. 2 MEG with simultaneous EEG recording. (a) MEG recording in the left temporal area showing 1 Hz auditory evoked PLEDs (black arrow; scale: 500 fT/cm, 1 sec). (b) Simultaneous EEG recording (A1 reference) showing no PLEDs or interictal spikes
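The goodness-of-fit criterion used to accept a head-position (or dipole) fit can be sketched as follows. This uses the standard MEG definition of goodness-of-fit, assumed here; it is not code from the study.

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Standard MEG goodness-of-fit: the fraction of the measured
    signal power explained by the model,
    GOF = 1 - ||m - p||^2 / ||m||^2."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = measured - predicted
    return 1.0 - residual.dot(residual) / measured.dot(measured)
```

Under this criterion, fits with GOF greater than 0.80 would be accepted, as in the study's head-localization step.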
IFMBE Proceedings Vol. 35
IV. DISCUSSION
Fig. 3 The 204-channel gradiometer array showed auditory-evoked PLEDs activity in the posterior left temporal area (circled areas). Inset: enlarged view of the upper circle, lower trace
These PLEDs showed maximum negativity at the posterior part of the left temporal region on MEG (Fig. 3). Dipoles of the unifocal PLEDs were localized in the posterior part of the left temporal lobe during the MEG study (Fig. 4); in the simultaneous EEG study, however, there was no interictal spike from which an equivalent current dipole (ECD) could be computed. The GOF was 98%. The PLEDs stopped after 15 min, near the end of the 2nd session of auditory stimulation. The PLEDs found during the MEG-EEG recording retrospectively confirmed the result of the previous continuous video/EEG monitoring.
PLEDs usually consist of surface-negative biphasic, triphasic, and polyphasic spikes and sharp waves that typically are maximal on the side showing the structural abnormality [7, 8]. In our case, the PLEDs consisted of a typical negative sharp transient or sharp wave with a large negative component. Lee and Schauwecker [9] described a "frequent association of mental confusion," adding that focal neurological signs and obvious seizures occurred. In 1991, Gras et al. [10] used the phrase "depression of consciousness" in relation to PLEDs, adding that "partial pure epileptic seizures" can also occur. In the present study, the patient's déjà-vu may or may not be related to the auditory-evoked unifocal PLEDs detected during MEG recording, a result that contradicts previous studies [3, 9, 10]. MEG is a reference-free, non-invasive imaging technique with millisecond temporal resolution that has proved a useful tool in measuring spontaneous brain rhythms in healthy subjects [11]. MEG has a higher sensor density than conventional electroencephalography (EEG) and is less affected by differences in the conductivity of the brain, skull, and scalp, thus exceeding EEG in spatial resolution [12]. These properties of MEG also facilitate source modeling compared with EEG, although source localization is generally complicated by the non-uniqueness of the inverse problem. A popular way to characterize the sources of EEG/MEG activity is the equivalent current dipole (ECD) model, which is thought to approximate synchronized postsynaptic currents in the dendrites of cortical pyramidal neurons, considered to be the main source of EEG and MEG signals [12]. This ECD technique was used in this patient for epileptic source localization, and the localized source agreed with the result of the previous video-EEG monitoring. Furthermore, no epileptic discharges were found in routine EEG.
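The goodness-of-fit (GOF) figure quoted for the dipole fit is conventionally the fraction of measured field variance explained by the dipole model. A minimal sketch, not the authors' code; the sensor readings and forward solution below are invented for illustration:

```python
import numpy as np

def goodness_of_fit(measured, modeled):
    """GOF = 1 - ||measured - modeled||^2 / ||measured||^2: the fraction of
    measured field variance explained by the dipole forward model."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    residual = measured - modeled
    return 1.0 - residual.dot(residual) / measured.dot(measured)

# Invented sensor readings (fT/cm) and a hypothetical dipole forward solution
b = np.array([10.0, -4.0, 6.0, -8.0])
b_hat = np.array([9.5, -4.2, 5.8, -7.6])
gof = goodness_of_fit(b, b_hat)  # close to 1 for a well-fitting dipole
```

A GOF threshold (80% for the head-position fit, 98% for the reported ECD) is then simply a cut on this quantity.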
V. CONCLUSION

Evoked PLEDs can localize the epileptogenic zone, and the MEG technique is more sensitive than EEG in detecting induced epileptic discharges in epileptic patients.
REFERENCES
Fig. 4 An equivalent current dipole (ECD) computed at the peak of the epileptiform discharges (PLEDs) at left temporal area (isocontour map of the MEG data, with the ECD as a green arrow). F- Frontal, L-Left
1. Chatrian GE, Shaw CM, Leffman H (1964) The significance of periodic lateralized epileptiform discharges in EEG: an electrographic, clinical and pathological study. Electroencephalogr Clin Neurophysiol 17:177–193
Periodic Lateralized Epileptiform Discharges (PLEDs) in Post Traumatic Epileptic Patient

2. Hughes JR, Taber J, Uppal H (1998) TRI-PLEDs: a case report. Clin Electroencephalogr 29:106–108
3. Begum T, Ikeda A, Yoshioka A, et al. (2006) Rapid recovery from coma with multifocal PLEDs in a patient with severe dementia and transient hypoxemia. Intern Med 45:823–826
4. Corda D, Rosati G, Deiana GA, Sechi G (2006) "Erratic" complex partial status epilepticus as a presenting feature of MELAS. Epilepsy & Behavior 8:655–658
5. Bertolucci PH, da Silva AB (1987) Periodic lateralized epileptiform discharges: I. Clinical and electroencephalographic aspects. Arq Neuropsiquiatr 45:364–370
6. Stufflebeam SM, Tanaka N, Ahlfors SP (2009) Clinical applications of magnetoencephalography. Human Brain Mapping 30:1813–1823
7. Markand ON, Daly DD (1971) Pseudoperiodic lateralized paroxysmal discharges in electroencephalogram. Neurology 21:975–981
8. Schwartz MS, Prior PF, Scot DF (1973) The occurrence and evolution in the EEG of a lateralized periodic phenomenon. Brain 96:613–622
9. Lee BI, Schauwecker DS (1988) Regional cerebral perfusion in PLEDs: a case report. Epilepsia 29:607–611
10. Gras P, Grosmaire N, Soichot P, et al. (1991) EEG periodic lateralized activities associated with ischemic cerebro-vascular strokes: physiopathologic significance and localizing value. Neurophysiol Clin 21:293–299
11. Ciulla C, Takeda T, Endo H (1999) MEG characterization of spontaneous alpha rhythm in the human brain. Brain Topogr 11:211–222
12. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography — theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65:413–497
*Correspondence to:
Author: Tahamina Begum
Institute: Laboratory for MEG and ERP study, Department of Neurosciences, School of Medical Sciences, Hospital Universiti Sains Malaysia
Street: 16150 Kubang Kerian
City: Kota Bharu, Kelantan
Country: Malaysia
Email: [email protected]
Premonitory Symptom of Septic Shock in Heart Rate Variability

Y. Yokota1, Y. Kawamura1, N. Matsumaru2, and K. Shirai2

1 Faculty of Engineering, Gifu University, Gifu, Japan
2 Advanced Critical Care Center, Gifu University Hospital, Gifu, Japan
Abstract— Delay in administering therapy increases the mortality of septic patients. Diagnosing sepsis, however, depends on an experienced specialist's decision on whether to perform a blood test. We therefore believe that a real-time, non-invasive monitoring system for sepsis is useful to promote early intervention. Heart rate variability (HRV), a measure of autonomic nervous system (ANS) activity, has been suggested as an indicator of septic shock. When analyzing electrocardiograms of patients admitted to the intensive care unit (ICU), an important challenge is to distinguish HRV originating in the ANS from other factors such as arrhythmia. In this study, a stochastic model is used to extract the HRV caused by the ANS automatically. Applying the proposed process before HRV estimation, we identified a distinctive V-shaped temporal pattern in HRV as a signal of sepsis. We continue to investigate how that temporal pattern can be used to develop a real-time monitoring system for sepsis occurrence.

Keywords— Heart rate variability, Arrhythmia, Trend, Septic shock, Monitoring.
I. INTRODUCTION

Sepsis is a potentially severe medical condition characterized by a systemic inflammatory response syndrome (SIRS) and the presence of a known or suspected infection [1]; the syndrome can develop from the immune system's response to microbes in the blood, urine, etc. Severe sepsis is SIRS accompanied by infection and the presence of organ dysfunction. When the organ dysfunction of severe sepsis includes low blood pressure, or insufficient blood flow to one or more organs, it is designated septic shock. Sepsis can engender multiple organ dysfunction syndrome (MODS) and then death. Mortality from the occurrence of septic shock in patients 30 days after surgery is 33.7%; reportedly, this is higher than the mortality from myocardial infarction and pulmonary infarction [2]. Delay in administering therapy increases the mortality of septic patients. Although definite diagnosis of sepsis depends on a blood test, the decision of whether to perform a blood test depends on a doctor's subjective discovery of a patient's abnormality. We believe that real-time and objective monitoring of a patient's abnormality related to sepsis is necessary to promote early and certain detection of the onset of
sepsis. Patients treated in an intensive care unit (ICU) have a high risk of sepsis because they usually have a drain, a catheter, or a sensor inserted into their body, each of which can become an infection source. We therefore aim at sepsis detection for ICU patients. Heart rate variability (HRV), the variation of the heartbeat period, has been considered a measure for evaluating autonomic nervous system (ANS) activity [3]. In particular, HRV is considered a useful tool for predicting the occurrence of septic shock in patients with severe sepsis, because sepsis is expected to be accompanied by changes in HRV [4]. Recently, continuous HRV monitoring has been attempted to detect the onset of sepsis in ICU inpatients earlier. Many abnormal heartbeat patterns, such as arrhythmia and extrasystole, occur in ICU patients. Abnormal heartbeats must be eliminated carefully in HRV estimation because their presence usually leads to erroneous estimation of HRV. In general HRV analysis, such abnormal heartbeats are eliminated manually because no method with sufficient performance exists to eliminate them automatically. Therefore, automatic, real-time prediction of sepsis occurrence has not been achieved, even though HRV is regarded as an applicable measure for detecting it [4]. An automatic eliminator of abnormal heartbeats is indispensable for continuous HRV monitoring. In this paper, we propose an algorithm using a stochastic model for the interval series of heartbeats, including arrhythmia, to extract the component originating from the ANS from other components, such as abnormal heartbeats and a long-time-scale component called a trend. We then applied the proposed algorithm to the RRI series of high-risk ICU inpatients and estimated their HRV. Results show a typical pattern of HRV before septic shock occurs: the value of HRV decreases and then increases, so that the HRV trace resembles a V-shaped temporal pattern.
This result implies that this V-shaped pattern of HRV is useful as a predictor of septic shock.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 552–555, 2011. www.springerlink.com

II. HRV ANALYSIS

The heart rate is defined as the inverse of the average of the heartbeat intervals, whereas HRV represents the variability of the heartbeat intervals around that average. The heartbeat interval is usually defined as the difference between the times of peaks of neighboring QRS complexes and is called the R–R interval (RRI). Affected by internal or external stimuli, the ANS varies the RRI; consequently, higher HRV represents good reactivity of the ANS to internal and external stimuli, i.e., health from the perspective of the ANS. HRV is also used for measuring athletic ability. It is expected that HRV gives additional information that cannot be obtained from the heart rate alone. To evaluate HRV, several measures have been proposed. These measures are roughly classifiable into time domain analysis [5], frequency domain analysis, and nonlinear and fractal analysis [5]. Time domain analysis includes the tone-entropy method [6]; nonlinear and fractal analysis includes detrended fluctuation analysis (DFA) [7]. Frequency domain analysis is based on estimation of the power spectrum of the RRI series. Depending on the estimation method of the power spectrum, frequency domain analysis is classified into the FFT method [8], the AR model method [9], the maximum entropy method [10], and the complex demodulation method [11]. Akselrod et al. [3] investigated the relation between the spectral components of HRV and ANS activity [12]. They classified the spectral components of HRV into a high-frequency (HF) band of 0.14–0.4 Hz, a low-frequency (LF) band of 0.04–0.14 Hz, a very-low-frequency (VLF) band of 0.003–0.04 Hz, and an ultra-low-frequency (ULF) band under 0.003 Hz. They further showed that the LF component is affected by both sympathetic and parasympathetic nervous system activity, whereas the HF component reflects parasympathetic activity [12]. Furthermore, the VLF and ULF components are affected by the thermoregulation system [13].
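As an illustration of the band definitions above, the sketch below sums a periodogram of an evenly resampled RRI series over the four bands; the 4 Hz resampling rate and the synthetic test signal are assumptions for the example, not values from the paper:

```python
import numpy as np

def band_powers(rri, fs=4.0):
    """One-sided periodogram of an evenly resampled, mean-removed RRI
    series, summed over the HRV band edges given in the text."""
    x = np.asarray(rri, dtype=float)
    x = x - x.mean()                                   # remove the DC component
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))  # periodogram estimate
    bands = {"ULF": (0.0, 0.003), "VLF": (0.003, 0.04),
             "LF": (0.04, 0.14), "HF": (0.14, 0.4)}
    return {name: float(pxx[(f >= lo) & (f < hi)].sum())
            for name, (lo, hi) in bands.items()}

# Synthetic RRI resampled at 4 Hz with a 0.1 Hz (LF) and a 0.25 Hz (HF) oscillation
t = np.arange(0, 300, 0.25)
rri = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.25 * t)
p = band_powers(rri)
```

With this input, the LF band collects the power of the stronger 0.1 Hz oscillation and the HF band that of the weaker 0.25 Hz one.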
III. ELIMINATION OF ABNORMAL HEARTBEATS AND TREND

A. Abnormal Heartbeats and Trend

Heartbeats of an ICU inpatient often include many abnormal beats, such as arrhythmia, extrasystole, bigeminy, trigeminy, and quadrigeminy. Such abnormal heartbeats yield an abnormal RRI series and result in overestimation of HRV. Although the ANS might induce abnormal beats, they must be eliminated for HRV analysis because their effect on the RRI is incomparably greater than that of the ANS. For that reason, abnormal heartbeats are eliminated carefully and manually in almost all HRV analyses. The LF and HF components of HRV are more important than the VLF and ULF components for monitoring the ANS by power spectrum analysis. In power spectrum estimation from an RRI series of at most 1–2 min using an autoregressive model, the existence of VLF and ULF components degrades the estimation accuracy of the LF and HF components, because such long-term components act as an obstructive trend. For continuous HRV monitoring, an automatic eliminator of both abnormal heartbeats and the trend from the RRI series is indispensable. We therefore propose a method for estimating both abnormal beats and the trend, and for eliminating them.

B. Algorithm for Eliminating Abnormal Beats and Trend

Let the RRI at time $t_n$, $n \in Z$, in the given analysis segment be $x_n$, where $Z \equiv \{1, 2, \ldots, N\}$ and $N$ is the number of heartbeats in the segment. We introduce a flag $q_n \in \{0, 1\}$: $q_n = 1$ if RRI $x_n$ is normal and $q_n = 0$ if it is abnormal, and define the flag vector $Q = (q_1, q_2, \ldots, q_N)$. We represent the trend by a function $f(t; \theta)$, where $t$ and $\theta$ respectively denote time and the set of parameters of the trend function. A lower-order polynomial or a smoothing spline is applicable as the trend function $f(t; \theta)$; when a polynomial is used, the parameter set $\theta$ is the set of its coefficients. Let $Z_T$ be the subset of $Z$ such that $q_n = 1$. If the trend function $f(t; \theta)$ and the flag vector $Q$ are correct, then the residual series $r(t_n; \theta) = x_n - f(t_n; \theta)$, $n \in Z_T$, is expected to follow a Gaussian distribution. Consequently, we need only optimize the trend function $f(t; \theta)$ and the flag vector $Q$ such that the probability density distribution of the residual series $r(t_n; \theta)$ becomes closest to a Gaussian distribution. An iteration algorithm is necessary to solve this optimization problem because the number of possible combinations of $Q$ is $2^N$, which is considerably large. We therefore propose the following iteration algorithm. First, the flag vector $Q$ is initialized to all ones, i.e., all RRIs are assumed to be normal. The parameter set $\theta$ of the trend function $f(t; \theta)$ is determined to minimize the power of the residual series $r(t_n; \theta)$, $n \in Z_T$. The flag vector $Q$ is then updated so that the distribution of the residual series $r(t_n; \theta)$, $n \in Z_T$, becomes closest to a Gaussian distribution, in the following way. As a measure of distance between a given distribution and the Gaussian distribution we use the kurtosis $\kappa$, which equals three for a Gaussian distribution and takes larger or smaller values for distributions further from Gaussian. We sort $n \in Z$ in descending order of the absolute value $|r(t_n; \theta)|$ of the residual series. Setting $q_n = 0$ for $N_U = 0, 1, \ldots, N$ beats, one by one according to the sorted order, we evaluate the Gaussianity measure of the residual series $r(t_n; \theta)$, $n \in Z_T$, at each step. The first time the kurtosis $\kappa$ falls below 3.1, the flag vector $Q$ at that step is used in the next iteration. Using the updated $Q$, the trend function $f(t; \theta)$ is determined again. These processes are repeated until the flag vector $Q$ converges.

C. Evaluation of Estimation Accuracy

The estimation accuracy of the proposed method was evaluated on RRI series totalling 400 min, chosen randomly from the RRI data of all subjects described in the next section. The evaluation shows a detection rate for abnormal RRI of 84.2%, with a false positive rate of 6.2%. Detection errors mainly occur when abnormal heartbeats occupy more than 50% of all heartbeats in a 1 min segment. Details are described in [14, 15].
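A simplified sketch of the Section III.B iteration, with a low-order polynomial standing in for the smoothing spline also permitted by the paper; the function names, parameter values, and synthetic test data are our assumptions, not the authors' code:

```python
import numpy as np

def kurtosis(r):
    """Fourth standardized moment; equals 3 for a Gaussian distribution."""
    r = r - r.mean()
    return float(np.mean(r**4) / np.mean(r**2)**2)

def eliminate_abnormal_and_trend(t, x, deg=3, kappa_max=3.1, max_iter=20):
    """Fit a trend to beats flagged normal, unflag the largest residuals
    one by one until the residual kurtosis drops below kappa_max
    (near-Gaussian), and iterate until the flag vector Q converges."""
    q = np.ones(len(x), dtype=bool)                # flag vector Q (True = normal)
    for _ in range(max_iter):
        theta = np.polyfit(t[q], x[q], deg)        # trend f(t; theta) on normal beats
        r = x - np.polyval(theta, t)               # residual series r(t_n; theta)
        order = np.argsort(-np.abs(r))             # indices by descending |residual|
        q_new = np.ones(len(x), dtype=bool)
        for k in range(len(x) // 2):               # unflag at most half of the beats
            if kurtosis(r[q_new]) < kappa_max:     # stop at near-Gaussian residuals
                break
            q_new[order[k]] = False
        if np.array_equal(q_new, q):               # flag vector converged
            break
        q = q_new
    trend = np.polyval(np.polyfit(t[q], x[q], deg), t)
    return q, x - trend

# Synthetic RRI series: slow drift + Gaussian HRV + two ectopic-beat artefacts
rng = np.random.default_rng(0)
t = np.arange(300.0)
x = 0.8 + 0.0005 * t + rng.normal(0.0, 0.01, 300)
x[50] += 0.4
x[150] += 0.4
q, resid = eliminate_abnormal_and_trend(t, x)
```

Capping the inner loop at half the beats mirrors the observation above that detection fails once abnormal beats exceed 50% of a segment.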
IV. HRV ANALYSIS FOR ICU PATIENTS

A. Subjects

This study, approved by the ethics committee of Gifu University, targeted 96 inpatients of the Advanced Critical Care Center, Gifu University Hospital; these inpatients consented to the use of their data for this study. Ten of the 96 inpatients showed septic shock; one inpatient showed septic shock twice. We examined 1602 days of electrocardiogram (ECG) data obtained using an ICU patient monitor (IntelliVue MP70; Philips Electronics Japan Ltd.). The greater part of the data were of lead II; a fraction were of lead I or lead III. The ECG was digitized at a sampling frequency of 250 Hz. Vital data, such as blood pressure, heart rate, SpO2, respiration rate, and temperature, were also recorded at a 0.1 Hz sampling frequency.

Table 1 Patients who experienced septic shock

Patient   Sex    Age   Disease
A         Male   65    Burn
B         Male   67    Bradycardia
C         Male   74    Aneurysm rupture
D         Male   23    Burn

B. Methods

Every analysis described below is performed on segments of one-minute length. Each ECG segment is band-pass filtered to enhance the amplitude of the QRS complex wave. The envelope of the filter output was calculated through a Hilbert transform and then low-pass filtered to enhance the amplitude of the envelope. The peak time of the resulting filter output was regarded as the time of a QRS complex wave, and the difference between neighboring peak times was taken as the RRI. Abnormal RRI and the trend were eliminated from the RRI series using the proposed method described in Section III; in that process, we chose a smoothing spline to represent the trend function. The obtained residual series was interpolated to an equal sampling period of 0.1 s, namely a sampling frequency of 10 Hz. The power spectrum of the resulting series was calculated using an AR model. HRV was then defined as the integral power of the spectrum over 0.04–0.4 Hz, which corresponds to the LF and HF components [5].

Fig. 1 HRV time series of patients who experienced septic shock

C. Results

Time series of HRV for the four patients (A, B, C, and D in Table 1) who developed septic shock are presented in Fig. 1. The horizontal axes represent the number of days since the patient was hospitalized; the vertical axes show HRV values (dB). A median filter is applied to the HRV series for better visibility. Figure 1 shows that the value of HRV decreased gradually over more than a day and then reversed, increasing before the septic shock; the temporal change of HRV takes a V-shaped pattern. In all 11 cases of septic shock among the 10 inpatients, a similar distinctive V-shaped pattern was observed before septic shock occurrence. A previous study reported that HRV took a significantly lower value 2 hr before sepsis [4]; our result, however, shows that HRV changed over the order of days, so the two results differ decisively in the time scale of the HRV change related to sepsis. The width of the V-shaped pattern seems to depend on the age of the subject: older subjects tend to have a wider V-shaped pattern. In other words, the progression of sepsis in old people might be slower than in younger people. The V-shaped temporal pattern of HRV before septic shock occurrence is observed here for the first time, thanks to the automatic abnormal-signal elimination method proposed in this study.
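The beat-detection front end described in B. Methods (Hilbert-transform envelope, low-pass smoothing, peak picking) can be sketched as follows; the band-pass pre-filter is omitted, and the threshold, smoothing window, and refractory period are illustrative assumptions rather than the paper's values:

```python
import numpy as np

def hilbert_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def detect_r_peaks(ecg, fs=250.0, min_rr=0.3):
    """Envelope extraction, moving-average smoothing, and thresholded peak
    picking with a refractory period; all constants are assumptions."""
    env = hilbert_envelope(ecg - np.mean(ecg))
    w = int(0.05 * fs)                                  # ~50 ms smoothing window
    env = np.convolve(env, np.ones(w) / w, mode="same")
    thr = 0.5 * env.max()
    refractory = int(min_rr * fs)
    peaks = []
    i = 1
    while i < len(env) - 1:
        if env[i] > thr and env[i] >= env[i - 1] and env[i] >= env[i + 1]:
            peaks.append(i)
            i += refractory                             # skip the refractory period
        else:
            i += 1
    return np.array(peaks)

# Synthetic ECG: ten unit "R spikes", one per second, at fs = 250 Hz
fs = 250.0
ecg = np.zeros(2500)
ecg[125::250] = 1.0
peaks = detect_r_peaks(ecg, fs=fs)
rri = np.diff(peaks) / fs                               # R-R intervals in seconds
```

The resulting RRI series would then feed the elimination step of Section III before spectrum estimation.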
V. CONCLUSIONS

In this paper, we proposed a method for eliminating abnormal RRI and the trend from an RRI series using a stochastic model for the RRI; the model includes the assumption that RRI originating from the autonomic nervous system follows a Gaussian probability density distribution. HRV was estimated, for inpatients including patients who developed septic shock, from RRI series from which abnormal RRI and the trend had already been eliminated using the proposed method. Results show that a V-shaped temporal pattern in HRV appears during the few days before septic shock; such a V-shaped pattern is regarded as a symptom of septic shock. Continuous HRV monitoring for ICU patients might be possible because these processes can be executed automatically and in real time. Development of septic shock might be predicted by detecting a V-shaped temporal pattern in HRV, and it is expected that septic shock occurrence is preventable by presenting an alarm when a V-shaped pattern appears in continuous HRV monitoring.

ACKNOWLEDGMENTS

This study was supported by the Regional Innovation Cluster Program (City Area Type) in Southern Gifu Area: Development of Advanced Medical Equipment Using Manufacturing Technologies and Information Technologies at the Ministry of Education, Culture, Sports, Science and Technology, Japan.

REFERENCES

1. Members of the American College of Chest Physicians/Society of Critical Care Medicine Consensus Conference Committee (1992) Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. Crit. Care Med. 20:864–874
2. Moore LJ, Moore FA, Tadd SR, Jones SL et al. (2010) Sepsis in general surgery: the 2005–2007 National Surgical Quality Improvement Program perspective. Arch. Surg. 145(7):695–700
3. Akselrod S, Gordon D, Ubel FA, Shannon DC et al. (1981) Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control. Science 213:220–222
4. Moriguchi T, Hirasawa H, Oda S, Tateishi Y (2004) Analysis of heart rate variability is a useful tool to predict the occurrence of septic shock in the patients with severe sepsis. Nippon Rinsho 62(12):2285–2290
5. Hayano J (2001) Circulatory Disease and Autonomic Function, 2nd edn. 71–109, Igaku-Shoin, Tokyo
6. Oida E, Moritani T, Yamori Y (1997) Tone-entropy analysis on cardiac recovery after dynamic exercise. J. Appl. Physiol. 82:1794–1801
7. Peng CK, Havlin S, Stanley HE, Goldberger AL (1995) Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos 5(1):82–87
8. Cooley JW, Tukey JW (1965) An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation 19:297–301
9. Akaike H (1969) Power spectrum estimation through autoregressive model fitting. Ann. Inst. Statist. Math. 21:407–419
10. Malliani A, Pagani M, Lombardi F, Cerutti S (1991) Cardiovascular neural regulation explored in the frequency domain. Circulation 84(2):482–492
11. Bloomfield P (1976) Fourier Analysis of Time Series: An Introduction. 118–150, John Wiley & Sons, New York
12. Hayashi H (1999) Clinical Application of Heart Rate Variability – Physiological Significance, Pathophysiology, Prognosis. Igaku-Shoin, Tokyo
13. Okada T (2008) Intensive Lectures: Illustrated Physiology. Medical View, Tokyo
14. Kawamura Y, Yokota Y, Matsumaru N, Shirai K (2010) Elimination of abnormal heart beats and trend for heart rate variability analysis. IEICE Technical Report MBE2010-54:103–108
15. Yokota Y, Shirai K, Kawamura Y, Matsumaru N (2010) Detection of septic shock sign using heart rate variability. BPES2010 Proc. The 25th Symposium on Biological and Physiological Engineering, Okayama, Japan, pp. 243–246
Author: Yasunari Yokota
Institute: Faculty of Engineering, Gifu University
Street: Yanagido 1-1
City: Gifu
Country: JAPAN
Email: [email protected]
Review of Electromyographic Control Systems Based on Pattern Recognition

S.A. Ahmad1, A.J. Ishak1, and S.H. Ali2

1 Dept of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, Selangor, Malaysia
2 Dept of Electrical, Electronics and System, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Selangor, Malaysia
Abstract— Electromyographic control is a technique involving the detection, processing, and classification of the electromyographic signal, and it can be applied in human-assisting robots, prostheses, or rehabilitation devices. This paper reviews recent research and development in pattern-recognition-based electromyographic control systems, with an emphasis on pattern recognition control for prosthesis application. The various methods used in the different stages of the pattern-recognition-based control system are discussed in detail.
Keywords— electromyographic control, prostheses, pattern recognition, feature extraction, classification.

I. INTRODUCTION

The concept of using EMG for prosthesis control started in the 1940s. Electromyographic control means that the signal is used as the input for the control of powered prostheses; the signal is used to select and modulate a function of a multifunction prosthesis [1]. There are two types of EMG: needle and surface EMG (SEMG). Hargrove et al. [2] carried out an investigation comparing the performance of these two types of EMG for prosthesis control. In that investigation, both signals were acquired simultaneously and processed using the same methods, and both EMG types gave high classification accuracies, between 95% and 99%. Even though there is no significant difference between SEMG and needle EMG in terms of accuracy, SEMG is the preferred method because it is non-invasive and more convenient to use. Fig. 1 shows a block diagram presenting the relationship between the normal motor control system and an electromyographic control system. For amputees, the natural motor control system consisting of the joint and the output (the shaded area in the diagram) is replaced by the control mechanism and the prosthesis, respectively. The SEMG signal generated from the remnant muscle is used as the control channel of the system. Generally, electromyographic control can be divided into two types: non-pattern-recognition based and pattern-recognition based. Non-pattern-recognition-based control is basically constructed using hierarchical control, threshold control, proportional control, or finite state machines. Most of the commercial prosthetic hands use this method, in which either amplitude or level coding of the EMG signal generated by an active control muscle of the user controls the prosthesis. For example, the Otto Bock two-state system incorporates this technique to assign each prosthetic limb function to a separate control muscle [1, 3, 4].

Fig. 1 Block diagram presenting the relationship between the normal and the electromyographic control system (shaded area is removed by amputation) [Parker et al., 2006]

II. PATTERN-RECOGNITION BASED ELECTROMYOGRAPHIC CONTROL

Even though SEMG signals are stochastic, repeatable EMG patterns can be observed for different muscle contractions, and this can also be seen in amputees, even though they may not have fully functioning muscles. SEMG has also been proven an effective input for powered upper limb prostheses [2, 5].

Fig. 2 The block diagram of an electromyographic control system (ECS) based on pattern recognition

Pattern recognition aims to classify data based on statistical information extracted from the patterns, and it determines the control signal that selects the final output of the device operation. Fig. 2 shows the basic components of an ECS based on pattern recognition. The three main modules of an ECS are pre-processing, feature extraction, and classification. Control channels 1 and 2 labeled in Fig. 2 are the SEMG signals. The SEMG data are acquired from the surface of the skin by placing electrodes over the person's muscles. Different muscles are responsible for different movements; therefore, the electrodes must be placed on the muscles to be investigated. For example, the extensor and flexor muscles are responsible for wrist flexion/extension movement. It is important to place the electrodes at the correct locations, as correct placement gives strong SEMG signals and a good distinction between movements; inaccurate placement of the electrodes affects the performance of the classifier [2, 6]. Normally, the electrodes are accompanied by miniature pre-amplifiers. In common practice, the EMG signal is amplified, usually using an instrumentation amplifier with a gain of 1000–5000 [7, 8, 9, 10, 11]. The SEMG signals are then band-pass filtered (low cut-off frequency fcl = 10–20 Hz and high cut-off frequency fch = 450–500 Hz) to eliminate noise before being transferred to the ECS [7, 8, 9, 10, 11, 12]. The SEMG signal is then sampled digitally, in common practice at a sampling frequency above 1000 Hz. The ECS block diagram shown in Fig. 2 is the basic implementation of the control system. Each module plays an important role in the success of the system, but the modules can be adjusted (merged or omitted) depending on the implementation. Another important factor to consider is the computation time of the control system: Scott et al. [13] reported that 200 to 300 ms is the clinically recognized maximum delay for the response of the prosthesis.

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 556–559, 2011. www.springerlink.com

A. Pre-processing
There have been various techniques for handling the EMG data before feature extraction. Usually, data segmentation is used, which can improve the accuracy and response time of the controller. For each segment, a feature set is computed and fed to the classifier, and these processes run continuously. The data length and the windowing technique are the two main points to consider. Farina & Merletti [14] showed that a segment length of less than 128 ms leads to high bias and variance of the features, which degrade the classifier's performance. There are two main methods of data windowing: adjacent and overlapping. In the adjacent windowing technique (Fig. 3), adjacent disjoint segments of predefined length are used for feature extraction and classification after a certain processing delay τ. The τ depicted in Fig. 3 is the time required to calculate the features and classify the data. The drawback of this technique is that τ is only a small portion of the segment, so the processor stays idle for the remaining time of the segment length. This is overcome with the overlapped windowing technique: the new segment slides over the current segment, and the increment time is less than the segment length. For example, Hudgins et al. [10] processed the signal every 40 ms within a 240 ms window. Englehart & Hudgins [5] reported that a shorter segment increment produces a denser but semi-redundant stream of class decisions that can improve response time and accuracy. The overlapping windowing technique is the most common method owing to its ability to preserve the important information in the EMG signal; when the time available for signal processing is limited, the non-overlapping technique is used.

Fig. 3 Adjacent windowing technique. W: Window, D: Delay, τ: time delay [Englehart & Hudgins, 2003]

B. Feature Extraction

The feature extraction process represents the raw SEMG signal as a feature vector, which is then used to separate the desired outputs, e.g., different hand grip postures. The success of an ECS based on pattern recognition depends on the selection and extraction of features [15]. The feature extraction techniques can be grouped into two main categories:

1. Time Domain Features

The time domain (TD) features are based on the signal amplitude and are the most common choice in ECS for upper limb prosthesis application, as their computational complexity is low. The features can be obtained in a short time using a simple algorithm executed in a microprocessor.
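The overlapped windowing scheme discussed above might be sketched as follows; the defaults follow the 240 ms window and 40 ms increment of Hudgins et al., while the sampling rate and function name are our assumptions:

```python
import numpy as np

def overlapped_windows(x, fs=1000, win_ms=240, inc_ms=40):
    """Slide a win_ms analysis window forward by inc_ms (< win_ms), so a
    class decision can be produced every inc_ms; with inc_ms == win_ms
    this degenerates to the adjacent (disjoint) windowing technique."""
    win = int(win_ms * fs / 1000)
    inc = int(inc_ms * fs / 1000)
    return [x[i:i + win] for i in range(0, len(x) - win + 1, inc)]

# One second of samples at fs = 1000 Hz yields a decision every 40 ms
x = np.arange(1000)
segs = overlapped_windows(x)
```

Each returned segment would be passed to feature extraction and classification, producing the dense stream of class decisions described above.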
IFMBE Proceedings Vol. 35
S.A. Ahmad, A.J. Ishak, and S.H. Ali
The amplitude gives an indication of the activation level, duration or force of the SEMG signal, which is influenced by the following factors: the location of the electrodes, the thickness of the tissues, the system used to acquire the signal and the distribution of motor units in the muscle fibre [15]. Various methods have been reported for feature extraction of the SEMG. Hudgins et al. [10] used mean absolute value (MAV), mean absolute value slope, slope sign changes (SSC), waveform length and zero crossings (ZC). Chan et al. [16] used the same features except SSC. Other reported features are variance, root mean square (RMS), mean and standard deviation (SD) [7, 17]. Autoregressive (AR) models have also been used for EMG classification [8, 18]. With AR, the value at the current point of an EMG time series is predicted from several previous points. This method can produce high separability between different limb functions; however, it requires a more complicated computation process. 2. Time-Frequency Domain Features The time-frequency domain (TFD) methods overcome the limitation of the TD methods, which are acceptable only for stationary signals. The EMG signal, however, is a non-stationary signal with high-frequency characteristics, and with TFD the performance of the control system can be increased. Some of the methods used in TFD are the short-time Fourier transform (STFT), wavelet transform (WT) and wavelet packet transform (WPT). In general, the difference between these methods is the partitioning of the time-scale axis. In the STFT, the EMG signal is mapped into the frequency components present within an interval of time (window). A suitable window size must be determined beforehand, as a small window gives good time resolution but poor frequency resolution, and vice versa. The partitioning ratio of the STFT is fixed: once specified, each cell has an identical aspect ratio.
To overcome the resolution problem of the STFT, the WT was developed. The WT has a variable partitioning ratio in which the aspect ratio of the cells varies such that the frequency resolution is proportional to the centre frequency. WPT is a generalization of the WT that allows the best-adapted analysis of the signal: it provides an adaptive partitioning in which a complete set of partitions is offered as alternatives, and the best one for a given application is selected [19]. Englehart et al. [19] conducted a comparison between the TD features used by Hudgins [10] and TFD methods. Based on the classification error results, WPT was the most effective method; however, they also suggested that no method is clearly superior. Chai et al. [20] used the WT to discriminate between four motions: hand grasp, hand extension, forearm supination
and forearm pronation. Their system achieved an average accuracy of 90% by extracting twelve parameters from two-channel EMG signals; given the nature of the WT method, the system might have a high computational time. TFD methods may produce high-resolution representations, and AR models yield high-dimensional feature vectors. However, these factors cause long processing times and delays. To avoid these problems, dimensionality reduction was introduced, which reduces the dimensionality of the data while maintaining its discrimination capability. This technique also helps to reduce the memory requirement and improves the classifier's speed [9]. In general, there are two methods of dimensionality reduction: feature selection and feature projection. There are many strategies for feature selection, such as sequential forward selection, sequential backward selection, simulated annealing and genetic algorithms [15]. Feature projection is mostly used with the TFD methods: the WT produces many coefficients to represent the time-scale features, and these need to be mapped into a lower dimension. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the most commonly used methods for feature projection [9, 19]. C. Classifier The information obtained during feature extraction is then fed into a classifier. A classifier should be able to map different patterns and match them appropriately. An efficient classifier should be able to classify patterns in a short duration to meet the real-time constraint of a prosthetic device. However, due to the nature of the EMG signal, it is possible to see a large variation in the values of the features used; this variation may be due to electrode placement or sweat. In early EMG control systems, statistical classifiers were used widely until about the mid-1980s [1].
A statistical classifier such as linear discriminant analysis (LDA) searches for the feature vectors which best discriminate among the motion classes, as opposed to those which best describe the data. The LDA classifier is simple to implement and has shown high classification accuracies [15]. Later, applications of artificial neural networks (ANN) began to appear; ANNs have been used in most of the EMG classification systems reported in the literature [5, 10]. An ANN consists of many simple processing units (neurons) that can be globally programmed for computation. ANNs are trainable, and their main advantage is the ability to represent both linear and nonlinear relationships. A variety of ANN architectures and learning algorithms have been investigated, such as simple feedforward multilayer perceptrons [10]. Another technique that has been used for the classification of SEMG data is the fuzzy logic (FL) system. The most useful property of fuzzy logic is that it provides a simple way to
Review of Electromyographic Control Systems Based on Pattern Recognition
arrive at a definite conclusion based upon imprecise input information, which mimics how a person makes a decision. Because biomedical signals are not always repeatable, FL is an advantageous technique for biomedical signal processing and classification. Basically, an FL system consists of three stages: fuzzification, processing and de-fuzzification. Weir & Ajiboye [7] used a heuristic fuzzy logic approach to EMG pattern recognition for multifunction prosthesis control. Chan et al. [16] also used FL to classify single-channel EMG signals for multifunction prosthesis control. Some FL systems are used in conjunction with a neural network to classify the EMG data: Karlik et al. [8] reported the use of this method, in which the EMG features are clustered using the fuzzy c-means algorithm and then presented to an ANN. All the reported FL systems achieved high classification accuracies of about 95%.
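The feature-projection-plus-classifier chain discussed in this section (e.g. PCA followed by an LDA classifier) can be sketched with scikit-learn. The synthetic two-class "wavelet coefficient" data and all dimensions below are invented for illustration, not taken from the cited systems:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# hypothetical data: 100 segments per motion class, 64 wavelet
# coefficients each; the class means are shifted so the classes separate
grasp = rng.normal(loc=0.0, size=(100, 64))
extension = rng.normal(loc=1.0, size=(100, 64))
X = np.vstack([grasp, extension])
y = np.array([0] * 100 + [1] * 100)

# project the high-dimensional features onto 8 principal components,
# then classify the projected features with LDA
clf = make_pipeline(PCA(n_components=8), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on this easy synthetic task
```

The pipeline mirrors the two-stage structure described above: dimensionality reduction keeps the classifier fast and memory-light, while LDA supplies the discriminating decision boundary.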
III. CONCLUSIONS Table 1 summarizes some of the methods used for the feature extraction and classification processes in pattern-recognition-based ECS for upper-limb prosthesis control. It can be seen that the methods used for feature extraction vary between TD and TFD methods, and it has been reported that neither of these two processing techniques is superior. As for the classifier, ANN is the method most often used to discriminate the final output of the system. The rightmost column of the table shows the accuracy of the electromyographic control systems; the success of a system depends upon its classification accuracy, which measures the number of correct classifications achieved over a number of trials. The table shows that all accuracies fall between 90% and 98%.

Table 1 Summary of pattern-recognition based ECS for prosthesis control application

Reference  Feature                        Classifier  Accuracy
[7]        Mean, SD                       FL          94%
[8]        AR                             FC-ANN      96.1%
[9]        WPT                            SOFM/PCA    97%
[10]       MAV, ZC, SSC, waveform length  ANN         90%
[11]       MAV, WT, PCA                   ANN         94.9%
[16]       MAV, ZC, waveform length       FL          95%
[17]       AR, RMS                        GMM         95%
[19]       STFT, WPT, WT                  PCA/LDA     98%
Continuous research in this field is still needed in order to provide a control system with a high classification rate. From the summary in Table 1, it can be concluded that there
are various factors that affect the performance of the electromyographic control system, including the number of control channels used and the methods used in the feature extraction process. A reasonable number of control channels combined with a good feature extraction method will give a high classification accuracy.
REFERENCES 1. Merletti, R. (2004). Electromyography: Physiology, Engineering and Noninvasive Applications. IEEE Press, John Wiley & Sons Inc. 2. Hargrove, L., Englehart, K., & Hudgins, B. (2007). A comparison of surface and intramuscular myoelectric signal classification. IEEE Transactions on Biomedical Engineering, 54(5), 847-853. 3. Scott, R., & Parker, P. (1988). Myoelectric prostheses: state of the art. Journal of Medical Engineering & Technology, 12, 143-151. 4. Parker, P., Englehart, K., & Hudgins, B. (2006). Myoelectric signal processing for control of powered limb prostheses. Journal of Electromyography and Kinesiology, 16, 541-548. 5. Englehart, K., & Hudgins, B. (2003). A robust, real-time control scheme for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering, 50(7), 848-854. 6. Hargrove, L., Englehart, K., & Hudgins, B. (2006). The effect of electrode displacements on pattern recognition based myoelectric control. In IEEE Annual International Conference on Engineering in Medicine and Biology Society, 2203-2206. 7. Ajiboye, A., & Weir, R. (2005). A heuristic fuzzy logic approach to EMG pattern recognition for multifunction prosthesis control. IEEE Transactions on Biomedical Engineering, 52(11), 280-291. 8. Karlik, B., Tokhi, M.O., & Alci, M. (2003). A fuzzy clustering neural network architecture for multifunction upper-limb prosthesis. IEEE Transactions on Biomedical Engineering, 50, 1255-1261. 9. Chu, J., Moon, I., & Mun, M. (2006). A real-time EMG pattern recognition system based on linear-nonlinear feature projection for a multifunction myoelectric hand. IEEE Transactions on Biomedical Engineering, 53, 2232-2238. 10. Hudgins, B., Parker, P., & Scott, R.N. (1993). A new strategy for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering, 40(1), 82-94. 11. Al-Assaf, Y., & Al-Nashash, H. (2005). Surface myoelectric classification for prostheses control. Journal of Medical Engineering & Technology, 29(5), 203-207. 12. Chen, W.T., Wang, Z., & Ren, X. (2006).
Characterization of surface EMG signals using improved approximate entropy. Journal of Zhejiang University Science B, 7(10), 844-848. 13. Scott, R. (1984). An introduction to myoelectric prostheses. In UNB Monographs on Myoelectric Prostheses Series. Institute of Biomedical Engineering, UNB, Canada. 14. Farina, D., & Merletti, R. (2000). Comparison of algorithms for estimation of EMG variables during voluntary isometric contractions. Journal of Electromyography and Kinesiology, 10, 337-349. 15. Asghari Oskoei, M., & Hu, H. (2007). Myoelectric control systems - a survey. Biomedical Signal Processing and Control, 2(4), 275-294. 16. Chan, F., Yong-Sheng, Y., Lam, F., Yuan-Ting, Z., & Parker, P. (2000). Fuzzy EMG classification for prosthesis control. IEEE Transactions on Rehabilitation Engineering, 8(3). 17. Huang, Y., Englehart, K., Hudgins, B., & Chan, A. (2005). A Gaussian mixture model based classification scheme for myoelectric control of powered upper limb prostheses. IEEE Transactions on Biomedical Engineering, 52(11), 1801-1811. 18. Zardoshti-Kermani, M., Wheeler, B., Badie, K., & Hashemi, R. (1995). EMG feature evaluation for movement control of upper extremity prostheses. IEEE Transactions on Neural Systems and Rehabilitation, 3(4), 324-333. 19. Englehart, K., Hudgins, B., & Parker, P. (2001). A wavelet-based continuous classification scheme for multifunction myoelectric control. IEEE Transactions on Biomedical Engineering, 48(3), 302-311. 20. Chai, L., Wang, Z., & Zhang, H. (1999). An EMG classification method based on wavelet transform. In Proceedings of the First Joint BMES/EMBS Conference, 565-568.
Speaker Verification Using Gaussian Mixture Model (GMM) H. Hussain1, S.H. Salleh2, C.M. Ting2, A.K. Ariff2, I. Kamarulafizam2, and R.A. Suraya2
1 Universiti Teknologi Malaysia, Faculty of Electrical Engineering, Skudai, Malaysia
2 Universiti Teknologi Malaysia, Center for Biomedical Engineering, Skudai, Malaysia
Abstract— This paper applies a GMM to speaker verification (SV) on Malay speech. The speaker models were trained using maximum likelihood estimation. The system was evaluated with 23 client speakers and 56 imposters, using clean Malay speech data. Twenty training utterances of 3.5 s each were used. The best performance, achieved using a 256-Gaussian imposter model and a 32-Gaussian client model, gave an EER of 3.01%. Keywords— Speaker Verification, Gaussian Mixture Model, Equal Error Rate.
I. INTRODUCTION Speaker recognition can be divided into two parts: Automatic Speaker Identification (ASI) and Automatic Speaker Verification (ASV). ASI is the process of determining the identity of a test speaker from the registered speakers using specific information retained in the speech signals; it identifies the person accessing the system by comparing the vector profile of the test speaker with each of the vector profiles of the speakers that make up the reference set. Establishing by automatic means, from the acoustics of his/her voice, whether a speaker's claimed identity is correct is termed ASV. The operation of most speaker verification (SV) systems can be divided into two phases: a training phase and a verification phase. The main focus of this paper is SV. There are 56 imposter and 23 client speakers, and three sessions of data gathering were used for testing the system. The session 1 and session 2 databases are based on the same microphone, while the session 3 database was recorded with a different type of microphone, so that the first two sessions could serve as a baseline to verify the effect of the microphone on system performance. In another experiment, the same microphone was tested across different sessions to investigate the effect on performance. In any performance evaluation, more training data means better accuracy; however, we limit our training to 20 sentences. Besides this, an evaluation of the number of Gaussians is performed. In theory, the higher the number of Gaussians, the better the accuracy: the more complex the representation of the signal, the better the representation and hence the higher the accuracy.
The vector quantization (VQ) technique is used in speaker recognition systems. This technique is easy to implement through the use of the Euclidean distance measure; however, it provides less accuracy in terms of performance if this algorithm alone is used for speaker verification. The VQ algorithm in [3] requires a large amount of training data: the authors used 100 speakers and were able to achieve a 56% performance improvement in text-dependent mode compared to text-independent mode. A neural network (NN) is another alternative for speaker recognition. An NN has no restriction on the amount of data and is thus better at classifying the data; however, the training can be long and complex [4]. Gaussian-based methods, on the other hand, have the capability of modelling statistical variations in spectral features, and speaker recognition (SR) based on Gaussians has achieved significant recognition accuracies. The use of a GMM for SR modelling in this paper is motivated by the fact that GMMs provide better performance than VQ and discrete Hidden Markov Models (HMM). The study in [5] compared text-independent speaker recognition methods using VQ distortion and discrete/continuous HMMs and concluded that the accuracy depends on the amount of available data: continuous ergodic HMMs identify speakers much more accurately than discrete ergodic HMMs, but if little data is available the VQ-distortion-based method is more robust than the continuous one. The challenge in SV is to build a model based on limited training data collected over a few sessions and then to update the model using the utterances collected while the system is in use [1]. Hence an alternative approach is needed: according to [2], an ideal system would build an SV model on limited training data collected over a few sessions and then update the model with the utterances collected when the system is in use.
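The VQ distortion measure mentioned above, an average Euclidean distance of feature frames to their nearest codeword, can be sketched as follows. The codebook and frames are toy values, not from the cited system:

```python
import numpy as np

def vq_distortion(frames, codebook):
    """Average Euclidean distortion of feature frames against a codebook."""
    # distance of every frame to every codeword: shape (n_frames, n_codewords)
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    # keep each frame's nearest-codeword distance, then average
    return np.mean(d.min(axis=1))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
frames = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(vq_distortion(frames, codebook))  # average distortion = 1/3
```

Verification with VQ amounts to comparing this distortion against the claimed speaker's codebook with a threshold, which is why it is simple to implement but limited in accuracy.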
This paper investigates the effect of different microphones and recording sessions on SV performance. A. Gaussian Mixture Model In front-end processing, the speech signals need to be preprocessed. Based on [6], several processing steps occur. First, the speech is segmented into frames using a 20-ms window at a 10-ms frame rate. Next, mel-scale cepstrum feature vectors are extracted from the speech frames. The discrete
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 560–564, 2011. www.springerlink.com
cosine transform of the log-spectral energies of the segmented frames gives the mel-scale cepstrum. The spectral energies are calculated over logarithmically spaced filters with increasing bandwidths (mel filters). A detailed description of the feature extraction steps can be found in [7, 8].
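The front end just described (mel-spaced filters over the power spectrum, followed by a DCT of the log filter energies) can be sketched as below. The FFT size, filter count and the test tone are assumptions for illustration, not the exact front end of [6-8]:

```python
import numpy as np
from scipy.fft import dct

def mel(f):
    """Hz -> mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, fs, n_filters=26, n_ceps=12):
    """Mel-scale cepstrum of one pre-windowed speech frame (a sketch)."""
    n_fft = 512
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    # triangular filters with edges spaced linearly on the mel scale
    edges = inv_mel(np.linspace(mel(0.0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(fbank @ spectrum + 1e-10)
    # the mel cepstrum is the DCT of the log filterbank energies
    return dct(log_energy, norm="ortho")[:n_ceps]

fs = 8000
frame = np.hanning(160) * np.sin(2 * np.pi * 440 * np.arange(160) / fs)
print(mfcc(frame, fs).shape)  # (12,)
```

Note the logarithmically widening filter bandwidths described in the text emerge automatically from spacing the filter edges uniformly on the mel scale.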
A Gaussian mixture density is a weighted sum of Gaussian densities, where $p_i$, $i = 1, \dots, M$, are the mixture weights and $b_i(\vec{x})$, $i = 1, \dots, M$, are the component Gaussians. In this paper, diagonal covariance matrices are primarily used for the speaker models.

B. Training of the GMM

A Gaussian mixture density is a weighted sum of $M$ component densities, as depicted in Fig. 1, and is given by the equation

$p(\vec{x} \mid \lambda) = \sum_{i=1}^{M} p_i\, b_i(\vec{x})$  (1)

where $\vec{x}$ is a $D$-dimensional random vector, $b_i(\vec{x})$, $i = 1, \dots, M$, are the component densities and $p_i$, $i = 1, \dots, M$, are the mixture weights. Each component density is a $D$-variate Gaussian function of the form

$b_i(\vec{x}) = \dfrac{1}{(2\pi)^{D/2} \lvert \Sigma_i \rvert^{1/2}} \exp\!\left( -\tfrac{1}{2} (\vec{x} - \vec{\mu}_i)^{T} \Sigma_i^{-1} (\vec{x} - \vec{\mu}_i) \right)$  (2)

Fig. 2 Training of GMM
with mean vector $\vec{\mu}_i$ and covariance matrix $\Sigma_i$. The mixture weights satisfy the constraint $\sum_{i=1}^{M} p_i = 1$. The complete Gaussian mixture density is parameterized by the mean vectors, covariance matrices and mixture weights of all component densities. These parameters are collectively represented by the notation

$\lambda = \{ p_i, \vec{\mu}_i, \Sigma_i \}, \quad i = 1, \dots, M.$  (3)

For SV, each speaker is represented by a GMM and is referred to by his/her model $\lambda$.

C. Testing of the GMM

For SV, the initial step is to verify the client through an identification tag or card. A speaker $S$ first claims his/her identity. The system retrieves the model $\lambda_S$ based on that identity and then obtains the score of the model against the test utterance $x$, that is, the likelihood of $x$ given the model $\lambda_S$. The score is then compared with the pre-specified threshold $\theta$ and the decision is made according to the equations below.
Fig. 1 Depiction of an M component Gaussian mixture density [6]

Fig. 3 Verification using GMM
$p(x \mid \lambda_S) > \theta$  [Accept]  (4)

$p(x \mid \lambda_S) < \theta$  [Reject]  (5)

The log-likelihood ratio score is used as follows:

$S = \log p(x \mid \lambda_S) - \log p(x \mid \lambda_Z)$  (6)

where $\lambda_Z$ is the imposter model, sometimes called the Universal Background Model (UBM).
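A minimal sketch of this train-and-score procedure, using scikit-learn's GaussianMixture in place of the authors' implementation; the feature dimensionality, component counts, threshold and synthetic data are all assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# hypothetical MFCC-like features: client speech centred near +1,
# background (imposter pool) speech centred near 0
client_train = rng.normal(loc=1.0, size=(500, 12))
background = rng.normal(loc=0.0, size=(2000, 12))

# diagonal-covariance GMMs, as used for the speaker models in this paper
client_gmm = GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(client_train)
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)

def llr_score(utterance):
    """Average log-likelihood ratio of an utterance against client vs UBM."""
    return np.mean(client_gmm.score_samples(utterance)
                   - ubm.score_samples(utterance))

genuine = rng.normal(loc=1.0, size=(300, 12))
imposter = rng.normal(loc=0.0, size=(300, 12))
theta = 0.0  # threshold; in practice tuned on development data
print(llr_score(genuine) > theta, llr_score(imposter) > theta)
```

A genuine utterance scores higher under the client model than under the imposter (background) model, so its ratio exceeds the threshold, while an imposter utterance falls below it.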
II. DATABASE The evaluation database consists of 15 client and 56 imposter speakers. The imposter speakers are undergraduate students ranging in age from 18 to 23 years old. For the client speakers, some of the data is used for training while the rest is used for testing; the imposter data is used only for testing. The number of Gaussian mixtures was fixed at 64, 128 and 256 for the imposter model in order to see the effect of the imposter model's Gaussian count on accuracy, while for the client model the number of Gaussian mixtures was varied over 4, 8, 16 and 32 for the same reason. The iteration count was fixed at 20 for the imposter model and 15 for the client model. There are three sessions of data gathering which were used for testing the system. The first and second sessions were gathered using microphones of the same quality, whereas the third session was gathered using a microphone of different quality. It can be concluded that the higher the number of Gaussians in the model, the better the performance of the system; this is discussed in detail in the next section.
III. RESULT AND DISCUSSION A. Effect of Different Gaussian Mixtures This paper uses a performance measure based on the equal error rate (EER). The number of Gaussians was fixed at 64, 128 and 256 for the imposter model in order to investigate the effect of the imposter model's Gaussian count on accuracy. Again, the training iteration count was fixed at 20 for the imposter model and 15 for the client model.

Table 1 64 Gaussians for Imposter Model (24MFCC)

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
64         20         4          15         4.93213     -0.490675
64         20         8          15         3.61170     -0.230974
64         20         16         15         3.94762     -0.206567
64         20         32         15         3.34774     -0.223954

Table 2 128 Gaussians for Imposter Model (24MFCC)

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
128        20         4          15         4.76830     -0.493030
128        20         8          15         3.72953     -0.240019
128        20         16         15         4.00610     -0.158542
128        20         32         15         3.29671     -0.23338

Table 3 256 Gaussians for Imposter Model (24MFCC)

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
256        20         4          15         4.99237     -0.460237
256        20         8          15         3.81661     -0.207975
256        20         16         15         4.10936     -0.189771
256        20         32         15         3.24823     -0.201801
Table 2 shows the results when the number of Gaussians for the imposter model is increased to 128, while Table 3 shows the results for 256 Gaussians. Again, the number of Gaussians for the client model was varied from 4 to 32. The performance of the system shows that 128 GMM components for the imposter model with 32 for the client model give an EER of 3.30%, while 256 components for the imposter model with 32 for the client model give an EER of 3.25%. This shows that a higher number of Gaussian mixture components gives better performance. B. Effect of Varying Recording Sessions For the evaluation of varying testing sessions, 64 GMM components for the imposter and 32 for the client were selected because they gave the best results. In order to evaluate the performance under between-session variation, the second-session data was collected with the same conditions as the first session; the model was trained and tested with different sets of sentences using the same quality of microphone. Table 4 shows the data for the first session. The second-session data was collected about one month later and is shown in Table 5. From both of these tables it can be seen that the higher the number of GMM components, the better the performance of the system. Typically, when tested with a different time session, the performance of a system deteriorates over time; however, our result contradicts this expectation. This may be because the time variation of about 4 weeks is relatively small compared to several months. Table 6 shows the performance difference between the sessions. For 256 Gaussian mixtures, there is a 7.67% improvement with the second-session data over the first-session data, while a system with 64 GMM components shows a 2.39% error difference with data trained after 4 weeks.
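The EER reported in these tables is the operating point where the false-accept and false-reject rates meet; it can be estimated from genuine and imposter score sets as sketched below, with synthetic scores standing in for real system outputs:

```python
import numpy as np

def equal_error_rate(genuine, imposter):
    """EER: threshold where false-accept rate equals false-reject rate."""
    thresholds = np.sort(np.concatenate([genuine, imposter]))
    best_diff, eer, best_th = float("inf"), 1.0, 0.0
    for th in thresholds:
        far = np.mean(imposter >= th)   # imposters wrongly accepted
        frr = np.mean(genuine < th)     # clients wrongly rejected
        if abs(far - frr) < best_diff:
            best_diff = abs(far - frr)
            eer, best_th = (far + frr) / 2, th
    return eer, best_th

rng = np.random.default_rng(3)
genuine = rng.normal(1.5, 1.0, 1000)    # synthetic client scores
imposter = rng.normal(-1.5, 1.0, 1000)  # synthetic imposter scores
eer, th = equal_error_rate(genuine, imposter)
print(round(eer, 3))
```

Sweeping the threshold trades false accepts against false rejects; reporting the crossing point gives a single number that lets configurations (e.g. 64 vs 256 Gaussians) be compared directly, as in the tables above.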
Table 4 24MFCC First Recording Session

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
64         20         32         15         3.35        -0.223954
128        20         32         15         3.30        -0.23338
256        20         32         15         3.25        -0.201801

Table 5 24MFCC Second Recording Session

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
64         20         32         15         3.43        -0.225856
128        20         32         15         3.15        -0.229685
256        20         32         15         3.01        -0.195419

Table 6 Performance Difference Between Different Sessions

Num of Gaussian for Imposter Model    Difference in Performance |%E|
64                                    2.39
128                                   4.55
256                                   7.67

C. Effect of Different Microphones

SV performance was also evaluated with different types of microphones. Voice samples recorded with a capacitor microphone, also known as a condenser microphone, will perform differently than those recorded with a foil electret microphone. Table 7 shows the third recording session, which used a microphone of different quality. Again, the client model with the higher number of Gaussian mixtures shows the best performance. Table 8 compares the EER performance across the different microphones. An imposter model with 64 Gaussians and a client model with 32 Gaussians gave an EER of 3.35%, as shown in Table 4; using another type of microphone with the same numbers of Gaussian mixtures, as shown in Table 7, the EER is 15.51%. This is a significant difference in the performance measures. Table 8 shows the performance error difference for the different numbers of Gaussians used in the experiment.

Table 7 24MFCC Third Recording Session

Imposter Model        Client Model          Equal Error Rate
Num of     Iteration  Num of     Iteration  Percentage  Threshold
Gaussian              Gaussian              (%)         (Th)
64         20         32         15         15.51       -0.374589
128        20         32         15         12.44       -0.321392
256        20         32         15         11.27       -0.321392

Table 8 Performance Difference Between Different Microphones

Num of Gaussian for Imposter Model    Difference in Performance |%E|
64                                    362.99
128                                   276.97
256                                   246.77

IV. CONCLUSION
This paper applied a GMM to SV. Experiments were set up with different numbers of Gaussian mixtures (64, 128 and 256 for the imposter model). In the experiment where speaker performance was evaluated with respect to time variation, there was little performance difference when the time variation was less than a month. There was, however, a large difference in performance when the system was trained with one type of microphone but tested with another; this shows that a system trained with one type of microphone should be tested with the same type of microphone. The best overall performance of the SV system was obtained with a 256-Gaussian model for the imposter and a 32-Gaussian model for the client, with an EER of 3.01%.
ACKNOWLEDGMENT I would like to acknowledge Centre of Biomedical Engineering (CBE) for giving me access to their database.
REFERENCES 1. Lee, C., & Gauvain, J. (1993, April). Speaker adaptation based on MAP estimation of Hidden Markov Model parameters. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 558-561. 2. Furui, S. (1991, December). Speaker-dependent feature extraction, recognition and processing techniques. Speech Communication, 505-520. 3. Rosenberg, A., & Soong, F. (1987, September). Evaluation of a vector quantization talker recognition system in text independent and text dependent modes. Computer Speech and Language, 143-157. 4. Lippmann, R.P. (1998). Neural networks for computing. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. 5. Matsui, T., & Furui, S. (1994). Comparison of text-independent speaker recognition methods using VQ-distortion and discrete/continuous HMMs. IEEE Transactions on Speech and Audio Processing, 456-458.
6. Reynolds, D.A., Quatieri, T.F., & Dunn, R.B. (2000, January/April/July). Speaker verification using adapted Gaussian mixture models. Digital Signal Processing, 19-41. 7. Reynolds, D.A. (1992, September). A Gaussian mixture modelling approach to text-independent speaker identification. Ph.D. thesis, Georgia Institute of Technology. 8. Reynolds, D.A., & Rose, R.C. (1995). Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing, 3, 72-83.
Author: Sheikh Ahmad Hadri Bin Sheikh Hussain Institute: Universiti Teknologi Malaysia Street: Skudai City: Johor Bahru Country: Malaysia Email: [email protected]
Speaker-Independent Vowel Recognition for Malay Children Using Time-Delay Neural Network B.F. Yong and H.N. Ting Department of Biomedical Engineering, Faculty of Engineering, University Malaya, Jalan Pantai Baharu, 50603 Kuala Lumpur, Malaysia
Abstract— This paper investigates speaker-independent vowel recognition for Malay children using the Time-Delay Neural Network (TDNN). Because little research has been done on children's speech recognition, the temporal structure of children's speech is not fully understood. A TDNN with 2 hidden layers is proposed to discriminate 6 Malay vowels: /a/, /e/, /ə/, /i/, /o/ and /u/. The speech database consists of vowel sounds from 360 child speakers. Cepstral coefficients were normalized as the input of the TDNN. The TDNN was tested with frame rates of 10 ms, 20 ms and 30 ms. It was found that the 30 ms frame rate produced the highest vowel recognition accuracy, 81.92%. The TDNN also showed a higher recognition rate than previous studies that used the Multilayer Perceptron. Keywords— Malay Vowel Recognition, Time Delay Neural Network, Automatic Speech Recognition, Children Speech.
I. INTRODUCTION Speech recognition involving children always has a higher error rate than for adults [1], and not many previous studies have addressed speech recognition for children. This is mainly due to the intra-speaker and inter-speaker variability in the spectral and temporal characteristics of children's speech [2], [3]. Other reasons are the variation of speech duration among children, i.e. different speeds of speaking, the differences in vocal tract length among children, and the limited grammar ability of children. Previous studies have mostly focused on the application of children's speech in interactive reading tutor systems [4]. Studies on automatic speech recognition for children are fewer than for adults because the spectral and temporal characteristics of children's speech across various age groups are hard to model in a suitable speech recognition model [3], [5]. Children's intra-speaker variability, such as different anatomical and emotional conditions, often prevents the speech recognition system from being applied in practice, and the gap between the training sets and the testing sets degrades speech recognition performance [6], [7]. The speaking rate of children is lower than that of adults and shows high inter-speaker variability [8]. Speech recognition models need to deal with the time-shift properties in children's speech. Therefore, in this study, the
speech recognition model that can learn the spectral and temporal characteristics of children's speech is presented. Because speaking rate increases with the age of the child, the articulation time of speech produced by the child's vocal tract decreases; temporal characteristics therefore become a crucial parameter in children's speech recognition. By solving the problems related to the temporal features of the speech signal, the speaker-to-speaker variability, regardless of speaking rate, can also be addressed. Previous research has used Dynamic Time Warping and time-domain normalization methods to handle the variability of dynamic speech features [9]; however, the results are not as good as those of other speech modelling systems. The speech signals need to be divided into window frames with a frame length of less than 30 ms so that the speech signals are stationary throughout the frame regardless of changes in the temporal parameters [11]. However, under this stationarity assumption there is no connection between one frame and another. Time-shift invariance of the speech signal is an important characteristic in speech recognition, and the ability of a speech recognition model to handle it is crucial to increasing recognition accuracy. Waibel et al. presented phoneme recognition using the time-delay neural network, which proved successful in handling changes of speech features from frame to frame [10], [12], [13]. The main advantage of the Time-Delay Neural Network (TDNN) is its time-delay feature: it allows the speech recognition model to learn the temporal structure and makes it insensitive to shifts in time [10]. It is therefore believed that, for vowel recognition in children, the TDNN can achieve higher recognition accuracy than the previous studies [5], [7], [9]. This study uses the TDNN to perform speaker-independent sustained vowel recognition for Malay children between 7 and 12 years old.
There are six vowels in the Malay language: /a/, /e/, /ə/, /i/, /o/ and /u/. Little work has been done on Malay speech recognition. By using TDNN, the relationship between the frame sizes of the neural network and the temporal characteristics of Malay vowels can be investigated. This study is also compared with the vowel recognition model of Ting and Yunus [6], which used a Multilayer Perceptron (MLP) with cepstral coefficients for isolated vowel recognition, and that of Cheang and Ahmad [14], which used an MLP with Mel Frequency Cepstral Coefficients (MFCC).
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 565–568, 2011. www.springerlink.com
B.F. Yong and H.N. Ting
This paper is organized as follows: Section II introduces the TDNN architecture and explains in detail how vowel recognition is performed using TDNN. Section III describes the speech data collection, speech feature extraction and the speech recognition experiments. The results are presented and discussed in Section IV. Finally, Section V concludes the study of vowel recognition using TDNN.
II. TIME-DELAY NEURAL NETWORK ARCHITECTURE
The TDNN architecture used in this study is adapted from the classical TDNN architecture proposed by Waibel et al. [10]. The original TDNN was used for phoneme recognition of adults' speech. Figure 1 shows the original Waibel TDNN, while Figure 2 shows the TDNN used in this study.

Fig. 1 Classical Time-Delay Neural Network by Waibel et al. [10]

Fig. 2 Time-delay neural network for Malay children vowel recognition

To adapt the original TDNN to vowel recognition in children, some modifications were made. In this study there are six vowels, compared to the three consonants used by Waibel. This TDNN and the original TDNN have the same number of layers: one input layer, two hidden layers and one output layer. At the input layer, this study uses 24 cepstral coefficients instead of 16 mel-scale filter bank coefficients. However, the sizes of the window frames are the same in both architectures. The proposed TDNN uses 15 window frames with frame rates of 10 ms, 20 ms and 30 ms. In the original TDNN, three input frames act as one unit, which shifts by one frame at a time until the end of the speech signal; this can also be viewed as one frame with two delay frames. Every group of three frames is fed into one frame of the first hidden layer, so shifting the window 13 times forms the 13 frames of the first hidden layer. At the input layer, 72 neuron connections therefore feed into each frame of the first hidden layer (24 neurons per frame x 3 frames).

In the original TDNN, 12 neurons per frame are used at the first hidden layer to compensate for the larger input layer. At this layer, 60 weighted connections feed into the second hidden layer (12 neurons per frame x 5 frames): one frame of neurons has 4 delay frames, so five frames of speech work together as one shifting unit, and shifting 9 times forms the 9 frames of the second hidden layer. In this study, the TDNN uses six output units instead of the three in the original TDNN, each corresponding to one of the Malay vowels.

For the learning or training of the TDNN, the back-propagation method was used. The weights were initialized between -0.5 and 0.5. The cepstral coefficients obtained from
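The sliding-window (time-delay) computation described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the hidden width of 8 units and the tanh nonlinearity are illustrative assumptions; only the frame counts (15 input frames of 24 coefficients, a 3-frame window giving 13 first-hidden-layer frames) come from the text.

```python
import numpy as np

def time_delay_layer(frames, W, b, delay=2):
    """Apply one TDNN layer: each output frame is computed from the current
    input frame plus `delay` following frames (a sliding window), with the
    SAME weights W reused at every shift (weight tying).

    frames : (T, d_in) array of input frames
    W      : (d_out, (delay + 1) * d_in) tied weight matrix
    b      : (d_out,) bias
    returns: (T - delay, d_out) array of output frames
    """
    T, d_in = frames.shape
    out = []
    for t in range(T - delay):
        window = frames[t : t + delay + 1].reshape(-1)  # concatenate 3 frames
        out.append(np.tanh(W @ window + b))             # illustrative nonlinearity
    return np.array(out)

# Shapes from the paper: 15 input frames x 24 cepstral coefficients,
# a 3-frame window (1 frame + 2 delays) -> 13 frames in the first hidden layer.
rng = np.random.default_rng(0)
x = rng.standard_normal((15, 24))
h1 = time_delay_layer(x, rng.standard_normal((8, 72)) * 0.1, np.zeros(8))
print(h1.shape)  # (13, 8)
```

Because the same weight matrix is applied at every shift, the layer responds identically to a pattern wherever it occurs in time, which is the time-shift invariance the paper relies on.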
IFMBE Proceedings Vol. 35
Speaker-Independent Vowel Recognition for Malay Children Using Time-Delay Neural Network
feature extraction was normalized to between -1.0 and 1.0 as well. Then it was fed into the proposed TDNN input layer. This is the same as in the multilayer perceptron (MLP) [6]. However, instead of updating the weights every epoch as in the MLP, each weight was updated by the average value over all corresponding connections [10].
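The tied-weight update just described amounts to averaging the gradient contributions of all time-shifted copies of a weight before applying the update. A minimal sketch (the shapes and values are illustrative, not from the paper):

```python
import numpy as np

def averaged_tied_gradient(per_shift_grads):
    """TDNN weight tying: the same weight matrix is applied at every time
    shift, so its update uses the AVERAGE of the gradients computed at each
    shift, rather than a separate update per connection."""
    return np.mean(per_shift_grads, axis=0)

# e.g. gradients from 13 time shifts of an (8, 72) tied weight matrix
grads = np.stack([np.full((8, 72), float(t)) for t in range(13)])
update = averaged_tied_gradient(grads)
print(update[0, 0])  # 6.0, the mean of 0..12
```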
Table 1 Summary of vowel recognition results using TDNN with window sizes of 10 ms, 20 ms and 30 ms

Window size   Test 1   Test 2   Test 3   Average accuracy (%)
10 ms         77.04    80.12    80.53    79.23
20 ms         74.62    80.53    81.64    78.93
30 ms         75.30    81.92    80.80    79.34
III. SPEECH RECOGNITION EXPERIMENT

A. Speech Database and Feature Extraction

In this study, vowel samples were collected from Malay children between 7 and 12 years old. The sampling rate was set at 20 kHz, which fulfils the minimum requirement suggested by previous studies [15]. Goldwave was used to record and store the sound samples. Sustained vowel sounds were recorded from a total of 180 male and 180 female children. The recordings were made in a quiet room; the subjects were asked to sit with the body upright and the mouth facing the microphone, and each Malay vowel was sustained for more than 5 seconds. Data from 240 speakers were used for training the TDNN, while the remaining 120 speakers were used to test the ASR model. In both sets, equal numbers of subjects from each gender and age group were allocated. In addition, a three-fold cross-validation test was used to obtain average values: the speech samples were divided into 3 groups of 120 speakers, and in each test 2 groups were used as the training set while the remaining group was used as the testing set.

For feature extraction, the signal was segmented into 15 frames with a frame size of 10 ms (the experiment was repeated with 20 ms and 30 ms). The samples were then analyzed using linear predictive coding (LPC) with an order of 24, following Rabiner [16]. The LPC coefficients were converted to cepstral coefficients (CC), which are more robust for speech recognition. For TDNN training, these CCs were normalized to between -0.5 and 0.5.

B. Recognition Experiment Conditions

In this study, two experiments were done as below:

Frame window size study: The window size of the TDNN was varied over 10 ms, 20 ms and 30 ms, and the recognition accuracy of the 3 different frame rates was compared.
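The LPC-to-cepstrum conversion mentioned above follows the standard recursion given by Rabiner and Juang: c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k}. A minimal sketch, with illustrative LPC coefficients (the sign convention of a_n depends on how the LPC polynomial is defined, and the gain term c_0 is omitted here); the final rescaling to [-0.5, 0.5] mirrors the normalization described in the text:

```python
import numpy as np

def lpc_to_cepstrum(a, n_ceps):
    """Convert LPC coefficients a[0..p-1] (a[i] ~ a_{i+1}) to cepstral
    coefficients via the standard recursion:
        c_n = a_n + sum_{k=1}^{n-1} (k/n) * c_k * a_{n-k}
    where the a_n term is dropped for n > p."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

a = 0.1 * np.ones(24)                    # illustrative order-24 LPC coefficients
cc = lpc_to_cepstrum(a, 24)              # 24 cepstral coefficients
cc_norm = cc / (2 * np.abs(cc).max())    # rescale range to [-0.5, 0.5]
print(cc_norm.min() >= -0.5 and cc_norm.max() <= 0.5)  # True
```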
IV. RESULTS AND DISCUSSION

The results of testing the ASR model with window sizes of 10 ms, 20 ms and 30 ms are shown in Table 1.
The 30 ms window size (frame rate) produced the highest average vowel recognition rate of 79.34%. The confusion matrix for the highest single-test accuracy of 81.92% (obtained using the 30 ms window size) is shown in Table 2. Table 3 compares the current study with the studies of Ting and Yunus [6] and Cheang and Ahmad [14]; the accuracy obtained in the current study outperformed those obtained in the other studies.

Table 2 Confusion matrix of the highest vowel recognition accuracy
       /a/   /e/   /ə/   /i/   /o/   /u/   Accuracy (%)
/a/    102     1    10     1     6     0   85.00
/e/      0   107     5     5     2     1   89.17
/ə/      2     4   102     3     4     5   85.00
/i/      0     6    12    87     8     7   72.50
/o/      2     0     6     1    93    18   77.50
/u/      0     1    12     0     9    98   82.35
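The per-vowel accuracies in the last column of Table 2 are the diagonal counts divided by the row totals (120 test utterances per vowel). A quick check, assuming rows are the spoken vowel and columns the recognized vowel; note that with these counts the /u/ row computes to 81.67 rather than the printed 82.35, which suggests a small rounding or tallying slip in the table:

```python
import numpy as np

# Confusion matrix from Table 2 (rows = spoken vowel, columns = recognized vowel)
conf = np.array([
    [102,   1,  10,   1,   6,   0],   # /a/
    [  0, 107,   5,   5,   2,   1],   # /e/
    [  2,   4, 102,   3,   4,   5],   # /ə/
    [  0,   6,  12,  87,   8,   7],   # /i/
    [  2,   0,   6,   1,  93,  18],   # /o/
    [  0,   1,  12,   0,   9,  98],   # /u/
])
per_class = 100 * np.diag(conf) / conf.sum(axis=1)  # per-vowel accuracy (%)
print(round(float(per_class[3]), 2))  # 72.5  (the /i/ row, as in Table 2)
print(round(float(per_class[5]), 2))  # 81.67 (the /u/ row; table prints 82.35)
```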
Table 3 Comparison between TDNN and other MLP architectures

                          TDNN     MLP-1 (Ting)   MLP-2 (Cheang)
Accuracy of recognition   81.92%   76.25%         74.00%
The speech recognition accuracy of the TDNN was higher than that of the MLPs used in the previous studies, showing that the TDNN performed better at vowel recognition. However, the recognition rates of /i/ and /o/ were lower than those of the other vowels. Compared to the studies of Ting [6] and Cheang [14], the TDNN also has a smaller network size and faster training time. Due to the limited data in Cheang's study, accuracy degraded as the number of testing subjects increased, whereas the study of Ting [6] used almost the same number of samples as this study. By producing a higher accuracy of 81.92%, compared to 76.25% for Ting's MLP, the TDNN proved more efficient in the speech recognition of children's vowels.
V. CONCLUSION

Speaker-independent vowel recognition using a Time-Delay Neural Network was presented. The results showed that the TDNN was capable of discriminating different Malay vowels and that the temporal characteristics of the speech were learned properly by the recognition model. An accuracy of 81.92% suggests that the model is suitable for vowel recognition in children.
REFERENCES
1. Elenius D and Blomberg M (2004) Comparing speech recognition for adults and children. Proceedings of FONETIK 2004, pp 156-159
2. Benzeghiba M, Mori R, Deroo O, Dupont S, Erbes T, Jouvet D, Fissore L, Laface P, Mertins A, Ris C, Rose R, Tyagi V and Wellekens C (2007) Automatic speech recognition and speech variability: A review. Speech Communication 49:763-786
3. Potamianos A and Narayanan S (2003) Robust recognition of children's speech. IEEE Transactions on Speech and Audio Processing 11(6):603-616
4. Hagen A, Pellom B and Cole R (2007) Highly accurate children's speech recognition for interactive reading tutors using subword units. Speech Communication 49:861-873
5. Wilpon J and Jacobsen C (1996) A study of speech recognition for children and the elderly. IEEE International Conference on Acoustics, Speech and Signal Processing, 1996, pp 349-452
6. Ting H N and Yunus J (2004) Speaker-independent Malay vowels recognition of children using multilayer perceptron. IEEE Regional Conference TENCON, 2004, pp 68-71
7. Smith B L (1992) Relationship between duration and temporal variability in children's speech. Journal of the Acoustical Society of America 91:2165-2174
8. Ting H N, Yunus J and Lee C W (2002) Speaker-independent Malay isolated sound recognition. Proceedings of the 9th International Conference on Neural Information Processing, 2002, pp 2405-2408
9. Ting H N, Yunus J, Hussain S and Cheah E L (2001) Malay syllable recognition based on multilayer perceptron and dynamic time warping. Sixth International Symposium on Signal Processing and Its Applications, 2001, pp 743-744
10. Waibel A, Hanazawa T, Hinton G, Shikano K and Lang K J (1989) Phoneme recognition using time-delay neural networks. IEEE Transactions on Acoustics, Speech and Signal Processing 37:328-337
11. Sivakumar S C, Robertson W and Macleod K (1992) Improving temporal representation in TDNN structure for phoneme recognition. International Joint Conference on Neural Networks, 1992, pp 728-733
12. Sugiyama M, Sawai H and Waibel A (1991) Review of TDNN (time-delay neural network) architectures for speech recognition. IEEE International Symposium on Circuits and Systems, 1991, pp 582-585
13. Lang K, Waibel A and Hinton G (1990) A time-delay neural network architecture for isolated word recognition. Neural Networks 3(1):23-43
14. Cheang S Y and Ahmad A M (2008) Malay language text-independent speaker verification using NN-MLP classifier with MFCC. International Conference on Electronic Design, 2008, pp 1-5
15. Rabiner L and Juang B H (1993) Fundamentals of speech recognition. Prentice Hall, New Jersey
Author: Dr. Hua-Nong TING Institute: University of Malaya Street: Jalan Pantai Baharu City: Kuala Lumpur Country: Malaysia Email: [email protected]
Feasibility of Using the Wavelet-Phase Stability in the Objective Quantification of Neural Correlates of Auditory Selective Attention

Y.F. Low1,2, K.C. Lim2, Y.G. Soo2, and D.J. Strauss1,3,4

1 Computational Diagnostics and Biocybernetics Unit, Saarland University Hospital and Saarland University of Applied Sciences, Saarbruecken, Germany
2 Faculty of Electronics & Computer Engineering, Universiti Teknikal Malaysia Melaka (UTeM), Melaka, Malaysia
3 Key Numerics, Saarbruecken, Germany
4 Leibniz-Institute for New Materials, Saarbruecken, Germany
Abstract— In this paper, we study the feasibility of using the wavelet-phase stability (WPS) in extracting the correlates of selective attention by comparing its performance to widely used linear interdependency measures, i.e., the wavelet coherence and the correlation coefficient. The outcome reveals that the phase measure outperforms the others in discriminating attended and unattended single-sweep auditory late responses (ALRs). In particular, the number of response sweeps needed to perform the differentiation is largely reduced by using the proposed measure. It is concluded that a faster (in terms of using fewer sweeps) and more robust objective quantification of selective attention can be achieved by using the phase stability measure.

Keywords— wavelet, phase stability, auditory selective attention.
I. INTRODUCTION

Synchronization of the EEG provides crucial information for understanding higher cognitive and neuronal processes [1], [2], [3]. In [4] it is argued that EEG phase synchronization reflects the exact timing of the communication between distant but functionally connected neural populations, the exchange of information between global and local neuronal networks, and the sequential temporal activity of neural processes in response to external stimuli (see [4] for a detailed review). Event-related potentials (ERPs) are widely used in studies of neuronal synchronization associated with several higher cognitive processes. However, the amplitude information of single-sweep event-related potentials has turned out to be fragile in some cases [5], [6]: large amplitude fluctuations can easily be introduced by slight accidental changes in the measurement setup over time. Since the signals exhibit a high degree of variance from one sweep to another, even robust amplitude-independent synchronization measures such as the time-scale entropy [7] can hardly be applied to assess their synchronization stability. To address this issue, we have proposed a novel approach to identify the neural correlates of auditory selective attention which employs a wavelet-based measure that
highlights the phase information of the EEG exclusively. In particular, the wavelet-phase stability (WPS) of single-sweep auditory late response (ALR) sequences has been shown to be linked to attention [8].

In signal processing, Oppenheim and Lim [9], [10] emphasized the importance of the phase of signals using the Fourier representation. They used numerical experiments to illustrate the similarity between a signal and its phase-only reconstruction. More recently, the significance of phases in the continuous wavelet representation of analytic signals has also been shown [11]. In addition, a statistical interpretation of the usefulness of phase information in signal and image reconstruction has been given in [12]: the authors demonstrated that a random distortion of the phases can dramatically distort the reconstructed signal, while a random distortion of the magnitudes will not. Taken together, previous studies strongly support the view that the phase of a signal carries more important information than the amplitude.

Generally, the extraction of the EEG phase can be done via two closely related approaches: the Hilbert transform (or analytic signal approach) and the wavelet transform. As pointed out by most studies, the performance of the two methods is comparable [13], [14], [15], [16]. However, the Hilbert phase and Hilbert amplitude have a direct physical meaning only for narrow-band signals [17], [18], whereas the wavelet transform can be thought of as equivalent to band-pass filtering of the signal, which makes pre-filtering unnecessary.

The main goal of this paper was to study the feasibility of using the WPS in extracting large-scale neural correlates of selective auditory attention. To accomplish this, the performance of the WPS was compared with two other popular methods, i.e., the wavelet coherence and the correlation coefficient, by means of the moving mean approach.
The main interest of this study was to deepen our understanding of the proposed wavelet-phase stability of ALR sequences and to show its potential use as a synchronization measure in analyzing neural correlates of auditory selective attention.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 569–573, 2011. www.springerlink.com
Y.F. Low et al.
II. METHODS

A. Subjects and Materials

A total of 10 student volunteers (mean age 26.7, standard deviation 2.5; 4 females) from Saarland University entered the study. All subjects gave informed consent prior to their participation, and the experiments were conducted in accordance with the Declaration of Helsinki. The maximum entropy auditory paradigm was used (more details can be found in [19]). In each experiment, subjects performed the attention task (i.e., detecting the target tones in a series of three different tones) for 10 minutes, followed by another 10 minutes of relaxing (with no attention). ALRs were acquired using a commercially available bioamplifier (g.tec USBamp, Guger Technologies, Austria) with a sampling frequency of 512 Hz. Single sweeps (i.e., individual responses to tones) were recorded from electrodes placed at the left and right mastoid (EEG channels), the vertex (reference), and the upper forehead (ground). Electrode impedances were strictly maintained below 5 kΩ in all measurements. The data obtained were band-pass filtered with an FIR filter with cut-off frequencies of 1-30 Hz. An additional artifact filter was used to remove responses that exceeded 50 µV.
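The artifact rejection step just described (dropping single sweeps whose amplitude exceeds 50 µV) can be sketched in a few lines; the array shapes are illustrative assumptions:

```python
import numpy as np

def reject_artifacts(sweeps, threshold_uv=50.0):
    """Drop single sweeps whose peak absolute amplitude exceeds the artifact
    threshold (50 µV in this study).  sweeps: (n_sweeps, n_samples) in µV."""
    keep = np.abs(sweeps).max(axis=1) <= threshold_uv
    return sweeps[keep]

# 3 clean sweeps at 10 µV and 2 artifact sweeps at 80 µV
sweeps = np.vstack([10 * np.ones((3, 100)), 80 * np.ones((2, 100))])
print(reject_artifacts(sweeps).shape)  # (3, 100)
```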
B. Moving Mean Wavelet-Phase Stability

We employed time-scale coherence measures based on the complex wavelet transform. The quality and stability of the response over the stimulus sequences are evaluated in terms of the time-resolved phase information. According to [20], the phase stability of a sequence (x_m : m = 1, \ldots, M) of M sweeps, \Gamma(s, \tau), is defined by:

\Gamma(s,\tau) = \frac{1}{M} \left| \sum_{m=1}^{M} e^{\,i \arg\left( (\mathcal{W}_\psi x_m)(s,\tau) \right)} \right|  (1)

In this study, we used the 4th derivative of the complex Gaussian function as the wavelet \psi. In general, Eq. (1) yields a value in the range of 0 to 1. We have a perfect phase stability for a particular s and \tau for \Gamma(s,\tau) = 1 and a decreasing stability for smaller values due to phase jittering. We defined a moving mean wavelet-phase stability as a function of m sweeps as in the following equation:

\Gamma_m(s,\tau) = \frac{1}{m} \left| \sum_{k=1}^{m} e^{\,i \arg\left( (\mathcal{W}_\psi x_k)(s,\tau) \right)} \right|, \quad m = 1, \ldots, M.  (2)

C. Moving Mean Wavelet Coherence

Wavelet coherence was first introduced by [21] and has been commonly used in evaluating synchronization in EEG [22], [23], [24]. Furthermore, it has recently been used for a reliable detection of auditory habituation [25]. It is noted that the wavelet coherence measure applied here is adopted from [25], which is similar to [21]. For s, \tau \in \mathbb{R}, the wavelet coherence of two signals x and y, with a fixed smoothing parameter \delta > 0 and wavelet \psi, is defined as the cross-wavelet spectrum of the two signals normalized by their corresponding autospectra:

\mathrm{WCo}_\delta(x,y)(s,\tau) = \frac{\left| S_\delta\left( (\mathcal{W}_\psi x)(s,\tau)\, \overline{(\mathcal{W}_\psi y)(s,\tau)} \right) \right|}{\sqrt{ S_\delta\left( |(\mathcal{W}_\psi x)(s,\tau)|^2 \right)\, S_\delta\left( |(\mathcal{W}_\psi y)(s,\tau)|^2 \right) }}  (3)

where S_\delta denotes smoothing over the translation parameter with width \delta. Due to the Schwartz inequality, Eq. (3) is constrained to a value between 0 and 1. Then, the inter-sweep wavelet coherence of a sequence (x_m : m = 1, \ldots, M) of sweeps is defined as the coherence of consecutive sweeps:

\mathrm{WCo}_\delta^{(m)}(s,\tau) = \mathrm{WCo}_\delta(x_m, x_{m+1})(s,\tau), \quad m = 1, \ldots, M-1.  (4)

Finally, we defined the moving mean wavelet coherence in a similar way to the moving mean wavelet-phase stability:

\Upsilon_m(s,\tau) = \frac{1}{m} \sum_{k=1}^{m} \mathrm{WCo}_\delta^{(k)}(s,\tau), \quad m = 1, \ldots, M-1.  (5)
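The phase stability of Eq. (1) and its moving mean version of Eq. (2) can be sketched numerically. In this minimal illustration each sweep is represented by a single complex wavelet coefficient at one fixed (s, τ) point (an assumption for brevity; in the paper the coefficients come from the complex Gaussian wavelet transform of each sweep):

```python
import numpy as np

def phase_stability(coeffs):
    """Gamma at one (s, tau) point for M sweeps:
    Gamma = (1/M) * | sum_m exp(i * arg W_m) |,
    where W_m is the complex wavelet coefficient of sweep m.
    1 = perfectly aligned phases, -> 0 for uniformly spread phases."""
    unit_phasors = np.exp(1j * np.angle(coeffs))
    return np.abs(unit_phasors.mean())

def moving_mean_phase_stability(coeffs):
    """Gamma_m for m = 1..M: stability of the first m sweeps (Eq. 2)."""
    return np.array([phase_stability(coeffs[:m])
                     for m in range(1, len(coeffs) + 1)])

# Identical phases -> stability 1; uniformly spread phases -> near 0.
aligned = np.exp(1j * 0.3) * np.ones(50)
spread = np.exp(1j * np.linspace(0, 2 * np.pi, 50, endpoint=False))
print(round(phase_stability(aligned), 6))  # 1.0
print(round(phase_stability(spread), 6))   # 0.0
```

Because only the argument of each coefficient enters the sum, scaling any sweep's amplitude leaves the measure unchanged, which is the amplitude-insensitivity exploited throughout the paper.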
D. Moving Mean Correlation Coefficient

The correlation coefficient is often referred to more specifically as Pearson's correlation coefficient, or the Pearson product-moment correlation coefficient. It is a measure of the linear relationship between two signals and has been used in EEG synchronization investigations. For a sequence (x_m : m = 1, \ldots, M) of M sweeps, let \bar{x} = \frac{1}{M} \sum_{k=1}^{M} x_k be the average of the sequence. The moving mean correlation coefficient of the sequence and \bar{x} is defined in terms of their covariance \mathrm{cov} and standard deviations \sigma_{x_m}, \sigma_{\bar{x}}:

\mathrm{T}_m = \frac{\mathrm{cov}(x_m, \bar{x})}{\sigma_{x_m}\, \sigma_{\bar{x}}}, \quad m = 1, \ldots, M.  (6)

This gives a value in [-1, 1]. If there is no relationship between the two signals, the correlation coefficient will be 0; if there is a perfect positive match, it will be 1; if there is a perfect inverse relationship, it will be -1. The significance level (i.e., p-value) is calculated
by transforming the correlation to create a t statistic having n-2 degree of freedom, where n is the number of subjects.
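Eq. (6) and the t statistic just described can be sketched with numpy. This assumes \bar{x} is the average over all M sweeps (the most direct reading of the definition); the sweep data are illustrative:

```python
import math
import numpy as np

def moving_mean_corr(sweeps):
    """T_m of Eq. (6): Pearson correlation between each sweep x_m and the
    average sweep, giving one value in [-1, 1] per sweep."""
    avg = sweeps.mean(axis=0)
    return np.array([np.corrcoef(x, avg)[0, 1] for x in sweeps])

def corr_t_statistic(r, n):
    """t statistic with n - 2 degrees of freedom used for the p-value:
    t = r * sqrt((n - 2) / (1 - r^2))."""
    return r * math.sqrt((n - 2) / (1 - r * r))

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 4 * np.pi, 256))          # 'true' response
sweeps = template + 0.5 * rng.standard_normal((20, 256))   # 20 noisy repeats
r = moving_mean_corr(sweeps)
print(bool((r > 0).all()))                  # True: each sweep tracks the mean
print(round(corr_t_statistic(0.8, 10), 3))  # 3.771
```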
III. RESULTS AND DISCUSSION

The scale parameter s of the complex wavelet analysis was chosen as 40. Note that this scale can be associated with a pseudo-frequency of 6.4 Hz. Regarding the translation parameter τ, we considered the interval of 70-120 ms where the N1 wave appears. Figure 1 (a) shows the grand average of the normalized moving mean wavelet-phase stability for the target tones from the maximum entropy auditory attention experiments and its corresponding significance test results (i.e., one-way ANOVA). Note that the horizontal dashed lines on the right of the figure indicate the significance level p < 0.05. As one can observe, as few as seven sweeps are needed to significantly discriminate the attended and unattended conditions. For the evaluation using the moving mean wavelet coherence, the smoothing parameter δ was set to 20 as in [25], since we study the same interval of interest. The outcome is shown in Figure 1 (b). In general, the performance of the wavelet coherence is not encouraging: although the wavelet coherence of the target tones shows a significant difference at certain sweeps, the difference fluctuates over the sweeps. The result of using the correlation coefficient as a synchronization measure is illustrated in Figure 1 (c). The graph shows the results for both attended and unattended sweeps, and the p-values are computed using the t-test. At least 23 sweeps are required to significantly differentiate the attended and unattended conditions for the target tones. Typically, a large number of ALR sweeps is used in identifying neural correlates of auditory selective attention due to the poor signal-to-noise ratio. The number of sweeps used in the pioneering studies is typically more than 100, and some studies even analyzed more than 1000 sweeps (e.g., [26], [27], [28], [29]). This has led to lengthy EEG recording and processing times.
Furthermore, subjects are easily exhausted while performing the task. A number of studies in the field of EEG synchronization use the coherence measure. However, it has been argued that coherence cannot be regarded as a specific measure of synchronization [30], [31], [32], since coherence does not separate the effects of the covariance of the amplitude waveforms from that of the phases of two oscillatory signals. Since the core of synchronization is the adjustment of phases and not of amplitudes, it should be detected by a measure that neglects amplitude variations.
Fig. 1 The grand average of the (a) normalized moving mean wavelet-phase stability, (b) moving mean wavelet coherence, and (c) moving mean correlation coefficient, and their corresponding significance test results for the target tones at the N1 wave. Note that the horizontal dashed line in the figure (right) indicates the significance level p < 0.05

It has been highlighted by the authors in [4] that EEG phase synchronization reflects the exact timing of communication between distant but functionally related neural populations, the exchange of information between global and local neuronal networks, and the sequential temporal activity of neural processes in response to incoming sensory stimuli.
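As noted at the start of this section, the scale s = 40 corresponds to a pseudo-frequency of 6.4 Hz at the 512 Hz sampling rate. This follows the standard scale-to-frequency relation f = Fc * fs / s; a small sketch, assuming a wavelet center frequency Fc = 0.5 (a value commonly associated with the 4th-order complex Gaussian wavelet, and an assumption here rather than a parameter stated in the paper):

```python
def pseudo_frequency(scale, fs, center_freq=0.5):
    """Map a wavelet scale to its pseudo-frequency in Hz via
    f = Fc * fs / s (Fc: wavelet center frequency, fs: sampling rate)."""
    return center_freq * fs / scale

print(pseudo_frequency(40, 512))  # 6.4 Hz, the value used in the paper
```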
Consequently, the phase of ongoing EEG oscillations (at certain frequencies) must undergo resetting (or realignment) due to exogenous processes (i.e., the physical properties of the incoming auditory stimulation) as well as endogenous processes (i.e., the performance of the attentional task). Therefore, methods that analyze the phase of the EEG are more desirable and appropriate, because phase values might contain crucial and meaningful information related to cognitive processes. To gain more insight into the phase-stability measure, we evaluated it using signals with known properties. We generated EEG data based on the two well-known theories of ERP genesis: the phase-reset and the additive (also known as evoked) models. To recapitulate, the additive model states that independent responses (triggered by the experimental events of interest) are added to the ongoing EEG (which is considered "noise"), whereas the phase-resetting theory states that the experimental events reset the phase of the ongoing oscillations. Figure 2 shows the simulated ERPs and the corresponding phase stability evaluation results. We observe that the phase stability decreases with larger phase jittering of the single sweeps (Figure 2 (a) and (b)), while EEGs with different amplitudes show the same phase behavior (Figure 2 (c) and (d)). This indicates that the phase stability measure is sensitive only to the phase of the data and not to amplitude changes.
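The two generative models described above can be sketched as follows. All parameters (10 Hz oscillation, 0.5 s epochs, jitter and amplitude values) are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 256)   # 0.5 s epoch
f = 10.0                       # ongoing oscillation frequency (Hz)

def phase_reset_sweeps(n, jitter=0.2):
    """Phase-reset model: each sweep is the ongoing oscillation whose phase
    is realigned at stimulus onset, up to some residual jitter."""
    return np.array([np.sin(2 * np.pi * f * t + jitter * rng.standard_normal())
                     for _ in range(n)])

def additive_sweeps(n, amp=1.0):
    """Additive (evoked) model: a fixed evoked response added to ongoing
    EEG with random phase (the 'noise')."""
    evoked = amp * np.sin(2 * np.pi * f * t)
    return np.array([evoked + np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(n)])

reset = phase_reset_sweeps(30)
added = additive_sweeps(30)
print(reset.shape, added.shape)  # (30, 256) (30, 256)
```

Increasing `jitter` in the phase-reset model lowers the phase stability of the sweeps, while changing `amp` in the additive model leaves it unchanged, matching the behavior reported for Figure 2.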
Fig. 2 Demonstration of the phase stability measure on the simulated EEG data. (a) EEG data generated based on the phase reset theory. (b) Phase stability of the data in (a). (c) EEG data generated according to the additive theory with different amplitudes. (d) Phase stability of the data in (c)
IV. CONCLUSION

We have presented a performance study of using the WPS in identifying neural correlates of auditory selective attention as reflected in single-sweep ALRs. It is shown that the method requires fewer response sweeps to discriminate the attentional conditions (attended versus unattended) than the widely used wavelet coherence and correlation coefficient methods. It is concluded that the WPS is feasible as a synchronization measure for the objective evaluation of large-scale neural correlates of auditory selective attention.
ACKNOWLEDGMENT

The authors would like to thank the Universiti Teknikal Malaysia Melaka (UTeM) and the German Federal Ministry of Education and Research (BMBF), Grant FZ: 17N1208, for the generous financial support.
REFERENCES
1. F. Varela, J.-P. Lachaux, E. Rodriguez, and J. Martinerie (2001) The brainweb: synchronization and large scale integration. Nature Reviews Neuroscience 2:229-239.
2. K. J. Friston (2001) Brain function, nonlinear coupling, and neuronal transients. Neuroscientist 7:406-418.
3. W. Singer (2001) Consciousness and the binding problem. Ann NY Acad Sci 929:123-146.
4. P. Sauseng and W. Klimesch (2008) What does phase information of oscillatory brain activity tell us about cognitive processes? Neurosci Biobehav Rev 32(5):1001-1013.
5. Kolev and J. Yordanova (1997) Analysis of phase-locking is informative for studying event-related potentials. Biological Cybernetics 76:229-235.
6. J. Fell (2007) Cognitive neurophysiology: beyond averaging. Neuroimage 37:1069-1072.
7. D. J. Strauss, W. Delb, and P. K. Plinkert (2004) Analysis and detection of binaural interaction in auditory brainstem responses by time-scale representations. Computers in Biology and Medicine 24:461-477.
8. Y. F. Low, C. Trenado, W. Delb, F. I. Corona-Strauss, and D. J. Strauss (2007) The role of attention in the tinnitus decompensation: reinforcement of a large-scale neural decompensation measure. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Lyon, France, 2007, pp 2485-2488.
9. V. Oppenheim and J. S. Lim (1981) The importance of phase in signals. Proceedings of the IEEE 69(5):529-541.
10. M. H. Hayes, J. S. Lim, and A. V. Oppenheim (1980) Signal reconstruction from the phase or magnitude. IEEE Trans Acoust, Speech, Sig Processing 28:672-680.
11. Grossmann, R. Kronland-Martinet, and J. Morlet (1987) Reading and understanding continuous wavelet transforms. Wavelets: Time-Frequency Methods and Phase Space, Proceedings of the International Conference, Marseille, France, 1987.
12. X. Ni and X. Huo (2007) Statistical interpretation of the importance of phase information in signal and image reconstruction. Statistics & Probability Letters 77:447-454.
13. Bruns (2004) Fourier-, Hilbert-, and wavelet-based signal analysis: are they really different approaches? J. of Neuroscience Methods 137:321-332.
14. M. L. Quyen, J. Foucher, J.-P. Lachaux, E. Rodriguez, A. Lutz, J. Martinerie, and F. J. Varela (2001) Comparison of Hilbert transform and wavelet methods for the analysis of neural synchrony. J. Neuroscience Methods 111:83-98.
15. T. Kijewski-Correa and A. Kareem (2006) Efficacy of Hilbert and wavelet transforms for time-frequency analysis. Journal of Engineering Mechanics 132(10):1037-1049.
16. R. Quian Quiroga, A. Kraskov, and P. Grassberger (2002) Performance of different synchronization measures in real data: a case study on electroencephalographic signals. Physical Review E 65:041903, 1-14.
17. B. Boashash (1992) Estimating and interpreting the instantaneous frequency of a signal - part 1: fundamentals. Proceedings of the IEEE 80(4):83-95.
18. E. Bedrosian (1962) A product theorem for Hilbert transforms. United States Air Force Project RAND, Santa Monica, California, Memorandum.
19. Y. F. Low, F. I. Corona-Strauss, P. Adam, and D. J. Strauss (2007) Extraction of auditory attention correlates in single sweeps of cortical potentials by maximum entropy paradigms and its application. Proceedings of the 3rd Int. IEEE EMBS Conference on Neural Engineering, The Kohala Coast, Hawaii, USA, 2007, pp 469-472.
20. D. J. Strauss, W. Delb, R. D'Amelio, Y. F. Low, and P. Falkai (2008) Objective quantification of the tinnitus decompensation by synchronization measures of auditory evoked single sweeps. IEEE Trans Neural Syst Rehabil Eng 16(1):74-81.
21. J. P. Lachaux, A. Lutz, D. Rudrauf, D. Cosmelli, M. L. V. Quyen, J. Martinerie, and F. Varela (2002) Estimating the time-course of coherence between single-trial brain signals: an introduction to wavelet coherence. Neurophysiol Clin 32:157-174.
22. Klein, T. Sauer, A. Jedynak, and W. Skrandies (2006) Conventional and wavelet coherence applied to sensory-evoked electrical brain activity. IEEE Trans Biomed Eng 53:266-272.
23. S. Gigola, C. E. D'Attellis, and S. Kochen (2008) Wavelet coherence in EEG signals. Clin Neurophysiology 119:142-143.
24. X.-F. Liu, H. Qi, S.-P. Wang, and M.-X. Wan (2006) Wavelet-based estimation of EEG coherence during Chinese Stroop task. Computers in Biology and Medicine 36:1303-1315.
25. M. Mariam, W. Delb, F. I. Corona-Strauss, M. Bloching, and D. J. Strauss (2009) Comparing the habituation of the late auditory potentials to loud and soft sounds. Physiol Meas 30:141-153.
26. S. A. Hillyard, R. F. Hink, V. L. Schwent, and T. W. Picton (1973) Electrical signs of selective attention in the human brain. Science 182:177-180.
27. R. Naatanen, A. W. K. Gaillard, and S. Mantysalo (1978) Early selective auditory attention effect on evoked responses reinterpreted. Acta Psychologica 42:313-329.
28. M. Woldorff (1995) Selective listening at fast stimulus rates: so much to hear, so little time. Perspectives of Event-Related Potentials Research 44:32-51.
29. M. F. Neelon, J. Williams, and P. C. Garell (2006) The effects of auditory attention measured from human electrocorticograms. Clinical Neurophysiology 117:504-522.
30. P. Tass, M. Rosenblum, J. Weule, J. Kurths, A. S. Pikovsky, J. Volkmann, A. Schnitzler, and H. J. Freund (1998) Detection of n:m phase locking from noisy data: application to magnetoencephalography. Phys Rev Lett 81:3291-3294.
31. C. Allefeld (2004) Phase synchronization analysis of event-related brain potentials in language processing. Ph.D. dissertation, Institute of Physics, Nonlinear Dynamics Group, University of Potsdam, Germany, March.
32. J.-P. Lachaux, E. Rodriguez, J. Martinerie, and F. J. Varela (1999) Measuring the phase synchrony in brain signals. Human Brain Mapping 8:194-208.
Author: Yin Fen Low
Institute: Universiti Teknikal Malaysia Melaka (UTeM)
Street: Hang Tuah Jaya, Durian Tunggal
City: Melaka
Country: Malaysia
Email: [email protected]
Sharing the Medical Resource: The Feasibility and Benefit of Global Medical Instruments Support and Service M.J. Tzeng, C.Y. Lee, and Y.Y. Huang Institute of Biomedical Engineering, National Taiwan University, Department of Biomedical Engineering, National Taiwan University Hospital, No. 7, Chung-Shan S. Road, Taipei, Taiwan [email protected]
Abstract— The high cost of advanced medical technology and equipment has impeded access to such equipment for most people in the world, especially in developing countries. How to share medical equipment widely and cost-effectively is an urgent topic for global public health. We launched a project to relocate used or spare medical instruments via the Global Medical Instruments Support and Service (GMISS) program. The purpose of the GMISS program, supported by the Department of Health (DOH), Taiwan, is to provide essential medical equipment to other countries to help improve their health care and medical services. In cooperation with medical centers and hospitals in Taiwan and the clinical engineers of the National Taiwan University Hospital (NTUH), we have provided usable medical equipment and facilities to many areas and countries free of charge. All equipment and instruments are well maintained and fully functional before being shipped to the recipients. The GMISS also gives hospitals in Taiwan a chance to share their used medical equipment and their experience. To date, enthusiastic donations from hospitals, manufacturers, and research institutes across Taiwan have enabled the GMISS program to benefit 27 countries through 48 successful donations, including to Guatemala, Haiti, Paraguay, Belize, the Marshall Islands, Vietnam, Mongolia, Burkina Faso, Indonesia, Sao Tome and Principe, Saint Vincent and the Grenadines, and the Philippines. The donated items included dental X-ray systems, mammography units, electrocardiographs, ECG machines, anesthesia units, infant incubators, cardiac defibrillators, bedside monitors, hemodialysis units, cast saws, phototherapy units, oxygen tents, infant intensive care systems, ambulances, and microscopes, amounting to several hundred items worth about 3 million U.S. dollars. Cost-effectiveness analysis was performed using QALYs gained.
Keywords— GMISS, QALY, developing countries, donations.

I. INTRODUCTION

About one hundred years ago, missionaries from America and Europe came to Taiwan to begin medical services. They brought medical science and provided health care for the people of Taiwan unselfishly. Nowadays, Taiwan is a prosperous and advanced country. We are grateful for all we received from them, and we now have the ability to do something for the poor in developing countries. In addition, the high cost of advanced medical technology and equipment has impeded access to such equipment for most people in the world, especially in developing countries. The GMISS Program was jointly established by the Department of Health (DOH), Taiwan, and the National Taiwan University Hospital (NTUH) in May 2005 to provide free usable (used or new) medical equipment, donated by local medical centers and hospitals, to countries around the world. GMISS gives hospitals in Taiwan a chance to give away used medical equipment; it also makes it possible to prolong the lifespan of that equipment.

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 574–577, 2011. www.springerlink.com

II. MATERIALS AND METHODS

A. How Does the GMISS Work?

i) Find sources of medical devices or instruments.
ii) Screen the usability of donated medical devices. Keep regular records of medical equipment donated by local hospitals and register demands for instruments from other countries.
iii) Inspect and evaluate the function of donated medical devices or instruments (repairing them if needed), making sure these devices function well.
iv) Transport and ship the equipment to the recipients.
v) Follow up on the use of the instruments by the recipient countries.
vi) Provide a consulting service for the use of medical appliances.
vii) Assist the recipient countries in installing, operating, and maintaining the instruments.
B. Cooperate with International Organizations to Promote the Activities of the GMISS Program

The GMISS Program has established two portals for better communication with all potential partners. One portal is for donors in Taiwan to provide key information about available equipment (see Fig. 1). The other is for countries to make requests for equipment. The requests are promptly processed, and further communication is handled by experts in medical instruments and engineering to ensure that sufficient information regarding the requested equipment accords with the donation. Further assistance in training medical engineers, in both Taiwan and the recipient countries, is also organized (see Fig. 2).
Fig. 1 Domestic donation procedure

i) Conduct a regular survey of teaching and regional hospitals within Taiwan, inquiring about their intentions and donation items.
ii) Donated items deemed inappropriate will be rejected, and the donor will receive a letter of appreciation.
iii) Donated items deemed appropriate will be accepted and arranged for transfer. The donor will receive a letter of confirmation.
iv) Evaluate the cost of maintenance.
v) Maintained items must be properly sealed and stored.
vi) All items will be kept on file, with details such as the donor's name, donation date, maintenance record, maintenance cost, value of the item, lifespan, and the operation and maintenance manual. Whether the item has to be installed or used with other equipment is specified as well.
vii) If there is an urgent demand for a certain item, the item will be transferred to the recipient registered in the data bank.
viii) Ending.

Fig. 2 The procedure for evaluating and confirming overseas requests for equipment

i. Foreign demands include those from the Ministry of Foreign Affairs, from the DOH, from foreign-aid medical groups, and from registered members.
ii. The DOH determines the priority order of donation based on the classification of demands.
iii. Check the data bank for items that can be donated.
iv. If there are no matched items, report to the DOH for consideration of purchasing new equipment. If the demand is of low priority, reject the request and put it on the waiting list.
v. If there are matched items, evaluate the recipient country's overall conditions, such as power consumption, power stability, medical technology, transportation, and the medical resources that can be used with the donated items.
vi. Reply with a letter explaining the conditions of the equipment to be donated and confirm the recipient country's need. Put the request on the waiting list if no confirmation is made.
vii. The DOH is authorized to approve the donation once the items are finalized. Extra installation and maintenance fees, or any other related costs, must be approved by the DOH.
viii. Transfer the items for donation.
ix. Ending.
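The overseas-request matching flow described above can be sketched in code. Everything below is hypothetical: the function, field, and condition names are invented for illustration, and the paper does not describe the GMISS data bank in any programmatic form.

```python
# Hypothetical sketch of the overseas-request matching flow (Fig. 2, steps
# iii-vi). All names are invented for illustration.

def process_request(request, data_bank, high_priority):
    """Return the next action for a foreign equipment request."""
    # Step iii: check the data bank for items that can be donated.
    matches = [item for item in data_bank if item["type"] == request["type"]]
    # Step iv: no match -> escalate or wait-list depending on priority.
    if not matches:
        if high_priority:
            return "report to DOH: consider purchasing new equipment"
        return "reject request and place on waiting list"
    # Step v: evaluate the recipient country's overall conditions.
    conditions_ok = all(request["conditions"].get(key, False) for key in
                        ("power_stability", "medical_technology", "transportation"))
    # Step vi: confirm the need, or wait-list if conditions are not met.
    if conditions_ok:
        return "send letter on equipment conditions and confirm recipient's need"
    return "place on waiting list"

bank = [{"type": "hemodialysis unit"}]
req = {"type": "hemodialysis unit",
       "conditions": {"power_stability": True, "medical_technology": True,
                      "transportation": True}}
print(process_request(req, bank, high_priority=True))
```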
III. RESULTS

Enthusiastic donations from 192 donors (hospitals, manufacturers, and research institutes) across Taiwan enabled the GMISS program to benefit 27 countries through 48 successful donations (see Table 1). The donated items amounted to several hundred items worth 3 million U.S. dollars. The GMISS work team helped to set up the donated instruments and carried out follow-up investigations and related maintenance training in Mongolia, Guatemala, Honduras, the Marshall Islands, and Belize.
Table 1 GMISS donations, 2005–2010

Year | Donations    | Recipient Countries
2005 | 4 donations  | Ecuador, El Salvador, Honduras, Ghana
2006 | 8 donations  | Kiribati, Mongolia, Nicaragua, Honduras, Haiti, Saint Vincent and the Grenadines, Burkina Faso, Belize
2007 | 6 donations  | Nicaragua, Honduras, Mongolia, Panama, Paraguay, Marshall Islands
2008 | 8 donations  | Guatemala, Paraguay, Tuvalu, Nauru, Kiribati, Belize, China, Vietnam
2009 | 11 donations | Solomon Islands, Indonesia, Gambia, Guatemala, Ecuador, Saint Vincent and the Grenadines, Sao Tome and Principe (x2), Kenya, the Philippines (x2)
2010 | 11 donations | Marshall Islands, Guatemala, Solomon Islands (x2), Vietnam, Saint Vincent and the Grenadines (x2), Paraguay, Haiti, Dominican Republic, Kampuchea

Donated items (all years): dental X-ray systems, mammography units, electrocardiographs, ECG machines, ENT chairs, anesthesia units, infant incubators, ICU patient beds, cardiac defibrillators, bedside monitors, diagnostic tables, cast saws, shoulder-neck supports, phototherapy units, oxygen tents, diagnosis lamps, surgery tables, infant intensive care systems, ambulances, view boxes, microscopes, etc., amounting to several hundred items worth 3 million U.S. dollars.
IV. DISCUSSION

In the absence of full measurement and valuation of all the costs and consequences of the alternatives being compared, the results of economic evaluations can only be interpreted by reference to an external standard. In the 1980s and 1990s it became fashionable to compare health care interventions in terms of their relative cost-effectiveness, expressed as cost per life-year or cost per QALY gained [1]. The quality-adjusted life year (QALY) is routinely used as a summary measure of health outcome for economic evaluation; it incorporates the impact on both the quantity and the quality of life. Key studies relating to the QALY and utility measurement are the sources of data. QALYs that occur in the future are discounted to current values, to reflect the idea that people prefer to receive health benefits now rather than in the future (i.e. positive time preference).

Breast cancer is a leading cause of death and disability among women, especially young women, in low- and middle-income countries [2]. In many developing countries, the incidence of breast cancer is now rising sharply due to changes in reproductive factors and lifestyle and to increased life expectancy. Today, more than half of incident cases occur in the developing world [3]. Mammography is the most commonly used screening test in developed countries. It is expensive and complex, requiring substantial financial and manpower resources. Mammography is performed with a specialized mammography X-ray machine, using breast compression to obtain a clear X-ray exposure. "Screening for cancer may lead to earlier detection of lethal cancers but also detects harmless ones that will not cause death or symptoms." Breast cancer is a huge threat to women, having the highest incidence rate and the fourth-highest mortality rate among female cancers. The GMISS program has been carried out for many years. In our study, we donated a mammography unit to a public hospital in Quetzaltenango, Guatemala. This equipment serves 5 patients a day, or approximately 1,275 people a year.
Breast cancer screening costs 5,780 pounds per QALY gained. If screening detects breast cancer in 10% of the people served, this implies a saving of about 736,950 pounds a year in Guatemala. The GMISS could thus save massive social costs for the recipient country. The incidence of end-stage renal disease (ESRD) is increasing worldwide at an annual growth rate of 8%, far in excess of the population growth rate of 1.3% [4]. Besides, with an increasing number of pre-dialysis chronic kidney disease patients in developing countries, hemodialysis units play an important role in treatment. In developing countries where there is limited access to dialysis, some form of rationing has always been practiced [5]. Only a minority of patients with end-stage kidney disease (ESKD) in these countries enjoy access to hemodialysis; for the rest, the diagnosis of chronic renal failure is a death sentence. Hemodialysis continues to be the most prevalent therapy option in the developing world. Take another recipient
country, the Dominican Republic, for example: the GMISS donated 7 hemodialysis units to the country in 2010. Generally speaking, five hemodialysis units can treat 10 patients a day, three times a week each, so the donated units can serve 60 people a week, or approximately 3,120 people a year. Home hemodialysis costs 17,260 pounds per QALY gained, implying a saving of about 345,200 pounds a year in the Dominican Republic. The money we save for them spares a great expense for these developing countries.
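The cost-saving arithmetic used in this discussion can be reproduced in a few lines. This is an illustrative sketch, not code from the paper: the function name and the 10% detection assumption for mammography are ours, and the 20 QALYs/year figure for hemodialysis is our back-calculation from the paper's stated saving.

```python
# Illustrative sketch of the Discussion's cost-saving arithmetic.
# Figures are the paper's worked examples; names and the 10% rate are assumptions.

def annual_saving_gbp(people_per_year: float, qaly_fraction: float,
                      cost_per_qaly_gbp: float) -> float:
    """QALYs attributed per year, valued at the quoted cost per QALY."""
    return people_per_year * qaly_fraction * cost_per_qaly_gbp

# Mammography unit in Guatemala: ~1,275 screenings/year, assumed 10% detection,
# breast-cancer screening valued at 5,780 GBP per QALY.
mammography = annual_saving_gbp(1275, 0.10, 5780)

# Hemodialysis in the Dominican Republic: the paper's 345,200 GBP/year figure
# corresponds to 20 QALYs/year at 17,260 GBP per QALY (our back-calculation).
hemodialysis = 20 * 17260

print(f"Mammography saving:  {mammography:,.0f} GBP/year")   # 736,950
print(f"Hemodialysis saving: {hemodialysis:,.0f} GBP/year")  # 345,200
```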
V. CONCLUSIONS

The reasons why we set up the GMISS program and keep carrying it out are as follows. In the first place, we can offer humanitarian and practical assistance to people in need. As citizens of the world, we should act as the American President Obama said: "we say we can no longer afford indifference to suffering outside our borders; nor can we consume the world's resources without regard to effect." In many countries, owing to political instability, there is a serious lack of medical resources. We hope to help them improve their medical environment and health care quality in the way we are doing now. Secondly, we want to establish friendly relations with other countries. Taiwan, R.O.C., our country, has long faced difficulties in international diplomacy. Humanitarian, practical, medical, and economic assistance is a way for us to win the recognition and support of the international community, and it has the further benefit of enhancing Taiwan's international image. We carry out these missions aiming to "express great kindness to all sentient beings, and take their suffering as our own." Our work is a charity mission that is a manifestation of universal love. Above all, what we have done is also an environmental task: recycling medical resources while raising the quality of health care for the people of poor nations and lowering its cost.

To sum up, the vision of the GMISS program is to enhance the accessibility of health care services regardless of race, color, religion, or geography. The journey of future medical aid is long, and the program is dedicated to providing continuous substantial assistance and spreading love to the corners of the world where it is needed most.

ACKNOWLEDGMENT

The authors gratefully acknowledge the Bureau of International Cooperation, Department of Health of Taiwan, for financial support.
REFERENCES
1. Methods for the Economic Evaluation of Health Care Programmes, 3rd edn (2005) Oxford Medical Publications.
2. P. Porter (2008) "Westernizing" women's risks? Breast cancer in lower-income countries. New England Journal of Medicine 358(3):213-216.
3. N. Beaulieu, D. Bloom, R. Bloom, and R. Stein (2009) Breakaway: The Global Burden of Cancer - Challenges and Opportunities. The Economist Intelligence Unit, London, UK.
4. End Stage Renal Disease Patients in 2000: A Global Perspective (2000) Handbook by Fresenius Medical Care.
5. Moosa MR, Walele AA, Daar AS (2001) Renal transplantation in developing countries. In: Morris PJ (ed) Kidney Transplantation: Principles and Practice. WB Saunders, Philadelphia, pp 659-692.
Author: Ming-Jhi Tzeng, Chia-Ying Lee and Yi-You Huang
Institute: Institute of Biomedical Engineering, National Taiwan University; Department of Biomedical Engineering, National Taiwan University Hospital
Street: No. 7, Chung-Shan S. Road
City: Taipei
Country: Taiwan
Email: [email protected]
Hybrid Capillary-Flap Valve for Vapor Control in Point-of-Care Microfluidic CD

T. Thio(1), A.A. Nozari(2), N. Soin(3), M.K.B.A. Kahar(4), S.Z.M. Dawal(5), K.A. Samra(6), M. Madou(6), and F. Ibrahim(1)

1 Dept. of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Dept. of Mechanical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
3 Dept. of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
4 Dept. of Microbiology, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
5 Dept. of Eng. Design and Manufacture, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
6 Dept. of Mechanical, Aerospace and Biomedical Engineering, University of California Irvine, CA, USA
Abstract— Microfluidics allows for the miniaturization of laboratory processes onto a compact disc (CD). A microfluidic CD provides a cost-effective, portable, and automated diagnostic platform without the use of bulky equipment and complex machinery. This reduction in cost, footprint, and user input allows for the development of tools suitable for point-of-care applications. One of the criteria for point-of-care applications is a long storage life. During storage, the reagents/chemicals in typical microfluidic CDs might evaporate and mix together before actual usage, compromising the integrity of the results. To complement the use of capillary valves, this paper reviews the available valves for vapor control and introduces a hybrid capillary-flap valve for use in microfluidic CDs for point-of-care applications.

Keywords— microfluidic compact disc, capillary valve, flap valve, point-of-care, BioMEMS.
I. INTRODUCTION

Microfluidics allows for the miniaturization of a complete laboratory onto a compact disc (CD). Centrifugal microfluidics, or microfluidic CDs, is a unique approach to the field of microfluidics where fluids (reagents/chemicals/clinical samples) are elegantly manipulated by rotating specialized CDs [1]. At its core, the platform only requires a spinning motor and a disposable plastic disc containing a fluidic network (as shown in Figure 1).

Fig. 1 Microfluidic CD for ELISA fabricated at the University of Malaya

Microfluidic CDs rely on centrifugal force to move fluids, use smaller volumes of reagents, and perform several experiments in parallel, resulting in faster results at lower costs [2]. Various fluidic functions such as mixing, decanting, etc. can be automated on microfluidic CDs [3] to perform laboratory processes ranging from blood sample preparation to complete nucleic acid analysis [4], [5]. Microfluidic CDs reduce the cost, footprint, and user input of a diagnostic platform, allowing cost-effective, portable, and automated diagnostic tools to be developed for use in point-of-care applications. For point-of-care applications, prolonged storage may be necessary, and the reagents/chemicals might evaporate and mix together before actual usage, compromising the integrity of the results. To complement the use of capillary valves, this paper introduces a hybrid capillary-flap valve method that is cheap, easy to implement, and provides segregation of vapor for use in point-of-care microfluidic CDs.

II. MICROFLUIDIC VALVES

Microfluidic valves are essential for flow control of samples and reagents on a microfluidic CD. There are two categories of valves, namely passive valves and active valves. Passive valves have no moving parts and work on the principle of the capillary effect [6]. Active valves, on the other hand, require moving parts such as a membrane or plunger and need an external mechanical, pneumatic, electric, or thermal force for actuation.

A. Capillary and Siphon Valves
Capillary valves prevent fluid flow at rotational speeds below the burst frequency of the valve. Two kinds of passive valves, hydrophilic and hydrophobic, can be designed. A hydrophilic valve can be created by a sudden expansion in the geometry of a channel inlet on a hydrophilic surface [7]. A hydrophobic valve can be created either by a hydrophobic patch in the middle of a channel or by a sudden narrowing in the geometry of a channel inlet [8]. Figure 2 depicts these valves.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 578–581, 2011. www.springerlink.com
Fig. 2 (A) A source and destination chamber connected by a channel; (B) hydrophobic valve made by a narrowing in the channel; (C) hydrophobic valve made by the application of hydrophobic material in the channel; (D) capillary valve [8]

A siphon valve operates on the principle of the capillary effect. Unlike capillary valves, a siphon valve prevents fluid flow at high rotational speeds. Figure 3 shows the operation of a typical U-shaped siphon valve. The hydrophilic U-shaped channel between the two chambers primes the liquid from the source to the destination chamber. At high rotational speed, the centrifugal force pushes the liquid outward, away from the center of the CD, trapping it at the bottom of the U-shaped channel [9], [10].

Fig. 3 Fluidic push-pull through siphoning [9], [10]

Capillary and siphon valves allow for a wide range of applications on a microfluidic CD; however, they do not provide segregation of vaporized liquid.

B. Membrane Valves

A membrane valve is constructed with a membrane layer parallel to the channel layer [11], [12]. When pressure is applied onto the membrane layer, the membrane deflects and blocks the flow in the parallel channel. Figure 4 shows a membrane-type valve by Weaver et al [12]. The control channel is below the membrane, with the flow channel above it. When pressure is applied, the membrane deflects upward and closes off the flow channel. The membrane valve provides flexibility in flow control; however, the fabrication process requires careful layering of the plastic and membrane layers onto the microdevice.

Fig. 4 The membrane valve opens when no pressure is applied [12]

C. Flap Valves

A flap valve is constructed by embedding a thin film within the microfluidic layers. Figure 5 shows a 3-way flap valve demonstrated by Mahalanabis et al [13]. The flap valve is normally closed for the 1st channel, and normally open for the 2nd channel. When given an active pressure of 24.13 kPa, the flap moves down and blocks the 2nd channel while allowing liquid to flow from the 1st channel.

Fig. 5 The operation of a flap valve [13]

D. Diaphragm Valves

Somewhat between a flap valve and a membrane valve, a diaphragm valve has a diaphragm instead of a thin-film flap, and it is controlled similarly to the membrane valve. Figure 6 shows a diaphragm valve by Zhou et al [14]. When negative pressure is applied to the control channel below the diaphragm, the diaphragm deflects downward and the valve opens.

Fig. 6 Operation of a diaphragm valve [14]

E. Manual Valves

Manual valves, usually screw-based valves similar in working principle to piping valves, are the most basic and versatile valves. To date, various valves of slightly varying
designs have been demonstrated for various applications [15], [16], [17]. Figure 7 shows a modified screw-type valve for control of multiple channels by Markov et al [17]. The valve incorporates a simple screw-type valve with a membrane reservoir. When turned, the screw depresses the hydraulic reservoir, which in turn collapses any channels below it. Screw-type valves are easy to construct, but due to the mass of the valves, incorporating them on microfluidic CDs may cause an imbalance in the spinning of the CD.
Fig. 7 Operation of the screw-type valve [17]

F. Sacrificial Valves

Sacrificial valves are one-time-use, normally closed valves. These valves start out with blocked channels, which are then cleared with a laser. One type of sacrificial valve is the Laser Irradiated Ferrowax Microvalve (LIFM) demonstrated by Lee et al [2]. The LIFM uses iron oxide nanoparticle-embedded paraffin wax, which can be melted under a directed laser beam. The melted wax is carried away to a waste chamber and the liquid then flows through the LIFM. Figure 8 shows an optofluidic valve (or laser printer lithography microfluidic valve) demonstrated by Garcia-Cordero et al [18]. A spot of printer toner is positioned to block the channel. To open the channel, a directed laser source is applied to melt the spot.

Fig. 8 Optofluidic valve [18]

III. HYBRID VALVE FOR POINT-OF-CARE CD

Among the range of active valves, sacrificial valves are the most reliable, but they require complex manufacturing in addition to an expensive laser to operate; the flap valve is the least expensive and simple to fabricate. Hence, to overcome the challenge of vaporized liquid mixing on a microfluidic CD with capillary valves, a hybrid capillary-flap valve is devised. Figure 9 shows the devised flap valve to be implemented.

Fig. 9 (i) The capillary-flap valve; (ii) positioning of the capillary valve and the capillary-flap valve between the source and destination chambers

The capillary-flap valve is constructed of thin-film material and requires a low pressure of 24.13 kPa to operate [13]. On a microfluidic CD, the pressure exerted by the liquid can be expressed by equation (1) [19], where ρ is the liquid density, ω is the spin frequency, ΔR is the length of the source chamber, and R is the average distance of the source chamber from the center of the CD:

    Pc = ρ ω² ΔR R    (1)
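Equation (1) is the standard result of integrating the centrifugal body force over the liquid column; the following brief derivation is our addition, with R1 and R2 denoting the inner and outer radial positions of the liquid, so that ΔR = R2 − R1 and R = (R1 + R2)/2:

```latex
P_c = \int_{R_1}^{R_2} \rho\,\omega^{2} r \,\mathrm{d}r
    = \frac{\rho\,\omega^{2}\left(R_2^{2}-R_1^{2}\right)}{2}
    = \rho\,\omega^{2}\,\underbrace{\left(R_2-R_1\right)}_{\Delta R}\,
      \underbrace{\frac{R_1+R_2}{2}}_{R}
```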
Table 1 shows the pressure for chambers with R ranging from 30 mm to 55 mm. The pressure is calculated with a constant ω of 250 rad/s and a ΔR of 10 mm. As shown, the pressure exerted by the liquid exceeds 24.13 kPa for chambers whose average distance from the center of the CD is greater than 40 mm, indicating the suitability of implementing a capillary-flap valve there.
Table 1 Pressure exerted by liquid on a microfluidic CD

R (mm):          30      35      40      45      50      55
Pressure (kPa):  18.75   21.88   25.00   28.13   31.25   34.38
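The Table 1 values follow directly from equation (1). The short sketch below reproduces them; the liquid density of 1000 kg/m³ (water) is our assumption, chosen because it reproduces the published numbers, while ω and ΔR are the paper's stated constants.

```python
# Sketch of the Table 1 calculation from Eq. (1): P = rho * omega^2 * dR * R.
# Density is an assumption (water, 1000 kg/m^3); omega and dR are from the paper.

RHO = 1000.0        # liquid density, kg/m^3 (assumed water)
OMEGA = 250.0       # spin frequency, rad/s
DELTA_R = 0.010     # length of the source chamber, m
FLAP_THRESHOLD_KPA = 24.13  # actuation pressure of the capillary-flap valve

def pressure_kpa(r_avg_mm: float) -> float:
    """Pressure exerted by the liquid at mean chamber radius r_avg_mm (mm), in kPa."""
    return RHO * OMEGA**2 * DELTA_R * (r_avg_mm / 1000.0) / 1000.0

for r in range(30, 60, 5):
    p = pressure_kpa(r)
    state = "exceeds flap-valve threshold" if p > FLAP_THRESHOLD_KPA else "below threshold"
    print(f"R = {r} mm: {p:.2f} kPa ({state})")
```

The comparison against 24.13 kPa mirrors the paper's observation that only chambers further than 40 mm from the CD center generate enough pressure to actuate the flap.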
For the microfluidic CD, the capillary valves are designed and implemented accordingly to ensure proper flow sequencing of the intended processes. The capillary-flap valves are introduced in series after the capillary valves. Figure 9 shows the positioning of the capillary valve and the capillary-flap valve between the source and destination chambers. Hence, by introducing a modified flap valve in series after the capillary valve, flow sequencing is still controlled by the capillary valve, while the flap valve then prevents mixing of vaporized reagents/chemicals in the channels.
IV. CONCLUSION

The design of a hybrid capillary-flap valve is presented for vapor control in point-of-care microfluidic CDs. The capillary-flap valve was designed after consideration of a wide range of available valves, including membrane valves, flap valves, diaphragm valves, manual valves, and sacrificial valves. It is derived from the flap valve for its simplicity of design and low fabrication cost, and it complements the existing capillary valves used in microfluidic CDs. The capillary-flap valve is suitable for point-of-care applications, as it allows for easy implementation in existing designs where flow sequencing has already been established, and it prevents the mixing of vaporized liquid on the microfluidic CD during prolonged storage.
ACKNOWLEDGMENT This research was supported in part by University Malaya Research Grant (UMRG) Project No.: RG023/09AET, Fundamental Research Grant Scheme-(FRGS) FP059/2010A, Ministry of Science, Technology and Innovation (MOSTI) grant 53-02-03-1049 and Sultan Iskandar Johor Foundation.
REFERENCES

1. Madou M, Zoval J, Jia G et al. (2006) Lab on a CD. Annu Rev Biomed Eng 8:601-628 DOI: 10.1146/annurev.bioeng.8.061505.095758
2. Lee B S, Lee J, Park J et al. (2009) A fully automated immunoassay from whole blood on a disc. Lab on a Chip 9:1548-1555 DOI: 10.1039/b820321k
3. Ducrée J, Haeberle S, Lutz S et al. (2007) The centrifugal microfluidic Bio-Disk platform. J Micromech Microeng 17:S103 DOI: 10.1088/0960-1317/17/7/S07
4. Gorkin R, Park J, Siegrist J et al. (2010) Centrifugal microfluidics for biomedical applications. Lab on a Chip 10:1758-1773
5. Lai S, Wang S, Luo J et al. (2004) Design of a compact disk-like microfluidic platform for enzyme-linked immunosorbent assay. Anal Chem 76:1832-1837
6. Glière A, Delattre C (2006) Modeling and fabrication of capillary stop valves for planar microfluidic systems. Sensors and Actuators A 131:601-608 DOI: 10.1016/j.sna.2005.12.011
7. Nolte D D (2009) Invited review article: Review of centrifugal microfluidic and bio-optical disks. 1-22 DOI: 10.1063/1.3236681
8. Siegrist J, Gorkin R, Bastien M et al. (2010) Validation of a centrifugal microfluidic sample lysis and homogenization platform for nucleic acid extraction with clinical samples. Lab on a Chip 10:363-371 DOI: 10.1039/b913219h
9. Siegrist J, Gorkin R, Clime L et al. (2010) Serial siphon valving for centrifugal microfluidic platforms. Microfluidics and Nanofluidics 9:55-63 DOI: 10.1007/s10404-009-0523-5
10. Zengerle R (2008) Microfluidics 2: Microfluidic platforms for lab-on-a-chip applications. 4. Centrifugal microfluidics technology, 1-71
11. Chang H L (2010) On-demand double emulsification utilizing pneumatically actuated, selectively surface-modified PDMS microdevices. Microfluidics and Nanofluidics DOI: 10.1007/s10404-010-0629-9
12. Weaver J A, Melin J, Stark D et al. (2010) Pressure-gain valves. Nature Physics 6:218-223 DOI: 10.1038/nphys1513
13. Mahalanabis M, Do J, Klapperich C M (2010) An integrated disposable device for DNA extraction and helicase dependent amplification. Biomedical Microdevices 353-359 DOI: 10.1007/s10544-009-9391-8
14. Zhou P, Young L, Chen Z (2010) Weak solvent based chip lamination and characterization of on-chip valve and pump. Biomedical Microdevices DOI: 10.1007/s10544-010-9436-z
15. Moraes C, Wyss K, Brisson E et al. (2010) An undergraduate lab (on-a-chip): probing single cell mechanics on a microfluidic platform. DOI: 10.1007/s12195-010-0124-0
16. Mu J, Wang M, Yin X (2010) A simple subatmospheric pressure device to drive reagents through microchannels for solution-phase synthesis in a parallel fashion. Sensors and Actuators B: Chemical 146(1):410-413 DOI: 10.1016/j.snb.2010.01.050
17. Markov D A, Manuel S, Shor L M et al. (2010) Tape underlayment rotary-node (TURN) valves for simple on-chip microfluidic flow control. Biomedical Microdevices 135-144 DOI: 10.1007/s10544-009-9368-7
18. Garcia-Cordero J L, Kurzbuch D, Benito-Lopez F et al. (2010) Optically addressable single-use microfluidic valves by laser printer lithography. Lab on a Chip DOI: 10.1039/c004980h
19. Chen J M, Huang P, Lin M (2008) Analysis and experiment of capillary valves for microfluidics on a rotating disk. Microfluidics and Nanofluidics 427-437
Author: THIO Tzer Hwai Gilbert Institute: MIMEMS Lab, Dept of Biomedical Engineering, Faculty of Engineering, University of Malaya, Street: Lembah Pantai City: Kuala Lumpur Country: MALAYSIA Email: [email protected]
Semi-automated Dielectrophoretic Cell Characterisation Module for Lab-on-Chip Applications

N.A. Kadri(1,2), H.O. Fatoyinbo(2), M.P. Hughes(2), and F.H. Labeed(2)

1 Department of Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Centre for Biomedical Engineering, University of Surrey, Guildford, Surrey, United Kingdom
Abstract— Dielectrophoresis is an electrical phenomenon that occurs when a polarisable particle is placed in a non-uniform electric field. The magnitude of the generated force depends on the electrophysiological make-up of the particle; therefore a specific DEP profile can be obtained for any polarisable particle based on its intrinsic electrical properties alone. Any changes to these properties may be detected by observing the corresponding DEP spectra. Despite being non-invasive, DEP applications are still not widely used due to the time-consuming processes involved. This study presents the preliminary outcomes of the development of a semi-automated DEP-based cell characterisation tool that allows DEP experiments to be conducted concurrently rather than serially, significantly reducing the time taken to complete the required sets of experiments. The results show that the system is capable of producing a DEP spectrum for K562 leukaemic cells over the 10 kHz to 1 MHz range in less than 10 minutes, when recorded at eight points per decade.
Keywords— Dielectrophoresis, lab-on-chip, cancer cells.

I. INTRODUCTION

Dielectrophoresis (DEP) is an electrical phenomenon describing the force generated on a polarisable particle when it is placed in a non-uniform electric field. It is commonly grouped as part of AC electrokinetics, and has been used for physically manipulating various types of cells and particles since its discovery by Herbert Pohl [1]. Following the successful separation of viable and non-viable yeast cells [2,3], numerous DEP-based cell characterisation studies have followed. DEP has been employed, for example, to separate different strains of bacteria [4,5,6], viruses [7,8,9], spores [10], and algae [11]. Apart from the capability of detecting and separating different types or species of particles and cells, DEP has also been used in characterisation studies of abnormal cells, particularly cancer cells [12-15]. Nevertheless, DEP has yet to enjoy wide acceptance in the biotechnology industry, most probably due to two related factors: the time-consuming processes involved, which allow experiments to be conducted only serially; and the lack of techniques and/or applications to conduct large-batch cell assays [16]. A number of studies have looked into reducing the time taken to conduct DEP experiments by automating certain experimental processes (e.g. [17,18]), but these were limited to capturing DEP effects on single cells. A semi-automated system, employing a novel well-based electrode [16], was successfully used in numerous DEP-based studies (e.g. [10,11,14-16]); however, the time taken to complete the experiments required to construct a single DEP spectrum was in the range of 30 to 60 minutes. This paper presents the preliminary findings of the development of a semi-automated DEP-based system capable of conducting and analysing DEP experiments for cell samples of about 10 million cells per ml in a much reduced time. The data-point resolution of the DEP spectra is comparable to that acquired using the said well electrodes.

II. THEORY

The DEP force, F_DEP, exerted on a homogeneous sphere may be modelled as:
F_DEP = 2πr³ ε_m Re[K(ω)] ∇E²   (1)

where r is the cell radius, ε_m is the permittivity of the surrounding medium, K(ω) is the complex Clausius-Mossotti factor, and E is the electric field strength (in RMS). The Clausius-Mossotti factor is a measure of the strength of the effective polarisability:

K(ω) = (ε_p* − ε_m*) / (ε_p* + 2ε_m*)   (2)

where ε_p* and ε_m* are the complex permittivities of the cell and the medium, respectively. In addition,

ε* = ε − jσ/ω   (3)

where σ is the conductivity, ε is the permittivity, and ω is the angular frequency of the applied AC electric field.
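As a worked illustration of Eqs. (1)-(3), the real part of the Clausius-Mossotti factor can be evaluated over the experimental frequency range. The particle and medium parameters below are illustrative assumptions for a homogeneous sphere, not values measured in this study:

```python
import numpy as np

def re_clausius_mossotti(freq, eps_p, sigma_p, eps_m, sigma_m):
    """Real part of K(w) from Eq. (2), with the complex permittivities
    eps* = eps - j*sigma/w of Eq. (3)."""
    w = 2 * np.pi * freq
    ep = eps_p - 1j * sigma_p / w
    em = eps_m - 1j * sigma_m / w
    return ((ep - em) / (ep + 2 * em)).real

eps0 = 8.854e-12                       # vacuum permittivity (F/m)
freqs = np.logspace(4, 6, 17)          # 10 kHz - 1 MHz, 8 points per decade
# Hypothetical homogeneous particle in a 10 mS/m medium:
re_k = re_clausius_mossotti(freqs, 60 * eps0, 0.2, 78 * eps0, 0.01)
# Re[K] is bounded between -0.5 (strongest negative DEP, particle repelled
# from high-field regions) and +1.0 (strongest positive DEP).
```

The sign of Re[K(ω)] at each frequency predicts whether the cells collect at the dot edge (positive DEP) or towards its centre (negative DEP), which is what the image analysis of Section III quantifies.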
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 582–586, 2011. www.springerlink.com
The magnitude of F_DEP, therefore, is dependent upon the frequency of the applied electric field, owing to the frequency-dependent Clausius-Mossotti factor. If the particle is more polarisable than the suspending medium, the generated force moves the particle towards the region(s) of highest electrical gradient, a phenomenon known as positive DEP. Negative DEP occurs in the opposite case, i.e. when the medium is more polarisable than the particle, repelling the particle towards region(s) of lower electrical gradient. Changes in the generated DEP force are therefore characteristic of specific particle populations, dependent upon the frequency and the strength of the applied electric field. The DEP spectra of a specific population of particles under specific conditions can thus be used as the discriminant in cell characterisation studies.
III. MATERIALS AND METHODS

A. Cell Sample Preparation

Human myelogenous leukaemic K562 cell lines (LGC Standards, Teddington, UK) were initially cryogenically frozen in 1 ml aliquots containing the culture medium with 5% DMSO, at a concentration of 1×10^6 cells per ml. All reagents and solutions were sourced from Sigma Aldrich Co. (St. Louis, USA), unless otherwise stated. The cells were cultured in medium consisting of RPMI solution (GIBCO® RPMI Media 1640, Invitrogen Ltd., Paisley, UK), supplemented with 10% heat-inactivated fetal bovine serum (FBS) (PAA, Pasching, Austria), 1% L-glutamine, and 1% penicillin-streptomycin solution, at 37 degrees Celsius and 5.0% CO2. Doubling time was reached after 24 hours, and once confluent the cells could be used for DEP experiments.

For DEP experiments, cells need to be transferred into a conductive medium made of 8.5% sucrose and 0.3% dextrose, with the final conductivity adjusted using KCl solution. The DEP medium conductivity was set at 10 mS/m, which should provide a crossover frequency between 10 and 100 kHz (based on previous findings, e.g. [14,15]), and was verified with a conductivity meter. Cell samples were washed twice by centrifugation at 180 g for five minutes, and resuspended in the DEP conductive medium to produce a cell concentration of 1×10^7 cells per ml.

B. Electrode Design and Assembly

The planar electrode geometry of choice is similar to the dot array design used by Fatoyinbo et al. [13], producing axisymmetrical electric field gradients over each of the dots (Figure 1). The generated DEP force should consequently be axisymmetrical as well, being greater at the electrode edge and decreasing in magnitude towards the centre of the dot. This should therefore provide a correlation between particle motion and relative polarisability, the determination of which is based upon images captured by a digital camera attached to the microscope.

Fig. 1 A schematic diagram of the electrode design from the top

Fig. 2 A schematic diagram of the microelectrode device from the side; Inlet and Outlet indicate the flow path of the cell suspension through the layered ITO, gasket and electrode assembly, and Signal is the output from the waveform generator

The electrode is fabricated from gold-plated glass slides using common photolithography methods. Prior to its use in DEP experiments, the electrode needs to be assembled to create a suitable chamber in which the cell suspension is placed and the DEP effects are observed and recorded. Figure 2 shows the schematic of the electrode assembly arrangement. The indium tin oxide (ITO) layer acted as the opposing pole in creating the AC electric field, while the gasket (made of polyresin) acted as the walls of the chamber. The typical height of the chamber, set by the thickness of the gasket, is about 200 µm.

C. Experimental Procedures

A cell suspension of about 10 µl was slowly delivered using a syringe through the inlet, and the chamber was uniformly filled with the solution via capillary action. The electrode and the ITO were connected to the positive and negative connectors of the signal generator, respectively. The signal was applied for 20 seconds at each chosen frequency value.
D. DEP Spectra and Analysis

To construct a DEP spectrum, a correlation must be made between cell movement and relative polarisability. In the current system, this correlation is quantified by analysing the captured images of the dots. Figure 3 shows the schematic diagram of cell movement following the application of an electrical signal that creates positive DEP effects for the particles in use. Negative DEP, on the other hand, pushes the particles towards the edges of the dot. The histogram of the corresponding images is automatically determined, and changes in the peak values are calculated: a change in the positive direction indicates positive DEP, and vice versa. Figure 4 shows the change in the histogram values for cells experiencing positive DEP. The algorithm used in the analysis is similar to the Cumulative Modal Intensity Shift (CMIS) used previously [19]. Electrophysiological properties, particularly the conductance and capacitance of the membrane and inner compartments, may be inferred using the best-fit model constructed from Equation (1).

Fig. 3 A schematic diagram of the movement of cells over the dots when experiencing positive DEP

Fig. 4 The captured images and the corresponding histogram plots for dots experiencing positive DEP

IV. RESULTS AND DISCUSSION

The magnitude of the shifts in the pixel values, plotted against the frequency at which the images were taken, was used to construct the corresponding DEP spectrum for the K562 cell population (Figure 5). The characteristic S-shape of the plot is in agreement with previously published data for K562 cells under similar experimental conditions [14,20]. Of particular importance, the crossover frequency occurs in the predicted 10 to 100 kHz range. Electrophysiological properties of the cell membrane, inferred from the best-fit model constructed from Equation (1), were typically about 200 S and 7 mF/m² for conductance and capacitance, respectively.

Fig. 5 Typical DEP spectra for the K562 cell population (cell concentration 1×10^7 cells per ml, electric field 10 Vp-p, KCl conductive medium 10 mS/m), plotted as light intensity (a.u.) against frequency (Hz) over 10 kHz to 1 MHz. The dotted line indicates the best-fit model using Equation (1)

Each data point in Figure 5 was constructed from four dots experiencing the same DEP effects at the same frequency. Since the electric field was generated for 20 seconds, the whole DEP experiment for this particular cell sample may be completed in less than 10 minutes, including the time taken for transferring the cells through the inlet of the chamber. This is a large improvement over previously used methods (e.g. [11,14-16,21-23]), which may take anywhere between one and two hours to complete the experiments necessary to construct a DEP spectrum of similar frequency resolution.
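The histogram-peak analysis described in Section III.D can be sketched as follows. This is a simplified, hypothetical stand-in for the CMIS algorithm of [19], not the published implementation:

```python
import numpy as np

def modal_intensity_shift(before, after, bins=256):
    """Shift of the histogram mode between two grayscale images of the
    same dot, taken before and after the signal is applied. A positive
    shift is read as positive DEP, a negative shift as negative DEP."""
    h0, _ = np.histogram(before, bins=bins, range=(0, 256))
    h1, _ = np.histogram(after, bins=bins, range=(0, 256))
    return int(np.argmax(h1)) - int(np.argmax(h0))

# Synthetic example: cells clearing the dot centre brightens the image,
# moving the dominant intensity from 100 to 140, i.e. a shift of +40.
before = np.full((32, 32), 100)
after = np.full((32, 32), 140)
shift = modal_intensity_shift(before, after)
```

Repeating such a measurement for the four dots at each frequency point, and normalising the shifts, yields a spectrum of the form shown in Fig. 5.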
V. CONCLUSION

Using the default setting of recording four dots receiving the same frequency simultaneously, the developed system is capable of recording DEP events at 16-point resolution over the 10 kHz to 1 MHz frequency range within 10 minutes. The developed hardware components should be compatible with any lab-on-chip module, thus allowing DEP cell characterisation studies to be completed in significantly less time.
In addition, this system may finally open up the possibility of conducting real-time DEP studies by using an alternative electrode geometry. If each of the electrodes can be individually energised, each of the dots can be supplied with a different frequency, thus generating electric fields of differing gradients. This would technically allow a single DEP plot, recorded between 10 kHz and 1 MHz at eight points per decade, to be completed within a minute. This temporal resolution should be sufficient to record any DEP-related changes (e.g. in surface and cytoplasmic conductance and capacitance values) occurring in the cell population very close to real time. Figure 6 shows an example of an image recorded when each of the dots received a different frequency, thus producing a range of DEP effects.
Fig. 6 Captured image following a DEP experiment. Each of the dots received an increasing frequency value from the top left, starting at 10 kHz (producing definite negative DEP) and ending at 100 kHz (producing slight positive DEP)
ACKNOWLEDGEMENT

NAK would like to thank the Ministry of Higher Education, Malaysia and the University of Malaya, Malaysia for financial grants towards the completion of his doctoral studies at the University of Surrey, United Kingdom, where the work reported in this paper was conducted.
REFERENCES 1. Pohl, H. (1951), 'The motion and precipitation of suspensoids in divergent electric fields', Journal of Applied Physics 22, 869. 2. Pohl, H. and Hawk, I. (1966), 'Separation of Living and Dead Cells by Dielectrophoresis.', Science (New York, NY) 152(3722), 647. 3. Crane, J. and Pohl, H. (1968), 'A study of living and dead yeast cells using dielectrophoresis', Journal of the Electrochemical Society 115, 584.
4. Inoue, T.; Pethig, R.; Al-Ameen, T.; Burt, J. and Price, J. (1988), 'Dielectrophoretic behaviour of Micrococcus lysodeikticus and its protoplast', Journal of Electrostatics 21(2-3), 215--223. 5. Washizu, M.; Kurahashi, Y.; Iochi, H.; Kurosawa, O.; Aizawa, S.; Kudo, S.; Magariyama, Y. and Hotani, H. (1993), 'Dielectrophoretic measurement of bacterial motor characteristics', IEEE Transactions on Industry Applications 29(2), 286--294. 6. Markx, G.; Dyda, P. and Pethig, R. (1996), 'Dielectrophoretic separation of bacteria using a conductivity gradient', Journal of biotechnology 51(2), 175--180. 7. Green, N. and Morgan, H. (1997), 'Dielectrophoretic separation of nano-particles', Journal of Physics D: Applied Physics 30, L41--L44. 8. Hughes, M.; Morgan, H.; Rixon, F.; Burt, J. and Pethig, R. (1998), 'Manipulation of herpes simplex virus type 1 by dielectrophoresis', BBA-General Subjects 1425(1), 119--126. 9. Archer, S.; Morgan, H. and Rixon, F. (1999), 'Electrorotation studies of baby hamster kidney fibroblasts infected with herpes simplex virus type 1', Biophysical journal 76(5), 2833--2842. 10. Fatoyinbo, H.; Hughes, M.; Martin, S.; Pashby, P. and Labeed, F. (2007), 'Dielectrophoretic separation of Bacillus subtilis spores from environmental diesel particles', Journal of Environmental Monitoring 9(1), 87--90. 11. Hübner, Y.; Hoettges, K. and Hughes, M. (2003), 'Water quality test based on dielectrophoretic measurements of fresh water algae Selenastrum capricornutum', Journal of Environmental Monitoring 5(6), 861--864. 12. Gascoyne, P.; Pethig, R.; Burt, J. and Becker, F. (1993), 'Membrane changes accompanying the induced differentiation of Friend murine erythroleukemia cells studied by dielectrophoresis', Biochimica et biophysica acta. Biomembranes 1149(1), 119--126. 13. Huang, Y.; Wang, X.; Becker, F. and Gascoyne, P. 
(1996), 'Membrane changes associated with the temperature-sensitive P85gag-mosdependent transformation of rat kidney cells as determined by dielectrophoresis and electrorotation', Biochimica et Biophysica Acta (BBA)-Biomembranes 1282(1), 76--84. 14. Labeed, F.; Coley, H.; Thomas, H. and Hughes, M. (2003), 'Assessment of multidrug resistance reversal using dielectrophoresis and flow cytometry', Biophysical journal 85(3), 2028--2034. 15. Labeed, F.; Coley, H. and Hughes, M. (2006), 'Differences in the biophysical properties of membrane and cytoplasm of apoptotic cells revealed using dielectrophoresis', BBA-General Subjects 1760(6), 922--929. 16. Hoettges, K.; Hubner, Y.; Broche, L.; Ogin, S.; Kass, G. and Hughes, M. (2008), 'Dielectrophoresis-Activated Multiwell Plate for Label-Free High-Throughput Drug Assessment', Anal. Chem 80(6), 2063--2068. 17. Gasperis, G.; Wang, X.; Yang, J.; Becker, F. and Gascoyne, P. (1998), 'Automated electrorotation: dielectric characterization of living cells by real-time motion estimation', Measurement Science and Technology 9, 518. 18. Hölzel, R. (1998), 'Nystatin-induced changes in yeast monitored by time-resolved automated single cell electrorotation', Biochimica et Biophysica Acta (BBA)-General Subjects 1425(2), 311--318. 19. Fatoyinbo, H.; Hoettges, K. and Hughes, M. (2008), 'Rapid-on-chip determination of dielectric properties of biological cells using imaging techniques in a dielectrophoresis dot microsystem', Electrophoresis 29(1), 3--10. 20. Altomare, L.; Borgatti, M.; Medoro, G.; Manaresi, N.; Tartagni, M.; Guerrieri, R. and Gambari, R. (2003), 'Levitation and movement of human tumor cells using a printed circuit board device based on software-controlled dielectrophoresis', Biotechnology and bioengineering 82(4), 474--479. 21. Broche, L.; Bhadal, N.; Lewis, M.; Porter, S.; Hughes, M. and Labeed, F. (2007), 'Early detection of oral cancer-Is dielectrophoresis the answer?', Oral oncology 43(2), 199--203.
22. Chin, S.; Hughes, M.; Coley, H. and Labeed, F. (2006), 'Rapid assessment of early biophysical changes in K562 cells during apoptosis determined using dielectrophoresis', International Journal of Nanomedicine 1(3), 333. 23. Johari, J.; Hübner, Y.; Hull, J.; Dale, J. and Hughes, M. (2003), 'Dielectrophoretic assay of bacterial resistance to antibiotics', Physics in Medicine and Biology 48, N193--N198.
Author: Nahrizul Adib Kadri
Institute: Department of Biomedical Engineering, University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
A Preliminary Study of Compression Efficiency and Noise Robustness of Orthogonal Moments on Medical X-Ray Images

K.H. Thung1, S.C. Ng2, C.L. Lim1, and P. Raveendran1

1 Electrical Engineering, University of Malaya, Malaysia
2 Biomedical Engineering, University of Malaya, Malaysia
Abstract— This paper provides a preliminary study of the compression efficiency and noise robustness of two orthogonal moments (Legendre and Tchebichef) on medical images. As in typical JPEG compression, the medical images are first subdivided into 8x8 blocks and transformed into the moment domain. Moments of order (m+n) up to 2 through 8 are used to reconstruct the image. The mean square error (MSE) between the reconstructed and original images is computed as the measure of compression efficiency. For the noise robustness test, Gaussian white noise is applied to the original image before the same processing steps are carried out. The Discrete Cosine Transform (DCT), the standard JPEG compression algorithm, is used as the benchmark for this experimental study. The results show that the compression ability of Tchebichef moments is comparable to the DCT, while the Legendre moments have the highest noise robustness among them. Keywords— Orthogonal moments, Legendre, Tchebichef, compression, noise robustness.
I. INTRODUCTION

Many medical images are captured every day by hospitals, clinics, healthcare organizations and research institutes. These images include Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Computerized Axial Tomography (CAT), x-ray and ultrasound images. The traditional way to store these images is as film in cabinets, which requires considerable space and resources. Medical image compression technology comes in handy by providing a means to store these images in digital form with a small file size, while still maintaining the important diagnostic information of the film. Medical image compression differs slightly from compression of other images: it places higher expectations on the quality of the compressed image, as detectable distortion on a medical image may cause a false diagnosis by the doctor. Digital Imaging and Communications in Medicine (DICOM) [1], a widely accepted standard for handling, storing and transmitting medical images, has adopted several algorithms for the file format of its medical images. Two of the most popular algorithms in use are JPEG lossless and JPEG2000 lossless compression.
Most medical image compression research has focused on lossless compression techniques, owing to the reluctance of the medical community to adopt lossy compression because of the legal issues that have been raised. However, lossless compression can only achieve compression rates of around 2:1 to 3:1. Lossy compression, on the other hand, though irreversible, can achieve much higher compression rates: for a high-quality compression, where the distortions are visually unnoticeable, rates of 10:1 to 20:1 can be achieved [2]. As more research in this area [3] improves the quality and rate of compression, it is only a matter of time before the medical community switches from lossless to lossy compression for medical image handling. Transform-based lossy image compression techniques can be summarized in three stages: transformation, quantization and lossy coding. The image is subdivided into smaller subimages (8x8 blocks, for example, as used by the DCT in JPEG) before being transformed into another domain (e.g. frequency) that is easier to compress. It is of particular interest to us to investigate the possibility of using orthogonal moments as an alternative transformation method for medical images (e.g. x-ray images). The number of moment coefficients considered adequate for a high-quality compression is examined, and the performance of the moments is compared with the DCT in terms of compression efficiency and noise robustness. Moments have been widely utilized in a variety of digital image processing applications, such as compression, pattern recognition and object detection [4]. In [5], a comprehensive review of moment theory has been carried out, in an attempt to open a new perspective in biomedical imaging. Among all the moments reviewed, orthogonal moments are chosen for this study, as they provide no information redundancy and efficient image reconstruction computation.
Two orthogonal moments have been chosen for this study: one from the continuous family, the Legendre moments, and another from the discrete family, the Tchebichef moments. The organization of this paper is as follows. The first section gives a brief introduction to medical image compression and the motivation and aim of this study. The mathematical background of the orthogonal moments used in this study is
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 587–590, 2011. www.springerlink.com
introduced in Section II. Section III reports the experimental results on the compression efficiency of the orthogonal moments for medical images, with the discrete cosine transform (DCT) used as a benchmark for comparison. Section IV discusses the robustness of the orthogonal moments and the DCT under the influence of Gaussian white noise. We conclude the paper in Section V.
II. ORTHOGONAL MOMENTS

The general formula to calculate the 2-D moments of an image f(x,y) is given by:

F_mn = ∫∫_R² f(x,y) φ_mn(x,y) dx dy   (1)

where φ_mn(x,y) is the weighting moment kernel (basis function), and m,n = 0,1,2,... are the moment orders in the horizontal and vertical directions of the image, respectively. By using different weighting kernels, different types of moments can be obtained.

A. Legendre Moment

The basis function for Legendre moments is φ_mn(x,y) = P_m(x) P_n(y), where P_n(x) denotes the nth-order Legendre polynomial given by [6]:

P_n(x) = Σ_{k=0}^{⌊n/2⌋} (−1)^k (2n−2k)! / (2^n k! (n−k)! (n−2k)!) x^(n−2k)   (2)

The (m+n)th-order Legendre moment, L_mn, is defined as:

L_mn = ((2m+1)(2n+1)/4) ∫_{−1}^{1} ∫_{−1}^{1} P_m(x) P_n(y) f(x,y) dx dy   (3)

If only Legendre moments of order (m+n) ≤ M are given, the image f(x,y) can be approximated by the finite series

f̂(x,y) = Σ_{m=0}^{M} Σ_{n=0}^{m} L_{m−n,n} P_{m−n}(x) P_n(y)   (4)

Since we are dealing with discrete images, the Legendre moments in equation (3) are normally approximated by replacing the integration with a summation, which in turn introduces approximation error. This issue has been addressed in [7], where an exact Legendre moment computation method is proposed. This study uses the approach described in [7] to compute the Legendre moments, and equation (4) to reconstruct the image from a limited number of moment coefficients.

B. Tchebichef Moment

The (m+n)th-order Tchebichef moment of a digital image f(x,y) of size N × N is defined by:

T_mn = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} t̃_m(x) t̃_n(y) f(x,y)   (5)

where {t̃_n} are the normalised Tchebichef polynomials, defined as [8]:

t̃_n(x) = (n! / √((2n)! C(N+n, 2n+1))) Σ_{k=0}^{n} (−1)^(n−k) C(N−1−k, n−k) C(n+k, n) C(x, k)   (6)

in which C(a,b) denotes the binomial coefficient. The inverse Tchebichef moment transform is given by:

f(x,y) = Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} T_mn t̃_m(x) t̃_n(y)   (7)

III. COMPRESSION TEST ON MEDICAL X-RAY IMAGES

This experimental study focuses on x-ray images: 22 x-ray images downloaded from the internet were used for the test. Their sizes range from 275x183 (hand x-ray) to 2828x2320 (chest x-ray). Some of the x-ray images contain word descriptions. Only the luminance (brightness level, no colour information) of each x-ray image was extracted for processing. Fig. 1 shows some of the x-ray images used in this experimental study.
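The Tchebichef transform pair of Eqs. (5)-(7) can be sketched directly; the following unoptimised illustration assumes the orthonormal normalisation by √ρ(n,N), with ρ(n,N) = (2n)! C(N+n, 2n+1). With all N² moments kept the reconstruction is exact, and truncating to low orders gives the lossy approximation studied here:

```python
import numpy as np
from math import comb, factorial

def tcheb_matrix(N):
    """Orthonormal Tchebichef matrix T[n, x] = t~_n(x), with t_n(x)
    evaluated from the explicit sum in Eq. (6) and squared norm
    rho(n, N) = (2n)! * C(N+n, 2n+1)."""
    T = np.zeros((N, N))
    for n in range(N):
        rho = factorial(2 * n) * comb(N + n, 2 * n + 1)
        for x in range(N):
            t = factorial(n) * sum(
                (-1) ** (n - k) * comb(N - 1 - k, n - k)
                * comb(n + k, n) * comb(x, k)
                for k in range(n + 1))
            T[n, x] = t / np.sqrt(rho)
    return T

def tcheb_moments(f):
    """Eq. (5): Tmn = sum_x sum_y t~_m(x) t~_n(y) f(x, y)."""
    T = tcheb_matrix(f.shape[0])
    return T @ f @ T.T

def tcheb_reconstruct(M):
    """Eq. (7): f(x, y) = sum_m sum_n Tmn t~_m(x) t~_n(y)."""
    T = tcheb_matrix(M.shape[0])
    return T.T @ M @ T
```

For the 8x8 blocks used here, zeroing all moments with m+n above the chosen order before calling `tcheb_reconstruct` reproduces the truncated reconstructions of this section.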
Fig. 1 Sample of x-ray images

First, the images are decomposed into 8x8 blocks, and their corresponding moment values are calculated using equations (3) and (5). After that, only a subset of the moment coefficients is used for reconstruction: moments of order (m+n) up to 2 through 8 are used. The reconstructed image is compared with the original image using the mean square error (MSE), given by:
MSE = (1/(M×N)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (f(x,y) − f̂(x,y))²   (8)

where M×N is the dimension of the image and f̂(x,y) is the approximation of the original image (the reconstructed image) using a limited number of moment coefficients. Fig. 2 shows the average MSE of the 22 reconstructed x-ray images using different numbers of coefficients obtained from the moment transforms and the Discrete Cosine Transform (DCT).
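The block-transform truncation and the MSE of Eq. (8) can be reproduced in a few lines. The DCT-II basis is written out explicitly below, and the (u+v) ≤ order cutoff mirrors the (m+n) moment-order rule used in this study; the test block is synthetic, not one of the paper's x-ray images:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II basis matrix C[u, x]."""
    C = np.zeros((N, N))
    for u in range(N):
        a = np.sqrt((1.0 if u == 0 else 2.0) / N)
        for x in range(N):
            C[u, x] = a * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    return C

def truncated_dct_reconstruction(block, order):
    """Transform a block, keep coefficients with u + v <= order, invert."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    u, v = np.indices(coeffs.shape)
    coeffs[u + v > order] = 0.0
    return C.T @ coeffs @ C

def mse(f, g):
    """Eq. (8): mean squared error between an image and its approximation."""
    return float(np.mean((np.asarray(f, float) - np.asarray(g, float)) ** 2))

# Smooth synthetic 8x8 block: the error shrinks as the order cutoff grows,
# and order 14 (all coefficients of an 8x8 block) is an exact inverse.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 10
errors = [mse(block, truncated_dct_reconstruction(block, k)) for k in (2, 5, 14)]
```

Swapping `dct_matrix` for an orthonormal moment matrix applies the same truncation experiment to the moment transforms.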
As can be seen from Fig. 2, the orthogonal moments perform very closely to the DCT. There is only a marginal difference between the DCT and the Discrete Tchebichef Transform (DTT) in terms of the MSE of the reconstructed images. In general, the approximation error (MSE) decreases when more moment orders are used, but higher moment orders also mean a lower compression ratio. From our observation, when moments of order (m+n) up to 5 are used, the compression artifacts become unnoticeable. The Legendre moments have a slightly higher MSE (and thus, theoretically, poorer quality) for the same number of coefficients used in the reconstruction.
Fig. 2 Mean square error (MSE) of the reconstructed images using different moment orders

IV. NOISE ROBUSTNESS TEST

The same experimental procedures are repeated, this time first applying Gaussian white noise with zero mean and variance 0.001 to the x-ray images before processing. The purpose is to test the noise resistance of the orthogonal moment and DCT coefficients when there are slight white-noise distortions on the x-ray images. Gaussian white noise is random noise with a flat power spectrum, and is common in most digital signal applications.

Fig. 3 Mean square error (MSE) of the reconstructed images under the influence of Gaussian white noise with zero mean and variance 0.001, using different moment orders

Fig. 4 (a) Cropped noisy chest x-ray image (380x380); (b)-(d) the cropped images reconstructed using DCT, DTT and Legendre moments, respectively, up to order (m+n) = 5. The MSE is around 26-28, an unnoticeable distortion at this scale
This time, the MSE is calculated between the original image and the reconstruction computed from the distorted image. The graph in Fig. 3 shows the MSE between the reconstructed noisy images and the original images. Again, the DCT and DTT show a similar trend in response to the Gaussian white noise: their MSE decreases as more moment coefficients are used, up to the 5th moment order; after that, the MSE increases when more coefficients are added. This is not surprising, as it is a well-known fact that high-order moments, or high spatial frequencies, are more sensitive to random white noise. The Legendre moments, however, perform best in this case: their MSE increases only slightly after the 6th moment order, and is the lowest from the 5th moment order upwards. Thus, the Legendre moments have the highest noise robustness among the three transformation methods. One might be interested in how the reconstructed images look when only a limited number of moment (or DCT) coefficients is used. Fig. 4 shows a cropped noisy x-ray image with its corresponding reconstructions using orthogonal moments and DCT coefficients up to order 5. The original uncropped image is the chest x-ray image shown in Fig. 1. As the figure shows, all the reconstructed images look "cleaner" than the noisy image in Fig. 4(a), since a low moment order is used. At this moment order, the MSE between the reconstructed images and the original image is around 26-28, and the reconstructed images in Fig. 4(b)-(d) all look alike, as they have almost equal MSE at low moment orders.
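The behaviour in Fig. 3, where the MSE falls and then rises as more coefficients are kept, has a simple explanation: white noise spreads its energy evenly, on average, over any orthonormal basis, so each extra coefficient kept adds back a roughly equal share of noise. The toy computation below is illustrative only (an arbitrary orthonormal basis, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_coeffs = 64                              # an 8x8 block has 64 coefficients

# Any orthonormal basis will do for the argument; build one via QR.
Q, _ = np.linalg.qr(rng.normal(size=(n_coeffs, n_coeffs)))

noise = rng.normal(0.0, 1.0, n_coeffs)     # zero-mean white noise "block"
coeffs = Q.T @ noise                       # coefficients in the new basis

total = float(np.sum(noise ** 2))          # Parseval: energy is preserved
kept = float(np.sum(coeffs[:21] ** 2))     # 21 coeffs ~ orders (m+n) <= 5
# On average kept/total ~ 21/64: a low-order reconstruction discards about
# two thirds of the noise energy, while adding high-order coefficients
# puts it back, raising the MSE again.
```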
V. CONCLUSIONS

In this paper, a preliminary study of the compression efficiency and noise robustness of orthogonal moments on medical images (x-ray images in particular) has been carried out. It is found that the discrete Tchebichef moments have compression efficiency comparable to the DCT, as they can represent the medical images with visually similar quality using about the same number of coefficients. On the other hand, though the Legendre moments need more coefficients for the same quality of reconstructed image, they
have the advantage of higher resistance to random white noise. This advantage makes the Legendre moments a more desirable choice of transformation when transmitting medical images through a noisy channel. As only a limited number of images is used in this experiment, and the x-ray images had been compressed with JPEG prior to the test (because the images were in JPEG format), the experimental results may be biased. More studies have to be done with raw x-ray images to obtain a more solid and conclusive result. Nevertheless, this paper shows the potential of orthogonal moments for the compression of biomedical images, not only due to their comparable compression ability, but also due to their desirable noise robustness.
ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their valuable comments on this paper.
REFERENCES

1. DICOM, http://medical.nema.org.
2. Zukoski, M. J., Boult, T., and Iyriboz, T. (2006) A novel approach to medical image compression, Int. J. Bioinformatics Res. Appl. 2, 89-103.
3. Iyriboz, T. A., Seblak, S., Addis, K. A., and Zukoski, M. J. (1999) A comprehensive comparison of lossy medical image compression methods, Radiology 213P, 104-104.
4. Rocha, L., Velho, L., and Carvalho, P. C. P. (2002) Image moments-based structuring and tracking of objects, In Computer Graphics and Image Processing, 2002. Proceedings. XV Brazilian Symposium on, pp 99-105.
5. Huazhong, S., Limin, L., and Coatrieux, J. L. (2007) Moment-Based Approaches in Imaging. 1. Basic Features [A Look At ...], Engineering in Medicine and Biology Magazine, IEEE 26, 70-74.
6. Kreyszig, E. (1988) Advanced Engineering Mathematics, 6 ed., John Wiley and Sons.
7. Pew-Thian, Y., and Paramesran, R. (2005) An efficient method for the computation of Legendre moments, Pattern Analysis and Machine Intelligence, IEEE Transactions on 27, 1996-2002.
8. Mukundan, R., Ong, S. H., and Lee, P. A. (2001) Image analysis by Tchebichef moments, Image Processing, IEEE Transactions on 10, 1357-1364.

Author: Kim-Han Thung
Institute: University of Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Activities of Oxy-Hb and DeOxy-Hb on Motor Imaging and Motor Execution by Near-Infrared Spectroscopy

D.H.T. Nguyen, N.V.D. Hau, T.Q.D. Khoa, and V.V. Toi

Biomedical Engineering Department, International University of Vietnam National University, Ho Chi Minh City, Viet Nam
Abstract— A central task of a Brain-Computer Interface (BCI) is to record and interpret the motor activities of the human brain. To implement this task, several methods have been used, such as electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG), and near-infrared spectroscopy (NIRS). In recent years, NIRS technology has been widely used because of its ability to record localized brain activity with a spatial resolution on the order of centimetres. In this paper, we used a near-infrared spectroscopy (NIRS) system to measure the oxy- and deoxy-hemoglobin changes at the motor cortex of two healthy volunteers during left-hand and right-hand motor imagery and motor execution. We identify the difference in brain activities when the subject imagines two opposite actions (turn left and turn right), and the difference between the left and right hemispheres during the imagery task. We also compare the differences between motor imagery and motor execution. Keywords— Motor imaging, neuron activity, Near Infrared Spectroscopy, Brain Computer Interface.
I. INTRODUCTION The rise of BCI technology allows scientists to communicate with and control external devices using brain signals rather than the brain's normal output pathways of peripheral nerves and muscles. Disabled patients with appropriate physical care, and the cognitive ability to communicate with a BCI, can continue to live with a reasonable quality of life over extended periods of time [1]. There are two different types of BCI: invasive and noninvasive. Invasive BCIs are realized with electrodes implanted in brain tissue, whereas noninvasive BCIs use recordings in humans such as electroencephalography (EEG), electrocorticography (ECoG) [2,3], magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and near-infrared spectroscopy (NIRS) [4]. Measuring brain signals using EEG is far less practical as a signal source for a brain-computer interface; it is more expensive and less robust than the NIRS scanner, and it does not allow subjects to move around [5]. NIRS has many advantages, such as flexibility of use, portability, metabolic specificity, good spatial resolution, localized information, high sensitivity in detecting small substance concentrations, and affordability [6].
The purpose of this study was to apply NIRS to the development of a BCI. Motor imagery and motor execution of turning left and turning right were chosen as the control paradigm, following earlier BCI [7,8] and NIRS [9,10,11] studies. When the brain is activated, it requires energy, which is supplied by blood flow bringing oxygenated hemoglobin. As the oxygen is used up, the hemoglobin becomes deoxygenated and flows out of the brain region to be replenished. Regional brain activation is accompanied by increases in regional cerebral blood flow (rCBF) and the regional cerebral oxygen metabolic rate [7]. Therefore, at an activated region this NIRS experiment is expected to observe an increase of Oxy-Hb and a decrease of DeOxy-Hb. Electrophysiological studies have shown that brain activation during motor imagery is similar to the activation during actual execution of movement [12]; the primary motor cortex is active in both motor imagery and motor execution. Benaron et al. [13] demonstrated an optical response in the contralateral hemisphere around 5–8 s after the onset of movement, and similar optical responses in NIRS signals during overt and covert hand movements have been reported [14, 15]. NIRS can detect the concentrations of oxygenated and deoxygenated hemoglobin [4]; accordingly, the relationship between brain oxygen metabolism and brain activation can be interpreted [4, 16]. In this paper, we used NIRS to measure brain activity while subjects imagined turning left or right and while they moved the right or left arm and leg. From these data we analyze how the Oxy-Hb and DeOxy-Hb concentrations differ between the two hemispheres within one action, between the two opposite actions (left/right), and between motor execution and motor imagery.
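NIRS instruments recover these Oxy-Hb and DeOxy-Hb concentration changes from light attenuation at two (or more) wavelengths, conventionally via the modified Beer–Lambert law. A minimal sketch follows; the extinction coefficients, path-length factor and function name are illustrative assumptions, not the calibration values of any particular instrument.

```python
# Modified Beer-Lambert law: convert changes in optical density (dOD)
# at two wavelengths into changes in oxy-/deoxy-hemoglobin concentration.
# Coefficients below are illustrative placeholders only.

def mbll(d_od_1, d_od_2, eps, dpf, distance_cm):
    """Solve the 2x2 system dOD = L * E @ [dHbO, dHbR].

    eps: {wavelength_index: (eps_HbO, eps_HbR)} in 1/(mM*cm)
    dpf: differential path-length factor (dimensionless)
    distance_cm: source-detector separation (3 cm in this study)
    Returns (dHbO, dHbR) in mM.
    """
    L = dpf * distance_cm
    a, b = eps[0]                       # wavelength-1 coefficients
    c, d = eps[1]                       # wavelength-2 coefficients
    det = (a * d - b * c) * L           # determinant of the scaled system
    d_hbo = (d * d_od_1 - b * d_od_2) / det
    d_hbr = (a * d_od_2 - c * d_od_1) / det
    return d_hbo, d_hbr

# Illustrative coefficients (order: (HbO, HbR)) at two NIR wavelengths.
eps = {0: (0.7, 1.1), 1: (1.0, 0.7)}
d_hbo, d_hbr = mbll(0.02, 0.03, eps, dpf=6.0, distance_cm=3.0)
```

An opposite sign of `d_hbo` and `d_hbr` at a channel is the pattern the paper interprets as local activation.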
II. METHODOLOGY A. Subjects Two 21-year-old males volunteered for this experiment. Neither subject had a neurological or psychiatric history or was on medication. Before the experiment, each subject filled out a questionnaire, which was kept confidential and included the subject's identity, age and gender. The tenets of the Declaration of Helsinki were followed; the local Institutional Review Board approved the study and informed consent was obtained from all subjects.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 591–595, 2011. www.springerlink.com
B. Experimental Procedure In this study, NIRS data were measured during motor imagery and motor execution tasks. During the experiment, the subject sat on a chair in a quiet room in front of a computer screen that displayed left and right arrows. Each task lasted 80 s and had three periods: the first 20 s and the last 20 s were rest periods, and the imagery or movement period took 40 s (Fig. 1). Each subject performed 11 tasks: imagining left, right, or randomly; moving the left/right arm fast or slowly; and moving the left/right leg fast or slowly. Each task was measured twice, with 3 loops per measurement, and data for each task were collected in separate sessions.
Fig. 1 Experimental protocol
A commercial NIRS instrument (FOIRE-3000, Shimadzu Corporation, Japan) was used to measure the change in concentration of Oxy-Hb and DeOxy-Hb during left- and right-hand motor imagery and motor execution. The illuminator and detector optodes were placed on the scalp, with each detector optode fixed at a distance of 3 cm from the illuminator optodes. The optodes were arranged on the left and right hemispheres of the subject's head, above the motor cortex, around the C3 (left hemisphere) and C4 (right hemisphere) areas (International 10–20 System). A pair of illuminator and detector optodes formed one channel; four illuminators and four detectors in this arrangement resulted in 10 channels on each hemisphere, giving a continuous-wave 20-channel NIRS system over the motor cortex of the two healthy volunteers. The positions of the probes are mapped in Fig. 2.
Fig. 2 Placement of probes
III. RESULTS AND DISCUSSION A. Motor Imagery Part We measured the change of Oxy-Hb and DeOxy-Hb in the brain during left and right motor imagery tasks, and then checked whether there is any difference in brain activity when imagining the different tasks. After collecting and analyzing the data, we found that during the imagery task the contralateral hemisphere showed clearer activity (Oxy-Hb increasing and DeOxy-Hb decreasing during the task) than the ipsilateral hemisphere (Fig. 3). This clearly showed the opposite changes in the concentrations of Oxy-Hb and DeOxy-Hb between left and right imagery.
Fig. 3 Oxy-Hb and DeOxy-Hb in the left and right hemispheres change during the right imagery task
We also wanted to identify the pattern of these changes during the imagery task. We collected data from the two subjects over 10 measurements of the imagery task for analysis and comparison. During the task, some channels in the contralateral hemisphere showed an Oxy-Hb increase and a DeOxy-Hb decrease, whereas channels on the ipsilateral hemisphere sometimes had similar responses but to a smaller extent or in a reversed manner (an increase in deoxygenated hemoglobin and a decrease in oxygenated hemoglobin) (Fig. 4). B. Motor Execution Part In this part, we wanted to know whether Oxy-Hb and DeOxy-Hb during motor execution change according to the same pattern as during motor imagery, and we also compared fast and slow movements.
I. Moving Leg While measuring brain activity during right- and left-leg movements, we observed the same pattern: Oxy-Hb increases and DeOxy-Hb decreases during the movement task and then returns to baseline in the rest period (Fig. 5).
Fig. 6 Oxy-Hb and DeOxy-Hb in the left and right hemispheres change during slow leg movement
Fig. 4 Comparison between the two hemispheres during the imagery task
Fig. 7 Oxy-Hb and DeOxy-Hb in the left and right hemispheres change during fast arm movement
Fig. 5 Oxy-Hb increases and DeOxy-Hb decreases during fast movement of the left leg (a) and the right leg (b)
Fig. 8 Oxy-Hb and DeOxy-Hb in the left and right hemispheres did not change clearly during slow arm movement
There are differences between fast and slow movements. Figure 5 clearly shows that Oxy-Hb and DeOxy-Hb change significantly during the fast-movement task, but only slightly during slow movement (Fig. 6). II. Moving Arm The moving-arm task gave the same result as the moving-leg task: during the movement, Oxy-Hb increases and DeOxy-Hb decreases (Fig. 7). For fast movement we can identify the Oxy-Hb and DeOxy-Hb changes; for slow movement, however, we cannot, because the changes are too small (Fig. 8).
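The averaged hemodynamic traces reported in the figures can be produced by a simple baseline-corrected block average over the repeated trials of a task. A pure-Python sketch; the function name and trial layout are illustrative assumptions, not the actual analysis pipeline.

```python
# Baseline-corrected block averaging of repeated task trials: subtract
# each trial's mean over its initial rest period, then average trials.

def block_average(trials, baseline_samples):
    """Average equal-length trials after subtracting each trial's
    mean over its first `baseline_samples` (rest-period) samples."""
    n = len(trials[0])
    avg = [0.0] * n
    for trial in trials:
        base = sum(trial[:baseline_samples]) / baseline_samples
        for i, v in enumerate(trial):
            avg[i] += (v - base) / len(trials)
    return avg

# Two toy "trials": rest samples first, then a task-related rise.
trials = [[1.0, 1.0, 2.0, 3.0], [2.0, 2.0, 3.0, 4.0]]
avg = block_average(trials, baseline_samples=2)
# avg == [0.0, 0.0, 1.0, 2.0]
```

Baseline correction before averaging is what makes traces from different trials (and different absolute hemoglobin levels) comparable.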
A. Random Task In this section, the subjects imagined randomly; they did not know what to imagine next. Each subject performed 9 tasks imagining left and 9 tasks imagining right. The results show that, although the subjects could not predict what to imagine next, Oxy-Hb still increases and DeOxy-Hb decreases in the contralateral hemisphere during the imagery task (Fig. 9).
Fig. 9 Average signal in the contralateral hemisphere during the left/right imagery task
IV. DISCUSSION In this section, we compare our results to those of Sitaram et al. [10], who compared hemodynamic responses (signals in the right and left hemispheres during left and right imagery). With their procedure of 10 s rest – 10 s imagery/tapping task – 10 s rest, the signal did not have enough time to return to a normal or stable state, so the Oxy-Hb and DeOxy-Hb changes during the task could not be seen clearly. We therefore ran our experiment with a different procedure, increasing the time for each task to 20 s rest – 40 s movement/imagery task – 20 s rest. With this we obtained a better result: we can identify the Oxy-Hb increase and DeOxy-Hb decrease during the task and their return to baseline during rest.
V. CONCLUSION Through these experiments, we see that when subjects moved their legs or arms, or even just imagined doing so, Oxy-Hb and DeOxy-Hb changed. Based on this relationship, brain activation can be interpreted using NIRS. In both motor imagery and motor execution, Oxy-Hb increased and DeOxy-Hb decreased during the task in a specific hemisphere. Moreover, Oxy-Hb and DeOxy-Hb changed differently depending on the speed and amplitude of the movement.
ACKNOWLEDGMENT We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting our attendance and presentation. This work was partly supported by a grant from Shimadzu Asia-Pacific Pte. Ltd. and a research fund from the International University of Vietnam National Universities in Ho Chi Minh City. We also would like to thank Dr. Nguyen Xuan Cam Huyen, Dr. Le Thi Tuyet Lan, and Dr. Nguyen Thi Doan Huong from Ho Chi Minh City Medicine and Pharmacy University for their valuable information about the anatomy, physiology and function of the human brain. Finally, an honorable mention goes to our volunteers and friends for supporting us in completing this project.
REFERENCES
1. Wolpaw, J.R., Birbaumer, N., Heetderks, W.J., McFarland, D.J., Peckham, P.H., Schalk, G., Donchin, E., Quatrano, L.A., Robinson, C.J., Vaughan, T.M., 2000a. Brain–computer interface technology: a review of the first international meeting. IEEE Trans. Rehabil. Eng. 8, 164–173. 2. Birbaumer, N., Hinterberger, T., Kubler, A., Neumann, N., 2003. The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome. IEEE Trans. Neural Syst. Rehabil. Eng. 11, 120–123. 3. Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., Kotchoubey, B., Kubler, A., Perelmouter, J., Taub, E., Flor, H., 1999. A spelling device for the paralysed. Nature 398, 297–298. 4. Wobst, P., Wenzel, R., et al., 2001. Linear aspects of changes in deoxygenated hemoglobin concentration and cytochrome oxidase oxidation during brain activation. NeuroImage 13, 520–530. 5. Villringer, A., Obrig, H., 2002. Near Infrared Spectroscopy and Imaging. Elsevier Science, USA. 6. Coyle, S., Ward, T., et al., 2007. Brain–computer interface using a simplified functional near-infrared spectroscopy system. J. Neural Eng. 4, 219–226. 7. Pfurtscheller, G., Neuper, C., Schlogl, A., Lugger, K., 1998. Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters. IEEE Trans. Rehabil. Eng. 6, 316–325.
8. Pfurtscheller, G., Neuper, C., Guger, C., Harkam, W., Ramoser, H., Schlogl, A., Obermaier, B., Pregenzer, M., 2000. Current trends in Graz brain–computer interface (BCI) research. IEEE Trans. Rehabil. Eng. 8, 216–219. 9. Niide, W., Tsubone, T., Wada, Y., 2009. Identification of moving limb using near-infrared spectroscopic signals for brain activation. Proceedings of the International Joint Conference on Neural Networks, Atlanta, Georgia, USA, June 14–19. 10. Sitaram, R., Zhang, H., et al., 2007. Temporal classification of multichannel near-infrared spectroscopy signals of motor imagery for developing a brain–computer interface. NeuroImage 34, 1416–1427. 11. Muroga, T., Tsubone, T., Wada, Y., 2006. Estimation algorithm of tapping movement by NIRS. SICE-ICASE International Joint Conference. 12. Beisteiner, R., Hollinger, P., Lindinger, G., Lang, W., Berthoz, A., 1995. Mental representation of movements. Brain potentials associated with imagination of hand movements. Electroencephalogr. Clin. Neurophysiol., 183–193.
13. Benaron, D.A., Hintz, S.R., Villringer, A., Boas, D., Kleinschmidt, A., Frahm, J., Hirth, C., Obrig, H., van Houten, J.C., Kermit, E.L., Cheong, W.F., Stevenson, D.K., 2000. Noninvasive functional imaging of human brain using light. J. Cereb. Blood Flow Metab. 20, 469–477. 14. Sitaram, R., Hoshi, Y., Guan, C. (Eds.), 2005. Near Infrared Spectroscopy based Brain–Computer Interface, vol. 5852. 15. Coyle, S., Ward, T., Markham, C., McDarby, G., 2004. On the suitability of near-infrared (NIR) systems for next-generation brain–computer interfaces. Physiol. Meas. 25, 815–822. 16. Sato, H., Fuchino, Y., et al., 2005. Intersubject variability of near-infrared spectroscopy signals during sensorimotor cortex activation. J. Biomed. Opt. 10, 044001.
An Overview: Segmentation Method for Blood Cell Disorders M.F. Miswan1, J.M. Sharif1, M.A. Ngadi1, D. Mohamad1, and M.M. Abdul Jamil2
1 Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia, Skudai, Johor 2 Modeling and Simulation Research Laboratory, Electronic Engineering Department, Faculty of Electrical & Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor
Abstract— Blood is made up of 45% blood cells and 55% plasma. Blood cells consist of red blood cells (RBCs), white blood cells (WBCs) and platelets (thrombocytes). These cells carry oxygen and carbon dioxide in the blood, act as antibodies, and function in the clotting mechanism. A complete blood count (CBC) is done to measure all of the blood cells, including hematocrit and hemoglobin. Any reading beyond the normal range indicates a blood cell disorder such as leukemia, aplastic anemia, and others. Many analysis methods have been developed to give a precise and reliable reading for the CBC, such as flow cytometric immunophenotyping, manual blood reading, blood smears, segmentation and others. Segmentation is among the best analysis approaches in terms of cost and time compared to the other methods. Continuous improvement of 3D image segmentation will lead to better results in the future. Keywords— blood cell, complete blood count, blood disorder, segmentation, 3 dimensional.
I. INTRODUCTION To understand how blood cells can be analyzed, we must first have a basic understanding of the anatomy and physiology of blood itself. The field of hematology covers the study of blood and the tissues that form, store or circulate blood cells. The transport of nutrients, oxygen, and hormones to cells throughout the body is the function of the circulatory system. The circulatory system also removes metabolic wastes (carbon dioxide, nitrogenous wastes, and heat) and regulates body temperature, fluid pH, and the water content of cells. Protection of the body against foreign microbes and toxins is carried out by the white blood cells, while platelets help in the clotting mechanisms that protect the body from blood loss after injuries. The formation of blood elements (hematopoiesis) takes place in the red bone marrow in the epiphyses of the long bones, the flat bones, the vertebrae, and the pelvis. Red bone marrow is where all the developmental stages, including stem cells, are present and are stimulated to develop into the many types of blood cells. Most blood cell formation occurs through mitosis, producing various blast cells, each of which divides and differentiates into red blood cells (RBCs), white blood cells (WBCs), or platelets. Blood consists of cells and cell fragments (erythrocytes, leukocytes, thrombocytes) and water with dissolved molecules (plasma). Cells make up approximately 45% of blood and 55% is plasma. Erythrocytes, or red blood cells (RBCs), transport oxygen (O2) and carbon dioxide (CO2) in the blood, using the protein hemoglobin to bind them. As mentioned earlier, leukocytes, or white blood cells (WBCs), function to protect the body from foreign microbes and toxins. They can be classified into two groups, granulocytes and agranulocytes, based upon the presence or absence of granules in the cytoplasm and the shape of the nucleus. Granulocytes contain numerous granules in the cytoplasm and have a nucleus that is irregular in shape with lobes, whereas agranulocytes do not have visible granules in the cytoplasm and the nucleus is not lobed. There are three types of granulocytes: neutrophils, eosinophils, and basophils. Neutrophils are the first to arrive at a site of infection; when they are destroyed while engulfing bacteria by phagocytosis, they contribute, with other dead tissue, to the formation of pus. Eosinophils increase in number during parasitic infections and allergic reactions, actively phagocytizing the complexes formed by the action of antibodies on antigens (foreign substances). Basophils release histamine (inflammatory response) in response to damaged tissue and pathogen invasion. There are two types of agranulocytes: lymphocytes and monocytes. Lymphocytes play a role in the immune response, which the immune system regulates by producing T lymphocytes and B lymphocytes. Monocytes enlarge to become macrophages, which engulf microbes and cellular debris. Platelets (thrombocytes) are fragments of huge cells called megakaryocytes. Platelets activate hemostasis (the stoppage of bleeding) by adhering to damaged blood vessel walls and releasing enzymes, and they circulate for approximately 10 days on average.
Next, to diagnose and manage a disease, blood tests must be performed by a physician. In addition to the examination of blood cells, important information about the functioning of bodily systems can be obtained from the chemical composition of the blood, which includes cholesterol, thyroid hormone, potassium, and numerous other substances dissolved in the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 596–599, 2011. www.springerlink.com
plasma and circulated in the blood. For a chemical blood test, blood is drawn from a patient's vein, placed in an empty tube and usually allowed to clot; the fluid portion of the blood after clotting, called serum, is then used for the various chemical analyses.
Table 1 Normal blood count for men and women

Blood cell type   Men                            Women
RBC               4.5–6.0 million/microliter     4.0–5.0 million/microliter
WBC               4.5–11 thousand/microliter     4.5–11 thousand/microliter
Platelet          150–450 thousand/microliter    150–450 thousand/microliter
Hematocrit        42% to 50%                     36% to 45%
Hemoglobin        14–17 grams/100 milliliters    12–15 grams/100 milliliters

A. Complete Blood Count (CBC) A complete blood count, or CBC, refers to the examination of all three blood cell types; it also tests hemoglobin and hematocrit. Some refer to the results as a hemogram. Hemoglobin is the protein used by red blood cells to distribute oxygen to other tissues and cells in the body, and hematocrit refers to the fraction of the blood that is occupied by red cells. Blood counts normally show a normal reading in a periodic health examination, like other features of the examination. The count is a sensitive barometer of many illnesses, and its measurement is an important part of a standard periodic health examination: the reading can reveal infections, diseases and other illnesses. For example, the white cell count may be elevated if a bacterial infection is present, and the red cell count may be decreased as a result of a specific vitamin deficiency. A decreased platelet count is called thrombocytopenia [1]. As mentioned above, the measurement of blood cells can contribute to the diagnosis of many disorders, so if someone has a blood cell disorder, the count can be an important index of the response of the disease to treatment. It is also important for learning the effects of drug treatment or radiation therapy; the physician can determine the effectiveness of a drug given to the patient from the blood cell count [1]. This paper reviews some of the methods used to examine blood cell quantities and how each method uses the CBC to obtain all the data on the blood cells. The methods are compared, and the segmentation method is proposed in this research for several reasons.
II. BLOOD CELL DISORDERS Normal blood counts fall within a range established by testing healthy men and women of all ages. The cell counts are compared to those of healthy individuals of similar age and sex. If a cell count is higher or lower than normal, the physician will try to determine the explanation for the abnormal result. The approximate normal ranges of blood cell counts for healthy adults are given in Table 1.
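The comparison of a count against the normal ranges in Table 1 is a simple range check. A sketch follows, using the table's values; the dictionary layout and function name are illustrative, not part of any cited system.

```python
# Flag CBC values that fall outside the normal adult ranges of Table 1.
# Ranges are taken from the table; units follow the table's rows.

NORMAL_RANGES = {
    "men":   {"RBC": (4.5, 6.0), "WBC": (4.5, 11.0), "Platelet": (150, 450),
              "Hematocrit": (42, 50), "Hemoglobin": (14, 17)},
    "women": {"RBC": (4.0, 5.0), "WBC": (4.5, 11.0), "Platelet": (150, 450),
              "Hematocrit": (36, 45), "Hemoglobin": (12, 15)},
}

def flag_abnormal(counts, sex):
    """Return {measure: 'low' | 'high'} for values outside the range."""
    flags = {}
    for measure, value in counts.items():
        low, high = NORMAL_RANGES[sex][measure]
        if value < low:
            flags[measure] = "low"
        elif value > high:
            flags[measure] = "high"
    return flags

flags = flag_abnormal({"WBC": 25.0, "Platelet": 90, "RBC": 4.8}, "men")
# flags == {'WBC': 'high', 'Platelet': 'low'}
```

An elevated WBC flag here corresponds to the bacterial-infection example above, and a low platelet flag to thrombocytopenia.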
A differential count, sometimes referred to as a "diff," is a breakdown of the different types of white blood cells, also called leukocytes. The observer can also tell whether the white cells in the blood are normal in appearance. The five types of white cells that are counted are neutrophils (60%), lymphocytes (30%), monocytes (5%), eosinophils (4%), and basophils (below 1%). Several blood cell disorders indicated by an abnormal blood count are listed below.
A. Leukemia
Diseases of the white blood cells, or leukocytes, may indicate leukemia. Different types of leukemia give different blood counts: a person with acute leukemia may have a low, normal, or high white blood cell count, and the average count may be many times higher than 7,000 white cells per microliter of blood. In short, the leukemic white blood cells in acute leukemia patients do not function normally, whereas patients with chronic leukemia typically have an increase in white blood cells [2]. B. Aplastic Anaemia (AA) Aplastic anaemia is a very rare condition affecting all three types of blood cells normally produced by stem cells in the bone marrow. It shows low blood cell counts in all three cell types: red cells, white cells and platelets. When the bone marrow is examined, it is usually found to be hypoplastic (low growth of blood-forming stem cells) or aplastic (no growth of blood-forming stem cells) [3].
III. ANALYZING METHODS Several tools or applications are used to perform the blood test or CBC. One of them uses light transmission in a pressure-driven slit-flow system with a vibrational mechanism to analyze red blood cell (RBC) aggregation. RBC aggregation helps determine blood viscosity, which can
detect diseases such as diabetes, thrombosis, myocardial infarction, vascular diseases and hematological pathology from increased RBC aggregation. The measurement begins with a vibration generator, which disaggregates the RBC aggregates stored in the slit. The shear stress then decreases exponentially, and the instantaneous pressure and transmitted light intensity are measured over time. A rapid elongation of the RBCs occurs when an abrupt shearing flow is applied after disaggregation, and it is lost as the shear stress decreases. The RBCs then start to re-aggregate and the transmission intensity increases, giving aggregation indices detected by a curve-fitting program [4]. Next, one study compared analysis methods for white blood cells (WBCs). It involved the double-hydrodynamic sequential system (DHSS) in the PENTRA 80 Automated Blood Cell Analyzer, compared with manual microscopy counts, a multiangle polarized side-scatter technology instrument, and flow cytometric immunophenotyping. The research shows a good correlation for the readings of neutrophils and lymphocytes, with R2 ≥ 0.92 and R2 ≥ 0.88 respectively [5]. For the readings of eosinophils and monocytes it shows a lower correlation, especially for conventional microscopy. Another study shows that the flow cytometry concept can perform a red cell count in 10 seconds to within ±2.1 percent (5,000 cells counted in 10 seconds for a 5-million count) [6]. This is supported by an experiment using the Agilent 2100 bioanalyzer, which applies flow cytometric immunophenotyping, antibody staining and cell fluorescence assays without a washing step to identify peripheral whole blood cells. The very small volumes of samples and reagents, the analysis of a low number of cells (only 30,000 cells per sample), and the ease of use of the Agilent 2100 Bioanalyzer are the specific advantages of this microfluidic chip-based technology compared to a flow-cytometry-based reference [7].
In research applying the segmentation method, automated image detection in blood smears and cytometry segmentation is used: nucleated cells in Wright's giemsa-stained blood smears are automatically detected and then segmented. The method involves acquisition of spectral images and preprocessing of the acquired images to obtain high-quality images, remove random noise and correct aberration and shading effects. It then detects the single and touching cells in the scene in order to segment the nucleated cells from the rest of the scene. Using the initial cell masks, nucleated cells that are touching are detected and separated. Simple features are then extracted and conditions applied such that single nucleated cells are finally selected. Next, the cells are segmented into nuclear and cytoplasmic regions. The success rate in segmenting the nucleated cells is between 81 and 93%. The major errors in segmentation of
the nucleus and the cytoplasm in the recognized nucleated cells are 3.5% and 2.2%. Lastly, the segmented regions are post-processed. The advantages of the algorithms used are that they are simple, cover both touching and non-touching cells, and both detect and segment nucleated cells using conventionally prepared smears [8]. In another segmentation approach, Lohitha is software that can be used for recognizing and analyzing blood cells and producing blood count reports. Lohitha is capable of performing the standard counts, comprising RBC counts, WBC counts, PLT counts and differential counts. The operation of Lohitha is based purely on image processing and computer vision technologies. The input is an image of an already prepared slide containing a film of blood, taken from a special camera attached to an ordinary microscope. The software does not deal with the preparation of the slides to be viewed through the microscope; users are required to prepare the slides as they normally do. The main objective of Lohitha is to provide a software solution that is cost effective as well as efficient for countries like Sri Lanka, to be widely utilized in the healthcare industry. The software is designed to be extensible, tiered and highly efficient, with great consideration of ease of use for the end user [9].
Fig. 1 Overall flow of processing in the Lohitha software
In [10], a blood-cell image classification system is shown that can analyze and distinguish blood cells in peripheral blood images. To distinguish their abnormalities, the red and white blood cells in an image acquired from a microscope with a CCD camera are segmented, and then various feature extraction algorithms are applied to classify them. In addition, a neural network model is used, with the number of multivariate features reduced by PCA (Principal Component Analysis) to make the classifier more efficient. The system shows good experimental results and can be applied to build an aiding system for pathologists. In particular, the abnormality of red blood cells is classified in two steps using the inner and outer edge information, and normal white blood cells are classified using various kinds of features of the nucleus and cytoplasm. The experimental results show that the complexity of the neural networks can be reduced and a more efficient system can be constructed, provided that principal component analysis is applied to the features extracted from the cells [10]. In [11], image segmentation and counting based on PCNN and autowaves is presented for blood cell images; the authors aim at an automatic counting method for any microbiological cell, beginning with blood cell images, and hope the tool can become a starting point for any biological cell images [11]. In [12], other work on blood smear segmentation involves extracting the blood cell image into background, nucleus, cytoplasm and erythrocytes. It is done step by step in a hierarchy: first the nucleus is extracted, second the background, and lastly the cytoplasm, to obtain the erythrocytes. This can also serve as a mixing method that helps in the process of segmentation [12]. In [13], a 3D image is obtained from a 2D image through enhancement and edge detection to form a 2D edge image. From that, a binary image is created after the enhancement process, which reduces noise. Next, zero-order interpolation is applied as part of the surface representation algorithm, and then three surfaces are created through 3D surface plotting to form the 3D visualization; the authors hope this technique can be an initial tool to create 3D images [13].
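The building block behind the thresholding-based segmentation surveyed above is a global threshold that separates nuclei from background. A pure-Python sketch of Otsu's method on a tiny 8-bit grayscale image follows; the toy image and names are illustrative, and real pipelines would use an image library on microscope data.

```python
# Global Otsu thresholding: pick the gray level that maximizes the
# between-class variance of the histogram, then mask dark nuclei.

def otsu_threshold(image):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * 256
    for row in image:
        for px in row:
            hist[px] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(256):
        w_bg += hist[t]                 # pixels at or below t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # pixels above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark nuclei (low values) on a bright background (high values):
img = [[200, 210, 30], [205, 40, 35], [220, 215, 45]]
t = otsu_threshold(img)
mask = [[1 if px <= t else 0 for px in row] for row in img]
```

On real smears this single global threshold is only a first step; the hierarchical approach in [12] applies successive thresholds to peel off nucleus, background, and cytoplasm in turn.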
IV. CONCLUSION Segmentation is the best analysis method for analyzing disease and performing the CBC in terms of cost-effectiveness and efficiency. The use of flow cytometry is also good in terms of efficiency and precision of reading, and it can handle many samples at a time; however, it incurs a high cost to develop such technology. Segmentation is just a simple software component that can help pathologists, biological laboratory officers or any users in the biomedical field in the simplest way. It can be used to analyze a specific sample in a short time with a precise reading. Thus, development must continue to improve the segmentation technique so that it is capable of excelling as the main analysis tool for the CBC.
REFERENCES 1. Heather Darbo, LVT, VTS (Emergency and Critical Care), The Blood Smear (What, Why and How), Animal Emergency & Specialty Services. 2. The Leukemia & Lymphoma Society, 1311 Mamaroneck Avenue, White Plains, NY 10605, Information Resource Center (IRC), 800.955.4572. 3. Leukaemia Foundation, Fact sheet: related blood disorders, October 2008. 4. S. Shin, M.S. Park, J.H. Yang, Y.H. Ku, and J.S. Suh, Measurement of red blood cell aggregation by analysis of light transmission in a pressure-driven slit flow system, Dept. of Laboratory Medicine, Kyungpook National University, Daegu, Korea, Korea-Australia Rheology Journal Vol. 16 No. 3, September 2004, pp. 129–134. 5. María E. Arroyo, María D. Tabernero, María A. García-Marcos, and Alberto Orfao, Analytic performance of the PENTRA 80 automated blood cell analyzer for the evaluation of normal and pathologic WBCs, General Cytometry Service, Cancer Research Center and Department of Medicine, University of Salamanca, and Research Unit and Hematology Service, University Hospital of Salamanca, Salamanca, Spain. 6. P. Crosland-Taylor, J.W. Stewart, G. Haggis, An electronic blood cell counting machine, American Society of Hematology, Washington, DC. 7. Sylvie Veriac, Valérie Perrone, Madeleine Avon, Identification of red and white blood cells from whole blood samples using the Agilent 2100 bioanalyzer. 8. Steven S.S. Poon, Rabab K. Ward, and Branko Palcic, Automated image detection and segmentation in blood smears, Cytometry 13:766–774 (1992), B.C. Cancer Agency and University of British Columbia, Vancouver, Canada. 9. G.P.M. Priyankara, O.W. Seneviratne, R.K.O.H. Silva, W.V.D. Soysa, C.R. De Silva, An extensible computer vision application for blood cell recognition and analysis, Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka. 10. K.S. Kim, P.K. Kim, J.J. Song, Y.C. Park, Analyzing blood cell images to distinguish their abnormalities, School of Computer Engineering, Chosun University, Kwangju, Korea. 11. Su, M.-j., et al., A new method for blood cell image segmentation and counting based on PCNN and autowave, in Communications, Control and Signal Processing (ISCCSP 2008), 3rd International Symposium on, 2008. 12. D. Wermser, G. Haussmann, and C.-E. Liedtke, Segmentation of blood smears by hierarchical thresholding, Computer Vision, Graphics, and Image Processing, 151–168 (1984). 13. Q.A. Salih, A.R. Ramli, R. Mahmud and R. Wirza, 3D visualization for blood cell analysis versus edge detection, The Internet Journal of Medical Technology, 2004, Volume 1, Number 2.
IFMBE Proceedings Vol. 35
Assessment of Ischemic Stroke Rat Using Near Infrared Spectroscopy F.M. Yang, C.W. Wu, C.Y. Lu, and J.J. Jason Chen Institute of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan
Abstract— Although various approaches have been developed for quantifying the severity of cerebral dysfunction after ischemic stroke, neurological scoring and behavioral assessment cannot provide noninvasive time-course assessment. The aim of this study is to design non-invasive electrophysiological and optical methods to assess brain recovery after ischemic stroke. A recording system was developed for somatosensory evoked potentials (SSEPs) using surface electrodes and for the neurovascular response using near infrared spectroscopy (NIRS). A cranial probe was designed to arrange the sensing electrodes and optical fibers for concurrent electrical and optical signal acquisition. The pin electrodes contain spring-loaded probes that provide good contact between the skull and the electrode. Continuous-wave (CW) NIRS was employed to monitor the in-vivo hemodynamic response using avalanche photodiodes (APDs). Our SSEP results show a decayed P1-N1 amplitude and a frequency shift in the time-frequency analysis after cerebral ischemia. In a future study, our NIRS system will be expanded to three wavelengths, which allows the assessment of cytochrome c oxidase. The integration of the neurovascular assessment and low-level laser therapy (LLLT) systems should provide a useful tool for time-course monitoring of novel treatments in animal models of ischemic stroke.
Keywords— Hemodynamic response, Neurovascular coupling, Near infrared spectroscopy, Somatosensory evoked potential, Cytochrome c oxidase.

I. INTRODUCTION

Ischemic stroke is caused by a thrombus obstructing vessels in the brain; neurons in the affected area are deprived of the nutrition and oxygen carried by the blood flow. Functional imaging techniques, including functional magnetic resonance imaging (fMRI), positron emission tomography (PET), computed tomography (CT) and diffuse optical imaging (DOI), have been applied to evaluate the functional activation of the human brain after stroke [1,2]. Neurovascular coupling represents the functional connection between neurons and the cerebral capillaries [3], and has been proposed for the diagnosis of ischemic stroke and brain injury [4]. In our previous study, we demonstrated that near infrared spectroscopy (NIRS) is a potential tool for detecting tissue hemodynamic changes in the rat brain [5].

Recent studies have presented phototherapy as a non-drug treatment for ischemic stroke. According to recent studies on near infrared laser therapy (NILT), the treatment time window could be extended to more than 24 hours, in contrast to neuroprotective drugs [6, 7]. In this pilot study, we intend to develop an integrated NIR assessment and NILT treatment system that can noninvasively monitor the dysfunction and abnormality of brain function under experimental cerebral ischemia.

II. MATERIAL AND METHODS

The neurovascular coupling assessment includes somatosensory evoked potential (SSEP) and near-infrared spectroscopy (NIRS) measurements, as shown in Figure 1. The SSEP measurement system has two main parts: a stimulation unit and a bio-potential amplification system. The stimulation unit provides a constant current to stimulate the forepaw of the animal. The stimulator consists of a voltage-to-current circuit which transforms a controlled voltage into the desired current, regardless of the biological loading.

Fig. 1 Schematic diagram of the neurovascular coupling assessment system

Forepaw stimulation was performed by inserting two ball-tip needle electrodes into the forepaw median nerve to evoke SSEP responses. The stimulus pulse signals are programmed in LabVIEW 8.5 (National Instruments) and generated by a 16-bit digital-to-analog converter. The pulse width is set at 0.2 ms, with intensity from 0.5 to 2.5 mA at a frequency
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 600–603, 2011. www.springerlink.com
between 2 and 5 Hz. The bio-potential amplifier measures the SSEP signal using non-invasive pin electrodes (C.C.P. Contact Probe Co., Ltd) as the electric probes (shown in Figure 2). The pin electrode, with a spring inside, adapts well to the shape of the rat head and provides better stability for signal recording (as shown in Figure 3). The output signals of the preamplifiers were sent to a biopotential amplifier (CyberAmp 380, Axon Instruments) with software-controlled signal conditioning. The signal conditioning comprises a low-pass filter with a 600 Hz cut-off frequency, a high-pass filter with a 10 Hz cut-on frequency, DC offset zeroing, notch filtering at 60 Hz and a secondary amplification with a gain of 200, for a total gain of 10,000.
Fig. 2 Structure of the spring electrode

Fig. 3 Fabrication of the spring electrode and optical fiber holder

Fig. 4 Functional block diagram of the CW NIRS system

Continuous-wave (CW) NIRS was employed to monitor the in-vivo hemodynamic response, as shown in Figure 4. A three-wavelength NIRS system was designed for the animal experiments, in which each channel is equipped with three near-infrared light sources and one photonic detector module. Multi-channel measurement is achieved by frequency-division amplitude modulation (AM), which separates the channels of scattered signal from the tissue. The NIR emission light is guided to the target tissue by three 400 um polymer-clad silica fibers (OFS, Furukawa Electric). The back-scattered light is collected by another optical fiber and received by avalanche photodiodes (APDs). Each APD module is followed by a high-pass filter with a 500 Hz cutoff frequency to attenuate ambient light. A gain-control stage matches the signal level to the data acquisition system, and the signal is low-pass filtered to eliminate harmonics [8]. Finally, the signal was sampled by a data acquisition device at a 25 kHz sampling rate and processed in Matlab to calculate the chromophore concentrations.

The animal study was approved by the Animal Committee of the National Cheng Kung University Medical School. Male Sprague-Dawley (SD) rats weighing between 280 and 350 g were used. The animals were anesthetized with 1% isoflurane mixed with 70% N2O and 30% O2 for surgery, then transitioned to α-chloralose (50 mg/kg) for recording the hemodynamic response. Forepaw stimulation was performed by inserting ball-tip needle stimulus electrodes into the dorsal region of the left forelimb. The evoked cortical activity is immediately determined from the occurrence of SSEPs. Unilateral middle cerebral artery occlusion (MCAO) was performed on the right side with a nylon suture ligation. The suture was untied to release the vessel after 2 hrs of occlusion. During the ischemia-reperfusion period, the relative concentration changes of oxygenated and deoxygenated hemoglobin were continuously recorded.
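Converting the recorded optical intensities into oxy- and deoxy-hemoglobin concentration changes is conventionally done with the modified Beer-Lambert law. The sketch below assumes a hypothetical two-wavelength (730/850 nm) system with illustrative extinction coefficients and differential pathlength factor; the paper does not specify the actual wavelengths or coefficients used:

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for (HbO2, Hb);
# real values should be taken from published absorption spectra.
E = np.array([[0.39, 1.10],    # 730 nm: (eps_HbO, eps_Hb)
              [1.06, 0.69]])   # 850 nm

def mbll_concentrations(I, I0, dpf=6.0, sep_cm=0.4):
    """Modified Beer-Lambert law: detected intensities at two wavelengths ->
    concentration changes (delta HbO, delta Hb) in mM."""
    dod = -np.log(I / I0)          # change in optical density per wavelength
    path = dpf * sep_cm            # effective optical path length (cm)
    return np.linalg.solve(E * path, dod)

# A small intensity drop at both wavelengths during activation:
d_hbo, d_hb = mbll_concentrations(I=np.array([0.97, 0.94]),
                                  I0=np.array([1.0, 1.0]))
```

Solving the 2x2 system inverts the two optical-density changes into the two chromophore changes; a third wavelength (as planned for cytochrome c oxidase) simply adds a row and a column.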
III. RESULTS

Figure 5 shows a typical temporal trace of the SSEP and the averaged profile measured with the non-invasive technique. The SSEP responses are coherently averaged within a stimulus train to determine the waveform of the P1, N1 and P2 components; their amplitudes and times to peak are marked. Figure 6 illustrates the hemodynamic responses in the rat brain during one trial of left forepaw stimulation, with the stimulus interval plotted in the same figure. The concentration of oxy-hemoglobin increased significantly during the stimulus interval, while the concentration of deoxy-hemoglobin decreased compared with the normal level. The abnormal neurovascular coupling pattern under ischemia shows a decayed peak value and a distorted response.

Fig. 5 Typical SSEP responses, with the major components appearing within 60 ms of the stimulus pulse (P1 amplitude 53.56 uV, latency 12.05 ms; N1 amplitude -68.01 uV, latency 17.55 ms; P2 amplitude 35.31 uV, latency 37.25 ms)

Fig. 6 EROS of a normal rat and a cerebral-ischemia rat during one trial of left forepaw stimulation (traces: Stim, Hb, HbO, HbT)

Figure 7 shows the time-frequency analysis results of the SSEPs in the right and left primary sensory cortices (S1FL). The MCAO surgery was performed on the right side of the brain. The frequency distribution changed after cerebral ischemia, which might be related to injury of the basal ganglia.

Fig. 7 SSEPs at the ischemia-affected and non-affected sites (right and left S1FL). The SSEP after ischemia shows a frequency shift compared with the normal site

IV. DISCUSSION AND CONCLUSION
In this study, we applied a non-invasive approach for evaluating focal cerebral ischemia. The pattern of the SSEP response shows that the P1-N1 amplitude was significantly attenuated, but the P1 latency did not change significantly. The hemodynamic response was detected while the somatosensory evoked potential was measured from the surface recording region. The time-frequency analysis shows that the high-frequency components shift at the onset of the SSEPs; several studies have indicated that this phenomenon may be caused by brain injury. The neurovascular coupling results are consistent with previous studies [9]. Selecting the appropriate anesthetic and controlling the depth of anesthesia are crucial for acquiring reliable data. Near-infrared light therapy (NILT) can be combined with further stroke treatment using the currently developed non-invasive neurovascular response system. Furthermore, a multi-channel CW-NIRS system with three-wavelength sensing diodes would allow the measurement of three chromophores: oxyhemoglobin, deoxyhemoglobin and cytochrome. The wavelengths cover the absorption of cytochrome c oxidase, which has been considered a main contributor to photobiomodulation by NILT. The system established in our pilot study will be applied to investigating the
neurovascular response of ischemic stroke animals under NILT, which can provide quantitative information on brain recovery in addition to conventional neurobehavioral tests.
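The SSEP processing reported above, coherent averaging of stimulus-locked epochs followed by picking the P1, N1 and P2 amplitudes and latencies, can be sketched as follows. The search windows and the simple max/min component picks are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def coherent_average(signal, onsets, fs, win_s=0.1):
    """Average stimulus-locked epochs to extract the SSEP waveform."""
    n = int(win_s * fs)
    epochs = [signal[i:i + n] for i in onsets if i + n <= len(signal)]
    return np.mean(epochs, axis=0)

def pick_components(avg, fs):
    """Crude picks: P1 = maximum in 0-20 ms, N1 = following minimum (to 30 ms),
    P2 = maximum after N1 (to 60 ms). Returns (amplitude, latency in ms)."""
    i_p1 = int(np.argmax(avg[:int(0.020 * fs)]))
    i_n1 = i_p1 + int(np.argmin(avg[i_p1:int(0.030 * fs)]))
    i_p2 = i_n1 + int(np.argmax(avg[i_n1:int(0.060 * fs)]))
    return {k: (float(avg[i]), 1000.0 * i / fs)
            for k, i in (('P1', i_p1), ('N1', i_n1), ('P2', i_p2))}

# Synthetic check: an SSEP-like template (uV) buried in noise, 20 sweeps
fs = 10000
t = np.arange(0, 0.1, 1 / fs)
template = (53 * np.exp(-((t - 0.012) / 0.002) ** 2)
            - 68 * np.exp(-((t - 0.018) / 0.002) ** 2)
            + 35 * np.exp(-((t - 0.037) / 0.004) ** 2))
rng = np.random.default_rng(1)
sweep = np.tile(template, 20) + 5 * rng.standard_normal(20 * len(t))
avg = coherent_average(sweep, onsets=range(0, 20 * len(t), len(t)), fs=fs)
comps = pick_components(avg, fs)
```

Averaging N stimulus-locked sweeps reduces the uncorrelated noise by roughly the square root of N, which is why the component latencies survive the added noise here.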
REFERENCES

1. Shibasaki H. Human brain mapping: Hemodynamic response and electrophysiology. Clinical Neurophysiology. 2008;119:731-743.
2. Hillman E. Optical brain imaging in vivo: Techniques and applications from animal to man. Journal of Biomedical Optics. 2007;12:051402.
3. Leybaert L. Neurobarrier coupling in the brain: A partner of neurovascular and neurometabolic coupling? Journal of Cerebral Blood Flow & Metabolism. 2005;25:2-16.
4. Wolf T, Lindauer U, Reuter U, Back T, Villringer A, Einhaupl K, Dirnagl U. Noninvasive near infrared spectroscopy monitoring of regional cerebral blood oxygenation changes during peri-infarct depolarizations in focal cerebral ischemia in the rat. Journal of Cerebral Blood Flow & Metabolism. 1997;17:950-954.
5. Oron A, Oron U, Chen J, Eilam A, Zhang C, Sadeh M, Lampl Y, Streeter J, DeTaboada L, Chopp M. Low-level laser therapy applied transcranially to rats after induction of stroke significantly reduces long-term neurological deficits. Stroke. 2006;37:2620.
6. Lien Y-C, Huang W-T, Lin P-Y, Chen J-J Jason. Application of multichannel continuous-wave near infrared spectroscopy system to measure cerebrovascular response. International Symposium on Biomedical Engineering. 2008.
7. DeTaboada L, Ilic S, Leichliter Martha S, Oron U, Oron A, Streeter J. Transcranial application of low energy laser irradiation improves neurological deficits in rats following acute stroke. Lasers in Surgery and Medicine. 2006;38:70-73.
8. Franceschini M, Joseph D, Huppert T, Diamond S, Boas D. Diffuse optical imaging of the whole head. Journal of Biomedical Optics. 2006;11:054007.
9. Franceschini M, Nissila I, Wu W, Diamond S, Bonmassar G, Boas D. Coupling between somatosensory evoked potentials and hemodynamic response in the rat. Neuroimage. 2008;41:189-203.
Author: Feng-Mao Yang Institute: Institute of Biomedical Engineering, National Cheng Kung University Street: No.1 Daxue Rd. East Dist. City: Tainan City Country: Taiwan (R.O.C.) Email: [email protected]
Brain Lesion Segmentation of Diffusion-Weighted MRI Using Thresholding Technique N. Mohd Saad1, L. Salahuddin1, S.A.R. Abu-Bakar2, S. Muda3, and M.M. Mokji2 1
Faculty of Electronics & Computer Engineering, Universiti Teknikal Malaysia Melaka, Malaysia 2 Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Malaysia 3 Radiology Department, Medical Centre, Universiti Kebangsaan Malaysia, Malaysia
Abstract— This paper presents brain lesion segmentation of diffusion-weighted magnetic resonance images (DW-MRI or DWI) based on a thresholding technique. The lesions considered are solid tumor, acute infarction, haemorrhage and abscess. Preprocessing is applied to the DWI for normalization, background removal and enhancement. Two different enhancement techniques, Gamma-law transformation and contrast stretching, are applied. For the image segmentation process, the DWI is divided into 8 x 8 regions, and the image histogram is calculated in each region to find the maximum number of pixels at each intensity level. The optimal threshold is determined by comparing normal and lesion regions. Using the Gamma-law transformation, 0.48 is found to be the optimal threshold value, whereas contrast stretching gives 0.28. The proposed technique has been validated using area overlap (AO), false positive rate (FPR) and false negative rate (FNR). Thresholding with the Gamma-law transformation algorithm provides better segmentation results than the contrast stretching technique. The proposed technique provides good brain lesion segmentation results even though the simplest segmentation technique is used.

Keywords— DWI, segmentation, thresholding, Gamma-law and contrast stretching.
I. INTRODUCTION

Tumor, infarction (stroke/ischemia), haemorrhage (bleeding/ischemia) and infection (abscess) are examples of brain lesions affecting the cerebrum. In 2006, it was reported that tumors and brain diseases such as brain infarction and haemorrhage were the third and fourth leading causes of death in Malaysia [1]. The incidence of brain tumor in 2006 was 3.9 among males and 3.2 among females per 100,000 population, with a total of 664 cases reported by the Ministry of Health Malaysia. In the United States, the combined incidence of primary brain tumors was 6.6 per 100,000 persons per year, with a total of 22,070 new cases in 2009 [2], while brain infarction affects approximately 750,000 new cases per year [3]. Interpretation of brain imaging plays an important part in the diagnosis of various diseases and injuries. Magnetic resonance imaging (MRI) is a popular, painless,
non-radiation and non-invasive brain imaging technique. Nevertheless, assessment of brain lesions in MRI is a complicated process typically performed by experienced neuroradiologists. An expert neuroradiologist performs this task with a significant degree of precision and accuracy, but it can often be difficult for clinicians to precisely assess a lesion on the basis of its radiographic appearance alone. Quantitative analysis using computers can therefore help radiologists to overcome these problems. Due to the importance of brain imaging interpretation, significant research effort has been devoted to developing better and more efficient techniques in several related areas, including the processing, modeling and understanding of brain images [4]. Over the past several years, developments in MRI units have enabled image acquisition using fast and advanced techniques that have proved extremely useful in various clinical and research applications, such as diffusion-weighted MRI (DW-MRI or DWI) [5]. DWI is able to provide image contrast that depends on the molecular motion of water, which can be altered by disease [6]. The image is bright (hyperintense) when the rate of water diffusion across the cell membrane is restricted and dark (hypointense) when diffusion is elevated [6]. DWI provides very good lesion contrast compared with conventional MRI [3]. Research has shown that DWI is the most sensitive technique for detecting early acute neurological disorders, stroke, infection, trauma and tumor [3,6-7]. Segmentation, i.e. separation of a specific region of interest (ROI) of pathological abnormality from MR images, is an essential process for diagnosis and treatment planning. Accurate segmentation is still a challenging task because of the variety of possible shapes, locations and image intensities across lesion types and protocols, so a computerized segmentation process is essential to overcome these problems.
A large number of approaches have been proposed by various researchers to deal with various MRI protocols [8]. These approaches were introduced to solve the problems of automatic lesion detection and segmentation in conventional MRI. Thresholding-based segmentation discriminates pixels according to their gray-level value. The key parameter in the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 604–610, 2011. www.springerlink.com
thresholding process is the choice of the threshold value. In this study, histogram thresholding is applied to separate hyperintense lesions from DW images, the rationale being that hyperintense lesion pixels are brighter than normal pixels. This paper is organized as follows. The proposed technique is outlined in section II, followed by a description of the DW images used in this work in section III and the preprocessing stage in section IV. Image segmentation is discussed in section V, and the performance evaluation method in section VI. Experimental results and evaluation of the algorithm are presented in section VII, followed by the conclusions in section VIII.
A summary of the major brain lesions, their types, symptoms and pathological findings is given in Table 1. Nevertheless, this paper focuses only on the hyperintense lesions. Based on our hypothesis, hyperintense lesions in DWI can be well separated from normal tissue because of their high gray-level intensity.
II. PROPOSED TECHNIQUE

The flowchart of the proposed segmentation is shown in Fig. 1. The samples of the brain DWI dataset are first collected. In the pre-processing stage, several algorithms are applied to enhance the images: the intensity is normalized to the range 0 to 1, the background and skull are removed, and the intensity is then enhanced using two different algorithms, Gamma-law transformation and contrast stretching. These algorithms are applied to span the narrow range of the DWI histogram for thresholding purposes. The segmentation process starts from the full image, which is split into 8 x 8 regions. The lesion intensity range is analysed based on the thresholding technique. This is done by calculating the image histogram in each region and finding the maximum number of pixels at each intensity level. An optimal threshold is determined by comparing normal and lesion regions in the histogram. The region of interest (ROI) is then segmented based on the optimal threshold.

Fig. 1 Flowchart of the proposed technique: DWI brain image → preprocessing (image normalization, background removal, enhancement) → segmentation (image splitting/block processing, histogram, thresholding)
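As a minimal sketch (not the authors' implementation), the pipeline of Fig. 1 can be written as follows; the background/skull-removal step is omitted for brevity, and the 12-bit depth, gamma value and threshold come from later sections:

```python
import numpy as np

def segment_dwi(img12, block=16, threshold=0.48):
    """Sketch of the Fig. 1 pipeline: normalize a 12-bit DWI to [0, 1],
    enhance with the Gamma-law transformation (gamma = 0.4), then threshold.
    Background/skull removal is omitted for brevity."""
    x = img12.astype(float) / 4095.0        # 12-bit -> [0, 1]
    x = x ** 0.4                            # Gamma-law enhancement
    mask = x >= threshold                   # hyperintense ROI
    h, w = mask.shape
    # flag which 16 x 16 regions contain suspected lesion pixels
    regions = mask.reshape(h // block, block, w // block, block).any(axis=(1, 3))
    return mask, regions

# Toy 256 x 256 image: dim background with one bright 16 x 16 lesion patch
img = np.full((256, 256), 400, dtype=np.uint16)
img[96:112, 96:112] = 3500
mask, regions = segment_dwi(img)
```

On this toy image only the bright patch exceeds the threshold after enhancement, so exactly one of the 256 regions is flagged as a suspected lesion.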
III. DIFFUSION-WEIGHTED MRI

A. Brain Lesion

Fig. 2 shows DWI intensities for the major brain lesions, where each lesion is indicated by a white circle. In a normal brain, the region consists of brain tissue (the gray and white matter of conventional MRI) and a cavity filled with cerebral spinal fluid (CSF) located in the middle of the brain, as shown in Fig. 2(a). The DWI intensity for CSF is dark. Fig. 2(b-f) shows several brain lesions, whose intensity can be divided into hyperintense and hypointense. DWI hyperintense lesions include acute infarction, acute haemorrhage, solid tumor and abscess; chronic infarction and necrotic tumor appear hypointense in DWI.
Fig. 2 Original DWI with the brain lesion indicated by a white circle: (a) normal, (b) solid tumor, (c) acute infarction, (d) abscess, (e) haemorrhage, (f) chronic infarction
Table 1 Description of brain lesions, types, symptoms and pathological findings [9-12]

Brain Lesion | DWI Characteristics | Symptoms | Pathological Findings
Tumor | Solid: hyperintense; Cystic/necrosis: hypointense | Loss of balance; walking, visual and hearing problems; headache; nausea; vomiting; unusual sleep; seizure | Abnormal growth of cells in an uncontrolled manner; shape: round, ellipse, irregular; texture: clear, partially clear, blur
Infarction (Stroke/Ischemia) | Acute (30 minutes - 72 hours after onset): hyperintense; Chronic (after 2 weeks): hypointense | Paralysis; visual disturbances; speech problems; gait difficulties; altered level of consciousness | Cerebral vascular occlusion/blockage
Haemorrhage (Bleeding/Ischemia) | Deoxyhemoglobin: hyperintense; Oxyhemoglobin: hypointense | Paralysis; unconsciousness; visual disturbances; speech problems | Presence of blood products outside the cerebral vasculature
Infection (Abscess) | Hyperintense | Fever; seizure; headache; nausea; vomiting; altered mental status | Bacterial, viral or fungal infection, inflammation and pus
The original DWI has a 12-bit unsigned-integer intensity depth. In the normalization process, the intensity values are converted to double precision, with the minimum value set to 0 and the maximum to 1. The DWI includes a background which needs to be removed, because the background shares similar gray-level values with certain brain structures; the technique for background removal can be found in [13]. Image enhancement is then applied using two techniques, the Gamma-law transformation algorithm and contrast stretching. The Gamma-law transformation is chosen to expand the narrow range of low input gray-level values of the DW images to a wider range. It has the basic form [14]:
s = c r^γ    (1)
where c is the amplitude and γ is a constant power applied to the input gray level r. Based on experiments, γ = 0.4 has been found to be the best value for enhancing the output histogram [15]. The image response of the Gamma-law transformation is shown in Fig. 3.
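With c = 1 and γ = 0.4, equation (1) reproduces the histogram-peak shift reported later (a background-removed peak at 0.1 maps to about 0.4); a minimal sketch:

```python
import numpy as np

def gamma_law(r, c=1.0, gamma=0.4):
    """Eq. (1): s = c * r**gamma for normalized gray levels r in [0, 1]."""
    return c * np.power(r, gamma)

# A histogram peak at gray level 0.1 maps to roughly 0.4 (cf. Fig. 5):
s = float(gamma_law(0.1))   # about 0.398
```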
B. Imaging Parameters
Fig. 3 Image response: Gamma-law transformation
The DW images were acquired at the General Hospital of Kuala Lumpur using a 1.5 T Siemens Magnetom Avanto MRI scanner. The acquisition parameters were: echo time (TE) 94 ms; repetition time (TR) 3200 ms; pixel resolution 256 x 256; slice thickness 5 mm; gap between slices 6.5 mm; intensity of diffusion weighting (b value) 1000 s/mm2; and a total of 19 slices. All samples have medical records confirmed by neuroradiologists. Images were encoded in 12-bit DICOM (Digital Imaging and Communications in Medicine) format.
IV. IMAGE PREPROCESSING

Several preprocessing algorithms are applied to the DW images for intensity normalization, background removal and intensity enhancement.
On the other hand, contrast stretching is applied to improve an image by stretching its range of intensity values. Unlike histogram equalization, contrast stretching is restricted to a linear mapping of input to output values. For each pixel, the original value r is mapped to the output value s using equation (2):
s = ratio · r    (2)

where the ratio is:

ratio = Standardized Image Background / Original Image Background

A standardized image background of 0.02 is used in this experiment. Fig. 4 shows the image response of contrast stretching, in which the original value r is mapped to the output value s.
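Equation (2) is a pure scaling; as an illustration (the original background level of 0.01 below is hypothetical, not a value from the paper):

```python
def contrast_stretch(r, original_bg, standardized_bg=0.02):
    """Eq. (2): s = ratio * r, with ratio = standardized background level
    divided by the original image background level."""
    return (standardized_bg / original_bg) * r

# Hypothetical original background level of 0.01 gives ratio = 2, so a
# histogram peak at gray level 0.1 would move to 0.2 (cf. Fig. 5(d)).
peak = contrast_stretch(0.1, original_bg=0.01)
```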
Fig. 4 Image response: Contrast stretching

Fig. 5 illustrates the results of the preprocessing steps on the haemorrhage case of Fig. 2(e). Fig. 5(a) shows the original normalized image and its histogram. In Fig. 5(b), all background pixels have been removed, which improves the shape of the image histogram; the maximum peak is located at 0.1. After applying the Gamma-law transformation algorithm, the histogram is enhanced and its peak moves to 0.4, as shown in Fig. 5(c). On the other hand, the maximum peak is located at 0.2 after applying contrast stretching, as shown in Fig. 5(d).

Fig. 5 Pre-processing steps for segmentation: (a) image and histogram of intensity normalization; (b) image and histogram of background removal; (c) image and histogram of Gamma-law transformation; (d) image and histogram of contrast stretching

V. SEGMENTATION PROCESS

For the image segmentation process, the entire image is first divided into 8 x 8 regions, i.e. the 256 x 256 pixel image is split into regions of 16 x 16 pixels. Fig. 6 shows the image splitting (block processing) with a 16 x 16 pixel region size. Region number 46, indicated by a red circle, is the suspected lesion area.
Fig. 6 Image splitting (16 x 16 pixels size per segment)
Next, a histogram is calculated for each region, as shown in Fig. 7. The red circle marks the histogram distribution of the lesion, whereas the others are histograms of normal brain areas.
Fig. 9 Maximum block histogram

Fig. 9 shows the maximum block histogram, obtained by overlapping the histograms of all blocks, both normal and abnormal. With the proposed technique, the lesion area can be clearly characterized because it has been enhanced. The statistical features representing the hyperintense and hypointense regions are then calculated according to equation (4).
Fig. 7 Histogram distribution of each region

Two histograms, normal and abnormal (lesion), are then generated. All normal and all abnormal regions are overlapped respectively to find the maximum number of pixels at each intensity level. The maximum number of pixels is calculated using the function shown in equation (3):

max Num Pixels(n) = max(Num Pixels(B1:B2, n))    (3)

The purpose of overlapping the histograms of the blocks is to enhance the lesion for comparison with the normal regions. This produces a new histogram, as shown in Fig. 8. The optimal threshold is determined by the ROI indicator, i.e. the intensity level at which the normal histogram reaches zero pixels.

I(x, y)_hyperintense = 1 for I(x, y) ≥ T, 0 elsewhere    (4)
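Equations (3) and (4) can be sketched directly; the toy 5-bin histograms below are hypothetical, chosen only to show how the ROI indicator emerges where the normal maximum reaches zero:

```python
import numpy as np

def max_block_histogram(hists):
    """Eq. (3): for each intensity bin n, the maximum pixel count over
    blocks B1..B2 (the rows of hists)."""
    return hists.max(axis=0)

def threshold_roi(img, T):
    """Eq. (4): 1 where I(x, y) >= T, 0 elsewhere."""
    return (img >= T).astype(np.uint8)

# Toy block histograms over 5 intensity bins
normal = np.array([[30, 20, 5, 0, 0],
                   [25, 18, 2, 0, 0]])
lesion = np.array([[10, 5, 0, 8, 12]])
# Candidate threshold bins: the normal maximum is zero while the lesion is not
candidates = (max_block_histogram(normal) == 0) & (max_block_histogram(lesion) > 0)
roi = threshold_roi(np.array([[0.30, 0.60], [0.48, 0.10]]), T=0.48)
```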
VI. PERFORMANCE ASSESSMENT METRICS

Performance assessment of the segmentation results is done by comparing the ROI obtained from the thresholding algorithm with the actual ROI provided by neuroradiologists. Area overlap (AO), false positive rate (FPR) and false negative rate (FNR) are used as the performance metrics, computed as follows [16]:
AO = 100 × (S1 ∩ S2) / (S1 ∪ S2)    (5)

FPR = 100 × (S1 ∩ S2^c) / (S1 ∪ S2)    (6)

FNR = 100 × (S1^c ∩ S2) / (S1 ∪ S2)    (7)

Fig. 8 Optimal threshold with Gamma-law enhancement (maximum block histogram showing the normal region, abnormal region and ROI indicator)
where S1 represents the segmentation result obtained by the algorithm, S2 the manual segmentation provided by the neuroradiologists, and S^c the complement of S. AO computes the segmentation similarity from the overlap between the manual and automatic segmentations, while FPR and FNR quantify oversegmentation and undersegmentation respectively. High AO with low FPR and FNR indicates low error, i.e. high measurement accuracy. The testing dataset consists of 3 abscess, 9 haemorrhage, 8 acute infarction and 3 tumor cases; in total, 23 samples are used for evaluation.
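Equations (5)-(7) reduce to set operations on binary masks; a minimal sketch with two hypothetical 8 x 8 masks:

```python
import numpy as np

def overlap_metrics(s1, s2):
    """Eqs. (5)-(7): area overlap, false positive rate and false negative
    rate (in %) between algorithm mask s1 and manual mask s2."""
    s1, s2 = s1.astype(bool), s2.astype(bool)
    union = np.logical_or(s1, s2).sum()
    ao  = 100 * np.logical_and(s1, s2).sum() / union
    fpr = 100 * np.logical_and(s1, ~s2).sum() / union   # oversegmentation
    fnr = 100 * np.logical_and(~s1, s2).sum() / union   # undersegmentation
    return ao, fpr, fnr

s1 = np.zeros((8, 8), bool); s1[2:6, 2:6] = True   # algorithm: 16 pixels
s2 = np.zeros((8, 8), bool); s2[3:7, 3:7] = True   # manual: 16 pixels, shifted
ao, fpr, fnr = overlap_metrics(s1, s2)             # 9-pixel overlap, union 23
```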
VII. RESULTS

Table 2 shows the lesion intensities obtained from histogram thresholding. The threshold values are compared between the Gamma-law transformation and contrast stretching: a minimum of 0.48 is the optimal threshold value for the Gamma-law transformation and 0.28 for contrast stretching.

Table 2 Optimal thresholding value
The first row of images (i) shows the brain lesion images, whereas the second row (ii) and third row (iii) show the segmentation results using the enhanced images after Gamma-law transformation and contrast stretching respectively. Both image enhancement techniques can successfully segment the lesions using thresholding. Table 3 shows the performance evaluation of thresholding with the Gamma-law transformation versus the contrast stretching algorithm. The results show that the segmentation of haemorrhage and infarction is very good: thresholding with both algorithms provides high AO with low FPR and low FNR. These lesions were successfully segmented by the proposed thresholding technique because they are very bright in DWI. However, the technique gives the worst results on all evaluation measures for tumor. This is because, in DWI, a tumor lesion is characterized not only by its brightness but also by its texture; in addition, some tumor lesions in DWI comprise a dark area in the middle of, or surrounding, the hyperintense lesion, which the histogram fails to detect. Therefore, histogram thresholding performs poorly for tumor segmentation in DWI.
Table 2 Optimal thresholding value

Optimal Threshold of Hyperintense Lesion
  Gamma-law Transformation:  Min 0.48, Max 0.8
  Contrast Stretching:       Min 0.28, Max 1.0

Table 3 Performance evaluation for each lesion: comparison between Gamma-law transformation and contrast stretching algorithm

Lesion        Area Overlap           False Positive Rate    False Negative Rate
              Gamma-law  Contrast    Gamma-law  Contrast    Gamma-law  Contrast
                         Stretch                Stretch                Stretch
Abscess       0.6697     0.6199      0.2885     0.1517      0.0418     0.2284
Haemorrhage   0.7788     0.6611      0.0695     0.1228      0.1516     0.2161
Infarction    0.6159     0.5482      0.2270     0.1624      0.1572     0.2893
Tumor         0.2405     0.2590      0.7509     0.5435      0.0086     0.1975

Fig. 10 Brain lesions and their segmentation results: (a) haemorrhage, (b) abscess, (c) solid tumor, (d) acute infarction; row (i) brain lesion image, row (ii) Gamma-law, row (iii) contrast stretching

Fig. 10 shows the segmentation results tested on our dataset, as discussed in Section III.
Fig. 11 shows the average performance of lesion segmentation using both the Gamma-law transformation and the contrast stretching algorithm. The Gamma-law transformation algorithm provides a higher AO, which means better similarity to the manual segmentation provided by the neuroradiologists. However, its FPR performance is slightly worse than that of the contrast stretching technique because certain lesions have been oversegmented. For the undersegmentation evaluation (FNR), the Gamma-law transformation performs well, while contrast stretching gives a poorer result. Overall, the Gamma-law transformation algorithm provides better segmentation results than contrast stretching.
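The two enhancement steps and the final thresholding can be sketched as follows (a minimal NumPy sketch; the function names and the gamma value 0.5 are our illustrative assumptions, while the threshold 0.48 is the optimal Gamma-law value from Table 2):

```python
import numpy as np

def gamma_law(img, gamma=0.5, c=1.0):
    """Gamma-law (power-law) transformation s = c * r^gamma on a [0, 1] image.
    gamma < 1 brightens the image; the value 0.5 here is illustrative."""
    return c * np.power(img, gamma)

def contrast_stretch(img):
    """Linear contrast stretching of an image to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def threshold_lesion(img, t):
    """Hyperintense-lesion mask: keep pixels at or above threshold t."""
    return img >= t

# tiny example: dark background, bright "lesion" pixels
img = np.array([[0.10, 0.20], [0.60, 0.90]])
mask = threshold_lesion(gamma_law(img), t=0.48)  # 0.48 = Gamma-law optimum (Table 2)
```

With contrast stretching, the same thresholding step would be applied to `contrast_stretch(img)` using the corresponding optimum of 0.28.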
IFMBE Proceedings Vol. 35
N. Mohd Saad et al.
REFERENCES
Fig. 11 Average performance of the thresholding algorithm
VIII. CONCLUSIONS

This paper describes brain lesion segmentation in DWI using a thresholding technique. Clinical DWI focused on hyperintense lesions is used for the study. A preprocessing stage is carried out for intensity normalization, background removal and intensity enhancement. The Gamma-law transformation algorithm and contrast stretching are evaluated and compared for thresholding the lesions. Images are divided into regions of 16 x 16 pixels and histogram thresholding is applied to each region to find the maximum number of pixels. The results show that both intensity enhancement techniques can be applied to successfully segment the hyperintense lesions using our proposed thresholding technique; however, the Gamma-law transformation algorithm provides better segmentation results than contrast stretching.
ACKNOWLEDGMENT The authors would like to thank Universiti Teknikal Malaysia Melaka (UTeM) for its financial support, Universiti Teknologi Malaysia and Universiti Kebangsaan Malaysia Medical Centre for providing the resources for this research.
1. Malaysian Cancer Statistics – Data and Figure, Peninsular Malaysia 2006, National Cancer Registry, Ministry of Health, Malaysia
2. American Cancer Society (2009) Cancer Facts and Figures 2009, American Cancer Society, Atlanta, GA
3. MAGNETOM Maestro Class: Diffusion Weighted MRI of the Brain, Siemens Medical Solutions brochure
4. M. Ibrahim, N. John, M. Kabuka, A. Younis (2006) Hidden Markov models-based 3D MRI brain segmentation, Image and Vision Computing, Vol. 24, pp. 1065–1079
5. S. J. Holdsworth, R. Bammer (2008) Magnetic resonance imaging techniques: fMRI, DWI, and PWI, Seminars in Neurology, Vol. 28, No. 4
6. P. W. Schaefer, P. E. Grant, R. G. Gonzalez (2000) State of the art: Diffusion-weighted MR imaging of the brain, Annual Meeting of the Radiological Society of North America (RSNA), pp. 331–345
7. S. K. Mukherji, T. L. Chenevert, M. Castillo (2002) State of the art: Diffusion-weighted magnetic resonance imaging, Journal of Neuro-Ophthalmology, Vol. 22, No. 2
8. R. Cardenes, R. de Luis-Garcia, M. Bach-Cuadra (2009) A multidimensional segmentation evaluation for medical image data, Computer Methods and Programs in Biomedicine, Vol. 96, pp. 108–124
9. S. Cha (2006) Update on brain tumor imaging: From anatomy to physiology, Journal of Neuroradiology, Vol. 27, pp. 475–487
10. M. D. Hammer, L. R. Wechsler (2008) Neuroimaging in ischemia and infarction, Seminars in Neurology, Vol. 28, No. 4
11. R. T. Ullrich, L. W. Kracht, A. H. Jacobs (2008) Neuroimaging in patients with gliomas, Seminars in Neurology, Vol. 28, No. 4
12. O. Kastrup, I. Wanke, M. Maschke (2008) Neuroimaging of infections of the central nervous system, Seminars in Neurology, Vol. 28, No. 4
13. N. Mohd Saad, S. A. R. Abu-Bakar, S. Muda, M. Mokji (2010) Automated segmentation of brain lesion based on diffusion-weighted MRI using a split and merge approach, Proc. 2010 IEEE-EMBS Conference on Biomedical Engineering & Sciences (IECBES 2010), 30 Nov.–2 Dec. 2010, Kuala Lumpur, Malaysia
14. R. C. Gonzalez, R. E. Woods (2001) Digital Image Processing, 2nd edn., Prentice Hall
15. N. Mohd Saad, S. A. R. Abu-Bakar, S. Muda, M. M. Mokji, A. R. Abdullah, S. A. Mohd Chaculli (2010) Detection of brain lesions in diffusion-weighted magnetic resonance images using gray level co-occurrence matrix, Proc. World Engineering Congress 2010 (WEC 2010), Kuching, Sarawak, Malaysia, 2–5 August 2010, pp. 611–618
16. R. Roslan, N. Jamil, R. Mahmud (2010) Skull stripping of MRI brain images using mathematical morphology, Proc. 2010 IEEE-EMBS Conference on Biomedical Engineering & Sciences (IECBES 2010), 30 Nov.–2 Dec. 2010, Kuala Lumpur, Malaysia
Characterization of Renal Stones from Ultrasound Images Using Nonseparable Quincunx Wavelet Transform S.R. Shah1, M.D. Desai2, L. Panchal3, and M.R. Desai3 1
Faculty of Technology, Dharmsinh Desai University, Nadiad, Gujarat, India 2 Kalol Institute of Technology, Ahmedabad, Gujarat, India 3 Kidney Hospital, Nadiad, Gujarat, India
Abstract— This paper describes an approach for texture characterization based on the nonseparable quincunx wavelet decomposition and its application to the discrimination of visually similar ultrasound renal stone images. The proposed feature extraction method applies the quincunx wavelet transform and calculates second-order (GLCM) and FFT parameters from the LL and HH parts of the decomposed image. The characterization is evaluated on a set of one hundred and twelve (112) different stones and is also validated with FTIR analysis in a standard laboratory. It shows that GLCM and FFT evaluation in combination with quincunx wavelet decomposition could be a reliable method for texture characterization.

Keywords— Classification, Texture characterization, Quincunx wavelet, Wavelet decomposition.
strain on the efficacy of a particular technique. The main obstacle to diagnosing them is the very subtle visual difference between their sonograms. One possible approach is texture analysis: as the composition of a renal stone changes, it produces changes in the acoustic properties of the tissue, which can be detected by ultrasound as a textural pattern that differs for each type. For example, ultrasonographic images of different concentrations of calcium oxalate monohydrate, dihydrate and carbonate apatite are very similar, and it is very difficult, even for an experienced clinician, to diagnose the existence and type of a stone. Therefore, a reliable non-invasive method for early detection and differentiation of these different stone types is clearly desirable (see Fig. 1).
I. INTRODUCTION

Diagnostic ultrasound has been a useful clinical tool for imaging organs and soft tissues in the human body for more than two decades [1]. The occurrence of stones in the urinary tract has long been known; it may start from the birth of a child. Kidney stones are an extremely painful disease. Stones can form in the lower calyx, the pelvis of the kidney, the ureter and the bladder, and may lead to substantial damage and trauma in the surrounding region and, in extreme cases, to severe renal impairment or failure. Early detection of the presence of a stone is beneficial for therapy, which may be diet control or stone removal. Apart from surgical removal of stones, ultrasound and laser lithotripsy are the common options followed by modern hospitals. Extra-corporeal Shock Wave Lithotripsy (ESWL) has led to a revolution in urinary calculus treatment, although the high cost of the dedicated equipment and its contra-indication in anatomical sites close to sensitive areas such as nerve bundles have restricted its use. An alternative method of calculus fragmentation uses lasers, whereby the laser energy is delivered to the calculus via an optical fibre passed along the urethra, bladder and ureter. The mineral content is known to play a central role in defining the stones' fragility, and studies are being conducted to establish the effect of the crystallite size, orientation and
Fig. 1 Ultrasound kidney stone images. First row: (a) COM 100%, (b) COM 90% COD 10%, (c) COM 80% COD 20%. Second row: (d) COM 70% COD 30%, (e) COM 60% COD 40%, (f) COM 60% COD 30% CA 10%

Many researchers have studied the problem of classification from ultrasound images, mainly in the area of liver tissue [2]–[10]. Initial attempts to characterize diffuse diseases utilized different signal processing techniques in order to obtain useful information from the raw radiofrequency signal [2], [3]. In a series of papers, Momenan et al. showed that second-order statistical parameters from envelope-detected or intensity echo signals have discriminatory power [4], [5]. Some researchers have treated the task of tissue quantification from the point of view of description and classification with numerical texture
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 611–616, 2011. www.springerlink.com
S.R. Shah et al.
measures. Nicholas et al. were among the first to use textural features of B-scan images, showing their potential to discriminate between the livers and spleens of normal humans [6]. Wu et al. [7] applied fractal-based statistics and compared them with other texture measures for distinguishing between hepatoma, cirrhosis, and normal liver. In [8], Gebbinck et al. applied discriminant analysis and supervised and unsupervised neural networks, and tested their ability to detect various types of diffuse liver disease. In [9], Paik and Fox proposed Hartley-transform-based texture measures to detect abnormal liver patterns, but insufficient data demonstrating the accuracy of that technique have been reported. On the other hand, Kadah et al. [10] emphasized the classification aspect in the diagnosis of diffuse diseases; they investigated the use of well-known quantitative classification techniques and evaluated their performance in liver tissue classification. We have previously shown that the GLCM and FFT approach with separable wavelet transforms, including Daubechies, Symlet, Coiflet, biorthogonal and reverse biorthogonal wavelets, is an appropriate feature extraction method for the analysis of ultrasound textures [11], [12]. This paper investigates the application and advantages of nonseparable wavelet transform features for renal stone characterization, compares the approach with other texture measures, and finally addresses several questions about the potential importance of this application.
Ds = [ 2  0 ]    D̃s = [ π  0 ]    Dq = [ 1  1 ]    D̃q = [ π  π ]
     [ 0  2 ]         [ 0  π ]         [−1  1 ]         [−π  π ]        (1)
Fig. 2 illustrates the separable and quincunx sublattices of Z² in the spatial and frequency domains. In the frequency domain, the set of points closer to the origin than to any other lattice point is called the Voronoi cell Uc. The Voronoi cell is extremely important, since it determines the shape of a possible low-pass filter that avoids the aliasing caused by downsampling to the sublattice.
II. BASIC PROPERTIES OF SEPARABLE AND QUINCUNX SAMPLING IN TWO DIMENSIONS
This section presents basic concepts from the theory of lattices [13] and their connection to two-dimensional (2-D) multirate systems [14], [15] that will be used throughout the paper. As the original lattice in two dimensions we assume Z². The term multirate refers to systems "living" on different sublattices of Z². A sublattice is determined by the sampling matrix D, as the set of all vectors generated by Dk, k ∈ Z². A coset of a sublattice is the set of points obtained by shifting the entire sublattice by an integer shift vector k. For a given sublattice, there are exactly N = det D distinct cosets k_i, i = 0, …, N−1, and their union is the input lattice. For a given sampling matrix, the sampling density is reduced by the factor N = det D. Due to downsampling, the repeated spectra in the frequency domain appear on a dual lattice, which is characterized by the matrix D̃ = 2π(D^{-1})^T. The most often used sampling structure in image processing is the separable one, represented by the matrices Ds and D̃s in (1). However, although more difficult to implement, nonseparable systems can offer many advantages and greater flexibility. The simplest nonseparable sampling is quincunx, represented by the matrices Dq and D̃q in (1).
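These definitions are easy to check numerically. The sketch below (NumPy; the helper name is ours) computes N = det D and the dual-lattice matrix for the quincunx case, and tests whether a point lies on the sublattice:

```python
import numpy as np

Dq = np.array([[1, 1], [-1, 1]])            # quincunx sampling matrix from (1)
N = int(round(abs(np.linalg.det(Dq))))      # sampling density is reduced by N = det D
Dq_dual = 2 * np.pi * np.linalg.inv(Dq).T   # dual lattice matrix 2*pi*(D^-1)^T

def on_sublattice(n, D):
    """True if the integer vector n equals D k for some integer vector k."""
    k = np.linalg.solve(np.asarray(D, float), np.asarray(n, float))
    return bool(np.allclose(k, np.round(k)))
```

For the quincunx matrix, N = 2 (every second sample is kept: (1, 1) lies on the sublattice, (1, 0) does not), and the computed dual matrix reproduces D̃q from (1).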
Fig. 2 (a) Separable and (b) quincunx lattice in the spatial and frequency domains. The dark area indicates the Voronoi cell. In the case of perfect reconstruction filter banks based on quincunx sampling, the black area represents the sub band for the low-pass analysis filter h0(m, n), whereas the gray area corresponds to the shape of the high-pass filter h1(m, n) needed to achieve aliasing cancellation after the complete analysis/synthesis procedure

The spatial and spatial-frequency expressions for the output of the down sampler can be written as

y[n] = x[Dn]                                                            (2)
and

Y(ω) = (1/N) Σ_{i=0}^{N−1} X(D^{-T}(ω − 2π k_i))
     = (1/N) Σ_{i=0}^{N−1} X(D^{-T} ω − m_i)                            (3)

where N = det D, ω = (ω1, ω2) is the 2-D real frequency vector, n and k_i are 2-D integer vectors, and m_i = 2π D^{-T} k_i. From (3) we
see that the output at each frequency is formed by summing the input spectra at a set of N aliasing offsets D^{-T}ω − m_i. Unless all but one of the aliasing components are zero, it is impossible to recover the input signal from the output of the decimator. Hence, if the sampled signal has a spectrum bandlimited to the Voronoi cell of the corresponding lattice, no overlapping of spectra will occur and the signal can be reconstructed from its samples.
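For the quincunx matrix, the sampling y[n] = x[Dq n] amounts to keeping the pixels whose index sum is even, as the following sketch (our own helper, not from the paper) illustrates:

```python
import numpy as np

def quincunx_downsample(x):
    """Realize y[n] = x[Dq n] by keeping only the samples on the quincunx
    sublattice, i.e. pixels (i, j) with i + j even. The surviving samples
    do not form a rectangular grid, so they are returned together with a
    mask; exactly half of the samples remain (N = det Dq = 2)."""
    i, j = np.indices(x.shape)
    mask = (i + j) % 2 == 0
    return np.where(mask, x, 0), mask

x = np.arange(16, dtype=float).reshape(4, 4)
y, mask = quincunx_downsample(x)
```

On a 4 x 4 image, 8 of the 16 samples survive, confirming the density reduction by N = 2.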
III. WAVELET TRANSFORM IN ONE DIMENSION The wavelet transform [16]–[18] performs the decomposition of a signal into the family of functions generated from a prototype function (mother wavelet) ψ(x) by dilation and translation operations
ψ_{m,n}(x) = 2^{-m/2} ψ(2^{-m} x − n)

The wavelet transform of a signal f(x) can be computed via the following analysis and synthesis formulas:

c_{m,n} = ∫_{−∞}^{∞} f(x) ψ_{m,n}(x) dx

f(x) = Σ_{m,n} c_{m,n} ψ_{m,n}(x)                                       (4)
where h̃(k) = h(−k) and g̃(k) = g(−k). This decomposition can be understood as passing the signal f_{2^{j+1}}(x) through a pair of low-pass and high-pass filters h̃(k) and g̃(k), followed by subsampling by a factor of two.
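One analysis level, with the Haar coefficients h(k) = (1/√2, 1/√2) as an illustrative choice, can be sketched as follows (helper names are ours; for a length-2 filter the high-pass construction below coincides with g(k) = (−1)^k h(1−k), and for longer filters it equals it up to an integer shift):

```python
import numpy as np

def qmf(h):
    """High-pass filter from the low-pass h: reverse the filter and flip
    the sign of every second coefficient."""
    g = np.asarray(h, float)[::-1].copy()
    g[1::2] *= -1.0
    return g

def dwt_step(f, h):
    """One analysis level: correlate with the low-pass/high-pass pair
    (i.e. convolve with h(-k) and g(-k)), then subsample by two."""
    h = np.asarray(h, float)
    approx = np.correlate(f, h, mode="valid")[::2]
    detail = np.correlate(f, qmf(h), mode="valid")[::2]
    return approx, detail

haar = np.array([1.0, 1.0]) / np.sqrt(2.0)   # simplest choice of h(k)
a, d = dwt_step(np.array([4.0, 2.0, 6.0, 0.0]), haar)
```

Each output is half the length of the input, and iterating `dwt_step` on the approximation channel yields the multilevel decomposition described above.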
IV. WAVELET TRANSFORM IN TWO DIMENSIONS: CONNECTION TO THE TEXTURE ANALYSIS
There are various extensions of one-dimensional (1-D) wavelet transform to two dimensions. The simplest way to generate a 2-D wavelet transform is to apply two 1-D transforms separately. Thus, image decomposition can be computed with separable filtering along the abscissa and ordinate, by using the same pyramidal algorithm as in the 1D case [18]. This corresponds to the case of separable sampling described by the sampling matrix Ds in (1). As shown in Fig. 3, this separable transform (ST) decomposes images with a multiresolution scale factor of two, providing at each resolution level one low-resolution sub image and three spatially oriented wavelet coefficient sub images. Another solution for the application of the wavelet transform to higher dimensions is to use nonseparable sampling and nonseparable filters. The simplest transform of that type, known as quincunx transform (QT), uses nonseparable and nonoriented filters, followed by the nonseparable sampling represented by the matrix Dq in (1).
The mother wavelet can be constructed from the scaling function ɸ(x) as
φ(x) = √2 Σ_k h(k) φ(2x − k)

ψ(x) = √2 Σ_k g(k) φ(2x − k)                                            (5)
where g(k) = (−1)^k h(1−k). In the wavelet literature [16], [17], many different sets of coefficients h(k) can be found, corresponding to wavelet bases with different properties. In the case of the discrete wavelet transform (DWT), the coefficients h(k) play an important role, since they can be used for the DWT computation instead of the explicit forms of φ(x) and ψ(x). Hence, the scaling function and the corresponding wavelet family are

φ_{m,n}(x) = 2^{-m/2} φ(2^{-m} x − n)

ψ_{m,n}(x) = 2^{-m/2} ψ(2^{-m} x − n)                                   (6)

It is shown [18] that, starting from the original signal f(x) = f_{2^0}(x), the discrete signals f_{2^j}(x) [the approximation of f(x) at resolution 2^j] and d_{2^j}(x) [the information content lost between the higher resolution 2^{j+1} and the lower resolution 2^j] can be computed as

f_{2^j}(x) = Σ_{k=−∞}^{∞} f_{2^{j+1}}(x) h̃(2x − k)

d_{2^j}(x) = Σ_{k=−∞}^{∞} f_{2^{j+1}}(x) g̃(2x − k)                      (7)

Fig. 3 The division of the spectrum after two iterations of the traditional dyadic wavelet decomposition
Since this transform is performed with a two-channel filter bank, the Fourier expression for the output of channel i is

Y_i(ω) = (1/2) Σ_{l=0}^{1} H_i(D_q^{-T} ω − m_l) X(D_q^{-T} ω − m_l)
       = (1/2) [H_i(ω1, ω2) X(ω1, ω2) + H_i(ω1 + π, ω2 + π) X(ω1 + π, ω2 + π)]      (8)

where

k_0 = (0, 0)^T,  k_1 = (1, 0)^T   and   m_0 = (0, 0)^T,  m_1 = (π, π)^T             (9)
are the coset and modulation vectors for the case of quincunx sampling. For more general applications, such as texture synthesis, to assure the cancellation of the aliasing terms at the output of the analysis/synthesis filter bank [14], the high-pass filter should be designed as

h_1(n1, n2) = (−1)^{n1+n2} h_0(n1, n2)                                  (10)
This decomposition results in one low-resolution sub image and one nonoriented wavelet sub image. Fig. 4 illustrates the idealized partition of the frequency domain after four iterations of the quincunx decomposition. At each level, the input image is decomposed with a multiresolution scale factor of √2. This is a very useful property for the description of small textured images, since the analysis is twice as fine as the separable multiresolution decomposition. The spectral decompositions shown in Figs. 3 and 4 suggest several advantages of the quincunx transform for tissue characterization from B-scan images.
Fig. 4 The division of the spectrum after four iterations of the quincunx transform

First, separable sampling provides only rectangular divisions of the spectrum, with increased sensitivity to horizontal and vertical edges. This could be important for the analysis of directional textures, but it yields a rotationally sensitive description, which is not desirable in
this application. Due to the shape of the low-pass and high-pass filters, the quincunx decomposition can be expected to have lower orientation sensitivity than the separable decomposition. Still, it is not completely rotationally insensitive: following the rotation of the QT Voronoi cell around the origin, the rotational sensitivity increases up to 45° (where it reaches its maximum) and then decreases again, reaching complete invariance at 90°. Second, the energy of natural textures is concentrated mainly in the mid-frequencies, with insignificant energy along the diagonals. Therefore, the quincunx low-pass filter preserves more of the original signal energy, and its implementation in the iterated filter bank can provide a more reliable description of texture. Finally, the diamond shape of the low-pass filter in the quincunx case plays a crucial role in the extraction of texture features in the presence of noise, since it cuts off the diagonal high frequencies, where the most significant portion of the noise is contained. Thus, when working with noisy samples (as in our case), the spectral decomposition performed on the quincunx lattice represents a better solution than the traditional approach based on separable sampling.
V. DESIGN OF 2-D DIAMOND-SHAPED FILTERS

When constructing a filter bank performing the DWT for texture characterization, a number of design requirements have to be fulfilled. First, since images are mostly smooth, the analysis should be performed with a smooth mother wavelet. On the other hand, to achieve fast computation, the filters have to be short, which affects the smoothness of the associated wavelet. In more general applications, such as texture synthesis, it would be desirable to construct a perfect reconstruction filter bank, leading to the selection of orthogonal bases. Furthermore, the filters should be symmetric, so that they can be easily cascaded without any additional phase compensation. For a two-channel, real, finite impulse response case, linear phase and orthogonality are mutually exclusive, but by using 2-D biorthogonal filters it is possible to relax the orthogonality requirement while preserving the other important characteristics [16]. Finally, in order to achieve the computation of the continuous wavelet transform by iterating the low-pass branch of the filter bank, a low-pass filter with a sufficient number of zeros at the points of the replicated spectra has to be used [17]. Unfortunately, due to the difficult design of nonseparable filters, there are only a few solutions satisfying all these properties. Therefore, it was decided to apply the McClellan transform and to map the coefficients of a selected 1-D filter into a 2-D filter defined on the quincunx lattice [19].
F(ω1, ω2) = (1/2) cos ω1 + (1/2) cos ω2                                 (11)

The transform obtained ensures that all the properties of the 1-D filters are also satisfied in the 2-D case.

VI. TISSUE CLASSIFICATION

A. Data Acquisition
In total, one hundred and twelve (112) real kidney stone samples of different types were collected from hospitals in Gujarat (Muljibhai Patel Urological Hospital, Nadiad; Sayaji Hospital, Baroda; and LIONS Hospital, Mehsana) and used, in vitro, in the present research investigation. A non-conducting ultrasonic gel was used as the coupling medium between the transducer and the samples under investigation. Each sample was kept floating in a balloon filled with distilled water, inflated to roughly the size of the abdominal bowel. A double-probe through-transmission method was used for the samples in solid rock form, and the apparatus was calibrated and standardized. A Honda HS 2000 ultrasound scanner (crystal frequency 3.5 MHz) was used to acquire the images, which were saved in digital format in the scanner memory. The ultrasonic waves passed through the specimen under investigation were received by a receiving transducer having the same frequency (3.0 MHz). The images stored in the internal memory of the HS 2000 scanner were then processed in Matlab. The final characterization was also validated with FTIR analysis at the Metropolis laboratory. The texture characterization is performed with a four-level quincunx decomposition, yielding feature vectors with a maximal length of five.

B. Texture Feature Extraction

Since the filter bank performing the QT represents a special case of the local linear transform approach to texture characterization, N iterations of the quincunx decomposition can be seen as an (N+1)-channel filter bank whose outputs I1, I2, …, IN+1 serve for the estimation of texture quality in the corresponding frequency sub bands. The texture is then characterized by the set of N+1 first-order probability density functions estimated at the output of each channel. Another, psychophysical justification was offered by Pratt et al.
[20], who showed that natural textures are visually indistinguishable if they possess the same first and second-order statistics.
C. Classification

Here, results are presented for three different types, namely Calcium Oxalate Monohydrate 80% + Calcium Oxalate Dihydrate 20% (17 stones), Calcium Oxalate Monohydrate 90% + Calcium Oxalate Dihydrate 10% (16 stones), and Calcium Oxalate Monohydrate 60% + Calcium Oxalate Dihydrate 30% + Carbonate Apatite 10% (18 stones). For each texture class, the GLCM statistical, FFT and quincunx wavelet features are estimated from the 112 texture samples with the leave-one-out method [23].
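The leave-one-out protocol can be sketched as follows (a 1-nearest-neighbour stand-in classifier on synthetic feature vectors; the actual study uses GLCM, FFT and quincunx wavelet features, and the function name and data are ours):

```python
import numpy as np

def leave_one_out_accuracy(features, labels):
    """Leave-one-out evaluation: each sample is held out in turn and
    classified by its nearest neighbour among the remaining samples."""
    features = np.asarray(features, float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(features)):
        dist = np.linalg.norm(features - features[i], axis=1)
        dist[i] = np.inf                  # exclude the held-out sample itself
        correct += int(labels[int(dist.argmin())] == labels[i])
    return correct / len(features)

# two well-separated synthetic "stone type" clusters in feature space
feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]]
acc = leave_one_out_accuracy(feats, [0, 0, 1, 1])
```

With 112 samples, this makes maximal use of a small dataset, since every sample serves once as a test case and 111 times as training data.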
VII. RESULTS

The classification results obtained using the energy feature of the quincunx decomposition are presented in Fig. 5.
Fig. 5 The distribution of energy values for the three types after quincunx analysis

The overall classification accuracy is 91%, and the specificity of the method is 91.77%.

A. Comparison with Other Texture Description Methods

To compare the performance of different feature-extraction techniques, we have compared the wavelet-based approach with: 1) gray-level co-occurrence (GLC) matrices, as defined in [7] and [10]; 2) fractal texture measures [7]; and 3) Fourier measures [7].
VIII. DISCUSSION AND CONCLUSION The goal of this research was the detection of an optimal feature-extraction technique for description of different ultrasound textures. Therefore, we have investigated the utility of wavelet decompositions as feature-extraction methods in discrimination among different types of kidney
stones. We have applied a nonseparable quincunx transform with a multiresolution scale factor of √2 and a traditional approach based on the separable wavelet transform, and compared them with previously used approaches. The classification results obtained with both wavelet transforms are very promising. Based on the experimental results, we conclude that the quincunx transform is more appropriate for the characterization of noisy data and for practical applications requiring a description with lower rotational sensitivity. During the selection of the stone image samples we did our best to choose a data set as homogeneous as possible. It should also be mentioned that the images were recorded by the same radiologist, who was free to adjust the ultrasound settings (within the acceptable range) in order to visually optimize the images.
ACKNOWLEDGEMENT

This work is supported in part by GUJCOST (Gujarat Council on Science and Technology, Department of Science and Technology, Government of Gujarat) through MRP (Minor Research Project) grants vide GUJCOST/MRP/201428/09-10/3059 and GUJCOST/200910/3136. The authors would like to express their appreciation to Muljibhai Patel Urological Hospital, Nadiad; Lions Hospital, Mehsana; and Kidney Hospital, Mehsana for providing the ultrasound renal stone images for this work.
REFERENCES 1. P. N. Wells, Biomedical Ultrasonics. New York: Academic, 1977. 2. R. Kuc and M. Schwartz, “Estimating the acoustic attenuation coefficient slope for liver from reflected ultrasound signals,” IEEE Trans. Sonics Ultrason., vol. SU-26, pp. 353–362, Sept. 1979. 3. R. Kuc, “Processing of diagnostic ultrasound signals,” IEEE ASSP Mag., pp. 19–26, Jan. 1984. 4. R. Momenan, M. F. Insana, R. Wagner, B. S. Garra, and M. H. Loew, “Application of cluster analysis and unsupervised learning to multivariate tissue characterization,” J. Clin. Eng., vol. 13, pp. 455– 461, 1988. 5. R. Momenan, R. F. Wagner, B. S. Garra, M. H. Loew, and M. F. Insana, “Image staining and differential diagnosis of ultrasound scans based on the Mahalanobis distance,” IEEE Trans. Med. Imag., vol. 13, pp. 37–47, Mar. 1994. 6. D. Nicholas, D. K. Nassiri, P. Garbutt, and C. R. Hill, “Tissue characterization from ultrasound B-scan data,” Ultrasound Med., Biol., vol. 12, no. 2, pp. 135–143, Feb. 1986. 7. C. Wu, Y. Chen, and K. Hsieh, “Texture features for classification of ultrasonic liver images,” IEEE Trans. Med. Imag., vol. 11, pp. 141– 152, June 1992.
8. M. S. Klein Gebbinck, J. T. M. Verhoeven, J. M. Thijssen, and T. E. Schouten, "Application of neural networks for the classification of diffuse liver disease by quantitative echography," Ultrason. Imag., vol. 15, no. 3, pp. 205–217, July 1993.
9. H. Paik and M. D. Fox, "Fast Hartley transforms for image processing," IEEE Trans. Med. Imag., vol. 7, pp. 149–153, June 1988.
10. Y. M. Kadah, A. A. Farag, J. M. Zurada, A. M. Badawi, and A. M. Youssef, "Classification algorithms for quantitative tissue characterization of diffuse liver disease from ultrasound images," IEEE Trans. Med. Imag., vol. 15, no. 4, pp. 466–477, Aug. 1996.
11. S. Shah, M. D. Desai, L. Panchal, "Identification of content descriptive parameters for classification of renal calculi," International Journal of Signal and Image Processing, vol. 1, no. 4, pp. 255–259, 2010.
12. S. Shah, M. D. Desai, L. Panchal, "An approach towards characterization of ultrasound renal stone images using wavelet transforms," Proc. Conference on Biomedical Engineering and Assistive Technologies, NIT Jalandhar, Dec. 2010.
13. E. Dubois, "The sampling and reconstruction of time varying imagery with application in video systems," Proc. IEEE, vol. 73, pp. 502–522, Apr. 1985.
14. J. Kovacevic and M. Vetterli, "Nonseparable multidimensional perfect reconstruction filter banks and wavelet bases for R^n," IEEE Trans. Inform. Theory, vol. 38, pp. 533–555, Mar. 1992.
15. E. Viscito and J. P. Allebach, "The analysis and design of multidimensional FIR perfect reconstruction filter banks for arbitrary sampling lattices," IEEE Trans. Circuits Syst., vol. 38, pp. 29–41, Jan. 1991.
16. I. Daubechies, Ten Lectures on Wavelets. Philadelphia, PA: SIAM, 1992.
17. M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.
18. S. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 674–693, July 1989.
19. R. M. Mersereau, W. F. G. Mecklenbräuker, and T. F. Quatieri, "McClellan transformation for two-dimensional digital filtering: I-design," IEEE Trans. Circuits Syst., vol. 23, pp. 405–413, July 1976.
20. W. Pratt, O. Faugeras, and A. Gagalowicz, "Visual discrimination of stochastic texture fields," IEEE Trans. Syst., Man, Cybern., vol. SMC-8, pp. 796–804, 1978.
21. M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Processing, vol. 4, pp. 1549–1560, Nov. 1995.
22. T. Chang and C. Kuo, "Texture analysis and classification with tree-structured wavelet transform," IEEE Trans. Image Processing, vol. 2, pp. 429–441, Oct. 1993.
23. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
Author: Prof. Saurin R. Shah
Institute: Dharmsinh Desai University
Street: College Road, Nadiad
City: Nadiad, Gujarat
Country: India
Email: [email protected]
Cluster Approach for Auto Segmentation of Blast in Acute Leukemia Blood Slide Images N.H. Harun1, M.Y. Mashor1, and H. Rosline2 1
Electronic & Biomedical Intelligent Systems (EBItS) Research Group, School of Mechatronic Engineering, Universiti Malaysia Perlis, 02600 Jejawi, Arau, Perlis 2 Hematology Department, University Hospital, Universiti Sains Malaysia, Kubang Kerian, Kelantan
Abstract— Image segmentation is an essential step in image analysis and a prerequisite for feature extraction. One of the common segmentation approaches is clustering. Clustering algorithms are now widely used in many fields, such as pattern recognition, image processing, and signal processing. These methods partition the objects of an image into groups based on similarities and differences. This study proposes an automated color image segmentation technique that combines the saturation component of the HSI color space with a clustering technique, either Moving K-means or Fuzzy K-means. A 7×7-pixel median filter is then applied to remove unwanted noise after the segmentation process is completed. The performance of the proposed technique was investigated, and the experimental results are promising for the combination of the saturation component with the Moving K-means clustering algorithm and a 7×7-pixel median filter. Keywords— acute leukemia blood images, clustering, Moving K-means, Fuzzy K-means, saturation component.
I. INTRODUCTION Leukemia is a type of cancer that begins in the cells that form new blood cells. These cells start in the soft inner part of the bones called the bone marrow. After it starts, leukemia often shifts quickly into the blood, where the cells can reach other parts of the body such as the lymph nodes, spleen, liver, central nervous system (brain and spinal cord), and other organs [1]. According to the National Cancer Institute report in 2009 [2], for the 17 SEER (Surveillance Epidemiology and End Results) geographic areas, an estimated 5,330 men and women (3,150 men and 2,180 women) will be diagnosed with acute lymphocytic leukemia (ALL) in 2010. The statistics also estimate that 12,330 men and women (6,590 men and 5,740 women) will be diagnosed with acute myeloid leukemia (AML) in 2010. In addition, 8,950 men and women will die due to acute myeloid leukemia, and 1,420 men and women due to acute lymphocytic leukemia, in 2010. There are two main types of leukemia: acute leukemia and chronic leukemia. Acute leukemia is divided into two categories, depending upon the cell of origin.
Leukemia evolving from the myeloid/granulocyte cell line is called acute myelogenous leukemia (AML), while lymphocytic precursors give rise to acute lymphocytic leukemia (ALL) [3]. The word acute means that the cancer grows rapidly and, if not treated, might become serious within a few months [4]. The original classification scheme was proposed by the French-American-British (FAB) Cooperative Group and differentiates the leukemias based on morphology, including cell size, prominence of nucleoli, and the amount and appearance of cytoplasm [3]. According to the FAB classification, ALL-L1 cells are small and uniform, whereas AML-M1 cells are large and regular [3]. In laboratories, hematologists and technologists investigate human blood under the microscope. Since this is done by humans, it is not fast and does not offer standardized accuracy, which depends on the operator's capabilities and tiredness [5]. The reproducibility of the results is sometimes lacking and results can be misinterpreted. To improve the reliability of the results, automated image processing systems were introduced; they offer useful tools for the medical field, especially for investigating acute leukemia diseases. The success of an automated image processing system for acute leukemia depends on proper segmentation of the images. Many segmentation methods for blood cell images have been proposed. The common techniques used in cell segmentation are thresholding [6, 7], cell modeling [7, 8], watershed clustering [8, 9], filtering and mathematical morphology [10], and clustering algorithms [5]. A clustering algorithm is an unsupervised method that divides a given set of data into several non-overlapping homogeneous groups [9]. Clustering techniques normally represent the image in a cluster space [11].
Furthermore, clustering algorithms such as K-means, Fuzzy K-means and Moving K-means have been shown to produce good medical image segmentation performance [12, 13, 14]. In order to segment acute leukemia blood slide images automatically, this study focuses on the ability of two clustering algorithms, Moving K-means and Fuzzy K-means. In addition, to ease the automatic segmentation process, the clustering algorithm has been combined with the saturation
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 617–622, 2011. www.springerlink.com
formula based on the HSI color space. A 7×7-pixel median filter was then applied to remove unwanted noise. After that, the performance of the proposed auto segmentation techniques was evaluated using a pixel subtraction technique and quantitative assessment, with manually segmented images as references.
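As a concrete illustration of the first stage of this pipeline, the saturation component can be computed per pixel from an RGB image. The following is a minimal NumPy sketch (the function name is illustrative; this is not the authors' implementation):

```python
import numpy as np

def saturation_component(rgb):
    """HSI saturation channel: S = 1 - 3*min(R,G,B)/(R+G+B).

    `rgb` is an (H, W, 3) float array in [0, 1]; a small epsilon
    guards against division by zero on pure-black pixels.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    total = rgb.sum(axis=-1)
    return 1.0 - 3.0 * rgb.min(axis=-1) / np.maximum(total, 1e-12)
```

A gray pixel (equal R, G, B) yields saturation 0, while a pure color such as (1, 0, 0) yields 1, which is why blasts stand out in this channel.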
II. METHODOLOGY The goal of this work is to automate the segmentation process and to investigate the performance of the proposed technique, which consists of a saturation formula, a clustering algorithm and a 7×7-pixel median filter. The steps are: A. Image Acquisition The acute leukemia blood slides were analyzed using a Leica microscope at 40× magnification. An Infinity 2 camera was used to capture the images, which were saved in bitmap format at a resolution of 800×600 pixels. The acute leukemia blood slide samples were provided by Hospital Universiti Sains Malaysia (HUSM), Kubang Kerian, Kelantan. B. Threshold Selection Method
The threshold selection method consists of two steps. The proposed method utilizes the saturation formula based on the HSI color space, and the resultant image is then used as the input to the clustering algorithm. The steps are:

i. Saturation Based on the HSI Color Space

In general, the HSI color space consists of three components: hue, saturation and intensity. The saturation component measures the degree of white light added to a pure color. Based on observation of the blood cell images, the acute leukemia cells (blasts) are the most highlighted and most clearly seen in the saturation component image, while the red blood cells and other particles become less saturated. Therefore, the saturation component was chosen in order to reduce the computational effort and to ease the clustering process. The saturation formula was applied to the original acute leukemia blood slide images as in (1):

S = 1 - [3 / (R + G + B)] min(R, G, B)   (1)

ii. Clustering Technique

In this study, two clustering algorithms, namely Moving K-means and Fuzzy K-means, have been used for automated segmentation. Both algorithms were applied to the resultant saturation images, and their performance is compared at the end of this paper.

Moving K-Means Clustering

In 2000, Mashor [15] proposed a modified version of the K-means algorithm, the Moving K-means algorithm, which is capable of minimizing dead centres and pixel redundancy problems and of reducing the effect of centres trapped in local minima [15]. Based on the original Moving K-means clustering algorithm [15], the method can be implemented as follows:

1. Initialize the centres c_1, ..., c_K and set α_a = α_b = α_0, where α_0 is a small constant (0 < α_0 < 1/3) chosen inversely proportional to the number of centres.

2. Assign all pixels to the nearest centre and calculate each centre position as in (2):

c_j = (1 / n_j) Σ_{v_i ∈ c_j} v_i   (2)

where v_i is the i-th pixel value and n_j is the number of pixels in cluster c_j.

3. Check the fitness of each centre using (3):

f(c_j) = Σ_{v_i ∈ c_j} (v_i - c_j)^2   (3)

4. Find c_s and c_l, the clusters that have the smallest and the largest values of f(·), respectively.

5. If f(c_s) < α_a f(c_l):
5.1. Assign the pixels of c_l to c_s if v_i ≤ c_l, and leave the rest of the pixels with c_l.
5.2. Recalculate the positions of c_s and c_l as in (4):

c_s = (1 / n_s) Σ_{v_i ∈ c_s} v_i,   c_l = (1 / n_l) Σ_{v_i ∈ c_l} v_i   (4)

Note: c_s gives up its pixels before step (5.1); n_s and n_l in (4) are the numbers of pixels of c_s and c_l, respectively, after the reassigning process in step (5.1).

6. Update α_a according to α_a = α_a - α_a/K and repeat steps (4) and (5) until f(c_s) ≥ α_a f(c_l).
7. Reassign all pixels to the nearest centre and recalculate the centre positions as in (2).

8. Update α_a and α_b according to α_a = α_a - α_a/K and α_b = α_b - α_b/K, respectively, and repeat steps (3) to (7) until f(c_s) ≥ α_b f(c_l).
After completing the clustering process, the output of each clustering technique was filtered using a 7×7-pixel median filter to improve the segmented images.

C. Removing the Background and Unwanted Noise
9. Sort the centres in ascending order so that c_1 < c_2 < ... < c_K.

The 7×7-pixel median filter was then applied to the segmented images in order to remove unwanted noise. This process yields a better visualization of the acute leukemia blood cells.
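The Moving K-means steps can be sketched on one-dimensional data such as the flattened saturation image. This is an illustrative sketch under stated assumptions: initialisation, the α decay and the stopping rules follow the description above, not the authors' code, and all names are invented:

```python
import numpy as np

def moving_kmeans(pixels, k, alpha0=0.2, max_iter=50):
    """Moving K-means on 1-D data (e.g. flattened saturation values).

    After each ordinary assignment/update pass, the centre with the
    smallest fitness repeatedly absorbs the lower half of the members
    of the centre with the largest fitness, which counters dead centres
    and local minima.
    """
    pixels = np.asarray(pixels, dtype=np.float64)
    centres = np.linspace(pixels.min(), pixels.max(), k)       # step 1
    alpha_a = alpha_b = alpha0

    def fitness(labels):
        # within-cluster squared error of each centre
        return np.array([np.sum((pixels[labels == j] - centres[j]) ** 2)
                         for j in range(k)])

    for _ in range(max_iter):
        old = centres.copy()
        labels = np.argmin(np.abs(pixels[:, None] - centres[None, :]), axis=1)
        for j in range(k):                                     # step 2
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean()
        fit = fitness(labels)                                  # step 3
        while True:
            s, l = int(np.argmin(fit)), int(np.argmax(fit))    # step 4
            if s == l or fit[s] >= alpha_a * fit[l]:
                break
            move = (labels == l) & (pixels <= centres[l])      # step 5.1
            labels[move] = s
            for j in (s, l):                                   # step 5.2
                if np.any(labels == j):
                    centres[j] = pixels[labels == j].mean()
            alpha_a -= alpha_a / k                             # step 6
            fit = fitness(labels)
        alpha_b -= alpha_b / k                                 # step 8
        if np.allclose(centres, old):
            break
    return np.sort(centres)                                    # step 9
```

In the full pipeline the resulting cluster-label image would then be smoothed with a 7×7 median window (e.g. `scipy.ndimage.median_filter` with `size=7`) before evaluation.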
Fuzzy K-Means Clustering

Fuzzy K-means was developed by Dunn in 1973 [16] and improved by Bezdek in 1981 [17]. Fuzzy K-means clustering minimizes the objective function in (5):

J = Σ_{i=1}^{N} Σ_{j=1}^{K} u_ij^m ||x_i - c_j||^2   (5)

where u_ij is the partition matrix, which represents the degree of membership between each data sample and each centre, and m is any real number greater than one. K is the number of clusters, N is the number of data samples, x_i is the i-th data sample and c_j is the j-th cluster centre. The value of m controls the fuzziness of the membership function. Following Bezdek [17], for m = 2 the Fuzzy K-means algorithm is composed of the following steps:

1. Initialize the centres.

2. Calculate u_ij as in (6) and (7), with d_ij = ||x_i - c_j||:

u_ij = (1 / d_ij^2) / Σ_{k=1}^{K} (1 / d_ik^2),   if d_ij > 0   (6)

u_ij = 1 and u_ik = 0 for k ≠ j,   if d_ij = 0   (7)

3. Calculate the new positions of the centres as in (8):

c_j = Σ_{i=1}^{N} u_ij^m x_i / Σ_{i=1}^{N} u_ij^m   (8)

4. Repeat steps (2) and (3) until the centres no longer move.
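For comparison, the m = 2 Fuzzy K-means update can be sketched the same way. Names are illustrative, and a small epsilon stands in for the exact-coincidence membership case:

```python
import numpy as np

def fuzzy_kmeans(x, k, m=2.0, tol=1e-6, max_iter=200):
    """Fuzzy K-means on 1-D data, following the m = 2 update rules.

    Returns the sorted centres; u[i, j] is the membership of sample i
    in cluster j.
    """
    x = np.asarray(x, dtype=np.float64)
    centres = np.linspace(x.min(), x.max(), k)                 # step 1
    for _ in range(max_iter):
        # squared distances, guarded against exact coincidence
        d2 = np.maximum((x[:, None] - centres[None, :]) ** 2, 1e-12)
        u = (1.0 / d2) / (1.0 / d2).sum(axis=1, keepdims=True)  # memberships
        # centre update: membership-weighted mean of the samples
        new = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        done = np.max(np.abs(new - centres)) < tol              # step 4
        centres = new
        if done:
            break
    return np.sort(centres)
```

Unlike the hard assignment in K-means, every sample contributes to every centre here, weighted by u_ij^m, which is what makes the method "fuzzy".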
III. RESULTS AND DISCUSSION The proposed method was applied to 60 images of acute myeloid leukemia (AML) and acute lymphoid leukemia (ALL), taken from 7 slides of acute leukemia blood samples. Figures 1(a) and 2(a) show the original captured acute leukemia images at a resolution of 800×600, while Figures 1(b) and 2(b) show the images after applying the saturation formula. The results in Figures 1(b) and 2(b) show that beginning with the saturation formula on the AML and ALL original images eases the clustering process. Figures 1(c), 1(d), 2(c) and 2(d) present the auto segmentation results using the Moving K-means and Fuzzy K-means algorithms. In Figures 1(e), 1(f), 2(e) and 2(f), the 7×7-pixel median filter was applied to the resultant images produced by the clustering algorithms to give better results. From the results shown in Figures 1(e) and 1(f) for AML images, as well as Figures 2(e) and 2(f) for ALL images, it can be seen that both clustering algorithms can cluster the acute leukemia blood cell images into 3 regions of interest: background, nucleus and cytoplasm. Unfortunately, the Fuzzy K-means clustering algorithm did not always produce good performance, due to dead centre and centre redundancy problems; it was also not able to prevent the centres from being trapped in local minima. The results in Figure 1(e) for the AML image and Figure 2(e) for the ALL image show that the Moving K-means clustering algorithm produces better auto segmentation performance than the conventional Fuzzy K-means algorithm. In Figure 2(f), unwanted noise (red blood cell) regions are detected as part of the regions of interest by Fuzzy K-means, whereas fewer unwanted noise regions are detected by the Moving K-means algorithm, as shown in Figures 1(e) and 2(e). Moving K-means is thus more reliable with respect to noise.
Fig. 1 Resulting images for the AML type: (a) original image, and images after applying (b) the saturation formula, (c) the Moving K-means algorithm, (d) the Fuzzy K-means algorithm, (e) and (f) the 7×7-pixel median filter; (g) and (h) ghost images; (i) manual segmentation
Fig. 2 Resulting images for the ALL type: (a) original image, and images after applying (b) the saturation formula, (c) the Moving K-means algorithm, (d) the Fuzzy K-means algorithm, (e) and (f) the 7×7-pixel median filter; (g) and (h) ghost images; (i) manual segmentation
Results of the pixel subtraction between the manually segmented images in Figures 1(i) and 2(i) and the resultant segmented images in Figures 1(e), 1(f), 2(e) and 2(f) are shown in Figures 1(g), 1(h), 2(g) and 2(h), respectively. Figures 1(g), 1(h), 2(g) and 2(h) are ghost images in which the unsuccessfully segmented objects appear in their original colors. The segmentations in Figures 1(e) and 2(e) preserve the size and give almost the same shape as the manual results. Generally, Moving K-means performed better than the Fuzzy K-means algorithm, although some problems were still not totally avoided (e.g. some cytoplasm regions of blasts in the AML images were not detected, as shown in Figure 1(e)).

Table 1 Segmentation performance of the clustering algorithms
(a) AML type

Clustering Algorithm    Accuracy (%)    TP       TN        FP       FN
Moving K-means          96.13           50849    422776    18908    148
Fuzzy K-means           96.05           50452    422759    19305    165

(b) ALL type

Clustering Algorithm    Accuracy (%)    TP       TN        FP       FN
Moving K-means          98.54           33377    452146    136      7022
Fuzzy K-means           88.05           37517    396274    2882     56008
For quantitative assessment, the resultant segmented images were compared to manually segmented reference images. The manually segmented images were prepared by manual editing using Adobe Photoshop™; Figures 1(i) and 2(i) show examples. The segmentation performance of the Moving K-means and Fuzzy K-means algorithms was evaluated by determining the percentage of accuracy, as in (9):

Accuracy = (TP + TN) / (TP + TN + FP + FN) × 100%   (9)

where TP, TN, FP and FN are true positives, true negatives, false positives and false negatives, respectively. The percentage of accuracy in (9) is calculated by comparing the pixels of the resultant segmented image with the pixels of the manually segmented image. If the test correctly indicates the blasts (objects of interest), the corresponding pixels are labeled as true positives (TP). If the test correctly indicates the pixels that represent the background, they are labeled as true negatives (TN). However, if the test falsely indicates background where a blast truly is, those pixels are labeled as false negatives (FN). Finally, if the test falsely indicates as blasts pixels that actually represent the background, they are labeled as false positives (FP). Figures 1(e), 1(f), 2(e) and 2(f) show the four sample segmented images used for testing. The number of pixels used for testing is 492,681, which is equivalent to the image size of 809 × 609. Table 1 shows the segmentation performance using the Moving K-means and Fuzzy K-means algorithms. From the results obtained, segmentation using Moving K-means produced accuracies of 96.13% (AML images) and 98.54% (ALL images), compared to the Fuzzy K-means accuracies of 96.05% (AML images) and 88.05% (ALL images). Generally, the proposed method preserves the size and shape of the blasts in the AML and ALL images, and the locations of the blasts are successfully detected.

IV. CONCLUSION

The current study proposed an automatic color segmentation technique consisting of the saturation component of the HSI color space, a clustering algorithm and a 7×7-pixel median filter. Two clustering algorithms, namely Moving K-means and Fuzzy K-means, were investigated, and their performances were compared and analyzed. The results show that Moving K-means yields better performance due to its capability of avoiding dead centres, centre redundancy and trapping in local minima. An advantage of the proposed method is that the selection of the threshold for segmentation is done automatically. Furthermore, the blasts in the acute leukemia blood slide images are successfully segmented from the background and unwanted noise, the locations of the blasts are successfully detected, and the sizes and shapes of the blasts are closely preserved. In future work, the results of this paper can be used as the basis for extracting further features from acute leukemia blood slide images. Moreover, to establish the capability and reliability of the proposed method, more acute leukemia blood sample slides should be tested.
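The accuracy measure in (9) is easy to check against the counts reported in Table 1; the helper name below is illustrative, and the four counts for each row sum to the stated 492,681 pixels:

```python
def segmentation_accuracy(tp, tn, fp, fn):
    """Percentage of correctly labeled pixels, as in eq. (9)."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

# Moving K-means counts from Table 1:
aml = segmentation_accuracy(50849, 422776, 18908, 148)  # ≈ 96.13 %
all_type = segmentation_accuracy(33377, 452146, 136, 7022)  # ≈ 98.5 %
```

Because TP + TN + FP + FN always equals the total pixel count, the measure is simply the fraction of pixels on which the automatic and manual segmentations agree.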
ACKNOWLEDGMENT We would like to express our thanks to UniMAP and the Malaysian Government for supporting this research through a research grant. This research is funded under the Fundamental Research Grant Scheme (Grant No. 9003 00129).
REFERENCES
1. What is acute lymphocytic leukemia? at http://www.cancer.org
2. SEER Cancer Statistics Review, 1975-2007, at http://seer.cancer.gov/csr/1975_2007/
3. Mittal P, Meehan K R (2001) The acute leukemias. Clinical Review Article, Hospital Physician, 2001, pp 37-44
4. Lim G C C (2002) Overview of cancer in Malaysia. Japanese Journal of Clinical Oncology, Department of Radiotherapy and Oncology, Hospital Kuala Lumpur, 2002
5. Piuri V, Scotti F (2004) Morphological classification of blood leucocytes by microscope images. In: IEEE International Conference on Computational Intelligence for Measurement Systems and Applications, Boston, MA, USA, 2004
6. Cseke I (1992) A fast segmentation scheme for white blood cell images. In: Proc. 11th IAPR International Conference on Pattern Recognition, Conference C: Image, Speech and Signal Analysis, 1992, pp 530-533
7. Liao Q, Deng Y (2002) An accurate segmentation method for white blood cell images. In: IEEE International Symposium on Biomedical Imaging, 2002, pp 245-248
8. Jiang K, Liao Q, Dai S (2003) A novel white blood cell segmentation scheme using scale-space filtering and watershed clustering. In: Proc. 2nd International Conference on Machine Learning and Cybernetics, 2003, pp 2820-2825
9. Venkateswaran N, Ramana Rao Y V (2007) K-means clustering based image compression in wavelet domain. Journal of Information Technology: 148-153
10. Anoraganingrum D (1999) Cell segmentation with median filter and mathematical morphology operation. In: Proc. International Conference on Image Analysis and Processing, 1999, pp 1043-1046
11. Rezaee M R, Nyqvist C, Van Der Zwet P M J et al. (1995) Segmentation of MR images by a fuzzy c-means algorithm. In: Proc. Computers in Cardiology, 1995, pp 21-24
12. Chen C W, Luo J, Parker K J (1998) Image segmentation via adaptive K-mean clustering and knowledge-based morphological operations with biomedical applications. IEEE Transactions on Image Processing, 1998, pp 1673-1683
13. Pham D, Prince J L, Chen Y X et al. (1997) An automated technique for statistical characterization of brain tissues in magnetic resonance imaging. International Journal of Pattern Recognition and Artificial Intelligence, 1997, pp 1189-1211
14. Mat-Isa N A, Mashor M Y, Othman N H (2003) Comparison of segmentation algorithms for Pap smear images. In: Proc. International Conference on Robotics, Vision, Information and Signal Processing, 2003, pp 118-125
15. Mashor M Y (2000) Hybrid training algorithm for RBF network. International Journal of the Computer, The Internet and Management, pp 50-65
16. Dunn J C (1973) A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. Journal of Cybernetics 3: 32-57
17. Bezdek J C (1981) Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York

Author: Nor Hazlyna Harun
Institute: Universiti Malaysia Perlis
Street: Jalan Seriab Kangar
City: Kangar, Perlis
Country: Malaysia
Email: [email protected]
Comparison of the Basal Ganglia Volumetry between SWI and T1WI in Normal Human Brain

W.B. Jung1, Y.H. Han1, J.H. Lee2, and C.W. Mun1,3

1 Biomedical Engineering, Inje University, Gimhae, South Korea
2 Neurology, Pusan National University Yangsan Hospital, Yangsan, South Korea
3 FIRST-research group, Inje University, Gimhae, South Korea
Abstract— Susceptibility Weighted Imaging (SWI) is an advanced MR sequence that can detect iron content. Iron deposition in the brain has been examined in relation to neurodegenerative disease and aging. The hypointensity regions caused by iron deposition were estimated by comparing SWI with the conventional T1-weighted image (T1WI) using volumetric analysis of the basal ganglia in a normal brain, and the specific differences between the two images were confirmed. The purpose of this study is to establish the feasibility of an objective region of interest (ROI) model for volumetry using SWI sequences and to visualize the changed regions in the brain. The authors believe that this volumetry technique can provide quantitative and visual data by measuring the volume of interest (VOI) pattern in patients with diseases related to iron deposition. Keywords— SWI, Iron deposition, Volumetry.
I. INTRODUCTION Magnetic resonance imaging (MRI) is a medical diagnostic modality used in radiology to visualize detailed anatomical structures by utilizing the nuclear magnetic resonance principle. By exploiting the characteristics of various parameters, the signal acquired in each tissue can be used to obtain anatomical and physiological information for non-invasive disease diagnosis through signal processing and image reconstruction technology. Susceptibility Weighted Imaging (SWI) is a newer MR imaging sequence offering contrast enhancement; it combines a magnitude image and a phase image obtained with a 3D flow-compensated gradient echo sequence at high in-plane resolution. This imaging can measure susceptibility differences between tissues and evaluate the magnetic properties of blood, iron-laden tissues and other structures [1]. Paramagnetic substances such as deoxyhemoglobin and ferritin, a non-heme iron, are known sources of magnetic susceptibility in tissues. Among these substances, iron is highly concentrated in the extrapyramidal system and has unpaired electrons. Such a paramagnetic substance produces local magnetic fields that add to the strength of the externally applied magnetic field. The changed magnetic field caused by iron leads to differences in the phase of the tissue compared with its surroundings. As a result, iron deposition decreases the signal intensity on MR images. SWI is a more useful sequence for detecting brain mineralization, including iron deposition, than the conventional GRE sequence [2, 3]. Iron accumulation in the basal ganglia, consisting of the putamen, caudate nucleus and globus pallidus, has been examined in several neuropathologic conditions such as Parkinson's disease and Alzheimer's disease [4-6]. Particularly, deposition of iron in the putamen and substantia nigra has been proposed to play an important role in the pathophysiology of Parkinson's disease [7, 8]. The literature on differences in basal ganglia iron content is less consistent, with both increases and decreases reported in these patients [9-11]. Other studies have reported that the hypointensity region in the putamen caused by iron accumulation increases with aging [12, 13]. However, in conventional studies the region of interest (ROI) has been measured semi-quantitatively using relative grading scales or subjective decisions by researchers. Every tissue in the human body has specific T1 and T2 values; these parameters are used to create images in which most of the contrast among tissues is due to differences in the T1 values. T1-weighted images (T1WI) have been widely utilized for various brain segmentation methods [14, 15] due to their good contrast [16]. Medical image volumetry based on segmentation has been used in biomedical research; it can visualize ROIs in the brain or body anatomically and improve diagnostic accuracy. In this study, we attempted volumetric analysis of the basal ganglia using the SWI sequence in comparison with T1WI, exploiting their respective features. We confirmed the feasibility of establishing a normal-brain standard in order to examine its usefulness.
II. MATERIALS AND METHODS A. Subjects Our study included one healthy volunteer (male, 51 years) with no neurological symptoms or signs as a normal control; the subject gave informed consent for the study.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 623–626, 2011. www.springerlink.com
B. MR Imaging Procedure

As the comparative image and anatomical guide to SWI, we selected Magnetization Prepared Rapid Acquisition Gradient Echo (MP-RAGE), an enhanced pulse sequence for 3D T1WI consisting of an inversion recovery pulse followed by a rapid gradient echo readout. All MR experiments were performed on a 3.0 T MRI system (Verio, Siemens, Germany) using a head coil. A routine brain SWI sequence acquired 128 contiguous slices through the whole brain in the axial plane with the following parameters: repetition time (TR) = 28 ms, echo time (TE) = 20 ms, flip angle = 15°, slice thickness = 1 mm with no gap, NEX = 1, FOV = 21 cm × 18 cm, and matrix size (Nx × Ny) = 320 × 227. The MP-RAGE sequence acquired 160 contiguous slices through the whole brain in the axial plane with the following parameters: TR = 1900 ms, TE = 2.92 ms, inversion time (TI) = 900 ms, flip angle = 9°, slice thickness = 1 mm with no gap, NEX = 1, FOV = 22 cm × 19 cm, and matrix size (Nx × Ny) = 320 × 196. Acquired images were transferred to a PC.

C. Volumetric Analysis

Volumetry was performed with a volumetric imaging tool (ITK-SNAP) [17]. The volumetry procedure is divided into automatic and manual processes. The former performs region competition by tracking boundaries of set uniform intensity, or in an edge-based manner, utilizing the snake algorithm, a recently developed methodology that describes the velocity of every point on the snake at any particular time. The latter delineates the contour of the ROI manually. First, the volume of the lateral ventricle was measured by the automatic process; then the caudate nucleus and globus pallidus were measured by both procedures. Last, the putamen was measured by the manual process. All raw volumes were expressed both in absolute terms and as a percentage of total intracranial volume (ICV) in order to account for variation in brain volume between subjects [18].

III. RESULTS

Fig. 1 (A) The SWI magnitude image in axial and sagittal planes; (B) the MP-RAGE magnitude image in axial and sagittal planes
Fig. 1A shows the SWI magnitude image of the neurologically normal subject; the ROIs, including the lateral ventricle, caudate nucleus, globus pallidus and putamen, are distinct over the structure. Hypointensity regions are also shown in the posterolateral aspect of the putamen. On the contrary, the globus pallidus and the hypointense regions of the putamen are not observed in the MP-RAGE image (Fig. 1B). Fig. 2A shows the ROIs overlaid by automatic measurement and manual contour delineation in the SWI (blue: lateral ventricle; red: caudate nucleus; yellow: globus pallidus; green: putamen). Fig. 2C shows the same as Fig. 2A except for the globus pallidus. Fig. 2B is a volume image of SWI reconstructed by the volumetric imaging tool from Fig. 2A, and Fig. 2D is a volume image of MP-RAGE produced by the same method as Fig. 2B.

Table 1 Comparison of brain volume between SWI and MP-RAGE

Image volume (mm3)        Caudate nucleus   Putamen   Globus pallidus
SWI       Raw             12809             7017      5636
          Right           6323              3322      2519
          Left            6486              3695      3117
MP-RAGE   Raw             13341             11695     N/A
          Right           6850              5631      N/A
          Left            6491              6064      N/A
Table 1 presents the comparison of brain volumes measured from the two image sequences. For the raw volume, the caudate nucleus measurements are similar to each other, but the putamen volume from SWI is smaller than that from MP-RAGE. The volume of the globus pallidus could not be measured in MP-RAGE. No significant difference appeared between the left and right volumetries. The same trends are shown in Table 2.
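The raw volumes and percentages of ICV compared here amount to counting labelled voxels and scaling by the voxel volume; a minimal sketch (the zero-background label convention and all names are assumptions, not from the paper):

```python
import numpy as np

def structure_volumes(label_img, voxel_mm3, icv_mm3):
    """Raw volume (mm^3) and percentage of ICV for each nonzero ROI label.

    `label_img` is an integer 3D array of ROI labels (0 assumed to be
    background); `voxel_mm3` is the volume of one voxel; `icv_mm3` is
    the total intracranial volume used for normalisation.
    """
    result = {}
    for lab in np.unique(label_img):
        if lab == 0:  # skip background
            continue
        raw_mm3 = int((label_img == lab).sum()) * voxel_mm3
        result[int(lab)] = (raw_mm3, 100.0 * raw_mm3 / icv_mm3)
    return result
```

Here `voxel_mm3` would be the product of the in-plane pixel dimensions and the slice thickness of the acquisition.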
Fig. 2 (A) ROIs added on the SWI magnitude image in axial and sagittal planes; (B) 3D volumetry on SWI; (C) ROIs added on the T1WI magnitude image in axial and sagittal planes; (D) 3D volume on MP-RAGE
Table 2 Comparison of percentage of ICV between SWI and MP-RAGE

Percentage of ICV (%)     Caudate nucleus   Putamen   Globus pallidus
SWI       Raw             0.360             0.198     0.159
          Right           0.178             0.094     0.071
          Left            0.183             0.104     0.088
MP-RAGE   Raw             0.377             0.329     N/A
          Right           0.193             0.156     N/A
          Left            0.183             0.171     N/A
IV. DISCUSSION AND CONCLUSIONS Previous studies have examined iron deposition in the basal ganglia of patients with specific diseases using MR images. SWI is known to be a sequence very sensitive to iron, 4 times more sensitive than a conventional gradient recalled-echo (GRE) sequence in the detection of hemorrhage [19, 20]. Iron content in the central nervous system (CNS), including the basal ganglia, increases with age, and unusual degrees of iron have been reported in several neurodegenerative diseases [4-8]. The T1WI used as a comparative image in this study is regarded as a good contrast image of the brain. Several studies have reported on the relation of manganese (Mn) accumulation in the CNS; in particular, the T1WI intensity increased bilaterally in the globus pallidus and striatum [21, 22]. T1WI can thus be a useful biomarker of exposure to Mn causing a parkinsonian syndrome. This study showed distinctly different results in normal brain volume and percentage of ICV between the SWI and MP-RAGE sequences. A hypointense region appeared in the posterolateral aspect of the putamen on SWI compared with the MP-RAGE image. This was most likely caused by the presence of iron compounds, because the intensity of the images tends to correlate inversely with iron concentration. On the other hand, MP-RAGE cannot show the globus pallidus or the hypointensity in the putamen; due to the low contrast, the authors believe this may result from a shift of the ROI border. The volume of the caudate nucleus on SWI was similar to that on the MP-RAGE sequence; the caudate nucleus has been reported as a region less related to iron deposition in the brain [23]. In image volumetry, low-resolution images acquired from clinical routine sequences, or images with comparatively thick slices, can cause the partial volume effect. The partial volume effect is the loss of contrast between two adjacent tissues in an image, so that more than one tissue type is included within the same
voxel. As a result, each part shows some differences in size and position. On the contrary, too thin a slice thickness results in such a low signal-to-noise ratio (SNR) that determining the area boundary becomes very difficult. A slice thickness of less than or equal to 3 mm has been proposed for volume estimates, resulting in a systematic bias of less than 5% [24]. Therefore, selecting the proper slice thickness is an important factor in volumetry for estimating accurate volumes. Consequently, volumes of the basal ganglia were reconstructed from 2D SWI and 3D MP-RAGE and compared to verify the influence of iron deposition. Each ROI was visualized using different colors in Fig. 2 to evaluate the quantitative volume. We suggest that a volumetry technique using SWI or T1WI can provide quantitative and visual data by measuring the volume of interest (VOI) pattern in patients with diseases related to iron deposition. To optimize the results, we will devise an objective ROI model for volumetry using various SWI sequences in a future study.
ACKNOWLEDGMENT This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2009-0081195).
REFERENCES
1. Sehgal V, Delproposto Z, Haacke EM et al. (2005) Clinical applications of neuroimaging with susceptibility-weighted imaging. J Magn Reson Imaging 22:439–450
2. Haacke EM, Ayaz M et al. (2007) Establishing a baseline phase behavior in magnetic resonance imaging to determine normal vs. abnormal iron content in the brain. J Magn Reson Imaging 26:256–264
3. Haacke EM, Filleti CL et al. (2007) New algorithm for quantifying vascular changes in dynamic contrast-enhanced MRI independent of absolute T1 values. Magn Reson Med 58:463–472
4. Chen JC, Hardy PA et al. (1993) MR of human postmortem brain tissue: correlative study between T2 and assays of iron and ferritin in Parkinson and Huntington disease. AJNR Am J Neuroradiol 14:275–281
5. Dexter DT, Carayon A et al. (1991) Alterations in the levels of iron, ferritin and other trace metals in Parkinson's disease and other neurodegenerative diseases affecting the basal ganglia. Brain 114:1953–1975
6. Bartzokis G, Tishler TA (2000) MRI evaluation of basal ganglia ferritin iron and neurotoxicity in Alzheimer's and Huntington's disease. Cell Mol Biol 46:821–833
7. Antonini A, Leenders KL et al. (1993) T2 relaxation time in patients with Parkinson's disease. Neurology 43:697–700
8. Ye FQ, Allen PS et al. (1996) Basal ganglia iron content in Parkinson's disease measured with magnetic resonance. Mov Disord 11:243–249
9. Sofic E, Riederer P et al. (1988) Increased iron (III) and total iron content in post mortem substantia nigra of parkinsonian brain. J Neural Transm 74:199–205
10. Jellinger K, Paulus W et al. (1990) Brain iron and ferritin in Parkinson's and Alzheimer's diseases. J Neural Transm Park Dis Dement Sect 2:327–340
11. Riederer P, Sofic E et al. (1989) Transition metals, ferritin, glutathione, and ascorbic acid in parkinsonian brains. J Neurochem 52:515–520
12. Hallgren B, Sourander P (1958) The effect of age on the non-haemin iron in the human brain. J Neurochem 3:41–51
13. Harder SL, Hopp KM et al. (2008) Mineralization of the deep gray matter with age: a retrospective review with susceptibility-weighted MR imaging. AJNR 29:176–183
14. Li H, Yezzi A et al. (2005) Fast 3D brain segmentation using dual-front active contours with optional user interaction. Computer Vision for Biomedical Image Applications 335–345
15. Xu C, Pham DL et al. (1999) Reconstruction of the human cerebral cortex from magnetic resonance images. IEEE Transactions on Medical Imaging 18:467–480
16. Zeng X, Staib LH et al. (1999) Segmentation and measurement of the cortex from 3-D MR images using coupled-surfaces propagation. IEEE Transactions on Medical Imaging 18:927–937
17. Yushkevich PA et al. (2006) User-guided active contour segmentation of anatomical structures: significantly improved efficiency and reliability. NeuroImage 31:1116–1128
18. Colchester A et al. (2001) Structural MRI volumetric analysis in patients with organic amnesia, 1: methods and comparative findings across diagnostic groups. J Neurol Neurosurg Psychiatry 71:13–22
19. Tong KA, Ashwal S et al. (2003) Hemorrhagic shearing lesions in children and adolescents with posttraumatic diffuse axonal injury: improved detection and initial results. Radiology 227:332–339
20. Tong KA, Ashwal S et al. (2004) Diffuse axonal injury in children: clinical correlation with hemorrhagic lesions. Ann Neurol 56:36–50
21. Katsuragi T, Takahashi T et al. (1996) A patient with parkinsonism presenting hyperintensity in the globus pallidus on T1-weighted MR images: the correlation with Mn poisoning. Rinsho Shinkeigaku 36:780–782
22. Kim Y, Kim KS et al. (1999) Increase in signal intensities on T1-weighted magnetic resonance images in asymptomatic manganese-exposed workers. Neurotoxicology 20:901–907
23. Zhang J et al. (2010) Characterizing iron deposition in Parkinson's disease using susceptibility-weighted imaging: an in vivo MR study. Brain Research 124–130
24. Laakso MP, Juottonen K et al. (1997) MRI volumetry of the hippocampus: the effect of slice thickness on volume formation. Magn Reson Imaging 15:263–265

Corresponding author: Chi-woong Mun
Institute: Inje University
Street: Obang-dong 607
City: Gimhae
Country: Republic of Korea
Email: [email protected]
Computer-Aided Diagnosis System for Pancreatic Tumor Detection in Ultrasound Images

C. Wu¹, M.H. Lin², and J.L. Su¹

¹ Department of Biomedical Engineering, Chung Yuan Christian University, Chungli, Taiwan
² Division of Gastroenterology and Hepatology, Tao-Yuan General Hospital, Taoyuan, Taiwan
Abstract— In this study, a computer-aided diagnosis (CAD) system for pancreatic tumor detection in ultrasound images was developed to provide physicians with diagnostic information and reduce the diagnostic error rate. After noise reduction, contrast enhancement, and tumor boundary detection in the original ultrasound image, texture and morphological features of the tumor were analyzed. The statistically effective features were selected and served as inputs to a self-organizing map (SOM) to classify the ultrasound images. The diagnostic efficiency of the CAD system was evaluated by comparing the classification results with the pathological findings of the patients. The preliminary results showed that morphological features outperformed texture features for pancreatic tumor classification in ultrasound images. Eight features were shown to effectively separate normal pancreas images from pancreatic tumor images, and four of them also effectively separated benign tumor images from malignant ones. The area of a pancreatic tumor appeared to be the most important morphological feature for image classification: a benign pancreatic tumor usually had a smaller area and a smoother contour than a malignant one.

Keywords— Ultrasound image, Texture analysis, Morphological analysis, Self-organizing map (SOM).
I. INTRODUCTION

The pancreas is an important gastrointestinal organ, and pancreatic cancer has an extremely high mortality rate. Because of physiological limitations, it is hard for a physician to make an accurate diagnosis. In recent years, adenocarcinoma has remained the leading cause of cancer death in Taiwan, and pancreatic cancer ranks ninth among the major cancers. Although the proportion of pancreatic cancer (3.7%) is not large, its mortality (> 95%) is alarming, and the number of patients has increased every year. Early diagnosis of pancreatic cancer is therefore very important. Abdominal ultrasound (US) is the most common way for a physician to make a diagnosis: US is a useful and safe diagnostic tool for distinguishing benign pancreatic tumors from malignant ones. However, its diagnostic efficiency needs further improvement to help a physician make a more accurate and detailed diagnosis.
Several studies have demonstrated that certain morphological features in an ultrasound image can be used to classify breast tumors as benign or malignant [1], and that combining image processing with morphological features can effectively distinguish benign breast tumors from malignant ones in ultrasound images [2]. In addition, combining neural networks with texture analysis of endoscopic ultrasound (EUS) images can accurately differentiate pancreatic cancer from chronic pancreatitis and normal tissue [3]. In this study, we developed an initial CAD system for pancreatic tumors by applying image processing to help a physician make a diagnosis and to increase diagnostic accuracy. In the future it could help a physician make a more accurate diagnosis and decrease the probability of an incorrect or invasive diagnosis for patients.
II. MATERIALS AND METHODS

In this study, a total of 40 pathologically proven ultrasound images of the pancreas from the Tao-Yuan General Hospital were used to develop and evaluate the CAD system. The image database included 26 pancreatic tumor images and 14 normal pancreas images; the tumor images comprised 9 benign and 17 malignant tumor images. The 24-bit digital ultrasound images were captured from a B-mode scanner, each with a resolution of 640 × 480 pixels. To save system processing time, a physician first manually extracted the region of interest (ROI) containing a likely lesion in each ultrasound image. The flowchart of image processing applied in the CAD system is shown in Figure 1; it includes noise reduction, contrast enhancement, boundary detection, feature analysis, image classification, and system evaluation. The image classification flowchart of the CAD system is shown in Figure 2 and includes two phases: the first phase classifies normal versus abnormal cases, and the second phase classifies benign versus malignant tumors. A typical pancreatic tumor image after each image processing step is shown in Figure 3.
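The feature-analysis step above relies on statistical screening: as described later in the paper, only features whose group means differ significantly under an independent t-test (p < 0.05) are kept as classifier inputs. A minimal sketch, assuming feature matrices with one row per image and illustrative feature names (not the paper's data):

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features(group_a, group_b, names, alpha=0.05):
    """Keep features whose means differ significantly between two groups.

    group_a, group_b: arrays of shape (n_samples, n_features).
    Returns the names of features with p < alpha under Welch's t-test.
    """
    kept = []
    for j, name in enumerate(names):
        _, p = ttest_ind(group_a[:, j], group_b[:, j], equal_var=False)
        if p < alpha:
            kept.append(name)
    return kept
```

In the paper's first phase the two groups would be normal and abnormal images; in the second phase, benign and malignant tumors.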
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 627–630, 2011. www.springerlink.com
Because of the noise in the original ultrasound image, anisotropic diffusion filtering was first applied to reduce noise while preserving the tumor contour, as shown in Figure 3(b). Histogram equalization was then used to visually enhance the tumor in the noise-filtered image (Figure 3(c)), and an optimal thresholding algorithm produced an initial segmentation of the tumor contour in the contrast-enhanced image (Figure 3(d)). Seeded region growing was subsequently used to detect the tumor boundary in the initially segmented image (Figure 3(e)), and morphological closing extracted the precise tumor contour from the boundary-detected image (Figure 3(f)). After image processing, the texture and morphological features of the tumor were analyzed for image classification [1, 3]. The statistically effective features were selected by the independent t-test and served as inputs to the SOM, configured with different numbers of reference neurons, to classify the ultrasound images [4]. Finally, the diagnostic efficiency of the CAD system was evaluated by comparing the classification results of the ultrasound images with the pathological findings of the patients.

Fig. 2 Proposed image classification flowchart

Fig. 3 Pancreatic tumor image after each image processing step: (a) original image, (b) noise-filtered image, (c) contrast-enhanced image, (d) initially segmented image, (e) boundary-detected image, (f) entirely segmented image
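The noise-reduction step can be sketched with a minimal Perona-Malik anisotropic diffusion filter, one common formulation of the technique named above; the iteration count and parameters here are illustrative, not the paper's settings:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    """Perona-Malik diffusion: smooth noise while preserving strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (np.roll wraps at the border,
        # an edge effect that is acceptable for a sketch)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: near zero across strong edges, so
        # diffusion happens inside regions but not across boundaries
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Keeping gamma at or below 0.25 keeps this explicit four-neighbour scheme numerically stable.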
Fig. 1 Proposed image processing flowchart

III. RESULTS AND DISCUSSION
In this study, the statistically effective features (p < 0.05) were selected and tested for image classification. According to the results, eight features effectively classified normal and abnormal pancreas images: Mean, Area, NRL_Mean, NRL_SD, NRL_MA, NRL_AR, NRL_RI, and RL_Mean, as shown in Figure 4. Four of these features (Area, NRL_MA, NRL_RI, and RL_Mean) also effectively classified benign and malignant tumor images, as shown in Figure 5. The preliminary results showed that morphological features outperformed texture features for pancreatic tumor classification in ultrasound images. In the first-phase classification with eight features, evaluated by the leave-one-out method, the accuracy and sensitivity were 97.5% and 100% when 6 reference neurons were used, as shown in Figure 6. In the second-phase classification with four features, also evaluated by the leave-one-out method, the accuracy and sensitivity were 84.62% and 100% when 4 reference neurons were used, as shown in Figure 7. The sensitivity of the CAD system was effectively increased by using more reference neurons to classify the pancreatic tumor images. The area of a pancreatic tumor appeared to be the most important morphological feature for image classification, as shown in Figure 8: a benign pancreatic tumor usually had a smaller area and a smoother contour than a malignant one. The average processing time of the CAD system was 25 seconds per ultrasound image.
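The accuracy, sensitivity, specificity and kappa values reported in Figures 6 and 7 all follow from a 2x2 confusion table. A small helper can reproduce them; the tp/fn/tn/fp split below is an assumption consistent with the reported first-phase numbers (all 26 abnormal images detected, one of the 14 normal images misclassified):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, specificity and Cohen's kappa from a 2x2 confusion table."""
    n = tp + fn + tn + fp
    acc = (tp + tn) / n
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    # chance agreement term for Cohen's kappa
    pe = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (acc - pe) / (1 - pe)
    return acc, sens, spec, kappa

# diagnostic_metrics(26, 0, 13, 1) -> (0.975, 1.0, ~0.9286, ~0.9441),
# matching the first-phase values shown in Figure 6
```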
Fig. 4 Statistically effective features (first phase classification)

Fig. 5 Statistically effective features (second phase classification)

Fig. 6 System diagnostic efficiency (first phase classification)

Fig. 7 System diagnostic efficiency (second phase classification)

Fig. 8 Feature importance comparison (second phase classification)

IV. CONCLUSION
In this study, a CAD system that provides image processing modules and classifies pancreatic tumor images as benign or malignant was developed. It can help a physician make a diagnosis, effectively improve diagnostic accuracy, and decrease the probability of an incorrect or invasive diagnosis for patients. This study may also contribute new findings to pancreatic cancer research. However, the pathological differences between solid and cystic pancreatic tumors usually lead to entirely different diagnoses and treatments. Therefore, the CAD system could be combined with other medical imaging modalities to form a more complete evaluation tool for pancreatic tumors in the future.
ACKNOWLEDGMENT

This work was supported in part by the National Science Council, Taiwan, R.O.C., under Grant NSC 99-2221-E-033040.

REFERENCES

1. Yi-Hong Chou, Chui-Mei Tiu, Guo-Shian Hung, Shiao-Chi Wu, Tiffany Y. Chang, and Huihua Kenny Chiang (2001) Stepwise logistic regression analysis of tumor contour features for breast ultrasound diagnosis. Ultrasound in Medicine and Biology 27(11):1493-1498
2. Ruey-Feng Chang, Wen-Jie Wu, Woo Kyung Moon, and Dar-Ren Chen (2005) Automatic ultrasound segmentation and morphology based diagnosis of solid breast tumors. Breast Cancer Research and Treatment 89:179-185
3. Ananya Das, Cuong C. Nguyen, Feng Li, and Baoxin Li (2008) Digital image analysis of EUS images accurately differentiates pancreatic cancer from chronic pancreatitis and normal tissue. Gastrointestinal Endoscopy 67(6):861-867
4. Dar-Ren Chen, Ruey-Feng Chang, and Yu-Len Huang (2000) Breast cancer diagnosis using self-organizing map for sonography. Ultrasound in Medicine and Biology 26(3):405-411

Author: Jenn-Lung Su, Ph.D.
Institute: Department of Biomedical Engineering, Chung Yuan Christian University
Street: 200, Chung-Pei Road
City: Chungli City
Country: Taiwan
Email: [email protected]
Abstract— Conventionally, endoscopy produces a video whose scanning results provide information for medical diagnosis. Advances in computer and image processing technology allow the captured video to be converted into a series of images for further processing. The focus of this research is therefore to help physicians detect gastrointestinal disease by providing an assistant tool for the diagnosis process. The structural similarity technique is used as a pre-processing step, image normalization is used to enhance image quality, and colour feature analysis serves as the detection tool. As a result, a lengthy endoscopic video can be reduced by up to 70%, and the developed system can highlight suspected areas as reminders for the physician during diagnosis.

Keywords— Contrast normalization, endoscopy, histogram normalization, image recognition, structural similarity

I. INTRODUCTION

Endoscopy is a procedure for looking inside the body for medical reasons using an endoscope, a medical device used to examine the interior of a hollow organ or cavity of the body [1]. Endoscopy is a minimally invasive diagnostic medical procedure: it is less invasive than open surgery and causes less operative trauma for the patient than an equivalent invasive procedure [2]. The endoscope can also be used to enable biopsies and to retrieve foreign objects, serving as a non-invasive alternative to surgery for foreign object removal from the gastrointestinal tract. In 2009, more than 55 million procedures were performed with gastrointestinal (GI) endoscopes in the United States [3]. With such a number of procedures, the workload of the diagnosing physician is high and the quality of diagnosis might be reduced. This research was therefore carried out to provide assistance in the diagnosis process. Its focus is to help physicians detect gastrointestinal disease, aiming to prevent oversights in endoscopic diagnosis and to reduce the burden on physicians.

II. METHODOLOGY

This research takes endoscopic video as the input data. However, analyzing a lengthy video is very tedious work, so the idea is to extract the video into a series of images; the detection of lesions can then be done by subsequent image processing techniques.

A. Structural similarity

The Structural Similarity (SSIM) index is a method for measuring the similarity between two images [4]. Fig. 1 shows how SSIM is used to compare two consecutive images in order to obtain a similarity index. In this system, the user is allowed to set a preferred threshold value to compare against the index value obtained from the SSIM process; the threshold is selected according to how much reduction of the output video the user would like. When the similarity index obtained by comparing two images is above the user-defined threshold, the second image is discarded and the comparison proceeds with the next image. The threshold value can vary from zero to one: the smaller the threshold, the more images are discarded due to high similarity, and the shorter the output video produced. The objective of image reduction by the structural similarity index is to reduce the number of images, which directly reduces the system processing time. Other benefits of the reduced image set include a lower memory load and less temporary secondary storage space required in the computer system.
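The frame-reduction loop can be sketched with the global (single-window) form of the SSIM index of Wang et al. [4]; full implementations average SSIM over local windows, so this is a simplification, and the threshold below is illustrative:

```python
import numpy as np

def ssim_index(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM index between two equally sized grayscale frames."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def reduce_frames(frames, threshold=0.9):
    """Discard a frame when it is too similar (SSIM >= threshold) to the last kept frame."""
    kept = [frames[0]]
    for f in frames[1:]:
        if ssim_index(kept[-1], f) < threshold:
            kept.append(f)
    return kept
```

Lowering the threshold discards more frames and yields a shorter output video, matching the behaviour described above.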
Fig. 1 Detailed flowchart of image reduction by the SSIM index

B. Image enhancement module

The image enhancement module is one of the main function modules in the system. It is used to minimize image distortion and maximize image quality, aiming to improve image usability for both the user and the system. Deinterlacing is a process that removes interlace artifacts from an image. Fig. 2(a) shows a sample image with interlace artifacts, while Fig. 2(b) shows the image after the deinterlacing process. The process is useful when interlaced video input is selected for image processing. Interlaced video is common and can be found in most analogue video and some digital video storage, including Video CD and Video DVD. For endoscopic images this process is crucial, as endoscopy generates motion video, which in turn generates interlace artifacts. The whole deinterlacing process is optional and selectable by the user; it can be turned off when progressive video input is selected and no deinterlacing is needed. With deinterlacing, the system can process progressive input while maintaining compatibility with interlaced input.

C. Automatic contrast normalization

Automatic contrast normalization is a process that adjusts the contrast of any image [5]; the contrast-adjusted image normally has improved contrast. Fig. 3 illustrates the overall flowchart of automatic contrast normalization, which is performed using the histogram equalization technique. The process provides improved diagnostic capability and more consistent data for image analysis in the next module. Since the underlying algorithm is meant for grayscale images, some adjustments were made so that it can still be applied to colour images. The image is converted from the RGB colour space to the HSV colour space before contrast normalization. A histogram of the lightness (value) channel of the HSV colour space is generated after the colour space conversion and then normalized. After normalization, the new histogram is used to perform histogram equalization, forming a new image with normalized contrast.
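A minimal sketch of this step, assuming an 8-bit RGB image: because V = max(R, G, B) in HSV, scaling each pixel's RGB triple by V'/V equalizes the value channel while leaving hue and saturation untouched, so no explicit HSV conversion is needed:

```python
import numpy as np

def equalize_value_channel(rgb):
    """Histogram-equalize the HSV value channel of an 8-bit RGB image."""
    v = rgb.max(axis=2).astype(float)            # HSV value channel
    hist, _ = np.histogram(v, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()             # normalized cumulative histogram
    v_eq = 255.0 * cdf[np.clip(v, 0, 255).astype(int)]
    scale = np.where(v > 0, v_eq / np.maximum(v, 1e-6), 0.0)
    out = rgb.astype(float) * scale[:, :, None]  # rescale all three channels
    return np.clip(out, 0, 255).astype(np.uint8)
```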
Fig. 2 (a) Image with interlace artifacts and (b) image after deinterlacing

Fig. 3 Flowchart of automatic contrast normalization

D. Image recognition module
The image recognition module was developed to help physicians detect anomalies in endoscopic images and is another main supporting module of the software developed in this research. In this system, the image recognition module helps to detect two anomalies in the gastrointestinal tract: reflux esophagitis and diverticulum. Detection is done using colour-based feature analysis. After the analysis, the detected area of anomalies is passed to the next process, case-based analysis, which further refines the result. The difference between the case-based analysis of reflux esophagitis and that of diverticulum is that reflux esophagitis uses a morphological operation while diverticulum utilizes Canny edge detection.

E. Image analysis and highlighting

Colour feature analysis is then used to recognize abnormalities in an endoscopic image. The analysis utilizes the HSV colour space, as the image suffers variations in luminance due to the motion of the camera and uneven lighting. The image is analyzed by comparing it with pre-recorded data, and the possible regions of interest are then highlighted and classified accordingly.
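The colour-based detection step can be sketched as a hue/saturation threshold in HSV space; the threshold values below are illustrative placeholders, not the pre-recorded data the system compares against:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB (uint8) -> (H in degrees, S and V in [0, 1])."""
    rgb = rgb.astype(float) / 255.0
    v = rgb.max(axis=2)
    c = v - rgb.min(axis=2)                      # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    safe_c = np.maximum(c, 1e-12)
    h = np.zeros_like(v)
    h = np.where(v == r, (g - b) / safe_c % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    return np.where(c > 0, 60.0 * h, 0.0), s, v

def red_mask(rgb, h_max=20.0, s_min=0.4, v_min=0.2):
    """Boolean mask of saturated reddish pixels (hue near 0/360 degrees)."""
    h, s, v = rgb_to_hsv(rgb)
    return ((h < h_max) | (h > 360.0 - h_max)) & (s > s_min) & (v > v_min)
```

The resulting mask marks candidate regions, which a later stage could refine and draw as highlights on the frame.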
III. RESULTS

To test the module, two sets of data, one for each case, were passed to the module for evaluation. Each processed image was then compared with the original image for discussion.

A. Testing on reflux esophagitis detection

50 images with reflux esophagitis were passed to the testing module for verification. The processed images were compared with the original images and analyzed. Fig. 4 illustrates an example of the testing result, where the highlighted areas represent the result of reflux esophagitis detection.

B. Testing on diverticulum detection

The developed system is not only applicable to esophagitis detection but also functions well in diverticulum detection. To evaluate the module's ability to detect diverticulum, another 50 images with diverticulum were passed to the module for testing. The processed images were compared with the original images and analyzed. Fig. 5 shows an example of the result of diverticulum detection, where the detected areas are highlighted.

Fig. 5 Example of diverticulum detection
Fig. 4 Example of reflux esophagitis detection

IV. CONCLUSION

In this research, the software system acts as an endoscopic support system that helps to enhance endoscopic images and thereby reduce the workload of physicians. The software-based system can perform various image enhancement techniques that improve the quality of the endoscopic image, and it can detect and highlight regions of anomalies in endoscopic images. Moreover, the software system can decrease endoscopic diagnosis time through the reduction of the number of diagnosis images. As a result, this research can serve as an assisting tool in gastrointestinal disease diagnosis.

REFERENCES

1. What is endoscopy? at http://www.medicalnewstoday.com/articles/153737.php
2. Invasiveness of surgical procedures at http://en.wikipedia.org/wiki/Invasiveness_of_surgical_procedures
3. iDATA Research at http://www.idataresearch.net/idata/report_view.php?ReportID=827
4. Wang Z, Bovik A C, Sheikh H R et al. (2004) Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing 13(4):600-612
5. Nishioka N S (1998) Experience with a real-time video processor for enhancing endoscopic image contrast. Gastrointestinal Endoscopy 148(1)
Author: Low Cheng Seng
Institute: Multimedia University
Street: Jalan Ayer Keroh Lama
City: Ayer Keroh, Melaka
Country: Malaysia
Email: [email protected]
Digital Image Analysis of Chronic Ulcers Tissues

A.F.M. Hani¹, L. Arshad¹, A.S. Malik¹, A. Jamil², and B.B. Felix Yap²

¹ Centre for Intelligent Signal & Imaging Research, Department of Electrical & Electronics Engineering, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak, Malaysia
² Department of Dermatology, General Hospital Kuala Lumpur, 50586 Kuala Lumpur, Malaysia
Abstract— Ulcers are chronic wounds that do not heal within a predictable amount of time causing severe pain and discomfort to the patients. Describing the proportion of tissue types is a part of the approved clinical method of wound assessment. Many imaging techniques segment and classify different types of tissues of chronic skin wounds according to the tissue colour. The early visual indication of ulcer healing is the appearance of granulation tissue at the base of ulcers. Granulation tissue appears red in colour due to pigment hemoglobin content in the blood capillaries. The main objective of this research work is to study the optical characteristics of pigment hemoglobin content as a possible image marker in detecting granulation tissue from RGB colour images of chronic ulcers. Independent Component Analysis is employed in the study to convert RGB images of chronic ulcers into images due to pigment hemoglobin only. Preliminary results indicate areas of hemoglobin distribution obtained from hemoglobin images correspond to the presence of granulation tissue on the surface of chronic ulcers. Keywords— Ulcers, Granulation Tissue, Hemoglobin, ICA.
I. INTRODUCTION

Ulcers are chronic wounds that do not follow a normal, predictable course of healing within a predictable amount of time. According to their underlying etiologies, ulcers are generally divided into three categories: vascular, pressure, and diabetic ulcers [1]. Ulcers are most commonly found on the lower extremity below the knee and affect around 1% of the adult population and 3.6% of people older than 65 years [2]. Chronic ulcers are not only a major problem in dermatology but also an economic dilemma, especially in western countries. In the United States alone, chronic wounds affect 3 million to 6 million patients, and treating these wounds costs an estimated $5 billion to $10 billion each year [1]. Non-healing ulcers persist for years, causing pain to patients and putting them at risk of infection and limb amputation. Figure 1 shows samples of ulcers located at the leg. The appearance of the ulcer is important in diagnosing and assessing its healing status. The ulcer contains four main types of tissue: necrotic, slough, granulation, and epithelial. At any one time throughout the healing process, all four tissue types can be present on the ulcer surface. In clinical practice, doctors normally describe the tissues inside the ulcer in terms of percentages of each tissue colour based on visual inspection [3], [4]. However, human vision lacks precision and consistency and hence is not sufficient to perform such analysis. Moreover, chronic wounds evolve gradually over time as they heal, so the detection of slow changes by simple visual inspection can be difficult. Thus, imaging techniques have been developed to identify different types of tissue objectively and to aid medical practitioners in evaluating the healing status of ulcers. Most recently, an unsupervised wound tissue segmentation method was proposed [5], [6]. The method utilizes three selected unsupervised segmentation methods to segment wound images into different regions, then extracts both colour and texture descriptors from RGB colour images as inputs to a classifier for automatic classification and labeling of these regions. Most of the work developed in the field of wound assessment has utilized colour content representation in RGB colour images of wounds as the main component for analysis.

Fig. 1 Leg Ulcers

One of the most prominent changes during wound healing is the colour of the tissues. However, colours are the result of human visual perception of the light reflected from the skin. Human skin is a highly heterogeneous medium with a multi-layered structure. Most of the incident light penetrates into the skin and follows a complex path until it exits back out of the skin or is attenuated by skin pigments [7]. Understanding skin reflectance models would explain how human skin interacts with light to produce colours, revealing the underlying causes of the changes during healing and providing a better analysis of wound tissues.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 635–638, 2011. www.springerlink.com
The first indication of ulcer healing is the growth of healthy granulation tissue from the base of the ulcer. Granulation tissue appears red due to the presence of small blood vessels that contain pigment hemoglobin. Studies show that hemoglobin has certain optical characteristics that can be detected from RGB colour images and used to show its content within human skin [7-10]. The main objective of this work is to study the optical characteristics of pigment hemoglobin and to develop algorithms that separate its content as a possible image marker for detecting newly growing granulation tissue. The ultimate goal of this research is to produce an objective, non-invasive technique that aids doctors in dermatology clinics in assessing the healing status of chronic ulcers in a more precise and reliable way.
II. APPROACH

A. Skin Colour Model

It has been found that skin colour is caused by two main pigments: melanin in the skin epidermis and hemoglobin in the skin dermis, and that their quantities are mutually independent [11], [12]. Figure 2 presents a skin colour model showing that the skin colour distribution lies in a two-dimensional melanin-hemoglobin colour subspace bounded by two vectors representing the pure density vectors of melanin and hemoglobin, respectively. All possible skin colours are contained in this subspace [12].
Fig. 2 Skin Colour Model (reproduced from N. Tsumura, H. Haneishi and Y. Miyake (1999))

Based on the fact that all possible skin colours lie in the melanin-hemoglobin subspace, RGB colour images of chronic ulcers can be separated into images due to melanin and images due to hemoglobin. Images due to hemoglobin are of particular interest, as they represent areas where pigment hemoglobin is distributed on the ulcer surface, which in turn indicates areas of granulation tissue. In digital imaging, colour is produced by a combination of three spectral bands: Red, Green and Blue (RGB). Hence, it is necessary to convert the RGB skin image into the melanin-hemoglobin colour subspace; from this representation, the content of pigment hemoglobin can be further investigated.

B. Independent Component Analysis

Independent Component Analysis (ICA) is employed to convert RGB images of ulcers into images that represent areas due to melanin and hemoglobin only. ICA is a technique that extracts the original signals from mixtures of many independent sources without a priori information on the sources or the mixing process [13], [14]. The technique has been successfully applied to colour skin images to separate their melanin and hemoglobin content [12], [15]. In this work, the RGB images of chronic ulcers are treated as the observed mixtures from which independent melanin and hemoglobin sources are extracted. Independent Component Analysis is applied using the FastICA algorithm developed by Hyvärinen and Oja [13]. FastICA is based on a fixed-point iteration that uses maximization of non-gaussianity as a measure of independence to estimate the independent components.

C. RGB Images of Chronic Ulcers

Samples of good-quality RGB colour images of chronic ulcers that contain a mixture of each type of tissue are required to conduct a thorough analysis. The RGB colour images of chronic ulcers were acquired at the Department of Dermatology, Hospital Kuala Lumpur, Malaysia. This is crucial to the research work, as it ensures working on real ulcer images taken under controlled acquisition conditions. Before each data acquisition session, the ulcers were examined and cleaned by the doctors, and images were taken before the new dressing was applied. The proposed algorithm is applied to the collected images to estimate the first and second independent components and to determine the areas represented by pigment hemoglobin. Results are shown and discussed in the next section.
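The paper applies FastICA to the colour channels of ulcer images; a minimal sketch of the symmetric fixed-point iteration (tanh nonlinearity), demonstrated here on synthetic 1-D signals rather than images, could look like:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA: estimate independent sources, up to
    sign and order, from linear mixtures X of shape (n_signals, n_samples)."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # whitening: rotate and scale so the mixtures are uncorrelated with unit variance
    d, E = np.linalg.eigh(np.cov(X))
    Z = (np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

    def decorrelate(W):
        # symmetric orthogonalization: W <- (W W^T)^(-1/2) W
        s, u = np.linalg.eigh(W @ W.T)
        return (u @ np.diag(1.0 / np.sqrt(s)) @ u.T) @ W

    rng = np.random.default_rng(seed)
    W = decorrelate(rng.normal(size=(n, n)))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # fixed-point update that maximizes non-gaussianity
        W = decorrelate((G @ Z.T) / m - np.diag((1 - G ** 2).mean(axis=1)) @ W)
    return W @ Z
```

For the ulcer images, each pixel's RGB triple would play the role of one sample, and the two recovered components would correspond to the melanin and hemoglobin images.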
III. RESULTS AND ANALYSIS

The proposed ICA algorithm is applied to the RGB colour images of ulcers to extract their first and second independent components. The extracted second component is of particular interest, as it represents the image due to pigment hemoglobin, which can be used as a mask (image marker) to detect areas of granulation tissue on the ulcer surface. Figure 3 shows the results obtained from applying the proposed algorithm to several RGB colour images of chronic ulcers at different stages of healing. The images on the left are the original RGB colour images of chronic ulcers, while the images on the right are the estimated images due to pigment hemoglobin only. Image 1 is a diabetic ulcer located on the toe, in which most of the right side is covered with granulation tissue. Image 2 is a venous ulcer located at the leg; the ulcer is healing, with healthy granulation tissue covering the entire ulcer surface. In image 3, the ulcer is quite small and mostly covered with yellow slough, with very small areas of granulation tissue. Image 4 is a side view of a large venous ulcer, with granulation tissue mostly covering the upper part of the ulcer while yellow slough appears at the bottom. Image 5 is a deep chronic diabetic ulcer with a mixture of yellow slough, black necrosis and red granulation tissue. Image 6 is a small superficial healing ulcer, mostly covered with granulation tissue. In the estimated images due to pigment hemoglobin, the darker regions represent areas where granulation tissue is distributed; these regions are marked with green boundary lines. The low and distinct intensity values of these regions allow them to be segmented from the rest of the image. The same corresponding green boundaries are used to highlight the regions of granulation tissue on the original RGB colour images.
Preliminary observations indicate that the estimated images due to pigment hemoglobin clearly show the distribution of pigment hemoglobin, representing the areas where blood vessels exist and hence reflecting areas of granulation tissue on the surface of ulcers. The study presents a potential objective approach to detect and segment regions of granulation tissue based on their hemoglobin content in colour images of chronic ulcers.
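The component-extraction step described above can be sketched as follows. This is an illustrative Python version (assuming scikit-learn's FastICA rather than the authors' own implementation): each pixel's RGB triple is treated as one observation, and two independent sources are estimated, one of which is expected to correspond to the hemoglobin image.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumed ICA implementation

def estimate_hemoglobin_component(rgb_image):
    """Estimate two independent components from an H x W x 3 RGB ulcer
    image; one of the two sources is a candidate pigment-hemoglobin map."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)   # one RGB sample per pixel
    ica = FastICA(n_components=2, random_state=0, max_iter=1000)
    sources = ica.fit_transform(pixels)               # shape (H*W, 2)
    comp1 = sources[:, 0].reshape(h, w)
    comp2 = sources[:, 1].reshape(h, w)               # candidate hemoglobin map
    return comp1, comp2
```

Note that the order and sign of ICA components are arbitrary, so which source corresponds to hemoglobin must be identified by inspection, as is done implicitly in the paper.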
Fig. 3 Applying ICA on RGB colour images of chronic ulcers (rows 1–6). Left: RGB images of chronic ulcers. Right: estimated images due to pigment hemoglobin
A.F.M. Hani et al.
IV. CONCLUSIONS

A new approach to investigating the healing status of chronic ulcers by detecting healthy granulation tissue is presented. Most work in the field of wound assessment has used the colour content of RGB colour images as the main component for analysis. However, the interpretation of colour is compromised by unavoidable differences in acquisition conditions. Granulation tissue appears red due to the presence of blood vessels that contain pigment hemoglobin. Hence, an alternative approach based on analysis of the optical characteristics of pigment hemoglobin is investigated for the detection of granulation tissue. Independent Component Analysis is implemented to convert an RGB colour image of an ulcer into an image due to pigment hemoglobin. Preliminary analysis of the estimated images indicates that the distribution of pigment hemoglobin corresponds strongly to the presence of blood vessels, indicating granulation tissue. The goal is to introduce a new objective, non-invasive technique to assess the healing status of chronic wounds in a more precise and reliable way.
ACKNOWLEDGMENT

This is a collaborative work with the Department of Dermatology and the Outpatient Department, General Hospital Kuala Lumpur, Malaysia. The authors would like to thank the hospital staff for their assistance in acquiring the RGB colour images of chronic ulcers.
Early Ischemic Stroke Detection through Image Colorization Graphical User Interface

K.S. Sim 1, M.K. Ong 1, C.K. Tan 1, C.P. Tso 2, and A.H. Rozalina 3

1 Faculty of Engineering and Technology, Multimedia University, 75450 Melaka, Malaysia
2 School of MAE, Nanyang Technological University, Singapore
3 Department of Diagnostic Imaging, Hospital Melaka, Malaysia
Abstract— Early ischemic stroke detection on brain images is well known to be difficult due to its subtle signs. A method is proposed to aid the detection of stroke with the help of colorization. To this end, a system was developed to serve as a training platform for all users. A test was conducted in a university to evaluate the system, and the method showed satisfactory results: students without a medical background improved their ability to visualize early stroke signs by 3.7%, while students with a medical background improved their ability to detect early signs by 6.1%. The developed system, incorporating the proposed method, appears to be helpful and effective.

Keywords— Color, Early infarct, Hounsfield units, Ischemic stroke, Training system.
I. INTRODUCTION

Cerebral infarct, commonly known as stroke, is an area of damaged tissue resulting from insufficient oxygen supply [1]. It is also one of the causes of brain lesions. Stroke can happen to anyone, including children, but adults account for the majority of cases. According to statistics from the World Health Organization (WHO), as of 2007, 15 million people suffer a stroke each year globally. Of these 15 million, stroke claims the lives of 5 million, and another 5 million are permanently disabled [2]. In Malaysia, stroke is the third largest cause of death, behind heart disease and cancer [3].

Several works have previously been proposed and implemented to detect lesions in brain images. Diyana et al. detected brain abnormalities using symmetrical features [4]. To determine the symmetrical axis, any tilted brain image was first rotated into alignment; features of the abnormal area, such as area and centroid, were then used in rule-based abnormality detection. Liu et al. proposed a method to detect midline shift [5], in which the new midline was formed using a linear regression model (H-MLS model). Chawla et al. detected and classified stroke from brain CT images in three steps: image enhancement, midline symmetry detection, and abnormality classification [6]. In the classification process, two-level classification methods, based on characteristics in the intensity and wavelet domains, were used to detect brain abnormalities. Hudyma et al. proposed another method to detect early strokes [7], which uses the cohesion rate of suspicious pixels on a series of CT images to determine the probability of stroke.

Previous work on stroke detection has concentrated mostly on hemorrhagic rather than ischemic stroke. In this project, early ischemic stroke detection is proposed with the help of image colorization. This method will be very helpful for radiologists in providing diagnoses. The proposed method is presented in three parts: image contrast enhancement, image colorization, and an overview of the developed system.

II. THE PROPOSED METHOD

A. Image Contrast Enhancement

Brain images nowadays are typically provided in DICOM format. The range of this format is enormous: 16 bits, equivalent to 65536 grayscale levels, so much of the range carries information unnecessary for diagnosis. In most cases, the important brain tissues such as white matter, gray matter and cerebrospinal fluid fall within the range of 0 to 80 Hounsfield units (HU). Consequently, a windowing process has to be performed to extract the region of interest. Initially, a brain image in DICOM format has to be normalized before conversion to grayscale format, as shown in Fig. 1(a). This step is essential so that all DICOM images can be converted to grayscale regardless of the different rescale intercepts on different machines.
New_HU = HU × rescale slope + rescale intercept, (1)

where the rescale slope and rescale intercept values can be obtained from the metadata of the DICOM image, HU is the original pixel value, and New_HU is the new pixel value. After New_HU is obtained using Eq. 1, a windowing technique is applied to display the image from 0 to 80 HU. The
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 639–642, 2011. www.springerlink.com
remaining pixel values that fall outside this range are thresholded to black or white. To do this, the window center (WinCenter) is set to 40 and the window level (WinLevel) to 80. The image is then converted to 8-bit grayscale format, ranging from 0 (black) to 255 (white), as shown in Eq. 2. The final grayscale image is shown in Fig. 1(b).

gray_image = 255 × (New_HU − WinMin) / (WinMax − WinMin), (2)

where WinMax = WinCenter + WinLevel/2 and WinMin = WinCenter − WinLevel/2.
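Eqs. (1) and (2) can be sketched in Python as follows. This is an illustration, not the authors' code; the rescale slope and intercept defaults below are typical CT values and would normally be read from the DICOM metadata.

```python
import numpy as np

def window_to_grayscale(raw, slope=1.0, intercept=-1024.0,
                        win_center=40.0, win_level=80.0):
    """Rescale raw DICOM pixel values to Hounsfield units (Eq. 1),
    window to the 0-80 HU brain range, and map to 8-bit grayscale (Eq. 2)."""
    hu = raw.astype(float) * slope + intercept           # Eq. (1)
    win_min = win_center - win_level / 2.0               # 0 HU
    win_max = win_center + win_level / 2.0               # 80 HU
    hu = np.clip(hu, win_min, win_max)                   # threshold to black/white
    gray = 255.0 * (hu - win_min) / (win_max - win_min)  # Eq. (2)
    return gray.astype(np.uint8)
```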
Table 1 The range of grayscale levels for important brain tissues

Brain structure   Hounsfield unit range   Grayscale level
Stroke tissues    20 – 30                 64 – 96
White matter      30 – 40                 96 – 128
Gray matter       50 – 60                 160 – 192
The visualization of the early stroke area, represented in red, is greatly improved with the help of the proposed color map. The proposed color image also enables medical practitioners to focus attention on the suspected infarct area, shown in red. Detection of an early infarct area is not based solely on spotting a red region, but rather on comparing colors symmetrically across the brain image. With the help of color, medical practitioners are also far less likely to miss an old infarct, represented in black. Fig. 2 shows the early infarct area.
Fig. 1 CT brain image of early infarct: (a) Image before windowing process and (b) grayscale image after undergoing conversion process
Fig. 2 CT brain image of early infarct case. (a) Early infarct area indicated by a circle. (b) Early infarct area is represented by red color
B. Image Colorization
After the grayscale image is produced, it undergoes colorization. To improve the visualization of brain tissues, especially early ischemic stroke, each range of grayscale levels is assigned a different color. The grayscale levels calculated from the Hounsfield units (HU) of the brain tissues are tabulated in Table 1. A color map consisting of three colors is formed: red, blue and yellow. Red is chosen to cover the stroke tissues, representing grayscale levels from 64 to 96; it is used because it is the most prominent of the three colors. Gray matter is colored yellow to represent the brighter areas of the grayscale image, while white matter is represented by blue, covering grayscale levels from 96 to 128. In addition, pixels with HU below 20 are colored black, while those above 60 are represented by white.

C. System Overview

With the help of the proposed method, a system was developed using the C# programming language. This system can serve as a training platform for inexperienced doctors and even medical students to detect infarct areas effectively. The training is done by having them diagnose early infarct patients with and without the color-coded images. First, a candidate is required to detect infarct areas in grayscale images, as shown in Fig. 3. After completing the detection on all grayscale images, the candidate is required to repeat the detection on the corresponding color images, as displayed in Fig. 4. The results for both grayscale and color images are then verified against a robust database, to ensure that candidates see their mistakes and do not repeat them after the training. The database is developed on the MySQL platform and holds all patients' information and diagnosis results. An experiment is then carried out to gauge how helpful the training
system. The accuracy of the diagnosis will be used as a yardstick to determine the effectiveness of this CAD system.
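The color mapping of Sect. II-B can be sketched as follows, using the grayscale ranges of Table 1. This is an illustrative Python version (the actual system was written in C#), and the handling of the unassigned 128–160 band, left black here, is an assumption.

```python
import numpy as np

def colorize(gray):
    """Map an 8-bit grayscale CT slice to the three-color map:
    red = stroke tissue, blue = white matter, yellow = gray matter,
    black below the windowed range, white above it."""
    rgb = np.zeros(gray.shape + (3,), dtype=np.uint8)   # default: black
    rgb[gray >= 192] = (255, 255, 255)                  # above 60 HU: white
    rgb[(gray >= 64) & (gray < 96)] = (255, 0, 0)       # stroke tissues
    rgb[(gray >= 96) & (gray < 128)] = (0, 0, 255)      # white matter
    rgb[(gray >= 160) & (gray < 192)] = (255, 255, 0)   # gray matter
    return rgb
```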
III. RESULTS AND DISCUSSIONS

In this project, an experiment was carried out at a university to determine the effectiveness of the developed system. A total of 20 undergraduate students, 10 with a medical background and 10 without, evaluated the system on 10 patients' brain images. All results stored in the database were verified by experienced radiologists. The final score is calculated as

score = Right / (Right + Wrong + Miss) × 100%, (3)

where
Right: candidate and doctor detect the same infarct area;
Wrong: candidate detects an infarct area but the doctor does not;
Miss: doctor detects an infarct area but the candidate does not.

Fig. 3 Developed training system based on grayscale image

Fig. 4 Developed training system based on color image

Table 2 exhibits the scores of the evaluation by students without a medical background. The results show that the proposed method improved the scores by 3.7%, indicating that it helps students make an accurate diagnosis of an early infarct area. There is also an improvement in the scores of students with a medical background: 6.1%, as tabulated in Table 3. This improvement is essential and useful for those who lack experience in diagnosing early infarct cases, especially medical practitioners. The proposed method enhances the visualization of the infarct area, so that it can be detected effectively. Consequently, there is some improvement in the number of "right" detections. With the color images, the number of "miss" detections is also greatly reduced in both groups. However, in both groups there are also some increases in false positives ("wrong"), where candidates over-detect infarct areas that do not tally with the radiologist's report. The existence of false positives will certainly influence the accuracy of diagnosis, although it does not necessarily harm the patient. The increased number of "wrong" detections is partly because some students, especially those without a medical background, were not familiar with brain anatomy before taking the test; unclear instructions given during the experiment also contributed. In summary, training has to be conducted frequently to help students familiarize themselves with the color images and thus achieve good results.

Table 2 Scores of students without medical background

Parameter   Without aid   With aid
Right       7.3           11.2
Wrong       22.7          33.1
Miss        19.6          16.2
Score (%)   14.8          18.5
Table 3 Scores of students with medical background

Parameter   Without aid   With aid
Right       10.6          15.5
Wrong       21.0          25.9
Miss        18.2          15.1
Score (%)   21.3          27.4
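Eq. (3) applied to the mean Right/Wrong/Miss counts in Table 3 reproduces the reported scores; a minimal check:

```python
def score(right, wrong, miss):
    """Eq. (3): percentage of correctly detected infarct areas."""
    return round(100.0 * right / (right + wrong + miss), 1)

# Mean counts for students with medical background (Table 3)
print(score(10.6, 21.0, 18.2))   # 21.3 (without aid)
print(score(15.5, 25.9, 15.1))   # 27.4 (with aid)
```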
IV. CONCLUSIONS AND FUTURE WORK

A simple color map based on the different Hounsfield units of different brain tissues is proposed and applied to CT brain images. A training system was then developed to evaluate the effectiveness of the proposed method. The results show that the scores of students without medical knowledge improved by 3.7%, while the scores of students with a medical background improved by 6.1%. Improvement in both scores is important and helpful for medical practitioners in diagnosing the infarct area at an early stage; providing the right treatment early can save a patient's life. However, the sample size in both groups was fewer than 30 students, so the system has not been fully evaluated. This limitation was caused by the limited availability of students at the university, as well as the limited time to conduct the study. In future, the system will be tested by more than 100 candidates, and more than 30 brain images of infarct cases will be stored in the database to provide a more robust training platform.
REFERENCES

1. MedicineNet.com, "Definition of infarct", 2010. [Online]. Available: http://www.medterms.com/script/main/art.asp?articlekeyD=3969 [Accessed: Jan 2, 2010].
2. The Internet Stroke Center, "About stroke", The Internet Stroke Center, Washington University, 1997. [Online]. Available: http://www.strokecenter.org/patients/stats.htm [Accessed: Jan 1, 2010].
3. National Stroke Association of Malaysia (NASAM), "Stroke in Malaysia", 2010. [Online]. Available: http://www.nasam.org/english/prevention-what_is_a_stroke.php [Accessed: Jan 1, 2010].
4. Diyana W. M., Zaki W., Kong C. (2009) Identifying abnormalities in computed tomography brain images using symmetrical features, International Conference on Electrical Engineering and Informatics (ICEEI '09), 2009.
5. Liu R., Li S., Tan C. L., Pang B. C., Lim C. C. T., Lee C. K., Tian Q., Zhang Z. (2009) From hemorrhage to midline shift: A new method of tracing the deformed midline in traumatic brain injury CT images, 16th IEEE International Conference on Image Processing (ICIP), 2009.
6. Chawla M., Sharma S., Sivaswamy J., and Kishore L. T. (2009) A method for automatic detection and classification of stroke from brain CT images, Proc. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC 09), 2009.
7. Hudyma E., Terlikowski G. (2008) Computer-aided detecting of early strokes and its evaluation on the base of CT images, Proc. International Multiconference on Computer Science and Information Technology, 2008, pp. 251-254.
Author: Sim Kok Swee Institute: Faculty of Engineering and Technology Street: Jalan Ayer Keroh Lama City: Bukit Beruang, 75450 Melaka Country: Malaysia Email: [email protected]
ACKNOWLEDGMENT We would like to acknowledge assistance and support from Malacca General Hospital for providing brain scan images.
Evaluation of the Effects of Chang's Attenuation Correction Technique on Similar Transverse Views of Cold and Hot Regions in Tc-99m SPECT: A Phantom Study

I.S. Sayed and A. Harfiza

Department of Diagnostic Imaging and Radiotherapy, Kulliyyah of Allied Health Sciences, International Islamic University Malaysia, Kuantan Campus, 25200 Kuantan, Pahang, Malaysia
Abstract— Approximate attenuation correction techniques are used to handle the effects of absorption of gamma photons in the patient's body on image quality and on the quantification of radioactivity uptake. In this study, our aim is to evaluate the effects of Chang's attenuation correction technique on cold and hot region images of various sizes and locations in a phantom that yields similar cross-sectional views. A Philips ADAC Forte dual-head gamma camera is used, and a novel phantom loaded with Tc-99m is scanned. The data acquisition parameters are chosen to be the same as in patient studies. Images are produced by the filtered back projection technique, and Chang's attenuation correction is applied with different linear attenuation coefficient (LAC) values. Cold and hot region images are analyzed in terms of region detectability, contrast and standard deviation; hot region images are also studied through the relative count ratio. The results indicate that cold region detectability is improved as compared to hot regions with all LAC values used here. No change in the contrast of the 22.4 mm diameter cold region image is observed, while decreases for the 17.9 mm and 14.3 mm diameter cold regions are recorded. The hot region image contrast for the 14.3 mm diameter is enhanced, whereas for the 22.4 mm and 17.9 mm diameters it is equivocal with and without Chang's technique. The relative count ratio of hot region images is improved. However, the standard deviation in the count density of both types of regions is increased. The effects of Chang's attenuation correction technique on cold and hot region images are thus different, and care must be taken in clinical studies.

Keywords— Attenuation correction, SPECT, detectability, contrast.
I. INTRODUCTION

In single photon emission computed tomography (SPECT), radiopharmaceuticals are introduced into the patient's body and their distribution is determined externally by a gamma camera system. A relatively large number of gamma photons is absorbed and scattered internally, which is the major limitation of SPECT [1, 2]. In this modality, the chief objective is to obtain precise information regarding the point of emission of the gamma ray photons emanating from the organ of interest. A sufficient number of projections taken around the part of the patient's body is required to reconstruct images that reflect the true distribution of the radiopharmaceutical.
Most commonly, SPECT utilizes the filtered back projection (FBP) technique for image reconstruction from measured projections, due to its ease of implementation and fast response. Unfortunately, FBP does not take attenuated gamma ray photons into account: the technique assumes that the projections are true line integrals of the source distribution, whereas the projections actually result from line integrals of the product of the source distribution with the absorption and scattering of gamma ray photons along straight lines. The absorption and scattering of gamma photons leads to degradation of perceived image quality and causes serious errors in the quantification of radiopharmaceutical uptake; consequently, the quality of the diagnostic information obtained is poor and unreliable. It is therefore important to compensate the data for attenuation, in order to avoid image artifacts and to obtain better quantification of radiopharmaceutical uptake [3, 4]. In this connection, several approaches for attenuation compensation have been developed by different researchers and groups. Some techniques assume a uniform linear attenuation coefficient and are implemented with FBP [4-8]. These techniques can improve image quality, but are unable to provide accurate quantification of radiopharmaceutical uptake for inhomogeneous attenuating media [9]. Another category corrects the data for attenuation during the reconstruction and is incorporated in both FBP [10] and iterative techniques [9]. Iterative techniques use variable attenuation coefficient values, subject to prior information obtained by mapping the linear attenuation coefficient within the object to be scanned [11, 12]. With these techniques, relatively reliable quantification of radiopharmaceutical uptake is obtained; however, they have the drawback of longer computational time. Hybrid attenuation correction techniques also exist: iterative and post-processing techniques have been combined by [13].
Faber TL [14] applied the geometric mean of two opposite ray-sums of [15] with Chang's [7] post-processing attenuation correction technique. Murase K et al. [16, 17] developed a technique by combining the iterative approach with either a pre-processing or a post-processing technique. A hybrid technique has been formed by [18] by combining arithmetic mean-based and geometric mean-based pre-processing
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 643–649, 2011. www.springerlink.com
attenuation correction techniques. Hybrid techniques consume less time as compared to traditional iterative techniques [14, 17]; they reduce edge distortions, especially with inhomogeneous distributions of attenuation coefficients, and provide better quantification of the radiopharmaceutical distribution [17-19]. With SPECT/CT systems, an attenuation coefficient map is obtained by taking a transmission image of the relevant part of the body, and the information obtained is utilized for attenuation correction of the images. Generally, transmission-based attenuation correction techniques provide more accurate quantification. However, these techniques provide an attenuation coefficient map under narrow-beam geometry, ignoring the scatter contribution; the attenuation correction will therefore lead to overcorrection in the central part of the object if the data are not corrected for scatter [20]. Chang's [7] attenuation correction technique is simple and can be easily incorporated in the FBP reconstruction algorithm. In general, few SPECT/CT systems are installed in nuclear medicine departments around the world; conventional radionuclide tomography systems are still common, and the majority are equipped with Chang's attenuation correction technique to compensate for the absorption of gamma ray photons. Comparatively, it provides better quantification results [21]. For these reasons, Chang's attenuation correction technique has been chosen for this work. In this study, the effects of Chang's attenuation correction technique on cold and hot regions of different sizes and various locations within the phantom are investigated. It is assumed that the effects on cold and hot regions differ relative to each other. In order to obtain similar transverse views of the reconstructed images of cold and hot regions, a novel phantom insert has been constructed and scanned [22]. Since the views of the reconstructed images of both types of regions are similar, the evaluation, analysis and comparison of any effect is simple and easy.
II. METHODOLOGY

A. Data Acquisition and Image Reconstruction

A Philips ADAC Forte dual-head gamma camera, installed with Pegasys software for data acquisition, image reconstruction, display and analysis, was used. A modified PET/SPECT cylindrical phantom was scanned using the Tc-99m radionuclide. The tank was filled with water, and the cold and hot regions insert was placed in the phantom tank. The insert comprises pairs of cold and hot regions of various sizes (29.9, 22.4, 17.9, 14.3, 11.4, 9.7, 7.3, 5.9 and 4.7 mm diameter) at different locations
within the phantom. The phantom was left overnight for air bubbles to dissolve. A solution of Tc-99m (25 mCi) was carefully introduced into the tank, which was then put aside for some time to ensure that the radionuclide was homogeneously distributed in the saline water before scanning. In addition, blue dye was mixed with the water to check the uniform distribution of the radioactive solution inside the phantom. The phantom was placed on the patient couch with its long axis parallel to, and in the centre of, the field of view (FOV) of the gamma camera. Data were acquired with a low-energy high-resolution (LEHR) parallel-hole collimator. The image data matrix size chosen was 128 x 128 x 16. A standard energy window of 20% centered at 140 keV was used. Ninety projections (30 seconds per projection) were acquired over 360 degrees. It should be noted that only one head of the gamma camera was used, due to technical problems with the other head. The transverse images of the cold and hot regions insert were obtained by the filtered back projection image reconstruction technique, corrected for uniformity and centre of rotation (COR). Chang's attenuation correction method was applied with linear attenuation coefficient values of 0.111, 0.121, 0.131, 0.141 and 0.151 cm-1. A Butterworth filter with cut-off frequency 0.35 cycles/cm and order 5 was used. Scatter correction was not applied, as it was not available on this gamma camera system.
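For a uniform circular phantom, the principle of Chang's first-order correction can be sketched as follows: for each pixel inside the boundary, the attenuation factor exp(−μl) is averaged over all projection angles, l being the path length from the pixel to the boundary, and the reconstructed pixel value is multiplied by the reciprocal of that average. This is a hedged illustration of the method, not the vendor's Pegasys implementation; the centred circular boundary and the default pixel size are assumptions.

```python
import numpy as np

def chang_correct(image, mu=0.111, pixel_cm=0.47, n_angles=90):
    """First-order Chang correction for a uniform disc-shaped object
    centred in the image; mu in 1/cm, pixel_cm = pixel size in cm."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = min(cy, cx)                          # assumed phantom boundary
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)])
    out = image.astype(float).copy()
    for y in range(h):
        for x in range(w):
            px, py = x - cx, y - cy
            if px * px + py * py > radius * radius:
                continue                          # outside the object
            pd = px * dirs[0] + py * dirs[1]
            # distance from (x, y) to the circular boundary along each angle
            l = -pd + np.sqrt(np.maximum(pd**2 + radius**2 - (px**2 + py**2), 0.0))
            mean_atten = np.exp(-mu * l * pixel_cm).mean()
            out[y, x] /= mean_atten               # Chang correction factor
    return out
```

At the centre of the disc every path length equals the radius, so the correction factor reduces to exp(μR), the familiar worst-case value for a uniform cylinder.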
III. RESULTS

The importance of attenuation correction in nuclear medicine imaging is well recognized for achieving artifact-free images and accurate quantification of radiopharmaceutical uptake [23-26]. Exact measurement of attenuation within the patient's body has always remained a challenge, since it depends on the type of tissue and the distance that gamma photons have to travel from their point of origin in the object to the point of interaction with the crystal of the gamma camera. In practice, owing to the patient's physique, tissue thickness varies for different parts of the body, and hence across the reconstructed image; deep-seated lesions are therefore affected more severely than superficial lesions. In clinical situations, two types of lesions are generally encountered: cold lesions in a relatively hot background, and hot lesions in a comparatively cold background. In principle, because of these two opposite conditions, the effects of absorption and scattering of gamma photons on cold and hot regions could be different. In this context, so far no proper attention has been paid. Therefore, this problem should be addressed and dealt with properly in terms of both the
attenuation and scatter correction. Keeping in mind the above facts, an attempt is made to address the issue by scanning a newly constructed phantom insert to investigate the effects of Chang's attenuation correction technique. The phantom data provide images with similar cross-sectional views of both types of regions. Images of cold and hot regions are analyzed by comparing region detectability, contrast, standard deviation and the hot region relative count ratio.

A. Cold and Hot Region Detectability

For the detectability analysis and comparison of the effects of Chang's attenuation correction technique on cold and hot regions, the 17th and 27th transverse image slices are chosen, respectively, because the quality of these slices is better than that of the others, which reduces the chance of error in the image analysis. Visual analysis is performed on the uncorrected and corrected images. With Chang's attenuation correction method, different LAC values are used: 0.111, 0.121, 0.131, 0.141 and 0.151 cm-1. The images are shown in figure 1; the top row was reconstructed from the phantom data without attenuation correction. It is observed that cold regions located in the central part of the phantom are severely affected by the absorption of gamma photons, as compared to those located near the boundary of the phantom and also relative to the hot regions. Cold region detectability is relatively improved by applying Chang's attenuation correction technique: at least three pairs of regions of different sizes (22.4 mm, 17.9 mm and 14.3 mm diameter) can be seen clearly. In the hot region images, however, the separation between each pair is filled in with counts; in fact there should be no counts present there, and this has reduced the resolution of the images. In addition, the circular shape of the cold region images is distorted, whereas it does not change much in the case of the hot region images.
This could be partly due to the attenuation correction and the presence of scattered photons. Moreover, even when the LAC value is increased, the same number of pairs of cold and hot regions can still be seen; however, the background counts are increased. Further, the other, smaller regions appear to merge together, and those small regions have therefore been excluded from the analysis; this applies to both types of regions. From the qualitative aspects, in particular the shape of the cold and hot region images, the effect of the attenuation correction technique is smaller on hot region images than on cold region images. The overall appearance of the hot region images does not change even though different LAC values are applied.
Fig. 1 Shows the transverse slices of cold and hot region images without and with Chang’s attenuation correction using various LAC values
IFMBE Proceedings Vol. 35
I.S. Sayed and A. Harfiza
B. Cold and Hot Region Image Contrast

Cold and hot regions are analyzed in terms of contrast, standard deviation in the region count density, and hot region relative count ratio. In this part, the same transverse image slices, i.e., the 17th for cold and the 27th for hot regions, are investigated. For attenuation correction various values of LAC were used. The contrast of cold regions is calculated by the following equation:

C = |Dreg − Dbkg| / Dbkg        (1)

where Dreg is the count density in the region and Dbkg is the count density in the background. The ROIs were drawn carefully on the cold region image and separately on the background to obtain the count density. Figure 2 shows the contrast of three pairs of cold region images, chosen for their clarity. It is observed that contrast is improved with Chang’s attenuation correction for the 22.4 mm diameter cold region as compared to no attenuation correction, except for LAC values of 0.121 and 0.151 cm-1. In the case of the 17.9 mm diameter cold region, a significant improvement with LAC values of 0.131, 0.141 and 0.151 cm-1 relative to the 0.111 and 0.121 cm-1 LAC values is achieved; however, it is reduced relative to no correction. A reduction in contrast with all LAC values is recorded for the 14.3 mm diameter cold region image. This is due to the attenuation correction, in which counts are uniformly added throughout the image background including the cold regions. In addition, this indicates that the improvement in contrast varies with different LAC values, but a uniform trend is not observed.

Fig. 2 Shows the contrast values for cold region images without and with Chang’s attenuation correction technique using various LAC values

Figure 3 represents the values of contrast in hot regions. A different formula is applied to measure the contrast relative to cold regions. Count density was obtained by using the same method as in the case of cold region images:

C = |Dreg − Dbkg| / (Dreg + Dbkg)        (2)

It is noticed that contrast does not vary significantly and is less affected by changing LAC values for the 22.4 mm and 17.9 mm diameter hot regions, whereas for the 14.3 mm diameter hot region an improvement in contrast is seen with all LAC values. This reflects that for small hot regions an improvement in contrast can be achieved, as compared to small cold regions, and this trend is opposite between the two types of regions. This supports the idea that Chang’s attenuation correction has different effects on cold and hot regions. Further, in general the contrast values of hot region images are higher as compared to the cold region images when the data were corrected with Chang’s attenuation correction technique.

Fig. 3 Shows the contrast values for hot region images without and with Chang’s attenuation correction technique using various LAC values

C. Standard Deviation (SD) in the Cold and Hot Region Image Count Density
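As a concrete restatement of the two contrast measures in Eqs. (1) and (2), the helper functions below compute them from ROI count densities. The exact denominators are reconstructed from context in the extracted text, so treat them as one plausible reading rather than the paper's verbatim definitions.

```python
import numpy as np

def cold_contrast(d_reg, d_bkg):
    # Eq. (1), as reconstructed: cold-region contrast relative to the background density
    return abs(d_reg - d_bkg) / d_bkg

def hot_contrast(d_reg, d_bkg):
    # Eq. (2), as reconstructed: hot-region contrast normalised by the summed densities
    return abs(d_reg - d_bkg) / (d_reg + d_bkg)

def roi_count_density(image, mask):
    # mean counts per pixel inside a boolean ROI mask
    return float(image[mask].mean())
```

In use, one ROI mask is drawn over the region and a second over a background area of the same slice, and the two mean densities are passed to the appropriate contrast function.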
Standard deviation is one of the parameters used to characterise image quality. It quantifies the noise component in an image, which arises for many reasons, e.g., the random nature of radioactive decay, scattered gamma photons and low counting statistics, as well as noise generated by image reconstruction, analysis, and attenuation correction techniques. The standard deviation in the count density of cold and hot regions is increased with Chang’s attenuation correction technique, as shown in Figures 4a and 4b. Moreover, as the value of LAC is increased the standard
Evaluation of the Effects of Chang’s Attenuation Correction Technique on Similar Transverse Views
deviation is also increased. Using LAC values of 0.111 cm-1 and 0.121 cm-1, almost the same trend in SD is recorded for all region images of both types, and it is lower than the SD values obtained with 0.131, 0.141 and 0.151 cm-1 LAC. This suggests that, at least in this case, when no scatter correction is applied to the data, the lower linear attenuation coefficients, i.e., 0.111 cm-1 and 0.121 cm-1, may be used. An interesting point to note is that the standard deviation for cold region images is high relative to hot region images for all sizes and with each LAC value.
Fig. 4a Shows the standard deviation in the count density of cold region images without and with Chang’s attenuation correction technique using various LAC values
D. Relative Count Ratio of Hot Region Images

A simple measurement, the relative count ratio (RCR) of hot region images (22.4 mm, 17.9 mm and 14.3 mm diameter), was carried out. The equation written below is used to calculate the RCR:

RCR = Dhot region / D29.9mm        (3)

where Dhot region is the count density in the hot region and D29.9mm is the count density in the 29.9 mm diameter hot region image located near the edge of the phantom. It is assumed that this region experiences less attenuation and can therefore be used as a reference hot region for RCR measurements. Count profiles were drawn through the centre of the hot region images for the RCR calculation and the maximum pixel counts were recorded. Then, the maximum pixel counts of the deeper hot region images were compared to those of the region located nearer to the boundary of the phantom.

Fig. 5 Represents the hot region images relative count ratio without and with Chang’s attenuation correction technique incorporating various linear attenuation coefficient values

In Figure 5, the relative count ratios of 22.4/29.9 mm, 17.9/29.9 mm and 14.3/29.9 mm are shown. An improvement in RCR is achieved for all sizes of hot region images and with each LAC value. Results show that the 22.4/29.9 mm RCR increases constantly, whereas the 17.9/29.9 mm RCR has a similar trend for all LAC values except 0.131 cm-1. However, in the case of the 14.3/29.9 mm RCR, it does increase but varies significantly, except for the 0.141 and 0.151 cm-1 LAC values. The results also indicate that the deeper the location of the hot region, the greater the attenuation effect, as the photons need to travel a longer distance within the object before detection.

Fig. 4b Shows the standard deviation in the count density of hot region images without and with Chang’s attenuation correction technique incorporating various LAC values
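Expressed in code, the RCR of Eq. (3) is simply the ratio of the maximum pixel counts taken from profiles through the region of interest and the 29.9 mm reference region (a minimal sketch; the array inputs are hypothetical count profiles):

```python
import numpy as np

def relative_count_ratio(profile_region, profile_ref):
    """Eq. (3): RCR = D(hot region) / D(29.9 mm reference), using the
    maximum pixel count along each profile as the density estimate."""
    return float(np.max(profile_region)) / float(np.max(profile_ref))
```

An RCR approaching 1 after correction indicates that a deep region has recovered counts comparable to the edge reference region.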
IV. CONCLUSIONS
Attenuation correction in SPECT imaging is imperative in order to obtain better quality images and accurate quantification of the radioactivity distribution. In general, two types of lesions are encountered in patient studies, namely cold lesions in a hot background and hot lesions in a cold background. The work presented here evaluated the effects of Chang’s attenuation correction technique on similar cross-sectional views of cold and hot region images.
Visual analysis reflects that the cold region image detectability is improved relative to hot region images with Chang’s attenuation correction technique, though the circular shape of cold region images is distorted, while it is not much changed in the case of hot region images. The contrast of the 22.4 mm diameter cold region image is marginally improved with attenuation correction. The contrast of the 17.9 mm diameter cold region image is decreased relative to no attenuation correction, but is increased with the other LAC values relative to 0.111 cm-1, whereas the contrast of the small (14.3 mm diameter) cold region image is reduced with all LAC values. As far as the hot region image contrast is concerned, it is observed that contrast does not vary significantly relative to no correction for the 22.4 mm and 17.9 mm diameter hot region images when changing LAC values, whereas for the 14.3 mm diameter hot region image the contrast is improved with all LAC values. Further, in general the contrast values of hot region images are higher as compared to the cold region images when the data were corrected with Chang’s attenuation correction technique. The standard deviation in the count density of cold and hot regions is increased with Chang’s attenuation correction technique; moreover, the standard deviation for cold region images is high relative to hot region images for all sizes and with each LAC value. In general, the results of RCR measurements show that improvement is achieved with Chang’s attenuation correction for all LAC values and for all sizes of hot region images. It is concluded that Chang’s attenuation correction technique has different effects on cold region images relative to hot region images, in terms of region detectability, contrast, standard deviation, shape, depth and size. Therefore, both types of regions corrected for attenuation should be analyzed accordingly, and in patient examinations careful attention should be paid.
Further, a suitable value of LAC should be used in order to avoid errors in quantification of radioactivity uptake as well as image artifacts.
ACKNOWLEDGMENT

The authors are grateful to the Department of Nuclear Medicine, Oncology and Radiotherapy, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, 16150 Kubang Kerian, Kota Bharu, Kelantan, Malaysia, for allowing us to carry out the experiments for this research, and to the staff for their cooperation and technical help.
REFERENCES

1. Masuo H, Jun D, Keita U et al. (2005) Comparison of Methods of Attenuation and Scatter Correction in Brain Perfusion SPECT. J Nucl Med Technol 33:224-229
2. Shah SI, Leeman S, Betts M (1998) A Comparison of Four Pre-Processing Attenuation Correction Methods for Single Photon Emission Computed Tomography. Mehran University Research Journal of Engineering and Technology 17:151-158
3. Taneja S, Mohan HK, Blake GM et al. (2008) Synergistic impact of attenuation correction and gating in routine myocardial SPECT reporting: 2 year follow-up study. Nucl Med Commun 29:390-397
4. Garcia EV (2007) SPECT attenuation correction: an essential tool to realize nuclear cardiology's manifest destiny. J Nucl Cardiol 14:16-24
5. Sorenson JA (1974) Methods for quantitative measurement of radioactivity in vivo by whole-body counting. Academic Press, New York
6. Kay DB, Keyes JW Jr (1975) First order correction for absorption and resolution compensation in radionuclide Fourier tomography. J Nucl Med 16:540-541
7. Chang LT (1978) A method for attenuation correction in radionuclide computed tomography. IEEE Trans Nucl Sci 25:638-643
8. Budinger TF, Gullberg GT, Huesman RH (1979) Emission Computed Tomography. Springer-Verlag
9. Gullberg GT, Huesman RH, Malko JA et al. (1985) An attenuated projector-backprojector for iterative SPECT reconstruction. Phys Med Biol 30:799-816
10. Tanaka E, Toyama H, Murayama H (1984) Convolutional image reconstruction for quantitative single photon emission computed tomography. Phys Med Biol 29:1489-1500
11. Glick SJ, Penney BC, King MA (1991) Filtering of SPECT reconstructions made using Bellini’s attenuation correction method: a comparison of three pre-reconstruction filters and a post-reconstruction Wiener filter. IEEE Trans Nucl Sci 38:663-669
12. Liang Z, Turkington TG, Gilland DR (1992) Simultaneous compensation for attenuation, scatter and detector response for SPECT reconstruction in three dimensions. Phys Med Biol 37:587-603
13.
Moore SC, Brunelle JA, Kirsch C (1982) Quantitative multidetector emission computerized tomography using iterative attenuation compensation. J Nucl Med 23:706-714
14. Faber TL, Lewis MH, Corbett JR et al. (1984) Attenuation Correction for SPECT: An Evaluation of Hybrid Approaches. IEEE Trans Med Imaging 3:101-107
15. Budinger TF, Gullberg GT (1974) Three-dimensional reconstruction in nuclear medicine emission imaging. IEEE Trans Nucl Sci 21:2-20
16. Murase K, Itoh H, Mogami H et al. (1987) A comparative study of attenuation correction algorithms in SPECT. Eur J Nucl Med 13:55-62
17. Murase K, Tanada S, Higashino H et al. (1989) A unified computer program for assessment of attenuation correction and data acquisition methods in single photon emission computed tomography (SPECT). Eur J Nucl Med 15:284-253
18. Shah SI, Leeman S (2006) A Hybrid Pre-processing Attenuation Correction Technique for SPECT Imaging. Int J Sci Res 16:51-55
19. Walters TE, Simon W, Chesler DA et al. (1981) Attenuation correction in gamma emission computed tomography. J Comput Assist Tomogr 5:89-94
20. Licho R, Glick SJ, Xia W et al. (1999) Attenuation compensation in 99mTc SPECT brain imaging: a comparison of the use of attenuation maps derived from transmission versus emission data in normal scans. J Nucl Med 40:456-463
21. Mas J, Ben Younes R, Bidet R (1989) Improvement of quantification in SPECT studies by scatter and attenuation compensation. Eur J Nucl Med 15:351-356
22. Sayed IS, Shah AA (2007) Modified PET/SPECT Cylindrical Phantom. IFMBE Proc vol 14, World Congress on Med Phys & Biomed Eng, Seoul, South Korea, 2006, pp 1681-1683. DOI: 10.1007/978-3-540-36841-0_414
23. Vahidian KA, Noori Eskandari MH, Naji MM (2010) New approach for attenuation correction in SPECT images, using linear optimization. Iran J Radiat Res 8:111-116
24. Patton JA, Turkington TG (2008) SPECT/CT Physical Principles and Attenuation Correction. J Nucl Med Technol 36:1-10
25. Zaidi H (2006) Quantitative analysis in nuclear medicine imaging. Springer Science+Business Media, New York
26. Sayed IS, Ahmad Z, Norhafiza N (2008) Comparison of Chang’s with Sorenson’s Attenuation Correction Method by Varying Linear Attenuation Coefficient Values in Tc-99m SPECT Imaging. LNCS 4987, Springer-Verlag Berlin Heidelberg.
Address of the corresponding author:
Author: Inayatullah Shah Sayed
Institute: International Islamic University Malaysia, Kuantan Campus
Street: Jalan Sultan Ahmad Shah, Bandar Indera Mahkota
City: Kuantan, Pahang
Country: Malaysia
Email: [email protected]
Efficiency of Enhanced Distance Active Contour (EDAC) for Microcalcifications Segmentation

S.S. Yasiran1, A. Ibrahim1, W.E.Z.W.A. Rahman1, and R. Mahmud2

1 Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia
2 Faculty of Medicine, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia
Abstract— In this paper the boundaries of microcalcifications in mammogram images are segmented by using the Distance Active Contour (DAC) method. However, the DAC requires a long computational time to finish the segmentation process. Thus, the Enhanced Distance Active Contour (EDAC) is proposed to overcome this problem. The efficiency is measured in terms of time lapse and number of iterations. Results obtained show that the EDAC has successfully reduced the processing time as well as the number of iterations. In addition, the boundaries of microcalcifications have been successfully segmented by the EDAC. It is also found that the efficiency of the EDAC is better than that of the DAC.

Keywords— Enhanced Distance Active Contour, Mammogram, Microcalcifications, Segmentation, Boundaries.
I. INTRODUCTION

Advances in computer technology are connected with complex mathematical computation problems. Time is an issue when applying an algorithm to solve problems involving complex mathematical computations. For example, the snake or Active Contour [1], which is computational in nature, has been widely used in many applications of computer vision and image processing. It is composed of computer-generated curves that move within images to segment image boundaries. Segmentation is a process that partitions an image into meaningful regions which correspond to part of, or the whole of, an object within the scene. Image segmentation has become an increasingly significant and challenging task in medical imaging, such as mammography, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), over the past two decades. Computationally, the segmentation process involves iterations to find the boundary of an image, which is also time consuming. The Distance Active Contour (DAC) was originally proposed by Xu & Prince [2]. However, the DAC requires a long computational time to finish the segmentation process. Therefore our proposed method, called the Enhanced Distance Active Contour (EDAC), is introduced in order to solve the issue. In this paper, the efficiency of the DAC and the EDAC in segmenting characteristic details on breast phantom images is compared. In this study, the breast phantom is needed in order to test the applicability of the EDAC in the detection of the characteristic details ‘hidden’ behind the phantom. The breast phantom refers to a test object used to simulate the radiographic characteristics of compressed tissue. The phantom is designed to simulate the x-ray attenuation of a 4.2 cm compressed human breast composed of 50% adipose and 50% glandular tissue. Then, both methods are implemented on mammograms to segment microcalcifications [3]. Finally, the performance of both methods is measured and compared, to verify that the EDAC performs better than the DAC.
II. THE ENHANCED DISTANCE ACTIVE CONTOUR (EDAC)

Active Contour [1] is an energy-minimizing spline guided by external forces and influenced by image forces, which pull it towards features such as boundaries, edges and lines [4]. Active Contour has been extensively used to extract the boundaries of an object, such as in image segmentation [5,6,7], image tracking [8] and 3-D reconstruction [9]. A traditional Active Contour [1] is represented by a parametric curve v(s) = [x(s), y(s)], s ∈ [0,1], with x and y as the coordinates of vertices. The curve moves through the spatial domain of an image to minimize the energy function, which can be represented mathematically as

EAC = ∫₀¹ [Eint(v(s)) + Eext(v(s))] ds        (1)
The first term of Equation (1) represents the internal energy which is responsible for the smoothness and deformation process of the contour and can be expressed as:
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 650–654, 2011. www.springerlink.com
EAC = ∫₀¹ ( ½ [α|vs(s)|² + β|vss(s)|²] + Eext(v(s)) ) ds        (2)
where α (s ) and β (s ) is the elasticity and rigidity parameter respectively. The external energy function attracts the deformable contour to the boundary or edge of the image. By using calculus of variations, an Active Contour that minimizes the energy functional in Equation (2) must satisfy the Euler-Lagrange equation which can be expressed as:
αvss(s) − βvssss(s) − ∇Eext = 0        (3)
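This fourth-order condition is usually discretised with finite differences and solved semi-implicitly, inverting a pentadiagonal stiffness matrix once and reusing it at every iteration. The sketch below shows one generic update step of that scheme; the α, β, γ values are illustrative, and this is a textbook formulation rather than the authors' implementation.

```python
import numpy as np

def snake_step(x, y, fx, fy, alpha=0.5, beta=0.2, gamma=1.0):
    """One semi-implicit finite-difference update of a closed snake:
    (A + gamma*I) v_new = gamma*v_old + external force, where A encodes
    the alpha (tension) and beta (rigidity) terms of Eq. (3)."""
    n = len(x)
    A = np.zeros((n, n))
    # pentadiagonal rows: beta, -(alpha+4*beta), 2*alpha+6*beta, ... (cyclic)
    a, b, c = beta, -alpha - 4.0 * beta, 2.0 * alpha + 6.0 * beta
    for i in range(n):
        A[i, (i - 2) % n] = a
        A[i, (i - 1) % n] = b
        A[i, i] = c
        A[i, (i + 1) % n] = b
        A[i, (i + 2) % n] = a
    step = np.linalg.inv(A + gamma * np.eye(n))
    return step @ (gamma * x + fx), step @ (gamma * y + fy)
```

With zero external force the internal energy dominates and a circular contour slowly shrinks, which is why the external force field discussed next is what actually locks the snake onto object boundaries.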
According to Hou & Han, the Finite Difference Method (FDM) is selected due to its simplicity [10]. In this paper, the DAC is chosen. The reason behind the selection of the DAC is that it is not well explored: no literature has been found so far in which the method is implemented on mammograms. Based on the history and evolution of some Active Contour models, most researchers prefer to enhance and study the GVF Active Contour [2]. Thus, the GVF Active Contour has grown to be popular, unlike the DAC, which is less popular. Table 1 illustrates the citation occurrence of the DAC along the evolution of the Active Contour models.
Table 1 DAC citations from 1987–2010

Year: 1987, 1991, 1993, 1995, 1997, 1998(a), 1998(b), 1999, 2000, 2001, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010
Citation occurrence: [2], [6], [11], [12]; none in the remaining years

From Table 1, there are few citations of the DAC within 1987–2010. Moreover, the application of the DAC is limited to static data, i.e., clear-cut images; there is no application of the DAC to mammograms, which are not static data. The idea of the DAC was derived by Xu and Prince [2], where a potential force based on the Euclidean distance is used. One of the reasons for choosing the DAC is that, based on the literature, the interest among researchers is low and limited to comparison purposes only. For instance, Xu & Prince [2] introduced the GVF Active Contour as well as the DAC and made some comparisons; it was found that the GVF Active Contour performs much better. Then, another study was done by the same researchers in which the GGVF Active Contour [6] was proposed to improve the GVF Active Contour; again, the DAC was used for comparison purposes only. As a different approach, Hou & Han [10] used the distance potential force as the external force to create the FFA Active Contour, and new rules were proposed for the Active Contour. According to them, the reason they chose the force field derived from the Euclidean distance potential is that this force field makes it easy to carry out the idea of the FFA Active Contour. Another creative idea, by Sum & Cheung [11] and associated with the potential force, is to replace the original potential force with its norm. The DAC also faces the same problem as the traditional Active Contour: it requires a long computational time to finish the segmentation process. Thus, some enhancement should be made to overcome this issue. The pseudo code of the EDAC is illustrated in Figure 1:

{
begin
  get g = image_file;
  set h = 200_by_200_pxl;
  set f = edge_map;
  f = 1 - (g / 255);                       % replaced with Canny edge detection
  set D = f((x2 - x1)^2) + f((y2 - y1)^2) > 0.5;
  set [px, py] = gradient(-D);
  j = [px, py];
  set t = 0 -> 2*pi;
  x = a + r*cos(t);
  y = b + r*sin(t);
  while (i < ξ) {
    if ((x < h) && (y < h)) {
      j = EDAC_deform;
    }
  }
  display j;
end
}

Fig. 1 The pseudo code of the EDAC

The first step is to read the image file. Then, the edge map is computed. The inverse edge map method is used in the original algorithm. However, this method requires a long time to complete the segmentation process. Hence, this part is modified in order to reduce the time to complete the
segmentation process. The Canny edge map is chosen to replace the inverse method, because the Canny edge detector is one of the most powerful edge detection methods for detecting object boundaries [12]. The next step is to compute the external force based on the computed edge map; the Euclidean distance formula is used to compute the distance. Next, the initialization is defined by using a circle equation. The number of iterations is bounded by the parameter ξ, which is also the stopping criterion for the iteration process; the iteration continues until this stopping criterion is reached.
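The steps just described (edge map, Euclidean distance to the nearest edge, gradient of the negative distance as the external force, plus a circular initialisation) can be sketched as follows. A brute-force distance computation stands in for an optimised distance transform, and a plain binary edge map stands in for the Canny detector, so this is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def distance_force_field(edge_map):
    """External force of a distance active contour: the gradient of the
    negative Euclidean distance to the nearest edge pixel.
    edge_map: boolean array, True at edge pixels."""
    ey, ex = np.nonzero(edge_map)
    ys, xs = np.mgrid[0:edge_map.shape[0], 0:edge_map.shape[1]]
    # brute-force distance to the nearest edge pixel (fine for small maps)
    d = np.sqrt((ys[..., None] - ey) ** 2 + (xs[..., None] - ex) ** 2)
    D = d.min(axis=-1)
    fy, fx = np.gradient(-D)       # force vectors point towards the edges
    return fx, fy

def circle_init(a, b, r, n=100):
    """Initial contour x = a + r cos t, y = b + r sin t, as in Figure 1."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return a + r * np.cos(t), b + r * np.sin(t)
```

Because the distance map decreases towards every edge, the gradient of its negation pushes contour points in the direction of the nearest boundary regardless of how far away they start, which is the main appeal of distance-based external forces.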
III. MATERIALS AND METHOD

This study is mainly divided into four major stages. The first stage is data collection. Original mammogram images are obtained from the National Cancer Society Malaysia (NCSM). A radiologist then confirms the Region of Interest (ROI), which only contains microcalcifications. The second stage is the enhancement of the DAC. It is tested first on a breast phantom. If the modified algorithm satisfies the scoring criteria set by the ACR [13], then it is used in the next phase; otherwise, modifications to the algorithm are made until the scoring criteria set by the ACR are satisfied. If the total score is less than 10, the algorithm provides poor image quality; thus, the score must be greater than 10 in order to obtain good image quality. A study of the breast phantom with the Active Contour or snake algorithm has been successfully conducted by Wan Eny Zarina et al. [14]. The finding shows that the implementation of the Active Contour on segments of the breast phantom image is able to increase the sensitivity of detection of the characteristic details of the breast phantom. Thus, in this study the experiments are conducted on segments of the breast phantom image. The next stage is the implementation of the EDAC on the mammograms. The final stage is to measure the performance of the EDAC. Figure 2 illustrates the block diagram of our proposed method.
Fig. 2 Block diagram of the proposed method
IV. EXPERIMENTAL RESULTS

A. Breast Phantom

To test the capability of the EDAC, it is implemented on the phantom to trace the hidden characteristic details. Scoring is performed by a radiologist on the merged phantom before and after implementing the EDAC. Tables 2 (a) and (b) illustrate the scoring results obtained by the radiologist.

Table 2 (a) Scoring phantom before EDAC

Characteristic Details   Top Left   Bottom Left   Top Right   Bottom Right
Fibrils                  2          2             0           0
Specks                   1          0             0           0
Mass                     1          0             1           2
Subtotal                 4          2             1           2
OVERALL TOTAL            4 + 2 + 1 + 2 = 9
Table 2 (b) Scoring phantom after EDAC

Characteristic Details   Top Left   Bottom Left   Top Right   Bottom Right
Fibrils                  2          2             0           0
Specks                   2          1             0           0
Mass                     1          2             2           2
Subtotal                 5          5             2           2
OVERALL TOTAL            5 + 5 + 2 + 2 = 14

The readings taken from the DAC and the EDAC are further utilized for comparison purposes.
From both Tables 2 (a) and (b), it is found that the total score for the phantom before implementing the EDAC is 9. This shows that the phantom did not meet the criteria for good image quality. Meanwhile, the total score for the phantom after implementing the EDAC is 14, which fully satisfies all the criteria for good image quality. Moreover, the total number of characteristic details detected using the EDAC is 14, which exceeds 10; according to the ACR this represents good image quality. These results confirm that the EDAC algorithm is highly suitable for segmenting microcalcifications directly on the mammograms.
Fig. 3 Difference in time (minutes)

Figure 3 illustrates the difference in time (minutes) between the DAC and the EDAC. It can be observed that the difference between each pair of DAC and EDAC points is huge for every image. For instance, the application of the DAC on Image 2 requires 780 minutes (13 hours) to finish the segmentation process, whereas the EDAC needs only 0.30 minutes (about 18 seconds) to complete the segmentation of the same image. It can be observed that in these 10 cases a lot of processing time is saved using the EDAC. This is considered a good result.
B. Efficiency of EDAC

Because of the long computational time required by the DAC and the limited hardware capacity used (Intel® Core™2 Duo CPU, 1.50 GHz, 1 GB of RAM), the ideal number of iterations could not be generated. Hence, only 10 images were processed using the DAC. Table 3 illustrates the time lapse and number of iterations taken for segmenting microcalcifications in the 10 images using the DAC and the EDAC.

Fig. 4 Difference in number of iterations

Table 3 Time taken and number of iterations between DAC and EDAC

Image No.   Time taken (mins)      No. of iterations
            DAC      EDAC          DAC       EDAC
1           450      0.23          28,415    40
2           780      0.30          48,722    68
3           276      0.33          20,033    55
4           480      0.22          33,440    39
5           480      0.22          39,110    38
6           420      0.18          25,788    24
7           480      0.18          31,891    21
8           420      0.17          26,891    20
9           180      0.17          12,500    22
10          360      0.33          29,718    80
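To make the gain in Table 3 concrete, the snippet below computes the per-image speedup factors from the listed times (the values are transcribed from the table as extracted, so they inherit any transcription errors in it):

```python
# Time taken in minutes, transcribed from Table 3 (images 1-10)
dac_min = [450, 780, 276, 480, 480, 420, 480, 420, 180, 360]
edac_min = [0.23, 0.30, 0.33, 0.22, 0.22, 0.18, 0.18, 0.17, 0.17, 0.33]

# speedup factor per image and its mean over the ten test images
speedups = [d / e for d, e in zip(dac_min, edac_min)]
mean_speedup = sum(speedups) / len(speedups)
```

Every image is segmented at least several hundred times faster; Image 2, for example, drops from 780 minutes to 0.30 minutes, a factor of 2600.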
Figure 4 illustrates the difference in the number of iterations between the DAC and the EDAC. Obviously, if the time taken to complete the segmentation process is reduced, the number of iterations is also reduced. For example, for Image 2, the DAC requires 48,722 iterations to complete the segmentation process, whereas the EDAC requires only 68 iterations for the segmentation of the same image.
V. CONCLUSION

The DAC has been enhanced in order to solve its time-consuming problem. The results on the breast phantom show that the EDAC satisfies the standard ACR scoring criteria. It also shows that the efficiency of the EDAC is better compared to the DAC.
ACKNOWLEDGMENT

The authors would like to acknowledge Universiti Teknologi MARA (UiTM) for its support and contributions.
REFERENCES

1. Kass, M., Witkin, A. & Terzopoulos, D. (1987). Snakes: Active Contour models. International Journal of Computer Vision, vol. 1, 321-331.
2. Xu, C. & Prince, J.L. (March 1998 (a)). Snakes, shapes, and gradient vector flow. IEEE Trans. on Image Process., 359-369.
3. Administrative Code. (2005). Retrieved July 17, 2009, from facilities performing mammography Section 370.20 definitions: http://www.ilga.gov/commission/jcar/admincode/032/032003700000200R.html
4. Xie, X. (May 2002). Interim Report: Active Colour Snakes. University of Bristol.
5. Yezzi, A., Kichenassamy, S., Kumar, A., & Tannenbaum, A. (1997). A geometric snake model for segmentation of medical imagery. IEEE Transactions on Medical Imaging, 16, 199-209.
6. Xu, C. & Prince, J.L. (1998 (b)). Generalized gradient vector flow external forces for Active Contours. Elsevier: Signal Processing, 71, 131-139.
7. Brigger, P., Hong, J., Unser, M. (2000). B-Spline snakes: A flexible tool for parametric contour detection. IEEE Transactions on Image Processing, 9(9).
8. Leymarie, F. & Levine, M.D. (1993). Tracking deformable objects in the plane using an Active Contour model. IEEE Trans. Pattern Anal. Mach. Intell., 15(6), 617-634.
9. Cohen, L.D., & Cohen, I. (1993). Finite element methods for Active Contour models and balloons for 2-D and 3-D images. IEEE Trans. Pattern Anal. Mach. Intell., 15, 1131-1147.
10. Hou, Z., & Han, C. (2005). Force field analysis snake: an improved parametric active contour model. Elsevier: Pattern Recognition Letters, 26, 513-526.
11. Sum, K.W. & Cheung, P.Y.S. (2007). Boundary vector field for parametric active contours. Elsevier: Pattern Recognition, 40, 1635-1645.
12. Canny, J.F. (1986). A Computational Approach to Edge Detection. IEEE Trans. on PAMI, 8(6), 679-698.
13. American College of Radiology Mammography Accreditation. (2009, April 4). Retrieved June 25, 2009, from PDF File: http://www.acr.org/accreditation/mammography/mammo_faq/mammo_faq.aspx
14. Wan Eny Zarina, W.A.R., Arsmah, I., Zainab, A.B., Rozi, M., Md. Saion, S., & Mazani, M. (2008). Post Processing of Breast Phantom MRI-156 Images Using Snake Algorithm. IEEE: Fifth International Conference on Computer Graphics, Imaging and Visualization.

Address of the corresponding author:
Author: Siti Salmah Yasiran
Institute: Universiti Teknologi MARA (UiTM)
City: Shah Alam
Country: Malaysia
Email: [email protected]
Estimating Retinal Vessel Diameter Change from the Vessel Cross-Section

M.Z. Che Azemin1,2 and D.K. Kumar1

1 School of Electrical and Computer Engineering, RMIT University, Victoria 3001, Australia
2 Biomedical Science Department, Kulliyyah of Science, International Islamic University, Pahang 25200, Malaysia
Abstract— We have developed a methodology to estimate small changes in the retinal vessel diameter. This technique does not rely on segmentation of the vessel, which is prone to error when the fitted profile does not exactly represent the expected model. Using the newly proposed method, we have identified an inverse linear relationship (p < 0.0001, r² = 0.965) between the proposed measure and the narrowing of the simulated vessels. The immediate application of this technique is in the estimation of caliber change in time-series retinal vessels, gated with the electrocardiogram.

Keywords— retinal vessel diameter, vessel profile, cross-correlation.
I. INTRODUCTION

Eye fundus imaging allows the viewing of the small vessels of the cardiovascular system. It is the only non-invasive and direct observation of the cardiovascular system and has a number of applications [1-4]. One important feature in the retinal image is the retinal vessel diameter, which is an important measure of the cardiovascular health of the person. For instance, a change in the vessel diameter may provide an early indicator of the risk of stroke incidence and mortality [5]. A recent advancement in retinal vessel imaging is the use of retinal video imaging techniques to estimate the changes in arteriole caliber over cardiac cycles. This technique uses the Dynamic Vessel Analyzer (DVA) to measure the variation in a specific segment of the vessel diameter [6]. The DVA requires substantial modification of the existing retinal camera, which is prohibitively expensive, making the technology inaccessible to many practitioners [7]. The alternative to this is the use of manual triggering, the reliability of which is questionable [8]. The other concern with the reliability of retinal vessel caliber is the need for manual intervention in the caliber grading process [8]. Recent studies have quantified the changes in the vessel diameter using measures such as Central Retinal Arteriole Equivalent (CRAE), Central Retinal Venule Equivalent (CRVE) and Arteriole-to-Venule Ratio (AVR) [9]. These measures summarize the retinal vasculature in a specific zone into one single value for a single retinal image. These
measures are routinely used for clinical evaluation, and their popularity stems from their compactness and convenience. While this technique has been widely used, the summarization of the vessel caliber into a single number results in the loss of much of the information: these measures average the caliber of the vessels and thus fail to consider the variation of the vessel along a segment and over cardiac cycles. One major difficulty associated with the analysis of retinal vessels is the use of vessel segmentation techniques, which are prone to error. The gold standard of retinal caliber measurement is based on estimating the vessel diameter from the twin-Gaussian profile of the vessel in the retinal image [10]. The profiling technique needs the estimated profile to fit the vessel cross-section closely, and each term in the fitted equation is assumed to reflect a definite part of the twin-Gaussian profile. However, this assumption is not always accurate because it is based on a perfectly symmetrical and cylindrical shape of the vessel, without including the effect of the background intensity. This can yield erroneous diameter estimates when the fitted equation does not exactly represent the expected model. Measures such as CRAE and CRVE are useful because they average out the variations and provide more consistent results; however, such measures reduce the information content in the image, so small variations in the diameter of the vessels cannot be measured using these techniques. In this paper, we propose the use of cross-correlation of the vessel profiles to estimate the change in the vessel diameter, using synthetic images generated from the twin-Gaussian profile. We tested the efficacy of this method by evaluating the narrowing of the simulated vessel diameter.
II. MATERIALS AND METHODS

The aim is to identify small changes in vessel diameters using the vessel cross-sections. For this purpose, synthetic images with vessels of known diameters, having small differences in caliber, have been generated and analyzed. This section is devoted to the foundation of the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 655–658, 2011. www.springerlink.com
M.Z. Che Azemin and D.K. Kumar
proposed methodology to estimate the change in the vessel diameter. It has been divided into three subsections: the first explains how the simulated vessels were generated, the second explains the problems with the current method of diameter measurement, and the third deals with the proposed technique to estimate the change in the vessel diameter.
Fig. 1 shows a single vessel cross-section. The vessel profile is the inverted version of the twin-Gaussian (equation 1). The edge of the vessel is defined at half of the amplitude, and the bump in the middle is characterized as the central light reflection. Using the first image as reference, we generated another eight images, reducing the vessel size by two pixels each time. Fig. 2 illustrates two vessels with different sizes and their profiles. Note that the shrinking vessel exhibits a loss in the similarity of the profiles.
B. Previous Diameter Measurement - Fitting of a Twin-Gaussian
Fig. 1 Single vessel cross-section. (a) Simulated vessel; the blue line indicates where the cross-section is taken. (b) Single vessel cross-section; it can be represented as an inverted twin-Gaussian
Fig. 2 Profiles of two simulated vessels with different sizes
C. Proposed Vessel Diameter Change Estimation
A. Simulated Shrinking Vessels

We have simulated retinal vessels that have small variations of the diameter between them, based on the generally accepted twin-Gaussian model [10]:
$$G(x) = a_1 e^{-\left(\frac{x-a_2}{a_3}\right)^{2}} + a_4 - a_5 e^{-\left(\frac{x-a_6}{a_7}\right)^{2}} \qquad (1)$$
The previous method of estimating the retinal vessel diameter employs the fitting of a twin-Gaussian to the cross-sectional profile of the vessel. The intensity cross-section was modeled by equation 1, and the terms in equation 1 were estimated by fitting the twin-Gaussian function to the profile using a Levenberg-Marquardt fitting algorithm. The width of the vessel was calculated to be 2.33a3. The first drawback of this method is the need for the parameters to be initialized: bad initial values may lead to divergence from the true model. Fig. 3(a) is an example in which the fit computation does not converge to the twin-Gaussian model. Another problem with this method is that it tends to overestimate the diameter due to the inclusion of the background intensity when the fitting process tries to minimize the residual errors, as shown in Fig. 3(b). For example, the diameter of the vessel using the half-of-the-amplitude definition is measured to be 60 pixels, whereas fitting a twin-Gaussian curve to the simulated vessel gives a diameter of 75 pixels.
The positive Gaussian represents the vessel: a1 is its height, a2 the spatial displacement, and a3 the spread. Likewise, a5, a6, and a7 carry the same meanings for the negative Gaussian, which represents the central light reflection. a4 corresponds to the background intensity.
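As an illustration of this simulation, the profile of equation (1) can be generated and its half-of-the-amplitude width read off numerically. This is a minimal sketch in plain Python; the parameter values below are illustrative, not taken from the paper:

```python
import math

def twin_gaussian(x, a1, a2, a3, a4, a5, a6, a7):
    """Inverted vessel cross-section, equation (1): a positive Gaussian for
    the vessel body, minus a smaller Gaussian for the central light
    reflection, on a constant background a4."""
    vessel = a1 * math.exp(-((x - a2) / a3) ** 2)
    reflex = a5 * math.exp(-((x - a6) / a7) ** 2)
    return vessel + a4 - reflex

# Illustrative parameters: vessel centred at pixel 50, spread 15,
# background 0.1, and a narrow central light reflection.
params = dict(a1=1.0, a2=50.0, a3=15.0, a4=0.1, a5=0.2, a6=50.0, a7=4.0)
profile = [twin_gaussian(x, **params) for x in range(101)]

# Half-of-the-amplitude edge definition: count the samples whose height
# above the background exceeds half of the peak height above background.
peak = max(profile) - params["a4"]
width = sum(1 for g in profile if g - params["a4"] >= peak / 2)
```

Shrinking the vessel then amounts to reducing a3 (and a7) before regenerating the profile, which is one way the successively narrower synthetic vessels could be produced.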
To estimate the vessel diameter change, we correlate the image under evaluation with a reference image. With this method, vessel segmentation and absolute diameter measurement are not required; the change in caliber is identified from the relative changing pattern with respect to the reference image rather than from the absolute value of the diameter. Cross-correlation was adopted to analyze the relationship between the original image and the reduced-size images. It estimates the degree to which two signals are correlated. The advantage of cross-correlation over conventional correlation is that it reduces the effect of misalignment of the images. For two cross-sections x(n) and y(n) (n = 1,2,…,N), the maximum cross-correlation function is defined as
Estimating Retinal Vessel Diameter Change from the Vessel Cross-Section
Fig. 3 Twin-Gaussian fitting model applied to an inverted simulated vessel cross-section. (a) Fitting with arbitrary initial parameters: an example in which the fitted curve does not converge to the expected model. (b) Overestimation of the vessel diameter due to outliers from the background intensity
Fig. 4 The graph shows the inverse linear relationship between Xcorrmax and the change in the simulated vessel size

$$\mathrm{Xcorr}_{\max} = \max_{t}\left[\frac{\sum_{n=1}^{N} x(n)\,y(n+t)}{\left(\sum_{n=1}^{N} x(n)^{2}\,\sum_{n=1}^{N} y(n+t)^{2}\right)^{1/2}}\right],\qquad(2)$$
where t is the spatial lag, x(n) is the cross-section from the first image serving as the reference signal, y(n) is the cross-section of the image under evaluation, and N is the length of x(n) and y(n).
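A minimal plain-Python sketch of equation (2); the search window `max_lag` and the restriction to overlapping samples are implementation choices, not specified in the paper:

```python
import math

def xcorr_max(x, y, max_lag=20):
    """Maximum normalized cross-correlation between a reference
    cross-section x and an evaluation cross-section y (equation 2).
    Only overlapping samples contribute at each spatial lag t."""
    n = len(x)
    best = 0.0
    for t in range(-max_lag, max_lag + 1):
        idx = [i for i in range(n) if 0 <= i + t < n]
        num = sum(x[i] * y[i + t] for i in idx)
        den = math.sqrt(sum(v * v for v in x) *
                        sum(y[i + t] ** 2 for i in idx))
        if den > 0:
            best = max(best, num / den)
    return best
```

Identical profiles give Xcorrmax = 1 even when one is spatially shifted, while a narrowed vessel profile lowers Xcorrmax, which is the behaviour exploited in Fig. 4.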
III. RESULTS

Fig. 4 is the plot of Xcorrmax against the decrease in the pixel size. From this figure, it is observed that the method is able to estimate the change in the vessel diameter. The graph demonstrates the inverse linear trend of Xcorrmax with each pixel reduction in vessel size (r2 = 0.965). The decreasing trend in Xcorrmax was statistically significant (p<0.0001): Xcorrmax was found to decrease by 0.0072 with each decreasing pixel of the vessel diameter. Table 1 shows the robustness of Xcorrmax against noise, i.e., the influence of noise on the correlation between Xcorrmax and the decrease in the vessel diameter. The results show a significant negative correlation between the noisy Xcorrmax and the reduction in the vessel diameter.

Table 1 Pearson's correlation coefficient of the Xcorrmax from the noisy images with the decrease in the vessel diameter

Type of Noise     Correlation Coefficient
Gaussian          -0.995
Poisson           -0.996
Salt & Pepper     -0.984
IV. CONCLUSION AND DISCUSSION

In this paper, we have identified the issues affecting the estimation of change in the vessel diameter: the need for an exact mathematical model to be fitted to the
vessel profile, and vessel segmentation error [10]. This paper has also proposed and tested a new measure for estimating vessel diameter change. The parametric method based on the twin-Gaussian model was shown to lack accuracy because the term used in the fitting model to estimate the vessel diameter does not consistently reflect the true caliber. Meanwhile, vessel segmentation largely depends on the algorithm used to define the edges of the vessels; the Sobel technique, for example, regularly mistakes central light reflections for vessel edges [10]. The simulation result (Fig. 4) implies that the change in the vessel diameter can be estimated from the loss of similarity between the cross-sections of the same vessel captured at different moments. This is a relative measure; from Xcorrmax alone we cannot determine whether a vessel cross-section shrinks (or dilates). While we lose some information, we gain robustness against the noise to which vessel segmentation techniques are susceptible [10]. This is supported by the resulting high correlation coefficients of more than 0.98 against three types of noise mainly found in biomedical images [11] (Table 1). This technique assumes the cross-sections are sampled at the same spatial location. Care must be taken with real retinal images, where different acquisition times can contribute to misalignment of the images. This can be overcome by introducing rigid registration (translation and rotation) prior to the sampling of the vessel profiles. Residual misalignment is compensated by the cross-correlation itself, which identifies the lag that gives the best measure of similarity between two profiles. The immediate application of this technique is in the caliber change estimation of time-series retinal vessels, gated with the electrocardiogram [8].
REFERENCES

1. M. Z. C. Azemin, D. K. Kumar, T. Y. Wong, J. J. Wang, R. Kawasaki, P. Mitchell, and S. P. Arjunan, "Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Buenos Aires, 2010, pp. 4308-4311.
2. M. Z. C. Azemin, D. K. Kumar, T. Y. Wong, J. J. Wang, P. Mitchell, R. Kawasaki, and H. Wu, "Age-related rarefaction in the fractal dimension of retinal vessel," Neurobiology of Aging, 2010.
3. M. Z. C. Azemin, D. K. Kumar, and H. R. Wu, "Shape signature for retinal biometrics," in 2009 Digital Image Computing: Techniques and Applications, 2009, pp. 381-386.
4. M. Z. C. Azemin, D. K. Kumar, T. Y. Wong, R. Kawasaki, P. Mitchell, and J. J. Wang, "Robust methodology for fractal analysis of the retinal vasculature," IEEE Transactions on Medical Imaging, 2010.
5. M. L. Baker, P. J. Hand, J. J. Wang, and T. Y. Wong, "Retinal signs and stroke: revisiting the link between the eye and brain," Stroke, vol. 39, p. 1371, 2008.
6. G. Garhofer, T. Bek, A. G. Boehm, D. Gherghel, J. Grunwald, P. Jeppesen, H. Kergoat, K. Kotliar, I. Lanzl, and J. V. Lovasik, "Use of the retinal vessel analyzer in ocular blood flow research," Acta Ophthalmologica, 2010.
7. B. U. Seifert and W. Vilser, "Retinal Vessel Analyzer (RVA) - design and function," Biomedizinische Technik/Biomedical Engineering, vol. 47, pp. 678-681, 2002.
8. M. J. Dumskyj, S. J. Aldington, C. J. Doré, and E. M. Kohner, "The accurate assessment of changes in retinal vessel diameter using multiple frame electrocardiograph synchronised fundus photography," Current Eye Research, vol. 15, pp. 625-632, 1996.
9. H. Li, W. Hsu, M. L. Lee, and T. Y. Wong, "Automatic grading of retinal vessel caliber," IEEE Transactions on Biomedical Engineering, vol. 52, pp. 1352-1355, 2005.
10. N. Chapman, N. Witt, X. Gao, A. A. Bharath, A. V. Stanton, S. A. Thom, and A. D. Hughes, "Computer algorithms for the automated measurement of retinal arteriolar diameters," British Journal of Ophthalmology, vol. 85, p. 74, 2001.
11. R. M. Rangayyan, Biomedical Image Analysis. CRC Press, 2005.
Face-Central Incisor Morphometric Relation in Malays and Chinese L.M. Abdulhadi and H. Abass Faculty of Dentistry, University of Malaya, Kuala Lumpur, Malaysia
Abstract— This research examined the metric relation between the face and the inverted maxillary central incisor in the Malay and Chinese ethnic groups. One hundred twenty Malay and Chinese volunteers were investigated for face-central incisor matching. Their results were compared to 51 Iraqis who served as a control. The similarity was tested statistically using three different mathematical methods to examine the presence of a face-tooth morphometric relationship. In the first part, the presence of correlation between the facial and the central incisor dimensions was investigated using direct facial and tooth measurements. The following biometrical points were measured on the face: bizygomatic and bigonial widths, and nasion-pogonion height. The measurements on the tooth were: the midpoint of the incisal edge to the midpoint of the highest cervical curvature on the labial surface, and the maximum tooth width (nearly at the level of the contact points). In the second part, image analyzer software was used to indirectly measure the width at 14 different vertical locations on the face, starting from the pogonion and ending in the frontal area, and at 14 areas starting from the cervical area of the tooth and ending at the widest incisal region. In the third part of the study, three dentists evaluated the face-tooth matching visually. The results showed that a statistically significant metric correlation existed between the 14 selected widths of the frontal face view and the 14 widths of the labial view of the central incisor (p<0.05). In addition, the face frontal view matched the tooth frontal form (p<0.05) using visual perception. In conclusion, the inverted central incisor form corresponded to the frontal view of the face in Malay and Chinese people. Keywords— Morphometric relation, Digital image analyzer, Tooth, Face.
I. INTRODUCTION

Tooth form selection is one of the most important steps in establishing optimum aesthetics in the replacement of lost teeth with artificial ones. This step should satisfy the patient and his relatives as well as the dentist, and promote the patient's self-confidence, welfare, and psychological relief. The tooth-face form matching theory is not new: Leon Williams (1914) hypothesized the presence of a harmony between the frontal inverted central incisor (CI) and the face form. Much research has been done to find such a morphologic match
by using the visual perception method, computer shape matching according to the Hausdorff distance (HDD), simple or sophisticated statistical correlation analysis, or even genetic findings. However, controversy persists. The contentious findings may be due to one of the following:
- Different reference landmarks and instrumentation.
- Different ethnic group samples.
- Variable mathematical and statistical analyses.
On the other hand, the theory of Leon Williams continues to be used as the most universally accepted method providing artistically desirable harmony for tooth form selection in the absence of the patient's pre-extraction records. The Law of Harmony (1914) stated that the human central incisor contour in the frontal plane could be classified into three major shapes: rectangular, triangular, and ovoid [Fig. 1].
Fig. 1 The main four facial forms described by Leon Williams

He claimed that the most pleasing appearance is achieved when the outline of the inverted maxillary central incisor and that of the individual's face match. The face form, personality and expression, the proportions of the other parts of the face, and the lip and eye forms and colors may affect the agreeable facial look [1,2]; so may the oral cavity, the tooth shape and size, the arch form, the palatal contour, and the teeth arrangement [3-5]. The three terms sex, personality, and age (SPA) were considered dentogenic terms and were introduced to the dental profession in the early 1950s to describe the art, practice, and techniques used to achieve aesthetic goals in prosthodontics [5]. A significant correlation was found between the face,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 659–662, 2011. www.springerlink.com
the tooth, and the arch forms by outline superimposition. Although this method applied high technology, the accuracy of determining the shape and size of teeth in an edentulous patient was not improved, and the cost and complexity of the measurement method made it impractical for common application [6]. The purpose of this investigation was to test the hypothesis of central incisor-face frontal shape matching in Malays and Chinese and to compare the findings to an Iraqi sample.
II. MATERIALS AND METHODS

120 subjects (48 males and 72 females) participated in the study: 65 Malay and 55 Chinese, all healthy and completely dentate. Their ages ranged between 17-25 years (mean = 23.13±1.4). 51 Iraqis participated as a control group for comparing the records and the correlation results. The subjects had not undergone any conservative treatment or prosthetic replacement of their anterior or posterior teeth, which were normally aligned, and had no concurrent or previous orthodontic treatment. The selected subjects were of pure Malay or Chinese ethnic origin for at least two generations; special questionnaire forms were designed and used to verify this imperative during subject selection. Patients with gingivitis, periodontitis, enamel dysplasia, attrition and abrasion, amelogenesis imperfecta, dentinogenesis imperfecta, or other enamel and dentine abnormalities or malformations of teeth were excluded from the study. The first part of the study investigated the presence of correlation between the facial and the central incisor dimensions, in addition to analyzing the differences, if any, among the studied ethnic groups. The correlation was investigated using direct facial and tooth measurements. The following biometrical points were measured on the face using a cephalometer and a precise metallic (vernier) ruler: bizygomatic and bigonial widths, and nasion-pogonion height. The measurements on the tooth were: the midpoint of the incisal edge to the midpoint of the highest cervical curvature on the labial surface, and the maximum tooth width (nearly at the level of the contact points). In the second part of the study, the subjects were photographed at a constant distance, height, and magnification using a tripod-mounted Nikon digital camera (Nikon Co.) with a macro lens (AF Micro Nikkor 60 mm 1:2.8D, Japan) and a ring flash.
The distance was fixed at 150 cm horizontally and 135 cm vertically, measured from the subject's nose tip to the lens. A preliminary impression of the maxillary arch of each subject was made using irreversible hydrocolloid impression
material (Kromopan 100 hours hydrocolloid, dust free, ISO 1563 class A type 1, Italy) and a perforated stock impression tray (Ash Co Ltd, Hertfordshire, England) following the manufacturer's instructions. The impression was poured using dental stone (Heraeus Kulzer Corp., Hanau, Germany). On the cast, the most external boundaries of the left CI were marked using a pencil (0.5 mm) and photographed at fixed angulation, distance, and magnification. The images were uploaded into the computer. Using an image analyzer (Leica QWin Lite image analysis V 27.1), the face and tooth images were divided into equally separated parts by 14 horizontal parallel lines in order to measure the widths along the face and CI length (Fig. 2). A simple hypothesis was assumed: if two objects without ideal geometric forms and with different dimensions (as in the case of the face and tooth) are to match in form, then there should be a positive statistical linear relationship between their similar parallel dimensions. Therefore, the face and the inverted central incisor were divided and measured in mm, and the measurements were correlated using linear correlation. Prior to each record, the face and tooth lengths were calibrated using the image analyzer calibration technique. The face-CI morphometric matching was tested using three mathematical analyses: the mean widths of the whole sample (face and CI), the total width ratios of the face and CI, and the individual widths of each subject were correlated using a parametric or non-parametric test as indicated. In the third part of the study, three dentists evaluated the face-tooth matching visually after they had been taught the main facial forms and given 2-3 minutes for each case. The face was displayed on the computer with its corresponding CI.
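The 14-level width measurement can be sketched as follows, assuming the face or tooth outline is available as a binary mask (in the study the levels were drawn with the image analyzer on the photographs, so this is only an illustration):

```python
def widths_at_levels(mask, n_levels=14):
    """Object width at n_levels evenly spaced rows between the topmost and
    bottommost foreground rows of a binary mask (a list of 0/1 rows)."""
    rows = [r for r, line in enumerate(mask) if any(line)]
    top, bottom = rows[0], rows[-1]
    levels = [top + round(k * (bottom - top) / (n_levels - 1))
              for k in range(n_levels)]
    widths = []
    for r in levels:
        cols = [c for c, v in enumerate(mask[r]) if v]
        widths.append(cols[-1] - cols[0] + 1)  # outermost-contour width
    return widths
```

The same function applied to the face mask and to the inverted tooth mask would yield the two 14-element width vectors that are subsequently correlated.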
The data of the tooth and face were transformed into ratios by dividing the width of the widest area by the next width (starting from the widest area) until 13 ratios were completed. This method hypothesized that if the ratios between successive widths of two objects are correlated, then the two objects match in form. Thirteen
Fig. 2 The image analysis of the tooth and face
values resulted from the ratio formula for both face and tooth widths:
Tooth Ratio 1 (TR1) = Tooth Width 1 (TW1) / Tooth Width 2 (TW2), and so on through TR13.
For the face, the same mathematical procedure was applied:
Face Ratio 1 (FR1) = Face Width 1 (FW1) / Face Width 2 (FW2), and so on through FR13.
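The ratio transformation and the Spearman correlation used throughout the analysis can be sketched in plain Python (the pairing of the tooth and face ratio vectors per subject is assumed):

```python
def successive_ratios(widths):
    """TR_i = TW_i / TW_(i+1): fourteen widths give thirteen ratios."""
    return [widths[i] / widths[i + 1] for i in range(len(widths) - 1)]

def _ranks(values):
    """Average ranks (ties shared), as Spearman's rho requires."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    ra, rb = _ranks(a), _ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) *
           sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den
```

A perfectly monotone relation between the 13 tooth ratios and the 13 face ratios would give rho = 1, which is the kind of agreement the study tests for.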
Subsequently, the ratios of the inverted CI widths were correlated to those of the face. The following statistical tests were used to describe the features of the sample population and the control, and to test for differences among the ethnic groups or between genders: descriptive statistics, one-way analysis of variance (ANOVA), parametric or non-parametric correlation, and Chi-square (p<0.05).

III. RESULTS

A. The Intra-Observer Reliability

The reliability of the image analysis records was checked using two sets of width records (minimum and maximum); one set belonged to the tooth and the other to the face. Each record was repeated three times by the same examiner and then analyzed using Spearman non-parametric correlation. The results were highly significant (p<0.05).

B. Difference between the Face and CI Records of the Three Samples (Malay, Chinese, Iraqis)

There were statistically significant differences among the studied ethnic groups in all of the facial and tooth measurements.

C. The Relation between the Mean Face-CI Widths

The Spearman correlation coefficient was high and significant (r = +0.989 to 1, p<0.01, two-tailed).

D. The Relation between the Face-CI Ratios

The correlation was positive and highly significant for the two studied genders and the two ethnic groups, in addition to the control (p<0.01, two-tailed).

E. The Individual Relation between Facial and CI Widths

The Spearman correlation was highly significant (p<0.01, two-tailed).

F. The Visual Perception Method

The statistical results suggested the presence of a relationship between the face frontal contour and the inverted central incisor form (Table 1). In general, there were significant differences in assessment between the three dentists, but the three assessors agreed on the presence of similarity between the face contour and the inverted tooth form.

Table 1 The results of visual perception

Statistic     First dentist        Second dentist       Third dentist
              Malay     Chinese    Malay     Chinese    Malay     Chinese
Agreement     67.7%     61.1%      75.3%     77.8%      73.9%     76%
Χ2            72.12     44.00      95.62     81.06      82.62     90.44
Sig.          0.00      0.00       0.00      0.00       0.00      0.00
Lik. Ratio    67.06     38.12      85.3      70.70      74.83     78.51
Asymp. Sig.   0.00      0.00       0.00      0.00       0.00      0.00

IV. DISCUSSION
Standardized digital still imaging was used to capture the face and the maxillary central incisor images. The main purpose was to reduce bias in recording and measuring. Variability was reduced by repeated measurement, and the reliability test revealed a highly significant association. Many methods have been used by other researchers to analyze such images, applying the same principle [4,6]. The image analyzer seems more practical and easier to use than the previous techniques, and presented optimal precision and reliability. Many techniques have been attempted to disclose the presence of similarity between the labial view of the inverted CI and the face soft-tissue boundary; however, none could unveil this relationship [4,7]. Our method differs from the other studies in that the face and the central incisor were divided using horizontal lines representing the facial or tooth width at 14 evenly distributed heights, marked on the external contour by 14 bilateral points. In the 170 subjects, the preliminary measurements showed that the face width decreased when reaching the temporal area, and its form became dissimilar from that of the CI. Therefore, the maximum facial width was taken at the bizygion level or slightly higher so that it resembles the tooth form. Hence, the final result indicated the presence of a highly significant correlation between the face-tooth widths, which means a close similarity in their shapes within this predetermined area (pogonion-bizygion). Visual assessment depended mainly on the dentists' experience, which may be considered (because of their scientific background) better than a layman's. Moreover, the number of
participants was high enough to offer more accurate results than a low number [8,9]. However, this method could be considered less favorable for this type of research and carries a lot of bias, especially as it depends on many variables related to experience, background, social agreement, and personal taste. The total agreement between the dentists' assessments was 86% for the face and 66.4% for the tooth, which indicates acceptable reliability among their evaluations. Identification of the face form seemed to be easier than that of the tooth form. The association between face and tooth forms was contradictory to the previous studies [8,9]. In this research, width ratios were used for the first time to increase the certitude regarding the presence of a correlation between the CI and the face. Our findings were not in agreement with [4,6,8,10-14] due to the use of different measuring and analyzing methods, and different ethnic groups. If a linear regression analysis were applied to the face-tooth widths, the results of this research could be used to predict the form of the central incisor for any edentulous patient from the facial frontal widths. A tooth model can be proposed and modulated by the predicted measurements.
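The regression step suggested above can be sketched with an ordinary least-squares fit; the paired face and CI widths below are hypothetical, purely to illustrate the prediction:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical paired measurements (mm): a facial width and the CI width
# at the corresponding level for five subjects (not data from the paper).
face = [120.0, 125.0, 130.0, 135.0, 140.0]
tooth = [8.1, 8.4, 8.6, 8.9, 9.2]
a, b = fit_line(face, tooth)
predicted = a * 128.0 + b  # predicted CI width for a 128 mm facial width
```

Repeating the fit at each of the 14 levels would yield a predicted width profile, i.e., a proposed tooth form, for an edentulous patient.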
V. CONCLUSIONS

The following can be concluded from this study:
1. The facial and CI direct measurements and indexes showed significant differences among the races: Malay, Chinese, and the control population.
2. A highly significant correlation was found between the widths of the inverted left central incisor and the face widths at the 14 studied locations in the Malay, Chinese, and control (Iraqi) groups. According to this finding, a morphometric relationship may exist between the face and the inverted central incisor in the three studied ethnic groups within the defined limits and boundaries of the face and the tooth in the frontal plane.
3. Visually, matching between the face and tooth forms was observed by three independent dentists.
4. The findings supported Williams' theory regarding the presence of correlation or harmony between the forms of the face and the inverted central incisor.
REFERENCES

1. Ahmad (2005). Anterior dental aesthetics: facial perspective. Br Dent J; 199(1): 15-21.
2. Flores-Mir C., Silver E., Barriga M.O. et al. (2004). Lay person's perception of smile aesthetics in dental and facial views. J Orthod; 31: 204-9.
3. Brigante R.F. (1981). Patient-assisted aesthetics. J Prosthet Dent; 46(1): 14-20.
4. Sellen P.N., Phil B., Jagger D.C. et al. (1998). Computer-generated study of the correlation between tooth, face, arch forms, and palatal contour. J Prosthet Dent; 80(2): 163-168.
5. Burchet P.J., Christensen L.C. (1988). Estimating age and sex by using colour, form, and alignment of anterior teeth. J Prosthet Dent; 59(2): 175-178.
6. Lindemann H.P., Knauer C., Pfeiffer P. (2004). Morphometric relationship between tooth and face shapes. J Oral Rehabil; 31(10): 972-8.
7. Brodbelt R.H., Walker G.F., Nelson D. et al. (1984). Comparison of face shape with tooth form. J Prosthet Dent; 52(4): 588-92.
8. Bell A. (1978). The geometric theory of selection of artificial teeth: is it valid? JADA; 97: 637-40.
9. Marunick M.T., Chamberlain B.B., Robinson C.A. (1983). Denture aesthetics: an evaluation of laymen's preferences. J Oral Rehabil; 10: 399-406.
10. Mavroskoufis F., Ritchie G.M. (1980). The face-form as a guide for the selection of maxillary central incisors. J Prosthet Dent; 43(5): 501-5.
11. Hasanreisoglu U., Berksun S., Aras K. et al. (2005). An analysis of maxillary anterior teeth: facial and dental proportions. J Prosthet Dent; 94(6): 530-8.
12. Wolfart S., Menzel H., Kern M. (2004). Inability to relate tooth form to face shape and gender. Eur J Oral Sci; 112(6): 471-6.
13. Varjao F.M., Nogueira S.S., Sergio R. et al. (2006). Correlation between maxillary central incisor form and face form in 4 racial groups. Quintessence Int; 37(10): 767.
14. Ferreira. (2007). Digitized study of the correlation between the face and tooth shapes in young adult individuals. Braz J Oral Sci; 6(22): 1383-6.
Author: Associate Professor Dr. Laith Mahmoud Abdulhadi
Institute: Faculty of Dentistry, University of Malaya
Street: Jalan University
City: 50603 Kuala Lumpur
Country: Malaysia
Email: [email protected]
Fingertip Synchrotron Radiation Angiography for Prediction of Diabetic Microangiopathy

T. Fujii1, N. Fukuyama1, Y. Ikeya1, Y. Shinozaki1, T. Tanabe1, K. Umetani2, and H. Mori1

1 Tokai University School of Medicine/Physiology and Cardiology, Isehara, Japan
2 JASRI/Research & Utilization Division, Hyogo, Japan
Abstract— Diabetic microangiopathy causes acetylcholine-induced paradoxical vasoconstriction in arterioles (20-200 μm). Because conventional angiographic systems lack sufficient spatial resolution (100-200 μm), they are not useful for the prediction of diabetic microangiopathy and for the prevention of lethal cardiovascular diseases. To determine whether fingertip synchrotron radiation microangiography has enough spatial resolution to quantitate arteriolar diameter changes, and whether arteriolar paradoxical vasoconstriction is a characteristic observation of diabetic microangiopathy, the diameter reduction at arteriolar branching and the difference in acetylcholine-induced diameter changes between control (n = 5) and diabetic rats (n = 5) were analyzed. Fingertip synchrotron radiation microangiography visualized arterioles with a diameter range of 30-300 μm and demonstrated vascular diameter reduction at branching with a fixed ratio (r = 0.93, P < 0.004, y = 0.81x + 0.004 and r = 0.73, P < 0.001, y = 0.39x + 23.3; with x and y representing the diameters of the mother and of the 1st and 2nd daughter segments). A vasodilatory reaction was induced by acetylcholine in the controls (142.4 ± 61.9 to 190.9 ± 73.5, P < 0.05, n = 25) and, in contrast, paradoxical vasoconstriction in diabetic rats (201.6 ± 83.0 to 160.4 ± 67.9, P < 0.05, n = 37). In conclusion, fingertip synchrotron radiation microangiography predicted diabetic microangiopathy and warrants further investigation into its usefulness to prevent lethal cardiovascular disease. Keywords— Synchrotron Radiation, Microangiography, Atheromatous Disease, Acetylcholine.
I. INTRODUCTION

Diabetes mellitus (DM) induces microangiopathy, including vascular endothelial dysfunction (VED). It has been reported that paradoxical vasoconstriction (PVC) in response to vasoactive medication such as acetylcholine (Ach) develops in an early stage of atheromatous disease (an initial blood flow control abnormality) [1]. It can be speculated that arterioles also play a central role in VED at an early stage of DM microangiopathy. Synchrotron radiation (SR) is an established X-ray source that can be used in microangiography. SR is emitted from a large synchrotron accelerator and is electromagnetic radiation characterized as white light with extremely high luminance and strong directionality. To date, successful imaging of the coronary arterial microcirculation and of lower-limb ischemia in laboratory animals has been reported with microangiography using SR. The purpose of this study was to determine whether fingertip SR microangiography can predict arteriolar microangiopathy in DM rats.
II. MATERIALS AND METHODS
SR microangiography of experimental animals was performed at a SR experiment facility (SPring-8, Hyogo Prefecture, Japan). The white SR is converted into a monochromatic X-ray beam of 33.2 keV (the K absorption edge of iodine) by a silicon crystal. This monochromatic X-ray, with an energy level just above the K absorption edge of iodine, allows detection of a small amount of iodine in the microvessels [2]. The monochromatic X-rays that passed through the objects (the anesthetized rats described below) were detected by a SATICON camera. The visual field was set to 9.5 mm × 9.5 mm in this SR microangiographic imaging system, and the spatial resolution (SPR) was 9.5 μm. All animal studies were conducted under protocols approved by the Tokai University Animal Experimental Committee. Ten male rats (Japan SLC, Shizuoka, Japan) were divided into a control group (n=5) and a DM group (n=5). The control group consisted of Fischer 344 rats (n=3) and LETO rats (Otsuka Pharmaceutical, Tokushima, Japan; n=2). The DM group consisted of a type I DM model of Fischer 344 rats (n=3), which had been treated with 100 mg/kg of streptozotocin (STZ; Sigma, St. Louis, Missouri, USA) 3 months before the microangiography, and a type II DM model of OLETF rats (n=2; Otsuka Pharmaceutical, Tokushima, Japan). Just before SR microangiography, the rats were anesthetized by intraperitoneal injection of 50 mg/kg of sodium pentobarbital (Nembutal, Abbott Laboratories, North Chicago, IL, USA). The right common iliac artery was then exposed by
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 663–666, 2011. www.springerlink.com
skin incision, and a polyethylene catheter (Clay Adams PE-50; I.D. 0.58 mm, O.D. 0.965 mm; INTRAMEDIC, USA) was inserted into the lower part of the descending aorta for contrast medium injection. Another catheter was inserted into the tail artery for vasoactive agent infusion and arterial blood pressure monitoring. In the ten rats, SR microangiography was performed twice, in the following order: at baseline and during Ach administration, with a 10 min intermission. SR microangiography was performed by irradiating the left hind-limb fingertips of the anesthetized rats with the monochromatic X-ray while injecting iodine contrast medium into the descending aorta via the right common iliac artery. The contrast medium was injected at a rate of 2.4 ml/sec (Fischer rats: 2 ml total; LETO and OLETF: 4 ml total) through the polyethylene catheter inserted into the right common iliac artery. After the 10 min intermission, and after confirming stabilization of the aortic pressure and pulse rate, the second SR angiography was done in exactly the same way as at baseline while injecting Ach (Nacalai Tesque, Tokyo, Japan) at a rate of 3.28 × 10^-11 mol/kg/min into the tail artery. The lumen diameter of the microvessels was measured at three points for each vascular bifurcation: a point just proximal to the bifurcation, tentatively named the "mother artery", and two points just distal to the bifurcation, the "1st (bigger) daughter artery" and the "2nd (smaller) daughter artery" (a total of 38 data sets from the 10 rats at baseline or during Ach administration). In general, arterial diameter is reduced progressively with branching [3]. Therefore, the vascular diameters of the 1st and 2nd daughter arteries were plotted against those of the proximal sites (38 and 38 data sets, respectively) and analyzed by linear correlation and regression analysis. Next, the changes in the lumen diameters of the microvessels from the baseline condition to Ach administration were compared between the normal and DM groups (n=25 arterioles from the 5 normal rats, and n=37 from the 5 DM rats).
The quantitative angiographic analysis described above was performed on a personal computer (Dell Precision Workstation 620, Dell). The results are summarized as mean values ± SD. Linear correlation and regression analysis were applied to the two sets of data (the 1st and 2nd daughter segments versus the mother segments). Comparison of the vascular diameter changes during Ach administration between the DM and control rats was performed with the paired t-test.
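The linear regression of daughter against mother diameters and the paired baseline-versus-Ach comparison described above can be sketched as follows. This is a minimal illustration with synthetic diameter values (not the study's measurements), using NumPy only:

```python
import numpy as np

# Synthetic diameters in micrometres (assumption: illustrative values only,
# not the study's measurements).
mother = np.array([300.0, 250.0, 200.0, 150.0, 100.0, 60.0])
daughter1 = 0.8 * mother + np.array([2.0, -3.0, 1.0, -1.0, 2.0, -1.0])

# Linear correlation (Pearson r) and regression y = slope*x + intercept,
# as applied to the mother vs. daughter diameter plots.
r = np.corrcoef(mother, daughter1)[0, 1]
slope, intercept = np.polyfit(mother, daughter1, 1)

# Paired t statistic for diameters at baseline vs. during Ach administration.
baseline = np.array([142.0, 150.0, 130.0, 160.0, 120.0])
during_ach = np.array([190.0, 195.0, 170.0, 210.0, 150.0])
d = during_ach - baseline
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

print(round(r, 3), round(slope, 3), round(t_stat, 2))
```

With real data, the slope of the regression line is the branching reduction ratio (about 0.8 for the 1st daughter in this study) and the t statistic is compared against the paired t distribution with n-1 degrees of freedom.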
III. RESULTS

As shown in panel a-1 and the magnified panel a-1 of Figure 1 (arrows), SR microangiography visualized the fingertip arterioles. The minimum diameter of the measured fingertip arterioles was 30 μm (Figure 3, right panel, arrowhead).

We measured the diameters at 3 sites for each bifurcation (arrowheads in Figure 1, panel a-2): the mother artery (the proximal site before the bifurcation) and the 1st and 2nd daughter arteries, and analyzed whether the measurements showed a progressive diameter reduction with branching at a constant ratio. In the 38 bifurcation sites from the 10 rats, all 3 arteries were visualized clearly enough to measure their diameters. The relations of the microvascular diameters of the 1st and 2nd daughter arteries to those of the mother arteries were analyzed by correlation analysis and linear regression analysis, as shown in the left and right panels of Figure 2. Both plots revealed significant linear correlations (r = 0.93, P < 0.004 and r = 0.73, P < 0.001, respectively). Regression analysis yielded regression equations of y = 0.807x + 0.003 and y = 0.392x + 23.31, respectively. These results indicate that SR microangiography can reveal a diameter reduction with branching at fixed ratios of approximately 0.8 and 0.4.

The changes in the lumen diameters of the microvessels during Ach administration were compared with and without Ach stress in all ten rats. Ach treatment induced PVC (diameter reduction during Ach) in the DM rats (Figure 1, panels b-1 and b-2, and Figure 3, right panel), in contrast to a vasodilatory reaction (diameter increase during Ach) in the normal rats (Figure 1, panels a-1 and a-2, and Figure 3, left panel). The change in microvascular diameter caused by Ach was significant in both the normal and DM groups (P < 0.02 and P < 0.03, respectively, paired t-test).

Fig. 1 Fingertip Microangiograms. The three upper panels are microangiograms in a control rat; center: at baseline (a-1), left: magnified view of the baseline, and right: during Ach administration (a-2). The two lower panels are those in a DM rat; center: baseline (b-1) and right: during Ach administration (b-2). The arrows indicate Ach-induced vasodilation (upper panels) and PVC (lower panels). The arrowheads indicate an example of diameter measurement of the mother, 1st and 2nd daughter arteries.
IV. DISCUSSION

This study showed that fingertip SR microangiography has enough spatial resolution (SPR) to quantitatively evaluate arteriolar diameter changes and can detect functional microangiopathy that predicts pathological changes in DM rats. By analyzing the vascular diameter reduction with branching using linear correlation and regression analysis, we confirmed a constant ratio of diameter reduction from mother to daughter arteries in vascular beds with diameters of 30-300 μm (Figure 2). This observation indicates that the SPR of the present SR microangiographic system is more precise than its ability to detect small vessels filled with a small amount of iodine contrast material (concentration resolution). Tanaka et al. demonstrated a branching pattern with a fixed diameter reduction ratio, continuous from epicardial to intramyocardial branching, in the dog coronary artery [3]. They analyzed the branching patterns of coronary vessels determined by microangiography using SR, as done in the present study, and showed that the constant ratios of daughter to mother vascular diameter were 0.84 in the epicardial coronary artery and 0.74 in the intramural coronary artery. If all of the vascular diameter measurements were done above the limit of the SPR of the SR microangiography (9.5 μm), the degree of reduction should be constant; a marked deviation of the vascular diameter reduction ratio from the regression line would mean that the measurement was done beyond the limit of the SPR of the angiography. We confirmed in this study that the reduction ratio of the vascular diameter was kept constant throughout the measurements in the range of 30-300 μm: the reduction ratios of the microvascular diameters with branching were 0.81 and 0.39 throughout the range of the measurements, and the smallest measurable microvascular diameter was 30 μm.
Because these measurements fall on a straight-line approximation, as shown in Figure 2, this SR system can be regarded as having enough SPR to quantitate vessels with a diameter of 30 μm (theoretical resolution of 9.5 μm). Thus, we concluded that the SPR of fingertip SR microangiography was high enough to verify the quantitative data for arteriolar PVC. By performing fingertip SR microangiography with and without Ach stress, we succeeded in visualizing arteriolar PVC only in the DM rats (Figure 1, panels b-1 and b-2, and Figure 3, right panel). In contrast, a vasodilatory reaction was noted in the control rats (Figure 1, panels a-1 and a-2, and Figure 3, left panel).
Fig. 2 Regression Analysis of Microvascular Diameter Reduction with Branching. The vascular diameters of the mother arteries are plotted on the horizontal axis, and the vascular diameters of the 1st (left panel) or 2nd daughter arteries (right panel) on the vertical axis. The slopes of the regression lines, 0.807 and 0.392, indicate the reduction rates of the microvascular diameter with branching. The arrowhead in the right panel indicates the minimum diameter measured.
Fig. 3 The Changes in Microvascular Diameter Induced by Ach Stress. Left panel: the diameter changes from baseline to during Ach administration in the normal rats; right panel: those in the DM rats. Ach induced a vasodilatory reaction in the normal rats and, in contrast, a paradoxical vasoconstrictive reaction (PVC) in the DM group.
V. CONCLUSIONS

This study showed that fingertip SR microangiography has enough SPR to quantitatively evaluate arteriolar diameter changes and can detect functional microangiopathy that predicts pathological changes in DM rats. The system may be useful for the clinical detection of DM microangiopathy.
ACKNOWLEDGEMENT

This work was supported by grants from the Japan Society for the Promotion of Science and from the Ministry of Health, Labor and Welfare. The authors wish to thank the Japan Synchrotron Radiation Research Institute (JASRI) for the approval of these experiments at SPring-8 (BL28B). We also thank Ms. Yoko Takahari, Sachie Tanaka and Yoshiko Shinozaki of the Teaching and Research Support Center, Tokai University School of Medicine.
REFERENCES

1. Ludmer PL, Selwyn AP, Shook TL, et al. (1986) Paradoxical vasoconstriction induced by acetylcholine in atherosclerotic coronary arteries. N Engl J Med 315(17):1046–51
2. Umetani K, Fukushima K, Sugimura K (2008) Microangiography system for investigation of metabolic syndrome in rat model using synchrotron radiation. Conf Proc IEEE Eng Med Biol Soc:2693–6
3. Tanaka A, Mori H, Tanaka E, et al. (1999) Branching patterns of intramural coronary vessels determined by microangiography using synchrotron radiation. Am J Physiol 276(6 Pt 2):H2262–7

Author: Hidezo Mori, M.D., Ph.D., Professor
Institute: Tokai University School of Medicine
Street: 143 Shimokasuya
City: Isehara
Country: Japan
Email: [email protected]
Hybrid Multilayered Perceptron Network Trained by Modified Recursive Prediction Error-Extreme Learning Machine for Tuberculosis Bacilli Detection

M.K. Osman1,2, M.Y. Mashor2, and H. Jaafar3

1 Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM), Malaysia
2 Electronic & Biomedical Intelligent Systems (EBItS) Research Group, School of Mechatronic Engineering, Universiti Malaysia Perlis, Malaysia
3 Department of Pathology, School of Medical Science, Universiti Sains Malaysia, Malaysia
Abstract— In this paper, image processing techniques and an artificial neural network are used to detect and classify tuberculosis (TB) bacilli in tissue slide images. Tissue sections containing TB bacilli are stained using the Ziehl-Neelsen method and their images are acquired using a digital camera mounted on a light microscope. Colour image segmentation is applied to remove undesired artefacts and background. Then, affine moment invariants are extracted to represent the segmented regions. Finally, the study proposes a method that integrates the Modified Recursive Prediction Error (MRPE) algorithm and the Extreme Learning Machine (ELM), called MRPE-ELM, to train a Hybrid Multilayered Perceptron (HMLP) network. The network is used to classify the segmented regions into three classes: 'TB', 'overlapped TB' and 'non-TB'. The classification performance of the HMLP network trained by the MRPE-ELM is compared with that of the HMLP trained by the MRPE algorithm and a single layer feedforward neural network (SLFNN) trained by the ELM. The results indicate that the proposed MRPE-ELM slightly improves the classification performance and reduces the number of epochs required in the training process compared to the MRPE algorithm.

Keywords— Tuberculosis bacilli detection, tissue sections, affine moment invariants, artificial neural network.
I. INTRODUCTION

An artificial neural network (ANN) is an information processing and computational system inspired by the way the human brain processes information. It consists of a large number of highly interconnected neurons, or nodes, capable of processing data and learning from the input information [1]. The learning process is achieved by adjusting the weights that connect the neurons. ANNs have been used to solve a wide variety of problems, such as forecasting, control systems, pattern recognition, signal processing, image processing and data compression. The use of ANNs in medical image analysis has gained much popularity in the past few years [2]. The major strengths of ANNs are their capability to adapt, improve and optimize themselves according to the variety and change of input information [1]. These capabilities help to improve the efficiency and reliability of inspection and of disease diagnosis from medical images. ANNs have been successfully applied to tasks such as image enhancement, image segmentation, and the diagnosis and classification of diseases [1].

Tuberculosis (TB) is the second leading infectious killer after HIV/AIDS. Statistics from the World Health Organization (WHO) indicate that 2 billion people are infected with TB bacilli, with 9.4 million new cases and 1.8 million deaths recorded in 2008 [3]. The disease is caused by infection with Mycobacterium tuberculosis. The bacteria usually attack the lungs, causing pulmonary TB (PTB), but can affect other parts of the human body, referred to as extrapulmonary TB (EPTB). Clinical diagnosis of PTB is conducted by finding TB bacilli in sputum, while EPTB is diagnosed by biopsied-tissue examination.

The use of image processing and ANNs in TB detection and diagnosis is not completely new. Such a method was first proposed by Veropoulos et al. [4] to detect the presence of TB bacilli in sputum smears; a multilayer perceptron (MLP) network was used to classify bacillus and non-bacillus objects. Early work on computer-assisted TB detection was based on fluorescence microscopy images [4-6]. However, due to high cost and difficult equipment maintenance, the light microscope is the commonly used instrument for TB screening and diagnosis. To detect TB bacilli under a light microscope, the clinical specimens need to be stained with the Ziehl-Neelsen (ZN) method. After staining, the TB bacilli appear as red rods against a blue background. Recent research on automated TB detection has been based on light microscopic images [7-9], since this is the commonly used tool for TB diagnosis in low- and medium-income countries.
Even though a number of studies have been proposed to automate PTB detection [4-9], research in automated EPTB detection is still limited. Research on EPTB detection using ZN-stained tissue slide images can only be found in [7]. That work claimed to have identified bacilli in both
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 667–673, 2011. www.springerlink.com
sputum and tissue sections. However, only the sputum results were demonstrated in the analysis. Detection of TB bacilli in tissue is more difficult than in sputum, due to the complexity of the tissue background. Furthermore, over-staining and under-staining often occur in tissue samples [10]. These produce images with intensity inhomogeneities that further complicate the analysis. The current study focuses on automated detection of TB bacilli in tissue slide images. The tissue slide images are captured from ZN-stained tissue slides using a digital camera attached to a light microscope. Several image processing tasks are implemented to segment the TB bacilli. Then, affine moment invariants [11] are extracted and fed to a Hybrid Multilayered Perceptron (HMLP) network for classification. The study proposes a method that integrates the Modified Recursive Prediction Error (MRPE) algorithm and the Extreme Learning Machine, called MRPE-ELM, to train the HMLP network. A comparison with the HMLP network trained using the MRPE and a single layer feedforward neural network (SLFNN) trained using the ELM is also made, to find the best training method for TB classification.
II. THEORETICAL BACKGROUND

This section gives a brief overview of the HMLP network and the MRPE algorithm used for training the network. The ELM algorithm for training a SLFNN is also reviewed in terms of its basic concept and implementation. A method for training the HMLP network by combining both the MRPE and ELM algorithms is presented at the end of the section.

A. Hybrid Multilayered Perceptron Network

A multilayer perceptron (MLP) network is the most common and popular ANN. It is a type of feedforward ANN consisting of an input layer, a hidden layer and an output layer. Consider a MLP network with $N_i$ inputs, $N_h$ hidden nodes, and $v_i(t)$ the i-th input signal at the t-th sample. The output of the j-th hidden node is given by:

$$h_j(t) = F\left( \sum_{i=1}^{N_i} w_{ij}^1 v_i(t) + b_j \right), \quad t = 1, 2, \dots, N \qquad (1)$$

where $w_{ij}^1$, $b_j$, $F(\cdot)$ and $N$ represent the weights that connect the input and hidden layers, the thresholds in the hidden nodes, the activation function and the number of samples, respectively. The output of the k-th neuron can be expressed as:

$$\hat{y}_k(t) = \sum_{j=1}^{N_h} w_{jk}^2 h_j(t), \quad k = 1, 2, \dots, N_o \qquad (2)$$

where $w_{jk}^2$ and $N_o$ denote the weights that connect the hidden and output layers, and the number of output nodes, respectively.

Mashor [12] introduced a modified version of the MLP network with additional linear connections, called the Hybrid Multilayered Perceptron (HMLP) network. The HMLP network allows the input layer to be connected directly to the output layer using weighted connections, as illustrated in Figure 1. The output of the k-th neuron of a HMLP network is the sum of the linear and non-linear connections and can be written as:

$$\hat{y}_k(t) = \sum_{j=1}^{N_h} w_{jk}^2 h_j(t) + \sum_{i=1}^{N_i} w_{ik}^L v_i(t) \qquad (3)$$

where $w_{ik}^L$ denotes the weights of the linear connections between the input and output layers.

Fig. 1 Schematic diagram of a HMLP network

B. Modified Recursive Prediction Error

The Modified Recursive Prediction Error (MRPE) algorithm was introduced in [12] to train the HMLP network. The algorithm is based on the Recursive Prediction Error (RPE) algorithm proposed by Chen et al. [13] for training the MLP network. It minimizes the following cost function:

$$J(\hat{\Theta}) = \frac{1}{2N} \sum_{t=1}^{N} \varepsilon^T(t, \hat{\Theta}) \Lambda^{-1} \varepsilon(t, \hat{\Theta}) \qquad (4)$$

iteratively, by updating the estimated parameter vector $\hat{\Theta}$ using the Gauss-Newton algorithm:
$$\hat{\Theta}(t) = \hat{\Theta}(t-1) + P(t)\Delta(t) \qquad (5)$$

and

$$\Delta(t) = \alpha_m(t)\Delta(t-1) + \alpha_g(t)\psi(t)\varepsilon(t) \qquad (6)$$

where $\varepsilon(t)$, $\Lambda$, $\alpha_m(t)$ and $\alpha_g(t)$ are the prediction error, an $m \times m$ symmetric positive definite matrix (with $m$ the number of output nodes), the momentum and the learning rate, respectively. Both $\alpha_m(t)$ and $\alpha_g(t)$ are varied to improve the convergence rate of the RPE algorithm, according to:

$$\alpha_m(t) = \alpha_m(t-1) + a \qquad (7)$$

and

$$\alpha_g(t) = \alpha_g(0)\,(1 - \alpha_m(t)) \qquad (8)$$

where $a$ is a small constant with a typical value of 0.01, and $\alpha_m(0)$ and $\alpha_g(0)$ indicate the initial values of $\alpha_m(t)$ and $\alpha_g(t)$, which have typical values of 0 and 0.5, respectively. $\psi(t)$ is the gradient of the one-step-ahead predicted output with respect to the network parameters. $P(t)$ in (5) is updated recursively according to:

$$P(t) = \frac{1}{\lambda(t)} \left[ P(t-1) - P(t-1)\psi(t)\left( \lambda(t)I + \psi^T(t)P(t-1)\psi(t) \right)^{-1} \psi^T(t)P(t-1) \right] \qquad (9)$$

where $\lambda(t)$ is the forgetting factor, $0 < \lambda(t) < 1$, updated using the following scheme:

$$\lambda(t) = \lambda_0 \lambda(t-1) + (1 - \lambda_0) \qquad (10)$$

where $\lambda_0$ and the initial forgetting factor $\lambda(0)$ are design values. Initially, the $P(t)$ matrix, $P(0)$, is set to $\alpha I$, where $I$ is the identity matrix and $\alpha$ is a constant with a value between 100 and 10000. In order to accommodate the extra linear connections of a one-hidden-layer HMLP network, the gradient matrix $\psi_k(t)$ is modified by differentiating (3) with respect to the parameter $\theta_c$:

$$\psi_k(t) = \frac{d\hat{y}_k(t)}{d\theta_c} = \begin{cases} v_j^1 & \text{if } \theta_c = w_{jk}^2, \; 1 \le j \le n_h \\ v_i^0 & \text{if } \theta_c = w_{ik}^L, \; 1 \le i \le n_i \\ v_j^1(1 - v_j^1)\,w_{jk}^2 & \text{if } \theta_c = b_j^1, \; 1 \le j \le n_h \\ v_j^1(1 - v_j^1)\,w_{jk}^2\,v_i^0 & \text{if } \theta_c = w_{ij}^1, \; 1 \le j \le n_h, \; 1 \le i \le n_i \\ 0 & \text{otherwise} \end{cases} \qquad (11)$$

In brief, the implementation of the MRPE algorithm for training a one-hidden-layer HMLP network is given as follows [12]:

i. Initialize the weights, the thresholds, $P(0)$, $a$, $b$, $\alpha_m(0)$, $\alpha_g(0)$, $\lambda_0$ and $\lambda(0)$. $b$ is a design parameter with a typical value between 0.8 and 0.9.
ii. Determine the output of the network according to (3).
iii. Calculate the prediction error $\varepsilon_k(t) = y(t) - \hat{y}(t)$ and the $\psi(t)$ matrix according to (11).
iv. Compute the matrix $P(t)$ and $\lambda(t)$ according to (9) and (10).
v. If $\alpha_m(t) < b$, update $\alpha_m(t)$ according to (7).
vi. Update $\alpha_g(t)$ and $\Delta(t)$ according to (8) and (6).
vii. Update the parameter vector $\hat{\Theta}(t)$ according to (5).
viii. Repeat steps ii to vii for each training sample.

C. Extreme Learning Machine

The Extreme Learning Machine (ELM) was proposed by Huang et al. [14] to train a SLFNN. The method was reported to produce better generalization performance and to overcome the slow training of gradient-based learning algorithms for SLFNNs. Consider a dataset consisting of $N$ samples $(\mathbf{x}_i, \mathbf{t}_i)$, where $\mathbf{x}_i = [x_{i1}, x_{i2}, \dots, x_{in}]^T \in \mathbb{R}^n$ and $\mathbf{t}_i = [t_{i1}, t_{i2}, \dots, t_{im}]^T \in \mathbb{R}^m$. In general, the j-th output of the $N$ samples for a SLFNN with $n_h$ hidden nodes and activation function $F(x)$ can be expressed as:

$$\hat{y}_j = \sum_{i=1}^{n_h} \beta_i F(\mathbf{w}_i \cdot \mathbf{x}_j + b_i), \quad j = 1, 2, \dots, N \qquad (12)$$

where $\mathbf{x}_j$, $b_i$, $\mathbf{w}_i$ and $\beta_i$ represent the input data, the bias of the i-th hidden node, the weight vector connecting the inputs and the i-th hidden node, and the weight vector connecting the i-th hidden node and the output nodes, respectively. Assuming an ideal case with zero training error, Equation (12) can be written as:

$$\mathbf{t}_j = \sum_{i=1}^{n_h} \beta_i F(\mathbf{w}_i \cdot \mathbf{x}_j + b_i), \quad j = 1, 2, \dots, N \qquad (13)$$

where $\mathbf{t}_j$ is the corresponding output for the input data $\mathbf{x}_j$. The equations can be written compactly as:

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T} \qquad (14)$$
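The ELM procedure (random hidden weights and biases, the hidden-layer output matrix H, and output weights computed through the Moore-Penrose generalized inverse as in Eq. (18)) can be sketched in NumPy. This is a minimal illustration on synthetic data, not the paper's TB features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy dataset (assumption: stand-in for the real features).
X = rng.random((100, 6))                 # N = 100 samples, n = 6 features
T = np.eye(3)[rng.integers(0, 3, 100)]   # one-hot targets, m = 3 classes

n_hidden = 20
# Step 1: randomly assign input weights w_i and biases b_i.
W = rng.standard_normal((6, n_hidden))
b = rng.standard_normal(n_hidden)

# Step 2: hidden-layer output matrix H with a sigmoid activation F.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3: output weights via the Moore-Penrose pseudoinverse, beta = H† T.
beta = np.linalg.pinv(H) @ T

Y = H @ beta   # network outputs, one row of class scores per sample
print(Y.shape)
```

Only step 3 involves any fitting, which is why ELM training needs no iterative procedure; the MRPE-ELM described later replaces the random weights of step 1 with MRPE-trained ones.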
where

$$\mathbf{H} = \begin{bmatrix} F(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & F(\mathbf{w}_{n_h} \cdot \mathbf{x}_1 + b_{n_h}) \\ \vdots & \ddots & \vdots \\ F(\mathbf{w}_1 \cdot \mathbf{x}_N + b_1) & \cdots & F(\mathbf{w}_{n_h} \cdot \mathbf{x}_N + b_{n_h}) \end{bmatrix}_{N \times n_h} \qquad (15)$$

$$\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_{n_h}^T \end{bmatrix}_{n_h \times m} \qquad (16)$$

$$\mathbf{T} = \begin{bmatrix} \mathbf{t}_1^T \\ \vdots \\ \mathbf{t}_N^T \end{bmatrix}_{N \times m} \qquad (17)$$

Then, the solution for $\boldsymbol{\beta}$ can be determined as:

$$\boldsymbol{\beta} = \mathbf{H}^{\dagger}\mathbf{T} \qquad (18)$$

where $\mathbf{H}^{\dagger}$ is the Moore-Penrose generalized inverse. The implementation of the ELM can be summarized as follows:

1. Randomly assign the weights $\mathbf{w}_i$ connecting the input and hidden nodes and the biases $b_i$.
2. Determine the hidden layer output matrix $\mathbf{H}$ using (15).
3. Calculate the weights connecting the hidden and output nodes as in (18).

D. HMLP Network Trained by Modified Recursive Prediction Error-Extreme Learning Machine

The ELM has a fast learning speed compared to backpropagation algorithms, since only the weights connecting the hidden and output nodes are analytically determined using the Moore-Penrose generalized inverse, and no iterative procedure is needed. However, the random selection of the weights $\mathbf{w}_i$ connecting the input and hidden nodes and the biases $b_i$ may reduce the classification performance and increase the size of the network [15]. The current study proposes an integration of the MRPE and ELM algorithms, referred to as MRPE-ELM. The motivation for integrating the MRPE and ELM algorithms is to take advantage of both algorithms, as well as to overcome the drawback of random selection in the ELM. The training process starts by implementing the MRPE algorithm. Then, the weights $\mathbf{w}_i$ connecting the input and hidden nodes and the biases $b_i$ determined by the MRPE algorithm are used by the ELM to determine the hidden layer output matrix $\mathbf{H}$ and the weights connecting the hidden and output nodes. A modification of the ELM is also required so that the linear connection weights can be estimated. For a HMLP-ELM network, the $\mathbf{H}$ and $\boldsymbol{\beta}$ matrices in (15) and (16) are modified to accommodate the linear connections, to yield:

$$\mathbf{H}_{HMLP} = \begin{bmatrix} F(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & F(\mathbf{w}_{n_h} \cdot \mathbf{x}_1 + b_{n_h}) & x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ F(\mathbf{w}_1 \cdot \mathbf{x}_N + b_1) & \cdots & F(\mathbf{w}_{n_h} \cdot \mathbf{x}_N + b_{n_h}) & x_{N1} & \cdots & x_{Nn} \end{bmatrix}_{N \times (n_h + n)} \qquad (19)$$

and

$$\boldsymbol{\beta}_{HMLP} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_{n_h + n}^T \end{bmatrix}_{(n_h + n) \times m} \qquad (20)$$

III. METHODOLOGY

This section discusses the method to automate TB bacilli detection and classification. It comprises four steps: (1) image acquisition, (2) colour image segmentation, (3) feature extraction and (4) classification.

A. Image Acquisition
A total of 20 ZN-stained tissue slides taken from TB patients were used in the experiment. All the slides were provided by the Pathology Department, Hospital Universiti Sains Malaysia, Kelantan. Images of the tissue slides were acquired using a Luminera Infinity 2 digital camera mounted on a Nikon Eclipse 80i light microscope. The slides were analysed under 40× magnification. The system captured 24-bit RGB images at a resolution of 800×600 pixels, saved in the bitmap (.bmp) file format.

B. Colour Image Segmentation

The current study uses the image segmentation procedure of [16]. The approach starts by removing pixels not related to red colour using a C-Y based colour filter and k-means clustering. To wipe off small and large unwanted particles, a 5×5 median filter followed by a region growing technique was applied to the image. The median filter rejects small artefacts and smooths the segmented regions, while region growing labels each region and calculates its size. All regions smaller than 50 pixels or larger than 800 pixels are considered non-bacilli and are eliminated. Fig. 2(a) shows an example of a tissue slide image containing TB bacilli, while Fig. 2(b)-(e) shows the sequence of resultant images after applying the proposed colour image segmentation.
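The labelling and size-filtering step (regions kept only if their size lies between 50 and 800 pixels) can be sketched with SciPy's connected-component labelling. The binary mask below is a synthetic stand-in for the colour-filtered, median-filtered image, not real slide data:

```python
import numpy as np
from scipy import ndimage

# Synthetic binary mask (assumption: illustrative regions only).
mask = np.zeros((60, 60), dtype=bool)
mask[5:8, 5:8] = True       # 9 px   -> too small  (< 50 pixels)
mask[20:40, 20:30] = True   # 200 px -> kept
mask[45:59, 0:59] = True    # 826 px -> too large  (> 800 pixels)

# Label connected regions (the region-growing step) and measure their sizes.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))

# Keep only regions whose size lies in [50, 800] pixels, as described above.
keep = [i + 1 for i, s in enumerate(sizes) if 50 <= s <= 800]
filtered = np.isin(labels, keep)
print(int(filtered.sum()))
```

The size thresholds act as a cheap shape prior: isolated noise pixels and large tissue structures are discarded before any classifier sees the regions.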
C. Feature Extraction

Since the segmentation is based on colour information, it is unable to completely remove unwanted regions whose colour is similar to that of the TB bacilli. Therefore, a representation of the bacilli in terms of their geometrical shape is required. The present study extracts six affine moment invariants for feature representation. These features are invariant under rotation, scaling and translation, and are thus useful for representing the bacilli. A detailed discussion of affine moment invariants can be found in [11].
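As an illustration, the first affine moment invariant in Flusser and Suk's formulation, I1 = (mu20*mu02 - mu11^2) / mu00^4, can be computed from the central moments of a binary region (the paper uses six such invariants; only I1 is shown here):

```python
import numpy as np

def central_moments(img):
    """Second-order central moments mu_pq of a binary region."""
    ys, xs = np.nonzero(img)
    dx = xs - xs.mean()
    dy = ys - ys.mean()
    return {
        "mu00": float(len(xs)),
        "mu20": float((dx * dx).sum()),
        "mu02": float((dy * dy).sum()),
        "mu11": float((dx * dy).sum()),
    }

def affine_invariant_i1(img):
    """First affine moment invariant: I1 = (mu20*mu02 - mu11^2) / mu00^4."""
    m = central_moments(img)
    return (m["mu20"] * m["mu02"] - m["mu11"] ** 2) / m["mu00"] ** 4

# I1 should be unchanged when the region is rotated.
region = np.zeros((40, 40), dtype=bool)
region[10:30, 15:25] = True   # upright 20x10 bar
rotated = region.T            # the same bar lying on its side
print(affine_invariant_i1(region), affine_invariant_i1(rotated))
```

Because I1 is built from central moments and normalised by mu00^4, it is unaffected by where the bar sits in the image, by its orientation, and (up to discretisation error) by its scale, which is exactly why such invariants suit rod-shaped bacilli appearing at arbitrary poses.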
D. Classification Using HMLP Network Trained by MRPE-ELM

In this study, the six affine moment invariants are fed to the HMLP network trained by the MRPE-ELM. The network is used to classify the segmented regions into three classes: 'TB', 'overlapped TB' and 'non-TB'. 'TB' refers to a region representing a single TB bacillus, and 'overlapped TB' refers to a region consisting of more than one bacillus overlapping each other. All regions not associated with TB bacilli are labeled as 'non-TB'. Fig. 3 illustrates some examples of 'TB', 'overlapped TB' and 'non-TB'.

Fig. 3 Examples of (a) 'TB', (b) 'overlapped TB' and (c) 'non-TB'
Fig. 2 The procedure for colour image segmentation: (a) original ZN-stained tissue slide image and its result after applying (b) the C-Y based colour filter, (c) the k-means clustering, (d) the median filter and (e) the region growing
IV. RESULTS AND DISCUSSION

A dataset consisting of 1603 objects belonging to 'TB', 'overlapped TB' or 'non-TB' was collected from 150 tissue slide images with various staining conditions. The dataset has six attributes, representing the six affine moment invariants, and three classes. All the input data were normalised to the range [0, 1] to prevent some features from dominating the training process. From the dataset, 1200 and 403 objects were chosen randomly as training and testing data, respectively. The training data were further partitioned into two: 1000 objects for training and the remaining samples for validation. The number of hidden nodes was varied from 1 to 35 and chosen as the one producing the lowest validation error. The sigmoidal function was used as the activation function. Accuracy, the optimum number of hidden nodes, and the number of epochs were used as the performance indicators. The performance of the HMLP network trained using MRPE-ELM was evaluated by comparison with the HMLP network trained by the MRPE algorithm and the SLFNN trained by the ELM.

Figure 4 shows the testing accuracy over 35 hidden nodes for the HMLP trained by the MRPE and MRPE-ELM, and the SLFNN trained by the ELM. Figure 5 shows the number of iterations required during the training process over 35 hidden nodes for the HMLP network trained by the MRPE and MRPE-ELM. Table 1 summarizes the classification performance of the three networks. Throughout the analysis, it can be deduced that the classification performance of the HMLP trained by the MRPE-ELM is slightly better than both the HMLP network trained by the MRPE algorithm and the SLFNN trained by the ELM. The HMLP network trained by the MRPE-ELM achieved the highest testing accuracy of 77.33% and an average testing accuracy of 74.62% over the 35 hidden nodes. However, the proposed method required more hidden nodes than the HMLP trained by the MRPE and the SLFNN trained by the ELM.
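The normalisation and data split described above can be sketched as follows, with a synthetic feature matrix standing in for the 1603 six-feature objects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the 1603-object feature matrix
# (six affine moment invariants per object; assumption: fake values).
X = rng.normal(loc=5.0, scale=10.0, size=(1603, 6))

# Min-max normalisation of each feature to [0, 1], so no feature
# dominates the training process.
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_norm = (X - X_min) / (X_max - X_min)

# Random 1000 / 200 / 403 split into training, validation and test sets.
idx = rng.permutation(len(X_norm))
train, val, test = idx[:1000], idx[1000:1200], idx[1200:]
print(len(train), len(val), len(test))
```

The validation subset is what drives the hidden-node sweep: the node count (1 to 35) yielding the lowest validation error is the one reported in Table 1.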
In addition, integrating both the MRPE and ELM significantly reduced the number of iterations required during the training process, compared to the HMLP network trained by the MRPE, as illustrated in Fig. 5.

IFMBE Proceedings Vol. 35

M.K. Osman, M.Y. Mashor, and H. Jaafar

Fig. 4 Testing accuracy over 35 hidden nodes for the HMLP network trained by MRPE and MRPE-ELM, and the SLFNN trained by ELM

Fig. 5 Number of iterations required over 35 hidden nodes for the HMLP network trained by MRPE and MRPE-ELM, and the SLFNN trained by ELM

Table 1 Results for TB classification using the HMLP trained by MRPE and MRPE-ELM, and the SLFNN trained by ELM

Network type | No. of hidden nodes | Optimum iteration | Training accuracy (%) (best/average) | Testing accuracy (%) (best/average)
HMLP with MRPE | 23 | 10 | 76.60/75.49 | 76.67/74.23
SLFNN with ELM | 20 | - | 75.00/69.93 | 77.17/69.61
HMLP with MRPE-ELM | 27 | 8 | 77.00/75.49 | 77.33/74.62

V. CONCLUSIONS

In this paper, automated detection of TB bacilli in ZN-stained tissue slide images using affine moment invariants and an HMLP network is presented. Six affine moment invariants are extracted from each segmented region and fed to an HMLP network trained by MRPE-ELM to perform the classification task. The classification performance of the proposed method is compared with the HMLP network trained by the MRPE algorithm and the SLFNN trained by the ELM. Overall, the results indicate that the proposed MRPE-ELM for training the HMLP network is able to achieve acceptable classification performance compared with the MRPE and ELM algorithms. The network achieves the highest testing accuracy of 77.33% and an average testing accuracy of 74.62% over 35 hidden nodes. Furthermore, the proposed method also improves training efficiency by reducing the number of iterations required during the training process.

REFERENCES

[1] J. Jiang, P. Trundle, and J. Ren, 'Medical Image Analysis with Artificial Neural Networks', Computerized Medical Imaging and Graphics, 2010.
[2] Z. Shi and L. He, 'Application of Neural Networks in Medical Image Processing', Proc. Second International Symposium on Networking and Network Security (ISNNS '10), Jinggangshan, P.R. China, 2-4 April 2010, pp. 23-26.
[3] 'Global Tuberculosis Control. A Short Update to the 2009 Report', World Health Organization (WHO), 2009.
[4] K. Veropoulos, C. Campbell, and G. Learmonth, 'Image Processing and Neural Computing Used in the Diagnosis of Tuberculosis', Proc. IEE Colloquium on Intelligent Methods in Healthcare and Medical Applications (Digest No. 1998/514), York, UK, 20 Oct 1998, pp. 8/1-8/4.
[5] K. Veropoulos, C. Campbell, G. Learmonth, B. Knight, and J. Simpson, 'The Automated Identification of Tubercle Bacilli using Image Processing and Neural Computing Techniques', Proc. 8th International Conference on Artificial Neural Networks, Skövde, Sweden, 1998, pp. 797-802.
[6] K. Veropoulos, G. Learmonth, C. Campbell, B. Knight, and J. Simpson, 'Automated identification of tubercle bacilli in sputum: a preliminary investigation', Analytical and Quantitative Cytology and Histology, 21, pp. 277-282, 1999.
[7] P. Sadaphal, J. Rao, G. Comstock, and M. Beg, 'Image Processing Techniques for Identifying Mycobacterium Tuberculosis in Ziehl-Neelsen Stains', The International Journal of Tuberculosis and Lung Disease, 12, (5), pp. 579-582, 2008.
[8] R. Khutlang, S. Krishnan, R. Dendere, A. Whitelaw, K. Veropoulos, G. Learmonth, and T.S. Douglas, 'Classification of Mycobacterium tuberculosis in images of ZN-stained sputum smears', IEEE Transactions on Information Technology in Biomedicine, 2009.
[9] V. Makkapati, R. Agrawal, and R. Acharya, 'Segmentation and classification of tuberculosis bacilli from ZN-stained sputum smear images', Proc. 5th Annual IEEE Conference on Automation Science and Engineering, Bangalore, India, 22-25 August 2009, pp. 217-220.
[10] H. Krishnaswami and C.K. Job, 'The Role of Ziehl-Neelsen and Fluorescent Stains in Tissue Sections in the Diagnosis of Tuberculosis', International Journal of Tuberculosis, 21, (1), pp. 18-21, 1974.
Hybrid Multilayered Perceptron Network Trained by Modified Recursive Prediction Error-Extreme Learning Machine

[11] J. Flusser and T. Suk, 'Pattern recognition by affine moment invariants', Pattern Recognition, 26, (1), pp. 167-174, 1993.
[12] M.Y. Mashor, 'Hybrid Multilayered Perceptron Networks', International Journal of Systems Science, 31, (6), pp. 771-785, 2000.
[13] S. Chen, C.F.N. Cowan, S.A. Billings, and P.M. Grant, 'Parallel Recursive Prediction Error Algorithm for Training Layered Neural Networks', International Journal of Control, 51, (6), pp. 1215-1228, 1990.
[14] G. Huang, Q. Zhu, and C. Siew, 'Extreme learning machine: theory and applications', Neurocomputing, 70, (1-3), pp. 489-501, 2006.
[15] H.T. Huynh, J.J. Kim, and Y. Won, 'DNA Microarray Classification with Compact Single Hidden-Layer Feedforward Neural Networks', Proc. Frontiers in the Convergence of Bioscience and Information Technologies (FBIT 2007), Cheju Island, Korea, 2008, pp. 193-198.
[16] M.K. Osman, M.Y. Mashor, Z. Saad, and H. Jaafar, 'Segmentation of Tuberculosis Bacilli in Ziehl-Neelsen-Stained Tissue Images based on K-Mean Clustering Procedure', Proc. 3rd International Conference on Intelligent & Advanced Systems (ICIAS 2010), Kuala Lumpur, Malaysia, 15-17 June 2010.

Author: Muhammad Khusairi Osman
Institute: Faculty of Electrical Engineering, Universiti Teknologi MARA (UiTM)
Street: Jalan Permatang Pauh
City: Permatang Pauh, Pulau Pinang
Country: Malaysia
Email: [email protected]
Intelligent Spatial Based Breast Cancer Recognition and Signal Enhancement System in Magnetic Resonance Images

F.K. Chia1, K.S. Sim1, S.S. Chong1, S.T. Tan1, H.Y. Ting1, Siti Fathimah Abbas2, and Sarimah Omar2

1 Faculty of Engineering and Technology, Multimedia University, Malacca, Malaysia
2 Malacca General Hospital, Malacca, Malaysia
Abstract— In this paper, a Computer-Aided Diagnosis (CAD) system for lesion detection in breast MR images is designed. The CAD process begins with analysis of the MR images to detect the existence of lesions. The detected lesions are then coloured according to tumour type: benign, suspicious or malignant. Our CAD system enables better visualisation of the lesions and improves the accuracy as well as the speed of breast cancer diagnosis. In addition, an embedded intelligent database is used to analyse the patients' statistical data.

Keywords— breast cancer, MRI, lesion recognition, signal enhancement.

I. INTRODUCTION

Cancer is a class of diseases characterised by uncontrolled growth of cells. Breast cancer originates from breast tissue, most commonly from the inner lining of the milk ducts or the lobules. During 2009, an estimated 192,370 new cases of invasive breast cancer were expected to occur among women in the US, and about 1,910 new cases in men. Breast cancer is the most frequently diagnosed cancer in women after skin cancers, and 40,610 breast cancer deaths were estimated for 2009 [1]. Early detection of breast cancer is essential to prevent breast cancer death, because cancer therapies are more effective in the early stages [2].

Mammography is one of the standard screening tools for breast cancer [3]. However, mammography is less sensitive for young women with a hereditary risk [4]. When a case is undetermined by mammography, Magnetic Resonance Imaging (MRI) serves as an alternative imaging tool, because it has higher soft tissue sensitivity than mammography [5]. MR images are shown as thin horizontal slices of the breast tissues and can be studied from many different angles. For each breast MRI examination, a large number of images is created for a radiologist to interpret. Owing to the limitations of the human eye-brain visual system and the high number of images to be read, computer-aided diagnosis (CAD) is extremely useful in MRI analysis. CAD improves the sensitivity, accuracy and speed of MRI analysis.

The current MRI process involves the use of a contrast agent, usually Gadolinium-based, to improve tissue discrimination [9]. This paramagnetic compound mainly resides in the intravascular and extracellular fluid space and greatly increases the brightness of the Gadolinium-enhanced tissues, which helps in the detection of vascular tissues such as lesions.

II. THE PROPOSED METHOD

A. Data Acquisition

The raw images were taken from Malacca General Hospital, Malaysia, with the patients' biopsy results as references. Before the injection of the contrast agent, a set of MR images is taken with the T1 mode, FLASH 3D transverse setting of the MRI machine; these are normally called 'pre-contrast images'. Then, after the injection of the contrast agent, MR images were generated continuously every minute for a total of six minutes. In this project, 58 sets of MR images were analysed, covering all four cases: normal, benign, suspicious and malignant.

B. Lesion Classification

MRI contrast agents are used to increase the visibility of internal body structures under Magnetic Resonance Imaging. By changing the relaxation times of tissues, MRI contrast agents can help the radiologist determine the presence of a tumour. Depending on the image weighting, the change may appear as either a higher or a lower signal. Based on this feature, a tumour detection scheme composed of semi-quantitative analysis and contrast agent wash-out is devised. The semi-quantitative analysis is an algorithm that computes the gradient echo of the images. The signal intensity values within the regions of interest on all pre- and post-contrast images are measured. The signal enhancement of a breast MR image is defined as the difference between the pre- and post-contrast values at each pixel. The signal intensity/time at the pixel is evaluated as (1).
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 674–677, 2011. www.springerlink.com
The percentage enhancement of the signal intensity between the pre- and post-contrast images is

E(x, y) = [S_i(x, y) − S_o(x, y)] / S_o(x, y) × 100,  (1)

where x and y are pixel positions, S_o(x, y) is the signal intensity of the pre-contrast image and S_i(x, y) is the intensity of the post-contrast image from the second minute onwards. By analysing the patterns of the intensity difference, a prediction can be made as to whether the lesion is benign, suspicious or malignant [6]. Typical patterns are shown in Fig. 1.

Fig. 1 Patterns of contrast enhancement (signal intensity versus time). (a) Benign (b) Suspicious (c) Malignant

A lesion is classified as benign if there is a monotonic increase in signal intensity over the six minutes of the examination period. For a suspicious lesion, the peak signal intensity is reached before three minutes and is maintained for the remainder of the acquisition. A lesion is classified as malignant if there is a decrease in signal intensity immediately after the peak is reached.

C. Inter-Slice Analysis

To reduce false positive detections, an inter-slice noise elimination algorithm is employed. The aim of the inter-slice analysis is to improve the accuracy of lesion region detection. In general MRI screening, the position of a lesion does not change drastically between MRI slices. Based on this observation, the centres of mass of the detected lesion regions in the current and next slices are compared [11]. Equations (2), (3) and (4) evaluate the centre of mass of each lesion region and eliminate objects that are not 3D-connected:

x_c = (1 / |B_i^l|) * sum over (x, y) in B_i^l of x,  (2)

y_c = (1 / |B_i^l|) * sum over (x, y) in B_i^l of y,  (3)

dist(p, q) = sqrt[ (x_c^p − x_c^q)^2 + (y_c^p − y_c^q)^2 ],

CT = true if (dist(p, q) < T_D) ∧ (B_i^l ∩ B_i^{l−1} ≠ ∅), and false otherwise,  (4)

where B_i^l denotes the i-th candidate region in the l-th slice, |B_i^l| represents the area of B_i^l, ∧ is the logical AND operator, p and q denote lesion regions, and T_D is a threshold distance (recommended value 2). For each lesion region B_i of the patient, the CT test is performed on all MRI slices. Fig. 2 shows the analysed results.
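A minimal sketch of the centre-of-mass comparison in (2)-(4), assuming lesion regions are given as lists of (x, y) pixel coordinates; the function names and toy regions are hypothetical:

```python
import numpy as np

def centre_of_mass(region):
    """Eqs. (2)-(3): mean x and y over the pixels of a lesion region."""
    return np.asarray(region, dtype=float).mean(axis=0)

def ct_test(region_cur, region_prev, t_d=2.0):
    """Eq. (4): accept a candidate only if its centre of mass lies within
    t_d of the previous slice's region AND the two regions overlap."""
    cx, cy = centre_of_mass(region_cur)
    px, py = centre_of_mass(region_prev)
    close = np.hypot(cx - px, cy - py) < t_d
    overlap = bool(set(map(tuple, region_cur)) & set(map(tuple, region_prev)))
    return bool(close) and overlap

lesion_cur = [(10, 10), (10, 11), (11, 10), (11, 11)]  # candidate in slice l
lesion_prev = [(10, 11), (11, 11), (11, 12)]           # same lesion in slice l-1
noise_blob = [(40, 5), (41, 5)]                        # isolated false positive
```

With these toy regions, the 3D-connected lesion passes the test while the isolated blob fails it, which is how the inter-slice stage removes false positives.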
Fig. 2 Original MRI slices and detection results. (a), (b) original MRI slices; (c), (d) results of spatial analysis; (e), (f) results after the spatial and inter-slice analysis

D. Colourisation of Lesions

MR images are stored in the Digital Imaging and Communications in Medicine (DICOM) format, which uses 16 bits to store the image pixel values and patient information. To apply colourisation to the MR images, the 16-bit images are first converted into an 8-bit colour representation, in order to reduce the processing time, by applying (5).
I_8bits(x, y) = [I_12bits(x, y) − I_12bits(min)] / [I_12bits(max) − I_12bits(min)] × 255,  (5)

where I_8bits denotes the 8-bit MRI image, I_12bits the 12-bit MRI image, x and y the coordinates from 1 to 448, I_12bits(min) the minimum intensity of the 12-bit MRI image and I_12bits(max) its maximum intensity. The 8-bit MRI output image is then converted to RGB format by duplicating it into the three colour channels. Light red, yellow and green are used to label the lesions: light red represents a malignant lesion, light yellow a suspicious lesion and light green a benign lesion.
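Equation (5) is a straightforward min-max windowing. A small sketch of the conversion and the grey-to-RGB duplication, assuming NumPy arrays (the function names and toy values are hypothetical):

```python
import numpy as np

def to_8bit(img12):
    """Eq. (5): min-max window the 12-bit pixel values into 0-255."""
    lo, hi = float(img12.min()), float(img12.max())
    return np.round((img12.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

def to_rgb(img8):
    """Duplicate the grey image into three channels so lesions can be
    overlaid in light red / yellow / green."""
    return np.stack([img8, img8, img8], axis=-1)

img12 = np.array([[0, 1024], [2048, 4095]], dtype=np.uint16)
img8 = to_8bit(img12)   # [[0, 64], [128, 255]]
rgb = to_rgb(img8)      # shape (2, 2, 3)
```

Casting to float before the division matters here; integer arithmetic on the raw uint16 values would truncate the ratio to zero.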
III. RESULTS

The aim of the proposed scheme is to evaluate the performance and accuracy of the system in breast cancer detection. The proposed framework was applied to a population of breast DCE-MRI data from 58 patients. Each MRI set consisted of 864 slices, with a slice size of 448 x 448 pixels (32 cm x 32 cm). The patients' MRI data were independently certified by two radiologists at Malacca General Hospital. All slices were acquired using the T1 protocol with an echo time of 1.65 seconds. Fig. 3 shows the resulting images of the inter-slice analysis; the false positive detection of the breast lesion located in the left breast has been eliminated.
Fig. 3 Resulting images of the inter-slice analysis: (a), (b) normal case; (c), (d) benign case; (e), (f) suspicious case; (g), (h) malignant case
The input samples comprise 25 sets of benign cases, 15 sets of suspicious cases and 8 sets of malignant cases. The output of the proposed method shows 93.1% agreement with the validated results. Fig. 4 shows the resulting images and the pixel enhancement percentage plots for the four different cases.
Fig. 4 Resulting MRI slices and pixel enhancement percentage plots for (a), (b) normal case; (c), (d) benign case; (e), (f) suspicious case; (g), (h) malignant case
In addition, the patients' statistical data in the database are analysed. Fig. 5 and Fig. 6 show the breast cancer age distribution and the relationship between the patients' age and body mass index.
Fig. 5 Breast cancer age distribution

Fig. 6 Relationship between patients' age and body mass index

Age is a risk factor for human malignancies, including breast cancer. From Fig. 5, women aged 40-69 have the greatest chance of breast cancer incidence. This suggests that Asian women tend to present with breast cancer earlier than Western women (60-64 years). For women aged 40-69, the premenopausal and postmenopausal changes they experience influence reproductive factors, which in turn increase the potential risk of breast cancer [7]. Obesity is another factor associated with both higher rates of breast cancer and unfavourable breast cancer outcomes [8]. From Fig. 6, 70.9% of the patients are overweight, whereas only 29.1% are of normal weight. This supports the view that obesity or overweight may increase the risk of breast cancer. Most breast cancer patients aged 40-49 are overweight compared with the other age groups.

IV. CONCLUSIONS

The CAD system is proposed to aid radiologists and doctors in the interpretation of breast MRI images and to enhance detection accuracy. The proposed system is simpler and more efficient than conventional methods that employ very complicated detection algorithms. The proposed software system has been tested with 58 sets of breast MRI images, verified by radiologists from Malacca Hospital. In 54 of the 58 image sets, the output of the proposed system matched the verified biopsies, a satisfactory result. This high accuracy indicates that the developed software system is able to produce accurate and reliable identification results with high sensitivity.

REFERENCES

[1] American Cancer Society, Cancer Facts & Figures 2009.
[2] R.M. Rangayyan, L. Shen, Y. Shen, J.E.L. Desautels, H. Bryant, T.J. Terry, N. Horeczko and M.S. Rose, "Improvement of Sensitivity of Breast Cancer Diagnosis with Adaptive Neighborhood Contrast Enhancement of Mammograms," IEEE Trans Inf Technol Biomed, vol. 1, pp. 161-170, Sept. 1997.
[3] C.D. Lehman, J.D. Blume, P. Weatherall, D. Thickman, N. Hylton, E. Warner, E. Pisano, S.J. Schnitt, C. Gatsonis and M. Schnall, "Screening Women at High Risk for Breast Cancer with Mammography and Magnetic Resonance Imaging," Wiley InterScience, vol. 103, pp. 1898-1905, Mar. 2005.
[4] M.J. Stoutjesdijk, C. Boetes, G.J. Jager, L. Beex, P. Bult, J.H.C.L. Hendriks, R.J.F. Laheij, L. Massuger, L.E. van Die, T. Wobbes and J.O. Barentsz, "Magnetic Resonance Imaging and Mammography in Women With a Hereditary Risk of Breast Cancer," Journal of the National Cancer Institute, vol. 93, pp. 1095-1102, July 2001.
[5] M. Kriege, C.T.M. Brekelmans, C. Boetes, P.E. Besnard, H.M. Zonderland, I.M. Obdeijn, R.A. Manoliu, T. Kok, H. Peterse, M.M.A. Tilanus-Linthorst, S.H. Muller, S. Meijer, J.C. Oosterwijk, L.V.A.M. Beex, R.A.E.M. Tollenaar, H.J. de Koning, E.J.T. Rutgers and J.G.M. Klijn, "Efficacy of MRI and Mammography for Breast-Cancer Screening in Women with a Familial or Genetic Predisposition," N Engl J Med, vol. 351, pp. 427-437, July 2004.
[6] H.T. Le-Petross, "Breast MRI as a Screening Tool: The Appropriate Role," Journal of the National Comprehensive Cancer Network, vol. 4, pp. 523-526, May 2006.
[7] G.J. Yoo, E.G. Levine, C. Aviv, C. Ewing and A. Au, "Older women, breast cancer, and social support," BMC Health Serv Res, vol. 9, p. 9, Nov. 2009.
[8] S.L. Peacock, E. White, J.R. Daling, L.F. Voigt and K.E. Malone, "Risk of anogenital cancer in women with CIN," Fred Hutchinson Cancer Research Center, vol. 151(7), p. 737, Apr. 2000.
[9] S.H. Heywang-Kobrunner and R. Beck, Contrast-enhanced MRI of the breast. Berlin-New York: Springer, 1996.
[10] J. Brown, D. Buckley, A. Coulthard, A.K. Dixon, J.M. Dixon, D.F. Easton, R.A. Eeles, D.G.R. Evans, F.G. Gilbert and M. Graves, "Magnetic resonance imaging screening in women at genetic risk of breast cancer: imaging and analysis protocol for the UK multicentre study," Magn Reson Imaging, vol. 18, pp. 765-776, June 2000.
[11] G.S. Lin, S.K. Chai, W.C. Yeh and L.J. Cheng, "Lesion Detection Based on Spatial and Inter-Slice Analyses for MRI Breast Imaging," in MVA2007 IAPR Conference on Machine Vision Applications, pp. 500-503, 2007.
Investigating the Mozart Effect on Brain Function by Using Near Infrared Spectroscopy

H.Q.M. Huy, T.Q.D. Khoa, and V.V. Toi

Biomedical Engineering Department, International University - Viet Nam National University, Ho Chi Minh City, Viet Nam
Abstract— The Mozart Effect (ME) has been investigated for nearly two decades, with studies reporting that it improves human IQ and decreases blood pressure. However, most results have been drawn from IQ tests or fMRI. In this investigation, we used functional Near-Infrared Spectroscopy (fNIRS) to investigate brain activity, because it is more convenient and safer for human subjects. We investigated the ME in 60 subjects (18-25 years old), divided into three groups. The first group listened to the Sonata for Two Pianos in D major, first movement "Allegro con spirito" (K448), composed by Mozart, and then performed an IQ test for 30 minutes; the second group listened to K448 while doing the test; and the final group performed the test without listening to the music. The results showed a correlation between art and neuroscience.

Keywords— Mozart Effect, IQ test, NIRS machine, neuroscience, blood pressure.
I. INTRODUCTION

The ME was first described by Rauscher et al. (1993) [1], who investigated the IQ scores of 84 college students using the Stanford-Binet test (Binet and Simon, 1905). In this experiment, Rauscher divided the subjects into three groups. The first group listened to the Sonata for Two Pianos in D major, first movement "Allegro con spirito" (K448), composed by W.A. Mozart; the second group listened to relaxing music; and the final group did not listen to music. All subjects were then given a spatial reasoning test taken from the Stanford-Binet test. The result showed that the first group had a higher IQ score than the other groups.

Another study of the ME, by John R. Hughes [2], indicated that Mozart's music, especially K448, affects epileptiform activity; it used an 18-channel EEG instrument with the standard International 10-20 System of Electrode Placement on 29 patients (ages 3-47). He explained the effect by the architecture of Mozart's music, which is brilliantly complex and highly organized: the superorganization of the cerebral cortex would seem to resonate with the superior architecture of Mozart's music and normalize any suboptimal functioning of the cortex. Additionally, Hughes's study examined the periodicity of Mozart's music by analysing WAV files converted from CD format and found that these periodicities are consistent with a general theme: Mozart's music is highly organized, matching the superorganized cerebral cortex.

M. Cacciafesta and colleagues opened new frontiers in rehabilitation in geriatric age by using Mozart's music [3]. They studied twelve individuals (8 men and 4 women) aged 66 to 77 with diagnoses of MCI. Using K448 combined with a series of tests (the paper folding and cutting test (PFC) for spatial-temporal ability, the three objects and three places test for episodic learning, the clock test for ideational-praxis abilities, Rey's 15-word test for recall, the trail making test for attention, and the digit span for memory for numbers), they showed that patients who listened to the music improved their spatial-temporal abilities.

Other research, by Den'etsu Sutoo and Kayo Akiyama [4], concerned the effect of music on blood pressure regulation. In their experiments on mice, systolic blood pressure in SHR was significantly reduced, along with decreased behavioural activity, during and after exposure to Mozart's music. On the other hand, Jakob Pietschnig and colleagues performed a meta-analysis [5] of 39 studies, yielding 38, 11 and 15 study effects for different treatment conditions. They used two methods to assess the differences in effect sizes for spatial task performance: subgroup analysis and weighted multiple linear meta-regression. The study found only little support for a specific Mozart effect in published as well as unpublished work: although results were positive, the effect of Mozart's music (K448) on spatial task performance, compared with no stimulus at all, was very small.

In a published paper, Norbert Jaušovec, Ksenija Jaušovec and Ivan Gerlic [6] investigated 56 right-handed student teachers, divided randomly into 4 groups, who listened to Mozart's music (K448) and then did a test. The study indicated the influence Mozart's music has on different phases of learning, represented by Hebb's recurrent activation phase and memory consolidation. Several experimental designs are similar to Rauscher's: Steele et al. (1999) [7] showed that Mozart's music had a significant influence on mood only, but not on
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 678–681, 2011. www.springerlink.com
spatial-temporal performance. Next, Hetland (2000) [8] concluded that the ME exists but is limited to a specific spatial task type. Chabris (1999) [9] presented an analysis of 16 studies, finding that listening to Mozart increased IQ by only 1.4 points. An fMRI study by Bodner (2001) [10] showed that listening to K448 can induce activation of the dorsolateral prefrontal cortex, the occipital cortex and the cerebellum. However, studying the ME using NIRS is a new method. Therefore, the aims of this study are: (1) to verify the ME on brain function; and (2) to investigate how the ME affects brain function areas by using NIRS.
II. METHOD

A. Subject

In the first period, 60 healthy students (18-25 years old) from International University - HCM VNU, University of Natural Science - HCM VNU, University of Technology - HCM VNU, HUFLIT University, and HCM Medicine and Hospital University participated in an experiment that divided them into 3 groups. Within each group, the subjects were further divided into two subgroups: those who like classical music (A) and those who like pop music (B). The participants took a test consisting of maths questions, language questions, and image logical thinking questions.

B. Protocol

The project was divided into two periods. In the first period, the first group listened to the Sonata for Two Pianos in D major, first movement "Allegro con spirito" (K448), composed by W.A. Mozart, for 8 minutes at 60 dB volume and then did the test for 30 minutes; the test comprised 12 maths questions, 14 language questions, and 4 image logical thinking questions. The second group listened to the music while doing the test for 30 minutes, under the same volume condition as the first group. The last group did the test for 30 minutes without listening to music.

In the second period, 15 students (9 males and 6 females) had the haemoglobin concentration in the brain measured by a Shimadzu NIRS machine. The process was carried out under the same conditions as in the first period; however, the timing for the first group was set to 60 s - 480 s - 300 s (60 s rest, 480 s listening to the music, 300 s doing the test). The timing for the second group was 60 s - 300 s (60 s rest, then 300 s doing the test while listening to the music). The last group took 300 s to do the test without listening to music. Figures 1 and 2 show the arrangement of the 14 channels used to measure the brain by NIRS; the first 7 channels measured the left side of the brain and the other channels the right side.

Fig. 1 The number of channels used for the experiment

Fig. 2 The arrangement of the 14 channels on the left and right sides of the head
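The block timings in the protocol (60 s rest, 480 s music, 300 s test for group 1) can be encoded as a per-sample condition label for later epoch slicing. A small sketch, assuming a hypothetical NIRS sampling rate since the paper does not state one:

```python
import numpy as np

FS = 10  # assumed NIRS sampling rate in Hz (not stated in the paper)

# group 1 timing from the protocol: 60 s rest, 480 s music, 300 s test
blocks = [("rest", 60), ("music", 480), ("test", 300)]
labels = np.concatenate([np.full(dur * FS, name) for name, dur in blocks])

# indices of the samples recorded while the subjects did the test,
# ready for epoch averaging or a per-condition PSD
test_idx = np.flatnonzero(labels == "test")
```

The same label array can be rebuilt with the 60 s - 300 s timing for group 2 or the single 300 s block for group 3.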
Fig. 3 The anatomy of the human brain [11]. Channels 2, 4, 5 and 12 correspond to the symbols function, grammar function, vision of alphabet function and object recognition function on the left and right sides of the brain, respectively
III. RESULTS

A. Statistic Result

The results were examined in the 3 groups and divided into two music-preference types: subjects who like classical music (A) and subjects who like pop music (B). The scores are given in the three tables below.

Table 1 Score of Group 1

Test | A | B
Math | 48.83% | 54.67%
Grammar | 31% | 33.4%
Alphabet | 40% | 48.8%
Image logical thinking | 55% | 71.25%

Table 2 Score of Group 2

Test | A | B
Math | 52.83% | 55.55%
Grammar | 30.8% | 36%
Alphabet | 33.78% | 51.23%
Image logical thinking | 67.5% | 70%

Table 3 Score of Group 3

Test | A | B
Math | 56.69% | 57.35%
Grammar | 34.6% | 34%
Alphabet | 47.2% | 43%
Image logical thinking | 68.75% | 67.55%

Fig. 4 The periodogram of channel 5 of group 1 (power/frequency in dB/Hz versus frequency in Hz, subgroups A and B)

Fig. 5 The periodogram of channel 5 of group 2 (power/frequency in dB/Hz versus frequency in Hz, subgroups A and B)

Fig. 6 The periodogram of channel 4 of group 3 (power/frequency in dB/Hz versus frequency in Hz, subgroups A and B)

B. NIRS Result
Although 14 channels were measured, only four were analysed: channel 2, channel 4, channel 5 and channel 12, which correspond to the maths function, grammar function, vision of alphabet function and vision function, respectively. Figure 3 indicates the positions of the four channels. The power spectral density (PSD) of each of the three groups was computed, and one representative result of the 4 channels was chosen. In group 1, subgroup A was affected by Mozart's music more than subgroup B, because the power spectral density of subgroup A is higher than that of subgroup B. When the participants did the test and listened to the music at the same time, subgroup A was affected by the K448 masterpiece more than subgroup B. The results of group 3 show that, without listening to music, most participants did not perform as well as those in the other two groups.
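The periodogram-style PSD plotted in Figs. 4-6 can be sketched with a plain FFT periodogram expressed in dB/Hz. The sketch below is NumPy-only; the sampling rate and the toy test signal are assumptions, not values from the paper:

```python
import numpy as np

def periodogram_db(x, fs):
    """One-sided periodogram of a signal, in dB/Hz as plotted in Figs. 4-6."""
    x = np.asarray(x, dtype=float)
    n = x.size
    spec = np.fft.rfft(x - x.mean())        # remove DC offset, then FFT
    psd = (np.abs(spec) ** 2) / (fs * n)    # power spectral density
    psd[1:-1] *= 2                          # fold in the negative frequencies
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, 10.0 * np.log10(psd + 1e-20)

fs = 10.0                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.3 * t) + 0.1 * rng.normal(size=t.size)  # toy trace
freqs, psd_db = periodogram_db(x, fs)
peak_hz = freqs[np.argmax(psd_db)]          # strongest spectral component
```

Comparing `psd_db` curves for subgroups A and B channel by channel reproduces the kind of comparison drawn from Figs. 4-6.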
IV. DISCUSSION

The PSD results of the three groups support the statistical results. For the maths function, the classical music subgroup of group 3 scored higher than the other subgroups and had the highest power spectrum, which suggests that people who like classical music are good at maths. However, people who listened to the music first and then did the maths test did not perform as well as those who listened and did the maths at the same time; the pop music subgroup showed the same pattern as the classical music subgroup. For the grammar function, the classical music subgroup in group 2 had the highest score; in other words, the grammar function is at its best when people who like classical music listen to the music and learn grammar at the
same time. However, the pop music subgroup in group 2 scored lowest and the one in group 1 highest; therefore, people who like pop music should not listen to music and learn grammar at the same time, because it will not produce the best state. For the alphabet function, the classical music subgroup in group 2 scored highest, while among the pop music subgroups the one in group 1 was highest. Thus, to reach the best state for the alphabet function, people who like classical music should learn the alphabet and listen to the music at the same time. The image logical thinking function reached its best state in group 1 for the classical music subgroup, while the highest score among the pop music subgroups was in group 3. Therefore, to learn geometry best, people who like classical music should listen to the music first and then learn geometry; on the other hand, people who like pop music have good image logical thinking skills without listening to music.

Rauscher's experiment (1993) [1] investigated the IQ scores of 84 students and concluded that the group that listened to music first and then did the IQ test scored higher than the group that did the test without listening. In other words, Rauscher's experiment showed that the Mozart Effect exists and affects brain function, particularly by increasing IQ. This article additionally shows that the Mozart Effect affects the grammar function, the alphabet function and image logical thinking. According to the PSD results in Figures 4, 5 and 6, the students who like classical music were affected more than the students who like pop music. Similarly to Rauscher's experiment, Norbert Jaušovec's result indicated the effect of Mozart's music on learning, but did not point out exactly which learning ability the K448 sonata affects; the PSD results can fill this gap in Norbert's result. Many researchers have tried to explain the effect of Mozart's music.
Hughes's study (2001) [2] indicated that Mozart's music has a periodicity characteristic, as does the music of J.S. Bach and J.C. Bach. M. Cacciafesta (2010) [3] concluded that the sonata for two pianos K448 has a unique structure of simple elements and rare dissonant harmony.
V. CONCLUSIONS

This article has investigated the Mozart Effect on two types of groups: people who like classical music and people who like pop music. The results show that the classical-music group was affected by the Mozart Effect more strongly than the other groups. In the future, this experiment will add the gender factor to the investigation.
ACKNOWLEDGMENT

We would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting attendance and presentation. This research was partly supported by a grant from Shimadzu Asia-Pacific Pte. Ltd. and a research fund from the International University, Vietnam National University Ho Chi Minh City. We would also like to thank BSc Le Thi Hanh Nguyen of the University of Science Ho Chi Minh City and BSc Phan Dang Loc of the Ho Chi Minh University of Foreign Languages and Information Technology for helping us set up the protocol for the three subject groups. Finally, an honorable mention goes to our volunteers and families for their support in completing this project.
REFERENCES
1. Rauscher, F.H., Shaw, G.L., Ky, K.N. (1993) Music and spatial task performance. Nature 365, 611
2. John R. Hughes (2001) The Mozart effect. Epilepsy & Behavior 2, 369-417
3. M. Cacciafesta, E. Ettore, A. Amici, P. Cicconetti, V. Martinelli, A. Linguanti, A. Baratta, W. Verrusio, V. Marigliano (2010) New frontiers of cognitive rehabilitation in geriatric age: the Mozart Effect (ME). Archives of Gerontology and Geriatrics
4. Den'etsu Sutoo, Kayo Akiyama (2004) Music improves dopaminergic neurotransmission: demonstration based on the effect of music on blood pressure regulation. Brain Research 1016, 255-262
5. Jakob Pietschnig, Martin Voracek, Anton K. Formann (2010) Mozart effect - Shmozart effect: a meta-analysis. Intelligence 38, 314-323
6. Norbert Jaušovec, Ksenija Jaušovec, Ivan Gerlic (2006) The influence of Mozart's music on brain activity in the process of learning. Clinical Neurophysiology 117, 2703-2714
7. Steele, K.M., Bass, K.E., Crook, M.D. (1999) The mystery of the Mozart effect: failure to replicate. Psychol. Sci. 10, 366-369
8. Hetland, L. (2000) Listening to music enhances spatial-temporal reasoning: evidence for the "Mozart effect". J. Aesthet. Educ. 34 (2000 Fall/Winter)
9. Chabris, C.F. (1999) Prelude or requiem for the "Mozart effect"? Nature 400, 826-827
10. Bodner, M., Muftuler, L.T., Nalcioglu, O., Shaw, G.L. (2001) FMRI study relevant to the Mozart effect: brain areas involved in spatial-temporal reasoning. Neurol. Res. 23, 683-690
11. http://www.hiddentalents.org
Author: Truong Quang Dang Khoa Institute: Biomedical Engineering Department – Ho Chi Minh City International University Street: Quarter 6, Linh Trung, Thu Duc Dist. City: Ho Chi Minh Country: Viet Nam Email: [email protected]
IFMBE Proceedings Vol. 35
Latex Glove Protein Estimation Using Maximum Minimum Area Variation

K.P. Yong1, K.S. Sim1, H.Y. Ting1, W.K. Lim1, K.L. Mok2, and A.H.M. Yatim2

1 Faculty of Engineering and Technology, Multimedia University, Melaka, Malaysia
2 Rubber Research Institute Malaysia (RRIM), Kuala Lumpur, Malaysia
Abstract— This paper reports an improvement to the previously proposed maximum-minimum variation (MMV) test for protein estimation. The proposed method requires a single protein test, compared with three protein tests for MMV. In addition, a new computerized algorithm, the maximum-minimum area variation (MMAV), is proposed to estimate the protein concentration in latex gloves. The newly proposed technique gives significantly better results in terms of consistency and accuracy than existing methods. Moreover, it reduces the usage of chemicals, lab consumables, and certain hardware, and is therefore more environmentally friendly. Keywords— latex glove, Bradford, MMV.
I. INTRODUCTION

Natural rubber latex gloves were first introduced into surgery in the 1880s. In the beginning, the medical latex glove was used to shield the hands of nurses and surgeons from the dermatitis caused by caustic disinfecting agents in the operating room [1]. Today, sterile surgical gloves offer two forms of protection: they shield surgery patients from contagion and guard health care workers (HCWs) from exposure to bloodborne pathogens [2]-[3]. Thus, the use of latex gloves is a necessary practice in healthcare settings. Major technical advances in microelectronics and computers have led to the development of charge-coupled device (CCD) cameras and inexpensive line scanners, which are used extensively in digital photography and for scanning text and photographs. Charge-coupled devices digitize and provide quantitative light-intensity data, which can subsequently be analyzed by computers. These devices have many applications in biological research [4]. Conventionally, animal biocompatibility testing is done by bio-implantation testing [5], which can be defined as a set of test methods for assessing the local effects of an implant material on living tissue at the macroscopic and microscopic levels. The experiment involves animals such as dogs or rabbits, each of which receives a certain number of implanted fragments.
The evaluation is based on the tissue response to the implanted sample, such as infection, and on the healing progress of the animal over a certain period after implantation. This testing method can determine sub-chronic or chronic effects over a period of a minimum of 12 weeks or more. Fragments are removed weekly from each animal, and the wound is examined regularly during this period. The removed sample is then inspected under a high-resolution device such as a scanning electron microscope (SEM). From this observation, the biocompatibility level can be evaluated. Fig. 1 shows the process of bio-implantation. Protein estimation is essential in biochemical analysis. Typically, methods such as the Biuret test, or protein-binding dyes such as Coomassie blue and Ponceau S, are used for protein estimation [6]-[7]. However, these procedures are based on spectrophotometric measurements. They suffer from disadvantages such as high background with common reagents and, in the case of protein-binding dyes, continuous precipitation of the protein, which leads to dye-colour complexes [8]-[11]. Moreover, these methods are laborious, lengthy, and may require large quantities of the protein samples. In 2008, Sim et al. proposed a maximum-minimum variation (MMV) test to quantify the protein levels in latex glove samples using three types of protein tests, with the tested glove samples digitally acquired using a scanner [12]. However, since then, the authors have found
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 682–685, 2011. www.springerlink.com
Fig. 1 The process of bio-implantation test: samples, bio-implantation, remove samples, view samples by SEM
that the technique could be improved further by simplifying the three different protein tests to one protein test. For the image analysis part, the maximum-minimum area variation (MMAV) is proposed.

II. METHODS

A. Maximum-Minimum Variation (MMV) Test

In the MMV test, three types of protein tests were employed: Millon's test, the Ninhydrin test, and the Xanthoproteic test. This method overcame the limitations of the modified Lowry method: all the utilized chemicals were simplified into three protein tests, and the testing equipment was replaced by a computer vision system. After the protein tests, the glove samples were digitally acquired using a scanner. The digital colour images were analyzed by MMV, which calculates the maximum and minimum values in the R (red), G (green), and B (blue) channels, respectively. The expected three values were then obtained by finding the resultants of those RGB values. Fig. 2 presents the overall flow of the MMV process.

Fig. 2 Overall flow of MMV process: latex glove samples; protein tests (Millon's test, Ninhydrin test, and Xanthoproteic test); clean samples with distilled water and dry samples with dryer; scan and save the latex glove sample images; apply image processing technique (MMV)

B. Maximum-Minimum Area Variation (MMAV)

The proposed method uses computerized analysis to determine the protein levels of the sample. The glove sample first goes through pre-processing with a specified chemical test for protein, such as the Bradford protein test [8]. Then, the glove sample goes through the scanning procedure to acquire an image of the sample. An algorithm, the maximum-minimum area variation (MMAV) technique, is used to perform image processing on the images, and conclusions are formed based on the observation. The details of the process for the Bradford protein test are shown in Fig. 3.

Fig. 3 The details of the process for the Bradford protein test: latex glove sample; placement in test tube; add Bradford reagent and note the colour change; clean sample with distilled water and dry sample with dryer; scan and save the latex glove sample image; apply image processing technique (MMAV)

After the scanning process, the protein colour image is denoted as X. The image is then separated into R (red), G (green), and B (blue) channels, named Xr(i,j), Xg(i,j), and Xb(i,j), respectively. Histograms are plotted for the RGB channels of the image. The maximum frequency of each histogram is then obtained and divided in half, denoted fmax/2. The line y = fmax/2 is plotted on the histogram to divide it into an upper and a lower plane. The maximum intensity Imax and the minimum intensity Imin are determined from the interception points. The maximum-minimum area variation is defined as

VRGB = Er(Imax − Imin)² + Eg(Imax − Imin)² + Eb(Imax − Imin)²   (1)

where Er, Eg, and Eb are the expected values for the R, G, and B channels, respectively, and Imax and Imin are obtained per channel.
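As a rough illustration of Eq. (1), the half-maximum interception and the VRGB sum can be sketched in Python/NumPy. The paper does not spell out how the expected values Er, Eg, Eb are computed, so the channel mean is used here as a placeholder assumption; the helper names `half_max_width` and `mmav` are ours, not the authors'.

```python
import numpy as np

def half_max_width(channel, bins=256):
    """Find (Imin, Imax): the lowest and highest intensities whose histogram
    count still reaches half the peak frequency, i.e. the interception
    points of the line y = fmax/2 with the channel histogram."""
    hist, edges = np.histogram(channel, bins=bins, range=(0, 256))
    half = hist.max() / 2.0
    above = np.nonzero(hist >= half)[0]
    return edges[above[0]], edges[above[-1] + 1]

def mmav(r, g, b):
    """Sketch of Eq. (1): VRGB as a sum of Ec * (Imax - Imin)^2 over the
    three channels, with the 'expected value' Ec taken as the channel
    mean (an assumption, since the paper leaves Ec unspecified)."""
    total = 0.0
    for ch in (r, g, b):
        i_min, i_max = half_max_width(np.asarray(ch, dtype=float))
        total += np.mean(ch) * (i_max - i_min) ** 2
    return total
```

A narrow half-maximum width in every channel, as for the Type A glove in Table 1, yields a small VRGB under this reading.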
III. RESULTS AND DISCUSSION

In the experiment, 100 types of latex glove samples are used, varying from high-grade to common-grade latex gloves. All the gloves have gone through the process
shown in Fig. 3. After that, each individual grade of glove goes through the MMAV test to identify the different grades of protein content inside each glove. In this paper, three types of latex gloves are shown: Type A for a good grade of latex glove, Type B for a middle grade, and Type C for a common grade. Following the same practice, we have tested 100 types of gloves. Fig. 4(a), (b), and (c) exhibit the Type A, B, and C latex gloves, respectively.
Table 1 Comparison data between MMAV and MMV

Glove Type | Minimum Value (R, G, B) | Maximum Value (R, G, B) | MMAV (VRGB) | MMV (R, G, B)
Type A | 149.05, 220.99, 224 | 155.96, 224.02, 245 | 48.69 | 30, 21, 22
Type B | 146.04, 210.02, 225.99 | 153.92, 216.03, 229.01 | 72.36 | 42, 27, 25
Type C | 112.9, 201.99, 240.02 | 126, 206.96, 243.02 | 144.63 | 57, 26, 24
In Table 1, the minimum value (second column) indicates the minimum intensity value for each RGB channel, while the maximum value (third column) denotes the maximum intensity value. We have shown that the Type A latex glove has the lowest maximum-minimum area variation (MMAV) value compared with Type B and Type C. From Figure 4(a), we can clearly observe that the Type A glove is less blue than the other two types in terms of intensity, which shows that it contains less protein than the others. The fifth column of the table lists the values generated by MMV; the three separate RGB results computed by the MMV technique make it hard to estimate the protein levels in latex glove samples.
Fig. 4 (a) Type A latex glove, (b) Type B latex glove, and (c) Type C latex glove

IV. CONCLUSION

In conclusion, the consistency and accuracy of the computation results can be improved by using the MMAV test. With the implementation of the proposed MMAV test, promising results can be obtained in determining protein levels in latex gloves.

ACKNOWLEDGMENT

The authors would like to acknowledge the Malaysian Rubber Board and the Rubber Research Institute of Malaysia for their contribution of data and latex glove samples.
REFERENCES
1. Geelhoed, G.W. (1998) The pre-Halstedian and post-Halstedian history of the surgical rubber glove. Surg. Gynecol. Obstet. 167: 350-356
2. Jones, S.F. (1993) The OSHA bloodborne pathogens standard. AAOHN J. 41(5): 218
3. West, K.H., Cohen, M.L. (1997) Standard precautions - a new approach to reduce infection transmission in the hospital setting. J. Intraven. Nurs. 20(6): 7-10
4. Sainis, J.K., Valli, G., Rao, Y.V. et al. (1994) Essential features of digital image processing and its applications in life sciences. Indian Journal of Experimental Biology 32: 1-3
5. Finck, C., Lefebvre, P. (2005) Implantation of esterified hyaluronic acid in microdissected Reinke's space after vocal fold microsurgery: first clinical experiences. Laryngoscope 115(10): 1841-1847
6. Scopes, R.K. (1982) Protein Purification: Principles and Practice. Springer-Verlag, New York
7. Bradford, M.M. (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal. Biochem. 72: 248-254
8. Stutzenberger, F.J. (1992) Interference of the detergent Tween 80 in protein assays. Anal. Biochem. 207(2): 249-254
9. Williams, K.M., Marshall, T. (1992) Coomassie blue protein dye-binding assays measure formation of an insoluble protein-dye complex. Anal. Biochem. 204(1): 1895
10. Raghupathi, R.N., Diwan, A.M. (1994) A protocol for protein estimation that gives a nearly constant color yield with simple proteins and nullifies the effects of four known interfering agents: microestimation of peptide groups. Anal. Biochem. 219(2): 356-359
11. Kirazov, L.P., Venkov, L.G., Kirazov, E.P. (1993) Comparison of the Lowry and the Bradford protein assays as applied for protein estimation of membrane-containing fractions. Anal. Biochem. 208(1): 44-48
12. Sim, K.S., Chin, F.S., Tso, C.P. et al. (2008) Protein identification in latex gloves for bio-compatibility using maximum minimum variation test. 4th Kuala Lumpur International Conference on Biomedical Engineering 21: 611-614
Author: Yong Kok Phin Institute: Multimedia University Street: Jalan Ayer Keroh Lama City: Ayer Keroh, Melaka Country: Malaysia Email: [email protected]
Measurement of the Area and Diameter of Human Pupil Using Matlab N.H. Mahmood, N. Uyop, M.M. Mansor, and A.M. Jumadi Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
Abstract— This paper presents a simple guide to measuring the area and diameter of the human pupil using MATLAB. Pupil measurement and recognition systems are useful in the biometric field because the measurement of the pupil is unique and differs for each person. The eye images used in this work were downloaded from the CASIA Iris Image Database. The image is converted into a binary image, and the threshold value of the dark region is estimated. After several steps, the diameter and area of the pupil are successfully calculated in pixel units. The measurement is then converted into millimeter (mm) units, and the result is displayed using a Graphical User Interface (GUI). The five samples show that different people have different pupil areas and diameters, which is why the pupil could be used in the biometric field like fingerprint recognition. Keywords— Human pupil measurement, pupil recognition, biometric field, Graphical User Interface, iris recognition.
I. INTRODUCTION

Eyes are the most important part of our visual system. An image of the eye contains the pupil within the iris, the sclera, eyelids, and eyelashes. Biometrics uses physiological or biological characteristics to establish the identity of an individual. In recent years, iris recognition has become the major recognition technology since it is the most reliable form of biometrics [1,2,8,9]. Iris patterns are unique and stable, even over long periods of time. Unfortunately, iris recognition has some disadvantages that must be considered. The main disadvantage is that the person's eye may be damaged by the passage of infrared rays, and some optical diseases may affect the person [1,5]. Iris recognition may also let code crackers gain access through the iris scanner even after the person is dead. To overcome the disadvantages of iris recognition, pupil recognition is used. The pupil is a circular hole inside the iris, and its radius is unique for each person. One difficulty in processing pupil images for biometrics is that the pupil changes in size due to involuntary dilation [3]. The size of the pupil is controlled by two muscles, the sphincter and the dilator. The sphincter muscles make the iris larger and therefore the pupil smaller, while the dilator muscles make the iris smaller and therefore the pupil larger. However, the shape of the pupil remains the same, as shown clearly in Figure 1.

Fig. 1 Image of the pupil [10]: (a) smaller and (b) larger

Because the size of the pupil changes with the intensity of light, the chance of a false non-match between enrollment images and images to be recognized is greater when there are larger differences in pupil dilation. Capturing multiple enrollment images with varying degrees of dilation could solve this problem. Despite this problem, pupil recognition has several advantages. One advantage concerns security access when a person is dead: the pupil of a dead person dilates and becomes fixed, which offers high security against code crackers. Another advantage is that the method does not harm the human eye, and the person will not suffer from optical disease, because it uses monochromatic light rather than the infrared light used in iris recognition. The method can also be used for identifying blind people, because the pupils of blind people also respond to light. In this paper, the methodology for measuring the human pupil using MATLAB is described. It involves the MATLAB Image Processing Toolbox as well as the Graphical User Interface. Although there are many previous works on human pupil measurement, they do not describe in detail the usage of the MATLAB Image Processing Toolbox. Hopefully, this paper will give a rough idea of human pupil measurement in a simple approach.
II. METHODOLOGY MATLAB Image Processing Toolbox provides a comprehensive set of reference-standard algorithms and graphical
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 686–689, 2011. www.springerlink.com
tools for image processing, analysis, visualization, and algorithm development. Users can restore noisy or degraded images, enhance images for improved intelligibility, extract features, analyze shapes and textures, and register two images. Most toolbox functions are written in the open MATLAB language, giving users the ability to inspect the algorithms, modify the source code, and create their own custom functions [4]. Previous works [6,7] concentrated on using histogram techniques to measure human pupil size. For this work, the Image Processing Toolbox is chosen and utilized to measure the area and diameter of the pupil. The process starts by obtaining an actual eye image that clearly shows the difference between pupil and iris. In practice, a special camera is used instead of a normal camera because of the latter's limitations; in this work, however, the existing image database "CASIA Iris Image Database" [10] is used. Samples of the images are shown in Figure 1 and Figure 2, respectively. The downloaded images are read into the MATLAB programming environment using the "imread" function, passing the image name with its format; this function reads a single image at a time. The downloaded images are in the RGB colour model, an additive model in which red, green, and blue are added together in various ways to produce a broad array of colours; the name of the model comes from the initials of the three additive primary colours. In the MATLAB environment, an RGB image is represented as an M×N×3 array, instead of the M×N array of a greyscale image. As this work requires the image in greyscale format, the "rgb2gray" function is used to convert the image from RGB to greyscale, as shown in Figure 2.
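For readers without MATLAB, the imread/rgb2gray step can be mimicked in Python with NumPy. MATLAB's rgb2gray uses the ITU-R BT.601 luma weights 0.2989, 0.5870, and 0.1140, which the sketch below reproduces; the function name `rgb_to_gray` and the demo image are ours.

```python
import numpy as np

def rgb_to_gray(rgb):
    """NumPy analog of MATLAB's rgb2gray: a weighted sum of the R, G, B
    channels using the ITU-R BT.601 luma coefficients, collapsing an
    MxNx3 array to MxN."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights

# A 2x2 RGB image: pure red, pure green, pure blue, and white pixels.
demo = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                 [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]])
gray = rgb_to_gray(demo)  # 2x2 greyscale image
```

The white pixel maps to 0.9999 rather than 1.0, a known rounding artifact of the BT.601 coefficients.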
Fig. 2 Greyscale image of the original image

The image is then converted into binary using the "im2bw" function. Through some experiments, it was found that 0.1 is the best level for the image; the level depends on the image structure and lies between zero and one. Fig. 3 shows the image after conversion into a binary image with the 0.1 level. In Figure 3, there are some white spots on the black area and also small black spots on the white area. Both kinds of spot will affect the later calculation, so enhancement of the image is required using the "imfill" and "bwareaopen" operators. Before applying both operators, the image is inverted using the "~" operator to improve the result, as shown in Figure 4.

Fig. 3 Binary image with 0.1 level

Fig. 4 Inverted binary image using "~" operator

After the image is inverted, the "imfill" operator removes the black spots on the white area of Figure 4 by converting them to white. Then, "bwareaopen" is required to remove or fill the small white spots on the black area with black. In
this case, a spot is filled with black if it is smaller than 30 pixels. The resulting images after the "imfill" and "bwareaopen" operators are shown in Figure 5 and Figure 6.

Fig. 5 Resulting image of the "imfill" operator

Fig. 6 Resulting image of the "bwareaopen" operator

The boundaries of the white circle are detected using the "bwboundaries" operator, which is required before the "regionprops" operator is used. The "regionprops" operator measures the image region properties and is usually used in blob analysis; it measures the region area, diameter, centre of the boundaries, and more. The area and diameter of the sample human pupils in this work, measured using "regionprops", are in pixel units. Pixel units are suitable for describing image size, but not for real life, so the unit must be converted from pixels to meters, where one pixel equals 0.000264583 meters (based on the screen resolution). Finally, the image is inverted again using the "~" operator, and the area and diameter measurements for the human pupil are displayed, as shown in Figure 7. The diameter for one of the sample images is 13.930 mm and the area is 152.399 mm².

Fig. 7 Resulting image after image enhancement; the area and diameter measurements are displayed

III. RESULT AND DISCUSSION
The measurements of the diameter and area of the human pupil are presented in SI units (millimeters). The methodology of measuring the diameter and area using MATLAB is not suitable for a person who has no basic experience operating MATLAB. To overcome this problem, we developed a simple Graphical User Interface (GUI) that is simple, attractive, and user-friendly. Figure 8 shows the GUI, which consists of three boxes to display images and seven push buttons to process the image. To avoid confusing the user, the buttons are numbered from #1 to #7. The operation starts by browsing and loading the original image into the "Original Image" box, as shown in Figure 8, by hitting the "#1 Load Image" button. It is then compulsory to hit the "#2 Grayscale Image" button, after which a grayscale image is displayed in the second box. After that, users are free to hit any button, and the "Resulted Image" box displays the resulting image depending on which button is hit. Buttons starting with
#3 until #7 process the image as described in the methodology, using different operators, and display the result in the "Resulted Image" box. For example, if the user hits the "#3 Binary Image" button, the GUI displays the binary image in the "Resulted Image" box, as shown in Figure 8. The other buttons, #4, #5, #6, and #7, have their own operations and display the resulting images shown in Figures 3 to 7, respectively.
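The processing chain behind buttons #2 through #7, thresholding, inversion, small-spot removal, blob measurement, and pixel-to-millimeter conversion, can be approximated without MATLAB. The sketch below is a NumPy-only analog under stated assumptions: a simple flood-fill labelling stands in for bwlabel/bwareaopen/regionprops, hole filling (imfill) is omitted for brevity, the function names are ours, and the paper's constants are used (0.1 threshold, 30-pixel spot limit, 1 px = 0.264583 mm).

```python
import numpy as np

PIXEL_TO_MM = 0.264583  # the paper's conversion: 1 pixel = 0.000264583 m

def label_components(mask):
    """4-connected component labelling via flood fill (a NumPy-only
    stand-in for MATLAB's bwlabel)."""
    labels = np.zeros(mask.shape, dtype=int)
    h, w = mask.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                labels[i, j] = current
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    return labels, current

def remove_small(mask, min_pixels=30):
    """Drop connected components smaller than min_pixels (bwareaopen)."""
    labels, n = label_components(mask)
    keep = np.zeros_like(mask)
    for k in range(1, n + 1):
        comp = labels == k
        if comp.sum() >= min_pixels:
            keep |= comp
    return keep

def measure_pupil(gray, level=0.1, min_pixels=30):
    """Threshold at `level` (im2bw), invert so the dark pupil turns white
    ("~"), clean up small spots, then return (area_mm2, diameter_mm) of
    the largest remaining blob using a regionprops-style equivalent
    diameter, 2 * sqrt(area / pi)."""
    binary = gray > level
    pupil = remove_small(~binary, min_pixels)
    labels, n = label_components(pupil)
    if n == 0:
        return 0.0, 0.0
    area_px = max((labels == k).sum() for k in range(1, n + 1))
    diameter_px = 2.0 * np.sqrt(area_px / np.pi)
    return area_px * PIXEL_TO_MM ** 2, diameter_px * PIXEL_TO_MM
```

On a synthetic image with a dark disk on a light background, the helper recovers the disk diameter in millimeters while discarding specks smaller than 30 pixels, mirroring the behaviour described for Figures 3 to 7.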
Fig. 8 Graphical User Interface (GUI) of human pupil measurements

The experiments involve five samples of human pupils from different persons, downloaded from the CASIA Iris Image Database, in order to observe the results. Table 1 shows the area and diameter measurements obtained for the 5 pairs of eyes.

Table 1 Analysis results from different persons

EYES | RIGHT AREA (mm²) | RIGHT DIAMETER (mm) | LEFT AREA (mm²) | LEFT DIAMETER (mm)
A | 152.399 | 13.930 | 137.558 | 13.234
B | 141.268 | 13.412 | 167.790 | 14.485
C | 112.987 | 11.994 | 130.488 | 12.890
D | 160.520 | 14.296 | 196.423 | 15.815
E | 154.009 | 14.003 | 137.135 | 13.214

From the observations, different people have different pupil areas and diameters. For example, the largest pupil by area and diameter belongs to person D. The radius of the pupil is unique for each person, and the right and left eyes also give slightly different measurements. The size of the pupil also depends on the light intensity.

IV. CONCLUSION

As a conclusion, the measurement of human pupil size in terms of diameter and area was successfully implemented in MATLAB, and a GUI application was built to display the results. The purpose of this work is to provide basic knowledge for those who are eager to learn the MATLAB Image Processing Toolbox, especially for human pupil size measurement. Pupil detection could also be used to identify blind people, because the pupils of blind people also respond to light. However, since the size of the pupil changes over time, the data should be updated at least once every 5 years. It is also practically impossible to fake a pupil, so the method offers high security. In future, this work could be extended to high-level image processing, such as processing in real-time applications.

ACKNOWLEDGEMENT

This work has been carried out during the Problem-based Learning of the 4th Year Laboratory at the Medical Electronics Laboratory, Faculty of Electrical Engineering, Universiti Teknologi Malaysia.
REFERENCES
1. K. Bowyer, K. Hollingsworth, P. Flynn (2008) Image understanding for iris biometrics: a survey. Computer Vision and Image Understanding 110(2): 281-307
2. L. Zhonghua (2010) A novel iris recognition method based on the natural-open eyes. 10th International Conference on Signal Processing, 2010, pp 1090-1093
3. X. Lin, G. Klette, R. Klette, J. Craig, S. Dean (2003) Accurately measuring the size of the pupil of the eye. Proceedings of Image and Vision Computing New Zealand, 2003, pp 221-226
4. R.C. Gonzalez, R.E. Woods (2004) Digital Image Processing Using MATLAB. Pearson Education
5. T. Shinoda, M. Kato (2006) A pupil diameter measurement system for accident prevention. Systems, Man and Cybernetics, 2006, Vol. 2, pp 1699-1703
6. M.G. Masi, L. Peretto, R. Tinarelli, L. Rovati (2009) Measurement of the pupil diameter under different light stimuli. Instrumentation and Measurement Technology Conference, 2009, pp 1652-2656
7. S. Dey, D. Samanta (2007) An efficient approach for pupil detection in iris images. International Conference on Advanced Computing and Communications, 2007, pp 382-389
8. S.I. Kim et al. (2005) A fast center of pupil detection algorithm for VOG-based eye movement tracking. International Conference of the Engineering in Medicine and Biology Society, 2005, pp 3188-3191
9. I.K. Kallel, D.S. Masmoudi, N. Derbel (2009) Fast pupil location for better iris detection. International Multi-Conference on Systems, Signals and Devices, 2009, pp 1-6
10. CBSR at http://www.cbsr.ia.ac.cn/IrisDatabase.htm

Author: Nasrul Humaimi Mahmood
Institute: Universiti Teknologi Malaysia
Street: UTM Johor Bahru
City: Johor Bahru, Johor
Country: MALAYSIA
Email: [email protected]
Medical Image Pixel Extraction via Block Positioning Subtraction Technique for Motion Analysis H.S.D.S. Jitvinder, S.S.S. Ranjit, S.A. Anas, K.C. Lim, and A.J. Salim Faculty of Electronic and Computer Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Durian Tunggal, Melaka, Malaysia [email protected], [email protected]
Abstract— This paper presents a study of image pixels via a block-based technique, using a block positioning subtraction technique to determine motion. Motion estimation operates on areas of interest while processing video sequences; loading full video sequences increases both the computational cost and the processing time of motion estimation. The block positioning subtraction technique extracts only a small block of interest to be processed, individually, without the need to process the full frame. The technique divides the frames into small 8 × 8 macro blocks and identifies the location of the small block of interest in the frame. The 8 × 8 block of interest is extracted from the frame to determine its pixel values, which are then subtracted and analyzed for motion changes.
Keywords— Block-Based, Block Positioning, Positioning Subtraction, Block of Interest, Motion Changes.

I. INTRODUCTION

Motion estimation is the process of determining motion vectors in a video sequence by reducing the temporal redundancy between consecutive video frames [1, 2]. It identifies blocks that match each other in a video sequence by detecting object transformations that appear in each frame but at different locations [3]. The identified blocks are represented with a motion vector (x, y) to indicate the pixel displacement between frames [4]. Differences in pixel values indicate that changes have occurred in the frames. The Block Matching Algorithm (BMA) technique is widely used for motion estimation; a frame is divided into square macro blocks [5, 6], and the pixel values of the divided macro blocks are used to compare the current macro block with the subsequent macro block [7, 8, 9] in the same frame. In this paper, a simple block-based pixel-value comparison via a block positioning subtraction technique is applied to detect motion. The pixel values in a macro block from the current frame and the subsequent frame are compared and subtracted to detect motion. If the subtracted result shows all zero values, no motion is detected; if it shows nonzero pixel values, motion is detected.

II. PROPOSED EXPERIMENT

To evaluate the changes in the pixel values of a small macro block between targeted frames (video sequences) in order to determine motion, each frame of interest is divided into 8 × 8 small macro blocks, as illustrated in Fig. 1.

Fig. 1 8 × 8 small block size

Once the frames are divided into 8 × 8 macro blocks, the block of interest is selected and its position in the frame is determined using the coordinate (x, y), where x represents the column and y represents the row. The coordinates in both frames (the current frame and the subsequent frame) are the same, so the changes in the pixel values can be analyzed; i.e., if the block of interest in frame 1 is located at (10, 2), then the block of interest in the subsequent frame, such as frame 10, is also located at (10, 2). Equation (1) is used to select the location of the desired block of interest.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 690–693, 2011. www.springerlink.com
Medical Image Pixel Extraction via Block Positioning Subtraction Technique for Motion Analysis
Block = [ N×(y−1)+1 : N×(y−1)+8, N×(x−1)+1 : N×(x−1)+8 ]    (1)
where
N = macro block size
x = column coordinate
y = row coordinate

The block of interest selected in each of the two video frames is extracted from the frame itself as a single small macro block. Equation (2) is used to magnify the extracted block for analysis:

(kron(Block, uint8(ones(16))))
(2)
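As a sketch, Eqs. (1) and (2) can be reproduced in Python with NumPy; the function names and the sample frame below are illustrative, not from the paper, and np.kron plays the role of MATLAB's kron:

```python
import numpy as np

N = 8  # macro block size

def select_block(frame, x, y, n=N):
    """Extract the (x, y) block of interest per Eq. (1):
    rows N*(y-1)+1 : N*(y-1)+8, cols N*(x-1)+1 : N*(x-1)+8 (1-based)."""
    r0, c0 = n * (y - 1), n * (x - 1)  # 0-based offsets
    return frame[r0:r0 + n, c0:c0 + n]

def magnify(block, k=16):
    """Magnify a block for visual inspection, as in Eq. (2)."""
    return np.kron(block, np.ones((k, k), dtype=np.uint8))

frame = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
blk = select_block(frame, x=2, y=3)
print(blk.shape)           # (8, 8)
print(magnify(blk).shape)  # (128, 128)
```

Multiplying by a block of ones simply repeats each pixel value over a 16 × 16 area, so the magnified block keeps the original gray levels.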
III. EXPERIMENT RESULTS, DATA AND ANALYSIS

To analyze the pixel changes involved in the frames, certain conditions need to be fixed so that the same environment applies to every frame. The conditions held constant are: an 8 × 8 small macro block, and one MRI image set converted into video (a video sequence of a brain). In this experiment, frame 1 and frame 10 of two different brain video sequences are used. Fig. 3 shows the blocks of interest (1c, 2c) of frame 1 (1a, 2a) and frame 10 (1b, 2b). In the first brain video sequence, the block of interest is at coordinate (11, 11), as shown in Fig. 3 (1c). The different pixel values show that motion is detected. In the second video sequence, the block of interest is at coordinate (4, 12). In frame 1, shown in Fig. 3 (2a), the boundary of the brain image is chosen, whereas in frame 10, shown in Fig. 3 (2b), the side of the brain expands. The difference in pixel values after comparison and subtraction shows that motion is present.
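A minimal sketch of the comparison-and-subtraction step described above; the 8 × 8 blocks are hypothetical, and any nonzero difference is taken as motion:

```python
import numpy as np

def motion_in_block(block_f1, block_f10):
    """Subtract co-located blocks of interest from two frames.
    Nonzero differences indicate motion between the frames."""
    # cast to int16 so subtracting uint8 pixel values cannot wrap around
    diff = np.abs(block_f1.astype(np.int16) - block_f10.astype(np.int16))
    return diff, bool(diff.any())

b1 = np.full((8, 8), 120, dtype=np.uint8)
b10 = b1.copy()
b10[3:5, 3:5] = 140                # simulated local change between frames
diff, moved = motion_in_block(b1, b10)
print(moved)            # True
print(int(diff.max()))  # 20
```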
Fig. 2 Block positioning subtraction technique

The pixel values of a single small macro block from both frames are analyzed by comparing and subtracting them. Fig. 2 illustrates the proposed experiment as a flowchart.

Fig. 3 (1a, 2a) Frame 1, (1b, 2b) Frame 10 and (1c, 2c) block of interest
IFMBE Proceedings Vol. 35
Fig. 3 (continued)

IV. DATA ANALYSIS

Blocks of interest (3a, 3b) are the pixel values extracted from frame 1 and frame 10, respectively, of the first brain video sequence, shown in Fig. 4. Blocks of interest (3c, 3d) are the pixel values extracted from frame 1 and frame 10, respectively, of the second brain video sequence, shown in Fig. 5. Each block is magnified to analyze its pixel values. Differences in the extracted pixel values show that motion occurred in the frame. Different pixel values represent different shades: since the images are grayscale, each pixel value lies between 0 and 255, where 0 represents black, 255 represents white, and the values in between represent shades of gray. After the pixels are subtracted, a higher resulting value represents drastic motion, a lower value represents slow motion, and a zero value shows that no motion occurred.

[8 × 8 pixel values of block (4a), partially garbled in extraction; see Fig. 4]
(4b)
0 24 28 18 14 12 39 93
1 24 24 13 11 11 39 90
5 21 18 9 9 11 37 83
6 17 14 8 10 13 35 72
4 12 12 11 14 16 33 62
3 8 12 14 16 18 34 57
4 7 11 14 15 19 37 59
6 7 11 13 12 18 39 62

Fig. 4 (3a, 4a) Frame 1, (3b, 4b) Frame 10
V. DISCUSSION

A pixel value carries a lot of information about changes in color representation. A change in pixel values indicates that motion is present or that interference has taken place. If there is no motion, the pixel values are the same in frame 1 and frame 10, and subtraction produces a zero value. If there is motion, the pixel values in frame 1 and frame 10 differ, and subtraction produces a nonzero value. This technique not only provides the ability to observe changes in pixel values but also allows users to choose the desired macro block, reducing computational cost and time. Storage usage is also reduced, as only selected frames of a video sequence are used.

VI. CONCLUSION

Pixel comparison of macro blocks using the block-based positioning subtraction technique is a simple method to detect motion. The advantage of this technique is that the macro block of interest can be chosen without processing the complete image frame. This reduces elapsed processing time, memory space and computational cost, because only a certain area is processed to detect motion.

[8 × 8 pixel values of blocks (3c, 4c) of frame 1 and (3d, 4d) of frame 10, garbled in extraction]

Fig. 5 (3c, 4c) Frame 1, (3d, 4d) Frame 10

REFERENCES

1. Ahluwalia S, Shukla A, Rungta S (2010) Optimal Circular 2-D Search Algorithm for Motion Estimation, International Multi Conference of Engineers and Computer Scientists 2010, vol II, Hong Kong
2. Hosur P I, Ma K K (1999) Motion Vector Field Adaptive Fast Motion Estimation, 2nd International Conference on Information, Communications and Signal Processing, Singapore
3. Chen X, Zhao Z, Rahmati A, et al. (2009) SaVE: Sensor-assisted Motion Estimation for Efficient H.264/AVC Video Encoding, ACM Multimedia, Beijing, China
4. Phadtare M (2007) Motion Estimation Techniques in Video Processing, Electronic Engineering Times India, India
5. Ishfaq A, Weiguo Z, Jiancong L, Ming L (2006) A Fast Adaptive Motion Estimation Algorithm, IEEE Transactions on Circuits and Systems for Video Technology, vol 16, no 3
6. Ranjit S, Sim K S, Besar R, Tso C P (2008) Motion Estimation in Medical Imaging, 4th Kuala Lumpur International Conference on Biomedical Engineering, pp 603-606
7. Chen Y S, Hung Y P, Fuh C S (2001) Fast Block Matching Algorithm Based on the Winner-Update Strategy, IEEE Transactions on Image Processing, vol 10, no 8
8. Ranjit S, Sim K S, Besar R, Tso C P (2008) Application of Motion Estimation Using Ultrasound Images, 4th Kuala Lumpur International Conference on Biomedical Engineering, pp 519-522
9. Ranjit S S S, Sim K S, Besar R, Tso C P (2009) From ultrasound images to block based region motion estimation, Biomedical Imaging and Intervention Journal, 5(3):e32

Address of the corresponding authors:

Author: Jitvinder Dev Singh Hardev Singh
Institute: Universiti Teknikal Malaysia Melaka
Street: Hang Tuah Jaya
City: 76100 Durian Tunggal, Melaka
Country: Malaysia
Email: [email protected]

Author: Ranjit Singh Sarban Singh
Institute: Universiti Teknikal Malaysia Melaka
Street: Hang Tuah Jaya
City: 76100 Durian Tunggal, Melaka
Country: Malaysia
Email: [email protected] or [email protected]
Monte Carlo Characterization of Scattered Radiation Profile in Volumetric 64 Slice CT Using GATE

A. Najafi Darmian1,3, M.R. Ay2,3,4, M. Pouladian1, A. Shirazi1,2, H. Ghadiri2,3, and A. Akbarzadeh2,3

1 Department of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
2 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
3 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran
4 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
Abstract— It is well known that contamination of CT data with scattered radiation reduces reconstructed CT numbers and introduces cupping artifacts in the reconstructed images. This effect is more pronounced in multi-detector CT scanners with extended detector apertures, mostly using cone-beam configurations, which are much less immune to scatter than fan-beam and single-slice CT scanners. Accurate characterization of scattered radiation behavior is mandatory for implementing robust and accurate scatter correction strategies in volumetric CT reconstruction. As characterization of scattered radiation behavior using experimental measurement is a difficult and time-consuming approach, Monte Carlo simulation can be an ideal calculation method. In this study we used the recently released GATE MC code, with its ability to simulate CT, to characterize scattering in a volumetric GE 64 slice CT scanner. The Monte Carlo simulation was validated through comparison with experimental measurement data. Thereafter, the effect of tube voltage on the Scatter to Primary Ratio (SPR) was calculated. The results indicate that the GATE Monte Carlo code is a useful tool for investigating scattered radiation characteristics in CT scanners. Moreover, it is possible to take advantage of GATE for simulation of PET scanners in order to simultaneously assess the contribution of scattered radiation in PET/CT scanners. Keywords— CT, GATE, Scatter, SPR.
I. INTRODUCTION

There are several sources of artifact that affect clinical image quality in x-ray computed tomography (CT). It is therefore essential to assess their significance and effect on the resulting images, and to reduce their impact either by optimizing the scanner design or by devising appropriate image correction and reconstruction algorithms. One of the most important parameters in x-ray CT imaging is the noise induced in the reconstructed images by detected scattered radiation, which depends on the geometry of the CT scanner and the object under study [1, 2]. This effect is more pronounced in multi-detector CT scanners with extended
detector apertures, mostly using cone-beam configurations, which are much less immune to scatter than fan-beam and single-slice CT scanners. In the context of CT imaging, different groups have proposed methods for assessing the scatter component, including experimental measurements, mathematical modeling and Monte Carlo simulations, for both fan- and cone-beam configurations. However, most published papers investigating the distribution of scattered radiation in the fan-beam geometry used either simple experimental measurements based on a single blocker [3, 4, 5] or comprehensive Monte Carlo simulations [5, 6, 7]. Experimental measurement of the scatter profile using lead as a blocker of primary photons produces secondary scattered radiation, which introduces considerable errors into the measurements. Scattered photons originating from the lead blocker are attenuated through the phantom, eventually reach the detector array, and contaminate the scatter profile. It is possible to extract the scattered radiation profile induced by the lead using experimental methods; however, it is impossible to appropriately model the phantom attenuation of these photons. The issue of how to control and reduce scattered radiation in cone-beam CT remains a big challenge. Monte Carlo modeling is by far the most accurate and robust approach to calculating the scattered radiation. The GEANT4-based Monte Carlo simulation package GATE has been successful in PET and SPECT applications, with its precise modeling of various physics processes. Other advantages of the GATE software are that it is publicly available, free of cost, and actively maintained by an international collaboration with close relations to the GEANT4 team. The aim of this study is to first validate the GATE Monte Carlo simulations for accurate modeling of a volumetric 64 slice CT scanner.
The validated GATE simulation package was then used to characterize the scatter to primary ratio for a range of x-ray energy spectra. Simulations were then conducted to investigate scatter in a realistic configuration, in order to determine the characteristics of the various scatter components, which cannot be separated in measurements.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 694–697, 2011. www.springerlink.com
II. MATERIAL AND METHOD

A. Volumetric 64 Slice CT System

The CT scanner simulated in this study was the GE LightSpeed VCT 64-slice cardiac CT scanner (GE Healthcare Technologies, Waukesha, WI). This third-generation CT scanner has a 540 mm source-to-isocenter and 950 mm source-to-detector distance, with 58,368 individual elements arranged in 64 rows of 0.625 mm thickness at isocenter, each containing 888 active patient elements and 24 reference elements with HiLight (Y2Gd2O3:Eu) ceramic scintillator. The scanner is equipped with the Performix Pro Anode Grounded Metal-Ceramic Tube Unit, which uses a 56 degree fan angle, 7 degree target angle, and minimum inherent filtration of 3.25 mm Al and 0.1 mm Cu at 140 kVp.

B. Phantoms

Two cylindrical phantoms were used for the simulation. The small phantom was a water phantom with a wall made of Perspex. The external diameter of this phantom is 215 mm and its wall thickness is 6 mm. The large phantom was a polypropylene phantom. This phantom, normally used for modeling bony structures, has a 350 mm diameter, suitable for large scan fields of view.
plate if they do not stop within the phantom by undergoing a photoelectric interaction. Production and transport of scintillation light in the crystal were not modeled. The photons deposited in the detector were recorded and separated into primary (defined as those that did not undergo any scattering within the phantom) and scatter (defined as those encountering at least one Compton or Rayleigh scattering in the phantom). All deposited energies of the primary or scatter photons were then summed into 1 × 1 × 3 mm3 voxels to form the corresponding profiles. Since we did not model the production and collection of scintillation light in the detector, a scaling applied to the simulated profiles was necessary in order to compare the simulated and measured profiles on a relative basis. The final primary and scatter profiles for a given x-ray kVp setting were obtained by summing the individual images generated at each energy according to the desired x-ray spectrum. The scatter-to-primary ratio profile or the total (primary plus scatter) image can thus be obtained. Figure 1 shows the CT simulation by GATE.
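The spectrum-weighted summation of monoenergetic profiles described above can be sketched as follows; the weights and profile values are placeholders (the real weights come from the simulated x-ray tube spectrum), and the 888-element detector row matches the scanner geometry:

```python
import numpy as np

rng = np.random.default_rng(0)
energies_keV = np.arange(20, 141, 20)        # monoenergetic simulation runs
weights = rng.random(energies_keV.size)      # hypothetical spectrum weights
weights /= weights.sum()

# one primary and one scatter profile per simulated energy
# (888 active detector elements per row)
primary = rng.random((energies_keV.size, 888))
scatter = rng.random((energies_keV.size, 888))

# final kVp-setting profiles: spectrum-weighted sum of monoenergetic profiles
primary_kvp = (weights[:, None] * primary).sum(axis=0)
scatter_kvp = (weights[:, None] * scatter).sum(axis=0)
spr = scatter_kvp / primary_kvp              # scatter-to-primary ratio profile
total = primary_kvp + scatter_kvp            # total (primary plus scatter)
print(spr.shape)  # (888,)
```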
C. Monte Carlo Simulation

The GATE (GEANT4 Application for Tomographic Emission) Monte Carlo simulation package [8] is used to generate photons of varying energies and simulate their transport within different materials. GEANT4 provides three models for photon interactions: Standard, Low Energy, and PENELOPE, which are all relevant in modeling x-ray medical imaging applications. The more accurate low-energy electromagnetic model was chosen for the Monte Carlo simulation studies described here. In the GATE simulation of x-ray photons, every interaction process, including Compton and Rayleigh scattering, was labeled, and the number of times a photon underwent Compton or Rayleigh scattering within the phantom or detector was counted, providing a means to separate single or multiple incoherent or coherent scatterings. The simulated phantom was the same size, shape, and composition as the physical phantom used in the experimental measurements. To model x-ray tube emission, simulated photons were emitted isotropically from a point source within a fan angle of 56° and a cone angle of 4° so as to expose the entire phantom. The emitted photons can traverse the phantom and reach the 3 mm thick scintillator
Fig. 1 Geometry of simulated CT by GATE
III. RESULTS AND DISCUSSION Figure 2 shows simulated and experimental measurement of attenuation profiles at 140 kVp spectrum and normalized error between experimental and simulation data. In order to
know how accurate our simulation is, the resulting simulated attenuation profile was compared with experimental attenuation profile. The method of comparison was based on normalized error (NE) which calculates the relative difference between simulation and measurement data. This method has been used as a figure of merit to evaluate differences between two data sets [5].
NE(u, v) = [PMeasured(u, v) − PSimulation(u, v)] / PMeasured(u, v)    (1)

where u and v are the detector element coordinates, and PMeasured(u, v) and PSimulation(u, v) are the measured and simulated projection data for each detector element. The maximum relative difference between experimental and simulated data (in the region of the phantom shadow) was close to 4%.

Figure 3 shows the normalized scatter profile calculated from detector row 32 (the central row) at different x-ray tube voltages. Note that the scatter profile is normalized to the unit area of scatter at 140 kVp. Figure 4 shows the normalized scatter profile at different tube voltages for the cylindrical polypropylene phantom.
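Equation (1) can be sketched in Python; the projection values below are hypothetical and the function name is illustrative:

```python
import numpy as np

def normalized_error(p_measured, p_simulated):
    """Eq. (1): relative difference between measured and simulated
    projection data, element by element."""
    p_measured = np.asarray(p_measured, dtype=float)
    p_simulated = np.asarray(p_simulated, dtype=float)
    return (p_measured - p_simulated) / p_measured

measured = np.array([100.0, 200.0, 400.0])   # hypothetical projection data
simulated = np.array([98.0, 205.0, 396.0])
ne = normalized_error(measured, simulated)
print(np.abs(ne).max())  # 0.025
```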
Fig. 4 Scattered radiation profile for cylindrical Polypropylene phantom at different tube voltages. Scatter profile normalized to unit area of scatter in 140 kVp
Tube voltage dependence of the SPR was studied by calculating scatter-to-primary ratios for different x-ray kVp settings. In this part of the simulation, the x-ray tube voltage was varied from 80 to 140 kVp in intervals of 20 kVp for both the water and polypropylene phantoms.
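A toy illustration of the SPR trend discussed in the results; the counts below are invented solely to show the SPR falling as primary radiation grows faster than scatter with tube voltage:

```python
# hypothetical integrated primary/scatter counts per tube voltage (kVp)
kvps = [80, 100, 120, 140]
primary = {80: 1.0e5, 100: 2.2e5, 120: 3.8e5, 140: 5.6e5}
scatter = {80: 4.0e4, 100: 6.0e4, 120: 8.0e4, 140: 1.0e5}

# scatter-to-primary ratio at each tube voltage setting
spr = {kv: scatter[kv] / primary[kv] for kv in kvps}

# primary increases faster with kVp than scatter, so SPR decreases
assert spr[80] > spr[140]
print({kv: round(v, 3) for kv, v in spr.items()})
```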
Fig. 2 (a) Comparison of attenuation profiles for a uniform cylindrical water phantom using experimental measurements (solid line) and GATE simulations (dash dot line) (b) Normalized error between measurements and GATE simulations
Fig. 3 Scattered radiation profile for cylindrical water phantom, at different tube voltages. The curves profile normalized to unit area of scatter in 140 kVp
Fig. 5 SPR profile for cylindrical water phantom at different tube voltages

Fig. 6 SPR profile for cylindrical polypropylene phantom at different tube voltages
The polypropylene phantom was used in order to mimic clinical conditions for obese patients. Monte Carlo modeling is the most accurate and robust approach to calculating the scattered radiation. The two peaks in the scatter profile observed in Figs. 3 and 4 are due to the trade-off between the increasing probability of Compton scattering and the decreasing transmission probability of scattered photons as the attenuation length increases. It should be emphasized that the lower number of scattered photons in the center of the scatter profile, covered by a phantom whose diameter is large compared to the mean free path of photons, is the result of either absorption of incoming photons before they undergo a Compton event or attenuation of scattered photons after Compton scattering. The higher SPR value at 80 kVp in Figs. 5 and 6 is due to the fact that transmitted primary radiation increases with increasing tube voltage. On the other hand, it is well known that the probability of Compton scattering increases with increasing tube voltage (Figs. 3 and 4). In fact, the amount of primary radiation increases much more than the amount of scattered radiation as the tube voltage increases. Therefore, the SPR decreases with increasing tube voltage.
IV. CONCLUSION

In conclusion, scatter in volumetric 64 slice CT was investigated with the GATE Monte Carlo simulation package. The Monte Carlo simulation was validated through comparison with experimental measurement data. The validated GATE simulation was then used to characterize accurate scatter and SPR profiles at different x-ray kVp settings and air gaps, as well as for different phantom sizes and densities. Accurate estimation of scatter and SPR will allow one to develop more effective and precise scatter correction methods in CT image reconstruction.

ACKNOWLEDGMENT

This work was supported by the Research Center for Science and Technology in Medicine, Tehran, Iran.

REFERENCES

[1] Joseph P M, Spital R D (1982) The effects of scatter in x-ray computed tomography. Med. Phys. 9:464–472
[2] Johns P C, Yaffe M (1982) Scattered radiation in fan beam imaging systems. Med. Phys. 9:231–239
[3] Glover G H (1982) Compton scatter effects in CT reconstructions. Med. Phys. 9:860–867
[4] Siewerdsen J H, Jaffray D A (2001) Cone-beam computed tomography with a flat-panel imager: magnitude and effects of x-ray scatter. Med. Phys. 28:220–231
[5] Ay M R, Zaidi H (2005) Development and validation of MCNP4C-based Monte Carlo simulator for fan- and cone-beam x-ray CT. Phys. Med. Biol. 50:4863–4885
[6] Colijn A P, Beekman F J (2004) Accelerated simulation of cone beam x-ray scatter projections. IEEE Trans. Med. Imaging 23:584–590
[7] Malusek A, Sandborg M P et al. (2003) Simulation of scatter in cone beam CT: effects on projection image quality. SPIE Medical Imaging 2003: Physics of Medical Imaging (San Diego, CA, USA), SPIE vol 5030, pp 740–751
[8] Jan S, Santin G et al. (2004) GATE: a simulation toolkit for PET and SPECT. Phys. Med. Biol. 49:4543–4561

Author: Mohammad Reza Ay
Institute: School of Medicine, Tehran University of Medical Sciences
Street: Poursina
City: Tehran
Country: Iran
Email: [email protected]
Multi-Modality Medical Images Feature Analysis H. Madzin, R. Zainuddin, and N.S. Mohamed Artificial Intelligence Department, University Malaya, Kuala Lumpur, Malaysia
Abstract— In this research study we analyze the visual features of texture, shape and color in multi-modality medical images. The analysis considers contrast, correlation, energy and homogeneity in texture extraction. We apply the Hu moment invariant method to extract shape features. Both texture and shape features are extracted at the local level. A color histogram is used as the color descriptor when analyzing medical images globally. These features are then classified by modality using a support vector machine classifier. The results show that different modalities have different characteristics, and highlight the importance of selecting significant features.

Keywords— Multi-modality medical images, feature analysis, shape, color and texture descriptor.
I. INTRODUCTION

The technology of medical data production has changed rapidly over the past few years. Medical data was initially produced as film-based records, then as electronic records, and the current technology integrates multimedia resources into the patient record [1]. Modern computer technology has made possible the development of several new imaging modalities that use different radiant energy techniques to elucidate properties of body tissues. The ability to extract significant and accurate information from conventional or tomographic radiographic images, such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound and nuclear medicine images, has developed since the discovery of X-rays. The ongoing development of medical imaging instrumentation and techniques has created enormous growth in the quantity of data produced, including large numbers of medical images. Multi-modality medical images constitute an important source of anatomical and functional information for the diagnosis of disease, medical research and education. The potential of multi-modality imaging in providing information can be very useful and significant in biomedical research and clinical investigations. The capabilities of these application fields can be extended to provide valuable teaching, training and enhanced image interpretation support, by developing techniques supporting
the automated archiving and retrieval of images by content. Thousands of medical images in various modalities are produced daily at radiology departments. In Geneva University Hospital alone, the number of images produced by the radiology department increased to 70,000 images per day in 2007 [2]. Therefore, the need to extract the structure and content of medical images is significant, as is the importance of an efficient image retrieval system for patient care, education and research [3]. Content-based image retrieval (CBIR) is a system that extracts features to represent the image itself. The visual features used can be classified into primitive, logical and abstract features [4]. Most available systems for medical image indexing and retrieval use primitive features based on color, texture and shape. However, different medical imaging modalities reveal different characteristics of the human body. The quality of a medical image is determined by the imaging method and equipment characteristics. There are at least five factors to be considered among the quality characteristics, including contrast, blur, noise, artifacts and distortion [5]. The combination of primitive features and quality characteristics may increase the performance of a CBIR system. Texture features generally capture the information of image characteristics with respect to changes in certain directions and scales of the image. This information is beneficial for regions or images with homogeneous texture. Popular texture descriptor methods that have been used for medical image indexing and retrieval include co-occurrence matrices [6, 7], wavelets [8, 9] and the Fourier transform [10]. Shape features have often been described as visual information based on two classes: the region and the contour of the image. Contour shape-based techniques at the global level were first introduced in [11]. They are easy to compute and robust to noise but limited in discriminatory power.
The study of shape-based spectral descriptors using the Fourier and wavelet transforms was performed in [12]. Though these methodologies are robust to noise and easy to normalize, the descriptors are not inherently rotation invariant. Among region-based techniques, moment-based shape features provide a numerical shape-preserving representation that is invariant to translation, rotation and scale [13]. However, the drawback of this technique is that it relies on
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 698–703, 2011. www.springerlink.com
image segmentation. In medical image applications, automated segmentation of images remains an unsolved problem, since it depends on the content of the medical image [14]. Color features can be the most effective features for systems that employ color images. The color descriptors consist of a number of histogram descriptors, a dominant color descriptor, and a color layout descriptor [17]. This set of descriptors was designed to serve different application domains, whereas this research paper concentrates on a specialized field, namely the medical domain. The color histogram is one of the most frequently used color descriptors; it characterizes the color distribution in an image. The color histogram descriptor was defined to capture the color distribution with reasonable accuracy for image search and retrieval applications. Conventionally, the RGB (Red, Green, Blue) color space is used for indexing and querying. However, this color space does not correspond very well to human perception. Therefore, other color spaces such as HSV (Hue, Saturation, Value) [18] and Luv [19] are used as alternatives that better match human perception. These expressive visual features can be extracted at the global or local level. However, as shown in [11], shape features at the global level are weak at discriminating different shapes. Methods based on local presentation have shown promising results for image indexing and retrieval tasks. A local presentation is an image pattern that has a different value or characteristic from its immediate neighborhood and is associated with a simultaneous change of image properties (intensity, color and texture) [15]. The advantages of local presentation are its robustness with respect to noise, variability in object shape, and partial occlusions [16].
Considering that medical images have a specific composition for each modality and anatomic region, we propose to use a local-level approach and skip the segmentation step in order to speed up the analysis and increase the accuracy of the performance. This research paper is organized into four sections. The first is this introduction. The next section describes the methodology of the experimentation for the visual features of texture, shape and color. Section III presents the results of the experimentation, followed by the conclusion and discussion in the final section.
II. METHODOLOGY

Experiments are performed using 2000 medical images from various modalities, including X-ray, computed tomography (CT) scan, ultrasound (US), nuclear medicine (NM), positron emission tomography (PET), magnetic resonance imaging (MRI), optical imaging (PX) and graphics (GX). These images are taken from the ImageCLEF 2010 (www.imageclef.org) dataset. We follow the standard image indexing paradigm, in which we initially extract meaningful visual features of texture, shape, color and quality characteristics. The major steps involved in extracting the visual features can be summarized as follows:

1) Preprocessing: for texture, shape and quality-characteristic feature extraction, convert the image to grayscale if needed and resize it to 100 × 100 pixels. For the color feature, convert the RGB image to the HSV color space.

2) Local patches: partition each image into four non-overlapping pixel blocks for the texture, shape and quality-characteristic features.

3) Feature extraction: evaluate each patch with the texture and shape descriptors. For texture we analyze the contrast, correlation, energy and homogeneity of each medical image. We also measure noise and blur for the quality-characteristic evaluation. For the shape descriptor we use Hu moment invariants and Fourier transform analysis. Finally, for the color feature we use a color histogram descriptor.

4) Feature vectors for classification: we use a Support Vector Machine (SVM) to train on the feature vectors. The results of the evaluation are the precision and recall values for each feature.

A. Local Image Presentation

A local image presentation can be points, edges or small image patches [20]. The easiest way to perform local-level feature extraction is to use small patches of fixed size and location, known as partitioning of the image [1]. In this research study, each medical image is partitioned into four non-overlapping small patches, as depicted in Fig. 1.
Fig. 1 Image of a cranium that has been partitioned into four non-overlapping patches
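The preprocessing and partitioning steps above can be sketched as follows; the quadrant split into four 50 × 50 patches is an assumption about how the 100 × 100 image is divided:

```python
import numpy as np

def to_patches(image):
    """Split a (pre-resized) 100x100 grayscale image into four
    non-overlapping patches: top-left, top-right, bottom-left, bottom-right."""
    h, w = image.shape
    hh, hw = h // 2, w // 2
    return [image[:hh, :hw], image[:hh, hw:],
            image[hh:, :hw], image[hh:, hw:]]

img = np.zeros((100, 100), dtype=np.uint8)   # placeholder resized image
patches = to_patches(img)
print(len(patches), patches[0].shape)  # 4 (50, 50)
```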
B. Texture Descriptor

In this experiment we analyse texture features based on contrast, correlation, energy and homogeneity. These measures try to capture the characteristics of the image parts with respect to changes in certain directions and the scale of those changes. The gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix, is a statistical method of examining texture that considers the spatial relationship of pixels. The GLCM characterizes the texture of an image by counting how often pairs of pixels with specific values occur in a specified spatial relationship; statistical measures are then extracted from this matrix. The texture features are derived from the GLCM using the following formulas.

1) Contrast: measures the local variations in the gray-level co-occurrence matrix:

$\text{Contrast} = \sum_{i,j} |i - j|^2 P_{ij}$  (1)

2) Correlation: measures the joint probability of occurrence of the specified pixel pairs:

$\text{Correlation} = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j) P_{ij}}{\sigma_i \sigma_j}$  (2)

3) Energy: provides the sum of squared elements in the GLCM; also known as uniformity or the angular second moment:

$\text{Energy} = \sum_{i,j} P_{ij}^2$  (3)

4) Homogeneity: measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal:

$\text{Homogeneity} = \sum_{i,j} \frac{P_{ij}}{1 + |i - j|}$  (4)

where $i$ is the row number, $j$ the column number, $P_{ij}$ the normalized value in cell $(i, j)$, and $N$ the number of rows or columns.

C. Shape Descriptor

In this paper we analyse the shape descriptor using the Hu moment invariant method. Medical images are usually complex, highly variable, and differ only subtly from one another in visual appearance, so computing the shape at the global level seems inappropriate. Instead, we calculate the shape feature in each of the four local patches to access more detailed information about the medical image without any segmentation.

1) Hu Moment Invariant Method: for each patch, the moment $m_{pq}$ of order $(p+q)$ is given as

$m_{pq} = \sum_{x} \sum_{y} x^p y^q f(x, y)$  (5)

Computing the coordinates of the center of mass of each patch,

$\bar{x} = m_{10}/m_{00}, \qquad \bar{y} = m_{01}/m_{00}$  (6)

the central moments can be defined as

$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x, y)$  (7)

When a scaling normalization is applied, the central moments change to

$\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma}$  (8)

where the normalization factor is $\gamma = (p+q)/2 + 1$. In particular, Hu described a set of six moments that are rotation, scaling and translation invariant, while the seventh invariant is skew invariant. The seven moments are given below:

$\phi_1 = \eta_{20} + \eta_{02}$
$\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$
$\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$
$\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$
$\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$
$\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$
$\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$  (9)
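The four GLCM statistics of Eqs. (1)-(4) can be sketched in NumPy as follows. This is an illustrative implementation, not the authors' code; the single-offset co-occurrence direction and the quantization to 8 gray levels are assumptions.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Compute a normalized GLCM for offset (dx, dy) and the four
    statistics used here: contrast, correlation, energy, homogeneity."""
    # Quantize the grayscale image (0..255) to `levels` gray levels.
    q = np.clip(np.floor(img.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    P = glcm / glcm.sum()                       # normalized co-occurrence matrix
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    contrast = ((i - j) ** 2 * P).sum()                              # Eq. (1)
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (si * sj)    # Eq. (2)
    energy = (P ** 2).sum()                                          # Eq. (3)
    homogeneity = (P / (1 + np.abs(i - j))).sum()                    # Eq. (4)
    return contrast, correlation, energy, homogeneity
```

In this scheme energy and homogeneity are bounded by 1 and correlation lies in [-1, 1], which makes the four values directly comparable across patches.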
Multi-Modality Medical Images Feature Analysis
These seven moments are applied to all four patches of the image; each image therefore produces a 28-dimensional feature vector. The moments in Eq. (9) are combinations of similitude and orthogonal transformations, and are independent of position, size and orientation.

D. Color Descriptor

The color histogram describes the proportion of pixels of each color in an image in a simple and computationally effective manner. It is obtained by quantizing the image colors into discrete levels and then counting the number of times each discrete color occurs in the image. In this paper a generic color histogram is used: the image is first converted to HSV format, the HSV histogram is created, and the histogram is then normalized to produce better results. The generic color histogram descriptor is defined so as to capture the color distribution with reasonable accuracy for image search and retrieval applications.

E. Support Vector Machine (SVM) Classifier

The SVM is a binary classifier that finds the optimal linear decision surface based on the concept of structural risk minimization. The input to an SVM algorithm is a set $\{(x_i, y_i)\}$ of labeled training data, where $x_i$ is the data and $y_i$ is the label. The SVM uses a hyperplane to separate the two classes by maximizing the margin to the support vectors. The hyperplane has the form

$f(x) = \sum_i \alpha_i y_i \langle x_i, x \rangle + b$  (10)

where the coefficients $\alpha_i$ and the bias $b$ in Eq. (10) are the solutions of a quadratic programming problem [21].

For multi-class classification, where more than two classes must be distinguished, there are two general approaches: one-against-one and one-against-all. Details of these approaches can be found in [22].

Fig. 1 Texture descriptor values of precision and recall per modality (CT, GX, XR, MR, NM, US, PET, PX)

Fig. 2 Shape descriptor values of precision and recall per modality

Table 1 Percentage of Correctly Classified Modality

Modality   CT  GX  XR  MR  NM  US  PET  PX
Texture    56  91  49  17  26  38  54   18
Shape      49  89  13  25  52  64  27   46
Color      30  85  29  42  69  41  38   52

Fig. 3 Color descriptor values of precision and recall per modality
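The one-against-all strategy mentioned above can be sketched as follows. A least-squares linear scorer stands in for the SVM solver here, so the sketch illustrates only the multi-class voting logic; all function and variable names are ours.

```python
import numpy as np

def one_vs_rest_train(X, y, classes):
    """One-against-all: train one binary linear scorer per class.
    A least-squares linear model stands in for the SVM."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    W = {}
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)             # +1 for class c, -1 otherwise
        W[c], *_ = np.linalg.lstsq(Xb, t, rcond=None)
    return W

def one_vs_rest_predict(X, W):
    """Predict the class whose scorer responds most strongly."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    scores = np.column_stack([Xb @ W[c] for c in W])
    labels = list(W)
    return np.array([labels[k] for k in scores.argmax(axis=1)])
```

One-against-one instead trains a scorer per class pair and lets the pairwise winners vote; for K classes it needs K(K-1)/2 scorers rather than K.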
Fig. 4 Examples of multi-modality medical images taken from ImageCLEF
III. RESULTS

We analysed the visual features of texture, shape and color. The experiments involved 2000 medical images from eight different modalities: X-ray (XR), computed tomography (CT), ultrasound (US), nuclear medicine (NM), positron emission tomography (PET), magnetic resonance imaging (MRI), optical imaging (PX) and graphics (GX), as depicted in Fig. 4. The experiment extracted the texture and shape visual features at the local level and the color feature using the color histogram. Performance is evaluated by the correctness rate, i.e. the percentage of correctly classified images over the total number of images, and by the precision and recall of each modality. We used an SVM classifier with a polynomial kernel of degree 1; the training and testing datasets are based on 10-fold cross-validation. Table 1 shows the classification accuracy for each modality. From the table we can see that GX has the highest value for every feature: GX represents only charts and graphs, which are not complex images. In contrast, XR and MR have low values due to the complexity of those images, and the differences between modalities are subtle. Figures 1, 2 and 3 show the precision and recall values for the texture, shape and color descriptors. CT, XR, US, PET and PX are well suited to classification with the texture descriptor, as depicted in Fig. 1. The moment shape descriptor gives results similar to texture but with higher accuracy, as shown in Fig. 2, indicating that the local-level shape descriptor represents multi-modality medical images better than the texture descriptor. Finally, the color descriptor in Fig. 3 shows that GX, PX, NM and PET have higher values; these modalities use more colors, whereas the other modalities are essentially grey-scale.
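The per-modality precision and recall plotted in Figs. 1-3 can be computed from predicted labels as in this sketch; the integer label encoding and the function name are assumptions.

```python
import numpy as np

def per_class_precision_recall(y_true, y_pred, classes):
    """Per-class precision and recall from true and predicted labels."""
    prec, rec = {}, {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives
        fp = np.sum((y_pred == c) & (y_true != c))   # false positives
        fn = np.sum((y_pred != c) & (y_true == c))   # false negatives
        prec[c] = tp / (tp + fp) if tp + fp else 0.0
        rec[c] = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec
```

The per-class recall equals the diagonal of Table 1 divided by the number of test images of that modality, so the table and the figures are two views of the same confusion matrix.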
IV. CONCLUSIONS

The results show that different medical image modalities have different characteristics in terms of which feature values are significant. For example, GX mostly has the highest precision and recall values, because such images are not complex compared to other modalities. The results also show that the local-level shape descriptor outperforms the texture descriptor for medical images, possibly because the textures of medical images differ only subtly between modalities, especially for CT, MR, XR and US. In future work we will apply interest points at the local level, which give more detailed information than patch-level analysis.
REFERENCES

[1] Haux R., "Health information systems: past, present and future", Int J Med Inform 2006; 75:268-81
[2] Henning Müller, Nicolas Michoux, David Bandon, Antoine Geissbuhler, "A review of content-based image retrieval systems in medical applications - clinical benefits and future directions", International Journal of Medical Informatics, Vol. 73, No. 1 (2003), pp. 1-23
[3] William Hsu, Sameer Antani, L. Rodney Long, Leif Neve, and George R. Thoma, "SPIRS: A web-based image retrieval system for large biomedical databases", Int J Med Inform 2009 April; 78(Suppl 1): S13-S24
[4] Eakins, J. and Graham, M., "Content-based image retrieval", Joint Information Systems Committee, 1999
[5] Perry Sprawls, "The Physical Principles of Medical Imaging, 2nd Ed.", Wolters Kluwer Law & Business, 1987
[6] Mueen, A., Zainuddin, R. and Baba, M.S., "Automatic multilevel medical image annotation and retrieval", Journal of Digital Imaging 2008, pp. 290-295
[7] Kuo, W.J., Chang, R.F., Lee, C.C., Moon, W.K. and Chen, D.R., "Retrieval technique for the diagnosis of solid breast tumors on sonogram", Ultrasound in Medicine & Biology 2002, pp. 903-909
[8] Chen, Y.T. and Tseng, D.C., "Wavelet-based medical image compression with adaptive prediction", Computerized Medical Imaging and Graphics 2007, pp. 1-8
[9] Pizurica, A., Philips, W., Lemahieu, I. and Acheroy, M., "A versatile wavelet domain noise filtration technique for medical imaging", IEEE Transactions on Medical Imaging 2003, pp. 323-331
[10] Sumanaweera, T. and Liu, D., "Medical image reconstruction with the FFT", GPU Gems 2005, pp. 765-784
[11] Flickner, M., Sawhney, H., Niblack, W., Ashley, J., Huang, Q., Dom, B., Gorkani, M., Hafner, J., Lee, D., Petkovic, D. et al., "Query by image and video content: The QBIC system", 2001, pp. 264
[12] Zhang, D. and Lu, G., "A comparative study on shape retrieval using Fourier descriptors with different shape signatures", pp. 1-9
[13] Zhu, Y., De Silva, L.C. and Ko, C.C., "Using moment invariants and HMM in facial expression recognition", Pattern Recognition Letters 2002, pp. 83-91
[14] Zheng, X., Zhou, M.Q. and Wang, X.C., "Interest point based medical image retrieval", Lecture Notes in Computer Science 2008, pp. 118-124
[15] Tuytelaars, T. and Mikolajczyk, K., "Local invariant feature detectors: A survey", Foundations and Trends in Computer Graphics and Vision 2008, pp. 177-280
[16] Setia, L., Teynor, A., Halawani, A. and Burkhardt, H., "Grayscale medical image annotation using local relational features", Pattern Recognition Letters 2008, pp. 2039-2045
[17] Manjunath, B.S., Ohm, J.-R., Vasudevan, V.V. and Yamada, A., "Color and texture descriptors", IEEE Trans. CSVT, Vol. 11, Issue 6, pp. 703-715, June 2001
[18] J.R. Smith and S.F. Chang, "VisualSEEK: a fully automated content-based image query system", in The Fourth ACM International Multimedia Conference and Exhibition, Boston, MA, USA, 1996
[19] S. Sclaroff, L. Taycher and M. La Cascia, "ImageRover: a content-based browser for the world wide web", IEEE Workshop on Content-Based Access of Image and Video Libraries, Puerto Rico, 1997
[20] Tuytelaars, T. and Mikolajczyk, K., "Local invariant feature detectors: A survey", Foundations and Trends in Computer Graphics and Vision 2008, pp. 177-280
[21] Vapnik, V., "Structure of statistical learning theory", Computational Learning and Probabilistic Reasoning 1996, pp. 3
[22] Hsu, C.W. and Lin, C.J., "A comparison of methods for multiclass support vector machines", IEEE Transactions on Neural Networks 2002, pp. 415-425
Parametric Dictionary Design Using Genetic Algorithm for Biomedical Image De-noising Application

H. Nozari1, G.A. Rezai Rad1, M. Pourmajidian2, and A.K. Abdul-Wahab2

1 Iran University of Science and Technology, Electrical Engineering, Tehran, Iran
2 University of Malaya, Department of Biomedical Engineering, Kuala Lumpur, Malaysia
Abstract— Due to their potential to generate sparse approximations of signals, overcomplete representations have become a particularly interesting topic in signal processing theory. Choosing an appropriate dictionary is a crucial point in sparse approximation methods. Previous work has shown that an incoherent dictionary is suitable for this purpose, but recently parametric dictionaries, built from a set of parametric functions, have gained much more interest. This paper discusses the use of a characteristic of equiangular tight frames (the l2 norm of each frame column) as a new objective function for finding the best parameters. Using this characteristic eliminates the iterative method used in previous experiments. To minimize the new objective we use a Genetic Algorithm (GA). By applying this algorithm to Multi-scale Gabor Functions (MGF), we reach better results in comparison with other studies. We applied both the initial and the designed dictionary to a biomedical image case study and compared the results to show the advantages of this method in biomedical applications. Keywords— parametric dictionary (PD), equiangular tight frame (ETF), MGF, genetic algorithm, biomedical CT image.
I. INTRODUCTION

The sparse representation of signals has led to improvements in many applications such as coding, denoising and feature extraction, and there has been growing interest in this method in signal and image processing in recent years. Sparse and redundant representation modeling of data assumes that a signal can be described as a linear combination of a few atoms from a pre-specified dictionary [1], where the linear coefficients are sparse. In this case we use a matrix $D \in \mathbb{R}^{n \times K}$ as a dictionary, either to compactly express or to efficiently approximate a signal. Let $y \in \mathbb{R}^n$ be the given signal and $x \in \mathbb{R}^K$ the coefficient vector. The sparse approximation problem can be expressed as

$\min_x \|x\|_0 \quad \text{subject to} \quad y = Dx$  (1)

where $\|\cdot\|_0$ is the $\ell_0$ norm, counting the nonzero entries of a vector. Many algorithms have been proposed to reach an approximate solution, and they can be categorized as greedy algorithms [1],[2] and relaxation methods [4]. Choosing the dictionary D, which has to be overcomplete, is one of the most important fundamental steps [5]. An overcomplete dictionary D that leads to sparse representations can either be chosen as a pre-specified set of functions [6]-[9] or designed by adapting its content to fit a given set of signal examples [10]. Using a pre-specified dictionary leads to simple and fast algorithms for evaluating the sparse representation. In this paper we use a method proposed recently in [12]: the mathematical representation of each dictionary column (atom) is the same parametric function with different parameter values, and the objective of the dictionary design is to adjust these parameters so as to make the dictionary as close as possible to a Grassmannian frame [13]. Our contribution is a new objective function, based on the $\ell_2$ norm of each column of the Gram matrix and minimized with a genetic algorithm, to find the parameters that make the dictionary as close to a Grassmannian frame as possible. This method has some advantages over previous methods: it removes the lengthy iteration and achieves better parameters, and hence needs less time to run. Furthermore, with the new objective function the coherence between the dictionary atoms approaches that of the Grassmannian frame. We demonstrate our dictionary design in a CT image denoising application; the results show improved images using this type of dictionary design.
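A greedy pursuit of the kind categorized above can be sketched as follows: it approximately solves the sparse approximation problem of Eq. (1) by repeatedly selecting the atom most correlated with the residual, in the spirit of matching pursuit [1]. The fixed iteration count used as a stopping rule is an assumption.

```python
import numpy as np

def matching_pursuit(y, D, n_iter=5):
    """Greedy sparse approximation of y over a dictionary D with
    unit-norm columns; returns the coefficient vector and residual."""
    x = np.zeros(D.shape[1])
    r = y.astype(float).copy()
    for _ in range(n_iter):
        k = np.argmax(np.abs(D.T @ r))   # atom best matching the residual
        c = D[:, k] @ r                  # projection coefficient
        x[k] += c
        r -= c * D[:, k]                 # update the residual
    return x, r
```

Each iteration can only shrink the residual, which is why such greedy methods give a fast, if suboptimal, surrogate for the combinatorial problem (1).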
II. TIGHT FRAMES

A family $\{\varphi_k\}$ is a frame of the Hilbert space $H$ if there exist constants $A, B > 0$ such that, for every $y \in H$ [6],

$A\|y\|^2 \le \sum_k |\langle y, \varphi_k \rangle|^2 \le B\|y\|^2$  (2)

where the usual Hermitian inner product is denoted $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ is the associated norm. When $A = B$ the frame is tight, and a unit-norm tight frame is obtained when additionally $\|\varphi_k\| = 1$.
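Eq. (2) can be illustrated numerically with the standard three-vector unit-norm tight frame of R^2 (three unit vectors at 120 degrees); this example is ours, not from the paper.

```python
import numpy as np

# Three unit vectors at 120-degree spacing: a unit-norm tight frame of R^2
# with frame bounds A = B = 3/2.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
Phi = np.stack([np.cos(angles), np.sin(angles)])   # 2 x 3 frame matrix

def frame_bounds(Phi):
    """A and B in Eq. (2) are the extreme eigenvalues of Phi Phi^T."""
    eig = np.linalg.eigvalsh(Phi @ Phi.T)
    return eig.min(), eig.max()
```

Since both eigenvalues equal 3/2, the sum of squared inner products in (2) is exactly (3/2)||y||^2 for every y, confirming tightness.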
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 704–707, 2011. www.springerlink.com
A. Equiangular Tight Frames (ETF)

Consider the frame matrix $\Phi = [\varphi_1 \; \varphi_2 \; \cdots \; \varphi_K]$. Defining $G = \Phi^T \Phi$ as the Gram matrix [15], the diagonal entries of the Gram matrix are the squared norms of the frame vectors, and the off-diagonal entries are the inner products between distinct frame vectors. An Equiangular Tight Frame (ETF) can be viewed as a set of unit vectors in a Hilbert space such that the absolute inner products between pairs of vectors are identical and minimal [16]. For a $d \times K$ frame, the largest absolute inner product between two vectors is no smaller than [12]

$\mu = \sqrt{\frac{K - d}{d(K - 1)}}$  (3)

Unfortunately, an ETF does not exist for every combination of $(d, K)$. In [15] it is shown that, by alternating projection, one can find a matrix that is close to an ETF. The aim of parametric dictionary design is to find a set of parameters that minimizes the distance between the dictionary's Gram matrix and that of an ETF.

B. Parametric Dictionary (PD)

In this paper we are interested in designing a dictionary that is as close to an ETF as possible. A parametric dictionary (PD) can be applied as a solution, considering each column of $D = [d_1, \ldots, d_K]$ to be an elementary function, such as a Gabor function or a wavelet, with a number of parameters. Let $\theta = (\theta_1, \ldots, \theta_P)$ be the parameters and $g(\cdot\,; \theta)$ the parametric function. Each column of the dictionary can then be written as $d_k = g(\theta_1^k, \ldots, \theta_P^k)$; thus we have $P$ parameters per column and $PK$ parameters for the entire dictionary. The purpose is to find a dictionary with minimum mutual coherence. We define $G_D = D^T D$ and $G_{ETF}$ as the matrix whose diagonal elements are all 1 and whose off-diagonal elements have absolute value $\mu$ (defined in (3)); each entry of $G_D$ is the coherence between the corresponding atoms of the dictionary. To minimize the distance between the two matrices, [12] defines $\|G_D - G_{ETF}\|_F$ ($\|\cdot\|_F$ is the Frobenius norm) as the objective function and then uses a form of alternating minimization with gradient descent to update the parameters. In this paper we define a different objective function, based on the $\ell_2$ norm of each column of the Gram matrix: if $G_D = [g_1, g_2, \ldots, g_K]$, the objective function is

$f(\theta) = \sum_{k=1}^{K} \left( \|g_k\|_2^2 - \big(1 + (K - 1)\mu^2\big) \right)^2$  (4)

For all columns of $G_{ETF}$ we have $\|g_k\|_2^2 = 1 + (K - 1)\mu^2$. This invariant property of the Gram matrix lets us eliminate the iterative method proposed in [12]. To minimize this objective function, we propose a Genetic Algorithm (GA) approach instead.

C. Parameter Optimization Using GA

Any optimization algorithm can be used to update the parameters. Optimization methods based on the gradient of the objective function share one requirement: an intelligent initial guess is needed for good convergence. In this paper we use a Genetic Algorithm (GA) instead. The GA repeatedly modifies a population of individuals to obtain solutions: at each step it randomly selects individuals from the current population to be parents and uses them to produce the children of the next generation [16]. The number of variables in the optimization process depends on the number of parameters contributing to atom formation. The initial population must have enough diversity to find the best parameters. A parameter can be selected if it yields a real value of the function, and defining a boundary for each parameter increases the rate of convergence. Let $\theta_i \in [\theta_i^{min}, \theta_i^{max}]$, $i = 1, 2, \ldots$, be the bounded parameters. To define the initial population, we sample each $[\theta_i^{min}, \theta_i^{max}]$ range with a uniform distribution. The next section explains how the genetic algorithm is used.
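The bound of Eq. (3) and the mutual coherence of a dictionary can be checked numerically as in this sketch; the function names are ours.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms,
    i.e. the largest off-diagonal entry of the Gram matrix."""
    Dn = D / np.linalg.norm(D, axis=0)   # normalize atoms
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(d, K):
    """Lower bound on coherence from Eq. (3); attained exactly by ETFs."""
    return np.sqrt((K - d) / (d * (K - 1)))
```

A randomly drawn dictionary sits well above the bound, which is exactly the gap the parametric design tries to close.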
III. MULTI-SCALE GABOR FUNCTION (MGF)

Multi-scale Gabor functions create a basis for $L^2(\mathbb{R})$, defined through translation and modulation processes at different scales [1], so the Gabor atom is a parametric function. Its mathematical form is

$g_{u,s,\xi,\phi}(t) = \frac{K_{u,s,\xi,\phi}}{\sqrt{s}}\, g\!\left(\frac{t - u}{s}\right) \cos\!\big(2\pi\xi (t - u) + \phi\big)$  (5)

where $g$ is a window function (typically Gaussian). There are four parameters in this function, $u$, $s$, $\xi$ and $\phi$, which respectively control time shifting, scaling, frequency and phase; the constant $K_{u,s,\xi,\phi}$ is used for normalization.

A. Simulation and Results

The objective function in (4) is set as the fitness function of the genetic algorithm. To define the initial population, we uniformly sample the parameters. This function is well defined for all reasonable parameter values, but to improve the rate of convergence we bound each parameter as follows:
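A Gabor atom of the form (5) and the resulting dictionary can be generated as in this sketch. The Gaussian envelope and the example parameter values are assumptions, and the normalization step absorbs the constant $K/\sqrt{s}$.

```python
import numpy as np

def mgf_atom(n, u, s, xi, phi):
    """One Multi-scale Gabor atom of length n, Eq. (5): a Gaussian
    window translated by u and scaled by s, modulated at frequency xi
    with phase phi, then normalized to unit l2 norm."""
    t = np.arange(n)
    g = np.exp(-np.pi * ((t - u) / s) ** 2) * np.cos(2 * np.pi * xi * (t - u) + phi)
    return g / np.linalg.norm(g)

def mgf_dictionary(n, params):
    """Stack atoms column-wise: each column is d_k = g(t; u, s, xi, phi)."""
    return np.column_stack([mgf_atom(n, *p) for p in params])
```

Feeding the GA's candidate parameter vectors through `mgf_dictionary` is what turns a point in parameter space into a dictionary whose Gram matrix the fitness function evaluates.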
$\theta_i^{min} \le \theta_i \le \theta_i^{max}, \quad i = 1, 2, \ldots$  (6)

The values of these settings depend on the type of signal, speech or image (4, 256 and 5 in our study). Another important GA option is reproduction, which specifies how the genetic algorithm creates children for the next generation. The number of individuals that survive into the next generation (the elite count) is set so as to enable the GA to search a broader space. Mutation children are created by making small random changes to individuals in the population; to this end we specify a Gaussian mutation function. For crossover we use the arithmetic function, which creates children that are the weighted arithmetic mean of two parents. In this work we set the dictionary dimensions to 64 and 256 and use 500 generations to find the minimum of the objective function. The result of the GA is plotted in Fig. 1: the objective value reached 2.5894 after 500 generations. To compare the initial dictionary with the designed dictionary, which is close to an ETF, we plot the l2 norm of each column of each dictionary in Fig. 2. As the norms show, the resulting dictionary is closer to an ETF than the initial arbitrary dictionary.

Fig. 1 Convergence of the objective function to 2.5894 after 500 generations

Fig. 2 Norm of each column of the initial and designed dictionaries and of the Grassmannian frame (dashed line)

We implemented the two algorithms on the same dictionary and compared the results for different redundancies $K/d$. Fig. 3 plots the distance between the Gram matrices of the designed dictionary and the ETF dictionary; the GA was run 10 times and the results averaged. The proposed objective function designs a dictionary that is close to the target Gram matrix. Using the l2 norm to define the objective function removes the effects of the iterative method and therefore yields better performance.

Fig. 3 Distance from the Gram matrix for different redundancies (the result of our algorithm is plotted with o)
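The GA configuration described above (uniform bounded initialization, elitism, arithmetic crossover and Gaussian mutation) can be sketched generically as follows; the operator settings and function names are illustrative, not the paper's exact configuration.

```python
import numpy as np

def genetic_minimize(f, bounds, pop_size=64, generations=200, elite=4, seed=0):
    """Minimal real-coded GA: uniform initial population within
    per-parameter bounds, elitism, arithmetic crossover of parents
    drawn from the better half, and Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = np.array([f(p) for p in pop])
        pop = pop[np.argsort(fit)]                    # sort by fitness
        children = [pop[:elite]]                      # elite survive unchanged
        while sum(len(c) for c in children) < pop_size:
            i, j = rng.integers(0, pop_size // 2, 2)  # parents from better half
            w = rng.random()
            child = w * pop[i] + (1 - w) * pop[j]     # arithmetic crossover
            child += rng.normal(0, 0.05 * (hi - lo))  # Gaussian mutation
            children.append(np.clip(child, lo, hi)[None, :])
        pop = np.vstack(children)[:pop_size]
    fit = np.array([f(p) for p in pop])
    return pop[fit.argmin()], fit.min()
```

In the paper's setting, `f` would evaluate the objective (4) on the dictionary built from the candidate parameters; here any bounded objective works.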
IV. CASE STUDY: BIOMEDICAL CT IMAGE

In this section we show the advantage of this type of dictionary design in a CT image denoising application, using the algorithm presented in [15]. The size of the dictionary is set to 8 x 16. The denoising process is run 20 times for each dictionary (initial and designed), and the resulting Peak Signal to Noise Ratio (PSNR) values are averaged. Table 1 shows the PSNR values for the different types of dictionary design.
Table 1 CT image denoising results with different types of dictionary design
V. CONCLUSION

In this paper we studied the problem of parametric dictionary (PD) design for sparse representations. By minimizing the distance between the l2 norms of the columns of the designed dictionary's Gram matrix and those of the target Gram matrix, the resulting dictionary has lower mutual coherence than the initial dictionary. By means of this constant characteristic of the Gram matrix, we eliminate the iterative method used in previous work and obtain better results.
REFERENCES

1. S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397-3415, 1993.
2. T. Blumensath and M. Davies, "Gradient pursuits," IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2370-2382, Jun. 2008.
3. S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Scientif. Comput., vol. 20, no. 1, pp. 33-61, 1998.
4. R. Rubinstein, A.M. Bruckstein, and M. Elad, "Dictionaries for sparse representation modeling," Proceedings of the IEEE - Special Issue on Applications of Sparse Representation & Compressive Sensing, vol. 98, no. 6, pp. 1045-1057, April 2010.
5. S. Mallat, A Wavelet Tour of Signal Processing, 3rd ed. New York: Academic, 2009.
6. E. J. Candès and D. L. Donoho, Curvelets: A Surprisingly Effective Nonadaptive Representation for Objects With Edges. Nashville, TN: Vanderbilt University Press, 1999, Curve and Surface Fitting: Saint-Malo, pp. 105-120.
7. M. N. Do and M. Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2091-2106, Dec. 2005.
8. E. Le Pennec and S. Mallat, "Sparse geometric image representations with bandelets," IEEE Trans. Image Process., vol. 14, no. 4, pp. 423-438, Apr. 2005.
9. M. Aharon, M. Elad, and A. M. Bruckstein, "The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311-4322, Nov. 2006.
10. M. Yaghoobi, T. Blumensath, and M. Davies, "Dictionary learning for sparse approximations with the majorization method," IEEE Trans. Signal Process., vol. 57, no. 6, pp. 2178-2191, 2009.
11. T. Strohmer and R. Heath, "Grassmannian frames with applications to coding and communication," Appl. Comput. Harmon. Anal., vol. 14, no. 3, pp. 257-275, 2003.
12. J. Tropp, I. Dhillon, R. Heath, Jr., and T. Strohmer, "Designing structured tight frames via an alternating projection method," IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 188-209, 2005.
13. M. Sustik, J. Tropp, I. Dhillon, and R. Heath, "On the existence of equiangular tight frames," Linear Algebra Appl., vol. 426, no. 2-3, pp. 619-635, 2007.
14. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
15. M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736-3745, December 2006.
Quantification of Inter-crystal Scattering and Parallax Effect in Pixelated High Resolution Small Animal Gamma Camera: A Monte Carlo Study

F. Adibpour1,2, M.R. Ay1,2,3, S. Sarkar1,2, and G. Loudos4

1 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
2 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran
3 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
4 Department of Medical Instrument Technology, Technological Educational Institute, Athens, Greece
Abstract— Small animal SPECT imaging is a rapidly expanding field, since high resolution in vivo measurements can readily be achieved. Because dedicated high resolution scanners employ smaller crystal array dimensions, the main resolution-degrading factors are inter-crystal scatter (ICS) and the parallax effect (also called penetration). The foremost goal of the present study is to quantify both penetration and inter-crystal scatter as a function of crystal material for various pixel sizes (0.5 to 3 mm in 0.5 mm steps) and crystal lengths. To this end, the GATE simulation package was employed. Different scintillators, including NaI, CsI and LaBr3, were exposed to a 140 keV pencil beam at both perpendicular and oblique angles. When irradiation is perpendicular to the detector array, the percentage of events undergoing ICS was highest and lowest for pixel sizes of 0.5 x 0.5 and 3 x 3 mm2, for crystal materials LaBr3 and CsI respectively. The results also revealed that as the angle of non-perpendicular incidence increases, the number of events detected in the central pixel drops significantly. According to the results, GATE is a useful tool for investigating photon interactions in gamma camera detectors in order to precisely model ICS and penetration behavior, so that these effects can be incorporated during image reconstruction for resolution recovery purposes.
Keywords— Inter-crystal scattering, Penetration, GATE, Small animal imaging, Pixelated scintillator.

I. INTRODUCTION

Preclinical imaging is an essential tool for many research purposes in nuclear medicine. Since small animal imaging requires a high resolution imaging modality, it has been realized that conventional systems do not meet its requirements in spatial resolution and sensitivity [1]. Dedicated systems based on a position sensitive PMT coupled to a pixelated detector provide an excellent resolution-sensitivity tradeoff for small objects. Nowadays Monte Carlo simulations (MCS) are widely performed in nuclear medicine; they are particularly useful when experimental measurements are impossible [2]. Simulations have shown that the use of a small pixel size can lead to a substantial increase in spatial resolution. However, smaller pixel sizes bring disadvantages: the fraction of penetrated and scattered photons increases, which may have a detrimental impact on the quality of the final image, because some photons are detected in crystals that do not correspond to the original position from which the photon was emitted, leading to remarkable errors in position estimation (mispositioning) [3]. Although both inter-crystal scattering (ICS) and penetration have been matters of concern in PET imaging [4], the small crystal size in high resolution small animal gamma cameras makes the imager suffer from ICS events even at low photon energies such as 140 keV [5]. The general purpose of this study is first to validate the scanner model. The validated model is then used to quantify the effects of penetration and ICS in terms of crystal material, size and length by means of Monte Carlo simulation. The relative contributions of ICS and parallax are investigated for three different scintillation materials and varying angles of incidence with the isotope Tc-99m. Such knowledge will be useful for: (i) simulating gamma camera systems; (ii) optimizing and designing gamma camera systems; and (iii) modeling photon transport during iterative reconstruction in SPECT in order to improve the quality of reconstructed images.
II. MATERIALS AND METHODS

A. Monte Carlo Simulation

MCS is a very appropriate technique for studying particle transport, since the underlying physics is well known and the results are calculated from basic physical principles; hence MCS is an accurate and general tool for such simulations. A drawback of MCS is the long computation time that may be required: often many millions of random events have to be simulated to obtain results with a sufficiently low noise level. The GATE Monte Carlo package version 4.0.0 was used in this study. GATE is an acronym for "Geant4 Application for Tomographic Emission" and has been developed by the OpenGATE collaboration. The code has been extensively validated in several publications.
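The kind of stochastic transport step underlying such simulations can be illustrated with a toy depth sampler. This is not GATE, just the exponential attenuation law that governs where photons interact in a medium.

```python
import numpy as np

def sample_interaction_depths(mu, n, rng):
    """Sample n photon interaction depths in a medium with linear
    attenuation coefficient mu (1/mm): depths are exponentially
    distributed with mean free path 1/mu."""
    return rng.exponential(1.0 / mu, size=n)
```

The fraction of sampled depths exceeding a thickness L reproduces the transmission exp(-mu L), which is the elementary check that a transport code must pass.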
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 708–711, 2011. www.springerlink.com
B. Description of the Prototype Small Animal Gamma Camera
In the present study, the gamma camera consists of a crystal array and a high resolution parallel-hole collimator designed for use with technetium-tagged compounds. The crystal has an active area of 48 mm x 98 mm, with 0.2 mm of epoxy as the inter-crystal gap. The array is viewed through a 3 mm glass window, and the encapsulation is completed by 50 µm of aluminum. A 0.5 mm aluminum layer is positioned in front of the collimator, which has a 25 mm lead thickness. The collimator holes form a pattern of hexagonal cells with a radius of 0.6 mm, separated by 0.2 mm septum walls. This geometry was described accurately in GATE. The signal processing chain was modeled by the so-called digitizer. To speed up the simulation, a ±10% energy window was applied during acquisition, so only photons with energies from 126 to 154 keV are stored. The default GATE physics processes for gamma interactions (low-energy models for the photoelectric effect, Compton scattering and Rayleigh scattering) were taken into account. The simulation records the location of the hit, the type of particle, and the energy of every gamma photon that reaches the detector in an output file. Crystal arrays of NaI(Tl), CsI(Tl), CsI(Na) and LaBr3:Ce with a 50 cm2 surface area were used as pixelated scintillators. A 140 keV pencil gamma beam was used to irradiate the crystal. The model was validated through comparison with simple experimental measurements supplied by the National Technical University of Athens.
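The ±10% digitizer energy window can be expressed as a simple filter over recorded event energies; this is a sketch, and the flat array of energies stands in for whatever event-list format the simulation actually writes.

```python
import numpy as np

def energy_window(energies_kev, center=140.0, frac=0.10):
    """Keep only events inside the +/-10% window around the 140 keV
    photopeak, i.e. roughly 126 to 154 keV."""
    lo, hi = center - center * frac, center + center * frac
    e = np.asarray(energies_kev)
    return e[(e >= lo) & (e <= hi)]
```

Tightening `frac` trades sensitivity (fewer accepted events) for scatter rejection, since Compton-scattered photons lose energy and fall below the window.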
C. Position Detection Accuracy and ICS Fraction

The influence of the scintillation material on position detection accuracy and ICS was investigated. As can be seen in formula (1), the position detection accuracy (PDA) is defined as the percentage of events that are truly positioned in the irradiated crystal (the central crystal). Roughly speaking, this quantity represents the accuracy of crystal identification:

PDA = (events positioned in the irradiated crystal / total detected events) × 100    (1)

D. Classification of Detected Events

To evaluate ICS and parallax, the registered events are divided into three classes. The first category contains photons that were registered in the crystal that was irradiated; this group is called true photons. Penetration occurs when a photon, after the collimator, passes through the incident crystal without interaction and is detected in another crystal instead. The third category contains incident photons that experience one or more scatter interactions in the pixel they initially entered and subsequently undergo a photoelectric interaction in a pixel other than the primary crystal. Clearly, the second and third categories produce mispositioning of the location of the primary gamma interaction.

III. RESULTS

In order to validate the model, the scanner's spatial resolution and sensitivity as a function of source-to-collimator distance were measured experimentally and compared with simulated data in Figures 1 and 2, respectively. Simulations were carried out with a Tc-99m source consisting of a capillary of 1.1 mm diameter and 8 cm length, with the activity set to 401 µCi. Good agreement between simulated and experimental data is observed.

Fig. 1 Spatial resolution as a function of source-to-detector distance, using the capillary phantom

Fig. 2 Comparison between experimental and simulated sensitivity as a function of distance from the detector
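The true/penetration/ICS event categories described in this section can be sketched as a simple classifier over per-photon interaction histories; the record layout (`entry_pixel`, a list of `(pixel, process)` tuples) is a hypothetical stand-in for the parsed simulation output, not GATE's actual format.

```python
# Sketch: classify a detected photon as "true", "penetration" or "ICS"
# from its interaction history. Each interaction is (pixel, process);
# this record layout is hypothetical, chosen only for illustration.

def classify_photon(entry_pixel, interactions):
    """entry_pixel: pixel the photon entered after the collimator.
    interactions: ordered list of (pixel, process) tuples, with
    process in {"compton", "photoelectric", "rayleigh"}."""
    if not interactions:
        return "undetected"
    final_pixel = interactions[-1][0]
    if final_pixel == entry_pixel:
        return "true"           # registered in the crystal it entered
    if all(pix != entry_pixel for pix, _ in interactions):
        return "penetration"    # no interaction at all in the entry pixel
    return "ICS"                # scattered in entry pixel, absorbed elsewhere

print(classify_photon((2, 2), [((2, 2), "photoelectric")]))
print(classify_photon((2, 2), [((3, 2), "photoelectric")]))
print(classify_photon((2, 2), [((2, 2), "compton"), ((3, 2), "photoelectric")]))
```

For simplicity the sketch takes the last interaction pixel as the registration pixel; a real digitizer would use an energy-weighted position.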
IFMBE Proceedings Vol. 35
F. Adibpour et al.
Figure 3 shows the percentage of detected mispositioned events in different scintillators with different pixel sizes, when the photons irradiate the central pixel of the detector array perpendicularly. The percentage of photons detected in one of the adjacent pixels of the crystal array is illustrated in Figure 4. As can be seen, the quantitative variation of detected photons is negligible for incidence angles of 0 to 10 degrees.
Fig. 3 Percentage of mispositioned events in terms of pixel size for different crystal materials

Fig. 4 Percentage of detected events in the adjacent pixel of a 1×1×5 central detector array

Fig. 5 Percentage of ICS events for 1×1 pixel size in different materials as a function of crystal length
The percentage of ICS events as a function of crystal length for the candidate crystal materials is shown in Figure 5. Figure 6 shows the calculated PDA for a pixel size of 1 × 1 mm2 when the CsI detector array is irradiated at oblique incidences. The influence of crystal material on PDA as a function of pixel size at orthogonal incidence is shown in Figure 7. The results indicate that PDA moderately decreases as the pixel size is reduced. The LaBr3 detector has a lower PDA than the other crystals.
Fig. 6 PDA when the pencil beam irradiates the central CsI crystal with pixel dimensions 1×1×5 at different incidence angles
ACKNOWLEDGMENT

This work has been supported by the Research Center for Science and Technology in Medicine and Tehran University of Medical Sciences under grant no. 9762.
Fig. 7 PDA as a function of pixel size for different crystal materials at perpendicular incidence
IV. DISCUSSION

Monte Carlo modeling is the most accurate approach to calculate ICS and penetration effects. As is evident from Fig. 4, the fraction of penetrated events is negligible for gamma-ray incidence angles of 0 to 10 degrees, but penetrated events increase considerably at larger incidence angles; hence, penetration is a matter of concern at greater incidence angles. Fig. 5 reveals that increasing the crystal length has a noticeable influence on the percentage of ICS events. The greatest amount of ICS belongs to LaBr3, while the fraction of events undergoing ICS is nearly the same for the other crystal materials. Among the candidate crystals, CsI(Tl) shows satisfactory performance in the analysis of ICS and penetration effects. Although CsI provides a higher light output, the shorter decay time and better energy resolution of NaI keep it attractive as the scintillator of choice for gamma cameras.
REFERENCES

1. Loudos G, Nikita K, Giokaris N, et al. (2003) A 3D high-resolution gamma camera for radiopharmaceutical studies with small animals. Applied Radiation and Isotopes 58:501-508
2. Bollini D, Campanini R, Lanconelli N, et al. (2002) A modular description of the geometry in Monte Carlo modeling studies for Nuclear Medicine. International Journal of Modern Physics C 13:465-476
3. Van der Have F (2007) Ultra-high-resolution small-animal SPECT imaging
4. Rafecas M, Böning G, Pichler B, et al. (2003) Inter-crystal scatter in a dual layer, high resolution LSO-APD positron emission tomograph. Physics in Medicine and Biology 48:821
5. Rasouli M, Ay M, Takavar A, et al. (2009) The influence of inter-crystal scattering on detection efficiency of dedicated breast gamma camera: A Monte Carlo study. Springer, pp 2451-2454
V. CONCLUSION

In conclusion, ICS and penetration were investigated with the GATE Monte Carlo simulation package. The results indicate that GATE is a useful toolkit for investigating photon interactions in gamma-camera detectors and for accurately modeling ICS and parallax behavior. Accurate information on ICS and penetration will allow one to develop a higher-resolution imaging system by implementing resolution-recovery methods during reconstruction, which is crucial for small-animal imaging.
Corresponding Author: Mohammad Reza Ay
Institute: Tehran University of Medical Sciences
Street: Pour Sina
City: Tehran
Country: Iran
Email: [email protected]
Quantitative Assessment of the Influence of Crystal Material and Size on the Inter Crystal Scattering and Penetration Effect in Pixilated Dual Head Small Animal PET Scanner N. Ghazanfari1,2, M.R. Ay1,2,3, N. Zeraatkar5,6, S. Sarkar1,2, and G. Loudos4 1
Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran 2 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran 3 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran 4 Department of Medical Instrument Technology, Technological Educational Institute, Athens, Greece 5 Faculty of Health Sciences, University of Sydney, NSW 2006, Sydney, Australia 6 Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney, Australia
Abstract— Spatial resolution is one of the most discriminating parameters of PET scanners. In scintillator-based systems, the dimension and pitch of the crystal elements impose limitations on spatial resolution, and reducing the scintillator size increases the fraction of gamma rays that are detected in a crystal other than the one they first enter, causing inter-crystal scattering (ICS) and parallax error. In this study, we used Monte Carlo simulation to investigate and evaluate ICS and penetration effects in a pixelated dual-head PET scanner as a function of crystal material and size. The Geant4 Application for Tomographic Emission (GATE) simulation package was used. After validation of the code, it was concluded that increasing the crystal dimension decreases the percentage of ICS and penetration for all crystal materials, but around a pixel size of 2 mm there is a good trade-off between true coincidences and mispositioned events. LORs show less mispositioning for BGO crystals. Following the current results, our future work consists of incorporating the simulated effects of ICS and penetration on the image quality of dual-head animal PET scanners. Keywords— Inter-crystal scattering, parallax, GATE, animal PET.
I. INTRODUCTION

Small animal imaging is a significant tool at the disposal of biological researchers for the non-invasive study of preclinical animal models [1]. In the last two decades, commercialization of these technologies has increased significantly, owing to the utility and flexibility of the original prototype systems in providing non-invasive, repeatable and non-destructive studies for academic and industrial purposes, and to their ability to detect nanomolar to picomolar concentrations of radiolabel in in vivo imaging [2]. Moreover, improvements in the design of such prototype systems are likely to translate into clinical scanners, since they are essentially miniaturized versions of clinical PET systems [3]. Several groups are trying to develop pixelated dual-head systems, which are cost-effective and can satisfy the requirements of basic PET studies. Designing such systems poses many challenges in improving spatial resolution and in reducing ICS and penetration [4]. Many investigators and manufacturers therefore attempt to improve the spatial resolution of such systems by decreasing these deteriorating effects [2].

Penetration occurs when an incident photon passes through the crystal it hits without any interaction and is then detected in the wrong place of the detector (in a pixelated scintillator), causing what is known as parallax error. Parallax events most probably happen for photons entering the detector at a non-perpendicular angle, and they are significantly influenced by the crystal material of the detector and the incident photon energy. Since the basic operation of PET systems relies on detecting photons with an energy of 511 keV, this error is particularly substantial for such systems. Inter-crystal scattering, on the other hand, happens for both perpendicular and non-perpendicular photons that leave the crystal of first interaction after one or more Compton scattering interactions and are detected in other crystals. Both phenomena cause mispositioning in the detection of LORs, because some photons are detected in crystals that do not correspond to the original position of photon emission [5]. This mispositioning diminishes spatial resolution, and the probability of ICS and parallax intensifies especially for crystals designed with small pixel sizes to achieve the high spatial resolution that most manufacturers aim for.

To the best of our knowledge, all published studies assessing the limitations of ICS and penetration in detecting the correct position of LORs have treated photons as single photons with 511 keV energy instead of coincidence events [5-7]. It should be noted that a single-photon study, instead of coincidences, does not provide an adequate system matrix for accurate image reconstruction [8]. The aim of this study is the quantification of ICS and parallax error in coincidence mode, which can be used for the generation of an accurate system matrix. Since statistical image reconstruction methods are based on a system matrix, and this matrix is formed from LORs, exploiting photons as coincidences when organizing the LORs of the system matrix plays a key role in image reconstruction [9].

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 712–715, 2011. www.springerlink.com
II. MATERIALS AND METHODS

A. GATE Model of the System

In this study we simulated a pixelated dual-head animal PET scanner using the GATE Monte Carlo package. Monte Carlo simulation is an essential tool in emission tomography for the design of new medical imaging systems, the optimization of their performance, and the development and assessment of image reconstruction algorithms and correction techniques [10]. The basic design of the proposed tomograph consists of two detector blocks, each formed by a pixelated scintillator crystal array. It is well known that the highest achievable spatial resolution depends significantly on parameters such as crystal material, crystal size and inter-crystal material dimension. It is therefore essential to specify the smallest crystal size, with sufficient inter-crystal spacing, that is compatible with the scintillation light output of the particular crystal material [11-12]. In order to assess the influence of crystal dimension on ICS and penetration, pixel sizes ranging from 0.5×0.5 to 3×3 mm in increments of 0.5 mm (0.5:0.5:3), with 0.2 mm of epoxy as the inter-crystal spacing, were considered. The crystal materials simulated in this study were bismuth germanate (BGO), cerium-doped lutetium oxyorthosilicate (LSO) and gadolinium oxyorthosilicate (GSO).

Mispositioning and incorrect estimation of LORs depend on many factors: positron range, non-collinearity of photons, Compton scattering and random coincidences are some of them, but among all these limiting factors, ICS and penetration have a considerable effect on the mispositioning of events. To evaluate the fraction of photons undergoing ICS and penetration, we used an ideal 511 keV point source at the center of the FOV. An energy window with lower and upper thresholds of 300 keV and 650 keV, respectively, was set in the simulation, and a 6 ns coincidence window was used in all simulations. Table 1 summarizes the specifications of the system under study.

Table 1 Characteristics and geometrical dimensions of the simulated system

Detector ring diameter (mm): 100
Number of detector modules: 2
Crystal material: BGO/LSO/GSO
Crystal pixel size (mm): 0.5×0.5 to 3×3 (step 0.5)
Crystal length (mm): 10
Total effective area of scintillator (mm2): 50±0.7
Inter-crystal material: Epoxy
Inter-crystal space (mm): 0.2

B. Classification of ICS and Penetration

To classify the data stored in the ASCII output of GATE, we used in-house software developed for the classification of detected events. First, all random coincidences and all photons scattered before reaching the detector were eliminated from the simulation output. Further post-processing was conducted on the remaining events, called true coincidences, so that the mispositioned events under analysis were caused only by ICS or by ICS plus penetration. For each simulation, true coincidence events were evaluated according to the position of the registered LOR relative to the volume defined by the two detection elements (pixels). Each pair of detector pixels defines a tube of response (TOR) connecting the two pixels, rather than a line. The software then examined the position of the point source: if it lay inside the TOR, the event was a purely true coincidence; if not, it was a mispositioned coincidence.

Since GATE records Compton events, the mispositioned events were classified into three groups according to what happens to each single photon of a coincidence event. If one or both singles undergo penetration only, the event belongs to Group-1; Group-1 thus contains mispositioned events that suffer penetration but are free from ICS. If one of the two singles is affected by ICS and the other is not (the other may undergo penetration or no mispositioning at all), the event belongs to Group-2. If both singles undergo ICS (possibly with penetration as well), the event is categorized into Group-3. Figure 1 illustrates this categorization of coincidence events, which distinguishes correctly registered LORs (purely true coincidences) from mispositioned LORs among the true coincidences.
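The TOR test and the three-group labeling described above can be sketched as follows; the geometry helper and the per-single ICS/penetration flags are hypothetical stand-ins for the in-house software, which is not public.

```python
import numpy as np

# Sketch: decide whether the point source lies inside the tube of
# response (TOR) between two pixel centers, and label a mispositioned
# coincidence as Group-1/2/3 from per-single flags. The flag layout
# is hypothetical, chosen only for illustration.

def point_to_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (all 3D points)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def is_purely_true(source, pix1, pix2, tor_radius):
    """True if the source lies inside the TOR joining the two pixels."""
    return point_to_segment_distance(source, pix1, pix2) <= tor_radius

def group_label(single1, single2):
    """Each single is a dict with boolean 'ics' and 'penetration' flags."""
    n_ics = single1["ics"] + single2["ics"]
    if n_ics == 0:
        return "Group-1"   # penetration only, free from ICS
    if n_ics == 1:
        return "Group-2"   # exactly one single undergoes ICS
    return "Group-3"       # both singles undergo ICS

src = [0.0, 0.0, 0.0]                      # point source at FOV center
print(is_purely_true(src, [-50, 0.1, 0], [50, -0.1, 0], tor_radius=0.5))
print(group_label({"ics": False, "penetration": True},
                  {"ics": True, "penetration": False}))
```

Using a tube rather than a line gives the test a tolerance equal to the pixel half-width, which is exactly what makes a coincidence "purely true" here.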
Fig. 1 The algorithm for classification of coincidences

III. RESULTS
After each data acquisition, post-processing analysis was performed on the output of the MC simulation using the in-house software. Various dimensions and crystal materials were considered in order to quantify the percentage of mispositioned events due to ICS and penetration. Figures 2a, b and c show, as a function of crystal dimension for BGO, LSO and GSO respectively, the proportion of mispositioned events (ICS-P coincidences) as a fraction of true coincidences, broken down into events registered in the wrong position without any Compton interaction in the detector (Group-1), events in which exactly one single photon of the coincidence suffers a Compton scattering (Group-2), and events in which both singles do (Group-3), so that the impact of the crystal material on the output can be observed.
Fig. 2 Percentage ratio of mispositioned events to true events for different crystal materials as a function of pixel size: (a) BGO, (b) LSO, (c) GSO

Fig. 3 Comparison of mispositioned events for different crystal materials
IV. DISCUSSION
As Figures 2a, b and c reveal, increasing the crystal dimension significantly decreases the total number of mispositioned events for all crystal materials. Among all dimensions, the 0.5 mm pixel size has the highest percentage of mispositioned coincidences and the 3.0 mm dimension has the lowest, because for small dimensions the probability of a photon entering other pixels intensifies. As can also be seen in Figures 2a, b and c, with the increase of the crystal dimension from 0.5×0.5 to 3×3 mm, the coincidences in Group-3 (both singles suffering a Compton effect) show no significant variation. Figure 3 illustrates the behavior of the ICS-P coincidences and of the coincidence events whose LORs were registered at an incorrect position for BGO, LSO and GSO. The quantitative trend of mispositioned events (ICS-P coincidences) for LSO and GSO is more or less the same, but for BGO the number of mispositioned events, especially for larger crystal pixel sizes, is lower: BGO has a higher stopping power, so more of the 511 keV annihilation photons are stopped in the crystal and the percentage of mispositioned events decreases.
REFERENCES

[1] Chatziioannou, et al. (2008) System sensitivity in preclinical small animal imaging, pp 1417-1420
[2] Rahmim A, Zaidi H (2008) PET versus SPECT: strengths, limitations and challenges. Nuclear Medicine Communications 29:193
[3] Levin C, Zaidi H (2007) Current trends in preclinical PET system design. PET Clin 2:125-160
[4] Efthimiou N, et al. (2010) Tomographic evaluation of a dual head PET, pp 27-30
[5] Zeraatkar N, et al. Quantitative investigation of inter-crystal scatter and penetration in the GE Discovery RX PET/CT scanner using Monte Carlo simulations
[6] Shao Y, et al. (2002) A study of inter-crystal scatter in small scintillator arrays designed for high resolution PET imaging. IEEE Transactions on Nuclear Science 43:1938-1944
[7] Rechka S, et al. (2010) LabPET inter-crystal scatter study using GATE, pp 3988-3994
[8] Panin V, et al. (2006) PET reconstruction with system matrix derived from point source measurements. IEEE Transactions on Nuclear Science 53:152-159
[9] Fahey F (2002) Data acquisition in PET imaging. Journal of Nuclear Medicine Technology 30:39
[10] Jan S, et al. (2004) GATE: a simulation toolkit for PET and SPECT. Physics in Medicine and Biology 49:4543
[11] Lashkari S, et al. (2008) The influence of crystal material on inter-crystal scattering and the parallax effect in PET block detectors: A Monte Carlo study, pp 633-636
[12] Ramirez R, et al. (2006) A comparison of BGO, GSO, MLS, LGSO, LYSO and LSO scintillation materials for high-spatial-resolution animal PET detectors, pp 2835-2839
V. CONCLUSIONS

Increasing the crystal dimensions significantly decreases the number of mispositioned events. The percentages of ICS and penetration events are clearly affected by the crystal material, especially for larger dimensions, and among all the candidate materials these quantities are lowest for BGO.
Corresponding author: Mohammad Reza Ay
Institute: Tehran University of Medical Sciences
Street: Pour Sina
City: Tehran
Country: Iran
Email: [email protected]
Rapid Calibration of 3D Freehand Ultrasound for Vessel Intervention K. Luan1, H. Liao1, T. Ohya1, J. Wang2, and I. Sakuma1 1
School of Engineering, The University of Tokyo, Tokyo, Japan 2 Robotics Institute, Beihang University, Beijing, China
Abstract— To create a freehand three-dimensional (3D) ultrasound (US) system for image-guided vessel interventions, a US probe calibration must be performed to build 3D volumes from 2D vessel images. In this paper, a novel calibration method for 3D freehand US is described. It simply images the tip of a tracked needle that can move freely within the coupling medium while the probe is fixed. The physical coordinates of the tip and the probe location are recorded simultaneously, and the image coordinates of the tip are mapped to physical points. A RANSAC algorithm is applied to minimize the error in matching image points to physical points. An accuracy evaluation experiment verifies the proposed method: the needle tip can be accurately located and the RMS calibration error is 1.56 mm. A phantom experiment and an in vivo experiment confirm the clinical applicability of the technique, showing that the present method is suitable for the calibration of 3D freehand US in vessel interventions.
Keywords— Freehand ultrasound, Calibration, RANSAC.

I. INTRODUCTION

Minimally invasive intravascular intervention surgery is performed with the aid of guide wires and catheters. During surgery, intraoperative imaging is needed in addition to preoperative images, since the vessels change during surgery. Even though computed tomography (CT) can be used intraoperatively, it has significant practical limitations due to cost and radiation dose. 3D US is already being introduced, alone or together with preoperative images, as a rapid 3D acquisition technique. A flexible, freehand 3D US system allows image acquisition with unconstrained movement. A freehand 3D volume reconstruction system usually needs positioning data for the 2D slices. The most common way to obtain the positioning data is to attach a position sensor to the probe: electromagnetic (EM), optical, mechanical arm or acoustic [1]. An additional step must then be added to compute the transformation between the origin of the sensor mounted on the probe and the imaging plane itself. Other groups [2-8] have solved this kind of problem. A precise calibration method is to use an object, known as a phantom, with known geometric properties: the phantom is imaged and its features are identified in the US images. These features are also located in the physical phantom space, and the spatial relationship between the position of the features in the image and on the phantom is estimated in the calibration process. These phantoms are classified by Mercier et al. [9]. All of these methods need a complicated phantom with precisely known geometry for good accuracy and precision. In this study, we present an automatic, fast calibration method, similar to the methods of [5, 10], that requires no phantom. Our approach differs from theirs: we first define the imaging plane of the US probe, and then locate the needle tip in US images automatically and robustly. Furthermore, in order to match 2D image points to their corresponding physical points and to avoid the local minima of Levenberg-Marquardt (LM) optimization [11], a RANSAC algorithm is applied to minimize the residual error between the mapped locations of image points and the physical points.

II. MATERIALS AND METHOD

The materials in our experimental design are shown in Fig. 1: an EM tracking system, including a 6-degree-of-freedom (6DOF) reference sensor and a 5DOF needle whose tip can be tracked, a 2D US scanner, and a computer for collecting US images and EM tracking data. The EM tracking system (Aurora, NDI, Canada) is used to track the physical locations of two independently movable instruments: the tip of the needle and the reference sensor on a 2D US probe (Prosound 10, Aloka, Japan).

Fig. 1 System configuration

A 2D US scan is accomplished by emitting directional sound, which travels within the imaging plane, and receiving the echoes reflected from an object. When the tip of a needle passes through the imaging plane, a bright image of the tip is produced. However, the position of the imaging plane is unknown due to the section thickness and
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 716–719, 2011. www.springerlink.com
side lobe artifacts [12, 13]. Therefore we first model the imaging plane as a plane in 3D space: the plane that yields the most intensive echo as the needle tip goes through the US beam continuously (Fig. 1). An algorithm is proposed to detect the image in which the tip lies on the imaging plane by analyzing the maximum intensity of the tip in the ultrasound image sequence while the needle moves.
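The detection step above can be sketched as a background-subtraction pass over the frame sequence; the array shapes and the binarization threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch: locate the needle-tip pixel in each frame by subtracting a
# pre-needle background frame, then pick the frame where the tip echo
# is most intense (i.e. the tip lies on the imaging plane).
# Threshold and frame shapes are illustrative assumptions.

def tip_per_frame(background, frames, threshold=50):
    """Return (row, col, intensity) of the brightest changed pixel per frame."""
    results = []
    b0 = background > threshold                 # binary background mask
    for frame in frames:
        bi = frame > threshold                  # binary current frame
        diff = bi & ~b0                         # pixels new in this frame
        roi = frame * diff                      # keep intensity inside the ROI
        r, c = np.unravel_index(np.argmax(roi), roi.shape)
        results.append((r, c, int(roi[r, c])))
    return results

def frame_on_plane(results):
    """Index of the frame with the most intensive tip echo."""
    return int(np.argmax([inten for _, _, inten in results]))

# Tiny synthetic sequence: the tip brightens then fades at pixel (2, 3).
bg = np.zeros((5, 6), dtype=int)
frames = []
for peak in (80, 200, 120):
    f = bg.copy()
    f[2, 3] = peak
    frames.append(f)
res = tip_per_frame(bg, frames)
print(frame_on_plane(res))  # -> 1 (the 200-intensity frame)
```

Subtracting the binarized background suppresses static echoes, so the argmax only sees structures introduced by the moving needle.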
A. Location of Needle Tip

When the needle moves in the US beam, acoustic signals travel along the needle shaft and reflect from discontinuities along its length [13]. According to this phenomenon, the image at the moment the tip just passes through the US beam can be located by the change of intensity. We identify the point with maximum intensity in the region of interest (ROI) of the tip in each US image. In order to implement this detection accurately and robustly, we employ a digital subtraction imaging method:

D_i = B_i − B_0    (1)

and a simplified formula:

R_i = O_i · D_i    (2)

where B_0 is the binary image before the needle passes; B_i is the binary image of the ith image from the sequence during the needle's passage; D_i is the subtracted image; O_i is the original image corresponding to B_i; and R_i is the ith image with the ROI of the tip. Using equations (1) and (2), the point with maximum intensity in the tip ROI can be located, and the moment when the tip just passes through the US beam is also recorded.

B. Proposed Algorithm

In order to fit the 2D image points to their corresponding physical points, which are assumed to lie in the imaging plane, a point-to-point registration is implemented. The 2D image points are mapped to the corresponding 3D physical points, and the RANSAC algorithm is applied to estimate the six unknown parameters of the 3D transformation matrix and the two pixel scaling factors. There is a one-to-one correspondence between image points and physical points:

X_t = T_ts · T_si · S · X_i    (3)

The point X_i in the US image is first scaled by S. It is mapped into sensor space by the rigid transformation T_si, then into physical space by T_ts. The result is X_t, the point reported by EM tracking. We establish the equation between the image point set and the physical point set:

(x_t, y_t, z_t, 1)^T = T_ts · [a b c t_x; d e f t_y; g h i t_z; 0 0 0 1] · (s_u·u, s_v·v, 0, 1)^T    (4)

and a simplified formula:

T_si · S · X_i = (a·s_u·u + b·s_v·v + t_x, d·s_u·u + e·s_v·v + t_y, g·s_u·u + h·s_v·v + t_z, 1)^T    (5)

where (x_t, y_t, z_t) is the coordinate of a point in physical space; (a, b, c; d, e, f; g, h, i) is the rotation matrix of T_si; (t_x, t_y, t_z) is the translation of T_si; s_u and s_v are the pixel scaling factors; and u and v are pixel coordinates from the origin of the image. Since the third column of the rotation multiplies zero, the model is determined by the parameter vector Θ = (a, b, d, e, g, h, s_u, s_v, t_x, t_y, t_z). The parameter vector Θ is estimated by RANSAC [14], which repeatedly instantiates this model from small, random subsets of the data until a model is found that is consistent with a large subset of the data. It works as follows:

1. Choose 3 image-physical point pairs at random;
2. Use the point pairs to compute the vector Θ;
3. Map every point in the imaging plane to 3D physical space using the model determined by Θ;
4. Compute the distance between every image point's mapped location and the location reported by the tracking system;
5. Compute the consensus point set, whose elements are consistent with the model instantiated with the estimated parameters, i.e. whose distance is less than a tolerance value;
6. If the probability of finding a better consensus set drops below a certain threshold, stop iterating; otherwise, repeat from step 1 with a new random subset;
7. Use all consensus points to generate the final vector Θ.
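The RANSAC loop described above can be sketched as follows. For brevity this sketch estimates a general 2D-to-3D affine map from pixel coordinates to physical points (a superset of the rotation-plus-scaling model in the paper), with a fixed iteration count instead of the probabilistic stopping rule; all names are illustrative, not from the paper's MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(img, phys):
    """Least-squares 2D->3D affine map M such that phys ~ M @ [u, v, 1]."""
    h = np.column_stack([img, np.ones(len(img))])   # (n, 3) homogeneous pixels
    sol, *_ = np.linalg.lstsq(h, phys, rcond=None)  # solves h @ sol ~ phys
    return sol.T                                    # (3, 3)

def ransac_calibrate(img, phys, n_iter=200, tol=1.0):
    """RANSAC: repeatedly fit from 3 random pairs, keep the largest consensus."""
    best_inliers = np.zeros(len(img), dtype=bool)
    h = np.column_stack([img, np.ones(len(img))])
    for _ in range(n_iter):
        idx = rng.choice(len(img), size=3, replace=False)
        M = fit_affine(img[idx], phys[idx])
        mapped = (M @ h.T).T
        inliers = np.linalg.norm(mapped - phys, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final model from all consensus points (step 7 of the algorithm).
    return fit_affine(img[best_inliers], phys[best_inliers]), best_inliers

# Synthetic data: a known map plus a few gross outliers.
true_M = np.array([[0.1, 0.0, 5.0], [0.0, 0.1, -2.0], [0.05, 0.02, 1.0]])
img = rng.uniform(0, 640, size=(40, 2))
phys = (true_M @ np.column_stack([img, np.ones(40)]).T).T
phys[:5] += 50.0                                    # 5 outlying pairs
M_est, inliers = ransac_calibrate(img, phys)
print(int(inliers.sum()))                           # outliers are rejected
```

Because the minimal sample is only 3 pairs, a few hundred iterations make it overwhelmingly likely that at least one sample is outlier-free, which is what lets RANSAC sidestep the local minima that trap LM on contaminated data.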
III. EXPERIMENT AND RESULT

The functionality of the algorithm was verified on real ultrasound data acquired by the ultrasound probe. A tracked needle with a 1 mm diameter was imaged by a 2D ultrasound probe operating at a central frequency of 10 MHz. The probe was fixed on a stand, and the acoustic signal was projected into a water tank. The needle passed through the US beam so as to produce images of its tip. The size of the image data was 640 × 480 pixels. The US images were acquired at the same time that the 3D physical locations of the sensor mounted on the US probe and of the needle tip were stored, and the data were analyzed offline in MATLAB (The MathWorks, Inc.). The tip of the needle was
imaged in various parts of the US beam. The pixel coordinates of the tip were identified automatically to form 2D image points, which were then matched to their physical locations. The whole calibration process takes about 10 minutes. Fig. 2 shows an example of the localization of the needle tip: the tip ROI is extracted from the background, including noise, and the point with maximum intensity is located.
Fig. 2 Localization of the needle tip in pixel coordinates (1: mask image, 2: image with needle tip, 3: subtracted result, 4: original image with location of needle tip)

Fig. 3 Detection of the image with the most intensive signal in the image sequence (abscissa) by the change of intensity (ordinate) in the tip ROI while the needle goes through the imaging plane continuously

When the needle goes through the US beam continuously, the change of maximum intensity in the tip ROI across the image sequence is shown in Fig. 3 (left). The maximum intensity is identified (Fig. 3, middle) and the corresponding sequence number is located (Fig. 3, right). To measure the accuracy of the proposed method in our calibration procedure, a target registration error (TRE) [15, 16] and a fiducial registration error (FRE) [15] were computed. A fiducial point set with 40 pairs of image-physical points was collected; similarly, a target point set with 40 pairs was stored. The FRE and TRE comparisons between our method and LM for different imaging sizes were measured (Fig. 4). The RMS calibration error of our method is 1.56 mm and the target validation error is 2.88 mm.

Fig. 4 Plot of average FREs and average TREs obtained from RANSAC and LM for different imaging sizes
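FRE and TRE as used above are both RMS distances between the calibrated mapping of image points and the locations reported by the tracker, computed over the fiducial and the independent target point set respectively; a minimal sketch:

```python
import numpy as np

# Sketch: RMS registration error between image points mapped to physical
# space by the calibration and the locations reported by the EM tracker.
# Over the fiducial pairs this is the FRE; over a held-out target point
# set it is the TRE.

def rms_error(mapped_pts, tracked_pts):
    d = np.linalg.norm(np.asarray(mapped_pts, dtype=float)
                       - np.asarray(tracked_pts, dtype=float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

mapped = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
tracked = np.array([[0.0, 0.0, 1.0], [10.0, 0.0, -1.0]])
print(rms_error(mapped, tracked))  # -> 1.0
```

Keeping the target set disjoint from the fiducial set is what makes TRE an unbiased check: FRE alone can look good simply because the model was fitted to those same points.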
Fig. 5 shows the result of a phantom experiment. A phantom with a cylindrical marker was reconstructed with the US probe calibrated by our method, at 7 cm imaging depth and 10 MHz central frequency. A 5DOF needle was tracked by the EM tracking system in real time. The reconstructed phantom and the tracked needle are shown in 3D Slicer (www.slicer.org). Fig. 6 shows the result of an in vivo experiment in which the carotid was reconstructed with the US probe working in power Doppler mode at 9 cm imaging depth, calibrated with the proposed method. The carotid was scanned freehand along its length, and 2D power Doppler images and probe location data were acquired simultaneously. Colored points in the 2D US images were extracted, their corresponding 3D positions were computed using the transformation matrix in (5), and the 3D points were reconstructed and the surface of the carotid rendered with VTK (www.vtk.org).
IFMBE Proceedings Vol. 35
Rapid Calibration of 3D Freehand Ultrasound for Vessel Intervention
Fig. 5 Phantom experiment (left: phantom and needle; middle: reconstructed phantom and model of the needle; right: marker directed by the needle)

Fig. 6 Reconstructed carotid with freehand US (1: the neck is scanned with a 2D probe with an attached EM sensor; 2: the power Doppler image of the carotid; 3: surface rendering of the carotid)

IV. DISCUSSIONS

One of the contributions that this work makes to the 3D freehand US field is the rapid and automatic calibration process. The calibration error of our method is lower than that of LM because some outlying points could be removed. The in vivo experiment shows that, combined with the portability of US, this method is easy to implement for vessel reconstruction in the clinic.

V. CONCLUSIONS

This paper demonstrates a novel method for freehand US calibration based on RANSAC. In all cases of our calibration process, the tip of the needle can be accurately identified in US images. Additionally, the results of the comparison experiment between our method and LM show that the calibration errors produced by our method are smaller than those of LM. The clinical availability of the technique is evaluated by the phantom and in vivo experiments. These results prove that this is a feasible calibration method for image-guided vessel interventions.

REFERENCES

1. Cinquin P, Bainville E, Barbe C, et al. (1995) Computer assisted medical interventions. IEEE Eng Med Biol Magazine 14:254-263
2. Prager RW, Rohling RN, Gee AH, et al. (1998) Rapid calibration for 3-D freehand ultrasound. Ultrasound Med Biol 24:855-869
3. Blackall JM, Rueckert D, Maurer CR, et al. (2000) An image registration approach to automated calibration for freehand 3D ultrasound. MICCAI 2000, LNCS Vol. 1935, Berlin, Germany, pp 462-471
4. Pagoulatos N, Haynor DR, Kim Y (2001) A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer. Ultrasound Med Biol 27:1219-1229
5. Muratore DM, Galloway RL (2001) Beam calibration without a phantom for creating a 3-D freehand ultrasound system. Ultrasound Med Biol 27:1557-1566
6. Lindseth F, Tangen GA, Langø T, et al. (2003) Probe calibration for freehand 3-D ultrasound. Ultrasound Med Biol 29:1607-1623
7. Leotta DF (2004) An efficient calibration method for freehand 3-D ultrasound imaging systems. Ultrasound Med Biol 30:999-1008
8. Hsu PW, Prager RW, Gee AH, Treece GM (2008) Real-time freehand 3D ultrasound calibration. Ultrasound Med Biol 34(2):239-251
9. Mercier L, Langø T, Lindseth F, Collins DL (2005) A review of calibration techniques for freehand 3-D ultrasound systems. Ultrasound Med Biol 31(4):449-471
10. Zhang H, Banovac F, White A, Cleary K (2006) Freehand 3D ultrasound calibration using an electromagnetically tracked needle. Proceedings of SPIE Medical Imaging Symposium, pp 775-783
11. Moré JJ (1977) The Levenberg-Marquardt algorithm: implementation and theory. Lecture Notes in Mathematics 630:105-116
12. Goldstein A, Madrazo BL (1981) Slice-thickness artifacts in grayscale ultrasound. J Clin Ultrasound 9:365-375
13. Huang J, Triedman JK, Vasilyev NV, Suematsu Y, Cleveland RO, Dupont PE (2007) Imaging artifacts of medical instruments in ultrasound-guided interventions. J Ultrasound Med 26(10):1303-1322
14. Torr PHS, Zisserman A (2000) MLESAC: a new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding 78:138-156
15. Fitzpatrick JM, West JB, Maurer CR (1998) Predicting error in rigid-body point-based registration. IEEE Trans Med Imaging 17(5):694-702
16. Moghari MH, Abolmaesumi P (2006) A high-order solution for the distribution of target registration error in rigid-body point-based registration. MICCAI 2006, LNCS Vol. 4191, Berlin, Germany, pp 603-611
Author: Kuan Luan
Institute: School of Engineering, the University of Tokyo
Street: 7-3-1, Hongo, Bunkyo-ku
City: Tokyo
Country: Japan
Email: [email protected]
Segmentation of Tumor in Digital Mammograms Using Wavelet Transform Modulus Maxima on a Low Cost Parallel Computing System

Hanifah Sulaiman1, Arsmah Ibrahim1, and Norma Alias2

1 Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor
2 Institute Ibnu Sina, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru
Abstract— Parallel Computing Systems (PCS) are currently widely used in applications involving complex, computation-intensive problems, because they can process computations efficiently using a parallel scheme. The ARS cluster is a low-cost PCS developed to process full-field digital mammograms. In this system eight processors communicate via an Ethernet network, with LINUX (Fedora 7) as the operating system and Matlab Distributed Computing Server (MDCS) as the platform for processing the digital mammograms. In this paper the Wavelet Transform Modulus Maxima (WTMM) method, implemented on the ARS cluster, is used to detect the edges of tumors in digital mammograms. The study involved 80 digitized mammographic images obtained from the Malaysian National Cancer Center (NCC). The performance of the PCS in detecting the edges of tumors in digital mammograms using WTMM on the ARS cluster is reported. The experimental results showed that the speedup of the PCS improves as the number of processors is increased.

Keywords— Breast Tumor, Digital Mammograms, Edge Detection, Segmentation, Matlab Distributed Computing Server, Parallel Processing, Wavelet Transform Modulus Maxima.

I. INTRODUCTION

A Parallel Computing System (PCS) is a system based on the concept of parallel processing that uses multiple processors which run simultaneously and operate independently. In the system each processor has its own private Internet Protocol (IP) address, memory and space to store data. The data are shared via a communication network. The performance of a PCS depends on the specification of each processor and the memory capacity available in the system. A Full Field Digital (FFD) mammogram is an image that consists of a huge amount of data in matrix form representing the resolution, brightness and contrast of the image. Hence image processing of FFD mammograms requires a system with high computational capability. The ARS cluster is a low-cost PCS that consists of eight workers and uses the Matlab Distributed Computing Server (MDCS) and the open-source LINUX Fedora 7 as an interface between the user and the system. This system can be used to solve problems involving highly complex computations. In this paper the improvements in processing time, speedup and efficiency of the ARS PCS in segmenting tumors in FFD images are reported.

II. MATERIALS AND METHODS

In this work a multiscale edge detection based on the wavelet transform modulus maxima (WTMM) is used to segment tumors in FFD mammograms. The ARS PCS with MDCS was developed and used as a platform to process the segmentation. The parallel technique and computation used in processing the mammograms adopted the work of Mylonas [5] and Srinivasan [4]. Figure 1 below displays the framework of the parallel technique used.

Fig. 1 Parallel Framework of Edge Detection

Digital images produced by the digital mammography instrument are initially read by a client PC. The client PC submits the task to the Job Manager (JM), which in turn distributes the task to each worker in the system. Each worker processes its given task independently. When the task is accomplished the JM collects the output and returns it to the client PC, where the result is displayed on the monitor. The ARS PCS was used on a set of 80 FFD images acquired from the Malaysian National Cancer Center [6].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 720–723, 2011. www.springerlink.com
A local maximum is a value that is maximal within some specific neighborhood. The local maxima of the wavelet transform modulus are closely related to multiscale edge detection, which concerns the contours or boundaries of objects. From the multiscale edges detected by the WTMM the original signal can be reconstructed. According to Mallat et al. (1992), multiscale edge detection in two dimensions can be formalized through a wavelet transform defined by the following wavelets

ψ¹(x, y) = ∂θ(x, y)/∂x  and  ψ²(x, y) = ∂θ(x, y)/∂y    (1)

where θ(x, y) is a two-dimensional smoothing function whose integral over x and y is equal to 1 and which converges to 0 at infinity. With s serving as the dilation factor, let

ψ¹_s(x, y) = (1/s²) ψ¹(x/s, y/s)  and  ψ²_s(x, y) = (1/s²) ψ²(x/s, y/s)    (2)

This allows the wavelet transform of a function f(x, y) ∈ L²(R²) at the scale s with the respective wavelets to produce two components defined as

W¹_s f(x, y) = f ∗ ψ¹_s(x, y)
W²_s f(x, y) = f ∗ ψ²_s(x, y)    (3)

where the angle of the gradient vector with the horizontal direction is governed by

A_s f(x, y) = tan⁻¹( W²_s f(x, y) / W¹_s f(x, y) )    (4)

The two wavelet transform components are proportional to the two components of the gradient vector ∇(f ∗ θ_s)(x, y). Theoretically the multiscale sharp variation points can be obtained when Equations (1) are satisfied. Consequently the wavelet components above can be rewritten as

( W¹_s f(x, y), W²_s f(x, y) )ᵀ = s ( ∂(f ∗ θ_s)(x, y)/∂x, ∂(f ∗ θ_s)(x, y)/∂y )ᵀ = s ∇(f ∗ θ_s)(x, y)    (5)

At any scale s, the wavelet transform modulus of f(x, y) is defined as

M_s f(x, y) = sqrt( (W¹_s f(x, y))² + (W²_s f(x, y))² )

The sharp variation points of (f ∗ θ_s)(x, y) are the set of points (x, y) where the modulus M_s f(x, y) has local maxima in the direction of the gradient given by A_s f(x, y). The line formed by the points (x, y) along this direction represents the edge.

III. WTMM ALGORITHM EDGE DETECTION OF TUMOR ON MAMMOGRAM IMAGE

The WTMM algorithm for edge detection of the tumor on the mammogram image is based on the À Trous algorithm [9]. The algorithm involves three convolution kernels, namely the low-pass filter H, the high-pass filter G and the Dirac filter D [1]. The kernels used in this work are shown in Table 1.

Table 1 Convolution kernels

H        G      D
0.125    0      1
0.125    0.5    0
0.125   -0.5    0
0.125    0      0

The following algorithm computes the WTMM of the digital mammogram:

j = 0;
while j < J
    I_{2^(j+1)}(x, y) = I_{2^j}(x, y) ∗ (H_j, H_j)
    W¹_{2^(j+1)} I(x, y) = I_{2^j}(x, y) ∗ (D, G_j)    (6)
    W²_{2^(j+1)} I(x, y) = I_{2^j}(x, y) ∗ (G_j, D)
    M_{2^(j+1)} I(x, y) = sqrt( (W¹_{2^(j+1)} I(x, y))² + (W²_{2^(j+1)} I(x, y))² )    (7)
    j = j + 1
end of while

where H_j and G_j denote the Table 1 kernels dilated at scale j by inserting zeros between their taps (the "holes" of the à trous scheme).
This algorithm is implemented on the digital mammogram using ARS PCS with Matlab R2009a software.
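One scale of the algorithm above can be sketched in Python as a stand-in for the paper's Matlab implementation (the helper names are assumptions). `(H, H)` is realized as row-then-column convolution; `(D, G)` convolves with the Dirac filter along one axis, i.e. leaves it untouched, and with G along the other.

```python
import numpy as np

# Kernels from Table 1
H = np.array([0.125, 0.125, 0.125, 0.125])   # low-pass filter
G = np.array([0.0, 0.5, -0.5, 0.0])          # high-pass filter
# D, the Dirac filter [1, 0, 0, 0], acts as an identity: no convolution needed.

def conv_rows(a, k):
    """Convolve every row of `a` with kernel `k`."""
    return np.apply_along_axis(np.convolve, 1, a, k, 'same')

def conv_cols(a, k):
    """Convolve every column of `a` with kernel `k`."""
    return np.apply_along_axis(np.convolve, 0, a, k, 'same')

def wtmm_scale(img):
    """One iteration of the loop: the smoothed image, the two wavelet
    components of Eq. (6), and the modulus of Eq. (7)."""
    I = conv_cols(conv_rows(img, H), H)   # (H, H): separable low-pass
    W1 = conv_rows(img, G)                # (D, G): G across the rows
    W2 = conv_cols(img, G)                # (G, D): G down the columns
    M = np.sqrt(W1 ** 2 + W2 ** 2)        # modulus, Eq. (7)
    return I, W1, W2, M
```

Edge points are then taken where M is a local maximum along the gradient direction; iterating with the dilated (à trous) kernels gives the further scales.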
IV. PARALLEL IMAGE PROCESSING APPLICATION

The parallel implementation of WTMM on digital mammograms adopted the technique by Mylonas [5], where the original image is decomposed into several sub-images.
The performance of the ARS PCS is evaluated in terms of speedup and efficiency, based on Amdahl's law [7]. Figures 4-6 show the performance of the ARS PCS in detecting the tumor in digital mammograms using the WTMM method.
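The speedup and efficiency quoted in these figures follow the standard definitions, with Amdahl's law bounding the attainable speedup; a minimal sketch (the function names are mine):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's-law speedup bound for a program whose parallelizable
    fraction is `parallel_fraction`, run on `workers` processors."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

def measured_speedup(t_serial, t_parallel):
    """Observed speedup: serial elapsed time over parallel elapsed time."""
    return t_serial / t_parallel

def efficiency(speedup, workers):
    """Fraction of the ideal linear speedup actually achieved."""
    return speedup / workers
```

Because the serial fraction never shrinks, the bound flattens as workers are added, which is consistent with a slowly rising speedup and a falling efficiency.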
Fig. 2 Physical Parallel Algorithm

Figure 2 illustrates the physical parallel algorithm that decomposes the original image into four sub-images. Each sub-image is distributed to two processors. Hence each of the four sub-images undergoes the WTMM computation in a parallel manner. The output images are then united and displayed as one whole image. Consequently the edges detected by the eight workers merge to form one closed contour. Based on several experiments, the time taken for the iterations to complete sequentially and in parallel was recorded. The performance of the PCS in terms of processing time, speedup and efficiency was analyzed.
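The decomposition-and-merge scheme of Fig. 2 can be sketched as follows. The ARS cluster ships the sub-images to MDCS workers over Ethernet; in this self-contained sketch a thread pool stands in for the workers, and the function names are assumptions. A real edge detector also needs a few rows/columns of overlap (halo) at the seams so that the sub-image contours merge into one closed contour; that detail is omitted here.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def split_quadrants(img):
    """Decompose an image into four sub-images, as in Fig. 2."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]

def merge_quadrants(parts):
    """Reassemble the four processed sub-images into one whole image."""
    return np.vstack([np.hstack(parts[:2]), np.hstack(parts[2:])])

def parallel_apply(img, func, max_workers=4):
    """Apply `func` (e.g. the WTMM step) to each sub-image concurrently
    and merge the outputs, mimicking the Job Manager / worker scheme."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        parts = list(pool.map(func, split_quadrants(img)))
    return merge_quadrants(parts)
```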
Fig. 4 Processing Time vs No. Worker(s)
V. RESULTS AND ANALYSIS

The edges of breast tumors were detected at three scales of the multiscale edge detection using the WTMM method. For each mammographic image, six processed images are produced, one per iteration scale. The last image shows the result of the edge detection of the breast tumor, which will be used by the radiologist to analyze the tumor. Figure 3 shows the results obtained upon the implementation of the WTMM algorithm on a digital mammogram using the ARS PCS.
Fig. 5 Speedup vs No. Worker(s)
Fig. 6 Efficiency vs No. Worker(s)
Fig. 3 Edge Detection Using WTMM
In Figure 4, the elapsed processing time of the WTMM tested on the ARS PCS, which consists of eight IBM processors, is depicted for four different digital mammographic views, namely CCL, CCR, MLOL and MLOR, each with its own pixel size. Based on the figure, the processing time decreased as the number of processors increased. Figure 5 shows that the speedup of the ARS PCS increases slowly as the number of processors is increased. According to Kocak et al. [10], the speedup for a large problem is better than that for a smaller problem. In
this case the pixel size of MLOR is larger than that of the other mammographic views. This affects the efficiency of the ARS PCS, shown in Figure 6. From the figure, the efficiency of the ARS PCS decreases as the number of processors increases, indicating that the utilization of the ARS PCS memory per processor decreases as processors are added; this depends on the size of the problem. Reduced efficiency means the PCS has spare capacity to accept more problems to solve, in contrast to a single processor whose memory is fully utilized in solving the problem, leaving no space to accept more work. The Message Passing Interface (MPI) is used in the ARS PCS as the communication tool between processors, and it is responsible for distributing the problem to each processor. In this experiment, the distribution of the problem was done by the Job Manager provided in the MDCS software [8].
VI. CONCLUSION AND RECOMMENDATIONS

Based on the results obtained above, the ARS PCS is successful in improving the performance of the highly complex computations involved in processing FFD mammograms. This implies that the system can be of benefit for image monitoring and visualization in multidimensional problems.
REFERENCES

1. Mallat S, Zhong S (1992) Characterization of signals from multiscale edges. IEEE Trans Pattern Analysis and Machine Intelligence 14(7):710-732
2. Mallat S (1998) A Wavelet Tour of Signal Processing, 2nd edn. Academic Press
3. Mallat S, Hwang W (1992) Singularity detection and processing with wavelets. IEEE Trans Information Theory 38(2):617-643
4. Srinivasan N, Vaidehi V (2005) Application of cluster computing in medical image processing. IEEE
5. Mylonas SA, Trancoso P, Trimikliniotis M (2000) Adaptive noise canceling and edge detection in images using PVM on a NOW. 10th Mediterranean Electrotechnical Conference
6. National Cancer Registry (2003) Second Report of the National Cancer Registry: Cancer Incidence in Malaysia, 2003. Ministry of Health Malaysia. Retrieved 23 Nov 2007 from http://www.crc.gov.my/ncr
7. Amdahl GM (1967) Validity of the single processor approach to achieving large scale computing capabilities. AFIPS Spring Joint Computer Conference
8. Arsmah A., Norma S., Hanifah and Salmah Y.S. (2009) Active contour model on digital mammograms using low-cost parallel computing systems. Proceedings of ICORAFFS
9. Zhang X, Li D (2001) A trous wavelet decomposition applied to image edge detection. Geographic Information Sciences 7(2):119-123
10. Kocak S, Akay HU (2001) Parallel Schur complement method for large-scale systems on distributed memory computers. Applied Mathematical Modelling 25
The Correlation Analyses between fMRI and Psychophysical Results Contribute to Certify the Activated Area for Motion-Defined Pattern Perception

S. Kamiya1, A. Kodabashi2, Y. Higashi2, M. Sekine1, T. Fujimoto2, and T. Tamura1

1 Graduate School of Engineering, Chiba University, Chiba, Japan
2 Fujimoto-Hayasuzu Hospital, Miyakonojo, Japan
Abstract— Motion-defined pattern perception includes two perceptions: motion perception and pattern perception. The purpose of this study was to clarify the difference in activated areas between motion and pattern perception. Psychophysical experiments were carried out employing a random dot kinematogram (RDK), asking whether the direction of coherently moving dots in a core rectangle was toward the upper left or the lower right (a direction discrimination task), and whether the shape of that rectangle was oblong vertically or horizontally (a pattern discrimination task), with the dot velocity varied over four levels (14.4, 28.8, 43.2 and 57.6 deg/s). The results of the psychophysical experiments revealed that the mean correct rate of the 10 participants was significantly larger for direction discrimination than for pattern discrimination at three velocity levels (14.4, 28.8 and 43.2 deg/s). The functional MR measurements were carried out employing the same RDK stimuli. Most of the activated areas were located in the left occipital lobe. Correlation analyses were undertaken between the regression slopes (sizes of effect) and the correct rates of the two tasks. For the pattern task, a significant correlation emerged at Brodmann's area (BA) 19. In contrast, the correlations between these slopes and the correct rates of direction discrimination were significant at BA9, BA39 and anterior parts of BA19. These results suggest that motion-defined pattern perception is processed at BA19.

Keywords— motion-defined pattern, random dot kinematogram, fMRI, correct rate, correlation analysis.
I. INTRODUCTION

Motion-defined pattern perception is realized when we watch an area composed of coherently moving dots surrounded by a "sandstorm" region of incoherently moving dots. According to psychophysical experiments, this type of pattern perception includes two processes: perception of the motion itself and perception of the shape or contour [1]. The brain area activated for detecting motion itself has been established to be the middle temporal area (MT/V5) or area V3A [2]. In contrast, the area for motion-defined pattern perception has not been clearly identified. The results of many conventional fMRI studies suggested that various Brodmann areas (BA 6, 9, 19, 32 and 38) were activated by motion-defined pattern stimuli [3]. However, it is not evident from these results whether these areas were elicited for motion perception itself or for the pattern perception.
The purpose of our study is to discriminate the areas for the pattern perception process from those for motion itself.
II. METHOD

A. Participants

Ten healthy adult volunteers with normal vision (7 females and 3 males) participated in this study. Their ages were between 19 and 43 years. The study protocol had been approved by the local ethics committee and the Helsinki Declaration covering the use of human subjects was observed at all times. All subjects provided their written informed consent in advance.

B. Stimuli

The visual stimuli were generated on a Power Macintosh G4 computer using the Psychophysics Toolbox program library. Stimuli were projected on a screen at a frame rate of 60 Hz, a resolution of 640 × 480 pixels, and a viewing distance of 57 cm by a projector (EPSON, ELP-7300). The stimuli were composed of random dot kinematograms (RDKs). The size of the random dots was 3.6 arcmin; half were white (60.89 cd/m2) and half were black (0.57 cd/m2) (Fig. 1). The RDK window (240 × 480 pixels) was placed in the center of the screen, surrounded by an averaged gray background (30.73 cd/m2). In the center of the RDK window, we positioned the core rectangular region (120 × 60 pixels), which was oblong either vertically or horizontally. The dots moved coherently in either an upward-left or a downward-right direction in the lower visual field at 7.2 deg eccentricity. The velocity of the moving dots varied across four levels ranging from 14.4 deg/s to 57.6 deg/s (14.4, 28.8, 43.2 and 57.6 deg/s). The rest of the RDK region was a "sandstorm" area composed of incoherently moving dots. The duration of stimulation (the "ON" period) was 500 ms, and each presentation was followed by an "OFF" period, during which the dots in the entire RDK region were static for 500 ms.
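The dot-update rule described above (coherent motion inside the core rectangle, equal-speed random "sandstorm" motion outside) can be sketched as follows. This is an illustration only; the study used the Psychophysics Toolbox, and the function name and array layout here are assumptions.

```python
import numpy as np

def update_dots(positions, in_core, velocity, dt, rng):
    """Advance the RDK by one frame.

    positions : (N, 2) array of dot coordinates
    in_core   : boolean mask, True for dots in the core rectangle
    velocity  : (vx, vy) coherent velocity of the core dots
    Surround dots move at the same speed in random directions.
    """
    velocity = np.asarray(velocity, dtype=float)
    speed = np.linalg.norm(velocity)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(positions))
    random_step = speed * np.column_stack([np.cos(theta), np.sin(theta)])
    step = np.where(in_core[:, None], velocity, random_step)
    return positions + step * dt
```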
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 724–727, 2011. www.springerlink.com
C. Procedure of the Psychophysical Experiments

The experiments were focused on two independent tasks: (1) pattern discrimination and (2) motion-direction discrimination. In the pattern discrimination task, the participants were asked to answer whether the contour of the core rectangular region was oblong vertically or horizontally. In this task, the moving direction was fixed at lower right (Fig. 1). In the motion-direction discrimination task, the participants were asked to answer whether the direction of the moving dots was toward the upper left or lower right. In this task, the core region was fixed at a horizontally elongated pattern. For both tasks, each condition was repeated 20 times, giving a total of 160 trials, which were presented randomly in a series (constant stimuli method). When one trial was finished, the response was entered using the two-alternative forced-choice method, and then the next trial followed.
2. Experimental paradigm

For each moving-velocity level, the measurement was repeated for 3 cycles. In each cycle, the resting block of static random-dot presentation continued for 30 sec (10 scans) and was then followed by the stimulating block, during which the same motion-defined pattern stimuli as in the psychophysical experiments were presented for 30 sec. The order of velocity levels was randomized.

3. Analysis of the fMRI data

The fMRI data were analyzed using SPM5 (Wellcome Department of Imaging Neuroscience, UCL, UK). The functional images were "realigned" and "normalized" to the standard Montreal Neurological Institute (MNI) brain space, using a voxel size of 2 mm. The "normalized" data were then smoothed using a spatial filter (FWHM) of 8 mm. The "size of effect" at each voxel was evaluated from the slope value of the regression in the general linear model. The activated brain regions were determined by the significant difference between the sizes of effect at the velocities of 14.4 deg/s and 57.6 deg/s, examined for the data of the 10 participants by uncorrected t-test at the threshold p < 0.001. The MNI coordinates of significant voxels were translated into Talairach coordinates using the MNI2TAL tool.
III. RESULTS A. Results from Psychophysical Experiments Fig. 1 Schematic drawings of motion-defined pattern stimuli The core rectangular region composed of coherently moving dots (enclosed by broken line) is surrounded with the “sandstorm” zone composed of incoherently moving dots. The velocity of moving dots in the direction indicated by arrows was varied in four levels.
D. Procedure of Functional Magneto-Resonance (fMR) Measurements

1. Measurement parameters

The fMR measurements were carried out using a 1.5 T whole-body MR scanner (Symphony, Siemens, Japan). The participants were able to view a projection screen via a mirror in the supine position. We used a T2*-sensitive, gradient-echo EPI sequence. The sequence parameters were: TR = 3000 ms, effective TE = 47 ms, flip angle = 90 degrees. The brain was imaged using a package of 23 slices with a thickness of 5 mm.
Fig. 2 shows the relationship between the moving-dot velocity (abscissa) and the correct answer rate (ordinate) obtained in the psychophysical experiments. The correct-rate data shown are averages over the ten participants. Solid circles indicate the results of the pattern-discrimination task and blank circles designate those of the direction-discrimination task. The data in each graph were fitted with a sigmoid function curve, and the threshold (75% correct rate) was estimated from each curve. The threshold in the pattern-discrimination task was 33.94 deg/s and that in the direction-discrimination task was 46.59 deg/s. The difference in the correct rates between the two tasks was examined by applying statistical analyses (paired t-test) to the data of the ten participants for each moving velocity. The results revealed that the correct rates for the direction discrimination task were significantly higher than those for the pattern discrimination task at velocities of 14.4 deg/s (t(18) = 2.21, p < 0.05), 28.8 deg/s (t(18) = 3.62, p < 0.01) and 43.2 deg/s (t(18) = 3.66, p < 0.01). These results suggested that the direction discrimination task was much easier than the pattern discrimination task.
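The 75% thresholds above come from sigmoid fits to the psychometric data. A simpler stand-in, monotone linear interpolation of the measured correct rates, illustrates the idea (the function name is an assumption and the actual fit is not reproduced):

```python
import numpy as np

def threshold_from_rates(velocities, correct_rates, criterion=0.75):
    """Velocity at which the psychometric data cross the criterion
    correct rate, by linear interpolation between measured points.
    Assumes correct_rates increase monotonically with velocity."""
    v = np.asarray(velocities, dtype=float)
    r = np.asarray(correct_rates, dtype=float)
    return float(np.interp(criterion, r, v))
```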
Fig. 2 The relation between the dot velocity and correct rates for the pattern discrimination task (solid circles) and the direction discrimination task (blank circles). Abscissa: velocity (deg/s, four levels 14.4-57.6); ordinate: correct rate; asterisks mark significant differences.
Correct-rate data (mean ± 1 SE) were fitted with sigmoidal function curves.

B. Results from the fMR Study

The activated areas determined as described above (II.D.3) are indicated as red regions in Fig. 3. Most of the activated areas were in the left occipital lobe. These activated areas were considered to include the processing areas for both motion itself and motion-defined pattern perception. We then tried to discriminate the areas for pattern perception by utilizing correlation analyses between the sizes of effect and the correct rates for the pattern discrimination task at the 4 velocity levels. The only voxel significantly correlated in these analyses was the area designated by the black square at BA19 in Fig. 3 (t(3) = 5.185, p < 0.05). The correlation coefficient (r = 0.967) revealed a very strong correlation. The location of this area is listed in Table 1. Similar analyses were performed between the sizes of effect at the activated areas and the correct rates of the direction discrimination task. Significant correlations emerged at one voxel in BA9 (t(3) = 4.487, p < 0.05), two voxels in BA19 (t(3) = 9.52, p < 0.05; t(3) = 5.684, p < 0.05) and one voxel in BA39 (t(3) = 10.51, p < 0.01). These correlation coefficients were also very high (BA9: r = 0.954; BA19: r = 0.989 and 0.97; BA39: r = 0.991). These significantly correlated areas are indicated by white circles in Fig. 3. Their locations are listed in Table 2.
Fig. 3 The activated brain regions due to motion-defined pattern stimuli. The activated areas of the 14.4 deg/s condition subtracted by the 57.6 deg/s condition are indicated as red regions. The location of the significant correlation between the sizes of effect and the correct rates for pattern perception is designated by the black square. The locations of the significant correlations with those for motion discrimination are depicted as white circles.

Table 1 Activated areas correlated with the pattern discrimination task

BA    Coordinates x, y, z    Z value
19    -32, -84, 30           3.11

The only voxel significantly correlated was at BA19 (t(3) = 5.185, p < 0.05).

Table 2 Activated areas correlated with the direction discrimination task

BA    Coordinates x, y, z    Z value
9     -36, 20, 34            3.41
19    -42, -80, 24           3.41
19    -36, -82, 28           3.18
39    -44, -78, 24           3.22

Significant correlations emerged at one voxel in BA9 (t(3) = 4.487, p < 0.05), two voxels in BA19 (t(3) = 9.52, p < 0.05; t(3) = 5.684, p < 0.05) and one voxel in BA39 (t(3) = 10.51, p < 0.01).
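Each correlation above pairs a Pearson coefficient with a t value; the standard conversion t = r·sqrt(n − 2)/sqrt(1 − r²), with n − 2 degrees of freedom, can be sketched as follows. This is a generic sketch (function name assumed) and does not reproduce the exact degrees-of-freedom convention used in the paper.

```python
import numpy as np

def pearson_r_and_t(x, y):
    """Pearson correlation of two samples and the associated t statistic
    with n - 2 degrees of freedom (undefined for |r| = 1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = float(np.corrcoef(x, y)[0, 1])
    n = len(x)
    t = r * np.sqrt(n - 2) / np.sqrt(1.0 - r * r)
    return r, t
```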
IV. DISCUSSION

These results of the correlation analyses in Fig. 3 and Table 1 certified that BA19 was the only activated area significantly correlated with the correct rates for the pattern discrimination task. Regan et al. [4] reported that patients with lesions in BA18, BA19, BA21, BA22 and BA39 were unable to discriminate motion-defined letter shapes, while they were intact for motion perception itself. This means that motion-defined pattern is processed in the areas involved in the lesions mentioned above. The common area between our study and the study by Regan et al. is BA19. Motion-defined pattern perception should be performed by interconnecting the magnocellular system, which deals with location and motion, and the parvocellular system, which processes shape and color. BA19 is identical to V3 (visual area 3), which has also been suggested to represent an important area for integration and transformation of visual signals processed by the magno- and parvocellular systems ([5], [6]). It is then plausible that motion-defined pattern perception is eventually accomplished at V3. In this study, we carried out correlation analyses between the data obtained in the psychophysical experiments and those acquired in the psychophysiological experiments. As suggested by the results in Fig. 3, the close correlation between the correct-rate data for the pattern discrimination task and the size of effect from the fMR measurements contributed well to identifying the location of the processing area for motion-defined pattern perception as BA19. Accordingly, it is suggested that correlation analyses between the
psychophysical and psychophysiological data may help to gain insight into the mechanisms or time courses of various perception processes in the brain. The opportunity to attempt this type of analysis is now expanding, because noninvasive measurements such as fMRI are becoming more widely available.
ACKNOWLEDGMENT

This study was partly supported by a Grant-in-Aid for Scientific Research (21300043) from the Japan Society for the Promotion of Science.
REFERENCES

1. Chang J, Julesz B (1983) Displacement limits, direction anisotropy and direction versus form discrimination in random-dot cinematogram. Vision Res 23:639-646
2. Tootell B, Mendola D, Hadjikhani K et al. (1997) Functional analysis of V3A and related areas in human visual cortex. J Neurosci 17:7060-7078
3. Marcar L, Loenneker T, Straessle A et al. (2004) An fMRI study of the cerebral macro network involved in 'cue invariant' form perception and how it is influenced by stimulus complexity. Neuroimage 23:947-955
4. Regan D, Giaschi D, Sharpe A et al. (1992) Visual processing of motion-defined form: selective failure in patients with parietotemporal lesions. J Neurosci 12:2198-2210
5. Felleman J, Van Essen C (1987) Receptive field properties of neurons in area V3 of macaque monkey extrastriate cortex. J Neurophysiol 57:889-920
6. Gegenfurtner R, Kiper C, Levitt B (1997) Functional properties of neurons in macaque area V3. J Neurophysiol 77:1906-1923
A New Method for Measuring Pistoning in Lower Limb Prosthetic

H. Gholizadeh1, N.A. Abu Osman1, Á.G. Lúðvíksdóttir2, M. Kamyab3, A. Eshraghi1, S. Ali1, and W.A.B. Wan Abas1

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Clinical Research, R&D, Össur Head Office, Reykjavík, Iceland
3 Orthotics & Prosthetics Department, Faculty of Rehabilitation Sciences, Tehran University of Medical Sciences, Tehran, Iran
Abstract— The quality of suspension in lower limb prosthetics has a significant role in amputee satisfaction. Good suspension lessens the pistoning movement of the residual limb inside the prosthetic socket. A number of methods are used for measuring pistoning; however, most of them are difficult to perform inside the prosthetic clinic and by every prosthetist. The purpose of this study was to introduce a new, simple method for measuring pistoning in a transtibial prosthesis in a static position. This method was developed in-house at Össur for the first time. In this study we used this technique to measure pistoning between the socket and the liner in three subjects. The results showed that this method can be an alternative for measuring pistoning. Moreover, it makes it simple for prosthetists to test pistoning in the clinic and is safe for the subject.
different levels. The movement can occur between the hard socket and the liner, between the liner and the skin, and between the skin and the bone [5, 6]. Radiological methods include roentgenology [7, 8], cineradiography [9, 10], and roentgen stereophotogrammetric analysis [6]. Ultrasonic methods and transducers have also been used to record pistoning [11, 12, 13]. However, since these methods require complicated devices and settings, and X-ray exposure raises ethical concerns for the subjects, these studies have mostly been limited to the laboratory and have not been used clinically. The objective of this study was to introduce and assess a new method for measuring pistoning within the socket, designed and developed in-house at Össur for the first time.
Keywords— Transtibial prosthesis, suspension, pistoning, weight bearing, liner.
II. METHODOLOGY
I. INTRODUCTION
The efficiency of a lower limb prosthesis depends largely on an optimal suspension method that secures the socket to the amputee's stump. Suspension and fit of the residual limb prosthesis play a major role in comfort and prosthetic function [1, 2]. The factors most often mentioned by amputees as important are the fit of the prosthesis and its suspension [3]. Pistoning, the vertical movement of the residual limb inside the prosthetic socket, is considered one of the major indicators for evaluating the suspension of a lower limb prosthesis [4]. Prosthetic fit has been reported to correlate with pistoning [5]. Thus, measuring pistoning can help determine the optimal prosthetic fit. Several methods have been used to measure the pistoning movement occurring at
A. Subjects
Three male unilateral transtibial amputees (mean age, 42 years) participated in this study. All had used a patellar tendon bearing (PTB) socket with a silicone liner and shuttle lock for the last three years and had mobility grade K3 according to the American Academy of Orthotists & Prosthetists. Ethical approval was granted by the University of Malaya Medical Centre (UMMC) Ethics Committee. We tested pistoning movement in these subjects with the new method. The subjects were required to complete five static trials consisting of five different static conditions: single limb support on the prosthetic limb (full weight bearing), non-weight bearing, and three axial loading conditions in which loads of 30, 60 and 90 N were added to the prosthetic foot.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 728–731, 2011. www.springerlink.com
B. Technique for Measuring Pistoning (Developed by Össur)
To identify any pistoning movement inside a prosthetic socket, the following equipment is required:
- a good-quality measuring tape,
- 3, 6 and 9 kg loads,
- a high-resolution camera,
- two reference rulers (of accurately known length) on the lateral side of the limb and the socket, used as references when measuring displacement on the photos (the ruler attached to the limb is the measuring reference for A and B, and the ruler attached to the socket is the measuring reference for C and D),
- black ink markers at the following positions (Fig. 1): greater trochanter (A), proximal lateral end of the liner (B), proximal lateral end of the socket (C), and distal end of the socket (D).

Fig. 1 Position of markers and rulers

In the five conditions below, we used the measuring tape to measure four distances: AD, AC, AB, and BD (Fig. 1). For each condition we took a photo from a fixed point using a photo stand. The photo was taken so that the markers (and the reference rulers) could be seen clearly, and was not at an angle from the photo stand.

Conditions:
1. Amputee standing with full weight bearing on the prosthesis (all body weight on the prosthesis); this was the baseline measure.
2. Amputee standing with no weight on the prosthesis and the leg straight (e.g., on stairs with the sound leg on the step and the prosthesis suspended freely).
3. Applying a 30 N force along the longitudinal axis of the prosthesis with the leg straight.
4. Applying a 60 N force along the longitudinal axis of the prosthesis with the leg straight.
5. Applying a 90 N force along the longitudinal axis of the prosthesis with the leg straight.

C. Calculating Pistoning Movements
The measurement taken when the user stood with full weight bearing on the prosthesis was used as the baseline (AD standing, AC standing, AB standing, BD standing). The other four conditions were then compared with the baseline to identify any pistoning movement. The ΔAD displacement was calculated as follows:
1. ΔAD (no weight) = AD no weight - AD standing
2. ΔAD (30 N) = AD 30 N - AD standing
3. ΔAD (60 N) = AD 60 N - AD standing
4. ΔAD (90 N) = AD 90 N - AD standing

The calculations in the previous step were repeated for AC, AB and BD. We compared the displacements for the three distances (AD, AC and BD) and found the mean displacement. To estimate measurement error, the differences in displacement between the measures were compared: ideally the displacement should be the same for AD, AC and AB+BD, so any deviation represents measurement error. The photos were used to validate the tape measurements by measuring the distances between the markers on the photos; the reference rulers were employed to calculate the real distances between the markers. Finally, those results were compared with the tape measures.
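The displacement arithmetic described above is simple enough to script. The following sketch is not from the paper: the measurement values and helper names are illustrative only. It computes the Δ displacements of one marker distance against the full-weight-bearing baseline, and converts a photo pixel distance to millimetres using a reference ruler of known length.

```python
# Sketch of the pistoning calculation described above (illustrative values).
# The baseline is the full-weight-bearing measurement; each condition's
# displacement is its distance minus the baseline distance.

BASELINE = "full"  # full weight bearing

# Tape measurements (mm) of marker distance AD per condition (hypothetical data)
ad_mm = {"full": 520, "none": 536, "30N": 540, "60N": 541, "90N": 545}

def displacements(measures, baseline=BASELINE):
    """Delta of each condition relative to the baseline condition."""
    base = measures[baseline]
    return {cond: d - base for cond, d in measures.items() if cond != baseline}

print(displacements(ad_mm))  # {'none': 16, '30N': 20, '60N': 21, '90N': 25}

# Photo validation: a reference ruler of known real length converts pixel
# distances between markers into real distances.
def pixels_to_mm(pixel_dist, ruler_pixels, ruler_mm):
    return pixel_dist * ruler_mm / ruler_pixels

print(pixels_to_mm(260.0, 50.0, 10.0))  # 52.0 mm
```

The same routine applied to AC, AB and BD, followed by a comparison of the three results, reproduces the error check the authors describe (AD should ideally equal AC and AB+BD).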
Table 1 Average of distance (mm) between the markers in 3 subjects

      full weight   non weight   Adding   Adding   Adding
      bearing       bearing      30N      60N      90N
AB    0             2            2        2        2
AC    0             16           20       21       25
AD    0             16           20       21       25
BD    0             14           18       19       23
CD    0             0            0        0        0
Fig. 2 Average of displacement (mm) in 3 subjects
III. RESULTS
IV. DISCUSSION AND CONCLUSION
The average displacement of the three subjects is shown in Table 1 and Fig. 2. The results showed that AC and AD were the same and that AB+BD = AD. Most of the displacement occurred on changing from full weight bearing to non-weight bearing (mean, 16 mm). From the non-weight-bearing position up to the 90 N load, the additional displacement was consistent for all three subjects (mean, 9 mm). Between markers A and B there was only 2 mm displacement (on average) after the subjects changed from full weight bearing to non-weight bearing, and there was no further displacement between A and B after the loads were added.
In this study, the new method for measuring pistoning within the socket (designed and developed in-house at Össur) was evaluated by measuring pistoning between the socket and liner in three transtibial subjects in five different static positions. The greatest difference in displacement was seen between the full-weight-bearing and non-weight-bearing positions (mean, 16 mm; SD, 7.2); when loads were added to the limb the displacement increased further, but not as much as expected. We also found that it is better to use AC+CD for measuring AD and BC+CD for measuring BD, so that the knee angle effect can be eliminated, because after the load
was added to the residual limb, the muscle reaction caused the knee to bend slightly. This new system makes it possible to determine pistoning between the liner and socket easily and quickly, and at the same time it is a safe method compared with X-ray. Moreover, every prosthetist can conduct it easily in the prosthetic clinic.
6. Söderberg B, Ryd L, Persson B (2003) Roentgen stereophotogrammetric analysis of motion between the bone and the socket in a transtibial amputation prosthesis: a case study. J Prosthet Orthot 15(3): 95-99
7. Erikson U, Lemperg R (1969) Roentgenological study of movements of the amputation stump within the prosthesis socket in below-knee amputees fitted with a PTB prosthesis. Acta Orthopaedica 40(4): 520-526
8. Grevsten S, Erikson U (1975) A roentgenological study of the stump-socket contact and skeletal displacement in the PTB-Suction Prosthesis. Ups J Med Sci 80(1): 49-57
9. Lilja M, Johansson T, Öberg T (1993) Movement of the tibial end in a PTB prosthesis socket: a sagittal X-ray study of the PTB prosthesis. Prosthet Orthot Int 17(1): 21-26
10. Narita H, Yokogushi K, Shi S, et al. (1997) Suspension effect and dynamic evaluation of the total surface bearing (TSB) trans-tibial prosthesis: a comparison with the patellar tendon bearing (PTB) trans-tibial prosthesis. Prosthet Orthot Int 21(3): 175-178
11. Convery P, Murray K (2000) Ultrasound study of the motion of the residual femur within a trans-femoral socket during gait. Prosthet Orthot Int 24(3): 226-232
12. Murray K, Convery P (2000) The calibration of ultrasound transducers used to monitor motion of the residual femur within a trans-femoral socket during gait. Prosthet Orthot Int 24(1): 55-62
13. Abu Osman N, Spence WD, Solomonidis SE, et al. (2010) Transducers for the determination of the pressure and shear stress distribution at the stump–socket interface of trans-tibial amputees. P I Mech Eng B-J Eng 224(8): 1239-1250
ACKNOWLEDGMENT
The financial support of Össur Iceland and the Department of Biomedical Engineering, Faculty of Engineering, University of Malaya is gratefully acknowledged. The authors would like to thank Mr. Stefán Karl Sævarsson, Mr. Scott Elliott, and Mrs. Elham Yahyavi for their kind help and encouragement.
REFERENCES
1. Kristinsson Ö (1993) The ICEROSS concept: a discussion of a philosophy. Prosthet Orthot Int 17(1): 49-55
2. Isozaki K, Hosoda M, Masuda T, et al. (2006) CAD/CAM evaluation of the fit of trans-tibial sockets for trans-tibial amputation stumps. J Med Dent Sci 53(1): 51-56
3. Legro M, Reiber G, Del Aguila M, et al. (1999) Issues of importance reported by persons with lower limb amputations and prostheses. J Rehabil Res Dev 36(3): 155-163
4. Newton R, Morgan D, Schreiber M (1988) Radiological evaluation of prosthetic fit in below-the-knee amputees. Skeletal Radiol 17(4): 276-280
5. Sanders J, Karchin A, Fergason JR, et al. (2006) A noncontact sensor for measurement of distal residual-limb position during walking. J Rehabil Res Dev 43(4): 509-516
Author: Hossein Gholizadeh (MEngSc)(Prosthetist-Orthotist) Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya Street: Jalan Lembah Pantai City: Kuala Lumpur Country: Malaysia Email: [email protected] [email protected]
Ambulatory Function Monitor for Amputees
S.N. Ooi and N.A. Abu Osman
Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— Ambulation data of amputees are crucial as an outcome assessment. This study develops a low-cost monitor that can be made available to every amputee and can be inserted into the pylon to preserve the authenticity of the ambulation data. A prototype was developed and is presented below.
Keywords— Ambulation, activity monitor, low-cost, accelerometer, prosthesis.
I. INTRODUCTION
In the past 40 years, many attempts have been made to evaluate the ambulatory function and condition of humans. This kind of evaluation is considered a useful indicator of a person's condition. Most of the methods developed evaluate the ambulatory function of normal, healthy subjects or of particular patient groups, such as those who suffer from coronary artery disease, multiple sclerosis, or musculoskeletal disorders. Their main focus is static and dynamic activities, postural control, swaying motion, step counting, energy expenditure, fall risk of the elderly, and so on. Yet very little research effort has addressed amputees' ambulation with their prostheses. Evaluation of amputees' ability to ambulate with their prostheses has not been widely explored. The most common method is clinical judgment [1]. The second is questionnaires [2]-[5]. The questionnaire method is inexpensive yet time-consuming and depends very much on the subject's recall ability, memory, estimation, and ability to remain objective while participating in the study [6]. Therefore most of the data obtained through this method are biased and do not reflect the actual situation truthfully. A few researchers have contributed to this field by developing their own devices to evaluate amputees' ambulation; among them are Holden & Fernie [7] and Bussmann et al. [8]-[16]. However, these technologies were available only to a limited number of selected subjects, for research purposes, during a certain period of time. Because the devices require the placement of sensors on certain parts of the body, either basic training must be provided to the amputees, or the amputees have to meet with the researcher every time they put on the device. Some researchers have adapted commercial activity monitors for their evaluations, such as the Patient Activity Monitor (PAM) [17]-[19], StepWatch [20],
[21], [6] and DynaPort ADL [22]. Most of the devices mentioned above are very advanced and accurate, but their usage is also quite limited, mainly due to their high price. For example, the StepWatch step activity monitor (SAM), together with its Computer Interface Dock and communication interface, costs about USD 3,300 in total. Although such technologies result in an overall advancement in clinical research and device improvement, they are not significantly beneficial on an individual basis. Moreover, the devices mentioned above are mostly worn outside the prosthesis; they are therefore visible to the patient, which may result in "performance data" that do not reflect the true situation. Hence, a low-cost monitoring system that can be made commonly available to every amputee and kept invisible to the amputee had to be developed to assist in evaluating the ambulatory function of every amputee with their prosthesis.
II. METHOD A. Amputees’ Activity Monitor (AAM)
Fig. 1 Amputees' Activity Monitor (AAM)
The Amputees' Activity Monitor (AAM) shown in Figure 1 was developed as a low-cost, lightweight, user-friendly monitor that requires no maintenance, can be inserted into the pylon of the prosthesis, and possesses a large memory capacity for keeping long-term raw ambulation data. It can detect the step count and the pattern of the amputee's ambulatory function with the prosthesis. It consists of a switch, a three-axis accelerometer as the sensor, a powerful yet low-power-consumption microcontroller, and an
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 732–734, 2011. www.springerlink.com
8 GB microSD card as data storage. Its size is 73 mm × 20 mm × 10 mm, and it is powered by three AAA batteries. This size was specially designed so that it fits into the pylon of the prosthesis, whose diameter is generally 25 mm. The monitor is placed at the lower end of the pylon, just next to the prosthetic foot. It is very flexible: it can be used for both above-knee and below-knee amputees. Simple software was also developed to process the data obtained with the AAM.
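The paper does not publish the AAM's processing algorithm, so the following is only a hypothetical sketch of how sitting, standing and walking might be segmented from a windowed three-axis accelerometer stream; the thresholds, window contents, and the assumption that the sensor's x axis lies along the pylon are ours, not the authors'.

```python
# Hypothetical activity segmentation from 3-axis accelerometer windows.
# High variance of the acceleration magnitude suggests stepping (walking);
# for static postures, the mean component along the pylon axis separates
# standing from sitting. All thresholds are illustrative.
import math

def classify_window(samples, walk_std=0.3, stand_tilt=0.5):
    """samples: list of (ax, ay, az) in g over one window (e.g. 1 s)."""
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    if std > walk_std:              # high variance -> stepping motion
        return "walking"
    # static posture: assumed sensor x axis along the pylon (vertical standing)
    mean_ax = sum(ax for ax, _, _ in samples) / len(samples)
    return "standing" if abs(mean_ax) > stand_tilt else "sitting"

standing = [(1.0, 0.0, 0.0)] * 10                               # gravity on x
walking = [(1.0 + 0.6 * (-1) ** i, 0.1, 0.0) for i in range(10)]  # oscillating
print(classify_window(standing))  # standing
print(classify_window(walking))   # walking
```

A real implementation would need calibration per subject and smoothing across windows, which is presumably part of the "further study" the authors mention for separating transitions from walking.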
B. Subjects
A pilot test was carried out with three healthy unilateral below-knee amputees aged between 50 and 70. All subjects could walk normally with their prostheses without any assistance or assistive device. The AAM was placed inside the pylon of each subject's prosthesis. Then the following instructions were given, and each subject carried them out in a fixed order:
1. Sitting (for 30 seconds)
2. Stand up
3. Standing (for 30 seconds)
4. Walk for 10 meters at a self-selected pace
5. Standing (for 30 seconds)
6. Turn the body 180°
7. Standing (for 30 seconds)
8. Sit down
9. Sitting (for 30 seconds)
10. Stand up
11. Standing (for 30 seconds)
12. Walk for 10 meters at a self-selected pace
13. Standing (for 30 seconds)
14. Turn the body 180°
15. Standing (for 30 seconds)
16. Sit down
17. Sitting (for 30 seconds)

The whole test was repeated three times for every subject. At the same time, every activity and the time spent on each step were recorded by an observer on a prepared form with the help of a digital stopwatch. Then the AAM was removed from the pylon and the recorded data were transferred to the computer.

III. RESULTS
The data retrieved from the AAM were processed and presented as shown in Figure 2. In the graph, sitting is represented in green, standing in red, and walking in blue. By looking at the different colours, the activities performed by the subject with the prosthesis can be differentiated very clearly. The data obtained through the AAM were compared with the record made by the observer, and the results matched.

Fig. 2 A sample of presented data obtained with AAM

IV. DISCUSSION
From the data collected with the AAM, the activities performed by the amputees with their prostheses, such as sitting, standing and walking, and the time spent in each transition period between actions, can all be presented clearly and objectively. In the sample data shown in Fig. 2, the transitions from sitting to standing and from standing to sitting are classified as walking, as is turning the body 180°. The development of the AAM is still at a preliminary stage; further study will be carried out to distinguish these transitions from walking. Studies will also be carried out with the AAM on other activities of daily living, such as climbing up and down stairs, driving, lying down, and dancing. The advantage of the AAM is its size, which allows it to be placed inside the pylon. The amputee is thus unaware of being monitored, which ensures the authenticity of the data obtained while remaining unobtrusive to the amputee's daily activities. It is very user friendly: it requires no attachment of sensors to any part of the body (compared with some of the activity monitors mentioned above), no special training in how to use it, no maintenance to keep it performing at its best, and no putting the monitor on and off
daily, and it places no demands on the user's education level or ability to handle technology products.
V. CONCLUSION
The AAM is a low-cost ambulatory function monitoring system that is suitable for, and can be made available to, every amputee who owns a prosthesis. It is small and able to monitor amputees' ambulatory function outside the laboratory, and thus provides solid evidence of prosthesis usage: how much the amputee really ambulates in daily living. By recording the ambulatory function data of amputees with their prostheses, it helps amputees gain better and more appropriate care from the clinician, physiotherapist and orthotist. Further research will be directed at improving the power supply of the monitor and at the validity and reliability of the AAM.
ACKNOWLEDGMENT
The authors would like to express their appreciation to Jay C.K. Chan and Aravin for their contribution in helping to produce the prototype.
REFERENCES
1. Day HJ (1981) The assessment and description of amputee activity. Prosthet Orthot Int 5(1): 23-28.
2. Hagberg E, Berlin OK, Renstrom P (1992) Function after through-knee compared with below-knee and above-knee amputation. Prosthet Orthot Int 16(3): 168-173.
3. Legro MW, Reiber GD, Smith DG et al. (1998) Prosthesis evaluation questionnaire for persons with lower limb amputations: assessing prosthesis-related quality of life. Arch Phys Med Rehabil 79: 931-938.
4. Panesar BS, Morrison P, Hunter J (2001) A comparison of three measures of progress in early lower limb amputee rehabilitation. Clinical Rehabilitation 15(2): 157-171.
5. Selles RW, Janssens PJ, Jongenengel CD et al. (2005) A randomized controlled trial comparing functional outcome and cost efficiency of a total surface-bearing socket versus a conventional patellar tendon-bearing socket in transtibial amputees. Arch Phys Med Rehabil 86(1): 154-161.
6. Stepien JM, Cavenett S, Taylor L et al. (2007) Activity levels among lower-limb amputees: self-report versus step activity monitor. Arch Phys Med Rehabil 88(7): 896-900.
7. Holden JM, Fernie GR (1987) Extent of artificial limb use following rehabilitation. J Orthop Res 5(4): 562-568.
8. Bussmann JBJ, Stam HJ (1998) Techniques for measurement and assessment of mobility in rehabilitation: a theoretical approach. Clinical Rehabilitation 12: 455-464.
9. Bussmann JBJ, Schrauwen HJ, Stam HJ (2008) Daily physical activity and heart rate response in people with a unilateral traumatic transtibial amputation. Arch Phys Med Rehabil 89: 430-434.
10. Bussmann JBJ, Tulen JH, van Herel EC et al. (1998) Quantification of physical activities by means of ambulatory accelerometry: a validation study. Psychophysiology 35(5): 488-496.
11. Bussmann JBJ, Grootscholten EA, Stam HJ (2004) Daily physical activity and heart rate response in people with a unilateral transtibial amputation for vascular disease. Arch Phys Med Rehabil 85: 240-244.
12. Bussmann JBJ, Martens WLJ, Tulen JHM et al. (2001) Measuring daily behavior using ambulatory accelerometry: the activity monitor. Behav Res Meth Instrum Comput 33(3): 349-356.
13. Bussmann JBJ, Reuvekamp PJ, Veltink PH et al. (1998) Validity and reliability of measurements obtained with an "activity monitor" in persons with and without a transtibial amputation. Physical Therapy 78(9): 989-998.
14. Bussmann JBJ, van De Laar YM, Neelman MP et al. (1998) Ambulatory accelerometry to quantify motor behavior in patients after failed back surgery: a validation study. Pain 74(2-3): 153-161.
15. Bussmann JBJ, van den Berg-Emons HJG, Angulo SM et al. (2004) Sensitivity and reproducibility of accelerometry and heart rate in physical strain assessment during prosthetic gait. Eur J Appl Physiol 91: 71-78.
16. Bussmann JBJ, Veltink PH, Koelma F et al. (1995) Ambulatory monitoring of mobility-related activities: the initial phase of the development of an activity monitor. Eur J Physical Med Rehab 5(1): 2-7.
17. Bussmann JBJ, Culhane KM, Horemans HLD et al. (2004) Validity of the prosthetic activity monitor to assess the duration and spatio-temporal characteristics of prosthetic walking. 12(4): 379-386.
18. Ramstrand N, Nilsson KA (2007) Validation of a patient activity monitor to quantify ambulatory activity in an amputee population. Prosthet Orthot Int 31(2): 157-166.
19. Dudek NL, Khan OD, Lemaire ED et al.
(2008) Ambulation monitoring of transtibial amputation subjects with patient activity monitor versus pedometer. J Rehabil Res Dev 45(4): 577-585.
20. Klute GK, Berge JS, Orendurff MS et al. (2006) Prosthetic intervention effects on activity of lower-extremity amputees. Arch Phys Med Rehabil 87(5): 717-722.
21. Hafner BJ, Willingham LL, Buell NC (2007) Evaluation of function, performance, and preference as transfemoral amputees transition from mechanical to microprocessor control of the prosthetic knee. Arch Phys Med Rehabil 88(22): 207-217.
22. Van Dam MS, Kok GJ, Munneke M et al. (2001) Measuring physical activity in patients after surgery for a malignant tumour in the leg: the reliability and validity of a continuous ambulatory activity monitor. J Bone Joint Surg 83-B: 1015-1019.
Author: Ooi Song Nian Institute: Department of Biomedical Engineering, University of Malaya Street: Faculty of Engineering City: Kuala Lumpur Country: Malaysia Email: [email protected]
Anthropomorphic Design Methodology for Multifingered Arm Prosthesis
U.S. Md Ali1, N.A. Abu Osman1, N. Yusoff2, N.A. Hamzaid1, and H. Md Zin2
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
2 Department of Design and Manufacture Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
Abstract— This paper proposes a new approach to designing an anthropomorphic hand that mimics the human hand. The first part reviews work done in developing prosthetic arms. The second part reviews the design of hand prostheses based on planes of action and on approaches to defining the prehension motion of the fingers and thumb; it emphasizes the endoskeletal design, which gives advantages in designing the mechanical parts. Two approaches are used throughout the research, namely morphology design and 4-bar linkage design. Morphology design involves 3D CAD design using IronCad software to determine the appropriate angles of flexion and extension of a general finger model and a thumb model. The finger model is then analyzed as a 4-bar linkage mechanism to check the prehension of the fingers at any angle. Finally, the hand prototype is manufactured using a rapid prototyping (RP) machine to test its efficacy.
II. KINEMATICS MODEL OF HAND
The human hand is unique in that most of its articulation lies in the fingers. To gain a better understanding of the human hand, its skeleton can be represented by a kinematic model, as shown in Figure 1. The kinematic model is made up of a palm as the base of the hand, plus the thumb, index, middle, ring and little fingers, where each fingertip acts as an end effector.
Keywords— Computer Aided Design (CAD), IronCad, mechanical design, hand prosthesis, prosthesis, phalange.
I. INTRODUCTION
Over the years, research in this area has developed rapidly with the aim of imitating the human hand in both functionality and complexity. Although products on the market generally have limited mobility, they are slowly coming to resemble the natural hand, i.e., to follow the anthropomorphology of the hand. Generally, there are two types of prosthetic hand, namely active and passive. The passive hand is usually used cosmetically and has no sensors or electronic processing. This kind of prosthesis suits those who prefer an aesthetically pleasing prosthesis or who do not have an active lifestyle. Active prostheses, on the other hand, stress the functional part, involving processors and sensors; examples of this type are the CyberHand and the Utah/MIT hand [1]. Although many multifingered prostheses have been commercialized, some of them still lack grasping capability and dexterity, mainly because of a lack of degrees of freedom (DOF) or because the device is too heavy and bulky. Thus, this paper proposes a new mechanical design of a multifingered hand for prosthesis users to overcome these problems.
Fig. 1 Hand kinematics model
Moreover, there are two types of movement, namely active and passive. Active movement is actuated or triggered by tendons and muscles that allow free movement; passive movement is the opposite.
A. General Finger Model
The kinematic structure clearly shows the bone layout of the human hand, which helps in designing a prosthetic hand. The base of the fingers, in other words the palm, is made up of five bony metacarpals. Apart from the thumb, the remaining fingers (index, middle, ring and little) each consist of three links: the proximal, middle (intermediate) and distal phalanges. The three links are connected by the metacarpophalangeal (MCP) joint, the proximal interphalangeal (PIP) joint and the distal interphalangeal (DIP) joint. In the kinematic model, the hand has 27 degrees of freedom (DOF): each finger has 4 DOF, the thumb has 5 DOF, and the remaining 6 DOF account for the rotation and translation
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 735–738, 2011. www.springerlink.com
of the palm [2]. However, the carpometacarpal (CMC) joints of the index to little fingers at the palm allow only limited motion of each metacarpal bone. The four fingers allow limited yaw motion, known as abduction/adduction, at the MCP joints, while twelve joints allow pitch motion (flexion/extension) of the phalanges.
B. Thumb Model
The thumb has a different structure from the other fingers. Its bones are the trapezium, metacarpal, proximal phalanx and distal phalanx. The thumb is usually considered to have 5 DOF, consisting of the interphalangeal (IP) joint, the metacarpophalangeal (MCP) joint and the trapeziometacarpal (TM) joint; it does not have a proximal interphalangeal (PIP) joint. The thumb can rotate and perform abduction/adduction at the metacarpal and trapezium as two independent DOF [2], while the MCP and IP joints allow flexion.
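The DOF bookkeeping above (4 DOF per finger, 5 for the thumb, 6 for rotation and translation of the palm) can be tallied explicitly; this small sketch is illustrative only, and the dictionary naming is ours:

```python
# DOF tally for the kinematic hand model described above:
# 4 DOF per finger (index, middle, ring, little), 5 DOF for the thumb,
# and 6 DOF for rotation/translation of the palm, giving 27 in total.
finger_dof = {
    "thumb": 5,                                        # IP + MCP + TM (2 DOF) + rotation
    "index": 4, "middle": 4, "ring": 4, "little": 4,   # MCP (2 DOF) + PIP + DIP
}
palm_dof = 6  # 3 rotational + 3 translational

total = sum(finger_dof.values()) + palm_dof
print(total)  # 27
```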
III. SOFTWARE
and dynamic performance. According to [2, 3, 4], the lengths of each finger for a normal Asian male are as shown in Table 1.

Table 1 Length of each phalanx

Phalanx   Index/middle finger (mm)   Thumb (mm)
MCP       50                         48
PIP       29                         35
DIP       24                         29
The design in this project focuses on three fingers (thumb, index and middle), each composed of three phalanges, in order to obtain anthropomorphic stable grasping. The hand will be covered with a silicone glove, with the ring and little fingers passive, to fulfill aesthetic requirements. Moreover, each phalange is made hollow to reduce the weight. The diameter of the fingers is 18 mm and that of the thumb is 20 mm.
With the advent of technology, Computer Aided Design (CAD) is widely used by engineers as a design tool in various industries, from manufacturing to medicine. Compared with conventional methods, in which the design process is difficult, especially when complex geometry is involved, CAD software serves as an aid for representing the design, for engineering analysis, and for animation. One powerful CAD package, used here to design the prototype, is IronCad, which models parts and assemblies precisely. Its interactive 3D graphics can also show the model from various angles simply by specifying the necessary dimensions, orientation and position, enhancing the designer's work.
IV. DESIGN SPECIFICATIONS
In general, the multifingered prosthetic hand is designed to meet the following requirements:
1. The palm and fingers resemble the physical appearance of the human hand.
2. The fingers resemble human finger sizes.
3. The hand resembles the weight of a human hand.
V. PROPOSED HAND DESIGN
After studying the anthropometry of the hand, as in Section II, in terms of anatomy, grasping capability, gestures, kinematic
Fig. 2 A 3D design of index and middle fingers: (a) front view; (b) side view
The thumb, on the other hand, is complex to design, especially at the MCP joint. After considering many design alternatives, the gimbal joint shown in Figure 3 was chosen because of its ability to move freely, providing 2 DOF (flexion-extension and abduction-adduction) at a time.
The hand is assembled as in Figure 5, with the joints connected via a snap-fit mechanism. It is proposed that each finger is flexed via a cable attached along the hollow inside the phalanges, while extension of the finger is obtained by a torsion spring placed at each joint, ensuring that the joints can move freely during flexion and extension.
Fig. 3 Gimbal design for MCP thumb
Moreover, according to a study conducted by Lozac'h [5], the most preferable plane of the palm on which to locate the thumb is a 45° plane, with a natural twist of the thumb of 20°, as shown in Fig. 4.
Fig. 4 45° plane of thumb and natural twist of thumb at 20°
Fig. 5 Proposed prosthetic hand design at initial position
Range of motion is important for obtaining natural movement of the fingers [6]. In contrast to other works, the finger angles in this project are set according to the static constraints in Table 2. This limits the movement between the maximum and minimum flexion and extension at each joint. One advantage of this design is that it can replace the function of the sensors usually embedded at each joint, thus reducing the cost of producing the artificial hand.
VI. LINKAGE MECHANISM Another designing methodology of this project is by estimating or analyzing the4-bar linkage mechanism. It is important for the designer to know the location of the fingertip (end effector) in order to get exact location especially for gripping.
Table 2 Range of motion (ROM) of each phalanx

Type of finger        MCP                              PIP               DIP
Thumb                 TMC add/abd: 0° ≤ θMCP ≤ 80°;    0° ≤ θPIP ≤ 80°   0° ≤ θDIP ≤ 80°
                      TMC f/e: 0° ≤ θMCP ≤ 90°
Index/middle finger   0° ≤ θMCP ≤ 90°                  0° ≤ θPIP ≤ 110°  0° ≤ θDIP ≤ 90°

The appropriate endoskeletal data were determined first, such as the phalanx lengths, based on the average Asian male hand, and the ranges of motion for flexion/extension and abduction/adduction.

Fig. 6 4-bar linkage mechanism analysis

x = l1 cos θ1 + l2 cos(θ1 + θ2) + l3 cos(θ1 + θ2 + θ3)   (1)

y = l1 sin θ1 + l2 sin(θ1 + θ2) + l3 sin(θ1 + θ2 + θ3)   (2)

where li is the length of phalanx i and θi is the flexion angle at joint i.
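Assuming equations (1) and (2) give the planar forward kinematics of the fingertip from the phalanx lengths and joint angles, the computation can be sketched as follows. The link lengths used are illustrative, not the paper's data.

```python
# Minimal sketch of equations (1) and (2): planar forward kinematics
# locating the fingertip (end effector) of a serial finger.
import math

def fingertip_position(lengths, angles_deg):
    """Return (x, y) of the fingertip for a planar serial finger."""
    x = y = 0.0
    cumulative = 0.0
    for L, theta in zip(lengths, angles_deg):
        cumulative += math.radians(theta)  # angles accumulate along the chain
        x += L * math.cos(cumulative)      # eq. (1)
        y += L * math.sin(cumulative)      # eq. (2)
    return x, y

# Fully extended finger: the tip lies on the x-axis at the summed length.
print(fingertip_position([45.0, 25.0, 20.0], [0, 0, 0]))  # -> (90.0, 0.0)
```

Evaluating this over the ROM limits of Table 2 traces the reachable fingertip workspace, which is what the designer needs for gripping.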
U.S. Md Ali et al.
VII. RESULT
For the primary evaluation, the proposed hand was fabricated using a rapid prototyping (RP) machine. Fabrication took about a day, including ultrasonic cleaning. The new prosthetic hand (figure 7) is now ready to be integrated with a control system.

ACKNOWLEDGMENT
The first author expresses her cordial gratitude to her family, her supervisors and friends at the Centre for Applied Biomechanics (CAB), and the technicians. This work is supported by an IPPP grant from the University of Malaya (PS080 2010B).
REFERENCES
Fig. 7 Prototype of the proposed prosthetic arm
VIII. CONCLUSION
The proposed multifingered arm prosthesis was designed after a thorough study of the anthropomorphology of the human hand's endoskeleton, together with an approach based on ergonomic principles. IronCAD was used for the design, as it is a powerful graphical drawing tool that leads to the realization of a prototype meeting the desired criteria. The prototype is currently being completed by fitting suitable springs to the joints of the mechanical structure. Future work will focus on the integration of the overall system, including the control system. In addition, the prototype will be tested on real amputees across various tasks, especially grasping capability, dexterity, and manipulation of the hand.
1. N. Zainul Azlan and Y. Hiroshi, "Underactuated anthropomorphic finger mechanism for grasping and pinching with optimized parameter," Journal of Computer Science, vol. 6 (8), pp. 928-933, 2010.
2. M. A. Saliba and M. Axiak, "Design of a compact, dexterous robot hand with remotely located actuators and sensors," presented at the 15th Mediterranean Conference on Control and Automation, Athens, Greece, 2007.
3. V. Grinyagin, E. V. Biryukova, and M. A. Maier, "Kinematic and dynamic synergies of human precision-grip movements," Journal of Neurophysiology, vol. 94, pp. 2284-2294, 2005.
4. L. Y. Chang and Y. Matsuoka, "A kinematic model for the ACT hand," presented at the IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006.
5. R. Vinet, Y. Lozac'h, N. Beaudry, and G. Drouin, "Design methodology for multifunctional hand prosthesis," Journal of Rehabilitation Research and Development, vol. 32 (4), pp. 316-324, November 1995.
6. J. Lin, Y. Wu, and T. S. Huang, "Modeling constraints of human hand motion," IEEE Human Motion Proceedings, pp. 121-126, 2000.
7. L. Zollo, S. Rocella, E. Guglielmelli, M. C. Carrozza, and P. Dario, "Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications," IEEE/ASME Transactions on Mechatronics, vol. 12 (4), pp. 418-42, August 2007.
8. L. Ungureanu, A. Stanciu, and K. Menyhardt, "Actuating a human hand prosthesis: model study," presented at the 2nd WSEAS International Conference on Dynamical Systems and Control, Bucharest, Romania, Oct 2006.
9. J. W. S. Martell and G. Gin, "Robotic hands: design review and proposal of new design process," World Academy of Science, Engineering and Technology, vol. 26, pp. 85-90, 2007.
Author: Ummi Syahirah Md Ali
Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Jalan Pantai Dalam, 50603 Kuala Lumpur, Malaysia
Email: [email protected]
Approximation Technique for Prosthetic Design Using Numerical Foot Profiling
A.Y. Bani Hashim1, N.A. Abu Osman2, W.A.B. Wan Abas2, and L. Abdul Latif3
1 Department of Robotics & Automation, Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, Durian Tunggal, 76100 Melaka, Malaysia
2 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
3 Department of Rehabilitation Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— This work studies the foot structure to develop a method that represents the foot in the form of a numerical profile. This profile is used to approximate the important parts for prosthetic foot design. This is done by a proper mapping of bones and joints, which results in the creation of a kinematic structure. The structure simplifies the outlook of the foot anatomy and is then converted into a graph that stores the information on vertex adjacency, edge incidence, and paths. Further, this information is used to create visual structures in the form of three distinct synthetic digital images. From these images, the decision on the important parts of the prosthetic foot design is made. It is therefore decided that the prosthetic foot should consist of a foot-ankle mechanism and a flexible keel.
Keywords— Numerical profile, graph, prosthetic foot.
I. INTRODUCTION
Foot inspection can be done on a model skeleton; a photograph or a radiograph, on the other hand, is the medium for live foot inspection. But do engineers need to examine radiographs or go through autopsies to study foot biomechanics? Presently, there is no standard method for prosthetic foot design. There are, however, methods for prosthetic foot selection. For example, clinical teams select prosthetic feet by ranking a biomechanical parameter, the spring efficiency [1]. But there are times when the prescribed foot is based on intuition [2]. It is argued that the current analytical technique for calculating spring efficiency has two flaws: i) prosthetic feet with a bendable flexible keel are analyzed the same way as those with an articulated ankle and a rigid foot; ii) there is no accounting for the energy losses in the viscoelastic cosmetic material surrounding the keel, as found in a silicone rubber cosmesis [1]. This work proposes a model that allows engineers to visualize the foot structure from synthetic images created by proper profiling of bones and joints. The advantage of this model is that engineers do not need to interpret radiograph images or attend autopsies to understand the skeleton's mechanics. In fact, this model allows computer analysis because the stored information is digital. The objective of this work is
to model the foot structure in the form of a numerical profile and to estimate the prosthetic foot design based on justified information from the profile.
II. METHOD
A. Kinematic Structure
A kinematic structure is an abstract representation of a mechanical structure. It contains the essential information about which link (L) connects to which other links and by what type of joint, J. Figure 1 shows the proposed kinematic structure that represents the human foot. For example, L1 connects L2 and L3. The link L1 represents the talus bone. The object shape identifies the type of link; the legend gives the meaning of the object shapes found in the figure. For example, link L1 is quaternary. There are three different types of links: quaternary, ternary, and binary. A quaternary link has four joints, a ternary link has three, and a binary link has two. A circle represents a revolute single-axis joint. The two shaded circles are the joints that connect the foot to the fibula and tibia; these points are insignificant in this study.
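The link bookkeeping described above can be sketched as a small data structure. Only L1 (the talus) and its neighbours L2 and L3 are named in the text, so the other entries below are placeholders, and the function names are assumptions.

```python
# Illustrative encoding of the kinematic structure in Fig. 1 as an
# adjacency list: each link records the bone it represents and the
# links it connects to. Entries other than L1 are placeholders.
foot_structure = {
    "L1": {"bone": "talus", "connects": ["L2", "L3"]},
    "L2": {"bone": "calcaneus", "connects": ["L1"]},  # adjacency stated in Sec. B
    "L3": {"bone": "unknown", "connects": ["L1"]},
}

def classify_link(n_joints: int) -> str:
    """Map a link's joint count to the link types used in Fig. 1."""
    return {2: "binary", 3: "ternary", 4: "quaternary"}[n_joints]

print(classify_link(4))  # L1, the talus, is a quaternary link
```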
Fig. 1 The proposed human foot kinematic structure. The legend explains the object symbols that represent the components of the foot. In the skeleton figure, L1 represents the talus bone
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 739–742, 2011. www.springerlink.com
A.Y. Bani Hashim et al.
B. Graph
A network of vertices is a graph. In general, a graph contains vertices and edges. In Fig. 2, the circles represent the vertices V, the connecting lines the edges E, and the concentric circle the root, the talus (v1). They are the conversion of the links and joints in Fig. 1: a link is equivalent to a vertex and a joint to an edge. The labels designate the locations of vertices and edges. For example, v2a1 precedes v2, and the edge e22a1 links v2 and v2a1.

A = [a_i,j], an N_V × N_V matrix, where a_i,j = 1 if v_i is adjacent to v_j, and a_i,j = 0 otherwise and when i = j; v ∈ V   (1)

B = [b_i,j], an N_V × N_E matrix, where b_i,j = 1 if v_i contains e_j, and b_i,j = 0 otherwise; v, e ∈ V, E   (2)

P = [p_i,j], an N_E × (N_V − 1) matrix, where p_i,j = 1 if e_i,j lies on the path ending at v_j+1, and p_i,j = 0 otherwise; e, v ∈ E, V   (3)

The degree of a vertex is the number of its edges. Equation (1) describes vertex-to-vertex adjacency: an N_V × N_V symmetric matrix with zero diagonal elements. Equation (2) defines the incidence matrix that relates vertices and edges. Lastly, equation (3) defines the path matrix that stores information about all paths emanating from the root; it is an N_E × (N_V − 1) matrix excluding the root.

The size of A is 26 × 26. Unfortunately, the matrix cannot be shown in this paper because of the page constraint. On inspecting the matrix, the element a_1,2 = 1 indicates that v2 is adjacent to v1. The elements a_2,3, a_2,4, and a_2,5 share a common vertex: v2a1, v2b1, and v2c1 are adjacent to v2, which signifies a quaternary link. Similarly, a_19,8 = 1 and a_19,9 = 1 show that v4a1 and v4b1 are adjacent to v4; this is a ternary link. The all-zero diagonal implies that no vertex is adjacent to itself. Every column has a pair of elements that indicate the incidence of a vertex and an edge. The size of B is 26 × 25. In row two there are four incidences and in row nineteen there are three; the remaining rows have two. Four incidences signify that the vertex has four edges. The size of P is 25 × 25. It describes the sequences of trail, path, and walk found within the graph.

Fig. 2 The graph representation. It is a direct conversion from the structural kinematic representation. The legend describes the name of each object symbol

C. Synthetic Image
Solving equations (1) to (3) results in large matrices, which are difficult to read: one would have to identify a specific row and column to confirm an element's value. The linear representations, however, allow digital computation, and the unique patterns found in them provide information concerning the foot architecture. They show the patterns of bone-to-bone adjacency, bone-to-edge incidence, and the paths that emanate from the talus of the human foot.

A_IMAGE = [a_IMAGE,i,j], where a_IMAGE,i,j = 0 if a_i,j = 1, a_IMAGE,i,j = 255 if a_i,j = 0, and a_IMAGE,i,j = α (open) otherwise; a_IMAGE,i,j ∈ ℕ   (4)

B_IMAGE = [b_IMAGE,i,j], where b_IMAGE,i,j = 0 if b_i,j = 1, b_IMAGE,i,j = 255 if b_i,j = 0, and b_IMAGE,i,j = β (open) otherwise; b_IMAGE,i,j ∈ ℕ   (5)

P_IMAGE = [p_IMAGE,i,j], where p_IMAGE,i,j = 0 if p_i,j = 1, p_IMAGE,i,j = 255 if p_i,j = 0, and p_IMAGE,i,j = τ otherwise, with τ ≠ π (π open); p_IMAGE,i,j ∈ ℕ   (6)

We modify equations (1) to (3) into equations (4) to (6) to visualize those patterns. The results are characteristic images formatted in grayscale, on a scale of natural numbers from 0 to 255, where 0 characterizes pure black and 255 pure white. The α, β, and π are open variables for special uses. The images produced using these
equations would have apparent pixels, as seen in Fig. 3, that exhibit the patterns previously shown in the linear representations.
Fig. 3 The characteristic images: (a) vertex-to-vertex adjacency, (b) vertex-to-edge incidence, and (c) paths that emanate from the root
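As an illustrative sketch of equations (1) and (4), a small adjacency matrix can be built and then mapped to grayscale pixel values. The four-vertex toy graph below stands in for the paper's 26-vertex foot graph; the edge list is an assumption.

```python
# Sketch of eq. (1) (vertex adjacency) and eq. (4) (grayscale mapping:
# adjacent -> 0 black, non-adjacent -> 255 white). Toy graph only.
edges = [("v1", "v2"), ("v2", "v3"), ("v2", "v4")]  # v2 has three edges
vertices = ["v1", "v2", "v3", "v4"]
idx = {v: i for i, v in enumerate(vertices)}

n = len(vertices)
A = [[0] * n for _ in range(n)]            # eq. (1): symmetric, zero diagonal
for u, v in edges:
    A[idx[u]][idx[v]] = A[idx[v]][idx[u]] = 1

A_image = [[0 if a == 1 else 255 for a in row] for row in A]  # eq. (4)

print(A_image[0][1], A_image[0][0])  # adjacent pair -> 0; diagonal -> 255
```

Rendering A_image as pixels reproduces the kind of characteristic image shown in Fig. 3a: black where bones border each other, white elsewhere, with a white diagonal.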
in Fig. 4(c) is the foot-ankle. The loose objects are the remaining components of the prosthetic foot; it is deduced that these objects form the keel.
Fig. 4 The characteristic images: (a) the adjacency image showing the vertices in gray pixels (α=128) where fk exists; (b) the incidence image showing vertex-edge incidences in gray pixels (β=128) where fk exists; (c) the path image showing the vertex-edge paths
In Fig. 3a, A_IMAGE has 676 pixels. The element a_IMAGE,1,2 = 0, a pure black pixel, indicates that v2 is adjacent to v1; in other words, the talus borders the calcaneus. The pure white diagonal implies that no vertex mirrors itself. A_IMAGE has an "←" shape. In Fig. 3b, B_IMAGE has 650 pixels. At the row-column intersections, the vertices and edges meet. In row two there are four black pixels, indicating four incidences; in row nineteen there are three, and the remaining rows have two. B_IMAGE has a "ے" shape. In Fig. 3c, P_IMAGE has 625 pixels. The five light gray triangles depict the five digits of the foot, a common feature among primates. The image describes the possible paths that emanate from the root and terminate at v_j+1. These triangles represent the trails in the graph. Every trail, however, can only have unique elements; this distinguishes a path from a trail. For example, Trail-1 begins at e22a1 and terminates at v2a4 (the first triangle in Fig. 3c), whereas Path-1 begins at v2 and terminates at v2a4.
III. RESULTS
The adjacency image shown in Figure 4(a) exhibits gray pixels (α=128) at the vertices where the ground reaction force f_k should occur during a simple walk. Further, Fig. 4(b) shows the vertices where f_k acts on points near certain edges: it shows vertex-edge incidences in gray pixels (β=128) where f_k exists. Figure 4(c) carries a major amendment: it considers the useful sequence of f_k during a simple walk, based on the plots of Fig. 4(a) and Fig. 4(b). Therefore, the top left corner object
Figure 5 shows the design of the ankle mechanism and a standard keel part. This is the first laboratory prototype of the prosthetic foot.
Fig. 5 The laboratory prototype prosthetic foot design that is derived from the proposed numerical profile
IV. DISCUSSION AND CONCLUSION
At present, there is no standard method for prosthetic foot design. It is important that prosthetic feet be designed to resemble the normal foot to some degree, in outlook as well as in functionality. The outcome of this work shows that proper modeling of the foot can result in a justifiable prosthetic design in both outlook and functionality. The technique used in this work, however, may require further study because the foot prototype has not been tested on patients. At this stage, the technique is sufficient to justify that the prosthetic foot design should consist of an ankle-foot mechanism and a flexible keel.
ACKNOWLEDGMENT This work is supported in part by the Malaysian Ministry of Higher Education under Grant FRGS/2008/FKP-0069.
REFERENCES
1. Prince F, Winter DA, Sjonnensen G, et al. Mechanical efficiency during gait of adults with transtibial amputation: A pilot study comparing the SACH, Seattle, and Golden-Ankle prosthetic feet. J of Rehabilitation Research & Development. 1998; 35(2):177-185.
2. Twiste M, Rithalia S. Transverse rotation and longitudinal translation during prosthetic gait: A literature review. J of Rehabilitation Research & Development. 2003; 40(1):9-18.

Author 1: Ahmad Yusairi Bani Hashim
Institute: Department of Robotics & Automation, Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka
City: Durian Tunggal
Country: Malaysia
Email: [email protected]
Author 2: Noor Azuan Abu Osman
Institute: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]

Author 3: Wan Abu Bakar Wan Abas
Institute: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]

Author 4: Lydia Abdul Latif
Institute: Department of Rehabilitation Medicine, Faculty of Medicine, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Comparison Study of the Transradial Prosthetics and Body Powered Prosthetics Using Pressure Distribution Approach N.A. Abd Razak and N.A. Abu Osman Department of Biomedical Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— This paper presents a comparison of pressure distribution between a transradial prosthesis and a body-powered prosthesis, focusing on the transradial amputation level. The analysis depends chiefly on the amputation level and on the mass of the two prostheses. The body-powered prosthesis relies on the mechanical structure of a Bowden cable as its main system, while the transradial prosthesis uses a biomechatronic system as its main engineering principle. The paper briefly describes the differences in the pressure distribution experienced by the amputee when wearing each type of prosthetic hand. The data for both methods were analyzed using the F-Scan system to investigate which prosthetic hand is the more suitable for the amputee to wear.
Keywords— Transradial prosthetics, body powered prosthetics, pressure distribution.
I. INTRODUCTION
A. Prosthetic Hands
Prosthetic hands today have two characteristics: cosmetic appearance (expensive and kept clean) and functionality. A cosmetic prosthesis usually looks almost exactly like a real hand but is generally less functional (Schabowsky, 2008). It is typically made from a mold that is then cast in rubber or silicone (Stephanie, 2008). Although pleasing to look at and agreeable to the touch, its function is very limited and of little help in the real world. The cosmetic prosthesis, also known as a passive prosthesis, is the oldest type; the first recorded artificial hand was made for Marcus Sergius in the Punic War (218-201 BC). The device, which emerged as a gauntleted hand, or armored glove, was made by a skilled armorer (Julio, 2007). Although it represented a human hand, the static device allowed no hand movement and thereby made the prosthesis uncomfortable for its user.
Functional prosthetic hands can be classified into two groups: body-powered prostheses (using a tension cable) and externally powered prostheses (electrically powered). Body-powered prostheses themselves come in a few types, according to the needs of the amputee (Stark, 2004). The advantages are common to all types: they are moderate in cost and weight, provide high sensory feedback, and are easy to learn. The disadvantages are that they are not very pleasing cosmetically and require the user to learn, or build up muscle for, a few gross limb movements (Controzzi, 2008). With a body-powered prosthesis, the patient uses his or her own muscles to produce a few motions. Usually a motion requires a high force at the shoulder to pull the tension cable (known as a Bowden cable) through to the assigned task. The common tasks achievable with a body-powered prosthesis are pick-and-place movements of the arm and movement of the transradial part, that is, pronation and supination.
B. Body-Powered Prosthetics
The body-powered prosthesis was invented by a Berlin dentist, Peter Baliff, in 1812 and was operated by attaching straps to the trunk of the amputee's body (Stephanie, 2008). Current devices are modified by rounding the straps over the amputee's back and shoulders. It is a functional prosthesis operated by a cable-and-harness system that requires movement of the amputee's body, such as moving the shoulders or arm, to pull the cable and make the terminal device open or close. For a transradial (below-elbow) prosthesis, only one Bowden cable is usually needed, to perform the open and close function. For a transhumeral (above-elbow) prosthesis, two Bowden cables are required, one for transhumeral rotation and one for the transradial open and close function (Controzzi, 2008). The Bowden cable acts as the control system; it is attached at a few appropriate terminal fittings that anchor the cable to the body harness.
C. Transradial Prosthetics
Prosthetic hands meet different levels of criteria, with many functions and configurations.
The transradial prosthetic hand,
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 743–746, 2011. www.springerlink.com
also known as a below-elbow prosthesis, needs further development (Stephanie, 2008). Prosthetic hands today aim more for functionality than for cosmetic appeal (Weir, 2006), and some designs focus on the electronic controller and system rather than the outer part; Leow (2010) used a brain-computer interface (BCI) to design a low-cost, efficient electronic system. As a result, many amputees avoid wearing robotic prosthetic hands because they feel they are being controlled by the robot rather than controlling it themselves (Controzzi, 2008). The combination of a mechanical design with a light electronic controller, known as biomechatronics, focuses on meeting the needs of the amputee (Tenore, 2008).
II. METHODOLOGY
The socket prototype design of the transradial prosthesis was tested for the interface pressure distribution between the socket and the stump of the user. This procedure is needed to investigate whether wearing the transradial prosthesis has a positive or a negative impact on the user in terms of the pressure acquired. The experimental setup used the Tekscan Inc. F-Socket sensor (9811E). The F-Socket was chosen to determine the surface pressure because of its flexibility, rectangular printed layout, and low thickness. With a printed circuit only about 0.18 mm thick, the sensor easily fits in the gap between the socket and the surface of the residual limb. For the transradial case, the F-Socket is placed exactly below the elbow; only one F-Socket is needed to cover almost the entire socket surface attached at the amputation level. The F-Socket sensor operates with the F-Scan software and is connected to the rear of the PC via a 762 mm cable and a cuff unit (98 x 64 x 29 mm). The cable converts the analog signal to a digital signal so that the data can be read by the PC. To obtain the most accurate and reliable results, some precautions must be taken, such as making sure all the connections are well organized. The cable may be tightened to the amputee's body, provided the surface of the sensor is not disturbed. For this study, data were received from 5 trials recorded by the F-Scan software. Note that between trials there was no need to change the sensor or recalibrate. For the transition between the socket and the residual limb, the sensors already on the transradial limb were removed only if any sensor was found to be faulty. With the weight of the remaining residual limb and the area of the F-Socket sensor attached to it, the pressure can be found. According to the anthropometric
theorem, the remaining residual limb is about 10 cm long instead of 25 cm. The experimental procedure began by measuring the subject's height and weight and calculating the body segments. Only one part of the F-Socket sensor was used, since the remaining residual limb is very short.
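The pressure estimate described above, the supported load divided by the sensor contact area (p = F/A = m·g/A), can be sketched as follows. The mass and area values below are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: mean interface pressure from a supported load resting
# on a known sensor contact area (p = m*g / A).
G = 9.81  # gravitational acceleration, m/s^2

def interface_pressure_kpa(supported_mass_kg: float,
                           contact_area_m2: float) -> float:
    """Mean interface pressure in kPa for a load over the sensor area."""
    force_n = supported_mass_kg * G
    return force_n / contact_area_m2 / 1000.0  # Pa -> kPa

# e.g. ~1 kg of residual-limb plus prosthesis load over ~0.002 m^2 of sensor
print(interface_pressure_kpa(1.0, 0.002))  # on the order of a few kPa
```

This gives only a mean value; the F-Scan maps discussed next resolve how that pressure is actually distributed over the sensel grid.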
III. RESULTS AND DISCUSSION
The first part determines the pressure at the residual limb when the body-powered prosthesis is worn. The results appear in the F-Scan software as 2D pressure distribution maps. The results displayed in fig. 2 are based on 1 of the 3 best trials. The socket pressures are illustrated from left to right; these pressure maps were obtained by combining color-coded sensor images from the sensors placed at the respective regions of the residual limb. Pressure maps were obtained from one trial of one subject. The length of the sensor is divided into six equal rows and four equal columns of boxes, each acting as a window representing one sensor point. The peak contact pressure recorded for each box is displayed at the top right corner of the respective box. The color displayed in each box represents the pressure sensed at each sensel location. The capture is shown in color against the force reference in fig. 1. The legend (fig. 1) shows the correlation between color and pressure values used to produce the pressure distribution map. The upper threshold is displayed as red, while white areas indicate that the pressure in the region is below the minimum measurable threshold pressure of 0 kPa. Since only one part of the sensor was used, the result shown in fig. 1 represents the pressure sensed at each sensel location.
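The box windowing described above can be sketched as follows, assuming the sensel map arrives as a 2D array of readings. The 12 x 8 toy array and the function name are illustrative, not F-Scan's actual export format.

```python
# Sketch: partition a sensel map into a 6 x 4 grid of boxes and report
# the peak reading in each box, as the displayed pressure maps do.
def peak_per_box(readings, n_box_rows=6, n_box_cols=4):
    """Max reading in each box of an n_box_rows x n_box_cols partition."""
    rows, cols = len(readings), len(readings[0])
    rh, cw = rows // n_box_rows, cols // n_box_cols  # box height and width
    peaks = []
    for br in range(n_box_rows):
        row_peaks = []
        for bc in range(n_box_cols):
            window = [readings[r][c]
                      for r in range(br * rh, (br + 1) * rh)
                      for c in range(bc * cw, (bc + 1) * cw)]
            row_peaks.append(max(window))
        peaks.append(row_peaks)
    return peaks

# Synthetic 12 x 8 sensel map; each box then covers a 2 x 2 patch.
readings = [[(r * 8 + c) % 11 for c in range(8)] for r in range(12)]
peaks = peak_per_box(readings)
print(len(peaks), len(peaks[0]))  # -> 6 4
```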
Fig. 1 The legend for the interface pressure distribution
Fig. 2 shows the pressure distribution for the body-powered prosthesis, taking the sensor mapping in fig. 1 as the reference; each box represents one sensor point. For the body-powered prosthesis, the results show that after a few trials the pressure contributed by wearing the prosthesis is distributed around the residual limb. The pressure captured by the F-Scan software was about 6 kPa in the second trial (see table 1). The third trial gave a similar result of 5 kPa (see table 1), but the final trial gave a maximum pressure of about 8 kPa (see table 1). As recorded, almost all five trials showed pressure points all around the part of the residual limb attached to the prosthesis. The maximum pressure occurred below the elbow on the outer part. This pressure may be due to contact with the ulna, which is still active in generating motion, but pressure occurred over almost all of the residual area. This result shows how much force and pressure the amputee experiences simply by wearing the prosthesis statically, without performing any motion. That is why this body-powered type needs considerable rehabilitation and muscle training for the user to adapt to the prosthesis; in the end, the shoulder muscles become unbalanced. The design of the prosthesis itself is also not very comfortable, as it focuses only on functionality and fails to balance this with cosmetic appearance. The socket was not well fitted to the amputee and relied mostly on the softness of the stockinet. The pressure increasing trial after trial shows how uncomfortable the amputee is when wearing it; as a result, the amputee refused to wear it for long and, from his feedback, wore it for 2 hours per task. The results also show that the pressured area increased accordingly. The weight of the prosthesis also contributes to the pressure: increasing the weight increases the force and, with it, the pressure. Based on anthropometric data, a prosthetic hand is much heavier than a normal hand.
For the transradial prosthesis, the pressure distribution occurs at only a few areas, as shown in fig. 3. The pressure appears at the radius rather than the ulna. But that pressure was not due to wearing the transradial prosthesis; it came from the bones, because the amputee suffers a congenital fracture of the radius. The results also show that the pressured areas were only small points, owing to the radius pressing in a position unsupported by the design. The weight of the transradial prosthesis contributed a little contact pressure, but the weight is still acceptable, being much lighter than the body-powered prosthesis. Figs. 2 and 3 compare the pressure distribution curves for the body-powered prosthesis and
the transradial prosthesis. The legend colors in the figures were obtained from 1 of the 5 trials conducted in each experiment. The shapes of the interface pressure distributions differed between the sockets attached to the residual limb. The pressure distribution for the body-powered prosthesis (fig. 2) gave higher peak values; the results show how the pressure increased while the amputee wore it, distributed between 8-10 kPa. The value differs slightly from that of the transradial prosthesis, which distributed the pressure around 8 kPa (fig. 3). But the pressure occurred in different locations and for different reasons. The body-powered prosthesis was shown to exert pressure on the wearer because of its socket and weight, whereas the socket of the transradial prosthesis proved comfortable to wear, with the observed pressure arising from the bone fracture rather than from the socket design or the weight of the transradial prosthesis.
Fig. 2 The highest pressure, about 6 kPa, was found in the first trial over the ulna, at the third and fourth columns of the first row (yellow colour)
Fig. 3 The pressure distribution of about 10 kPa, where the highest pressure was detected between the first column and the fourth and fifth rows (red area)
IV. CONCLUSION
Based on this experimental study, the newly developed transradial prosthesis is shown to have been designed according to the needs and comfort of the amputee. When wearing the transradial prosthesis, the amputee does not have to worry about training, precautions while wearing, or after-effects of wearing.
ACKNOWLEDGMENT
I would like to thank the technicians of the Motion Analysis Laboratory, Department of Biomedical Engineering, University of Malaya, for their contribution to the data gathering.
REFERENCES
1. Stephanie L. Carey, M. Jason Highsmith, Murray E. Maitland, Rajiv V. Dubey, Compensatory movements of transradial prosthesis users during common tasks, Clinical Biomechanics, Volume 23, Issue 9, November 2008, Pages 1128-1135, ISSN 0268-0033, DOI: 10.1016/j.clinbiomech.2008.05.008.
2. Weir, R. F. ff. (2003): Design of Artificial Arms and Hands for Prosthetic Applications. Chapter 32 in Standard Handbook of Biomedical Engineering & Design, Myer Kutz (Ed.), McGraw-Hill, New York, pp. 32.1-32.61.
3. Leow, R. S., Moghavvemi, M. & Ibrahim, F., An efficient low-cost real-time brain computer interface system based on SSVEP, IEICE Electron. Express, Vol. 7, No. 5, pp. 326-331, 2010.
4. M. Controzzi, C. Cipriani, and M. C. Carrozza, Mechatronic design of a transradial cybernetic hand, in Proc. 2008 IEEE/RSJ Int. Conf. Intell. Robots Syst., 2008, pp. 576-581.
5. Yahud, S., and N. A. Abu Osman (2007). Prosthetic hand for the brain-computer interface system. IFMBE Proceedings 15:643-646. Springer, Berlin.
6. E. A. Biddiss and T. T. Chau, Upper limb prosthesis use and abandonment: A survey of the last 25 years, Prosthetics and Orthotics Int'l 31 (3), pp. 236-257, 2007. doi:10.1080/03093640600994581
7. Laura A. Miller, Robert D. Lipschutz, Kathy A. Stubblefield, Blair A. Lock, He Huang, T. Walley Williams III, Richard F. Weir, and Todd A. Kuiken (November 2008). Control of a six degree of freedom prosthetic arm after targeted muscle reinnervation surgery. Arch Phys Med Rehabil, 89, 2057-2065.
8. Xin Chen, Y.-P. Zheng, Jing-Yi Guo, Jun Shi, Sonomyography (SMG) control for powered prosthetic hand: a study with normal subjects. Ultrasound in Medicine and Biology, 2010, 36(7).
Address of the corresponding author: Author: Nasrul Anuar Abd Razak Institute: Department of Biomedical Engineering, University of Malaya Street: 50603 City: Kuala Lumpur Country: Malaysia Email: [email protected]
IFMBE Proceedings Vol. 35
Effect of Position in Fixed Screw on Prosthetic Temporomandibular Joint P.H. Liu1, T.H. Huang1, and J.S. Huang2
1 Department of Biomedical Engineering, I-Shou University, Kaohsiung, Taiwan, R.O.C.
2 Institute of Oral Medicine, National Cheng Kung University, Tainan, Taiwan, R.O.C.
Abstract— The purpose of this study was to investigate stress distributions in the mandible around the screw holes, the condylar prosthesis and the fixation screws for three distributions of screw positions in prosthetic temporomandibular joint (TMJ) replacement. The finite element model consisted of a mandible with a condylar resection defect (including cortical and cancellous bone), the condylar prosthesis and the fixation screws. The five principal muscles were applied as the loading condition to simulate mouth closing, and the boundary condition was fixed at the condylar and incisive regions of the mandible, with the mandible positioned at 5 mm of mouth opening. The finite element model comprised 135,466 elements and 214,575 nodes. Peak stress concentrations in the mandible around the screw holes were detected mainly at the most anterior screw in all of the FE models except model A. The trend of the maximum stress at the fixation screws in the three models was similar to that in the mandible. This study concluded that the screw distribution of model B was the better option for decreasing stress-induced failure. Keywords— Finite element analysis, Total temporomandibular joint, Stress distribution.
I. INTRODUCTION The temporomandibular joint (TMJ) is a particular joint that acts as both a hinge and a sliding joint [1]. TMJ dysfunction is a frequent finding in clinical work; severe cases generally result in orofacial pain and restricted mouth opening [2]. Total TMJ replacement has shown satisfactory improvements in both pain levels and jaw function, but failure of the TMJ prosthesis still occurs after replacement surgery [3]. Most of the research regarding the TMJ prosthesis consists of clinical reports [4], with few studies investigating stress distribution in the TMJ prosthesis. Therefore, the purpose of this study was to investigate the relationship between stress distribution and the positions of the fixation screws by three-dimensional finite element analysis (FEA).
II. MATERIALS AND METHODS A mandibular CT image of a 42-year-old female subject with TMJ dysfunction was selected. The mandible with a right unilateral condylar resection defect, including the material properties of cortical and cancellous bone, was reconstructed for 3D finite element analysis. Following the clinical protocol, the FE model of the TMJ condylar prosthesis combined with five fixation screws was analyzed to investigate the influence of three types of screw positions (Fig. 1). The FE models of the TMJ condylar prosthesis (Biomet/Lorenz, Warsaw, IN, USA) and the fixation screws were created with CAD software (Solidworks 2010, SolidWizard Co.).
Fig. 1 The distribution of five fixed screws (black circle) in three types of model A, B and C (from left to right)

The TMJ implant of the condylar prosthesis has a straight flat body and a condylar head, as well as 9 aligned screw holes. For the boundary conditions, the incisive and condylar regions were fixed in three directions. The five principal muscles were applied as the loading condition to simulate mouth closing and were defined on the FE model (Table 1). The boundary condition was fixed at the condylar and incisor regions of the mandible, with the mandible positioned at 5 mm of mouth opening (Fig. 2). The Young's moduli of the TMJ prosthesis, fixation screws, cortical bone and cancellous bone of the mandible were 111,000, 111,000, 12,800 and 1,280 MPa respectively, and a Poisson's ratio of 0.3 was assigned to the prosthesis and bone of the model [5]. The total numbers of elements and nodes generated in the FE model were 135,466 and 214,575.

Table 1 Five muscle forces applied for mouth closing
Muscles                F (N)
Deep Masseter          219.83
Superficial Masseter   184.34
Medial Pterygoid       274.82
Temporalis               0.3973
Medial Temporal          9.4165

N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 747–749, 2011. www.springerlink.com
Fig. 2 Mesh model, loading conditions (red arrows) and boundary conditions (blue triangles) of FEA
III. RESULTS AND DISCUSSION Figure 3 presents the von Mises stress in the mandible around the screw holes, comparing the three models with different screw positions. The maximum stresses in the mandible were detected in model A around screw hole 8 (175.4 MPa), in model B around screw hole 5 (141.85 MPa) and in model C around screw hole 4 (127.84 MPa). The peak von Mises stresses of the condylar prosthesis in models A (699.73 MPa), B (640.14 MPa) and C (608.68 MPa) were found at the inferior region of the prosthesis near screw hole 9, the posterior region of the prosthetic neck and the anterior region of the prosthetic neck, respectively (Fig. 4). The maximum von Mises stresses of the fixation screws were 628.11 MPa at screw 8 in model A, 768.11 MPa at screw 5 in model B and 1161 MPa at screw 4 in model C (Fig. 5). Bone resorption around a screw, and screw loosening, have been studied as failure factors in total joint replacement, so the stress concentration around the fixation screws can serve as an index for evaluating the possibility of failure after TMJ prosthesis replacement. Accordingly, the screw distribution of model C appeared more suitable for decreasing the stress concentration around the screw holes, while the screw distribution of model A could be better for preventing screw fracture because of its lower peak screw stress. Analysis of the different screw-insertion positions showed that the magnitudes of the peak stress in the three condylar prostheses did not differ significantly, but the regions of peak stress differed slightly among the three FE models.
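The von Mises values quoted here come from the FE package's post-processing; for readers unfamiliar with the quantity, it can be computed from the six Cauchy stress components. A minimal illustrative sketch, not part of the study's workflow:

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components (MPa)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: the equivalent stress reduces to the applied stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # -> 100.0
```

The uniaxial case is a quick sanity check on any implementation; pure shear should give sqrt(3) times the shear stress.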
Fig. 3 The Von Mises stress of the mandible around screw holes
Fig. 4 The Von Mises stress of the condylar prostheses
IV. CONCLUSIONS The different positions of the five fixation screws were shown to influence the stress distribution in the bone and the screws. Weighing the risks of implant fracture and screw loosening in prosthetic TMJ replacement, the screw positions of model B could provide the better compromise in stress reduction.
ACKNOWLEDGMENT This research was supported by a grant from the National Science Council of Taiwan, NSC 99-2221-E-214-042.
REFERENCES 1. Guarda-Nardini L, Manfredini D, Ferronato G (2008) Total temporomandibular joint replacement: a clinical case with a proposal for post-surgical rehabilitation. J Craniomaxillofac Surg 36:403–409 2. van Loon JP, Bont LG, Stegenga B, Spijkervet FK, Verkerke GJ (2002) Groningen temporomandibular joint prosthesis. Development and first clinical application. Int J Oral Maxillofac Surg 31:44–52 3. Yuan K, Lee TM, Huang JS (2010) Temporomandibular joint reconstruction: alloplastic prosthesis to bioengineering tissue. J Med Biol Eng 30:65–72 4. Mishima K, Yamada T, Sugahara T (2003) Evaluation of respiratory status and mandibular movement after total temporomandibular joint replacement in patients with rheumatoid arthritis. Int J Oral Maxillofac Surg 32:275-275 5. Field C (2009) Mechanical response to orthodontic loading: a 3-dimensional finite element multi-tooth model. Am J Orthod Dentofacial Orthop 135:174–181
Fig. 5 The Von Mises stress of the five fixed screws
Author: Pao-Hsin Liu Institute: I-Shou University Street: No. 8, Yida Rd., Jiaosu Village, Yanchao District City: Kaohsiung Country: Taiwan, R.O.C. Email: [email protected]
Evaluation of EMG Feature Extraction for Movement Control of Upper Limb Prostheses Based on Class Separation Index A. Phinyomark, S. Hirunviriya, A. Nuidod, P. Phukpattaranont, and C. Limsakul Department of Electrical Engineering, Prince of Songkla University, Songkhla, Thailand [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract— To control upper-limb prostheses from surface electromyography (EMG) signals of movement actions, the first and most significant step is the extraction of efficient features. In this paper, an evaluation of various existing time- and frequency-domain EMG features is carried out using a statistical criterion, namely the RES index: the ratio of the Euclidean distance to the standard deviation. The RES index reflects the distance between movement scatter groups and directly addresses the variation of a feature within the same group. Moreover, evaluation of EMG features with a statistical index does not depend on the classifier type. EMG signals recorded from ten subjects were employed, with seven upper-limb movements and eight muscle positions. Fifteen features that have been widely used to classify EMG signals were tested with three real-time window sizes: 256, 128, and 64 samples. From the experimental results, the Willison amplitude (WAMP) with a threshold value of 0.025 volts shows the best class-separation performance compared to the other features. Waveform length (WL) and root mean square are useful augmenting features. Two efficient features, i.e., WAMP and WL, are suggested as a feature vector for the EMG recognition system; this should yield high classification accuracy and be feasible for real-time control. Moreover, the effect of the window size depends on the type of feature. Keywords— Electromyography (EMG) signal, feature extraction, feature selection, cluster index, prosthesis.
I. INTRODUCTION In recent years, the surface electromyography (EMG) signal has been widely used in many engineering and clinical applications. It contains a great deal of information about the muscles. However, it contains not only useful information but also a variety of noise and interference, which makes analysis of the EMG signal difficult. Generally, in designing an EMG recognition system there are two main issues that should be carefully addressed: feature selection and classifier design. In this study we are interested in the first issue. Feature selection can be based on two criteria: a measure of classification accuracy obtained from the classifier, or a measure of discrimination in the feature space using a statistical index [1]. The first criterion has the major disadvantage that the evaluation of the EMG features depends on the classifier type; the second criterion, the statistical index, is not problematic in this way and tries to quantify the suitability of the feature space [2]. In the literature there are many statistical indexes for the evaluation of EMG features, such as the Davies-Bouldin index [3], the scattering index [4], Fisher's linear discriminant index [5], the Bhattacharyya distance [6], and a fuzzy-entropy-based feature evaluation index [7]. Moreover, in our previous work we proposed a statistical index, namely the ratio of Euclidean distance to standard deviation (RES index) [8]. The most significant advantage of the proposed index is that it is simple to implement and compute, and the experimental results showed that this index followed the same trend as evaluation with an efficient classifier, namely the support vector machine. However, in our previous work the EMG signals were treated as short transient signals (short dynamic movements): only 256 ms of data after the trigger or onset activity was used as the representative action EMG signal [8]. But in the control of a prosthetic device, the EMG signal can be of the long-transient or steady-state type [9]. Therefore, in this study each movement is maintained for a long duration (long dynamic movement).
II. EXPERIMENTS AND DATA ACQUISITION The EMG signals used in the evaluation were recorded from ten normal subjects with seven upper-limb movements and eight muscle positions. The upper-limb movements, including wrist flexion (wf), wrist extension (we), hand close (hc), hand open (ho), forearm pronation (fp), and forearm supination (fs), are shown in Fig. 1, and the eight muscle positions, located on the right arm, are shown in Fig. 2. These data sets were acquired by Carleton University in Canada [10]. A duo-trode Ag-AgCl surface electrode (Myotronics, 6140) was used and an
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 750–754, 2011. www.springerlink.com
Ag-AgCl Red-Dot surface electrode (3M, 2237) was placed on the wrist to provide a common ground reference. The system used a band-pass filter with a 1–1000 Hz bandwidth and an amplifier with 60 dB gain (Grass Telefactor, Model 15). The EMG signals were sampled using an analog-to-digital converter board (National Instruments, PCI-6071E) at a sampling frequency of 3 kHz. However, in order to reduce the computational time and the effect of noise, the EMG data were down-sampled from 3 kHz to 1 kHz, and a band-pass filter in the range of 20 to 500 Hz was also applied. There are ninety-six data sets per movement for each subject, and each data set contains the action EMG signal over a 3-second duration.
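The down-sampling and 20–500 Hz band-pass step can be sketched as follows; the filter family, filter order, and zero-phase application are our assumptions, since the paper does not specify them:

```python
import numpy as np
from scipy import signal

def preprocess_emg(x, fs_in=3000, fs_out=1000, band=(20.0, 500.0)):
    """Band-pass an EMG record and down-sample it, as described in the text.

    Assumptions (not stated in the paper): a 4th-order Butterworth filter,
    applied forward-backward for zero phase, before decimation.
    """
    nyq = fs_in / 2.0
    b, a = signal.butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    x = signal.filtfilt(b, a, x)
    # Decimate 3 kHz -> 1 kHz (factor 3); decimate applies its own anti-alias filter.
    return signal.decimate(x, fs_in // fs_out)

rng = np.random.default_rng(0)
raw = rng.standard_normal(9000)   # 3 s of synthetic "EMG" at 3 kHz
clean = preprocess_emg(raw)
print(len(clean))                 # 3000 samples: 3 s at 1 kHz
```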
Fig. 1 Estimated six upper-limb movements (a) wf (b) we (c) hc (d) ho (e) fp (f) fs [11]

Fig. 2 The eight muscle positions on the right arm [11]

III. METHODS

A. Feature Extraction Methods

In this study we evaluate different kinds of features that have been widely used in EMG control of upper-limb prostheses and that are up to date with the techniques available today [1-11]. Fifteen features from the time and frequency domains are used in the evaluation; their mathematical definitions are presented in Table 1. All of the introduced features can be implemented in real-time applications. The thirteen time-domain features are integrated EMG, mean absolute value, modified mean absolute value 1 and 2, mean absolute value slope, simple square integral, variance of EMG, root mean square, waveform length, zero crossing, slope sign change, Willison amplitude, and the auto-regressive model. The two frequency-domain features are mean frequency and median frequency. In addition, some specific parameters of the feature methods are fixed: the number of segments I of MAVS is 2, the order of the AR model is 1, and the threshold parameter of ZC, SSC and WAMP is chosen between 10 and 100 mV.

Table 1 Mathematical definitions of the EMG feature extraction methods. Here $x_n$ is the EMG signal in segment n, N is the length of the EMG signal, $w_n$ is the continuous weighting window function (in the AR model, $w_n$ is the white-noise error term), I is the number of segments covering the EMG signal, $a_i$ are the linear predictive coefficients, $P_j$ is the EMG power spectrum at frequency bin j, $f_j$ is the frequency of the power spectrum at bin j, and M is the number of frequency bins.

Integrated EMG (IEMG):  $IEMG = \sum_{n=1}^{N} |x_n|$
Mean absolute value (MAV):  $MAV = \frac{1}{N} \sum_{n=1}^{N} |x_n|$
Modified mean absolute value 1 (MAV1):  $MAV1 = \frac{1}{N} \sum_{n=1}^{N} w_n |x_n|$, where $w_n = 1$ if $0.25N \le n \le 0.75N$, and $0.5$ otherwise
Modified mean absolute value 2 (MAV2):  $MAV2 = \frac{1}{N} \sum_{n=1}^{N} w_n |x_n|$, where $w_n = 1$ if $0.25N \le n \le 0.75N$, $4n/N$ if $n < 0.25N$, and $4(n-N)/N$ if $n > 0.75N$
Mean absolute value slope (MAVS):  $MAVS_i = MAV_{i+1} - MAV_i$, $i = 1, \ldots, I-1$
Simple square integral (SSI):  $SSI = \sum_{n=1}^{N} x_n^2$
Variance of EMG (VAR):  $VAR = \frac{1}{N-1} \sum_{n=1}^{N} x_n^2$
Root mean square (RMS):  $RMS = \sqrt{\frac{1}{N} \sum_{n=1}^{N} x_n^2}$
Waveform length (WL):  $WL = \sum_{n=1}^{N-1} |x_{n+1} - x_n|$
Zero crossing (ZC):  $ZC = \sum_{n=1}^{N-1} \left[ \mathrm{sgn}(x_n \times x_{n+1}) \cap |x_n - x_{n+1}| \ge \mathrm{threshold} \right]$, where $\mathrm{sgn}(x) = 1$ if $x \ge \mathrm{threshold}$, and $0$ otherwise
Slope sign change (SSC):  $SSC = \sum_{n=2}^{N-1} f\left[ (x_n - x_{n-1}) \times (x_n - x_{n+1}) \right]$, where $f(x) = 1$ if $x \ge \mathrm{threshold}$, and $0$ otherwise
Willison amplitude (WAMP):  $WAMP = \sum_{n=1}^{N-1} f\left( |x_n - x_{n+1}| \right)$, where $f(x) = 1$ if $x \ge \mathrm{threshold}$, and $0$ otherwise
Auto-regressive (AR) coefficients:  $x_n = -\sum_{i=1}^{p} a_i x_{n-i} + w_n$
Median frequency (MDF):  $\sum_{j=1}^{MDF} P_j = \sum_{j=MDF}^{M} P_j = \frac{1}{2} \sum_{j=1}^{M} P_j$
Mean frequency (MNF):  $MNF = \sum_{j=1}^{M} f_j P_j \big/ \sum_{j=1}^{M} P_j$

B. Evaluation Criterion

The EMG signals in this study were acquired over a long movement duration, which we call a "long dynamic movement". This differs from our previous study [8], in which each movement was performed over a short duration; the difference between short and long dynamic movements can be observed in Fig. 3. However, to achieve a real-time system the decision process should finish within 1/3 s, so the window size should be less than 300 ms. In this study we propose three window sizes, i.e., 256, 128, and 64 ms. Disjoint segmentation was employed to obtain a series of features from the long EMG data. We note that when a muscle contraction is maintained for a long period, the amplitude of the EMG signal drops, which makes it more difficult to classify the correct movement; however, if the system can recognize long movements, the utility of the control system increases.

Fig. 3 The EMG signals recorded from wrist flexion movement with (up) short dynamic movement, 0.3 s in duration (down) long dynamic movement, 3 s in duration

To quantify the performance of the EMG features, class separability is the main concern. Good class separation means that the classification accuracy will be as high as possible; in other words, the maximum separation between classes is obtained together with a small variation within each class across the subject experiments. In this study we used the scatter graph and the RES index (a statistical measurement method) as the evaluation criteria. The definition of the RES index [8] used in this study is as follows. The EMG features in matrix form can be expressed as

$F^k = \begin{bmatrix} f_{1,1}^k & f_{1,2}^k & \cdots & f_{1,J}^k \\ f_{2,1}^k & f_{2,2}^k & \cdots & f_{2,J}^k \\ \vdots & \vdots & & \vdots \\ f_{I,1}^k & f_{I,2}^k & \cdots & f_{I,J}^k \end{bmatrix}$,   (1)

where f is the EMG feature, i is the channel number (1 ≤ i ≤ I, I = 8), j is the window number (1 ≤ j ≤ J, J = floor(L/N)), N is the length of the window, L is the whole data length of each EMG motion (L ≈ 3072) and k is the motion number (1 ≤ k ≤ K, K = 6). Note that the EMG feature values from each channel of all motions were normalized to the range 0 to 1:

$f_{norm} = \frac{f - \min(f)}{\max(f) - \min(f)}$.   (2)

The average of the EMG feature values of each channel is given by

$\bar{F}^k = \left[ \bar{f}_1^k, \ldots, \bar{f}_I^k \right]^T$,   (3)

where $\bar{f}_i^k$ is calculated from the definitions in Table 1. The standard deviation of the EMG feature values of each channel is defined as

$S^k = \left[ s_1^k, \ldots, s_I^k \right]^T$,   (4)

where

$s_i^k = \sqrt{\frac{1}{J} \sum_{j=1}^{J} \left( f_{i,j}^k - \bar{f}_i^k \right)^2}$.   (5)

The RES index can then be expressed as

$\mathrm{RES\ index} = \frac{ED}{\sigma}$,   (6)

where

$ED = \frac{2}{K(K-1)} \sum_{p=1}^{K-1} \sum_{q=p+1}^{K} \sqrt{\left( \bar{f}_1^p - \bar{f}_1^q \right)^2 + \cdots + \left( \bar{f}_I^p - \bar{f}_I^q \right)^2}$,   (7)

$\sigma = \frac{1}{IK} \sum_{i=1}^{I} \sum_{k=1}^{K} s_i^k$,   (8)

and p and q are motion numbers (1 = wf, 2 = we, 3 = hc, 4 = ho, 5 = fp, and 6 = fs). Performance is best when the RES index is high. Moreover, this index was shown in previous work to exhibit the same trend as efficient classifiers.
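A few of the Table 1 time-domain features, together with the disjoint segmentation used to produce a feature series, can be sketched in Python (an illustrative re-implementation; the function names are ours, not the authors'):

```python
import numpy as np

def mav(x):
    """Mean absolute value (Table 1)."""
    return np.mean(np.abs(x))

def wl(x):
    """Waveform length: cumulative length of the signal trace."""
    return np.sum(np.abs(np.diff(x)))

def wamp(x, threshold=0.025):
    """Willison amplitude: count of consecutive-sample jumps >= threshold."""
    return int(np.sum(np.abs(np.diff(x)) >= threshold))

def rms(x):
    """Root mean square amplitude."""
    return np.sqrt(np.mean(x ** 2))

def disjoint_windows(x, size):
    """Disjoint segmentation: J = floor(L / N) non-overlapping windows."""
    j = len(x) // size
    return x[: j * size].reshape(j, size)

# Feature series from 3 s of a synthetic signal at 1 kHz with 128-sample windows.
x = np.sin(2 * np.pi * 50 * np.arange(3000) / 1000.0)
feats = np.array([[mav(w), wl(w), wamp(w), rms(w)] for w in disjoint_windows(x, 128)])
print(feats.shape)  # (23, 4): 23 windows x 4 features for one channel
```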
IV. RESULTS AND DISCUSSION To demonstrate classification performance, we used the RES index to indicate the quality of class separation rather than relying only on observation of the scatter plots. From the experimental results, WAMP is the best feature compared to the other EMG features, as can be observed in Figs. 4(a) to 4(c). WAMP with a 0.025 V threshold obtains RES indexes in the range 3.8-7.2, with an average of approximately 5.1-5.3; its RES index is higher than that of the secondary feature group by about 1.0. WL, RMS, MAV, IEMG, and MAV1 form the secondary feature group; their RES indexes are greater than 4.0 at the 128-ms and 64-ms window sizes. Moreover, they provide only one feature per channel, which is small enough to combine with other features into a more powerful feature vector without increasing the computational burden of the classifier. ZC with a 0.005 V threshold, MAV2, VAR, and SSI are close behind the secondary feature group, with RES indexes of approximately 3.5. The remaining features obtain poor RES indexes and are not recommended for use in a feature vector. However, to reduce the feature vector to the smallest dimension, features that show the same pattern in the scatter plots can be removed. In more detail, among the time-domain features calculated from signal amplitude, the features in the secondary group have the same pattern, so only the best one in this group is recommended: the WL feature with a 128-ms window. In addition, among the time-domain features that carry frequency information, WAMP has better cluster separability than ZC and SSC; the optimal threshold value of WAMP is about 25 mV, and the optimal threshold values of ZC and SSC are 5 and 2 mV, respectively. Furthermore, the modified versions of the MAV are worse than the traditional version. All of the frequency-domain features show poor class separability, although the MDF obtains a larger RES index than the MNF. The MAVS shows the worst performance of all the features; its maximum RES index is only 1.5. Additionally, the AR and MAVS features in this study used first order and two segments, respectively, to obtain only one feature per channel; increasing the AR order and the number of MAVS segments may improve the classification results in future tests. The effect of the window size does not follow the same trend for every feature: VAR, SSI, SSC, MDF, and MNF obtain higher RES indexes with the 256-ms window; WAMP, WL, RMS, MAV1, MAV2, and ZC obtain higher values with the 128-ms window; and MAV, IEMG, AR, and MAVS obtain high values with the 64-ms window. From the experimental results and the discussion above, we recommend that WAMP with a 0.025 V threshold and WL together would make an efficient feature vector, which should provide high classification accuracy. Moreover, the features suggested in this study are the same as those recommended in our previous study [8]: the order of the best features differs, but the better feature group is the same. Thus, whether the EMG signal comes from short or long dynamic contractions does not affect the evaluation of the feature methods. In future work, other features reported in the literature should be evaluated to find better ones, and combinations of useful features should be tested with high-performing classifiers to find the optimal feature vector for the EMG recognition system.
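The RES index of Eqs. (6)-(8) can be sketched as follows (an illustrative re-implementation; the per-class feature matrices and the synthetic data are ours):

```python
import numpy as np

def res_index(features):
    """RES index (Eqs. 6-8): mean between-class Euclidean distance over
    mean within-class standard deviation.

    `features[k]` is a (J, I) array: J windows x I channels for motion k.
    """
    means = np.array([f.mean(axis=0) for f in features])   # (K, I) class centroids
    K = len(features)
    # Eq. 7: average Euclidean distance between all pairs of class centroids.
    ed = 0.0
    for p in range(K - 1):
        for q in range(p + 1, K):
            ed += np.linalg.norm(means[p] - means[q])
    ed *= 2.0 / (K * (K - 1))
    # Eq. 8: average per-channel standard deviation within each class
    # (population std, i.e. the 1/J form of Eq. 5).
    sigma = np.mean([f.std(axis=0) for f in features])
    return ed / sigma

rng = np.random.default_rng(1)
# Two well-separated motions give a larger RES index than two overlapping ones.
far = [rng.normal(0, 1, (50, 8)), rng.normal(10, 1, (50, 8))]
near = [rng.normal(0, 1, (50, 8)), rng.normal(1, 1, (50, 8))]
print(res_index(far) > res_index(near))  # True
```

Note that `np.std` with its default `ddof=0` matches the 1/J normalization of Eq. (5).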
Fig. 4 Bar plot of the average RES index of fifteen EMG features with the six different movements and eight muscles of ten subjects at window size (a) 256 ms (b) 128 ms (c) 64 ms
ACKNOWLEDGMENT This work was supported in part by the Thailand Research Fund (TRF) through the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0110/2550), and in part by the NECTEC-PSU Center of Excellence for Rehabilitation Engineering, Faculty of Engineering, Prince of Songkla University. The authors also gratefully acknowledge the support of Dr. Adrian D.C. Chan of Carleton University, Canada, for providing the EMG data.
REFERENCES 1. Zardoshti-Kermani M, Wheeler BC, Badie K et al. (1995) EMG feature evaluation for movement control of upper extremity prostheses. IEEE T Rehabil Eng 3:324–333 2. Han-Pang H, Chun-Yen C (1999) Development of a myoelectric discrimination system for a multi-degree prosthetic hand, IEEE Proc. vol. 3, Int. Conf. Robotics and Automation, pp 2392–2397
3. Wang G, Wang Z, Chen W et al. (2006) Classification of surface EMG signals using optimal wavelet packet method based on DaviesBouldin criterion. Med Biol Eng Comput 44:865–872 4. Boostani R, Moradi MH (2003) Evaluation of the forearm EMG signal features for the control of a prosthetic hand. Physiol Meas 24:309–319 5. Oskoei MA, Hu H (2006) GA-based Feature Subset Selection for Myoelectric Classification, IEEE Proc. Int. Conf. Robotics and Biomimetics, pp 1465–1470 6. Park SH, Lee SP (1998) EMG pattern recognition based on artificial intelligence techniques. IEEE T Rehabil Eng 6:400–405 7. Huang HP, Liu YH, Wong CS (2003) Automatic EMG feature evaluation for controlling a prosthetic hand using a supervised feature mining method: an intelligent approach, IEEE Proc. Int. Conf. Robotics and Automation, pp 220–225 8. Phinyomark A, Hirunviriya S, Limsakul C et al. (2010) Evaluation of EMG Feature Extraction for Hand Movement Recognition Based on Euclidean Distance and Standard Deviation, IEEE Proc. 7th Int. Conf. Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, pp 856–860 9. Hudgins B, Parker P, Scott RN (1993) A new strategy for multifunction myoelectric control. IEEE T Bio-med Eng 40:82–94 10. Chan ADC, Green GC (2007) Myoelectric control development toolbox, Proc. 30th Conf. Canadian Medical and Biological Engineering Society, M0100 11. Phinyomark A, Limsakul C, Phukpattaranont P (2009) A novel feature extraction for robust EMG pattern recognition. J Comput 1:71–80
Address of the corresponding author: Author: Angkoon Phinyomark Institute: Biomedical Engineering and Assistive Technology Laboratory, Department of Electrical Engineering, Faculty of Engineering, Prince of Songkla University Street: 110/5 Kanjanavanid Road, Kho Hong, Hat Yai City: Songkhla Country: Thailand Email: [email protected] Website: http://saturn.ee.psu.ac.th/~beatlab/
Modeling and Fabrication of Articulate Patellar H.H. Rifa’t1, Y. Nukman1, N.A. Abu Osman2, L.K. Gym2, and M.Z. Harizam1
1 University Malaya/Computer Aided Design and Manufacturing Engineering Department, Kuala Lumpur, Malaysia
2 University Malaya/Biomedical Engineering Department, Kuala Lumpur, Malaysia
Abstract— Little attention has been given to the design of the patellofemoral joint, although it is a highly loaded part of a total knee arthroplasty. Finite element studies have shown that poor design leads to excessive stresses within the polyethylene of the patellar unit during flexion of the knee. In this study, a patellar prosthesis with better conformity to the femoral unit than the other designs tested was developed. The experiments showed that the patellar and femoral models developed can be used to predict the contact area between them. Keywords— patellar, artificial knee cap, articulate patellar, total knee replacement.
Fig. 2 Designed patella-femoral component.[2]
I. INTRODUCTION The patellofemoral joint is the most complex joint in the human body. The articulate patellar is subjected to compressive loads that can reach several times body weight. During squatting, sitting, standing and other activities, the flexion angle of the patellofemoral joint changes with the activity. During flexion, the component undergoes high shear loads (between the patella and femur), which may cause wear of the articulate patellar (osteolysis due to polyethylene and metal debris, leading to bone loss and implant failure). In order to reduce the high shear load on the articulate patellar, the conformity of the femur and articulate patellar should be improved by increasing the contact area between them. Designing the articulate patellar to match normal anatomy closely in shape and kinematics allows it to move smoothly through the groove of the femur.
Fig. 1 Normal anatomy of patella-femoral [2]
Fig. 3 Articulate patellar

Figure 3 shows the design of the articulate patellar, based on the biconcave shape of the normal patella. A metallic femoral plate is used with polyethylene prostheses, while the metal components articulate with the patellar trochlea of the natural femur. In the short term the results have been encouraging, and prosthetic replacement has been preferred to removal of the patella [1]. However, longer-term results have not been as successful as total knee replacement. In total knee replacement there has been a debate about resurfacing of the patella; when the patella is left alone, prosthesis fractures, patellar fractures, and gross wear occur with all the designs produced. The most common failures are associated with metal-backed designs where the thickness of the polyethylene is at a minimum 120-241 [1]. Early failure of the polyethylene, in both all-polyethylene and metal-backed designs, can be reproduced in the laboratory. With increasing conformity and thickness of the pore surfacing polyethylene over the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 755–757, 2011. www.springerlink.com
metal backing in highly stressed areas, the failures were reduced. The patella follows a unique arc of movement during knee flexion, depending on the soft-tissue structures and the bony geometry of the femoral trochlear surface. The geometry of a metallic knee prosthesis enforces a change from the normal tracking characteristics of the patella; the patellar forces and conformity are altered by the change in mediolateral translation and horizontal rotation of the patella [1].
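Contact area in studies of this kind is typically read from pressure-film prints; in the experiments below this is done with a planimeter on enlarged prints, but the same estimate can be made digitally from a scanned print. A hypothetical sketch (the function, threshold and synthetic image are ours, not part of the study):

```python
import numpy as np

def contact_area_mm2(print_image, pixel_size_mm, threshold):
    """Contact area from a scanned pressure-film print: count pixels whose
    stain intensity reaches `threshold` and convert the count to mm^2.
    A hypothetical digital stand-in for a planimeter measurement."""
    return np.count_nonzero(print_image >= threshold) * pixel_size_mm ** 2

# Synthetic print: a 10 mm x 10 mm scan at 0.1 mm/pixel with a stained disc
# of radius 2 mm, whose true area is pi * (2 mm)^2 ~= 12.57 mm^2.
y, x = np.mgrid[0:100, 0:100]
img = (((x - 50) ** 2 + (y - 50) ** 2) <= 20 ** 2).astype(float)
print(contact_area_mm2(img, 0.1, 0.5))  # close to 12.57, up to pixel quantisation
```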
II. EXPERIMENTAL STUDIES A. Manufacture of Patellar and Jigs Articulate patellar was made by compacting bulk polyethylene powder into the die. The material that used to produce articulate patellar is all-polyethylene. In this study, only the dome shape will be manufactured by using CNC lathe machine (Miyano) in the Department of Engineering Design machining laboratory. Firstly, MRI (magnetic resonance imaging) is used to scan human body at each point in 3-axis, MIMICS software was used to simulate the data from MRI to generate the 3D model in .stl format. From the 3D model, the size of the kneecap of the patient can be measured and the suitable size of the articulate patellar during the total knee replacement can be decided. Secondly, by using reverse engineering method, a commonly designed articulate patellar was scanned using a 3D scanner and the 3D data was viewed and converted using IronCAD software. With the aid of (fig. 4), the size of the articulate patellar can be identified by measuring A and B and the size of the patellar (standard) is 32 which A=3mm B=8.5mm.
Machining process using CNC lathe was used to manufacture the articulate patellar (the dome shape). Jigs were used to hold the component at fixed position and at certain angle as well. The jigs also need to be manufactured as shown in fig. 6. B. Patellar Femoral Stress Distribution Stress distribution between the femoral components to the articulate patellar can be defined by using FEA (finite element analysis) as shown in fig. 5. The contact area which extremely at high load can be determined. High stress at certain point also may cause wear to the articulate patellar. FUJI prescale film was used to define the distribution stress between the femur and articulate patellar by showing the mark of concentrated pressure. With both methods we can do the comparison between both methods to support the stress distribution result.
Fig. 5 Finite element analysis between patellar and femur [2]

C. Patellar Femoral Contact Studies and Methodology
Fig. 4 Wire-mesh articulate patellar component [2]

Thirdly, CAM software (Mastercam) is used to generate the G-code (a part program for the CNC).
The femoral unit clamping block, or jig, was produced so that it can flex at set angles. The jig has the same lateral profile as the femoral unit and five holes that lock it at five knee-flexion angles. At the back of the femoral unit there are two pins that allow the femoral jig to clamp the femoral unit. The femoral jig is clamped in a compression test machine (Universal Testing Machine, model LS-28101UTM) [1]. The patellar jig, with a central hole of 14.45 mm diameter and 5.5 mm depth, was clamped to the UTM. The articulate patellar was then seated into the patellar jig hole underneath the femoral unit and allowed to move freely in the hole to 'centralise' the unit about the femoral surface. Two sheets of FUJI 'Prescale' low stress film
were placed between the femoral and the patellar components before the femoral unit was brought down onto the patellar unit.

III. CONCLUSIONS
Fig. 6 Diagram of the patellar prosthesis rig assembly (labelled parts: femoral's jig, femoral, patellar's jig, articulate patellar)

As the femoral was brought down onto the patellar, compressed at 500 N (50.97 kg), and the force maintained for 30 s, the FUJI Prescale film produced a mark showing the contact area between the femoral and the patellar. Using a planimeter on enlarged scaled views of the Prescale print, the contact area can be estimated [1]. The prostheses tested were the standard sizes, and a comparison with other commonly used designs (Fig. 7) was made.

Major factors in designing a new articulate patella:
• First, poor designs give rise to extremely high values of contact stress [1, 3].
• Secondly, the patellar movement during flexion in both lateral and rotary directions for a prosthesis fixed to the bone of the patella; this can cause larger compression forces on the periphery of the polyethylene and increased tension in the lateral tissues around the patella [1, 4].
• Use only polyethylene resins that have no added calcium stearate. A recent study shows that omitting calcium stearate can improve the morphology of the polyethylene by facilitating consolidation of the material. The quality of processed polyethylene can be ensured by ultrasound and microscopic inspection, which detect voids, inclusions, and unconsolidated areas [2].
• The method of processing the articulate patellar: to obtain a high degree of polymer fusion, optimized processing techniques with rigorous controls are used. Zimmer manufactures all articulating surfaces from compression-molded polyethylene. Compression molding allows control of the time, temperature, and pressure required during the melt, isothermal hold, and cooling phases of the process to yield ideal material morphology [2].
ACKNOWLEDGMENT Thanks to the technicians from the Department of Engineering Design and Manufacture, Faculty of Engineering, University of Malaya for the support.
REFERENCES
1. Minns RJ, Wallace IW (1996) A fully mobile patellar prosthesis: design, biomechanics and laboratory evaluation.
2. NexGen Complete Knee Solution at http://www.zimmer.com
3. McNamara JL, Collier JP, Mayor MB, Jensen RE (1994) A comparison of contact pressures in tibial and patellar total knee components before and after service in vivo. Clin Orthop 299:104-113
4. Rubin PJ, Blankevoort L (1994) Influence of soft structures on patellar three-dimensional tracking. Clin Orthop 299:235-243
5. www.lotus-sci.com
Fig. 7 Contact patterns between patellar and femoral for three samples [1]
Author: Muhammad Rifa’t bin Hassan Institute: University Malaya City: Kuala Lumpur Country: Malaysia Email:[email protected]
Pistoning Measurement in Lower Limb Prostheses – A Literature Review

A. Eshraghi1, N.A. Abu Osman1, M.T. Karimi2, H. Gholi Zadeh1, and S. Ali1

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Prosthetics and Orthotics, Faculty of Rehabilitation, Isfahan University of Medical Sciences, Isfahan, Iran
Abstract— Pistoning is considered one of the major indicators of optimal suspension of a lower limb prosthesis. Manufacturers of prosthetic components have continually tried to develop innovative suspension systems to lessen pistoning: some have targeted the skin-liner interface, while others have worked on the interface between the liner and the hard socket. The objective of this study was to review the literature on pistoning in lower limb prostheses in order to evaluate the objective data in terms of the methods used to assess pistoning during static and dynamic conditions with different prosthetic suspension systems and components, and their reported advantages and disadvantages.

Keywords— Prosthesis, pistoning, suspension, liner, socket.
I. INTRODUCTION

Pistoning is considered one of the major indicators of optimal suspension of a lower limb prosthesis. Suspension methods vary depending on the level of the amputation, the residual limb condition, and the activities of the amputee. Sometimes suspension is achieved by a component such as a belt, liner, sleeve, or other device; alternatively, a technique such as suction within the socket, a tight fit around the condyles of the stump, or the socket design itself can provide suspension, and these can be combined if necessary. Prosthetic fit has been reported to correlate with pistoning [1, 2]; thus, measuring pistoning would be helpful in determining the optimal prosthetic fit. A number of methods have been used to measure the pistoning movement, which can occur at different levels: between the hard socket and the liner, inside the soft liner, between the liner and the skin, and between the skin and the bone [1, 3]. Radiological methods including roentgenology [4, 5], cineradiography [2, 6, 7], and roentgen stereophotogrammetric analysis have been used as imaging systems to record the pistoning movement [3]. Ultrasonic methods have also been employed [8, 9]. The objective of this study was to review the literature on pistoning in lower limb prostheses in order to evaluate the objective data in terms of the methods used for
assessing pistoning during static and dynamic conditions with different prosthetic suspension systems and their reported advantages and disadvantages.
II. METHOD

A. Search

The following key words were used in PubMed, ScienceDirect, and Web of Science database searches to find papers in English: prosthetic pistoning, vertical translation, residual limb slippage, prosthetic liner, and prosthetic suspension. Twelve experimental articles were selected for review according to the inclusion criteria.

B. Selection Criteria

Papers in English were selected. We reviewed the study design, the method of subject selection, and the protocol.
III. RESULTS

A. Study Population

The mean age of the participating subjects varied widely, from 15 [10] to 79 [6] years. Five papers were case studies; in the other papers, subjects were selected from a larger group or the selection method was not clear. The number of subjects, excluding the case studies, ranged from 7 [6] to 22 [5]. Both unilateral and bilateral amputees were included, but the subjects were mostly unilateral trans-tibial amputees; only one study was conducted on a trans-femoral subject [8]. The cause of amputation was mostly trauma, but also included diabetes, infection, arteriosclerosis, tumor, burn, Buerger's disease, and congenital limb defects (Table 1). Both male and female subjects were included, but male amputees predominated.

B. Prosthesis Specifications

The only trans-femoral prosthetic socket was a total contact quadrilateral socket with suction suspension, a stabilized knee, and a uniaxial foot [8].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 758–761, 2011. www.springerlink.com
Trans-tibial prostheses were mainly Patellar Tendon Bearing (PTB) and Total Surface Bearing (TSB). Some of the studies did not use any liner with the PTB socket. The following suspension systems were among those evaluated [2, 3]:

• Supracondylar, suprapatellar
• Supracondylar
• Cuff
• Waistband and cuff
• Figure-of-eight suprapatellar strap
• Rubber sleeve
• Articulated supracondylar wedge
• Distal pin suspension

A neoprene sleeve and a shuttle lock were also evaluated [11].
Table 1 Study population

Study (n=12)       Subject's age (years)   Cause of amputation (%)
Board (2001)       32-64                   Trauma
Convery (2000)     39                      Industrial accident
Commean (1997)     56                      Unknown
Grevsten (1975)    28-66                   Unknown
Lilja (1993)       61-79                   Diabetes mellitus (71%), arteriosclerosis (29%)
Narita (1997)      19-74                   Traumatic injuries (66%), tumors (22%), burns (12%)
Newton (1988)      Unknown                 Unknown
Sanders (2006)     60                      Traumatic injury
Soderberg (2003)   69                      Trauma
Tanner (2001)      37                      Trauma
Wirta (1990)       23-76                   Trauma, infection, diabetes, arterial insufficiency, Buerger's disease, congenital correction, chondrosarcoma
Yigiter (2002)     15-37                   Traumatic injuries
C. Study Design

Only two studies were prospective and used a follow-up design [5, 10]. In two studies [7, 10] the PTB socket was compared with the TSB socket; the others studied only the effect of the PTB socket with different suspension systems, such as the supracondylar strap, distal pin, and vacuum.

D. Data Presentation

Apart from the case studies, four studies reported data for each patient individually; Yigiter (2002) and Wirta (1990) presented only group data (Table 2).

E. Measurement Methods

Some studies used imaging methods to assess the position of the residual limb bony structures relative to the socket wall. Most methods have been applied to study pistoning under static conditions, and only a few during walking. The radiological methods were roentgenology [2-3], cineradiography [1, 4-5], fluoroscopy [6], and roentgen stereophotogrammetric analysis [7]. The ultrasonic methods used transducers attached to the socket wall, with the video recordings then assessed [8, 9].

F. Static Pistoning

Commean et al. used a harness to apply the force to the prosthesis through the shoulders and employed spiral computed tomography (CT) [12]. In a later study, Madsen et al. designed a loading device for the spiral CT method that allowed large loads to be applied [5, 13]; the applied load was determined by the subject's weight.
Some studies attempted to simulate gait. In one study, pistoning of the tibial end was examined in four positions representing four phases of the gait cycle: heel contact, mid-stance, push-off, and finally swing phase. To simulate heel contact and push-off, a board tilted 15 degrees was used [3, 6], and to simulate the swing phase the prosthetic limb was positioned at 45 degrees relative to the floor. In another study, the swing phase of gait was simulated by applying a 5 kg mass to the foot of the prosthesis, and an X-ray was taken with the prosthesis suspended at a knee flexion angle of 30° [7]. To image the sagittal and coronal planes of the residual femur in weight-bearing and non-weight-bearing conditions in the prosthesis, two 5 MHz linear array transducers were used to provide a good image of the femur. To simulate different phases of the gait cycle statically, the amputee adopted a typical stride and, while weight bearing, "pulled the prosthetic heel backwards" to simulate early prosthetic stance or "pulled the toe of the prosthetic foot forwards" to simulate late prosthetic stance. From a typical standing base and during weight bearing, the amputee was asked to "push the prosthetic foot laterally" to simulate abduction or "pull the prosthetic foot medially" to simulate adduction [8].
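The 15-degree tilted-board setup above implies a simple resolution of the vertical load into components normal and tangential to the board surface. A minimal sketch, assuming an illustrative 800 N body-weight load (not a value taken from any of the reviewed studies):

```python
import math

# Sketch: resolving a vertical load into components normal and
# tangential to a board tilted at 15 degrees, as used in the cited
# studies to simulate heel contact and push-off. The 800 N load is
# an illustrative body-weight value, not data from any study.

def load_components(vertical_load_n: float, tilt_deg: float):
    theta = math.radians(tilt_deg)
    normal = vertical_load_n * math.cos(theta)      # perpendicular to the board
    tangential = vertical_load_n * math.sin(theta)  # along the board surface
    return normal, tangential

normal, shear = load_components(800.0, 15.0)
print(f"normal: {normal:.1f} N, tangential: {shear:.1f} N")
```

The tangential (shear) component is what distinguishes the tilted-board condition from flat standing and drives the anteroposterior displacement these studies tried to reproduce.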
Yigiter et al. assessed the sufficiency of suspension by marking the anterosuperior border of the socket on the stump sock in the standing position and at the beginning of the swing phase; the difference between stance and swing phase was recorded in centimeters [10]. In an X-ray study of pistoning, loads of 44.5 and 88.9 N were used to simulate the swing phase during walking and running, respectively. The X-rays were taken while the subject was lying supine, and the data from the loaded and unloaded positions were compared. Tanner et al. used radiography to evaluate the vertical translation of the tibia during full weight bearing, partial weight bearing, and non-weight bearing [11].
Table 2 Study description

Study (n=12)       Sample size   Study design   Data presentation per patient
Board (2001)       11            CSS            Yes
Convery (2000)     1             CS             -
Commean (1997)     1             CS             -
Grevsten (1975)    22            CSS/P          Yes
Lilja (1993)       7             CSS            Yes
Narita (1997)      9             CSS            Yes
Newton (1988)      8             CSS            No
Sanders (2006)     1             CS             -
Soderberg (2003)   1             CS             -
Tanner (2001)      1             CS             -
Wirta (1990)       20            CSS            No
Yigiter (2002)     20            CSS/P          No

CS = case study; P = prospective (follow-up); CSS = case series

G. Dynamic Pistoning

Only a few studies were found that focused on pistoning during gait. Sanders et al. used an easy-to-use, non-radiological tool to measure the position of the distal stump surface relative to the distal socket during walking along an 18.5 m walkway. It was a photoelectric sensor, i.e. the light source and the light sensor were housed in a common enclosure, and a holder containing the sensor was mounted on the inside distal socket wall [1]. In another study, a walking machine was used for walking with the prosthesis, and the measurements were subsequently made by cineradiography in the foot contact phase and the swing phase [7]. In an ultrasound study on a trans-femoral prosthesis (the first reported study of the motion of the residual femur within the socket during gait), the motion of the "apex" of the circular arc of the femur within each transverse sensing plane was video recorded during gait [8]. Wirta et al., in a study of the effect of below-knee suspension systems, evaluated the axial movement of the distal surface of the residual limb relative to the socket during walking, using a potentiometer as an axial movement detector placed at the bottom of the socket [2].

IV. DISCUSSION
Radiological methods have been the most popular for measuring pistoning. However, some of them are not generally available to clinicians or prosthetists because of the cost of the equipment and the complex, time-consuming data collection; there is also the concern of exposing the patient to X-ray radiation. CT scanners have advantages such as wide availability, high spatial resolution, and 3-D information about the prosthesis and the internal tissues of the stump, but they require the subjects to be positioned supine [13]. The photoelectric sensor was reported to have some limitations because it is not wireless and a cable connects the sensor to the data acquisition system; this is said to be surmountable with radiofrequency telemetry systems [1]. Regarding gait simulation, one study reported that it could simulate the dynamic component of the floor reaction force, regarding the model as an approximation of slow gait that moderately contributed to the floor reaction forces [6]. Diagnostic ultrasound was said to have no known side effects [8]; however, in the ultrasound study on a trans-femoral prosthesis, the sample rate of the ultrasound data was restricted to 25 Hz because a single-head video recorder was used to record the ultrasound image. In the roentgen stereophotogrammetry study, multiple gait-cycle assessments could not be performed for ethical reasons related to radiation exposure [3]; it was also reported that the radiographic apparatus and the calibration cage restricted the system.
V. CONCLUSIONS In this paper, our focus was on the methods used for measuring pistoning in lower limb amputees. Most of the studies measured the pistoning by simulating the gait
through applying static loads. Moreover, these studies employed radiological methods, which expose the amputee to harmful X-rays. Since reducing pistoning contributes significantly to an optimal prosthetic fit, further research with larger sample sizes seems necessary to develop and evaluate easy, safe, and widely available methods of pistoning measurement for prosthetists. Future research should also examine pistoning during activities of daily living.
ACKNOWLEDGMENT This work was done under the financial support of the Ministry of Higher Education through Fundamental Research Grant Scheme.
REFERENCES
1. Sanders JE, Karchin A, Fergason JR et al. (2006) A noncontact sensor for measurement of distal residual-limb position during walking. JRRD 43(4):509
2. Wirta R, Golbranson FL, Mason R et al. (1990) Analysis of below-knee suspension systems: effect on gait. J Rehabil Res Dev 27(4):385-396
3. Söderberg B, Ryd L, Persson BM (2003) Roentgen stereophotogrammetric analysis of motion between the bone and the socket in a trans-tibial amputation prosthesis: a case study. JPO 15(3):95
4. Erikson U, Lemperg R (1969) Roentgenological study of movements of the amputation stump within the prosthesis socket in below-knee amputees fitted with a PTB prosthesis. Acta Orthopaedica 40(4):520-526
5. Grevsten S, Erikson U (1975) A roentgenological study of the stump-socket contact and skeletal displacement in the PTB-suction prosthesis. Upsala Journal of Medical Sciences 80(1):49-57
6. Lilja M, Johansson T, Öberg T (1993) Movement of the tibial end in a PTB prosthesis socket: a sagittal X-ray study of the PTB prosthesis. Prosthet Orthot Int 17(1):21-26
7. Narita H, Yokogushi K, Shi S et al. (1997) Suspension effect and dynamic evaluation of the total surface bearing (TSB) trans-tibial prosthesis: a comparison with the patellar tendon bearing (PTB) trans-tibial prosthesis. Prosthet Orthot Int 21(3):175-178
8. Convery P, Murray KD (2000) Ultrasound study of the motion of the residual femur within a trans-femoral socket during gait. Prosthet Orthot Int 24(3):226-232
9. Murray KD, Convery P (2000) The calibration of ultrasound transducers used to monitor motion of the residual femur within a trans-femoral socket during gait. Prosthet Orthot Int 24(1):55-62
10. Yigiter K, Sener G, Bayar K (2002) Comparison of the effects of patellar tendon bearing and total surface bearing sockets on prosthetic fitting and rehabilitation. Prosthet Orthot Int 26(3):206-212
11. Tanner J, Berke G (2001) Radiographic comparison of vertical tibial translation using two types of suspensions on a transtibial prosthesis: a case study. JPO 13(1):14
12. Commean PK, Smith K, Vannier M (1997) Lower extremity residual limb slippage within the prosthesis. Arch Phys Med Rehab 78(5):476
13. Madsen M, Haller J, Commean PK et al. (2000) A device for applying static loads to prosthetic limbs of transtibial amputees during spiral CT examination. J Rehabil Res Dev 37(4):383-387
Author: Arezoo Eshraghi Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya Street: Jalan Lembah Pantai City: Kuala Lumpur Country: Malaysia Email: [email protected]
IFMBE Proceedings Vol. 35
Prosthetics and Orthotics Services in the Rehabilitation Clinics of University Malaya Medical Centre

S. Ali1, N.A. Abu Osman1, H. Gholizadeh1, A. Eshraghi1, P.M. Verdan2, and L. Abdul Latif2

1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Rehabilitation Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— Quality prosthetics and orthotics (P&O) services have a significant role in treating disabled patients. Every year a large number of patients are referred to the rehabilitation clinics of University Malaya Medical Centre for prosthetics and orthotics services. In 2010, a total of 5293 patients visited seven rehabilitation clinics, 241 amputations were performed, and 572 orthoses and 53 prostheses were delivered to patients. The system currently available for orthotics and prosthetics treatment is still in the developing phase. The objective of the study was to outline the prosthetic and orthotic services given to patients at UMMC.

Keywords— Prosthetics, Orthotics, UMMC, Amputation, Rehabilitation.

I. INTRODUCTION

In the recent past, prosthetics and orthotics practice has evolved tremendously and is increasingly recognized as an ever-growing field of modern science [1]. After assessing the patient thoroughly, the next job of the orthotist is to design and fabricate an appropriate orthosis to support the weakened body structures; prosthetists assess the patient and fabricate the prosthesis that replaces the missing body part [2]. A reliable and valid self-report instrument is needed to help evaluate prosthetics and orthotics outcomes [3]. In prosthetics and orthotics practice, success can be achieved through good technical skills and through awareness of the psychological issues in rehabilitation [4]. University Malaya Medical Centre (UMMC) is one of the well-known research hospitals in Malaysia and the only hospital in Malaysia with two certified prosthetists/orthotists. Rehabilitation is achieved through prosthetics and orthotics as a team approach: ideally the patient is treated by a smoothly functioning team whose members meet on a regular basis, depending on the volume of patients. At UMMC, seven different specialties of the rehabilitation team share the daunting task of achieving better functional and psychological outcomes for patients with disabilities. The prosthetic and orthotic service delivery system needs a scientific standardization scale and good collaboration among different centers to achieve a higher standard of rehabilitation care [5].

II. METHOD

Data were collected from the orthopedic department, the Rehabilitation Medicine department, and the prosthetics and orthotics workshop of University Malaya Medical Centre. All the data were collected from the patients' admission registration books in the orthopedic ward and the rehabilitation department.

III. RESULTS

A detailed study of patients visiting the various rehabilitation clinics at UMMC was carried out for the year 2010. The rehabilitation clinics considered for the present study were neurosurgical rehabilitation, neuromedical rehabilitation, amputee rehabilitation, spinal rehabilitation, paediatric rehabilitation, scoliosis rehabilitation, and spasticity rehabilitation. Fig. 1 shows the number of patients who visited the rehabilitation clinics throughout the year: the scoliosis rehabilitation clinic received the most visits, while the spasticity rehabilitation clinic was the least visited. The numbers of transfemoral and transtibial amputations increased from 2009, and partial foot amputation occurred more often than the other amputation levels in both 2009 and 2010 (see Fig. 2). The three main causes noticed for amputations were (1) diabetes, (2) trauma, and (3) cancer.

A. Prosthetics

Thirty-five prostheses were delivered to patients in the year 2010. 5-10 percent of amputees received pre-operative treatment, including psychological preparation and counselling about future prosthetic life, and less than 5 percent received an interim prosthesis immediately after amputation. 80-90 percent
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 762–764, 2011. www.springerlink.com
of patients received post-operative treatment; 5-10 percent did not receive any post-operative treatment because of other medical problems. Of those referred for prosthetics services, 75-80 percent received prostheses with mobility Grade 1-Grade 2 and 15-20 percent with mobility Grade 2-Grade 3 (based on the Otto Bock mobility system, which defines four mobility grades: 1. the indoor walker; 2. the restricted outdoor walker; 3. the unrestricted outdoor walker; 4. the unrestricted outdoor walker with especially rigorous demands). 70-80 percent had gait training after the permanent prostheses were delivered, but 10-20 percent were not able to come for gait training because of personal problems. The number of amputations in 2010 was higher than in 2009 (see Fig. 2), but the number of prostheses delivered to patients was lower than in 2009. The numbers of foot, transtibial, and transfemoral amputations show a growing trend over the last decade. In amputee rehabilitation, the transtibial prosthesis was the most common prosthesis. Patellar tendon bearing (PTB) and patellar tendon supracondylar (PTS) sockets were used for transtibial users; only the quadrilateral socket was used for transfemoral users. The construction design was endoskeletal. Partial foot amputations outnumbered the other levels of amputation, but the treatment approach was below the standard for partial foot users.

Table 1 Funds for P&O services
B. Orthotics

A number of orthotics patients visit UMMC every day. 80-90 percent of the lower limb orthoses were static and 5 percent were dynamic. 80-90 percent of lower limb orthotics patients did not have proper gait training with their prescribed orthoses, while 90-95 percent of patients with foot problems received their orthoses. Insoles and AFOs were the most commonly prescribed lower limb orthoses. The quality of the orthopedic shoes was not acceptable to most of the users.

C. Funds for P&O Services

Prosthetic and orthotic users received funds from different sources (see Table 1).
IV. DISCUSSION AND CONCLUSION

In this study, we gave an overview of the prosthetics and orthotics services at UMMC. We found that orthotic users outnumbered prosthetic users, but there was still no proper system available for orthotics patients. Although there is a rehabilitation team at UMMC, most of the orthoses were prescribed without consultation with a physiotherapist or prosthetist/orthotist, except at the foot clinic. No member of the rehabilitation team followed a proper check-out procedure and follow-up for orthotic users, and the quality of the orthoses was not comparable to international standards. Compared with the orthotic services, a proper system was available for prosthetics patients, but it still did not meet international standards. Several elements restrict the system from fulfilling the international standard:
1. A small number of prosthetists/orthotists and rehabilitation physicians.
2. Lack of funding from government and private sources.
3. Absence of an upgrading system, which hinders improvement of the quality of prosthetic and orthotic services.
4. Lack of awareness among both patients and medical practitioners regarding prosthetics and orthotics treatment.
Fig. 1 Number of patients per month in each rehabilitation clinic in 2010
Fig. 2 Number of amputations in 2009 and 2010
REFERENCES
1. Fuhrer MJ (1995) An agenda for medical rehabilitation outcomes research. Am J Phys Med Rehabil 74:243-248
2. American Academy of Prosthetists and Orthotists (1990) The orthotics and prosthetics profession. J Prosthet Orthot 2:101-111
3. Heinemann AW, Bode RK, O'Reilly C (2003) Development and measurement properties of the Orthotics and Prosthetics Users' Survey (OPUS): a comprehensive set of clinical outcome instruments. Prosthet Orthot Int 27:191-206
4. Desmond D, MacLachlan M (2002) Psychosocial issues in the field of prosthetics and orthotics. J Prosthet Orthot 14:19-22
5. Heinemann AW, Gershon R et al. (2006) Development and application of the Orthotics and Prosthetics User Survey: applications and opportunities for health care quality improvement. Prosthet Orthot Int 18:80-85
Author: Sadeeq Ali
Institute: Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Street: Jalan Elmu, off Jalan Universiti
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Prosthetic Foot Design: The Significance of the Normalized Ground Reaction Force

A.Y. Bani Hashim1, N.A. Abu Osman2, W.A.B. Wan Abas2, and L. Abdul Latif3

1 Department of Robotics & Automation, Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, Durian Tunggal, 76100 Melaka, Malaysia
2 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
3 Department of Rehabilitation Medicine, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— This study maps the bones and joints of the foot and relates them to the vertical ground reaction forces during gait. Their localities are charted in the form of a graph, which is then used to reason about the design of a prosthetic foot. A mathematical model is developed based on the relationships among the anatomy of the bones and joints, the vertical ground reaction forces at certain points, and the sequence in which these points experience the vertical ground reaction forces. Using the model, a custom-designed prosthetic foot is built from the scanned foot image of a size-5 male subject. It is concluded that proper mapping and modeling of the foot structure can serve as a reference in the design and development of prosthetic feet.

Keywords— Prosthetic foot, graph, ground reaction force.

I. INTRODUCTION

In Malaysia, there is a need to produce lower limb prostheses that are affordable. At present, good prosthetic feet are mostly imported products. A typical foot prosthesis costs around RM8,000, comparable to the price of a motorcycle; indeed, a number of young male patients lost their lower limbs in motorcycle accidents, so for them it is like losing both a motorcycle and a limb, and as a replacement they must purchase a prosthetic foot. This work tries to replicate the biological foot system and develop a model to be used as a reference in the design and development of prosthetic feet. It begins by investigating the kinematic structure of the human foot: why the bones and joints are arranged as they are, why they differ from those of other primates, and how they can be imitated to develop a foot prosthesis. Emulating biological systems, which seem to have the fewest flaws, poses a challenge. At present there are no studies that explain the functions of the point of contact (POC), the point that experiences the vertical ground reaction forces. There are, however, studies on foot biomechanics such as [1], and on walking gait analysis [2]-[9]. Computational intelligence is the current trend in gait analysis [10]; its advantage is that it offers the ability to investigate non-linear data relationships. The scope of the design follows the notion of energy storage and retrieval capabilities, achievable by applying a flexible keel; the complete prosthetic foot design has the foot-ankle module attached to the flexible keel. Thus, the objectives of this work are to identify the bones, joints, and ground reaction force localities that act on specific bones, to devise a way to visualize them numerically, to develop methods for determining a foot identity, and to use the foot identity to build a custom prosthetic foot.

II. METHOD

The first step is to study the anatomy of the foot. Then a graph is created that charts the bone and joint localities: a vertex symbolizes a bone, whereas an edge symbolizes a joint. The graph eventually characterizes the foot, depicting bone adjacencies, bone-joint incidences, and the routes of paths from the root. This is further elaborated in Fig. 1.
Fig. 1 (a) The foot radiograph; (b) the proposed graph; (c) the model skeleton. Six vertices pinpoint their assigned bones in Fig. 1(a) and Fig. 1(c) as examples. Graphs can aid the process of mechanical structure design [12]
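The graph idea above (vertices = bones, edges = joints, talus as root) can be sketched with a plain adjacency list and a breadth-first search that recovers the route of a path from the root to any bone. The bone subset below is a simplified illustration, not the paper's full 27-bone graph:

```python
from collections import deque

# Sketch of the bone-joint graph: keys are bones (vertices), values
# are the bones joined to them (edges). This is an illustrative
# subset of the foot skeleton, not the complete graph of the paper.
FOOT_GRAPH = {
    "talus":        ["calcaneus", "navicular"],
    "calcaneus":    ["talus", "cuboid"],
    "navicular":    ["talus", "cuneiform_1"],
    "cuboid":       ["calcaneus", "metatarsal_5"],
    "cuneiform_1":  ["navicular", "metatarsal_1"],
    "metatarsal_1": ["cuneiform_1", "phalanx_1"],
    "metatarsal_5": ["cuboid"],
    "phalanx_1":    ["metatarsal_1"],
}

def path_from_root(graph, root, target):
    """Breadth-first search for the route of the path from the root bone."""
    queue = deque([[root]])
    seen = {root}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(path_from_root(FOOT_GRAPH, "talus", "phalanx_1"))
```

Such root-to-bone paths make the "routes of paths from the root" mentioned above explicit and can be enumerated for every bone to characterize the foot numerically.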
A. Mathematical Model
By inspection, the human foot has twenty-seven bones. It has five digits known as the phalanges. They control the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 765–768, 2011. www.springerlink.com
drop-off, the experience at the end of gait. The calcaneus, on the other hand, works upon gait initiation; it is the largest bone and bears the body weight upon heel strike. The navicular and cuboid bear distributed body weight upon foot flat. The last phalanx, or the toe, bears the body weight at the end of gait. In Fig. 1, the circles are the vertices that represent bones and the connecting lines are the edges that represent joints. The concentric circle is the root, which is the talus. Let there be a number of sequences from POC-0 to POC-13 during gait. The sequence Q follows the POCs. Every sequence has at least one POC and a GRF, f_k, except for Q0 and Q13. On foot flat, there are twelve POCs. This is shown in Fig. 2.
$$
H = \begin{bmatrix}
A & A & A & A & A & A & A & A & A & A & A & A & A & A\\
A & V & L & L & M & M & M & S & S & S & S & S & S & A\\
A & A & V & L & L & M & M & M & S & S & S & S & S & A\\
A & A & A & V & L & L & M & M & M & S & S & S & S & A\\
A & A & A & A & V & L & L & M & M & M & S & S & S & A\\
A & A & A & A & A & V & L & L & M & M & M & S & S & A\\
A & A & A & A & A & A & V & L & L & M & M & M & S & A\\
A & A & A & A & A & A & A & V & L & L & M & M & M & A\\
A & A & A & A & A & A & A & A & V & L & L & M & M & A\\
A & A & A & A & A & A & A & A & A & V & L & L & M & A\\
A & A & A & A & A & A & A & A & A & A & V & L & L & A\\
A & A & A & A & A & A & A & A & A & A & A & V & L & A\\
A & A & A & A & A & A & A & A & A & A & A & A & V & A\\
A & A & A & A & A & A & A & A & A & A & A & A & A & A
\end{bmatrix} \quad (1)
$$
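Assuming the banded pattern visible in Eq. (1) (each sequence shifts the V-L-L-M-M-M-S... band one POC to the right, with POC-0, POC-13, Q0 and Q13 always absent), the matrix can be generated programmatically:

```python
# Sketch: build the 14x14 sequence-by-POC degree matrix H of Eq. (1),
# where row q is sequence Q_q and column p is POC-p.
# 'V' very large, 'L' large, 'M' medium, 'S' small, 'A' absent.
BAND = list("VLLMMMSSSSSS")          # degrees trailing the contact POC

def build_H(n=14):
    H = [["A"] * n for _ in range(n)]
    for q in range(1, n - 1):        # Q0 and Q13 carry no GRF
        for i, deg in enumerate(BAND):
            col = q + i
            if col <= n - 2:         # POC-13 never contacts the ground
                H[q][col] = deg
    return H

H = build_H()
print(" ".join(H[1]))  # Q1: heel strike, very large GRF at POC-1
# -> A V L L M M M S S S S S S A
```

Mapping the letters to ordinal weights and summing per column is one way to produce the kind of 'M'-curve used in Fig. 3 to select the active POCs.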
Fig. 2 Above: the points of contact shown on the footprint. Below: the ambulatory path
Fig. 3 The profile for ideal ambulation derived from Eq. (1) [13]
It is straightforward that POC-0 in Q0 does not experience any reaction prior to touching the ground. In this sequence, none of the POCs has contact with the ground. This is also true for the last sequence. However, upon heel strike POC-1 bears the maximum body load. Equation (1) summarizes this process: the rows are the sequences and the columns are the POCs. The degree 'V' means very large, 'L' large, 'M' medium, 'S' small, and 'A' absent. In Q0 there is no GRF, and Q1 has a very large GRF upon heel strike. For ideal gait, the computed 'M'-curve is shown in Fig. 3, which is the basis for selecting the active POCs. These POCs lie within the curve's peaks and valleys.
III. RESULTS
A 25-year-old male subject volunteered to provide the information on his size-5 right foot. The locations of the POCs are estimated and the ambulatory path is plotted on the scanned image shown in Fig. 4. This is then compared to the selected POCs defined in Fig. 3.
Fig. 4 The foot identity of a male subject with a size-5 foot
Prosthetic Foot Design: The Significance of the Normalized Ground Reaction Force
In Fig. 5, the function Γ0(POC, DA) is the curve derived from Fig. 4. It is based on the distances of the POCs from the origin, POC-1. The curve below it, Γ1(POC, 0.5DA), is the estimated function at half of the actual curve. The last curve, Γ2(POC, 0.25DA), is taken as a quarter of the original, with selected points POC-3e, POC-7e, and POC-11e. Fig. 6 shows the relationship between the estimated curve and the actual construction of the foot-ankle mechanism.
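As a minimal sketch of the scaling behind Γ1 and Γ2, the measured distance profile is simply multiplied by 0.5 and 0.25. The distance values below are made-up placeholders, not the subject's measurements:

```python
# Illustrative POC distance profile (distances from the origin, POC-1).
# These numbers are placeholders, not data from the paper.
distances = {f"POC-{k}": d for k, d in enumerate(
    [0.0, 0.0, 3.1, 5.6, 7.9, 9.8, 11.2, 12.5, 13.1, 13.4, 13.0, 11.8, 9.5, 0.0])}

def gamma(profile, scale):
    """Return the profile scaled by `scale` (0.5 for Gamma1, 0.25 for Gamma2)."""
    return {poc: scale * d for poc, d in profile.items()}

gamma1 = gamma(distances, 0.5)    # estimated curve at half the actual
gamma2 = gamma(distances, 0.25)   # quarter-scale curve with selected points
print(gamma2["POC-7"])
```

The selected points (e.g. POC-3e, POC-7e, POC-11e) are then read off the quarter-scale curve.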
information to build the prosthetic foot by following the proposed model. Clinicians can prescribe prosthetic feet following this process, which complements their intuition and experience. In fact, it is straightforward for engineers to decide on construction and manufacturing matters. Patients, on the other hand, may purchase the product online and perform self-installation, if necessary. Therefore, proper mapping and modeling of bones and joints can be used as a reference in the design and development of prosthetic feet.
ACKNOWLEDGMENT
The authors thank the University of Malaya's Prosthetic and Orthotic Laboratory technician Razalee Rahimi for assisting in the experiments. This work was supported in part by the Malaysian Ministry of Higher Education under Grant FRGS/2008/FKP-0069.
REFERENCES
Fig. 5 The estimated functions for active POCs, measured as distances from the origin, for the foot identity of a male subject with a size-5 foot
Fig. 6 The relationship between the estimated curve and the actual construction of the foot-ankle module.
IV. DISCUSSION AND CONCLUSION
In this work, a novel method to develop a prosthetic foot from a mere scanned image is proposed. It is derived from the unconventional structural foot modeling and promises a fast and low-cost option. This is possible because the process simply requires a patient who has undergone transtibial amputation to submit a 1:1-scale scanned foot image to the clinician. The clinician then communicates with engineers. The engineers will then use the relevant
[1] W.J. Wang and R.H. Crompton, "Analysis of the human foot during bipedal standing with implications for the evolution of the foot," J. of Biomechanics, vol. 37, pp. 1831-1836, 2004.
[2] H. Elftman, "Dynamic structure of the human foot," Artif. Limbs, vol. 13, no. 1, pp. 49-58, 1969.
[3] J.K. Gronley and J. Perry, "Gait analysis techniques," Physical Therapy, vol. 64, no. 12, pp. 1831-1838, 1984.
[4] D.E. Krebs, J.E. Edelstein, S. Fishman, "Reliability of observational kinematic gait analysis," Physical Therapy, vol. 65, no. 7, pp. 1027-1033, 1985.
[5] R.K. Laughman, L.J. Askew, R.R. Bleimeye, E.Y. Chao, "Objective clinical evaluation of function gait analysis," Physical Therapy, vol. 64, no. 12, pp. 1839-1845, 1984.
[6] J.L. McGinley, P.A. Goldie, K.M. Greenwood, S.J. Olney, "Accuracy and reliability of observational gait analysis data: Judgments of push-off in gait after stroke," Physical Therapy, vol. 83, no. 2, pp. 146-160, 2003.
[7] M.J. Peterson, J. Perry, J. Montgomery, "Walking patterns of healthy subjects wearing rocker shoes," Physical Therapy, vol. 65, no. 10, pp. 1483-1489, 1985.
[8] P-F. Su, S.A. Gard, R.D. Lipschutz, T.A. Kuiken, "Gait characteristics of persons with bilateral transtibial amputations," J. Rehabilitation Research & Development, vol. 44, no. 2, pp. 491-501, 2007.
[9] D. Winter, "Kinematic and kinetic patterns in human gait," Human Movement Science, vol. 3, pp. 51-76, 1984.
[10] D. Lai, R. Begg, M. Palaniswani, "Computational intelligence in gait research: A perspective on current applications and future research," IEEE Trans. on Inf. Tech. in Biomedicine, vol. 13, no. 5, pp. 687-702, September 2009.
[11] C. Soligo and A.E. Müller, "Nails and claws in primate evolution," Journal of Human Evolution, vol. 36, no. 1, p. 97, 1999.
[12] L-W. Tsai, Mechanism Design: Enumeration of Kinematic Structures According to Function. CRC Press, Boca Raton, 2001.
[13] A.Y. Bani Hashim, N.A. Abu Osman, W.A.B. Wan Abas, L. Abdul Latif, "Evaluation of a bio-mechanism by graphed static equilibrium forces," Proc. World Academy of Science, Engineering and Technology, 2009, vol. 60, pp. 693-696.
Author 1: Ahmad Yusairi Bani Hashim
Institute: Department of Robotics & Automation, Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka
City: Durian Tunggal
Country: Malaysia
Email: [email protected]

Author 2: Noor Azuan Abu Osman
Institute: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]

Author 3: Wan Abu Bakar Wan Abas
Institute: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]

Author 4: Lydia Abdul Latif
Institute: Department of Rehabilitation Medicine, Faculty of Medicine, Universiti Malaya
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
A Survey on Welfare Equipment Using Information Technologies in Korea and Japan
H.S. Seong1, N.H. Kim1, Y.A. Yang2, E.J. Chung2, S.H. Park3, and J.M. Cho1
1 Dept. Biomedical Engineering, Inje University, Gimhae, Republic of Korea
2 Dept., Inje University, Gimhae, Republic of Korea
Abstract— The average life span in Korea has increased since industrialization thanks to medical development and improvements in diet and the living environment. The resulting growth in the number of the aged, and the entrance into an aged society, have brought many social and environmental changes. Japan, which faced the fastest aging, began marketing its senior-friendly industry in earnest in the 1990s and has concentrated on it as a future growth industry for the next generation. In this sense, activating the senior-friendly industry can address the problems of the aged and, at the same time, wider social problems. For this reason, silver welfare is an important mission that can improve the level of welfare of the whole nation, not just the currently aged but also the potentially aged. Therefore, taking the senior-friendly industry as an important criterion of aged-welfare services, this study examines the formation and tendencies of the senior-friendly industry of Japan, the fastest-aged society, and draws implications for the development of the senior-friendly industry that Korea should pursue. By comparing the current conditions and realities of Korean senior-friendly industry promotion with those of Japan, this study considers the prospects and development plans of the Korean senior-friendly industry.
Keywords— Senior-friendly industry, Welfare equipment, Fusing IT technology, Welfare of Korea and Japan, Graying.
I. INTRODUCTION
Thanks to medical development and improvements in eating habits and the living environment, the average life expectancy of the country's population has rapidly increased since industrialization. Because of this extension in life expectancy, and also due to the decreasing birth rate, the graying rate of Korea today is the highest in the world. The ratio of citizens aged 65 or more to the entire population was only 3.1% in 1970, 7.3% in 2000 and 8.7% in 2004 [1]. It is expected that in 2018 this ratio will reach 14.3%. As seen in Table 1, Korea has the relatively shortest number of years to go from an ageing society to an aged society and in turn to a super-old (or post-aged) society, and thus needs to quickly come up with countermeasures. Korea is the world's best in IT technology and infrastructure and thus has an advantageous foundation for promoting an IT-fused senior-friendly industry. Because of the aging
Table 1 Speed of graying

Country   Ageing society   Aged society   Super-old society   Years to aged   Years to super-old
Korea     2000             2018           2026                18              8
Japan     1970             1994           2006                24              12
Germany   1932             1972           2009                40              37
US        1942             2015           2036                73              21
France    1864             1979           2018                115             39
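The "time required" columns of Table 1 are simple differences of the arrival years, which can be checked directly:

```python
# Arrival years (ageing, aged, super-old) from Table 1.
arrival = {
    "Korea": (2000, 2018, 2026),
    "Japan": (1970, 1994, 2006),
    "Germany": (1932, 1972, 2009),
    "US": (1942, 2015, 2036),
    "France": (1864, 1979, 2018),
}

for country, (ageing, aged, super_old) in arrival.items():
    # e.g. Korea: 2018 - 2000 = 18 years to aged, 2026 - 2018 = 8 more
    print(f"{country}: {aged - ageing} years to aged society, "
          f"{super_old - aged} more to super-old society")
```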
tendency of the world, the silver industry is a promising business with unlimited potential; it needs political promotion through focused support in order to dominate overseas markets and to overcome the small size of the domestic market. These aspects are also meaningful for satisfying the expectations of the baby-boom generation, the newly arising demand class. The baby-boom generation has not only accumulated above-average income but is also aided by such policies as the income maintenance policy, and therefore has considerable purchasing power. In addition, this generation experienced the pro-democracy movements of the 1980s and is known for its strong social awareness. Also, due to European influences on values, there exists a sense of individuality, and thus consumption for the self is on the increase. With such purchasing power, the baby-boom generation is a group whose demand for quality, diversification and individualization must be satisfied. Such a process requires phased preparation and the supply of high-quality, consumer-specific products in order to develop into the country's new dynamic industry. Developed countries that have already become aged societies are overcoming graying-related social problems through the use of IT technology. Examples are the European "AAL Project," the British "Telecare," the Japanese "u-Japan" and the Singaporean "iN2015." Of these countries, Japan has experienced the fastest graying in the world and since the 1990s has marketed the silver industry on a full scale, fusing it with IT to develop it as the next-generation future growth industry. In comparison, Korea selected only 19 strategic items in 2005 in consideration of the international competition and
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 769–772, 2011. www.springerlink.com
market growth rate of the aged society's developing power industry. Thus there is great need for more measures for industrial invigoration and policy improvement for the senior-friendly industry. This research compares the country's present state and reality of senior-friendly industry promotion with that of Japan, the country of the greatest longevity, and suggests prospects and development plans for IT technology in connection with the senior-friendly industry.
II. SUBJECT
The senior-friendly industry is defined as the industry that pursues profit by providing the health, convenience and safety that meet the physical and mental demands deriving from the biological aging and social/economic decline of abilities of senior citizens. Research on the senior-friendly industry in Korea started in actuality in the 1990s. As the country entered the 21st century, the industry became more diversified, was recognized as a future developing power industry, and research on its connection with IT technology became active. In comparison to the studies of the 1990s, the research of the 2000s suggests comprehensive plans and policy volition as the primary values that the senior-friendly industry should aim for. Along with this, it also takes into consideration the changes in industrial structure and consumption patterns due to the shift in population structure, and points to specific and detailed countermeasures for industries high in senior-friendly aspects [1].
A. Silver Industry of Korea and Japan
Korea's senior-friendly industry is defined as a singular industry: the products or services provided by private enterprises for the improvement of the health and living of senior citizens and senior-to-be citizens. On the other hand, Japan's silver industry is defined as a service industry, since it supplies support based on market principles for the functional weaknesses of senior citizens. The baby-boom generation of Korea is a generation of post-war industrialization, born between 1955 and 1963 and consisting of people in their mid-forties to early fifties. Numbering about 8.16 million people, they form about 16.8% of the national population, and when the age of 53 is considered the age of retirement, they met their retirement period in 2008 [2].
The consumption desires of the baby-boom generation are focused on health and medical treatment; information communication products with simplified functions are preferred over complicated high-tech ones, and one-stop fused services are favored, considering that non-reversible and continuous services are often needed. A generation similar to Korea's baby-boom generation is the Dankai generation, born between 1947 and 1949 after World War II. They form 5.4% of the entire population with 8.06 million people and reached their retirement period in 2007. Their consumption desires are strong in health, medical and leisure tendencies, and though their hobbies and preferences are very diverse, there is an overriding tendency to value trust when choosing products, services or enterprises. The most popular products of the Japanese silver industry fall into the groups of health-related food products or anti-aging cosmetic products, following the assumption that senior citizens feel an "anxiety of growing old" [2]. In 1987 Japan established the "Institute of Techno Aid Society" in order to encourage more IT technology promotion, such as product standardization and development of human resources, so that more IT technology could be applied to welfare equipment. In particular, the "u-Japan" policy, devised in early 2004, held the purpose of realizing a society in which everyone, including senior citizens and the disabled, could easily access information and in which "Universal Design" is possible. This was done through research on IT-use support in order to provide a good living environment and support independent lives for senior citizens.
B. Senior-Friendly Products Fused with IT Technology
In Japan's case, the senior-friendly industry was chosen as a promising industry of the future, along with the environmental industry and the sensitivity industry, in the report "The 21C Economic World - industrial policy vision" by the subsidiary Industrial Structure Council. By 2010, research and development were carried out on agent technology, improvement of interfaces for senior users, securing of information accessibility, and IT-use support for senior citizens and the disabled. This was all done from the beginning of 2004, when u-Japan was devised with the purpose of realizing a society in which "Universal Design" is possible and everyone can easily access information. Through this, the pursuit of independent lives for senior citizens is supported by providing good living environments and securing employment opportunities, while also becoming a main force behind the provision of remote medical systems and the securing of mobility for sufferers of chronic diseases. Houses were constructed that can comprehend, through IT products such as sensors and information appliances, the state of health of
its inhabitants, while various policies and commercialized technologies were developed to support the independent lives of senior citizens through social involvement. The "New IT Reform Strategy" holds the purpose of reforming the structure of medical treatment, pro-environment, safety and traffic through IT, and also of resolving social problems such as unemployment, welfare for the disabled and the graying of society through the use of IT. Also, the importance of targeted IT reinforcement was emphasized and the goal was to strengthen international competitiveness through IT. In May 2007 the Japanese Ministry of Internal Affairs judged that the cause of the fall in the info-communications industry was the absence of an integrated strategy on the part of the state and announced the "International Competitiveness Reinforcement Program." In Japan the Ministry of Health declared a list of welfare equipment for rental and equipment which could prevent the need for care service, as can be seen in Table 2.

Table 2 Welfare items of Japan

Purchase (5): Versatile bathtub, Urine gather, Bathing supplies, Toilet chair, Lift
Rental (12): Care bed components, Bedsore prevention goods, Wheelchair components, Handrail, Position changer, Care beds, Cane, Wheelchairs, Slope, Walker, Lift
Table 3 Welfare items of Korea

Purchase (10): Toilet chair, Anti-slip goods, Safe grip, Walk assistive goods, Bath chair, Handy toilet, Cane, Bedsore prevention seat
Rental (6): Wheel chair, Electric bed
C. IT Technology for Application to Senior-Friendly Industry Products

Table 4 Applications for welfare equipment with IT

Name | Function | Application with IT
Toilet chair | Supports urination | Measures glucosuria value; controlled seat temperature; automatic opening cover
Versatile bathtub | Assists baths (up-down) | Informs of and maintains water temperature; whirlpool massage; bath lift
Bath chair | Posture maintenance when taking baths | Tilting and height-controllable chair
Care bed | Support for standing | Electrically controlled height
Walker and walk assistive goods | Tools to aid in walking | Can ascend stairs; detects obstacles; automatic brake function on downhill slopes
Cane | Helps keep standing balance when walking | Detects obstacles; automatically folds
Safety grip bar | Supports the user | -
Bedsore prevention seat and mat | Bedsore prevention | Massages; cooling and heating functions; position memory seat and mat
Position changer | Changes the posture and position of the body | Electrically controlled angle; position memory seat and mat
Wheel chair | Assists movement | Stand-up electric wheelchair; auto-driving wheelchair; position tracking
Wander detector | Reports the whereabouts of wandering patients | Position tracking

In comparing Table 3 with Table 2, the welfare equipment items offered to senior citizens in Korea differ from those offered in Japan. In Japan's case, there is a small number of purchasable items and a greater number of items available for rental services. Another difference is the focus on providing component parts. In the case of Korea, however, more than half the items are purchasable, which increases the burden put on the consumer. Also, Korea's welfare equipment is more passive, meaning that there are hardly any items with electric controllers; only two such items exist, the electric wheelchair and the electric bed.
Table 4 gives examples of applying IT technology to senior-friendly products. Most present products exclude IT technology, but fusing IT into the senior-friendly industry is inevitable for the improvement of senior life. The current market for welfare equipment must not stop at simply implementing IT but continue to introduce new equipment. One such product is a detector for wandering dementia patients, a piece of equipment already available as rental welfare equipment through the Japanese government. Today the number of dementia patients is about a million, while scholars predict that by 2050 the number will increase to 2.1 million. Though the number of dementia patients is on the increase, there is a limit to the facilities available to accommodate them. Therefore some dementia patients must receive outpatient treatment, where a guardian must take care of the patient 24 hours a day. Since it is difficult to watch over a patient full time, many cases occur in which the patient leaves the house and goes missing. A sensor for such patients would make it possible to regularly report the whereabouts of patients and also to locate patients in the case of an emergency. Also, as the number of senior citizens living alone is quickly increasing, there is a need to introduce a system that can sense emergencies on a regular basis. There needs to be a system that sends data measured through movement sensors and water/electricity supply meters installed within the household to nurses of the elderly and to the relevant government control authorities, thus preventing lonely deaths and aiding the fast notification of family members in the case of an emergency. Such activity-detecting systems are already managed by local governments at present (in Yongin and Seoul), but since they are in the demonstration-business stage it is difficult for every user to benefit, and thus there is a need for the system to be introduced on the part of the Korean government.
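The inactivity-alert idea described above can be sketched as follows; the 12-hour threshold and the three input channels are illustrative assumptions, not a specification of the Yongin/Seoul systems:

```python
# Hypothetical sketch: alert caregivers only when every independent
# activity signal (motion sensor, water meter, electricity meter) has
# gone quiet for a set window. Threshold is an illustrative assumption.
INACTIVITY_LIMIT_H = 12

def should_alert(hours_since_motion, hours_since_water_use, hours_since_power_use):
    """Alert only when all channels show no activity within the limit."""
    return min(hours_since_motion,
               hours_since_water_use,
               hours_since_power_use) >= INACTIVITY_LIMIT_H

print(should_alert(14, 13, 15))  # no activity on any channel -> True
print(should_alert(14, 2, 15))   # water was used recently -> False
```

Combining several channels with `min` keeps a single recent activity event (e.g. a tap being used) sufficient to suppress a false alarm.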
reorganization of the senior-friendly industry as a newly developing industry for the improvement of the quality of senior life. The senior-friendly industry has grown and developed through the support of academic and business-related fields. In particular, the market of the senior-friendly industry is expanding due to the entrance of the financially affluent Dankai generation into senior age. The Korean senior-friendly industry, however, only began to gain interest in the 1990s, and the "Institute of the Future Social Committee" selected nineteen items in eight sections for the primary stage of the senior-friendly industry and fifteen items in six sections for the secondary stage. As the wealthy baby-boom generation reaches senior age, it is expected that the senior-friendly industry will undergo massive growth and that the senior class will undergo definite changes due to different economic and consumption values. In short, the aging of society in Korea is occurring rapidly, and by referring to Japan, a society where the senior-friendly industry is already active, it is possible to establish countermeasures for Korea's senior-friendly industry and to support it as a new potential national developing power industry.
REFERENCES
1. National IT Industry Promotion Agency (2009), A Study on Higher Value-added Necessities for IT-based Convergence Senior Products, Ministry of Knowledge Economy.
2. Park, Hyun-Sik, Kim, Yeon-Jeong et al. (2009), The Comparison Study between the Senior-Friendly Industry of Korea and that of Japan Depending on the Paradigm Change into the Aged Society, The Korean Association of Asian Studies 12-2, pp. 71-111.
Author: Hyangsook Seong
III. CONCLUSIONS
Problems of health, medical care and welfare are emerging as social problems due to the rapid graying of the population. Such an increase in the senior population is a problem undergone in the past by Japan and Western countries: Japan already implemented various services through the
Institute: Dept. Biomedical Engineering, Inje University
Street: Obangdong
City: Gimhae
Country: Republic of Korea
Email: [email protected]
Biomechanical Analysis on the Effect of Bone Graft of the Wrist after Arthroplasty M.N. Bajuri, M.R. Abdul Kadir, and M.Y. Yahya Medical Implant Technology Group, Faculty of Biomedical Engineering and Health Sciences, Universiti Teknologi Malaysia, 81310 UTM Skudai, Malaysia
Abstract— Instability is one of the symptoms associated with Rheumatoid Arthritis of the wrist joint. The instability, which is caused by weakened ligaments as well as worn cartilage, allows the carpal bones to move freely, causing a painful condition. Wrist arthroplasty is one of the treatments for severe cases of Rheumatoid Arthritis of the wrist joint. This project involved biomechanical analysis of wrist arthroplasty with and without bone graft in terms of its ability to provide carpal stability for replacement of a joint severely affected by Rheumatoid Arthritis. The clinical symptoms of the skeletal disease, such as the pathological changes of the bone, cartilage, ligaments and tendons, as well as the load transfer, were used to accurately simulate a wrist joint affected by Rheumatoid Arthritis. A load simulating the gripping action of the joint was applied to the models. Results showed that fixation of the wrist arthroplasty with bone graft had less displacement, with a sufficient amount of stress, compared to fixation without bone graft. It can be concluded that fixation of the wrist arthroplasty, specifically at the carpal component, can be enhanced using bone graft.
commonly associated with carpal bone instability; the weakened ligaments reduce the bony support of the joint [5]. Arthroplasty is one of the treatments for severe deformity in Rheumatoid Arthritis, intended to preserve wrist joint motion while removing pain or any other problems associated with the wrist joint [6]. However, there are several problems associated with wrist arthroplasty. Loosening commonly occurred due to lack of bony support for the implant [7-8]. Jay Menon et al. claimed that wrist arthroplasty with a carpal component fixed with three screws could successfully provide good bony support [8]. Bone graft was needed in order to enhance the fusion between bone and implant [9]. Hence, the aim of this project was to analyse the biomechanical effect of bone graft in enhancing the distal component fixation of the wrist implant.
Keywords— Finite element analysis, wrist, Rheumatoid Arthritis, arthroplasty, stability.
II. MATERIALS AND METHODS
A. Finite Element Model Construction
I. INTRODUCTION
The wrist joint is involved in virtually every human functional activity and, as such, is exposed to a high number of traumatic injuries, primary osteoarthritis and secondary degenerative disease [1]. One of the most common skeletal diseases associated with the wrist joint is Rheumatoid Arthritis [2]. The disease affects mostly synovial joints, resulting in considerable pain, loss of function and eventual deformity. It is a life-long condition, and the disease activity may change over time. Even though other joints such as the hip and knee can also be affected by Rheumatoid Arthritis, the most commonly affected joint is the wrist [3]. In general, there are three main symptoms that lead to wrist deformation: cartilage destruction, synovial proliferation and ligamentous laxity [3-4]. The laxity of the ligaments, due to the stretching action caused by synovial expansion, results in the instability of the wrist. As stated by Metz et al., a wrist affected by Rheumatoid Arthritis was
A three-dimensional model of the wrist joint was reconstructed from Computed Tomography (CT) with no macroscopic or radiological signs of pathology. The scans were taken of the whole upper limb of a normal human body. The radiograph of the wrist, ranging from the distal end of the left forearm bones (radius and ulna) to the proximal third of the metacarpals, was then cropped. The total length of the scans was 102.3 mm, with an in-plane resolution of 0.98 mm and a slice thickness of 1.5 mm. Semi-automatic segmentation was carried out on the slices of the CT dataset in order to extract the bone area. A three-dimensional (3D) model of the wrist joint was reconstructed based on the segmented contours. Since the source CT dataset has relatively low resolution, a number of sharp edges existed. Hence, automatic smoothing steps were carried out on the bone geometry. To perform the surface smoothing, vertices were shifted according to their neighbours' coordinates. The interaction between the bones was then investigated right after the bones had been
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 773–777, 2011. www.springerlink.com
constructed. This is crucial due to the numerous articulations at the wrist joint. Boolean operations were performed in order to check for any intersecting bodies between bones. If overlapping occurred, modifications were made by subtracting each body using Boolean operations. In order to check the accuracy of these operations, the image was then recalculated so that the location and shape of the particular bone could be reconfirmed. The model was then converted into a finite element mesh using triangular surface elements. Due to the complexity of the bone model as well as limitations on computing resources, the distal parts of the metacarpals were removed.
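The smoothing step described above (shifting each vertex according to its neighbours' coordinates) is essentially Laplacian smoothing. A minimal sketch on a toy mesh, assuming a simple move-toward-centroid update:

```python
# Sketch of Laplacian surface smoothing: each vertex moves a fraction
# `lam` toward the centroid of its neighbours. Mesh data here is a toy
# example, not the paper's wrist model.

def laplacian_smooth(vertices, neighbours, lam=0.5, iterations=1):
    """vertices: {id: (x, y, z)}; neighbours: {id: [neighbour ids]}."""
    for _ in range(iterations):
        new = {}
        for v, (x, y, z) in vertices.items():
            nbrs = neighbours[v]
            if not nbrs:
                new[v] = (x, y, z)
                continue
            cx = sum(vertices[n][0] for n in nbrs) / len(nbrs)
            cy = sum(vertices[n][1] for n in nbrs) / len(nbrs)
            cz = sum(vertices[n][2] for n in nbrs) / len(nbrs)
            # move a fraction lam toward the neighbourhood centroid
            new[v] = (x + lam * (cx - x), y + lam * (cy - y), z + lam * (cz - z))
        vertices = new
    return vertices

verts = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 1.0)}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(laplacian_smooth(verts, nbrs)[0])
# -> (0.25, 0.25, 0.25)
```

In practice a small `lam` over a few iterations removes the stair-step artifacts of low-resolution CT while limiting shrinkage of the bone geometry.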
B. The Modelling of the Wrist Affected with Rheumatoid Arthritis
Based on literature studies, the symptoms as well as the pathophysiology of Rheumatoid Arthritis were well identified. Type IIIa [10] of RA was simulated, in which six criteria were included:
1. Osteoporotic bone [10-11]
2. Cartilage destruction [4, 10-11]
3. Bone erosion [3, 10-13]
4. The carpus dislocates in the ulnar direction [10]
5. The loss of carpal height because of bone destruction [10]
6. Hand scoliosis resulting from the ruptured tendon; this mechanism ends in a changed axis of the wrist to the ulna with a consecutive rotation of the metacarpal bones in the radial direction [10]

The criteria related to dislocation and erosion of the bones were simulated using image processing software. The cartilage destruction was simulated by removing all the articular cartilages. Meanwhile, the property of osteoporotic bone was simulated by reducing the elastic modulus of the bones: 66% for cortical bone and 33% for cancellous bone [14-20]. These erosions, as well as the destruction of the cartilage, directly result in impaction of the wrist joint.
C. The Construction of the Finite Element Model of the Wrist Arthroplasty
The latest wrist implant, ReMotionTM by Small Bone Innovations, was used in the simulation [9]. The implant model was constructed in CAD software (SolidWorks 2009). The 3D models were converted into triangular surface elements and positioned onto the Rheumatoid Arthritis model. Surgical preparation of the bone was carried out prior to insertion of the implant. The implant components were then placed in their respective bones and the assembly was converted into a solid tetrahedral mesh.
Fig. 1 Three-dimensional finite element model of (a) the Rheumatoid Arthritis wrist, (b) the carpal side of the wrist arthroplasty, (c) the implant positioned in the Rheumatoid Arthritis model, and (d) the enhanced fixation of the carpal side of the wrist arthroplasty
IFMBE Proceedings Vol. 35
Biomechanical Analysis on the Effect of Bone Graft of the Wrist after Arthroplasty
D. Material Properties
The bone models were assigned a linear elastic isotropic material representing cortical bone (E = 18 GPa, ν = 0.2). The implants were assigned a homogeneous linear elastic material representing titanium alloy for the screws and carpal component (E = 110 GPa, ν = 0.35). The bone graft was also modelled as linear elastic (E = 100 MPa, ν = 0.2) [21]. Ligamentous constraint was provided using linear spring elements to model each ligament [22], with positions estimated from previously published anatomical studies [23]. The stiffness of the respective ligaments ranged from 40 to 350 N/mm. For ligaments without published material parameters, the properties of neighbouring ligaments were assumed to apply [24].
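The linear spring representation of the ligaments can be sketched as follows: each ligament carries a tensile force proportional to its elongation, with stiffness in the 40-350 N/mm range quoted above. The attachment points and the sample stiffness are illustrative, not values from the paper; ligaments are assumed to carry no load in compression (slack).

```python
import math

def spring_force(p_origin, p_insertion, rest_length, k):
    """Tensile force (N) of a linear ligament spring; slack ligaments carry none.

    Coordinates and rest_length in mm, stiffness k in N/mm.
    """
    length = math.dist(p_origin, p_insertion)
    elongation = length - rest_length
    return k * elongation if elongation > 0.0 else 0.0

# Example: a ligament of rest length 10 mm stretched to 12 mm at k = 100 N/mm.
f = spring_force((0, 0, 0), (12, 0, 0), rest_length=10.0, k=100.0)
```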
As shown in Fig. 3, the fixation without bone graft exhibited a higher displacement than the fixation with bone graft (1.6 mm versus 0.6 mm), particularly at the 1st metacarpal and trapezium.
E. Contact Modelling
Each contact body was defined as deformable, in accordance with its material properties [23]; hence deformable-to-deformable contact was established between the articulating surfaces. A friction coefficient of 0.3 was applied between the bones [25], and for the carpalcap, which has a relatively rougher surface, a friction coefficient of 0.8 was assigned. To simulate the fixation effect of the threads, glued contact was assigned between the screws and midpeg and their contacting bodies.
Fig. 2 Finite element model assembly, showing the proximal fixation and the loading conditions. Pressures were applied along the metacarpal axes and the proximal ends of the carpalcap were kept rigid
F. Specification of Loading Conditions
Loads calculated in a previous study were used to derive the boundary loading [23]. A gripping load with a resultant force of 647.5 N, distributed over the five digits and applied to the metacarpals, was used under static conditions. The applied loads and their placement are given in Table 1 [23].
Table 1 Relative loading on the metacarpal bones
Digit         Thumb   Index   Long   Ring   Little
Loading (N)    25.6    12.0   10.6    8.8     7.7
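The relative digit loads in Table 1 sum to 64.7, roughly one tenth of the 647.5 N resultant grip force, so the absolute metacarpal loads can be recovered by scaling. A small sketch of that scaling (the scale factor is derived here by arithmetic, not stated in the paper):

```python
# Relative loading on the metacarpal bones (Table 1)
relative = {"Thumb": 25.6, "Index": 12.0, "Long": 10.6, "Ring": 8.8, "Little": 7.7}
RESULTANT = 647.5  # N, total grip force over the five digits

# Scale the relative values so their sum equals the resultant force
scale = RESULTANT / sum(relative.values())
applied = {digit: round(v * scale, 1) for digit, v in relative.items()}
```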
III. RESULTS
A. Displacement
Contour plots of displacement for the two models, the fixation with bone graft and without bone graft, are shown in Fig. 3.
Fig. 3 The displacement plot of the fixation (a) with bone graft and (b) without bone graft
B. Stress Distribution
Fig. 4 shows the stress distribution on the carpal side of the wrist arthroplasty fixed with and without bone graft. The fixation with bone graft exhibited approximately 10 times higher stress than the fixation without bone graft. In general, the implant carried higher stresses than the bone and bone graft.
Fig. 4 Von Mises stress distribution on the palmar side of the wrist joint for the fixation with bone graft (a) and without bone graft (b)
IV. DISCUSSION
A wrist affected by Rheumatoid Arthritis is commonly associated with carpal bone instability. In addition, the stability of wrist arthroplasty remains debatable, as some clinical reports attribute failure of the arthroplasty to weak fixation leading to instability. One suggested method to enhance the fixation is the use of bone graft. The results clearly show that the displacement of the bones and implant in the model fixed with bone graft was low. Since this study was conducted under semi-static conditions, the low displacement indicates the stability of the implant fixation. The fixation with bone graft was therefore found to be the better solution for enhancing implant fixation, in line with previous clinical studies [9, 26-29].
Regarding the stress distribution, the results showed that higher stresses were generated in the implant with bone graft. This arises from the counteraction between the implant screws, the bone graft and the adjacent bones [30]: higher stresses are needed to generate the gripping action between these bodies [30-31]. Since no values exceeded the yield strength of the bone, implant or bone graft, it was concluded that this fixation successfully achieved stability.
Several limitations, with corresponding recommendations, were identified in this study. Regarding the post-arthroplasty wrist model, a micro-scale analysis of the fixation area would be preferable. The present work simplified the screws by removing the threads; it would be advantageous to model the threads accurately, as they have a crucial effect on fixation [32]. The effects on the small carpal bones could then be investigated in detail. Furthermore, only the hand-gripping action was simulated; a wider range of physiological loading conditions would yield a more complete model of the wrist. The ligament representation should also be improved with respect to direction and shape: the present model used straight linear springs, which do not precisely represent the actual ligaments [23-24, 33-34], and non-linear viscoelastic properties should be applied to simulate ligament behaviour adequately. Finally, the accuracy of the geometry of the articulating surfaces is essential for reliable results, because correct concavity and convexity of the articular surfaces promotes convergence of the finite element analysis.
V. CONCLUSIONS
This paper has illustrated the biomechanical behaviour of the two main fixation methods for wrist arthroplasty, with and without bone graft. Based on the results and the discussion presented, it can be concluded that wrist arthroplasty with the carpal component fixed with bone graft successfully restores the stability of the carpal bones in the wrist with Rheumatoid Arthritis. This stability was reflected in lower carpal displacement together with stress values sufficient to provide fixation.
ACKNOWLEDGMENT
The authors thank the Medical Implant Technology Group, Faculty of Health Sciences and Biomedical Engineering, Universiti Teknologi Malaysia.
REFERENCES
1. Wolfe S W, "Kinematic total wrist arthroplasty," US Patent US 20090204223A1, 2009
2. Stegeman M et al. (2005) Biaxial total wrist arthroplasty in rheumatoid arthritis. Satisfactory functional results, Rheumatology International 25:191-194
3. Simmen B R et al. (2007) (iv) The management of the rheumatoid wrist, Current Orthopaedics 21:344-357
4. Cush J J, Lipsky P E (1991) Cellular Basis for Rheumatoid Inflammation, Clinical Orthopaedics and Related Research 265
5. Metz V M et al. (1997) Ligamentous instabilities of the wrist, European Journal of Radiology 25:104-111
6. Adams B D (2006) Total wrist arthroplasty for rheumatoid arthritis, International Congress Series 1295:83-93
7. Shepherd D E T, Johnstone A J (2002) Design considerations for a wrist implant, Medical Engineering & Physics 24:641-650
8. Menon J (1998) Universal total wrist implant: Experience with a carpal component fixed with three screws, The Journal of Arthroplasty 13:515-523
9. Small Bone Innovations, "Surgical Technique, ReMotionTM Total Wrist Implant System," Small Bone Innovations, 2009
10. Trieb K, Hofstätter S (2009) Rheumatoid Arthritis of the Wrist, Techniques in Orthopaedics 24:8-12, doi 10.1097/BTO.0b013e3181a32a36
11. Trieb K, Hofstaetter S G (2009) Treatment strategies in surgery for rheumatoid arthritis, European Journal of Radiology 71:204-210
12. Ertel A N et al. (1988) Flexor tendon ruptures in patients with rheumatoid arthritis, The Journal of Hand Surgery 13:860-866
13. McKee A, Burge P (2010) (i) The principles of surgery in the rheumatoid hand and wrist, Orthopaedics and Trauma 24:171-180
14. Polikeit A et al. (2004) Simulated influence of osteoporosis and disc degeneration on the load transfer in a lumbar functional spinal unit, Journal of Biomechanics 37:1061-1069
15. Andresen R et al. (1999) CT determination of bone mineral density and structural investigations on the axial skeleton for estimating the osteoporosis-related fracture risk by means of a risk score, The British Journal of Radiology 72:569-578
16. Homminga J et al. (2001) Osteoporosis Changes the Amount of Vertebral Trabecular Bone at Risk of Fracture but Not the Vertebral Load Distribution, Spine 26
17. Schaffler M B, Burr D B (1988) Stiffness of compact bone: Effects of porosity and density, Journal of Biomechanics 21:13-16
18. Augat P et al. (1998) Anisotropy of the elastic modulus of trabecular bone specimens from different anatomical locations, Medical Engineering & Physics 20:124-131
19. Lang T et al. (1998) Noninvasive Assessment of Bone Density and Structure Using Computed Tomography and Magnetic Resonance, Bone 22:149S-153S
20. Rice J C et al. (1988) On the dependence of the elasticity and strength of cancellous bone on apparent density, Journal of Biomechanics 21:155-168
21. He D et al. Facet joint plus interspinous process graft fusion to prevent postoperative late correction loss in thoracolumbar fractures with disc damage: Finite element analysis and small clinical trials, Clinical Biomechanics, In Press, Corrected Proof
22. Ezquerro F et al. (2007) The influence of wire positioning upon the initial stability of scaphoid fractures fixed using Kirschner wires: A finite element study, Medical Engineering & Physics 29:652-660
23. Gislason M K et al. (2010) Finite element model creation and stability considerations of complex biological articulation: The human wrist joint, Med Eng Phys 32:523-531
24. Gislason M K et al. (2009) A three-dimensional finite element model of maximal grip loading in the human wrist, Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 223:849-861
25. Alkan I et al. (2004) Influence of occlusal forces on stress distribution in preloaded dental implant screws, The Journal of Prosthetic Dentistry 91:319-325
26. Walsh W R et al. (2004) Cemented fixation with PMMA or Bis-GMA resin hydroxyapatite cement: effect of implant surface roughness, Biomaterials 25:4929-4934
27. Thielen T et al. (2009) Development of a reinforced PMMA-based hip spacer adapted to patients' needs, Medical Engineering & Physics 31:930-936
28. Nguyen N C et al. (1997) Reliability of PMMA bone graft fixation: fracture and fatigue crack-growth behaviour, Journal of Materials Science: Materials in Medicine 8:473-483
29. Janssen D et al. (2008) Micro-mechanical modeling of the cement-bone interface: The effect of friction, morphology and material properties on the micromechanical response, Journal of Biomechanics 41:3158-3163
30. Mahoney O M et al. (2009) Rotational Kinematics of a Modern Fixed-Bearing Posterior Stabilized Total Knee Arthroplasty, The Journal of Arthroplasty 24:641-645
31. Alonso-Vázquez A et al. (2004) Initial stability of ankle arthrodesis with three-screw fixation. A finite element analysis, Clinical Biomechanics 19:751-759
32. Kayabasi O et al. (2006) Static, dynamic and fatigue behaviors of dental implant using finite element method, Advances in Engineering Software 37:649-658
33. Fischli S et al. (2009) Simulation of extension, radial and ulnar deviation of the wrist with a rigid body spring model, Journal of Biomechanics 42:1363-1366
34. Carrigan S D et al. (2003) Development of a Three-Dimensional Finite Element Model for Carpal Load Transmission in a Static Neutral Posture, Annals of Biomedical Engineering 31:718-725
Can Walking with Orthosis Decrease Bone Osteoporosis in Paraplegic Subjects? M.T. Karimi1, S. Solomonidis2, and A. Eshraghi3 1
Rehabilitation Faculty, Isfahan University of Medical Sciences, Isfahan, Iran 2 Bioengineering Unit, Strathclyde University, Glasgow UK 3 Department of Biomedical Engineering, University of Malaya, Malaysia
Abstract— Various types of orthoses have been designed to help paraplegic subjects stand and walk. The main purposes of using an orthosis are to decrease bone osteoporosis and to improve the physiological and psychological health of individuals with Spinal Cord Injury (SCI). However, it cannot be concluded from the literature whether walking with an orthosis decreases osteoporosis. The aim of this research was to measure the loads transmitted by the orthosis and by the anatomy, in order to determine the effect of orthosis use on the loads transmitted by the body. Five normal subjects were recruited to walk with a new type of Reciprocal Gait Orthosis (RGO) with strain gauges embedded on its lateral bar. The results showed that the pattern of the loads applied on the orthosis differed from that of the loads transmitted by the anatomy. It can therefore be concluded that walking with an orthosis can influence the magnitude of bone osteoporosis. Keywords— Spinal Cord Injury, Osteoporosis, Orthosis.
I. INTRODUCTION
Over the years, various types of orthoses have been designed to assist Spinal Cord Injury (SCI) subjects to stand and walk; however, the functional performance of these orthoses has not been adequate [1-6]. Clinical experience has shown that wheelchair users often have complications secondary to their injury and to long-term sitting [7]. Standing and walking have been claimed to bring benefits for SCI patients, such as decreasing bone osteoporosis, preventing pressure sores and improving the function of the digestive system [7-8]. However, the main reason clinicians prescribe an orthosis is to decrease bone osteoporosis. The main question posed here is whether walking with an orthosis can decrease bone osteoporosis. Many researchers assume that all the loads applied on the orthosis and body complex during walking of paraplegic subjects are transmitted by the orthosis [9-10]. However, if this assumption were correct, the bones would remain largely unloaded and walking with an orthosis would not decrease bone osteoporosis. The main aim of this research project was to measure the loads transmitted by the body, by the orthosis, and by the combination of body and orthosis, to determine the influence of orthosis use on osteoporosis.
II. METHOD
A new type of Reciprocal Gait Orthosis (RGO) was designed in the Bioengineering Unit of Strathclyde University. It is of modular construction, incorporating features which allow the alignment of the structure to be adjusted to suit the patient's needs. Features were also introduced to facilitate the donning and doffing procedure and to allow independent use of the device by patients. Strain gauges were attached to the lateral bar of the orthosis to measure the absolute values of the loads applied on it. The gait parameters and the loads applied on the hip joint and crutches were measured with a motion analysis system. Five normal subjects were recruited for this study. They had no deformity and no contraindication to standing and walking in their medical history. The mean values of their age, height and mass were 24 ± 6.04 years, 1.76 ± 0.023 m and 75.35 ± 10.75 kg, respectively. The subjects were trained to stand and walk with the orthosis before data collection began.
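The strain gauges on the lateral bar can, in principle, recover axial force from measured strain via F = E·A·ε for a uniaxially loaded bar. This is a hedged sketch of that conversion only; the bar material (aluminium) and cross-sectional area below are illustrative assumptions, not values from the paper, and a real installation would use a calibrated bridge.

```python
E_BAR = 70e9   # Pa, assumed aluminium lateral bar (illustrative)
AREA = 25e-6   # m^2, assumed bar cross-section (illustrative)

def axial_force(strain):
    """Axial force (N) in a uniaxially loaded bar from a strain reading."""
    return E_BAR * AREA * strain

# Example: a 100 microstrain reading on the assumed bar.
f = axial_force(100e-6)
```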
III. RESULTS
The maximum axial compression force applied on the lateral bar of the orthosis was 0.174 ± 0.055 N/BW, which was very small compared with the force applied on the foot (Figs. 1 and 2). An axial tension force of approximately 0.138 ± 0.05 N/BW was applied on the orthosis, with its maximum during the swing phase. The mean values of the compression and tension forces applied on the lateral bar during walking were 0.17 ± 0.0745 and 0.107 ± 0.0477 N/BW, respectively. The flexing moment applied at the lateral bar was significantly less than that applied at the hip joint: 0.0366 ± 0.0312 Nm/kg for the lateral bar versus 0.516 ± 0.051 Nm/kg for the hip joint complex (Figs. 3 and 4). In contrast, the extending moment transmitted by the lateral bar (0.22 ± 0.0411 Nm/kg) was greater than the flexing moment. The adducting moment applied at the hip joint complex was 1.056 ± 0.288 Nm/kg; however, it was only
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 778–780, 2011. www.springerlink.com
Fig. 1 The vertical force transmitted through the foot in walking with the orthosis
Fig. 2 The vertical force transmitted through the lateral bar of the orthosis
Fig. 3 The flexing/extending moments applied on the hip joint complex
Fig. 4 The flexing/extending moment applied on the orthosis
0.516 ± 0.051 Nm/kg for the lateral bar. Figures 1 and 4 show the output of the strain gauges and the loads applied on the lateral bar of the orthosis.
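The loads above are reported normalised to body weight (N/BW) and to body mass (Nm/kg). A small sketch of that normalisation, using the mean subject mass from Section II (75.35 kg); the raw force and moment readings below are hypothetical values chosen only to land near the reported means.

```python
G = 9.81                 # m/s^2, gravitational acceleration
mass = 75.35             # kg, mean subject mass (Section II)
body_weight = mass * G   # N

raw_force = 128.6        # N, hypothetical strain-gauge force reading
force_nbw = raw_force / body_weight   # dimensionless, reported as N/BW

raw_moment = 16.6        # Nm, hypothetical moment at the lateral bar
moment_per_kg = raw_moment / mass     # Nm/kg
```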
IV. DISCUSSION
The magnitude of the compression force applied on the lateral bar of the orthosis was very small compared with the force applied on the foot. The maximum compression force on the lateral bar occurred before toe-off (before the maximum extension angle of the hip joint). It then changed to a tension force as the hip joint of the orthosis reached the extension stop; as expected, the lateral bar was subjected to a tensile force during the swing phase. As can be seen from Figs. 1 and 2, the pattern of the force applied on the bar is completely different from that of the force applied on the foot during walking with the orthosis.
The mean value of the adducting moment transmitted by the lateral bar of the orthosis was 0.516 Nm/kg, nearly half of the adducting moment around the hip joint complex. The peak of the adducting moment applied on the bar occurred near the end of the stance phase. The weight of the orthosis produced an adducting moment during the swing phase that was smaller than that in the stance phase. The lateral bar of the orthosis was subjected to an extending moment in the stance phase with a mean value of 0.22 Nm/kg. In contrast, a flexing moment was applied at the hip joint complex in the first part of the stance phase, changing to an extending moment in the second part. The extending moment applied on the lateral bar decreased during the swing phase and changed to a flexing moment at the end of this phase. The patterns of the loads transmitted through the lateral bar of the orthosis differed from the data collected by gait analysis. The strain gauge data give the absolute values of the loads applied on the orthosis, which can be used for designing a new orthosis. The results of the strain
gauge showed that using gait analysis data for designing a new orthosis is not practical. Unfortunately, there is a lack of information in the literature regarding the forces and moments applied on a HKAFO orthosis during walking; however, some researchers have carried out studies to determine the loads applied on a KAFO orthosis during walking of handicapped subjects. The magnitudes of the loads reported by Lim [11] were greater than those found in the present research, although the patterns of the bending moments of the KAFO and HKAFO orthoses appear to be the same [11].
V. CONCLUSION
The patterns of the loads applied on the hip joint complex and on the orthosis structure differ from each other. This means that the role of the orthosis in transmitting the loads is smaller than expected. As only part of the load is transmitted through the orthosis, it can be concluded that using the orthosis can influence the development of osteoporosis in paraplegics. In contrast to what has been reported in the literature, the main moment applied on the orthosis is the adducting moment.
REFERENCES
1. STALLARD, J., MAJOR, R. E., POINER, R., FARMER, I. R. & JONES, N. 1986. Engineering design considerations of the ORLAU Parawalker and FES hybrid system. Engineering in Medicine, 15, 123-9.
2. MAJOR, R. E., STALLARD, J. & ROSE, G. K. 1981. The dynamics of walking using the hip guidance orthosis (hgo) with crutches. Prosthetics and Orthotics International, 7, 19-22.
3. STALLARD, J., MCLEOD, N., WOOLLAM, P. J. & MILLER, K. 2003. Reciprocal walking orthosis with composite material body brace: initial development. Proceedings of the Institution of Mechanical Engineers. Part H, 217, 385-92.
4. BUTLER, P., ENGELBRECHT, M., MAJOR, R. E., TAIT, J. H., STALLARD, J. & PATRICK, J. H. 1984. Physiological cost index of walking for normal children and its use as an indicator of physical handicap. Developmental Medicine and Child Neurology, 26, 607-12.
5. HARVEY, L. A., DAVIS, G. M., SMITH, M. B. & ENGEL, S. 1998. Energy expenditure during gait using the walkabout and isocentric reciprocal gait orthoses in persons with paraplegia. Archives of Physical Medicine and Rehabilitation, 79, 945-9.
6. MASSUCCI, M., BRUNETTI, G., PIPERNO, R., BETTI, L. & FRANCESCHINI, M. 1998. Walking with the advanced reciprocating gait orthosis (ARGO) in thoracic paraplegic patients: energy expenditure and cardiorespiratory performance. Spinal Cord, 36(4), 223-7.
7. DOUGLAS, R., LARSON, P. F., D'AMBROSIA, R. & MCCALL, R. E. 1983. The LSU Reciprocal-Gait Orthosis. Orthopedics, 6, 834-839.
8. MAZUR, J. M., SHURTLEFF, D., MENELAUS, M. & COLLIVER, J. 1989. Orthopaedic management of high-level spina bifida. Early walking compared with early use of a wheelchair. Journal of Bone and Joint Surgery, 71, 56-61.
9. DALL, P. M. 2004. The function of orthotic hip and knee joints during gait for individuals with thoracic level spinal cord injury. PhD Thesis, University of Strathclyde.
10. JOHNSON, G. R., FERRARIN, M., HARRINGTON, M., HERMENS, H., JONKERS, I., MAK, P. & STALLARD, J. 2004. Performance specification for lower limb orthotic devices. Clinical Biomechanics, 19, 711-8.
11. LIM, S. Y. E. 1985. A study of the biomechanical performance of knee-ankle-foot orthoses in normal ambulatory activities. PhD Thesis, University of Strathclyde.
Author: Mohammad Karimi Institute: Rehabilitation Faculty, Isfahan University of Medical Sciences City: Isfahan Country: Iran Email: [email protected]
Design and Development of Arm Rehabilitation Monitoring Device R. Ambar1,2, M.S. Ahmad1, and M.M. Abdul Jamil1,2 1
Department of Electronics Engineering, Faculty of Electrical and Electronics Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia 2 Modelling and Simulation Research Laboratory
Abstract— Continuous monitoring of arm rehabilitation exercise is essential to provide information on rehabilitation outcomes for analysis by physical therapists, helping them to improve the rehabilitation process. Moreover, a portable home-based rehabilitation device that can interact with patients (by providing repetition counters and memory) can help motivate patients in their daily rehabilitation activity. Previous studies of home-based rehabilitation have shown improvements in promoting recovery of human movement. However, existing rehabilitation devices are expensive, need to be supervised by a physical therapist, and are too complicated for home use. Some devices (such as exoskeleton-type devices) are also unsuitable for home use due to their large size and complex systems. In the present work, a portable arm rehabilitation device is designed and developed which can continuously monitor arm movement. The device consists of a flexible bend sensor attached at the elbow to measure the elbow bending angle. The analog signal from the sensor is conveyed to a PIC18F4520 microcontroller for data processing and logging. An LCD display interacts with the user by displaying the number of repetitions. The results of the rehabilitation activity can also be retrieved on a personal computer for further monitoring and analysis. Keywords— monitoring device, arm rehabilitation.
I. INTRODUCTION
For post-stroke patients, rehabilitation is important to regain the mobility and fitness to do the things they did previously. Post-stroke rehabilitation may include physical activity requiring extensive exercise as well as the patient's self-motivation to complete the process. A rehabilitation process that requires patients to perform repetitive physical exercises can become unattractive and may lead to loss of interest in completing it. Researchers from various institutions and companies have been actively searching for ways to design devices and training procedures, incorporating high-tech systems, that can help patients with disabilities and injuries. These systems may involve attaching devices to the affected limbs in order to improve patient movement in any rehabilitation environment. An unsupervised system which continuously monitors the rehabilitation of patients can be considered an important method for analyzing rehabilitation progress and a tool for displaying the results of certain tasks. Such automated monitoring can be considered an important tool in the field of post-stroke rehabilitation.
L.K. Simone et al. [1] used flex sensors contained in Lycra/Nylon sleeves to collect real-time finger flexion data over extended periods of time. The individual sensor sleeves are securely attached to the back of each finger. They demonstrated that data can be collected comfortably over an extended period while individuals perform daily activities away from the clinical site. Jae-Myung Yoo et al. [2] used a flex sensor as part of a sensing system for artificial-arm control research, measuring muscle flexion. The proposed system was verified and tested on a small artificial arm with 2 degrees of freedom (DOF) consisting of two actuators and two potentiometers. They found that the moving speed of the artificial arm matched that of the actual human arm, though some position error occurred during the tests. In 1999, Chapman et al. [3] used a flex sensor along with nitinol wires (usually known as muscle wire or Flexinol), attached to a straw, as a design alternative to motor-driven artificial limbs, referred to as "Strawbotics". The structures are relatively small, cheap, light, continuous, adaptable, available and easily controllable. Rehabilitation assessment is based on clinical tools which can be executed by self-report (home-based) or by observer rating (done at a rehabilitation centre) [4]. Observer rating by caregivers can be time consuming, and patients are required to attend repeated observations at the rehabilitation centre, which can be costly.
Early home-based rehabilitation has been shown to promote better physical health, as it permits the motor and functional gains that occur with natural recovery, together with satisfaction with community integration [5]. How to effectively motivate patients to perform regular physical activity is an important research topic [6]. Furthermore, while extensive research has focused on lower-limb rehabilitation, less work has been done on arm rehabilitation. Moreover, the majority of arm rehabilitation research has involved exoskeleton-type devices which, although they can maintain correct movement form and produce force inputs for therapy, are limited by their large and
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 781–784, 2011. www.springerlink.com
complex build, which makes them unsuitable as home-based rehabilitation devices. Therefore, the work described in this paper builds on the studies above, applying a flex sensor in the design and development of a monitoring device for arm rehabilitation.
II. MATERIALS
A. Hardware
The arm rehabilitation monitoring device consists of a flex sensor, a Microchip PIC18F4520 microcontroller, a Cytron SK40C PIC start-up kit, a USB-to-USART converter, a USB ICSP PIC programmer and a 16x2 RT1602C LCD display. Figure 1 shows the actual hardware setup.
Fig. 2 Illustration of flex sensor [7]
• PIC18F4520 Microcontroller
Fig. 3 Schematic of the PIC18F4520 microcontroller [7]
Fig. 1 Picture of the hardware setup for the monitoring device
• Flexible Sensor
A flexible sensor, or flex sensor, is a type of resistor composed of tiny patches of carbon whose resistance changes when the sensor bends from a convex to a concave shape. It is an ideal input device for controlling limb-like mechanisms because it is easy to position when attached to a finger or elbow to track movement. Figure 2 shows the basic function of a flex sensor. In this project we use a flex sensor produced by Spectra Symbol. The resistance of the sensor is about 8,000 Ω (8 kΩ) when flat (0°) and gradually increases as the sensor is bent inward, reaching about 10-14 kΩ at 90°. The sensor may be bent beyond 360°, depending on the radius of the curve, which increases the resistance further. Its life cycle is more than 1 million bends. The sensor measures 1/4 inch wide, 4 1/2 inches long and only 0.019 inches thick.
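The resistance-to-angle relationship above can be sketched as a linear interpolation between the two quoted calibration points (about 8 kΩ at 0° and roughly 10-14 kΩ at 90°). The 12 kΩ value used for the 90° point is an assumption from the middle of that range; a real device would calibrate both endpoints per sensor.

```python
R_FLAT = 8000.0    # ohms at 0 degrees (datasheet value quoted above)
R_BENT = 12000.0   # ohms at 90 degrees (assumed mid-range value)

def bend_angle(resistance):
    """Linearly interpolate the elbow bend angle (degrees) from resistance."""
    return (resistance - R_FLAT) / (R_BENT - R_FLAT) * 90.0

# Example: a resistance halfway between the endpoints maps to 45 degrees.
angle = bend_angle(10000.0)
```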
The PIC18F4520 was chosen for its self-programmable Flash memory, its suitability for C programming, and its built-in ADC module. Source code can be uploaded to the microcontroller using a PIC programmer without much hassle. Figure 3 shows the circuit schematic of this microcontroller. The PIC has five input/output ports, namely ports A, B, C, D and E. Each port has different functions, and most can be used as I/O ports.
B. Software
For this study, the PIC18F4520 microcontroller was programmed in C using the C18 compiler integrated into the MPLAB IDE, free software from Microchip that supports a software simulator, hardware debugging, and programming tools under a single standardized graphical user interface. C was chosen for several reasons: it is less time consuming, easy to modify and update, and portable to other microcontrollers with simple modifications. The C18 compiler produces a
IFMBE Proceedings Vol. 35
Design and Development of Arm Rehabilitation Monitoring Device
783
hexadecimal (hex) file of the compiled source code. This hex file is loaded into the PIC18F4520 using the PICkit 2 software via the PIC programmer. Once the hex file has been downloaded successfully, the microcontroller and its peripherals are ready for use.
III. EXPERIMENTAL METHOD

• Sensing System
Fig. 5 (a) Elbow guard attached with flex sensor (b) LCD displays repetition counter.
Fig. 4 Overall hardware setup diagram

Figure 4 shows the overall hardware setup diagram. In this project, we propose a wearable arm rehabilitation monitoring device based on flex sensor technology. We propose to use an elbow guard, which can be bought in a local pharmacy. A single flex sensor is attached to the lower part of the elbow guard so that arm bending activity can be detected easily. Figure 5 (a) shows the initial setup of the sensor on the elbow guard. The sensor is connected to the microcontroller for analog signal detection. To process the output from the sensor, its variable resistance is converted to a variable voltage. The analog signal is then passed to the analog-to-digital (A/D) converter in the PIC for data processing. Before the analog data can be processed, the source code must be downloaded from the computer into the microcontroller unit through a PIC programming device; once it is successfully downloaded, the processor starts working. The arm rehabilitation monitoring device operates when an analog signal from the sensory system is detected due to arm bending activity. The signal is sent to the microcontroller, which reads and converts the data into digital form according to the algorithm in the previously downloaded source code and stores it internally. These data can be transmitted to a PC through a USB-to-UART converter.
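The chain resistance → voltage → 10-bit ADC described above can be inverted on the PC side to recover the sensor resistance from a raw reading. The sketch below assumes a hypothetical voltage divider (flex sensor on the high side, a fixed 10 kΩ resistor to ground, ADC sampling the midpoint); the topology, resistor value and function name are assumptions for illustration, not taken from the paper.

```c
#include <math.h>

/* Sketch of recovering the flex-sensor resistance from a 10-bit ADC
 * reading, assuming a hypothetical divider:
 *   Vcc -- flex sensor -- node -- 10 kOhm fixed resistor -- GND,
 * with the ADC sampling the node.  Then
 *   Vout  = Vcc * Rfixed / (Rflex + Rfixed)
 *   Rflex = Rfixed * (Vcc/Vout - 1) = Rfixed * (1023/counts - 1). */
#define R_FIXED_OHMS 10000.0

double flex_resistance_ohms(unsigned adc_counts)   /* expects 1..1023 */
{
    if (adc_counts == 0) adc_counts = 1;           /* avoid divide by zero */
    return R_FIXED_OHMS * (1023.0 / adc_counts - 1.0);
}
```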
• Monitoring System and Data Logging
As a monitoring system, this device is equipped with a real-time movement counter. Each time the arm is bent, the flex sensor detects the movement and the LCD displays the number of movements performed by the user. The counter stops when the 'RESET' button on the SK40C board is pushed. Initially, the microcontroller processes the analog data from the flex sensor into output for the LCD display and the data logging system. Fig. 5 (b) shows an example of the LCD display counter. The collected data can be displayed directly on the computer through a graphical user interface (GUI), which can be designed using Visual Basic version 6.0. The acquired data can also be saved into .txt files. Data such as day, date, time, number of repetitions, the angle recorded for each repetition and the best angle recorded for each day can be saved for monitoring purposes. This is a key advantage of this project.
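The repetition counter described above can be sketched as a two-state machine with hysteresis, so that noise near a single threshold cannot inflate the count. The threshold levels below are hypothetical ADC counts, not values from the paper.

```c
/* Sketch of the repetition-counter logic: one count per bend-and-release
 * cycle.  The two thresholds are hypothetical ADC levels; the gap between
 * them provides hysteresis against noisy readings. */
#define BENT_LEVEL     600   /* ADC counts treated as "arm bent"     */
#define RELEASED_LEVEL 450   /* ADC counts treated as "arm released" */

typedef struct { int bent; unsigned repetitions; } rep_counter_t;

void rep_counter_update(rep_counter_t *c, unsigned adc_counts)
{
    if (!c->bent && adc_counts >= BENT_LEVEL) {
        c->bent = 1;
        c->repetitions++;            /* count once on the inward bend */
    } else if (c->bent && adc_counts <= RELEASED_LEVEL) {
        c->bent = 0;                 /* re-arm on release */
    }
}
```

On the real device the same update would run once per ADC sample, and the count would be forwarded to the LCD and the UART link.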
IV. RESULTS AND DISCUSSION

In this paper, the characteristics of the flex sensor have been recorded. The flex sensor is a resistance-varying strip whose total electrical resistance increases when the strip is flexed in one direction (in the other direction the resistance does not vary significantly). The bending angle is measured between the two tangent lines at the ends of the flex sensor's body. The flex sensor shows a typical electrical resistance variation when flexed or bent, as shown in Fig. 6 (a), Fig. 6 (b) and Fig. 6 (c). From the experiment, the resistance value against the flex sensor bending angle can be plotted as shown in Fig. 7. The experiment shows that when the flex sensor is bent inward, the resistance increases significantly as the bending angle increases, whereas when it is bent outward, the resistance decreases gradually. These preliminary findings suggest that the flex sensor is well suited to detecting bending angle by utilizing its inward bend.
R. Ambar, M.S. Ahmad, and M.M. Abdul Jamil
Fig. 6 Variation of the flex sensor's resistance reading

Fig. 7 Resistance value (kΩ) against flex sensor bending angle (0˚ to 320˚), showing inward and outward bends

V. CONCLUSION

A wearable arm rehabilitation device equipped with a monitoring system for post-stroke rehabilitation was designed and proposed. The described system includes a flex sensor and provides real-time monitoring and a data logging function, translating into the following advantages: the system is compact and lightweight, does not restrict movement during usage, and is easy to attach onto the arm with minimal external assistance. It has a data logging system which can store data for a certain period of time for further analysis by a physical therapist. With this low-cost device, stroke patients can use the system at home with minimal assistance from doctors and clinicians. Our future work will focus on further enhancing the capability of the sensory unit by adding a 3-axis gyro sensor to measure the speed of arm bending; we predict this will improve the rehabilitation process, which can be assessed by measuring the time taken to perform an arm bending movement. In conclusion, the preliminary findings attained from this project may enable us to contribute towards the development of a new arm rehabilitation monitoring device which can benefit human lives.

ACKNOWLEDGMENT

The authors would like to thank Mr. Hazwaj Bin Poad for supplying the hardware materials for this project. Appreciation also goes to the technicians of the Electronic Measurement Laboratory for their assistance in this work.

REFERENCES

1. Simone L.K. et al. (2004) A low cost method to measure finger flexion in individuals with reduced hand and finger range of motion. Proc. 26th Annual International Conference of the IEEE EMBS, San Francisco, USA, September 1-5, 2004
2. Yoo J.-M., Ahn Y.-M. (2006) A study on a sensing system for artificial arm's control. SICE-ICASE International Joint Conference 2006, October 18-21, Bexco, Busan, Korea
3. Chapman R. et al. (1999) A motorless artificial limb and its control architecture. Proc. IEEE Canadian Conference on Electrical and Computer Engineering, Edmonton, Canada, May 9-12, 1999
4. A combined sEMG and accelerometer system for monitoring functional activity in stroke. IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 17, No. 6, December 2009
5. Mayo N.E. et al. (2000) No place like home: an evaluation of early supported discharge for stroke. Stroke 31:1016-1023
6. Eriksson J., Mataric M.J., Winstein C.J. (2005) Hands-off assistive robotics for post-stroke arm rehabilitation. Proc. IEEE 9th International Conference on Rehabilitation Robotics, 2005
7. Cytron Technologies, http://www.cytron.com.my

Author: Radzi Bin Ambar
Institute: Universiti Tun Hussein Onn Malaysia
City: 86400 Batu Pahat, Johor
Country: Malaysia
Email: [email protected]
Development of Artificial Hand Gripper for Rehabilitation Process

A.M. Mohd Ali1,3, M.Y. Ismail2, and M.M. Abdul Jamil1,3

1 Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering,
2 Department of Communication Engineering, Faculty of Electrical and Electronic Engineering,
3 Modeling and Simulation Research Laboratory,
Universiti Tun Hussein Onn Malaysia, Parit Raja, Batu Pahat, 86400 Johor, Malaysia
Abstract— This paper focuses on the development of a robotic hand that imitates the movement of the human hand. The basic movement of the surgeon's hand is limited by the wrist, elbow and shoulder degrees of freedom during an operation. The artificial hand gripper system requires sensors for smooth and accurate movement; this allows large movements of the surgeon's hand to be corrected on a small scale, with a precise incision and without vibration. Although such systems are available on the market, the utilization of robotic hands for medical applications, particularly in Malaysia, is still very limited due to their high cost. Therefore, in this research we plan to develop a considerably cheaper, home-built robotic hand which can perform the task of a hand gripper as a first step. The initial objective of this research is to analyze and develop an artificial arm with a strength limit proportional to the weight. This is followed by the attachment of a wireless system to the prosthetic gripper via a Radio Frequency (RF) transceiver. The system is built around a PIC16F877 microcontroller as the core processor for the instrumentation, communication and control applications. A series of flex force sensors is fitted in a leather glove to obtain readings from the movement of the human fingers. The microcontroller then uses this information to control multiple servos that act as the mechanical hand inside the prosthetic gripper. Keywords— Artificial hand gripper, flex sensor, rehabilitation, medical robotics.
I. INTRODUCTION

The term "robot" comes from the Czech word "robota", which means obligatory work or servitude. It was first used in a Czech play called "R.U.R." (Rossum's Universal Robots) [1]. The first prosthetic hand was made for a Roman general who had his arm cut off and replaced with an iron hand in the 17th century [2]. In science, "grippers" are defined as subsystems of handling mechanisms which provide temporary contact with the object to be grasped. They ensure the position and orientation of the object when carrying it and joining it to the handling equipment. Prehension is achieved by force-producing and form-matching elements. The term "gripper" is also used in cases where no actual grasping takes place but the object is merely held, as in vacuum suction, where the retention force can act on a point, line or surface [1].
Since then, prosthetic arms have continued to evolve with a variety of concepts. One example is the "myoelectric" prosthesis, which uses electromyography signals, or potentials from voluntarily contracted muscles, to control the movements of the prosthetic arm. Here, the residual neuro-muscular system of the human body is used to control the functions of an electrically powered prosthetic hand, wrist or elbow [2]. Another type of prosthetic arm, called "Dermatos", has an appearance very close to that of the natural arm. In that research it was found that by moving muscles back and forward, the system can send signals to the brain, which turns them into electrical messages that control motors inside the prosthesis; the prosthesis has three motors that open and close the joints [3]. In another study, a new prosthetic hand which grips and functions almost like a natural hand is being tested at the Orthopedic University Hospital in Heidelberg. It can hold a credit card, use a keyboard with the index finger, and lift a bag weighing up to 20 kg. It is the world's first commercially available prosthetic hand that can move each finger separately and has an outstanding range of grip configurations. Touch Bionics is a leading developer of advanced upper-limb prosthetics (ULP). One of the two products now commercially available from this company, the "i-LIMB Hand", is a first-to-market prosthetic device with five individually powered digits [4]. This artificial limb looks and acts like a real human hand and represents a generational advance in bionics and patient care. The "i-LIMB Hand" is controlled by a unique, highly intuitive control system that uses a traditional two-input myoelectric (muscle signal) scheme to open and close the hand [5]. The German Aerospace Centre (DLR), in cooperation with the Harbin Institute of Technology (HIT), has already developed a robotic hand similar to a human hand with the aid of miniature actuators and high-performance bus technology.
Constructing a robotic hand with the capabilities and dexterity of a human hand requires at least four fingers: three fingers to allow the robotic hand to grip conical parts, and a thumb used as a support [5]. This latter concept is followed in this study for the development of an artificial hand gripper.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 785–788, 2011. www.springerlink.com
The aim of this research is to assist handicapped individuals by providing them with enhanced prosthetics that are economical and affordable. The artificial hand gripper is constructed so that each individually powered finger can be quickly removed by removing a single screw. Thus, with the developed prosthetics, fingers which require servicing can easily be swapped out, and patients can return to their everyday lives after a short visit to the clinic [6-8].
II. MATERIALS AND METHODS

In this section, we first present the operational flow chart of the proposed artificial hand gripper system (refer to Figure 1).
Fig. 2 The experiment performed on the flexible sensor, with the resistance output displayed via a digital multimeter: a) the sensor at 0º and b) the multimeter reading at 0º
Fig. 1 Block diagram showing the operation of the Artificial Hand Gripper
The flexible-bend sensor, or flexure sensor, is designed so that it can be bent easily. The sensor functions through a change in resistance that depends on the amount of bend introduced; in other words, the flex sensor is a component whose resistance changes according to how much it is bent. An unflexed sensor has a nominal resistance of about 10,000 Ω (10 kΩ). As the flex sensor is bent, the resistance gradually increases; when the sensor is bent at 90º, the resistance lies between 10 kΩ and 14 kΩ (refer to Figure 3). The flex sensor can be bent up to 360º depending upon the radius of the curve, which increases the resistance further. The operating temperature of this sensor is -35 ºC to +80 ºC, and it has a life cycle of more than 1 million bends.
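The resistance range quoted above can be normalized into a 0-1 bend fraction for downstream control logic. The sketch below is a hypothetical linear model taking 10 kΩ as unflexed and 14 kΩ (the top of the quoted 10-14 kΩ range) as a full 90º bend; the name and constants are illustrative.

```c
#include <math.h>

/* Sketch of turning the figures quoted above into a bend fraction:
 * roughly 10 kOhm unflexed and about 14 kOhm at a 90-degree bend (the
 * top of the quoted 10-14 kOhm range).  Purely a linear model for
 * illustration; real units need per-sensor calibration. */
double bend_fraction(double r_ohms)
{
    double f = (r_ohms - 10000.0) / (14000.0 - 10000.0);
    if (f < 0.0) f = 0.0;       /* clamp to the modeled range */
    if (f > 1.0) f = 1.0;
    return f;
}
```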
III. RESULT AND DISCUSSION

A. Flex Sensor

In this section, we start by demonstrating the testing of the flexible sensor for the artificial hand gripper system using a digital multimeter (refer to Figure 2). In addition, the experiment was also performed on the flexible sensor using an oscilloscope.

Fig. 3 Illustration of the flex sensor
B. Positional Servo Data

In this section, we demonstrate the operation of the servo motor, driven by the PIC16F877, for the proposed artificial hand gripper system (refer to Figure 4).

Fig. 4 Figures showing the proposed glove sense mechanism and servo motor connected to the PIC16F877

A servo is a mechanical motorized device that can be instructed to move its output shaft, attached to a servo wheel or arm, to a specified position. Inside the servo box is a Direct Current (DC) motor mechanically linked to a positional feedback potentiometer, a gearbox, electronic feedback control loop circuitry and motor drive circuitry.

Fig. 5 Picture showing the experiment setup for the servo motor connected with the PIC16F877 microcontroller

A Pulse Width Modulation (PWM) pulse of approximately 1.5 ms (1500 µs) corresponds to the "neutral" position of the servo. Neutral is defined as the point where the servomotor has exactly the same amount of potential rotation in the counter-clockwise direction as in the clockwise direction [10]. When the pulse sent to the servo is shorter than 1.5 ms, the servo moves a number of degrees counter-clockwise from the neutral point; when the pulse is longer than 1.5 ms, the servo moves a number of degrees clockwise from it. Generally, the minimum pulse is about 1.0 ms and the maximum pulse about 2.0 ms, with neutral (stop) at 1.5 ms. Servo speed was measured as the time (in seconds) it takes a one-inch servo arm to sweep left or right through a 60-degree arc at either 4.8 or 6.0 V.

C. Glove as Sensing Mechanism

In this section, we demonstrate the measurement results for the mechanical part of the hand gripper using the flex sensors (refer to Figure 6), followed by the graphs produced for the bending analysis of the flex sensors positioned on the leather glove [9].

Fig. 6 Picture showing the glove sense mechanism, fingers and the flex sensor positions
This project uses a leather glove with flex sensors placed along the four fingers. The flex sensors produce an output of 10 kΩ at 0º and 9.56 kΩ at 90º. The outputs of the bending experiment are shown in Figure 7. The analysis indicates a high level of linearity, with a dynamic response to changes in flex position. The outputs for the bending experiment were collected at 0º, 45º, 90º and 180º bends in two directions, left and right (refer to Figures 7a and 7b). From these observations, the output readings produced for both directions correlate with the bending level introduced to the four flex sensors.
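The glove-to-gripper mapping described above can be sketched end to end: each finger's flex reading, normalized to a 0-1 bend fraction, is converted to a servo pulse using the 1.0-2.0 ms convention (1.5 ms neutral) from the previous section. The open/closed pulse endpoints and function names are assumptions for illustration, not values from the paper.

```c
/* Sketch of the glove-to-gripper mapping: each finger's bend fraction
 * (0 = open, 1 = fully bent) drives one servo pulse between hypothetical
 * open (1000 us) and closed (2000 us) positions, 1500 us being neutral. */
#define NUM_FINGERS 4

unsigned bend_to_pulse_us(double bend_fraction)
{
    if (bend_fraction < 0.0) bend_fraction = 0.0;
    if (bend_fraction > 1.0) bend_fraction = 1.0;
    return (unsigned)(1000.0 + 1000.0 * bend_fraction + 0.5); /* rounded */
}

void glove_to_pulses(const double bend[NUM_FINGERS],
                     unsigned pulse_us[NUM_FINGERS])
{
    for (int i = 0; i < NUM_FINGERS; i++)
        pulse_us[i] = bend_to_pulse_us(bend[i]);
}
```

On the real hardware the four pulse widths would be regenerated every ~20 ms by the PIC's PWM/timer logic, one channel per finger servo.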
Fig. 7 Graphs showing the experimental data collected for the flexible sensors positioned in the sensing glove: a) bending right position and b) bending left position

IV. CONCLUSION

The robot was initially designed to assist handicapped individuals by providing them with an enhanced version of normal prosthetics that is economical and affordable. The gripper shows considerable promise and has even been applied in hands-on training, with Visual Basic interfacing software, among university students, with encouraging feedback. With support from industry and government agencies, the artificial hand gripper could be developed further and achieve its intended target. Finally, the preliminary findings of this study should not be understated.

REFERENCES
1. Chen P.L., Muller R.S., White R.M., "Thin-film ZnO-MOS transducers with virtually dc response," in Proc. IEEE Ultrasonics Symposium, Nov. 1980, pp. 945-948.
2. Brown M.K., "A controlled impedance gripper," AT&T Tech. J., vol. 64, no. 4, pp. 937-969, Apr. 1985.
3. Murphy T., Lyons D., Hendriks A., "Stable grasping with a multi-fingered robot hand: a behavior-based approach," IEEE/RSJ International Conference on Intelligent Robots and Systems, Yokohama, Japan, 1993.
4. The i-LIMB Hand, www.touchbionics.com [accessed: 24 Oct 2010; updated: 3 Jan 2011].
5. Yoshikawa T., Foundations of Robotics. Cambridge, MA: MIT Press, 1990.
6. Biagiotti L., Lotti G., Melchiorri C., Vassura G., "Design aspects for advanced robot hands," IEEE/RSJ International Conference, Lausanne, 2002.
7. ABB Robotics, "IRB6400 User and System Manual," ABB Robotics, Vasteras, 2002.
8. Huu Cong-Nguyen, Haeng-Bong Shin, Dong-Jun Park, Sung-Hyun Han, "Grasping control of a flexible hand with thirteen D.O.F.s," IEEE ICCAS-SICE 2009, 18-21 Aug. 2009, pp. 2097-2102.
9. Sok Jin-Hwan, An Tae-Hee, Kim Jun-Hong, Park In-Man, Han Sung-Hyun, "A flexible control of a robot hand with three fingers," International Conference on Control, Automation and Systems (ICCAS), 27-30 Oct. 2010, pp. 2094-2098.
10. Fischer T., Rapela D., Woern H., "Joint controller for the object-pose controlling on multifinger grippers," IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 1999, pp. 422-427.
Address of the corresponding author: Author: Abdul Malik B. Mohd Ali Institute: Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering Street: Parit Raj City: Batu Pahat Country: Johor, Malaysia Email: [email protected]
ACKNOWLEDGEMENTS

The authors would like to take this opportunity to express their heartfelt appreciation to their respectful supervisor Dr. M. Mahadi Abdul Jamil and co-supervisor Dr. M. Yusof Ismail for their supervision, encouragement, constructive ideas, patience, guidance and invaluable advice, enabling the authors to produce this paper.
Motor Control in Children with Developmental Coordination Disorder – Fitts' Paradigm Test

S.H. Chang1 and N.Y. Yu2

1 Department of Occupational Therapy, I-Shou University, Kaohsiung City, Taiwan
2 Department of Physical Therapy, I-Shou University, Kaohsiung City, Taiwan
Abstract— According to the American Psychological Association, developmental coordination disorder (DCD) occurs in at least 6% of school-aged children; in Taiwan, the prevalence is about 12%. Researchers generally agree that the perceptual-motor problems evident in DCD are, in part, the result of perceptual and cognitive processes, but the limited research available remains inconclusive. This study investigated fine motor skills in children with and without DCD. Twenty-four children aged seven years (± six months) with DCD, and twenty-four children without DCD, matched for age, body length, body weight and gender, were recruited into this study. Measures of task-directed movements were analyzed under two conditions (discrete and consecutive types) with three levels of complexity. Subjects were asked to move as accurately and as fast as possible. The results indicate that the DCD subjects moved more slowly in both the discrete and the continuous Fitts' task. In the force-control design, no significant difference was found between the two groups. In the test of two-fold force control in the middle trial, the average force in the later trials was found to differ significantly from that in the first and second ones. Keywords— Developmental coordination disorder, motor control, fine motor, open-loop control, closed-loop control.
I. INTRODUCTION

Fine motor control plays a key role in the daily life and academic learning of school children. For example, handwriting, craft and drawing require skilled fine motor function. These activities occupy most of the school day, and they build the foundations of future daily life skills and academic learning. According to previous studies, children with developmental coordination disorder (DCD) usually encounter difficulty in academic learning, especially in relation to fine motor control (Willoughby et al., 1995). Past research on DCD focused on diagnosis and assessment; more recent studies have attempted in-depth analyses of motor control in order to develop effective intervention programs. In a task-oriented study, Ameratunga et al. (2004) reported that, compared with children without DCD, children with DCD produced larger endpoint errors, greater
movement times and longer trajectories. Children in both groups produced larger endpoint errors, greater movement times and longer trajectories in non-visually guided aiming than in visually guided aiming tasks. They concluded that children with DCD moved more slowly, with longer movement trajectories, and were less accurate than children without DCD when aiming at all target positions under all sensory conditions. The greatest error and trajectory length occurred for both groups when aiming movements were performed in the absence of vision. As children in the DCD group had difficulties with movement executed under kinaesthetic or visual control, the results indicate that the normal advantage of vision displayed by children without DCD is not apparent, and that visual and kinaesthetic problems may be present in children with DCD. Smits-Engelsman et al. (2003) investigated the motor control of children with DCD and learning disability (LD) using kinematic movement analysis of fine-motor performance. Three hypotheses about the nature of the motor deficits observed in children with LD were tested: the general slowness hypothesis, the limited information capacity hypothesis, and the motor control mode hypothesis. Measures of drawing movements were analyzed under different task conditions using a Fitts' paradigm. In a reciprocal aiming task, the children drew straight-line segments between two targets 2.5 cm apart. Three target sizes were used (0.22, 0.44, and 0.88 cm). Children used an electronic pen that left no trace on the writing tablet. To manipulate the degree of open-loop movement control, the aiming task was performed under two different control regimes: discrete aiming and cyclic aiming. The kinematic analysis of the writing movements of the 32 children with DCD/LD who took part in the experimental study confirmed that besides learning disabilities they have a motor learning problem as well.
Overall, the two groups did not differ in response time, nor did they respond differently according to Fitts' law. Both groups displayed a conventional trade-off between target size and average movement time. However, while movement errors for children with DCD/LD were minimal on the discrete task, they made significantly more errors on the cyclic task. This, together with faster endpoint velocities, suggests a reduced ability to use a control strategy that emphasizes the
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 789–792, 2011. www.springerlink.com
terminal control of accuracy. Taken together, the results suggest that children with DCD/LD rely more on feedback during movement execution and have difficulty switching to a feedforward, or open-loop, strategy. In summary, Smits-Engelsman et al. provided preliminary evidence that deficits in the transition from closed- to open-loop (motor) control may explain the motor coordination difficulties of the DCD/LD comorbid group. As the articles reviewed above indicate, motor performance can remain successful even with loss of visual, tactile or proprioceptive information, but performance suffers, demonstrating the complexity of the constant interaction between the various sensory skills needed to both learn and perform even simple motor skills well (Schoemaker et al., 2001). The present study investigated perceptual-motor abilities, with regard to vision, kinesthesia and cross-modal judgment, in children with and without DCD. From the literature reviewed above, it appears that children with DCD may have visual or kinaesthetic difficulties that contribute to their motor incoordination. However, there is debate over the validity of the tests previously used, and there are inconsistencies in the results obtained when measuring these difficulties. Recent developments in the computer technology of digitizing tablets and wireless pens now permit the examination of a much richer set of handwriting outcome measures. With the aid of a digitizing tablet and an instrumented pen, a child's fine motor skill can be monitored in real time and stored in a format amenable to sophisticated kinematic and kinetic analyses (e.g., Smits-Engelsman & Van Galen, 1997; Smits-Engelsman, Van Galen, & Shoemaker, 1998; Sovik, Arntzen & Thygesen, 1987; Wann & Jones, 1986). To date, perceptual function, motor planning and motor control have received increasing attention in the field of fine motor study.
The objective of this study is to show how advanced digital technology enables a more in-depth study of the temporal and kinematic characteristics of movement control in children with or without DCD. The study used a digital tablet to investigate movement coordination and motor control in relation to fine motor skills. Through a series of experimental designs, the fine motor control of children with DCD was studied. The first component of the study comprises the experiments on accuracy and speed, designed to observe the Fitts' paradigm in children with DCD. The second component explores the force exerted on the writing surface in the different groups.
II. METHODS

A. Subjects and Apparatus

Twenty-four children with DCD and handwriting deficits, and their age-matched controls, attending normal primary schools were recruited into this study. The children came from two elementary schools in Kaohsiung County. The parents of each child were provided with information about the study and their consent was obtained. Children were excluded from this study if they reported a history of any medical, neurological or pervasive developmental disorder, intellectual disability, or oncological, musculoskeletal, sensory (hearing, vision) or skin disorders. The presence of DCD was determined by a score below the 15th percentile on the Movement Assessment Battery for Children (Movement ABC) (Henderson and Sugden, 1992); children ranking above the 15th percentile were placed in the non-DCD group. Kinematic and kinetic movement analyses were utilized to investigate the fine-motor performance of children with and without DCD. A digital tablet (Intuos 2, Wacom) with a 1024-level pressure transducer registered the touch forces (normal to the surface) generated by the pen tip.

B. Experimental Design

In the first part, a Fitts' paradigm was designed on the digital tablet to examine the fine motor skills of the children. Three levels of complexity were created using three sets of circles 2.2, 4.4 and 8.8 mm in diameter, with the distance between paired circles fixed at 25 mm (Figure 1). In addition to task complexity, there were two testing paradigms administered in a crossover procedure, as depicted in Table 1. One is the discrete type, with 20 separate tests at every task complexity; the other is the continuous type, with a series of consecutive movements at every task complexity. The first was used to examine the performance of closed-loop control, the second the performance of open-loop control. In the design of force control, two tests were scheduled. The first is the uniform force test: the children arbitrarily touched the digitizer five times and were asked to produce a uniform force output throughout the five trials. The second is the two-fold force in the middle trial: five trials were performed, with the third (middle) trial required to be two-fold the force of the others. If the pressure exceeded level 512, the children were told to try again.
Fig. 1 In every test sheet, 4 circles with different widths were depicted at a fixed diagonal distance of 25 mm. The diameters of the three circle sizes are (a) 8.8, (b) 4.4, and (c) 2.2 mm
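The two-fold-force test described above can be expressed as a simple validity check over the five recorded touch forces: any touch over pressure level 512 invalidates the attempt, and the middle trial should be roughly twice the mean of the other four. The sketch below is a hypothetical implementation; the 1.5-2.5 acceptance band is an assumed tolerance, not a criterion from the paper.

```c
/* Sketch of the two-fold-force check: five touches, where the middle
 * (third) trial should be about twice the mean of the other four, and
 * any touch over pressure level 512 invalidates the attempt.  The
 * acceptance band (1.5 to 2.5) is a hypothetical tolerance. */
int twofold_trial_ok(const double force[5])
{
    double others = 0.0;
    for (int i = 0; i < 5; i++) {
        if (force[i] > 512.0) return 0;     /* over the allowed level */
        if (i != 2) others += force[i];
    }
    others /= 4.0;                          /* mean of the four side trials */
    double ratio = force[2] / others;
    return ratio > 1.5 && ratio < 2.5;      /* roughly two-fold */
}
```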
Table 1 Experimental protocol, 24 cases were tested by a crossover design

Subjects        Test order              Task size order
1, 2, 13, 14    Consecutive, Discrete   0.88, 0.22, 0.44
3, 4, 15, 16    Consecutive, Discrete   0.44, 0.88, 0.22
5, 6, 17, 18    Consecutive, Discrete   0.22, 0.44, 0.88
7, 8, 19, 20    Discrete, Consecutive   0.88, 0.22, 0.44
9, 10, 21, 22   Discrete, Consecutive   0.44, 0.88, 0.22
11, 12, 23, 24  Discrete, Consecutive   0.22, 0.44, 0.88

Table 4 Mean and standard deviation of repetition in every trial

Task complexity   DCD (ms)         Non-DCD (ms)     p-value
Small (R1)        33.78 (15.20)    45.07 (21.37)    p=0.002
Median (R2)       35.13 (22.82)    56.57 (27.58)    p=0.03
Large (R3)        44.46 (20.54)    65.92 (25.64)    p=0.019
C. Statistical Analysis

The twenty-four children with DCD were compared with the age- and gender-matched control group of 24 children to examine the differences between groups and the mechanisms that adapt the pen-tip force on the digital tablet in a pen-tip touch task. Paired t tests with repeated measures were performed to compare DCD and non-DCD children on the speed and accuracy of the Fitts' paradigm and on force control on the digital tablet.
III. RESULTS
DCD: 1/R1 : 1/R2 : 1/R3 = 0.030 : 0.028 : 0.022 = 1 : 0.96 : 0.76. Non-DCD: 1/R1 : 1/R2 : 1/R3 = 0.022 : 0.018 : 0.015 = 1 : 0.80 : 0.68.
B. Tasks for Pressure Control

In uniform force control, no significant difference was found between the two groups. The standard deviation of the five force levels was used as the control index: the SD of the DCD group was 64.27 grams and that of the non-DCD group was 84.53 grams, with no significant difference between DCD and non-DCD subjects. In the two-fold force control in the middle trial, the two-fold trial in the DCD and non-DCD groups was 1.87 and 1.96 times the average of the other trials, respectively, with no significant difference between the groups. However, a significant between-group difference was found in the drift across trials: the difference between the first two and the last two trials was 0.58 grams in the non-DCD group but 87.63 grams in the DCD group.
A. Tasks for Fitts' Paradigm
The response times of DCD and non-DCD children in the discrete task are given in Table 2; the difference between the two groups was not statistically significant. The time expenditure per movement of DCD and non-DCD children in the discrete task is given in Table 3; significant differences were found in the small and median tasks but not in the large task. In the continuous tasks, as shown in Table 4, significant differences were found in all task sizes.

Table 2 Mean and standard deviation of response time in every movement

Task complexity   DCD (ms)        Non-DCD (ms)    p-value
Small             33.87 (15.20)   26.77 (12.22)   p=0.561
Median            33.37 (15.15)   30.06 (8.68)    p=0.481
Large             31.69 (14.63)   28.69 (12.62)   p=0.179
Table 3 Mean and standard deviation of time expenditure in one movement

Task complexity   DCD (ms)        Non-DCD (ms)    p-value
Small (T1)        65.50 (16.84)   53.16 (15.02)   p=0.047
Median (T2)       63.98 (18.26)   43.56 (13.63)   p=0.002
Large (T3)        46.42 (20.41)   37.86 (11.86)   p=0.183
DCD: T1 : T2 : T3 = 65.50 : 63.98 : 46.42 = 1 : 0.98 : 0.71. Non-DCD: T1 : T2 : T3 = 53.16 : 43.56 : 37.86 = 1 : 0.82 : 0.71.
IV. DISCUSSION
The results of this study indicate that the DCD subjects moved more slowly in both the discrete and the continuous Fitts' tasks, with the difference more apparent in the discrete movement tasks. In the two-fold force control task, the average force of the fourth and fifth trials differed significantly from that of the first and second trials, and this adjustment was harder for the DCD group than for the non-DCD group. The DCD group thus had more difficulty performing open-loop control tasks and showed poorer force control skills. To verify Fitts' law in the DCD subjects, the index of difficulty (ID) of this study was calculated as log10(2A/W), where A is the movement amplitude and W is the target width. With A = 25 mm and W = 2.2 mm, 4.4 mm, or 8.8 mm, the IDs of the three complexities are 1.36, 1.06 and 0.75, respectively; normalized by the maximal value, they become 1, 0.78 and 0.56. These theoretical IDs were then compared with the performance indices derived in the notes to Tables 3 and 4. The performance ratios of the non-DCD children were closer to the theoretical IDs than those of the DCD children: for example, 1 : 0.82 : 0.71 in non-DCD versus 1 : 0.98 : 0.71 in DCD children, where the theoretical IDs are 1 : 0.78 : 0.56.
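The ID computation described above can be reproduced directly. Matching the values reported in the text, a base-10 logarithm is used here (the classic Fitts formulation uses log2):

```python
import math

def fitts_id(amplitude_mm, width_mm):
    """Index of difficulty as computed in the text: log10(2A/W)."""
    return math.log10(2 * amplitude_mm / width_mm)

A = 25.0                      # movement amplitude (mm)
widths = [2.2, 4.4, 8.8]      # target widths (mm)
ids = [fitts_id(A, w) for w in widths]
normalized = [i / max(ids) for i in ids]
```

Rounded to two decimals this yields IDs of 1.36, 1.06 and 0.75, and normalized values of 1, 0.78 and 0.56, as stated above.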
IFMBE Proceedings Vol. 35
S.H. Chang and N.Y. Yu
The DCD group found it hard to transfer from closed-loop to open-loop control; they relied mostly on feedback information. This hypothesis was proposed by Wilson et al. (1998). A new experiment could be constructed to challenge it: if DCD subjects are given sufficient information, can they complete the task as quickly and accurately as typically developing children? Such an experiment should be designed so that feedback information is abundant. The DCD group has difficulty performing open-loop control tasks and has poor force control skills, so intervention can be given to DCD subjects for better motor planning and motor control. Both study groups exhibited disturbances of the basic coordination of forces in the initial phase of the movement, manifested by longer time latencies and higher force levels than the control group. All subjects were able to adapt the force output in response to the friction at the digit–object interface. Higher grip forces and safety margins were documented for the DCD group in comparison to the controls. Furthermore, there was greater variation in the parametric control of the grip force in the DCD group. The results suggest that the control of the grip force is similar in children with DCD, regardless of whether they have associated ADD or not, but it is impaired in comparison to that of controls. Open-loop or pre-programmed movements (or movement phases) require minimal attention. Schellekens et al. (1983) already noted that pre-programmed movements are less efficient in children with non-optimal neurological status. Our data do not suggest that children with DCD/LD use a different strategy for the planning and execution of discrete movements; perceptual guidance was also not less efficient, even to smaller targets and with distal movements.
However, if task requirements demand preprogramming or open-loop control, children without learning and motor problems are able to adapt their goal-directed movements to the task demands while children with DCD are not. They seem to lack a feedforward model that generates anticipatory force adjustments. It is difficult for DCD children to transfer from closed-loop to open-loop control. The parameterization of force and timing might be useful in the intervention of DCD subjects; in future development, intervention can be given to DCD subjects for better motor planning and motor control.
REFERENCES
1. Ameratunga, D., Johnston, L., Burns, Y. (2004) Goal-directed upper limb movements by children with and without DCD: a window into perceptuo-motor dysfunction? Physiotherapy Research International 9:1-12
2. Henderson, S. E., Sugden, D. A. (1992) Movement Assessment Battery for Children. The Psychological Corporation, Kent
3. Schoemaker, M. M., van der Wees, M., Flapper, B., Verheij-Jansen, N., Scholten-Jagers, S., Geuze, R. H. (2001) Perceptual skills of children with developmental coordination disorder. Human Movement Science 20:111-133
4. Smits-Engelsman, B. C. M., Niemeijer, A. S., Van Galen, G. P. (2001) Fine motor deficiencies in children diagnosed as DCD based on poor grapho-motor ability. Human Movement Science 20:161-182
5. Smits-Engelsman, B. C. M., Van Galen, G. P. (1997) Dysgraphia in children: Lasting psychomotor deficiency or transient developmental delay? Journal of Experimental Child Psychology 67:164-184
6. Smits-Engelsman, B. C. M., Wilson, P. H., Westenberg, Y., Duysens, J. (2003) Fine motor deficiencies in children with developmental coordination disorder and learning disabilities: An underlying open-loop control deficit. Human Movement Science 22:495-513
7. Sovik, N., Arntzen, O., Thygesen, R. (1987) Writing characteristics of "normal", dyslexic and dysgraphic children. Human Movement Studies 31:171-187
8. Van Galen, G. P., Portier, S. J., Smits-Engelsman, B. C. M., Schomaker, L. R. B. (1993) Neuromotor noise and poor handwriting in children. Acta Psychologica 82:161-178
9. Wann, J. P., Jones, J. G. (1986) Space-time invariance in handwriting. Human Movement Science 5:275-296
10. Wilson, P. H., McKenzie, B. E. (1998) Information processing deficits associated with developmental coordination disorder: a meta-analysis of research findings. Journal of Child Psychology and Psychiatry and Allied Disciplines 39:829-840
11. Willoughby, C., Polatajko, H. J. (1995) Motor problems in children with developmental coordination disorder: review of the literature. The American Journal of Occupational Therapy 49:787-794
12. Wing, A. M. (2000) Motor control: Mechanisms of motor equivalence in handwriting. Current Biology 10:R245-248
ACKNOWLEDGMENT The authors would like to thank the National Science Council of the Republic of China for supporting this work financially under contract nos. NSC-95-2221-E-214-008 and NSC-97-2221-E-214-054-MY2.
Author: Nan-Ying Yu
Institute: Department of Physical Therapy, I-Shou University
Street: No.8, Yida Rd., Jiaosu Village, Yanchao District
City: Kaohsiung
Country: Taiwan
Email: [email protected]
Quantitative Analysis of Conductive Fabric Sensor Used for a Method Evaluating Rehabilitation Status
B.W. Lee, C.K. Lee, Y.S. Ryu, D.H. Choi, and M.H. Lee*
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
Abstract— Recently, many people take a great interest in spending their leisure time in ways that are good for their health. However, they may injure joints such as the knee or ankle during overly heavy exercise, and such injuries require treatment and rehabilitation. Rehabilitation status can be estimated using a conductive fabric sensor, which offers both convenient measurement and a path toward wearable computing technology. This paper presents a quantitative analysis of a conductive fabric sensor used as a method for evaluating rehabilitation status. The strip-type conductive fabric sensor is compared with an Ag/AgCl electrode to evaluate its validity under knee-movement conditions. The subjects were 10 males (age: 26.6±2.591) with no history of knee-joint problems. The difference in bio-impedance between an Ag/AgCl electrode and the strip-type conductive fabric sensor averaged 7.067±13.987 Ω. As the p-value is under 0.0001 at the 99 % level of the t-distribution, the strip-type conductive fabric sensor is correlated with the Ag/AgCl electrode.
Keywords— Strip-type Conductive Fabric Sensor, Method Evaluating Rehabilitation Status, Wearable Computing Technology.

I. INTRODUCTION
Many people spend their leisure time on exercise such as riding a bicycle, climbing a mountain, or going to a health club for their health. Sometimes, however, they injure joints such as the knee or ankle during overly heavy exercise, and such injuries require treatment and rehabilitation. Currently, a goniometer, a special video camera, and EMG (electromyography) are mostly used to estimate rehabilitation status [1]. These existing methods for evaluating the rehabilitation status of the knee joint have disadvantages: methods using a special camera or EMG are more expensive than the alternatives and restrict where measurements can be made, while a goniometer has difficulty producing precise measurement values. There is also a method that measures bio-impedance to evaluate the rehabilitation status of the knee joint; it is relatively inexpensive and can make up for the weak points of the existing methods. In this bio-impedance measurement method, an electrical current flows through the muscles and blood vessels, which have low resistivity in the human body, and the bio-impedance is measured from the variation of the muscles and blood vessels using four Ag/AgCl electrodes [1-3]. However, the method using Ag/AgCl electrodes can only measure bio-impedance from a limited part of the body. This paper describes a quantitative analysis of a conductive fabric sensor used as a method for measuring bio-impedance from a comparatively wide part of the body.

II. MATERIALS AND METHODS
A. Strip-Type Conductive Fabric Sensor
The bio-impedance equation, from Nyboer (1970) and Swanson (1976), is as follows:
Z = ρL / A                                  (1)

where Z is the bio-impedance of the cylindrical limb, L is the distance between the electrodes, A is the sensing area, and ρ is the characteristic resistivity of the muscles and blood vessels. If the cross-sectional area of the muscles and blood vessels (ΔAm + ΔAb) increases, the volume variations of the increasing muscles (ΔVm) and blood vessels (ΔVb) are

ΔVm = L·ΔAm                                 (2)
ΔVb = L·ΔAb                                 (3)

Equations (2) and (3) show that the sensing area over the limb should be expanded in order to measure the volume variation of the increasing muscles and blood vessels.
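Equation (1) and the volume-variation relations can be written as a small numeric sketch; the numeric values below are purely illustrative, not measured values from the paper.

```python
def bio_impedance(rho, length, area):
    """Eq. (1): impedance of a cylindrical limb segment, Z = rho * L / A."""
    return rho * length / area

def volume_variation(length, delta_area):
    """Eqs. (2)-(3): volume change when the sensed cross-section grows by delta_area."""
    return length * delta_area

# illustrative numbers only (ohm*m, m, m^2)
Z = bio_impedance(rho=1.5, length=0.25, area=0.005)
dV = volume_variation(length=0.25, delta_area=0.0004)
```

For a fixed electrode spacing L, a larger monitored cross-section A lowers Z and makes the volume term L·ΔA easier to resolve, which is the motivation for widening the sensing area.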
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 793–796, 2011. www.springerlink.com
Fig. 1 Strip-type Conductive Fabric Sensor
Therefore, the strip-type conductive fabric sensor was made 25 cm by 1 cm so as to expand the sensing area over the limb, as shown in Fig. 1. In addition, the outer layer of the sensor is made of stretchable non-conductive fabric because muscle volume differs between subjects; the sensor can be adjusted over a range from 25 cm by 3 cm to 50 cm by 3 cm.
Fig. 2 Impedance Characteristics of Strip-type Conductive Fabric Sensor

The impedance of an Ag/AgCl electrode ranges from 170 Ω to 200 Ω over the 10 Hz – 10 kHz frequency band [4]. The impedance of the strip-type conductive fabric sensor, however, ranges from 79.012 Ω to 83.196 Ω over the same band, as shown in Fig. 2. A sensor with such relatively low impedance has the weak point of carrying more noise than an Ag/AgCl electrode.

B. Experimental Methods
The sensor is excited at 50 kHz and 100 ㎂, the optimal frequency and current of the MP150 EBI100C module (BIOPAC Systems). The upper segment of the subject's lower limb (from hip joint to knee joint) is divided into three parts and the lower segment (from knee joint to ankle joint) into four. An Ag/AgCl electrode is then attached at position 2 of 3 from the hip joint on the upper segment and at position 1 of 4 from the knee joint on the lower segment; these positions are based on the optimal positions for measuring bio-impedance on the lower limbs with Ag/AgCl electrodes [5-7]. With the Ag/AgCl electrodes attached to the lower limbs, flexion and extension of the knee joint are performed for 30 seconds. After the Ag/AgCl electrodes are removed, the strip-type conductive fabric sensors are attached at the same positions and knee flexion and extension are performed for another 30 seconds. The subjects were ten males (age: 26.6±2.591) with no history of knee-joint problems.

III. RESULTS
Fig. 3 shows one subject flexing and extending his knee joint for 30 seconds while wearing the strip-type conductive fabric sensors.

Fig. 3 Bio-impedance Variation of a Subject

The horizontal axis in Fig. 3 is time in seconds. The first of the four traces is the variation of bio-impedance, in Ω. The second is the tilt signal, in degrees, originating from the movement of the knee joint. The third is the trigger signal produced by a generator as a reference for the knee movement: the subject flexed his knee during the rising trigger and extended it during the falling trigger, and fifteen rising and falling triggers were generated over the 30 seconds. The fourth trace is the phase variation originating from the variation of bio-impedance. To compare the correlation between the Ag/AgCl electrode and the strip-type conductive fabric sensor, the variation range of the bio-impedance measured by each sensor was calculated by finding the maximum and minimum of the bio-impedance. Although the trigger signal had 15 periods, only 12 of them were used, because the generator produced the rising and falling triggers arbitrarily; the bio-impedance of the first, second, and fifteenth periods was therefore excluded. Fig. 4 plots the maximum and minimum of the bio-impedance measured with the strip-type conductive fabric sensor during the 12 periodic triggers, marked with red (maximum) and blue (minimum) circles. The Ag/AgCl electrode had maximum values from 10.899 Ω to 11.078 Ω and minimum values from 7.062 Ω to 7.501 Ω; its maximal bio-impedance variation range was 4.016 Ω.
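The per-period range extraction described above can be sketched as follows; the segmentation into trigger periods and the sample data are illustrative assumptions, not the recorded signals.

```python
def variation_range(signal, periods):
    """For each (start, end) trigger period, take the max and min of the
    signal, then return overall max minus overall min across all periods."""
    extremes = [(max(signal[s:e]), min(signal[s:e])) for s, e in periods]
    return max(top for top, _ in extremes) - min(bot for _, bot in extremes)

# illustrative bio-impedance samples (ohm) over three trigger periods
z = [7.1, 9.0, 11.0, 9.2, 7.3, 9.5, 10.8, 9.1, 7.0, 9.3, 11.1, 9.0]
rng = variation_range(z, [(0, 4), (4, 8), (8, 12)])
```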
Fig. 4 The Maximum and the Minimum of Bio-impedance using the Strip-type Conductive Fabric Sensor

The strip-type conductive fabric sensor had maximum values from 10.907 Ω to 12.274 Ω and minimum values from 7.568 Ω to 8.575 Ω; its maximal bio-impedance variation range was 4.706 Ω. As a result, while the Ag/AgCl electrode measured stably and consistently, the strip-type conductive fabric sensor was somewhat less stable but had a much larger bio-impedance range. The bio-impedance values measured by the two sensors on the ten subjects were compared by t-distribution using the SPSS software. The bio-impedance measured by the Ag/AgCl electrode was 7.934±9.004 Ω and that measured by the strip-type conductive fabric sensor was 15.001±12.336 Ω. Table 1 shows the results of the t-test between the two sensors.

Table 1 Results of t-distribution between both the Sensors

Sensor                                     Average      Standard Deviation   p-value
An Ag/AgCl electrode                       7.934 Ω      ±9.004 Ω             p<0.0001
The Strip-type Conductive Fabric Sensor    15.001 Ω     ±12.336 Ω

According to this t-test, the difference of the averages is 7.067±13.987 Ω. As the p-value is under 0.0001 at the 99 % level of the t-distribution, the strip-type conductive fabric sensor is correlated with the Ag/AgCl electrode.

IV. DISCUSSION
The purpose of this paper is to propose and evaluate the strip-type conductive fabric sensor as one method of evaluating rehabilitation status, such as knee-joint movement. Measuring bio-impedance with an Ag/AgCl electrode has the limitation of a low variation range in the measured signal, because it measures from a limited part of the body; in addition, the electrode is disposable. On the other hand, as shown in the experimental results, the strip-type conductive fabric sensor has a larger bio-impedance range than an Ag/AgCl electrode because it measures from a comparatively wide part of the body. Moreover, in the t-test between the two sensors, the p-value is under 0.0001 at the 99 % level. These results mean that the strip-type conductive fabric sensor is more sensitive than an Ag/AgCl electrode for measuring bio-impedance, and that the two sensors are correlated because of the small difference between them. The proposed conductive fabric sensor can also be used to measure the bio-impedance signal over long periods. Therefore, the strip-type conductive fabric sensor can be utilized, alongside Ag/AgCl electrodes, as one of the methods for evaluating rehabilitation status such as knee-joint movement.
V. CONCLUSIONS
This paper shows that it is possible to measure bio-impedance using the strip-type conductive fabric sensor. The proposed sensor not only has a larger bio-impedance range than an Ag/AgCl electrode but can also be used to measure bio-impedance over long periods. Furthermore, the proposed conductive fabric sensor could be applied to wearable computing technology if pants made with the strip-type conductive fabric sensor are used to estimate rehabilitation status in the near future.
REFERENCES 1. Takao Nakamura and Yoshitake Yamamoto (2001) Evaluation System of Physical Exercise Ability using Bio-electrical Impedance, International Symposium on Industrial Electronics Proc., Pusan, Korea, 2001, pp 2053-2058 2. Lee E. Baker (1989) Principles of the impedance technique. IEEE engineering in medicine and biology magazine 11-15
3. Deok-Won Kim (1989) Detection of physiological events by impedance. Yonsei Medical Journal 30
4. John G. Webster (1998) Medical instrumentation: application and design. John Wiley & Sons, Inc., New York
5. Chul Gyu Song et al. (2005) Optimum electrode configuration for detection of leg movement using bio-impedance. Physiol. Meas. 26:S59-68
6. Cornish, B. H. et al. (1999) Optimizing electrode sites for segmental bioimpedance measurements. Physiol. Meas. 20:241-250
7. Timo Vuorela (2003) Bioimpedance Measurement System for Smart Clothing, Proc. of the Seventh IEEE International Symposium on Wearable Computers, New York, USA, 2003, pp 98-107
Author: Byung Woo Lee Institute: Department of Electrical and Electronic Engineering, Yonsei University Street: Yonsei University 50 Yonsei-Ro, Seodaemun-Gu City: Seoul Country: Korea Email: [email protected] Author: Chungkeun Lee Institute: Department of Electrical and Electronic Engineering, Yonsei University Street: Yonsei University 50 Yonsei-Ro, Seodaemun-Gu City: Seoul Country: Korea Email: [email protected]
Author: Yeonsoo Ryu Institute: Department of Electrical and Electronic Engineering, Yonsei University Street: Yonsei University 50 Yonsei-Ro, Seodaemun-Gu City: Seoul Country: Korea Email: [email protected] Author: Donghag Choi Institute: Department of Electrical and Electronic Engineering, Yonsei University Street: Yonsei University 50 Yonsei-Ro, Seodaemun-Gu City: Seoul Country: Korea Email: [email protected] Author : Myoungho Lee* Institute: Department of Electrical and Electronic Engineering, Yonsei University Street: Yonsei University 50 Yonsei-Ro, Seodaemun-Gu City: Seoul Country: Korea Email: [email protected] Asterisk(*) indicates corresponding author
Study on Posture Homeostasis – One Hour Pilot Experiment
G.W. Lin1, T.C. Hsiao1,2, and C.W. Lin3
1 Institute of Biomedical Engineering, National Chiao Tung University, Hsinchu, Taiwan
2 Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
3 Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
Abstract— The goal of this study is to characterize posture homeostasis during prolonged sitting in healthy subjects, in order to inform the design of electric wheelchairs that help people who cannot functionally reposition themselves and relieve pressure under their bony prominences. Subjects sat on a powered wheelchair whose seat inclination can be adjusted. We used a pressure mat to measure the pressure under the hip and back during prolonged wheelchair sitting, then calculated the center of pressure (COP) to analyze the mode and range of posture change at different seat inclinations. The study included 10 healthy subjects. Across these 10 subjects, the COP movement at 100°, 110°, 120° and 130° was 11.5±6.5, 11.1±4.3, 11.4±5.8, and 12.8±5.5 cm in the sagittal plane, and 7.1±5.4, 4.2±5.5, 4.7±6.9, and 5.0±5.1 cm in the coronal plane. One-way ANOVA showed statistically significant differences (p < 0.05). When the subjects were separated by sex into two groups, the male motion extents in the sagittal and coronal planes were 11.7±0.7 and 6.5±1.8 cm, and the female extents were 10.2±1.9 and 3.9±1.6 cm; males moved to a significantly larger extent than females (p < 0.05). In addition, when the same subject sat at the four different angles of the experimental design, two to four of the angles showed similar features. The features of posture homeostasis in sitting may differ significantly between subjects.
Keywords— Posture, Homeostasis, Pressure mat, Inclination, Center of pressure.
I. INTRODUCTION
Pressure sores (PS) are a frequent and potentially fatal complication in permanent wheelchair users and people on long-term bed rest. They are caused by prolonged pressure on an area that lies over a bony prominence; the skin over the hips, tailbone, back, heels and elbows is a common site of PS development. Prolonged pressure over a bony prominence induces poor circulation and avascular necrosis of the skin, and can even induce ulcers and wounds, while increased humidity and temperature also raise the risk of PS [1]. Healthy people move or change their posture during excessive or prolonged sitting because they have normal sensory and motor nerve function: we feel pain and numbness and then move to relieve the pressure or shift temporarily to the other side. But physically disabled people, or patients after surgery, are often unable to do so. If caregivers do not change the patient's posture frequently to relieve pressure, and no automatic control system assists, as little as two or three hours can result in local tissue necrosis [2]. The costs of treating PS associated with SCI are substantial, exceeding 1.2 billion dollars annually in the United States alone [1]. In the clinical care process, nurses relieve pressure over bony prominences by frequently imposing wide-ranging posture changes on the patient to reduce the risk of PS [1]. Biomedical engineering researchers use the posture-change characteristics of healthy people as a design template for automatic control systems that adjust body posture through automatic chair adjustments and relieve pressure on a schedule to help the disabled [3-5]. However, the relationship between posture change and seat inclination during prolonged sitting has not been discussed in detail, and an efficient, systematic method to help participants relieve pressure is still limited in the research. Considering the biofeedback mechanisms by which healthy subjects maintain homeostasis, this study focuses on healthy subjects seated in a wheelchair at different inclinations, characterizes their sitting posture homeostasis, and examines whether the posture homeostasis pattern and the range of posture change differ between the sexes.

II. MATERIALS AND METHODS
A. System Description
Fig. 1 shows the setup: a 32 × 32-sensor pressure mat (FSA pressure mapping system, Vista Medical Co., Winnipeg, Canada) to record the pressure distribution, a tilt-adjustable electric wheelchair (Nita International Enterprise Co., Ltd., Taichung County, Taiwan) as the inclination controller, and a notebook (model: X31, BenQ International Co., Ltd., Taipei, Taiwan) running LabVIEW (Version 8.6, National Instruments Co., Austin, Texas) to collect the data.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 797–800, 2011. www.springerlink.com
Fig. 1 System Configuration

B. Data Analysis
For each experiment, we collect the data and calculate the maximum and minimum of the COP in the sagittal and coronal planes, then subtract one from the other to obtain the range of movement, and analyze the posture-change range in these two dimensions. In addition, a pressure-distribution plot is obtained every 0.2 second (Fig. 2) and separated into two parts, back and hip. For each part we calculate the coefficient of variation (CV, expressed as VCV):

VCV = (σ / μ) × 100%                        (1)

where σ is the standard deviation and μ is the mean, both computed over only the compressed (loaded) sensor cells, separately for the back and the hip, to determine the posture-changing range of each.

Fig. 2 Pressure Distribution plot

All VCV values in each experiment are calculated and drawn into a VCV – time graph (Fig. 3). Then, using Vthreshold, joint distance and times as the x, y and z axes, a 3D graph is drawn (Fig. 4), where Vthreshold represents a VCV value taken as a threshold: scanning over all Vthreshold values, whenever the data points exceed Vthreshold, joint distance means the continuous duration of those points and times means the number of such continuous occurrences.

Fig. 3 VCV – time graph

Fig. 4 Vthreshold, Joint distance and Times – 3D graph
This graph can be used to analyze the features of sitting posture homeostasis.
C. Experimental Process
Ten subjects with no history of spine-related disorders (5 male, 5 female), aged 23 ± 3 years, height 167 ± 9 cm, weight 59 ± 12 kg, sat on a powered wheelchair whose seat inclination can be adjusted. We used a pressure mat to measure the pressure under the hip and back during prolonged wheelchair sitting at an assigned inclination, and analyzed the pattern and range of posture change at the different seat inclinations. Each subject experienced four different inclinations (100°, 110°, 120° and 130°) on four separate days. Each 80-minute session included 10 minutes for adaptation, 60 minutes for data acquisition and 10 minutes before the end. The adaptation period removes the confound of the subject moving to get used to the chair just after sitting down, and the final 10 minutes prevents subjects from anticipating the end of the experiment and becoming restless. The four sessions for the same subject were run at the same hour on the four days. The 32 × 32-sensor pressure-mat data were then separated into two parts, back and hip, and the values of the compressed sensors were used to determine the posture-changing range and pattern of the back and hip.
IFMBE Proceedings Vol. 35
Study on Posture Homeostasis - One Hour Pilot Experiment
799
Table 1 Analyzed result of range of posture change

                  Range of sitting posture change (cm)
                  Sagittal plane                            Coronal plane
Gender (n)        100°       110°       120°       130°     100°      110°      120°      130°
Male (5)          15.4±7.0   10.0±5.1   12.4±5.5   14.7±4.6 8.2±6.1   4.7±7.1   5.3±7.1   7.9±5.9
Female (5)        7.7±3.1    12.1±3.8   10.3±6.5   10.7±6.0 5.9±4.9   3.7±4.2   4.0±5.3   2.1±1.0
Combined          11.5±6.5   11.1±4.3   11.4±5.8   12.8±5.5 7.1±5.4   4.2±5.5   4.7±5.9   5.0±5.1

*Males moved to a larger extent than females (p < 0.05)
III. RESULTS AND DISCUSSION
With the pressure mat and the analysis program, we found that each subject differed significantly from the others at the same inclination, and the frequency of posture change also differed between subjects. The statistical results show that across the ten subjects, at the four angles of 100°, 110°, 120° and 130°, the sagittal-plane range of movement was 11.5 ± 6.5, 11.1 ± 4.3, 11.4 ± 5.8 and 12.8 ± 5.5 cm (μ ± σ), while the coronal-plane range of movement was 7.1 ± 5.4, 4.2 ± 5.5, 4.7 ± 6.9 and 5.0 ± 5.1 cm. One-way analysis of variance (ANOVA) gave p-values below 0.05, a statistically significant difference, indicating that the range of posture change is affected by the inclination. When males and females were divided into two groups, the average ranges of movement in the sagittal and coronal planes were 11.7 ± 0.7 and 6.5 ± 1.8 cm for males, and 10.2 ± 1.9 and 3.9 ± 1.6 cm for females. The results in Table 1 show that during prolonged sitting, males moved over a larger range than females in both the sagittal and the coronal plane (p < 0.05). As for the features of posture homeostasis, we calculated VCV over the compressed sensors of the hip and back separately, then drew the 3D graph with Vthreshold, joint distance and times as the x, y and z axes, in order to analyze the pattern of posture change. For the same subject sitting at the four inclinations of 100°, 110°, 120° and 130°, two to four of the angles showed similar features: for example, the back or the hip was more active, or there was a tendency toward small, frequent moves, or toward occasional, large moves. The features of sitting posture homeostasis differed significantly between subjects, suggesting that these features may provide identifiable characteristics, similar to gait or even facial features.
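The one-way ANOVA reported above can be sketched with a pure-Python F statistic; the study presumably used standard statistical software, and the group values below are hypothetical placeholders.

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical COP movement ranges (cm) at two inclinations
F = anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```

The F value is then compared with the F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value.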
If a subject slides his or her sitting position during the experiment, this all-threshold-scan analysis can discover the event and make the differences observable.
IV. CONCLUSIONS
This study acquired pressure data with a 32 × 32-sensor pressure mat, then computed the COP and VCV over the compressed sensors to determine the posture-changing range and pattern of the back and hip, attempting to clarify the features of sitting posture homeostasis in healthy subjects. The experimental results showed that during prolonged sitting, males moved over a larger range than females in both the sagittal and the coronal plane (p < 0.05). For the same subject seated at the four different angles, two to four of the angles showed similar features, and the features of posture homeostasis in sitting differed, sometimes significantly, between subjects. Analyzing the COP in the two specific dimensions reveals the posture-changing range in the sagittal and coronal planes, but when a mixed-dimensional movement occurs, the two-dimensional analysis misses this information; using VCV over the compressed sensor values can detect movement in any direction on the pressure mat. If a subject slides his or her sitting position during the experiment, the all-threshold-scan analysis can discover the event and make the differences observable. Different analysis methods have their own applicability and limitations, so future analyses will try other rules and recruit more subjects and data in order to summarize the features of sitting posture homeostasis.
ACKNOWLEDGMENT We thank the National Science Council of Taiwan for financial support of this work (98-2221-E-009-148).
IFMBE Proceedings Vol. 35
G.W. Lin, T.C. Hsiao, and C.W. Lin
Author: T.C. Hsiao
Institute: 1. Institute of Biomedical Engineering, National Chiao Tung University, Hsinchu, Taiwan; 2. Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
Street: University Road
City: Hsinchu
Country: Taiwan, ROC
Email: [email protected]
The Development of Muscle Training System Using the Electromyogram and Interactive Game for Physical Rehabilitation K.S. Kim1, J.H. Kang1, Y.H. Lee1, C.S. Moon2, H.H. Choi1, and C.W. Mun1,3
1 Inje University/Biomedical Engineering, Obang-dong, Gimhae, Gyoungnam, 621-749, Republic of Korea
2 KMG Ltd, Saha-gu, Busan, Republic of Korea
3 Inje University/UHRC, Obang-dong, Gimhae, Gyoungnam, 621-749, Republic of Korea
Abstract— In this study, a muscle training system was developed for effective rehabilitation treatment. The system consists of an electromyogram (EMG) acquisition module, a graphic user interface (GUI) system, and game contents for each channel mode. The EMG acquisition module acquires EMG signals and transmits them to the GUI, which controls the game direction using the acquired EMG signals. The obtained data were analyzed using laboratory-developed software based on the Microsoft Foundation Class (MFC) of Visual Studio 2008. The games were developed for each channel mode to support various rehabilitation exercises, and all were implemented in Flash CS4 using ActionScript 2.0. The system was designed to overcome the boredom and monotony of rehabilitation; we propose a method that improves patients' willingness to participate and motivates them in rehabilitation. Keywords— Electromyogram, Interactive game, Rehabilitation.
I. INTRODUCTION Many studies on electromyogram (EMG) systems have been reported in several fields, such as assistance for the disabled, rehabilitation training, and device control [1-3]. Nevertheless, EMG devices have been applied only passively to rehabilitation treatment and have some limitations in terms of the patient's involvement. It is well known that the patient should actively participate in the treatment to overcome these limitations. An EMG system combined with a game would help patients actively participate in rehabilitation training, so the development of an interesting training system is required for effective rehabilitation [4,5]. This kind of interface is called an interactive game: a user interface in which a person interacts with digital information through the physical environment [6]. The EMG can generate the control commands needed by the game system from muscular movements, and it can therefore serve as an interactive game interface [7,8]. Recognition of exercise directions is required to control a device or game interface for rehabilitation. In a previous study, we proposed a reliable algorithm that recognizes the defined wrist-motion patterns without time delay [7,9]. Rehabilitation and physical therapy are essential for patients to recover from functional disabilities. Because rehabilitation requires strenuous and repeated movement, it can be helpful to add fun and enjoyment using game devices such as the Nintendo Wii and PlayStation [10-12]. Compared with motion recognition through an acceleration sensor, the EMG signal provides a variety of information about the muscles as well as motion recognition. Game contents are generally made for the public and may be somewhat difficult for patients to use; thus, the need for customized game contents for rehabilitation has been increasing. In this study, we propose a multi-channel EMG module combined with customized game contents for effective rehabilitation, which improves patients' willingness to participate and motivates them by overcoming the boredom and monotony of rehabilitation.
II. METHODS The proposed training system consists of the EMG acquisition module, the graphic user interface, and game contents for each channel mode. A. EMG Acquisition Module The EMG acquisition module was configured to acquire up to 4 channel signals simultaneously. Semi-permanent dry-type electrodes (KOREC, Korea), shown in Fig. 1, were used as EMG sensors [13]. Each dry sensor includes a differential amplifier, a band-pass filter (10-500 Hz) and a band-reject filter (60 Hz). A valid EMG signal was acquired by amplification and band filtering without separating the analog circuit from the dry sensor. The signal from the designed EMG module was transferred to a PC after signal processing; the sampling frequency of the 12-bit A/D converter was 1 kHz. Self-developed firmware runs on an ATMEGA128 (Atmel, Inc., USA), and the converted signal was transmitted to the software through a communication
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 801–804, 2011. www.springerlink.com
interface with the PC. The EMG acquisition module was designed in a portable size (8 cm × 6 cm) for comfortable rehabilitation.
Fig. 2 Process of experimental preparation
Fig. 1 Four-channel EMG acquisition module B. Signal Processing The acquired data were analyzed using laboratory-developed software based on the Microsoft Foundation Class (MFC) of Visual Studio 2008 (Microsoft, Inc., USA). The procedures control the keyboard for game operation using the acquired EMG signals, in two steps: initialization (step 1) and real-time processing (step 2). Step 1 selects the keys on the keyboard required for the games before the rehabilitation exercise, as shown in Fig. 2(a): after mode selection (1 Ch, 2 Ch, 4 Ch), a scaling function is set for each individual according to differences in the EMG signal, and the keys can be selected. In step 2, a real-time process operates the game depending on the movement of the muscles, as shown in Fig. 2(b). A reliable algorithm to classify muscle movements without time delay was already developed in the authors' previous studies [7,9]. If the time window size is set between 150-300 ms for feature extraction, real-time pattern recognition can be performed without difficulty [14]. In this study, the time window size was set to 150 ms.
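The real-time step described above (a 150 ms window at 1 kHz mapped to a key event) can be sketched as follows. The original software is MFC/C++; this is a hedged Python sketch, and the MAV feature, the simple thresholding rule, and all names are our assumptions rather than the paper's exact classifier (which is described in refs. [7,9]).

```python
import numpy as np

WINDOW_MS = 150   # time window from the paper (150 ms)
FS = 1000         # sampling frequency of the 12-bit A/D converter (Hz)

def mean_absolute_value(window):
    """MAV feature of one EMG window (a common EMG amplitude feature;
    the paper's exact feature set is not specified here)."""
    return float(np.mean(np.abs(window)))

def window_to_key(emg_window, threshold, key):
    """Map one 150 ms EMG window to a key event (or None).

    `threshold` stands in for the per-subject scaling of step 1;
    the thresholding rule itself is an assumption for illustration.
    """
    return key if mean_absolute_value(emg_window) >= threshold else None
```

In the running system, a window of `WINDOW_MS * FS / 1000` samples per channel would be evaluated continuously and the returned key injected as a keyboard event for the game.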
The number of channels was selected depending on the mode or type of exercise. The developed games consist of ball rolling, brick breaking, car racing and Tetris, as shown in Figs. 4-6.
C. Game Design
A. Application of One Channel Mode Game
All the games were implemented in Flash CS4 using ActionScript 2.0, and were developed for each channel mode as shown in Fig. 3. Because the users of the system are patients, the games were designed so that the difficulty and speed can be adjusted to suit each patient.
Muscle strength generated by the patient's movement triggers a keyboard event through the developed software. Continuous strength rolls the ball to the destination, and the current step is then cleared, as shown in Fig. 4.
Fig. 3 Flow chart of the game selection for rehabilitation
III. RESULT
and abduction/adduction of the wrist, etc., where rehabilitation requires four-directional muscle movement.
Fig. 4 Ball rolling – application of one channel mode The next step requires more strength to trigger the key event. The one channel mode can be applied to exercises such as flexor synergy of the upper limb, ankle flexion and grasp.
Fig. 6 Left: Car race, Right: Tetris – applications of 4 channel mode
IV. DISCUSSION AND CONCLUSIONS
B. Application of Two Channel Mode Game In the brick-breaking game shown in Fig. 5, speed and difficulty level are adjusted to the patient's condition, and the game starts by pressing the start button. Electrodes are attached to two muscle regions, and the tray is moved according to the signal difference between the two channels. Scores are raised by removing bricks, and the stage ends when all the bricks are removed. The two channel mode can be applied to exercises such as ankle plantar/dorsiflexion and right/left flexor synergy of the upper limb, where rehabilitation is required in both directions.
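The tray movement driven by the difference between the two channels can be sketched as below. The direction labels, the use of MAV amplitudes as channel values, and the deadband are our assumptions for illustration; the paper only states that the tray follows the inter-channel signal difference.

```python
def tray_direction(ch1_amp, ch2_amp, deadband=0.05):
    """Decide the tray movement from two channel amplitudes.

    A positive difference moves the tray one way, a negative
    difference the other; a small deadband (assumed value) keeps
    the tray still when both muscles are similarly active.
    """
    diff = ch1_amp - ch2_amp
    if diff > deadband:
        return "LEFT"
    if diff < -deadband:
        return "RIGHT"
    return "HOLD"
```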
Due to the different muscle activation areas for each muscle movement, system reliability depends closely on attaching the electrodes at the exact positions. We therefore investigated the muscle activation regions for several exercises. Each exercise can be assigned to a specific mode and electrode position as shown in Table 1 [7,15,16].

Table 1 Applications of possible exercises and electrode positions

One channel mode:
Flexor synergy of upper limb: Deltoid muscle
Extensor synergy of upper limb: Triceps
Ankle plantar-flexion: Lateral/medial gastrocnemius
Ankle dorsi-flexion: Tibialis anterior
Grasp: Extensor digitorum

Two channel mode:
Ankle plantar & dorsi flexion: 1. Lateral/medial gastrocnemius, 2. Tibialis anterior
Flexor synergy of upper limb, right & left: 1. Right deltoid muscle, 2. Left deltoid muscle
Shoulder shrug: 1. Right trapezius, 2. Left trapezius

Four channel mode:
Flexion/extension & abduction/adduction of wrist: 1. Extensor digitorum, 2. Flexor carpi ulnaris, 3. Flexor carpi radialis, 4. Abductor pollicis longus

Fig. 5 Break bricks – application of two channel mode

C. Application of Four Channel Mode Game The two games in Fig. 6 are suited to patients close to normal function because of their complexity, which requires information on four directions of muscle motion. The four channel mode can be applied to exercises such as flexion/extension
In this study, an EMG-driven game system for rehabilitation treatment was proposed. A further direction of this study is to optimize the system through clinical tests to verify its reliability and efficiency; the system should also be customized to each patient and each disease through such tests. The developed training system currently uses wired communication, but we will change it to a wireless system using Bluetooth, after which the system will be compatible with smartphones through modified application software. We developed a multi-channel EMG acquisition module and customized game contents for an active rehabilitation system, and found that the system can be used for several remedial exercises. The training system is expected to improve the patient's willingness and motivation in physical rehabilitation.
ACKNOWLEDGMENT This research was financially supported by the National Research Foundation of Korea (NRF) and the Ministry of Education, Science and Technology (MEST) through the Human Resource Training Project for Regional Innovation (No. 20100274).
REFERENCES 1. A.B. Ajiboye, R.F. Weir (2005) A heuristic fuzzy logic approach to EMG pattern recognition for prosthesis control. IEEE Trans-NSRE, vol. 13, pp. 280-291 2. D.S. Andreasen, S.K. Allen, D.A. Backus (2005) Exoskeleton with EMG based active assistance for rehabilitation. IEEE ICRR Proc, vol. 9, pp. 333-336 3. N.D. Panagiotacopulos, J.S. Lee, M.H. Pope, et al. (1998) Evaluation of EMG signals from rehabilitated patients with lower back pain using wavelets. J Electromyogr Kinesiol, vol. 8, pp. 269-278 4. T.S. Saponas, D.S. Tan, D. Morris, et al. (2009) Enabling always-available input with muscle-computer interfaces. UIST Proc, vol. 22, pp. 167-176 5. C.S. Moon (2008) A self-training system using bio-feedback game for muscle and electromyogram bio-feedback game method thereof. Korea Patent 10-0822483-0000
6. C.D.A. Herndon, M. Decambre, P.H. Mckenna (2001) Interactive computer games for treatment of pelvic floor dysfunction. J Urol, vol. 166, pp. 1893-1898 7. K.S. Kim, H.Y. Han, C.W. Mun, et al. (2010) Technical development of interactive game interface using multi-channel EMG signal. J Kor Game Soc, vol. 10, pp. 65-74 8. A. Soares, A. Andrade, E. Lamounier, et al. (2004) The development of a virtual myoelectric prosthesis controlled by an EMG pattern recognition system based on neural networks. J Intellig Inform Sys, vol. 21, pp. 127-141 9. K.S. Kim, H.H. Choi, C.W. Mun, et al. (2010) Comparison of k-nearest neighbor, quadratic discriminant and linear discriminant analysis in classification of electromyogram signals based on the wrist-motion directions. Curr Appl Phys, online available, doi: 10.1016/j.cap.2010.11.051 10. J.E. Deutsch, M. Borbely, J. Filer, et al. (2008) Use of a low-cost, commercially available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy. Phys Ther, vol. 88, pp. 1196-1207 11. J. Halton (2008) Virtual rehabilitation with video games: A new frontier for occupational therapy. Occupat Therap, vol. 9, pp. 1214 12. D. Rand, R. Kizony, P.L. Weiss (2008) The Sony PlayStation II EyeToy: low-cost virtual reality for use in rehabilitation. J Neurol Phys Ther, vol. 32, pp. 155-163 13. S.H. Park, J.K. Kim, S.W. Yuk, et al. (2008) The prototype of the 3 D.O.F myoelectric hand system for the upper limb amputee. IEEE Mechatro Autom, pp. 983-987 14. F.H.Y. Chan, Y.S. Yang, F.K. Lam, et al. (2000) Fuzzy EMG classification for prosthesis control. IEEE Trans Rehab Eng, vol. 8, pp. 305-311 15. J.H. Carr, R.B. Shepherd (2004) Stroke rehabilitation: guidelines for exercise and training to optimize motor skill. Elsevier Health Sciences 16. G.S. Kolt, L. Snyder-Mackler (2007) Physical therapies in sports and exercise. Elsevier Health Sciences

Corresponding author:
Author: Chi-woong Mun
Institute: Inje University
Street: Obang-dong 607
City: Gimhae
Country: Republic of Korea
Email: [email protected]
A Preliminary Study on Magnetic Fields Effects on Stem Cell Differentiation Azizi Miskon* and Jatendra Uslama Faculty of Electrical & Electronic Engineering, Universiti Tun Hussein Onn Malaysia, 86400 Parit Raja, Batu Pahat, Johor, Malaysia [email protected] * Corresponding author. Abstract— The objective of this paper is to provide a fundamental mathematical formula to predict the effect of magnetic fields (MF) on stem cell differentiation. Data were reviewed from journals on the effects of magnetic fields on stem cell differentiation. Each reviewed MF strength was assigned a differentiation value under these conditions: an MF strength that does not affect stem cell differentiation was given the value 0, an MF strength that results in stem cell death was given the value 1, and an MF strength that affects stem cell differentiation was given a value between 0.1 and 0.9. A graph was plotted from these data and the mathematical equation was designed from the graph. From this review, we suggest that the intensity of MF that can affect stem cell differentiation is between 600 µT and 9.4 T, that differentiation will not occur at intensities below 10 µT, and that intensities above 12 T will cause the death of stem cells. We also suggest that the lower limit of MF effects on stem cell differentiation lies between 10 µT and 600 µT, and the limit of MF strength that can lead to the death of stem cells lies between 9.4 T and 12 T. It can be concluded that the effect of MF exposure on stem cell differentiation depends not only on the MF intensity, but also on the period of exposure. Keywords— Differentiation, Ion Resonance Frequency, Magnetic Fields, Stem Cells, Tissue Engineering.
I. INTRODUCTION Stem cells are primitive cells, present in all organisms, with the ability to divide and give rise to more stem cells or to become more specialized cells of the human body, such as cells of the brain, heart, muscle and kidney [1]. There are two types of stem cell: the embryonic stem (ES) cell and the adult stem cell. ES cells are pluripotent; they are harvested from the inner cell mass of the blastocyst and possess the ability to differentiate into all of the specialized embryonic tissues [1], [2]. ES cells may also open the door to the rapidly progressing field of therapeutic cell transplantation [3]. Adult stem cells are multipotent, with the capacity to differentiate or transdifferentiate into cell types other than their tissue of origin [1]. Adult stem cells and progenitor cells can be found in adult tissue; both act as a repair system for the body, replenishing specialized cells and maintaining the normal regeneration of organs such as blood, skin or intestinal tissues. Magnetic fields (MF) are produced by moving electric charges and exist all around us, from the earth's MF to man-made MF sources. Numerous static and alternating MF arising from man-made sources have possible biological effects [4]. Many biological functions are modulated by extremely low frequency magnetic fields (ELF-MF) [5]-[7], although there is not enough evidence that ELF-MF could endanger human health [8]. ELF-MF are MF with frequencies in the range of 3 to 30 Hz. Even so, MF have been shown to affect proliferation and growth factor expression in cultured cells [9]-[11] and to interfere with the endorphinergic and cholinergic systems [12]-[14]. Other than MF, electric fields (EF) also have biological effects: they can influence neural growth and orientation in vitro [15] and have been applied for the treatment of spinal cord injuries in recent clinical trials [16]. The response of cells to an EF is essentially passive and determined by the physical properties of the cell; however, cells can also actively respond to an EF [17]. Electromagnetic fields (EMF) are produced when electric current flows through an electrical conductor such as a power line [18]. Like MF and EF, EMF have biological effects such as an altered rate of cell growth [5], [19], altered quantities of RNA transcripts and proteins [20], altered cell surface properties [21] and effects on development [22]. However, EMF-based technologies have not progressed to clinical translation, owing to scepticism caused by differences in experimental exposure protocols and in the static MF (SMF) variation applied in experiments [23]. The objective of this paper is to design a mathematical formula to predict the effect of magnetic fields on stem cell differentiation.
To the best of our knowledge, no standard range of magnetic fields that can affect the differentiation ability of stem cells has been provided. Therefore, we review the data on magnetic fields and their effect on stem cell differentiation reported by previous researchers. From these data, we design a mathematical equation to predict the range of magnetic fields that can affect the ability of stem cells to differentiate. This work provides an essential basis prior to any future in vitro experiment, in which we can predict the strength of MF used either to trigger the cells
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 805–810, 2011. www.springerlink.com
differentiation or vice versa. Therefore, we may avoid unnecessary failure during the experimental work.
occurs at 600 µT but differentiation is not observed at
II. METHOD Data from several papers on the effects of MF on stem cell differentiation, published in various journals from 2004 to 2010, were reviewed [23]-[29], as shown in Table 1. Although EF and EMF can also affect stem cell differentiation, this review focuses only on the effects of MF and EMF on stem cell differentiation. The data in Table 1 were revalued according to MF strength. The differentiation value of each MF strength was assigned under these conditions (Table 2): an MF strength that does not affect stem cell differentiation was given the value 0, and an MF strength that results in stem cell death was given the value 1. An MF strength that affects stem cell differentiation was given a value between 0.1 and 0.9, under the assumption that the smaller or higher the value, the lesser the MF effect on differentiation. The data were plotted as a graph using Microsoft Office Excel 2010, and the result of the project is presented in a Graphical User Interface (GUI) designed using Microsoft Visual Studio Ultimate 2010. Table 1 MF strengths used in the reviewed journals, classified into their corresponding groups: group 0 for MF strengths that do not affect stem cell differentiation, group 1 for MF strengths that affect stem cell differentiation, and group 2 for MF strengths that kill the stem cells
MF intensity (T)    Group
10 µ                0
600 µ               1
800 µ               1
1 m                 1
1.1 m               1
10 m                1
4.7                 1
9.4                 1
12                  2
16                  2
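The grouping of MF strengths described in the Method section can be sketched as follows. The group boundaries are taken from the results of this review (≤ 10 µT: no effect; 600 µT to 9.4 T: affects differentiation; ≥ 12 T: cell death); the function name and returning `None` for the undetermined transition ranges are our assumptions.

```python
def mf_group(intensity_tesla):
    """Classify an MF intensity (in tesla) into the review's groups.

    Group 0: does not affect stem cell differentiation (<= 10 uT).
    Group 1: affects stem cell differentiation (600 uT to 9.4 T).
    Group 2: kills the stem cells (>= 12 T).
    The gaps (10-600 uT and 9.4-12 T) are the undetermined limits
    the paper discusses; this sketch returns None for them.
    """
    if intensity_tesla <= 10e-6:
        return 0
    if 600e-6 <= intensity_tesla <= 9.4:
        return 1
    if intensity_tesla >= 12.0:
        return 2
    return None  # inside an undetermined transition range
```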
10 µT. This may indicate that the minimum strength of MF that influences stem cell differentiation at a frequency of 50 Hz lies somewhere between 10 µT and 600 µT. We also observed that the effect of MF on stem cell differentiation still occurs at an MF strength of 9.4 T. However, such a high-field magnet is not easily available, and such studies may not translate into clinical study because the current limit for magnetic field strength approved by the U.S. Food and Drug Administration (FDA) is 3 T [24]; this is why much of the research was carried out using magnetic fields of less than 3 T. Stem cell death occurs at an exposure of 12 T for more than 24 h. Therefore, the minimum MF exposure resulting in stem cell death lies between 9.4 T and 12 T. We revalued the data in Table 1 into Table 2, and plotted the resulting graph depicted in Fig. 1. The graph is reasonable, as the trend of the MF effect on stem cells closely resembles a growth model: the closer the MF strength is to the lower or upper asymptote, the less significant the observed differentiation. The smaller the MF intensity used, the lower the possibility that the MF affects stem cell differentiation; conversely, the higher the intensity, the more likely that differentiation will not occur, since the stem cells are more likely to die. From the logarithmic graph, we designed the mathematical function shown in equation (1), from which we can predict the MF strength able to influence the differentiation ability of stem cells. Referring to equation (1), the value of y is assumed to be y = 0 for x less than 10 µT and y = 1 for x greater than 12 T. The logarithmic function is inserted in the GUI code shown in Fig. 2 so that it can be used in any future in vitro experiment.
III. RESULT From Table 2, we plotted the graph shown in Fig. 1 using Microsoft Office Excel 2010, and observed that the effect of MF on stem cell differentiation still
Fig. 1 The graph of MF effects on stem cells differentiation
y = 0.0684 ln(x) + 0.7518        (1)
Fig. 2 GUI for predicting the MF effects on stem cells
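The paper's GUI embeds equation (1) together with the stated boundary assumptions (y = 0 below 10 µT, y = 1 above 12 T). The original code is Visual Studio/GUI code; a minimal Python sketch of the same prediction, with an assumed function name, is:

```python
import math

def predicted_differentiation(mf_tesla):
    """Equation (1), y = 0.0684 ln(x) + 0.7518, with the paper's
    boundary assumptions: y = 0 for x < 10 uT and y = 1 for x > 12 T.
    `mf_tesla` is the MF intensity x in tesla."""
    if mf_tesla < 10e-6:
        return 0.0
    if mf_tesla > 12.0:
        return 1.0
    return 0.0684 * math.log(mf_tesla) + 0.7518
```

For example, the formula gives roughly 0.24 at 600 µT and roughly 0.91 at 9.4 T, consistent with the 0.1 to 0.9 value range assigned to group 1 in the Method section.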
IV. DISCUSSION Cardiospheres (CSps) and cardiosphere-derived cells (CDCs) exposed to extremely low frequency magnetic fields (ELF-MF) (10 µT, 18 Hz) showed no change in the expression of cardiac and vascular markers (cTnI, Nkx 2.5, MHC, VEGF, KDR, and SMA) during the experiment, which may suggest that the stem cells did not differentiate at an MF of 10 µT [23]. Human mesenchymal stem cells (hMSC) are multipotential cells with a high replication capacity and the potential to differentiate into different lineages of mesenchymal tissue such as bone, cartilage, muscle, fat and marrow stroma [30]. Exposure of hMSC to 600 µT enables the MSC to differentiate into adipogenic cells, as indicated by enhanced expression of lipoprotein lipase and peroxisome proliferator-activated receptor gamma [24]; in addition, immediate exposure of MSC to the MF enhanced the cell differentiation. That research investigated the effect of MF on hMSC labelled with superparamagnetic particles of iron oxide (SPIO): SPIO was used to label the cells, and detection was done using Magnetic Resonance Imaging (MRI). The viability and differentiation potential of the cells are not affected by SPIO-labelling of MSC [31], [32], but labelling does have an impact on the iron metabolism, the migration capacity and the colony formation of MSC [32]-[34]. The principal function of MRI here is the exposure of cells to a high MF and magnetic force, which can direct iron-labelled stem cells in vitro and in vivo [35]-[37], guide the localization of iron-labelled stem cells to a desired region [35], [38], seed scaffolds with stem cells [36], [39], or engineer 3D tissues from stem cells [37]. Therefore, in vitro experiments on the effect of MF strength should be carried out in order to verify equation (1). Stem cells are also able to differentiate in a sinusoidal MF of 800 µT at a frequency of 50 Hz.
Real-time quantitative reverse transcriptase-polymerase chain reaction (RT-PCR) analysis revealed a remarkable increase in GATA-4 and Nkx-2.5 mRNA expression for both embryoid bodies (EBs) and puromycin-selected cardiomyocytes [25]. Hence, exposing GTR1 embryonic stem cells to this MF resulted in differentiation into cardiac-specific cells, without the aid of gene transfer technology [25]. GATA-4 and Nkx-2.5 mRNA encode, respectively, a zinc-finger-containing transcription factor and a homeodomain, both of which have been shown to be essential for cardiogenesis in different animal species [40], [41] and in humans [42]. The effects of MF were also examined on P19 embryonal carcinoma cells (P19 cells) [26]. Differentiation was detected by exposing the P19 cells to an MF of 1 mT at a frequency of 50 Hz, although the result was not very significant; upon exposure to a more intense MF of 10 mT, P19 cells differentiated into neuronal cells [26]. The effects of ELF-MF after neuronal differentiation were evaluated by morphological analysis, immunochemical analysis (MAP2 and GFAP), and developmental neuronal network activities recorded by micro-electrode arrays (MEAs). The percentage of MAP2-positive cells and the spike frequencies increased, while the percentage of GFAP-positive cells decreased. These results suggest that exposure to 10 mT ELF-MF affects the characteristics of neuronal differentiation and functional neuronal properties [26], and may also suggest that the effect of MF on stem cell differentiation becomes less significant at lower MF intensities, as verified using our equation in Fig. 1. Stem cells are also able to differentiate at an MF intensity of 1.1 mT, as demonstrated in previous work using bone marrow stem cells (BMSC) [27]: the BMSC differentiated towards osteogenesis, with an increase of intracellular Ca2+ after MF stimulation.
From this result, they postulated that the elevated Ca2+ is possibly the underlying biochemical mechanism responsible for the induction of terminal differentiation [27]. As mentioned above, most of the experiments investigating the effect of MF on the differentiation ability of stem cells were performed with intensities of less than 3 T. Even so, there are also experiments with MF intensities above 3 T: intensities between 4.7 T and 9.4 T were found to affect stem cell differentiation [28], so our range for cell differentiation was limited at 9.4 T. The effects of microgravity (MG) modelled by a large gradient high magnetic field (LGHMF) with intensities of 12 T and 16 T on hMSC led to cell death after 24 hours of exposure. Almost all the cells died after 48 hours, but within the first 6 hours of osteogenic induction the exposure resulted in suppression
of the osteogenesis of hMSC [29]. We therefore assume that 12 T and 16 T can still affect cell differentiation over short exposure periods; this response may be due to a synergistic effect of the high magnetic force and the modelled microgravity (MG) present in those experimental systems [43]-[46]. From the data reviewed, we suggest that the MF intensity threshold leading to stem cell death lies between 9.4 T and 12 T. As stated in the results section, the reviewed MF intensities fall into three groups according to their effects on stem cell differentiation: fields of 10 µT and below produced no differentiation effect; fields between 600 µT and 9.4 T affected differentiation; and fields of 12 T and above caused stem cell death. Within the 600 µT to 9.4 T range, the effect on differentiation varies with the field strength itself; for example, a 10 mT field appears to affect differentiation more strongly than either 600 µT or 9.4 T. Using mathematical equation (1), which was derived from the analysis of the reviewed data, we can predict not only whether a given MF strength is able to affect stem cell differentiation but also how strong that effect is likely to be. Stem cells have been used in several pre-clinical models of disease [47]-[49] and are currently applied in phase I-III clinical trials [50]-[53]. For example, neural stem cells (NSCs) can differentiate into neuronal or glial cells and also express trophic factors, which may be used to rescue dysfunctional brain tissue [54]-[58].
Stem cells are also used in tissue transplantation. The optimal cell type for transplantation should: (a) integrate spontaneously with the target tissue without inducing an immune reaction; (b) differentiate into the specific committed cell type; (c) be capable of developing gap junctions with host cells; and (d) show some degree of resistance to ischaemia, so as to avoid the massive apoptosis currently observed during cell transfer [23], [59]. The present findings on MF effects may open a new prospect: using MF to direct stem cell differentiation towards a specific cellular phenotype without the aid of gene transfer technologies [25]. On the other hand, there is increasing public concern over possible health risks associated with ELF-MF [8], [18], [60]; fundamental studies are therefore necessary to establish a safe method of differentiating stem cells into specific cell types through MF exposure. In conclusion, we suggest that MF intensities between 600 µT and 9.4 T can affect stem cell differentiation, that no differentiation effect occurs below 10 µT, and that intensities of 12 T and above cause stem cell death.
The lower threshold for MF effects on stem cell differentiation therefore lies between 10 µT and 600 µT, and the threshold for MF-induced stem cell death lies between 9.4 T and 12 T. We conclude that the outcome of MF exposure on stem cell differentiation depends not only on the MF intensity but also on the period of exposure.
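The intensity bands summarised above can be expressed as a simple lookup. This is an illustrative sketch only: the band edges (10 µT, 600 µT, 9.4 T, 12 T) are taken from the review text, but the function is not the authors' equation (1), and the "uncertain" and "possible cell death" labels for the threshold regions are our own shorthand.

```python
# Hypothetical classifier for the effect bands suggested by this review.
# Input is magnetic flux density in tesla; band edges come from the text.
def classify_mf_effect(b_tesla: float) -> str:
    if b_tesla < 10e-6:
        return "no effect"            # below 10 uT: differentiation unaffected
    if b_tesla < 600e-6:
        return "uncertain"            # 10 uT - 600 uT: threshold region
    if b_tesla <= 9.4:
        return "differentiation"      # 600 uT - 9.4 T: differentiation affected
    if b_tesla < 12.0:
        return "possible cell death"  # 9.4 T - 12 T: boundary region
    return "cell death"               # 12 T and above: stem cell death reported
```

For instance, a 10 mT exposure falls in the differentiation band, while 16 T falls in the cell-death band, matching the cases discussed in the text.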
REFERENCES [1] R. Passier, C. Mummery,“Origin and use of embryonic and adult stem cells in differentiation and tissue repair,”Cardiovascular Research 2003, vol. 58, pp. 324-335. [2] C.M. Metallo, S.M. Azarin, Lin ji, et al.,“Engineering tissue from human embryonic stem cells,” J. Cell Mol Med 2008, vol. 12(3), pp. 709-729. [3] S.G. Nir, R. David, M. Zaruba, et al.,“Human embryonic stem cells for cardiovascular repair,” Cardiovascular Research 2003, vol. 58 (2), pp. 313-323. [4] W.T. Kaune,“Introduction to power-frequency electric and magnetic fields,” Environmental Health Perspectives Supplements 1993, vol. 101, pp. 73-81. [5] A.R. Liboff, T. Williams Jr., D.M. Strong,et al., “Time-varying magnetic fields: effects on DNA synthesis,”Science, 1984, vol. 223, pp. 818-820. [6] J. Jajte, M. Zmyslony, J. Palus, et al., “Protective effects of melatonin against in vitro iron ions and 7 mT, 50 Hz, magnetic fieldsinduced DNA damage in rat lymphocytes,”Mutat Res 2001, vol. 483, pp. 57-64. [7] S. Falone., A. Mirabilio, M.C. Carbone, et al., “Chronic exposure to 50Hz magnetic fields causes a significant weakening of antioxidant defence systems in aged rat brain,”Int J Biochem Cell Biol 2008, vol. 12, pp. 2762-2770. [8] D.W. Zipse, “Health effects of extremely low frequency (50 and 60 Hertz) electric and magnetic fields,” IEEE 1991, vol. PCIC-91-09. [9] A. Cossarizza, D. Monti, F. Bersani, et al. “Extremely low frequency pulsed electromagnetic fields increase cell proliferation in lymphocytes from young and aged subjects,”Biochem. Biophys. Res. Commun. 1989, vol. 160, pp. 692-698. [10] R. Cadossi, F. Bersani, A. Cossarizza, et al. “Lymphocytes and lowfrequency electromagnetic fields,” FASEB J. 1992, vol. 6, pp. 2667– 2674. [11] A. Cossarizza, , S. Angioni, , F. Petraglia, , et al. “Exposure to lowfrequency pulsed electromagnetic fields increases interleukin-1 and interleukin-6 production by human peripheral blood mononuclear cells,” Exp. Cell Res. 1993, vol. 204, pp. 385–387. 
[12] A.W. Thomas, M. Kavaliers, F.S. Prato, et al. “Pulsed magnetic field induced “analgesia” in the land snail, Cepaeanemoralis, and the effects of mu, delta, and kappa opioid receptor agonists/antagonists” Peptides 1997, vol. 18, pp. 703–709. [13] V.V. Vorobyov, E.A. Sosunov, N.I. Kukushkin, et al.“Weak combined magnetic field affects basic and morphine-induced rat’s EEG,” Brain Res. 1998, vol. 781, pp. 182–187. [14] H. Lai, M.A. Carino, A. Horita, et al.“Effects of a 60 Hz magnetic field on central cholinergic systems of the rat,”Bioelectromagnetics1993, vol. 14, pp. 5–15. [15] N.B. Patel, M.M. Poo, “Perturbation of the direction of neurite growth by pulsed and focal electric fields,” J Neurosci, 1984, vol. 4, pp. 2939-47. [16] L.D. Duffell, “Long-term intensive electrically stimulated cycling by spinal cord-injured people: Effect on muscle properties and their relation to power output,” Muscle Nerve 2008, vol. 38, pp. 1304-11.
IFMBE Proceedings Vol. 35
A Preliminary Study on Magnetic Fields Effects on Stem Cell Differentiation
[17] H.M. Gerard, "The use of electric fields in tissue engineering," Organogenesis 2004, vol. 4, pp. 11-17. [18] R. Goodman, Y. Chizmadzhew, A. Shirley-Henderson, "Electromagnetic fields and cells," Journal of Cellular Biochemistry 1993, vol. 51, pp. 436-441. [19] K. Takahashi, I. Kaneko, M. Date, et al., "Effect of pulsing electromagnetic fields on DNA synthesis in mammalian cells in culture," Experientia 1986, vol. 42, pp. 185-186. [20] E. Czerska, H. Al-Baranzi, J. Casamento, et al., "Comparison of the effect of ELF fields on c-myc oncogene expression in normal and transformed human cells," in Transactions of the Bioelectromagnetics Society, Thirteenth Annual Meeting 1991, Salt Lake City, UT, B-2-14. [21] M.T. Marron, E.M. Goodman, P.T. Sharpe, et al., "Low frequency electric and magnetic fields have different effects on the cell surface," FEBS Lett 1988, vol. 230, pp. 13-16. [22] M.R. Delgado, J. Leal, J.L. Monteagudo, et al., "Embryological changes induced by weak, extremely low frequency electromagnetic fields," J Anat 1982, vol. 134, pp. 533-551. [23] G. Roberto, L. Mario, B. Lucio, et al., "Differentiation of human adult cardiac stem cells exposed to extremely low-frequency electromagnetic fields," Cardiovascular Research 2009, vol. 82, pp. 411-420. [24] S. Richard, B. Rudiger, K. Rainer, et al., "Functional investigation on human mesenchymal stem cells exposed to magnetic fields and labeled with clinically approved iron nanoparticles," BMC Cell Biology 2010, vol. 11, pp. 22. [25] C. Ventura, M. Maioli, Y. Asara, et al., "Turning on stem cell cardiogenesis with extremely low frequency magnetic fields," FASEB J. 2004, vol. 19, pp. 155-157. [26] S. Atsushi, T. Yuzo, M. Hiroyuki, et al., "Developmental effects of low frequency magnetic fields on P19-derived neuronal cells," IEEE EMBS 31st Annual International Conference, September 2009. [27] W. Zhao, W. Ma, H.
Wu.,“The effects of magnetic fields on the differentiation and intracellular free calcium of bone marrow mesenchymal stem cells,” IEEE WAC 2008. [28] S. Magnisky, R.M. Walton, J.H. Wolfe, et al., “Magnetic resonance imaging detects differences in migration between primary and immortalized neural stem cells,” AcadRadiol 2008, vol. 15(10), pp. 1269-1281. [29] D. Shi, R. Meng, W. Deng, et al., “Effects of Microgravity Modeled by Large Gradient High Magnetic Field on the Osteogenic Initiation of Human Mesenchymal Stem Cells,” Stem Cell Rev and Rep 2010. [30] M.F. Pittenger, A.M. Beck S.C. Mackay, et al.,“Multilineage potential of adult human mesenchymal stem cells,” Science 1999, vol. 284, pp. 143–147. [31] C. Bos, Y. Delmas, A. Desmouliere, et al., “In vivo MR imaging of intravascularly injected magnetically labeledmesenchymal stem cells in rat kidney and liver,” Radiology 2004, vol. 233, pp. 781-789. [32] R. Schäfer, R. Kehlbach, M. Muller, et al.,“Labeling of human mesenchymal stromal cells with superparamagnetic iron oxide leads to adecrease in migration capacity and colony formation ability,”Cytotherapy2009, vol. 11, pp. 68-78. [33] E. Pawelczyk, A.S. Arbab, S. Pandit, et al.,“Expression of transferrin receptor and ferritin following ferumoxides-protamine sulfatelabeling of cells: implications for cellular magnetic resonance imaging,” NMR Biomed 2006, vol. 19, pp. 581-592. [34] R. Schäfer, R. Kehlbach, J. Wiskirchen, et al., “Transferrin Receptor Upregulation: In Vitro Labeling of Rat Mesenchymal Stem Cells with Superparamagnetic Iron Oxide,” Radiology 2007, vol. 244, pp. 514-523. [35] S.V. Pislaru, A. Harbuzariu, G. Agarwal, et al.,“Magnetic forces enable rapid endothelialization of synthetic vascular grafts,” Circulation 2006, vol. 114, pp. I314-I318.
[36] K. Shimizu, A. Ito, H. Honda,“Mag-seeding of rat bone marrow stromal cells into porous hydroxyapatite scaffolds for bone tissue engineering,” J BiosciBioeng 2007, vol. 104, pp. 171-177 [37] K. Shimizu, A. Ito, T. Yoshida, et al.,“Bone tissue engineering with human mesenchymal stem cell sheets constructed using magnetite nanoparticles and magnetic force,” J Biomed Mater Res B ApplBiomater 2007, vol. 82, pp. 471-480. [38] S.V. Pislaru, A. Harbuzariu, R. Gulati, et al.,“Magnetically targeted endothelial cell localization in stented vessels,” J Am CollCardiol 2006, vol. 48, pp. 1839-1845. [39] D. Robert, D. Fayol, C.L. Visage, et al.,“Magnetic micromanipulations to probe the local physical properties of porous scaffolds and to confine stem cells,” Biomaterials 2010, vol. 21, pp. 1586- 1595. [40] C. Biben, R.P. Harvey,“Homeodomain factor Nkx-2.5 controls left/right asymmetric expression of bHLH gene eHand during heart development,” Genes Dev. 1997, vol. 11, pp. 1357–1369. [41] T.J. Lints, L.M. Parsons, L. Hartley, et al., “Nkx- 2.5: a novel murine homeobox gene expressed in early heart progenitor cells and their myogenic descendants,” Development 1993, vol. 119, pp. 419–431. [42] D.W. Benson, G.M. Silberbach, A. Kavanaugh- McHugh, et al.,“Mutations in the cardiac transcription factor Nkx-2.5 affect diverse cardiac developmental pathways,” J. Clin. Invest. 1999, vol. 104, pp. 1567–1573 [43] Q. Airong, Z. Wei, W. Yuanyuan, et al.,“Gravitational environment produced by superconducting magnet affects osteoblast morphology and functions,”ActaAstronautica, 2008, vol. 63, pp. 929–946. [44] P.W. Neurath,“High gradient magnetic field inhibits embryonic development of frogs,” Nature 1968, vol. 219, pp. 1358–1359. [45] Q. Airong, D. Shengmeng, G. Xiang, et al.,“cDNA microarray reveals the alterations of cytoskeleton-related genes in osteoblast under high magneto-gravitational environment,”ActaBiochimica et BiophysicaSinica (Shanghai)2009, vol. 41, pp. 561–577. [46] Q. 
Airong, H. Lifang, G. Xiang, et al.,“Large gradient high magnetic field affects the association of MACF1 with actin and microtubule cytoskeleton,” Bioelectromagnetics 2009, vol. 30, pp. 545–555. [47] P. Hematti, P. Obrtlikova, D.S. Kaufman, “Nonhuman primate embryonic stem cells as a preclinical model for hematopoietic and vascular repair,” Experimental Hematology. 2005, vol. 33(9), pp. 980–6. [48] G. Martino, S. Pluchino,“The therapeutic potential of neural stem cells,” Nature Reviews Neuroscience 2006, vol. 7(5), pp. 395–406. [49] R.L. Zhang, Z.G. Zhang, M. Chopp,“Neurogenesis in the adult ischemic brain: generation, migration, survival, and restorative therapy,” Neuroscientist 2005, vol. 11(5), pp. 408–16. [50] S.S. Kuthiala, G.H. Lyman, O.F. Ballester, “Randomized clinical trials for hematopoietic stem cell transplantation: lessons to be learned from the European experience,” Bone Marrow Transplantation 2006, vol. 37(2), pp. 219–21. [51] R.A. Nash, J.D. Bowen, P.A. McSweeney, et al., “High-dose immunosuppressive therapy and autologous peripheral blood stem cell transplantation for severe multiple sclerosis,” Blood 2003, vol. 102(7), pp. 2364–72. [52] H. Okada, I.F. Pollack, F. Lieberman, et al., “Gene therapy of malignant gliomas: a pilot study of vaccination with irradiated autologous glioma and dendritic cells admixed with IL-4 transduced fibroblasts to elicit an immune response,” Human Gene Therapy 2001, vol. 12(5), pp. 575–95. [53] C. Hesdorffer, J. Ayello, M. Ward, et al., “Phase I trial of retroviralmediated transfer of the human MDR1 gene as marrow chemoprotection in patients undergoing high-dose chemotherapy and autologous stem-cell transplantation,” Journal of Clinical Oncology. 1998, vol. 16(1), pp. 165–72.
A. Miskon and J. Uslama
[54] E.Y. Snyder, D.L. Deitcher, C. Walsh, et al., “Multipotent neural cell lines can engraft and participate in development of mouse cerebellum,” Cell. 1992, vol. 68(1), pp. 33–51. [55] E.Y. Snyder, R.M. Taylor, J.H. Wolfe, “Neural progenitor cell engraftment corrects lysosomal storage throughout the MPS VII mouse brain,” Nature 1995, vol. 374(6520), pp. 367–70. [56] J.D. Flax, S. Aurora, C. Yang, et al.,“Engraftable human neural stem cells respond to developmental cues, replace neurons, and express foreign genes,” Nature Biotechnology. 1998, vol. 16(11), pp. 1033–9.
[57] R.A. Fricker, M.K. Carpenter, C. Winkler, et al., “Site-specific migration and neuronal differentiation of human neural progenitor cells after transplantation in the adult rat brain,” Journal of Neuroscience 1999, vol. 19(14), pp. 5990–6005. [58] J. Ourednik, V. Ourednik, W.P. Lynch, et al., “Neural stem cells display an inherent mechanism for rescuing dysfunctional neurons,” Nature Biotechnology. 2002, vol. 20(11), pp. 1103–10. [59] H.J. Cho, N. Lee, J.Y. Lee, et al., “Role of host tissues for sustained humoral effects after endothelial progenitor cell transplantation into the ischemic heart,” J Exp Med 2007, vol. 204, pp. 3257-3269. [60] V. Hartwig, G. Giovannetti, N. Vanello, et al., “Biological Effects and Safety in Magnetic Resonance Imaging: A Review,” Int. J. Environ. Res. Public Health 2009, vol. 6, pp. 1778-1798.
A Preliminary Study on Possibility of Improving Animal Cell Growth M.Y. Jang1 and C.W. Mun1,2 1
Inje University/Biomedical Engineering, GimHae-Si, Republic of Korea 2 Inje University/UHRC, GimHae-Si, Republic of Korea
Abstract— The purpose of this study is to investigate the influence of an organism activation solution (OA solution) on animal cell growth according to cell morphology (fibroblast and epithelial) and origin. We investigated the responses of the MG-63 and MCF-7 cell lines to media mixed with various amounts of OA solution. DMEM-HG was used as the basic culture medium. The mixture concentration ranged from 0% to 30%, with the DMEM-HG volume fixed to minimise its effect on the cells. The total culture term was 14 days, and the MTT assay was performed on days 1, 3, 7, 10 and 14. For the cell viability test, an ELISA reader was used to measure optical density (OD) with a 595 nm wavelength filter. MCF-7 proliferated linearly with culture time and OA solution concentration. MG-63 cells at 18%-25% concentrations, by contrast, grew rapidly at 7 days. The largest growth rate was 1.25 at 15% for MCF-7 and 3.02 at 15% for MG-63. It was confirmed that cells grow rapidly in a certain period even though the OA solution had a negative effect in the initial stage. Future work will compare the effects of OA solution on normal and cancer cells and seek to remove the negative effect on cells. Keywords— MG-63, MCF-7, MTT assay, cell viability, cell growth.
I. INTRODUCTION Cell culture is a complex process by which cells extracted from an original tissue are grown under controlled conditions. Experimental cell culture has become commonplace in today's research laboratories and is needed in clinical medicine and biology [1, 2]. In 1907, Harrison et al. accomplished animal cell culture for the first time, using frog neural tissue [3]. Carrel subsequently established the technique of passaging, the basic process of cell proliferation in culture [4]. In the 1950s, culture of cell lines such as HeLa began in earnest, and a basis for animal cell culture was established once Eagle had developed basic culture media for various animal cells [5]. It was thereby verified that cells can survive outside the body through control of nutrients and environment. In recent years there has been renewed interest in medical fields based on molecular cell biology, such as gene therapy and
regenerative medicine [6]. Accordingly, cell culture techniques capable of culturing cells in quantity are in demand. However, mass culture of animal cells is difficult because their gene structure is complex in comparison with microorganisms. Meanwhile, Higa T discovered microorganisms termed "effective microorganisms" and applied them to plant cells; it was confirmed that plant cell growth improved and that mass production of grain and other plants was practicable [7]. Since then, various organism activation materials have been developed and used worldwide in agriculture and floriculture. The purpose of this study is to evaluate the effects of an organism activation solution (OA solution) as an additional medium component and to establish the optimal cell culture conditions.
II. MATERIALS AND METHODS A. Materials Dulbecco's Modified Eagle Medium, high glucose (DMEM-HG, Gibco, Invitrogen Co., USA) was used as the basic medium. Insulin (Sigma-Aldrich Co., USA) was used as a growth factor for MCF-7 [8], and an MTT assay kit (Roche Ltd., Switzerland) was used for an unbiased and accurate cell viability test [9-11]. B. Cell Culture Two cell lines, MG-63 (human osteosarcoma) and MCF-7 (human breast cancer), were used. Their morphologies are fibroblast-like and epithelial, respectively, and both are adherent. MG-63 originates from a human bone osteosarcoma, and MCF-7 from a human mammary gland adenocarcinoma. MG-63 was cultured in DMEM-HG supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin; MCF-7 was cultured in the same medium with insulin added at 10 µg/ml. The cell seeding density was 2×10^4 per well in 24-well plates, and cells were incubated at 37 °C, 5% CO2 for 14 days. To investigate the effects of the OA solution on MG-63 and MCF-7, the medium was mixed with OA solution at
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 811–814, 2011. www.springerlink.com
M.Y. Jang and C.W. Mun
various concentrations. The concentration of OA solution was controlled at 3%, 6%, 9%, 12%, 15%, 18%, 20%, 25% and 30% with a fixed amount of medium (Table 1). These conditions were determined from previous studies [12, 13]. C. Cell Viability Measurement
MCF-7 tended to proliferate in a linear manner with culture time and OA solution concentration (Fig. 1a). By contrast, MG-63 cells at 18%-25% grew rapidly at 7 days (Fig. 1b). It was thus confirmed that the OA solution acted differently on the two cell lines.
Cell growth over time was assessed by the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay, conducted according to the protocol described in its product datasheet. MTT reagent was added one day before the optical density measurement. After the old medium was removed, 200 µl of fresh medium and 20 µl of MTT reagent were added to each well. After 4 hours of incubation, 200 µl of solubilization reagent was added and the cells were incubated for a further 12 hours. Aliquots of 200 µl of the resulting MTT reaction were transferred to a 96-well plate, and optical density was measured at 595 nm using an ELISA reader (enzyme-linked immunosorbent assay; Multiskan EX, Thermo Scientific Inc., USA).
Table 1 Concentration of mixture media (media amount fixed at 1 ml)

Concentration    OA solution (ml)
0% (Control)     0
3%               0.031
6%               0.064
9%               0.099
12%              0.137
15%              0.176
18%              0.220
20%              0.250
25%              0.340
30%              0.430

Fig. 1 Results of cell viability measurement according to culture term. Cell proliferation amount is represented by optical density (O.D). (a) Measured O.D values of MCF-7. (b) Measured O.D values of MG-63
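The OA volumes in Table 1 appear to follow a simple volume-fraction rule: with the medium fixed at 1 ml, the concentration is OA / (media + OA), so the required OA volume is v = c / (1 - c). This is our reading of the table, not a formula stated by the authors; most rows agree with it to within rounding, though a few (e.g. 12% and 25%) differ slightly.

```python
# Sketch (assumption): required OA volume for a target volume fraction,
# with concentration defined as OA / (media + OA) and media fixed at 1 ml.
def oa_volume_ml(concentration: float, media_ml: float = 1.0) -> float:
    """Volume of OA solution (ml) giving the target volume fraction."""
    return media_ml * concentration / (1.0 - concentration)
```

For example, a 20% mixture needs 0.20 / 0.80 = 0.250 ml of OA solution per 1 ml of medium, matching the table row exactly.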
III. RESULTS Fig. 1 shows the cell proliferation results for MCF-7 and MG-63 cultured in the mixed media for 14 days. The quantity of living cells is represented by the optical density measured with the ELISA reader. Although the control group (0%) grew faster in the early part of the culture, by 14 days the cells cultured in mixed media (experimental groups) had proliferated more than the control. Microscopy also confirmed that cell growth reached a stabilized state at 3 days of culture.
The optical density data at 3 days were normalized to 1 by multiplying each series by the same constant, in order to compare the growth rate over each culture interval (about 3 days). These data are shown in Fig. 2; the slope over each culture period represents its growth rate. For MCF-7, the largest slope was 1.22, at 15% between 10 and 14 days, twice that of the control group (Fig. 2a). The largest slope for MG-63 also occurred at 15%, between 7 and 10 days; its value of 3.02 was six times that of the control group (Fig. 2b). Consequently, it was confirmed that the OA solution helps cells grow rapidly within a certain period of culture time, although some negative effects were seen in the initial stage.
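The normalisation and slope computation described above can be sketched as follows. The OD values in the example are hypothetical; only the procedure (scale each series so the day-3 reading equals 1, then take the change between consecutive normalized readings as the growth rate for that interval) follows the text.

```python
# Sketch of the day-3 normalisation and per-interval growth-rate (slope)
# computation described in the Results section. OD values are illustrative.
def normalize_to_day(od, days, ref_day=3):
    """Scale an OD series so the reading at ref_day equals 1.0."""
    ref = od[days.index(ref_day)]
    return [v / ref for v in od]

def interval_slopes(norm_od):
    """Growth rate for each interval: change between consecutive readings."""
    return [b - a for a, b in zip(norm_od, norm_od[1:])]

days = [1, 3, 7, 10, 14]              # MTT assay time points used in the study
od = [0.20, 0.25, 0.50, 0.75, 1.00]   # hypothetical OD readings
norm = normalize_to_day(od, days)     # day-3 value becomes 1.0
rates = interval_slopes(norm)         # one slope per culture interval
```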
IV. DISCUSSION AND CONCLUSIONS Improvement of plant cell growth has been achieved through much research aimed at countering food scarcity. The OA solution used in this study had likewise been developed and widely used in various fields, but limited to plant organisms [7]. However, little attention has been given to the influence of OA solution on animal cell growth. Previous studies by the authors verified that media mixed with an optimal concentration of OA solution improve the growth of cultured cells [12, 13]. The present study examined two cell types, MG-63 and MCF-7, to determine the optimal concentration and the impact of OA solution according to cell morphology and origin; the concentration of the mixed media was limited to 30%. The authors observed that OA solution improves the growth speed of cultured animal cells even though the cells were shocked by the solution in the early part of the culture. The OA solution also affected the animal cells differently according to their origin and morphology. Since the MTT assay has several limitations [14, 15], most researchers apply two or more test protocols appropriate to the experiment, such as cell counting, DNA quantification and cell staining. Accordingly, it is necessary to investigate the OA effects with methods other than the MTT assay; combining several test protocols would improve the reliability of the results. A further direction of this study will be to compare the effect of OA solution on normal and cancer cells and to remove the negative effects at the early stage of cell culture.

Fig. 2 Cell growth rate. Data were normalized using the 3-day data; the slope of each graph segment represents the growth rate for that interval. (a) Growth rate of MCF-7. (b) Growth rate of MG-63

ACKNOWLEDGMENT This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. 2009-0081195).
1. S.J. Morgan, D.C. Darling (1993) Animal Cell Culture. Oxford, London. 2. John R. W. (2000) Animal Cell Culture: A Practical Approach. Oxford, London. 3. Ross G. Harrison, M. J. Greenman, Franklin P. Mall et al (1907) Observations of the living developing nerve fiber. Anat Rec 1(5):116-128. 4. Alexis Carrel (1912) Pure Cultures of Cells. JEM 16(2):165-168. 5. Harry Eagle (1959) Amino acid metabolism in mammalian cell cultures. Science 130:432-437. 6. Tom J. Burdon, Arghya Paul, Nicolas Noiseux, Satya Prakash, Dominique Shum-Tim (2011) Bone Marrow Stem Cell Derived Paracrine Factors for Regenerative Medicine: Current Perspectives and Therapeutic Potential. Bone Marrow Research 2011:1-14 DOI 10.1155/2011/207326. 7. Higa Teruo (1995) Use of microorganism in agriculture & their positive effects on environmental safety. Nobunkyo, Japan. 8. James Chappell, J. Wayne Leiter, Scott Solomon, Inga Golovchenko, Marc L. Goalstone, Boris Draznin (2001) Effect of Insulin on Cell Cycle Progression in MCF-7 Breast Cancer Cells. J Biol Chem 276(41):38023-38028. 9. Ji Eun Kim, Mi Sook Kim, Chang Mo Kang, Jong Il Kim, Hye Kyung Shin, Chul Won Choi, Young Seok Seo, Young Hoon Ji (2008) The Use of MTT Assay, In Vitro and Ex Vivo, to Predict the Radiosensitivity of Colorectal Cancer. KOSTRO 26(3):166-172. 10. Tim Mosmann (1983) Rapid colorimetric assay for cellular growth and survival: Application to proliferation and cytotoxicity assays. J Immunol Methods 65(1-2):55-63. 11. George Fotakis, John A. Timbrell (2006) In Vitro cytotoxicity assays: Comparison of LDH, neutral red, MTT and protein assay in hepatoma cell lines following exposure to cadmium chloride. Toxicol Lett 160(2):171-177. 12. M.Y. Jang, S.I. Chun, C.W. Mun (2009) The Influence of Organism Activation Enzyme Solution on Cell Growth. KOSOMBE Proc. vol. 40, Seoul, Korea, 2009, pp. 256-258
13. M.Y. Jang, C.W. Mun (2010) The investigation about optimal concentration of organism activation solution for improving the animal cell growth. KOSOMBE Proc. vol. 41, Future Neurotechnologies for Neuron, Network and Brain, Chuncheon, Korea, 2010, p. 75. 14. Sheila O'Hare, Chris K. Atterwill (1995) In Vitro Toxicity Testing Protocols. Humana Press, USA. 15. Piwen Wang, Susanne M. Henning, David Heber (2010) Limitations of MTT and MTS-based assays for measurement of antiproliferative activity of green tea polyphenols. PLoS One 5(4):e10202
Author: Mun, Chi-Woong
Institute: Inje University
Street: Obang-Dong
City: GimHae-Si
Country: Republic of Korea
Email: [email protected]
Dynamic Behaviour of Human Bone Marrow Derived-Mesenchymal Stem Cells on Uniaxial Cyclical Stretched Substrate – A Preliminary Study H.Y. Nam1, B. Pingguan-Murphy2, A.A. Abbas1, A.M. Merican1, and T. Kamarul1 1
Tissue Engineering Group, NOCERAL, Department of Orthopaedic Surgery, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia 2 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
Abstract— Tissues and cells in the body are continuously exposed to complex mechanical microenvironmental forces. It is well known that mechanical stimulation is important in inducing morphological changes, development and regeneration. Although the regulation of progenitor cells through mechanical induction has been described previously, the effects of uniaxial strain on monolayer MSC constructs have not been widely discussed. In this study, we investigated the effects of mechanical stretching on the alignment and proliferation of human bone marrow mesenchymal stem cells (hMSCs) on elastomeric substrates in vitro. Two groups (mechanically loaded and static) were used. hMSCs were isolated from human bone marrow and cultured. MSCs were passaged to P2/P3 before being seeded on elastomeric surfaces and allowed to settle for 72 hours before being subjected to continuous cyclic uniaxial stretching (6% strain, 1 Hz). A specially designed stretch device capable of operating inside an incubator was used for this study. Image analyses were performed at 0, 6, 12, 24, 36 and 48 hours. Cell proliferation at various time points of mechanical loading was investigated using the alamarBlue® assay. The results demonstrate that cells subjected to tensile loading realign perpendicular to the uniaxial stretching direction, and that cell proliferation in the tensile loading group is higher than in the control group. In conclusion, the results of this study suggest that MSCs are mechanosensitive, with mechanical loading influencing cell proliferation and realignment. The knowledge that MSCs respond to mechanical signals provides a better understanding of the mechanical regulation involved in stem cell behaviour, thus allowing better control of cell growth.
to several lineages of mesenchymal tissues such as osteocytes, cardiomyocytes, fibroblasts, chondrocytes, tenocytes, and adipocytes [1-3]. Cells and tissues are subjected to microenvironmental factors such as mechanical strain in vivo, so it is essential to understand the effects of mechanical strain on the regulation of cell differentiation and related functions. Successful tissue engineering in vitro is achieved by mimicking the biophysical conditions tissues are subjected to in vivo [4-5]. The rate of mesenchymal stem cell (MSC) proliferation in a monolayer environment is slow, a problem which needs to be addressed in view of the need to obtain a sufficient number of cells within a reasonable time frame. In vitro growth and activities of MSCs are supported and enhanced by biochemical factors in the cell culture medium, including growth factors, cytokines, and hormones. However, it has recently been reported that mechanical forces also play a central role in physiological processes involving a wide variety of tissues [6-9]. Cyclic uniaxial strain applied on elastic substrates causes changes in cell responses, orientation and alignment, and the application of mechanical force also influences cell proliferation, differentiation, and gene expression in a wide variety of cell types [10-12]. The aim of this study is to investigate the effects of uniaxial cyclic stretching on the orientation and proliferation of human bone marrow derived mesenchymal stem cells.
Keywords— bone marrow, mesenchymal stem cell, cyclic uniaxial loading, cell orientation, proliferation.
II. MATERIALS AND METHODS A. Cell Isolation and Culture
I. INTRODUCTION Stem cells are capable of both self-renewal and differentiation into at least one mature cell type. Bone marrow stem cells include haematopoietic stem cells and mesenchymal stem cells (MSCs). MSCs can proliferate and differentiate
Human bone marrow derived MSCs were obtained with consent, and in accordance with the approval of the ethics committee, from donors undergoing various surgical procedures in University Malaya Medical Centre. These MSCs have been well characterized by their surface markers and differentiation potential. MSCs were cultured in culture
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 815–818, 2011. www.springerlink.com
H.Y. Nam et al.
growth medium (Gibco, USA) to allow cell proliferation without differentiation, as demonstrated in our previous publications [13-14]. The cells were maintained in humidified incubators at 37 °C with 5% CO2, and all experiments were conducted in the cell culture incubators. Cells were harvested upon reaching 80-85% confluence (passage 2-3).
B. Cell Seeding
A transparent, medical grade silicone elastic membrane (Wacker, USA) (0.1 cm thick, 5.8 × 10 cm growth area, 3.5 cm in height) was sterilized by immersion in 75% aqueous ethanol for 5 min followed by 2 h of UV irradiation. A total of 2×10^5 cells were seeded onto the silicone dish, coated with collagen type I (Sigma, USA), in complete growth medium at 37 °C. After 48 h of culture, the medium was changed to 1% fetal bovine serum (FBS) (Gibco, USA) for 24 h, and the dish was assembled into the uniaxial strain device. Uniaxial cyclic stretching (6% elongation) at a frequency of 1 Hz was applied (Fig. 1). Non-stretched hMSCs were used as controls.

Fig. 1 A custom-designed uniaxial stretch device was used to apply uniaxial strain to hMSCs in only one direction

C. Mechanical Stretch Device
A specific tensile test device (Fig. 2) was designed and manufactured to apply uniaxial cyclic strain to cultured cells by stretching the collagen type I coated silicone at different strain rates, frequencies and time points. The device consists of two units: an electrical unit and a mechanical unit.

D. Image Processing Analysis
To compare the morphology and alignment of mesenchymal stem cells on the elastomeric substrate before and after testing, images of the cultured hMSCs were captured using an inverted CCD camera-assisted microscope (Nikon, Japan).

E. alamarBlue® Assay for Proliferation (On-Going)
Proliferation of MSCs at various time points was investigated using the alamarBlue® assay (Invitrogen, USA) after 3 hours of incubation. Using a micro-titre plate reader (Epoch, BioTek), absorbance was measured at 570 nm, using 600 nm as the reference wavelength.

III. RESULTS AND DISCUSSION
A. Cell Morphology and Characterization of hMSCs
After 24 h, cells appeared to adhere to the culture flask and form colony-forming units. At 2-3 weeks the cells congregated, formed colonies and possessed fibroblast-like morphologies (Fig. 3).

Fig. 3 Phase-contrast photograph of hMSCs (day 14) (Mag: 4x)
Fig. 2 A custom-made mechanical device
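The alamarBlue® readout described in the Methods (absorbance at 570 nm with 600 nm as the reference wavelength) can be reduced with a simple dual-wavelength correction. This is an illustrative sketch under an assumption we name explicitly: subtracting the reference reading only compensates for background; the vendor protocol computes percent reduction using molar extinction coefficients, which we do not reproduce here.

```python
# Sketch (assumption): simple dual-wavelength background correction for an
# alamarBlue readout; not the vendor's extinction-coefficient formula.
def corrected_absorbance(a570: float, a600: float) -> float:
    """Background-corrected signal: 570 nm reading minus 600 nm reference."""
    return a570 - a600

def relative_proliferation(sample, reference):
    """Ratio of corrected sample signal to a corrected reference well."""
    return corrected_absorbance(*sample) / corrected_absorbance(*reference)
```

The ratio form is convenient for comparing loaded wells against static controls at each time point.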
Flow cytometry analysis demonstrated positive markers for CD105 and CD44 but negative for CD34 and CD45, corresponding to the characteristics of MSCs. After one week of incubation in adipogenic medium, small refractile vesicles believed to be lipids were seen in the cytoplasm; these vesicles became enlarged and fused together to form bigger lipid droplets. The lipid stained positively with Oil Red O (Fig. 4).
Fig. 4 Oil Red O staining of adipogenic-induced MSCs (Mag: 10x)
After one week of incubation in osteogenic medium, some crystals were seen deposited sparsely on the cells. The number of crystals increased and became very crowded towards the end of the incubation. The whole cell monolayer appeared orange-red when stained with Alizarin Red S (Fig. 5).
Fig. 5 Amorphous calcium phosphate formed by alkaline phosphatase activity, which stained positively with Alizarin Red S (Mag: 10x)
B. Effect of Stretching on Cell Alignment Comparison of hMSC images before and after cyclic loading indicates that changes in MSC alignment, rearrangement and elongation occur as a result of continuous cyclic stretch loading. Cells, which initially appeared randomly aligned, realigned in a direction perpendicular to the direction of loading (6%, 1 Hz) after 6 hours (Fig. 6). Cell elongation was obvious, and cells became increasingly stretched thereafter. This may be a cellular adaptation response to stretching that minimizes the strain energy required to maintain cellular tensegrity, reducing the energy required to counter the stretching mechanisms acting on actomyosin fibers. Cell reorientation requires the continuous remodelling of both focal adhesions and the actin cytoskeleton [15-16]. The actin stress fibers need to be depolymerized and repolymerized in a new direction to re-establish a state of equilibrium, following an adaptive response.
Fig. 6 Effect of uniaxial strain on hMSC alignment at 0, 6, 12, 24, 36 and 48 h. A: unstrained (control); B: dynamically strained. The substrate was stretched in the direction indicated (Mag: 4x)
C. Effect of Stretch on the Proliferation of hMSCs Cell proliferation is reflected by the increase in reduction of the alamarBlue® reagent. In our preliminary study, increased cell proliferation was observed as a result of continuous cyclic tensile loading (Fig. 7), and this increase appears to be sustained over time. The results suggest that mechanical strain affects cell growth in proportion to the amount of loading, and that an appropriate stretch can promote maximal proliferation of hMSCs in vitro. However, further prospective studies are required to confirm these findings, since only a few samples were used in this preliminary study.
Fig. 7 Cell proliferation assay by alamarBlue®
IV. CONCLUSIONS Our results suggest that mechanical stimulation of MSCs may be an effective and promising approach to enhance their proliferative capacity, and that it may also influence cell reorientation, an important behaviour required in cell migration.
ACKNOWLEDGMENT This study was supported by the University of Malaya Postgraduate Grant PS260/2010B and UM High Impact Research Grant J-00000-73559. We would like to thank Dr. Barry P. Pereira for his guidance on the use of the alamarBlue® assay.
REFERENCES 1. Minguell J, Erices A, Conget P (2001) Mesenchymal stem cells. Experimental Biology and Medicine 226(6):507-520 2. Tuan R, Boland G, Tuli R (2002) Adult mesenchymal stem cells and cell-based tissue engineering. Arthritis Research and Therapy 5(1):32-45 3. Grove J E, Bruscia E, Krause D S (2004) Plasticity of bone marrow-derived stem cells. Stem Cells 22:487-500 4. Kelly D J, Prendergast P J (2005) Mechanoregulation of stem cell differentiation and tissue regeneration in osteochondral defects. Journal of Biomechanics 38:1413-1422 5. Vogel V, Sheetz M (2006) Local force and geometry sensing regulate cell functions. Nat Rev Mol Cell Biol 7(4):265-275 6. Park J S, Chu J S F, Cheng C et al (2004) Differential effects of equiaxial and uniaxial strain on mesenchymal stem cells. Biotechnology and Bioengineering 88(3):359-368 7. Shalaw F G, Slimani S, Sarda M N K et al (2006) Effect of cyclic stretching and foetal bovine serum (FBS) on proliferation and extracellular matrix synthesis of fibroblast. Bio-Medical Materials and Engineering 16:137-144 8. Kaspar D, Seidl W, Wilke C N et al (2002) Proliferation of human-derived osteoblast-like cells depends on the cycle number and frequency of uniaxial strain. Journal of Biomechanics 35:873-880 9. Song G, Ju Y, Shen X et al (2007) Mechanical stretch promotes proliferation of rat bone marrow mesenchymal stem cells. Colloids and Surfaces B 58:271-277 10. Maul T M, Chew D W, Nieponice A et al (2011) Mechanical stimuli differentially control stem cell behavior: morphology, proliferation, and differentiation. Biomech Model Mechanobiol. In press 11. Chen Y J, Huang C H, Lee I C et al (2008) Effects of cyclic mechanical stretching on the mRNA expression of tendon/ligament-related and osteoblast-specific genes in human mesenchymal stem cells. Connective Tissue Research 49:7-14 12. Choi K M, Seo Y K, Yoon H H et al (2007) Effects of mechanical stimulation on the proliferation of bone marrow-derived human mesenchymal stem cells. Biotechnology and Bioprocess Engineering 12:601-609 13. Boo L, Selvaratnam L, Tai C C et al. Expansion and preservation of multipotentiality of rabbit bone-marrow derived mesenchymal stem cells in dextran-based microcarrier spin culture. Journal of Materials Science: Materials in Medicine (ahead of print) 14. Kamarul T, Chong P P, Selvaratnam L et al (2009) Quantitative real-time PCR analysis for chondrogenic differentiation of human mesenchymal stem cell in alginate scaffolds. European Cells and Materials 18(Suppl. 1):44 15. Hayakawa K, Sato N, Obinata T (2001) Dynamic reorientation of cultured cells and stress fibers under mechanical stress from periodic stretching. Exp Cell Res 268(1):104-114 16. Na S, Meininger G A, Humphrey J D (2007) A theoretical model for F-actin remodeling in vascular smooth muscle cells subjected to cyclic stretch. J Theor Biol 246(1):87-99
Author: Nam Hui Yin Institute: University of Malaya City: Kuala Lumpur Country: Malaysia Email: [email protected]
Effect of Herbal Extracts on Rat Bone Marrow Stromal Cells (BMSCs) Derived Osteoblast – Preliminary Result C.T. Poon1, W.A.B. Abas1, K.H. Kim2, and B. Pingguan-Murphy1
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Physiology, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— Local medicinal herbs are widely used in the management of bone fracture and osteoporosis. However, there is no scientific evidence for the significance of the use of these herbs in treating bone-related disease. We therefore initiated proper scientific studies to test the effects of two herbs (Drynaria quercifolia, Gendarussa vulgaris) and a formulation powder containing G. vulgaris. Here we report on the effect of the herb extracts on the proliferation of rat BMSC-derived osteoblasts. Samples of the herbs were obtained from local herbalists and processed to obtain four solvent extracts of differing polarity for each herb; these were reconstituted in media and tested on rat-BMSC-derived osteoblasts. Rat-BMSC-derived osteoblasts were harvested and seeded in 96-well plates at a density of 3000 cells/well. Cells were treated with 100 µg/ml reconstituted extracts and cultured for 3, 7 and 14 days. The resulting cell viability was determined with a proliferation assay using Cell Counting Kit-8 (Sigma). Results showed that the addition of herbal extracts enhanced BMSC-derived osteoblast proliferation to varying extents. The ethanol extract of Gendarussa vulgaris induced a significant and consistent enhancement of BMSC-derived osteoblast proliferation at day 7 and day 10 of incubation: 31.35% at day 7 and 58.91% at day 10 compared to the respective negative controls. Thus, we conclude that the ethanol extract of Gendarussa vulgaris can enhance bone-cell proliferation. Further investigation will focus on characterizing the effects of the bioactive compounds on bone-cell proliferation, alkaline phosphatase and osteocalcin production. Keywords— Rat BMSC, osteoblast, herbal extract, proliferation, Gendarussa vulgaris.
I. INTRODUCTION A. Background Study Fractures caused by osteoporosis affect 50% of women and 20% of men over the age of 50 [1]. Traditional treatment of bone fracture uses autografts, allografts, vascularized fibula and iliac crest grafts [2], and other bone transport techniques, but limitations exist. A number of medicinal herbs are being investigated for their effectiveness in disease treatment. The survey of Barnes et al. 2002 [3] on complementary and alternative medicine use among adults in the United States
pointed out that nineteen percent of adults in the US had used natural products such as herbal medicine, functional foods, and animal-based supplements during the 12 months prior to the survey. Some herbs are used in the treatment of bone fracture as well as osteoporosis. The studies of Sun et al. 2002 and Jeong et al. 2005 [4, 5] showed that Gu-Sui-Bu has potential effects on primary and secondary bone cell cultures: the herbal extract enhanced proliferation and differentiation of bone cells. On the other hand, the crude extract has been shown to inhibit osteoclasts obtained from fetal mouse long bone [6]. Two herbs and a formulation containing one of them were used in this study. Drynaria quercifolia is one of the candidates. This plant was identified as having anti-microbial activity in the studies of Poonam et al. 2005 [7]. D. quercifolia was chosen for this study because it belongs to the same family as Gu-Sui-Bu, which is well established for bone healing as mentioned above. Gendarussa vulgaris is another locally found herb used in the present study. G. vulgaris is reported to inhibit the microbial proliferation of Staphylococcus aureus [8]. Neither herb has been widely studied for its potential to enhance bone-cell activity. The formulation containing G. vulgaris as the main constituent was provided by Mr. Lim Kok Hong, a local herbalist, who claims that G. vulgaris, when applied to a bone fracture, results in faster healing; several other local herbalists have claimed similar effects. Thus, in the present study we examine these two herbal extracts and one formulation of G. vulgaris for their effect on the proliferation of rat BMSC-derived osteoblasts. Rat BMSC-derived osteoblasts were used as the subject of the herbal extract treatment. BMSCs are one of the favoured models in tissue-engineering research, owing to their plasticity and multipotent capacity.
BMSC is a self-renewing source of multipotent progenitor cells with the capacity to differentiate into several distinct mesenchymal lineages, such as bone, cartilage, muscle, tendon, ligament and fat tissue [9, 10]. In bone tissue engineering, BMSC is important because it serves as the osteoprogenitor cell of the skeletal system. BMSC can be induced to differentiate into osteoblasts by incubation with osteogenic medium containing dexamethasone and β-glycerophosphate [11].
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 819–822, 2011. www.springerlink.com
II. METHODS A. Bone Marrow Stromal Cells Isolation and Cultivation Bone marrow stromal cells (BMSCs) were isolated aseptically from the tibiae and femurs of young adult male Sprague Dawley rats according to the procedure described in Maniatopoulos et al. 1988 [12], with minor modifications. Following spinal dislocation and immersion in 70% alcohol for disinfection, the tibiae and femurs were excised. After removal of the soft tissue attached to the bones, the metaphyseal ends of each excised bone were cut off and the marrow was flushed from the midshaft. The cells were collected in a 15 ml sterile test tube and centrifuged at 1500 rpm for 5 min. The resulting cell pellets were resuspended in 12 ml of DMEM with 10% fetal bovine serum and plated in T-75 flasks. After 4 days of incubation in a CO2 incubator under a 5% CO2 atmosphere at 37 °C and 95% relative humidity, the medium was discarded and the BMSCs were induced osteogenically using DMEM containing 10 mM β-glycerophosphate, 50 µg/mL L-ascorbic acid and 10 nM dexamethasone. Once the cells reached confluency, they were used in the herbal-extract proliferation tests.
under a 5% CO2 atmosphere, at 37 °C and 95% relative humidity for 24 hours. The culture medium was then changed to culture medium containing 100 µg/ml of herbal extract; cells cultured in medium alone served as the control. Cells incubated with 100 µg/ml of herbal extract were cultured until day 3, 7 or 14, with the extract-containing medium changed every 2-3 days. At day 3, 7 or 14, the medium with herbal extract was discarded and replaced with 100 µl of fresh culture medium in each well, and 10 µl of Cell Counting Kit-8 (Sigma) solution was added to each well. The test plate was then incubated for 3 hours. The optical density (OD) of each well was measured at 460 nm using a microplate reader. The OD of wells incubated with herbal extract was compared to that of cells incubated with culture medium alone to determine the effect of the herbal extract on rat BMSC-derived osteoblasts.
III. RESULTS A. Bone Marrow Stromal Cells Isolation and Cultivation
B. Crude Extraction of the Medicinal Herbs The collected herbs were dried under sunlight or in an oven at 40 °C. The dried herbs were weighed and ground into powder. The powder was macerated with ethanol in a ratio of 1 g : 1 l for 3 days. Anhydrous Na2SO4 was added to absorb the water content of the mixture. The ethanol with herb powder was then filtered and evaporated in a rotary evaporator at 30-35 °C. 200 ml of hexane was added to the ethanolic extract and the mixture was left for 3 days, then filtered to obtain the hexane extract. Water and ethyl acetate were added to the residue to perform a liquid-liquid extraction; the water layer was freeze-dried to obtain the water extract, while the ethyl acetate layer was evaporated to obtain the ethyl acetate extract. These four types of herbal extract were used in the bone-cell proliferation tests.
Fig. 1 Bone marrow cells plated in a T-75 flask immediately after harvest from the rat (200x)
C. Proliferation Evaluation of Rat BMSC Derived Osteoblasts Induced by Plant Extracts Rat-BMSC-derived osteoblasts were harvested using accutase after reaching confluency. The harvested cells were then seeded in a 96-well test plate at a density of 3000 cells/well. The test plate was incubated in a CO2 incubator
Fig. 2 Rat BMSC after 4 days
Fig. 3 Rat BMSC derived osteoblast at confluency

The rat BMSCs were cultured in primary medium (DMEM with 10% fetal bovine serum) for four days after isolation from the bone. The rat BMSCs were mixed with haematopoietic cells, which did not attach to the
bottom of the culture flask. Thus, many floating cells were visible under the microscope immediately after harvest from the rat (Figure 1). The primary medium was discarded together with the floating cells on the 4th day; only the BMSCs attached to the bottom of the flask remained (Figure 2). The flask was then given osteogenic medium to induce osteogenic differentiation of the BMSCs until the cells reached confluence (Figure 3).

B. Crude Extraction of the Medicinal Herbs Four herbal extracts of different polarity were prepared for each herb in the present study; thus 12 herbal extracts were obtained from the 3 herbs.

Table 1 Herbal extract and compound polarity

Herb                              Extract          Compound polarity
Drynaria quercifolia              Ethanol          70% polar + 30% non-polar
                                  Hexane           Non-polar
                                  Ethyl acetate    Polar
                                  Water            Semi-polar
Gendarussa vulgaris               Ethanol          70% polar + 30% non-polar
                                  Hexane           Non-polar
                                  Ethyl acetate    Polar
                                  Water            Semi-polar
Gendarussa vulgaris formulation   Ethanol          70% polar + 30% non-polar
                                  Hexane           Non-polar
                                  Ethyl acetate    Polar
                                  Water            Semi-polar
Fig. 4 Result of 3 days incubation
C. Proliferation Evaluation of Rat BMSC Derived Osteoblasts Induced by Plant Extracts

Fig. 5 Result of 7 days incubation

Proliferation enhancement by the herbal extracts was seen as an increase in the optical density of test wells compared to control wells. The results show that the addition of herbal extracts enhanced the proliferation of rat-BMSC-derived osteoblasts to various extents. Generally, the 12 herbal extracts induced mild enhancement at day 3 of incubation. The ethanol extract of Gendarussa vulgaris consistently induced a significant enhancement of BMSC-derived osteoblast proliferation at day 7 and day 10 of incubation; osteoblasts incubated with this extract showed the highest proliferation rate. The enhancement was 31.35% at day 7 and 58.91% at day 10 of incubation compared to the respective negative controls.
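Enhancement percentages of this kind follow directly from the measured optical densities. A minimal sketch of the calculation (the OD values below are hypothetical illustrations, not the study's data):

```python
def enhancement_pct(od_treated, od_control):
    """Percent increase in CCK-8 optical density over the negative control."""
    return 100.0 * (od_treated - od_control) / od_control

# Hypothetical ODs for illustration only:
print(round(enhancement_pct(0.92, 0.70), 2))  # 31.43 -> ~31% enhancement
```

An extract is considered enhancing when this value is consistently positive across incubation periods, as reported for the G. vulgaris ethanol extract.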
Fig. 6 Result of 10 days incubation
IV. DISCUSSION At present, many of the biochemical substances found in medicinal herbs have shown effectiveness in treating bone fracture. Furthermore, the side effects of natural products are considered less harmful than those of conventional medicine. The use of medicinal herbs in the treatment of bone fractures rests on enhancement of osteoblast proliferation. In the present study, the enhancement of cell proliferation by the ethanol extract of Gendarussa vulgaris was the highest among the herbal extracts, consistent with the claims of the herbalist Mr Lim Kok Hong. However, the herbalist's formulation containing G. vulgaris showed less activity than G. vulgaris alone. The use of multi-herb formulations for the treatment of bone fracture may instead rely on other constituents that enhance the formation of new blood vessels and encourage bone growth factor synthesis, rather than acting directly on osteoblast proliferation. Gu-Sui-Bu (Drynaria fortunei) is a well-researched herb that has gained attention for its potential in bone fracture healing. It was postulated that compounds similar to those found in Gu-Sui-Bu may be present in Drynaria quercifolia (used in the present study) and enhance bone cell growth. From the results obtained, we can conclude that the G. vulgaris ethanol extract, which consists of 70% polar compounds, enhanced the proliferation of rat BMSC-derived osteoblasts, while the other 11 herbal extracts showed only slight enhancement. The results indicate the potential use of these herbs as bone-fracture healing agents. However, osteoblast proliferation is not the only parameter by which to measure the efficacy of a bone healing agent. Further investigation of the bioactive compounds' effects on bone cells, such as alkaline phosphatase and osteocalcin production, needs to be carried out to determine the efficacy of the herbs in bone fracture healing.
REFERENCES 1. Cummings S R, and Melton III L J (2002). Epidemiology and outcomes of osteoporotic fractures. Lancet. 359:1761-1767.
2. Arrington E D, Smith W J, Chambers H G, Bucknell A L, Davino N A (1996). Complications of iliac crest bone graft harvesting. Clinical Orthopaedics. 329:300-309. 3. Barnes P M, Powell-Griner E, McFann K, Nahin R L (2002). Complementary and alternative medicine use among adults: United States. Advance Data from Vital and Health Statistics. 343:1-20. 4. Sun J S, Lin C Y, Dong G C, Sheu S Y, Lin F H, Chen L T, Wang Y J (2002). The effect of Gu-Sui-Bu (Drynaria fortunei J. Sm) on bone cell activities. Biomaterials. 23:3377-3385. 5. Jeong J C, Lee J W, Yoon C H, Lee Y C, Chung K H, Kim M G, Kim C H (2005). Stimulative effects of Drynariae Rhizoma extracts on the proliferation and differentiation of osteoblastic MC3T3-E1 cells. Journal of Ethnopharmacology. 96:489-495. 6. Jeong J C, Kang S K, Youn C H, Jeong C W, Kim H M, Lee Y C, Chang Y C, Kim C H (2003). Inhibition of Drynariae Rhizoma extracts on bone resorption mediated by processing of cathepsin K in cultured mouse osteoclasts. International Immunopharmacology. 3:1685-1697. 7. Shokeen P, Ray K, Bala M, Tandon V (2005). Preliminary studies on activity of Ocimum sanctum, Drynaria quercifolia, and Annona squamosa against Neisseria gonorrhoeae. Journal of the American Sexually Transmitted Disease Association. 32:106-111. 8. Grosvenor P W, Supriono A, Gray D O (1994). Medicinal plants from Riau Province, Sumatra, Indonesia. Part 2: antibacterial and antifungal activity. Journal of Ethnopharmacology. 45:97-111. 9. Caplan A I (1991). Mesenchymal stem cells. Journal of Orthopaedic Research. 9:641-650. 10. Pittenger M F, Mackay A M, Beck S C, Jaiswal R K, Douglas R, Mosca J D, Moorman M A, Simonetti D W, Craig S, Marshak D R (1999). Multilineage potential of adult human mesenchymal stem cells. Science. 284:143-147. 11. Friedenstein A J, Piatetzky-Shapiro I I, Petrakova K V (1966). Osteogenesis in transplants of bone marrow cells. Journal of Embryology and Experimental Morphology. 16:381-390. 12. Maniatopoulos C, Sodek J, Melcher A H (1988). Bone formation in vitro by stromal cells obtained from bone marrow of young adult rats. Cell Tissue Research. 254:317-330.
Author: Poon Chi Tat Institute: University of Malaya City: Kuala Lumpur Country: Malaysia Email: [email protected]
Fabrication and In-vivo Evaluation of BCP Scaffolds A. Behnamghader1, R. Tolouei1, R. Nemati1, D. Sharifi2, M. Farahpour3, T. Forati3, A. Rezaei1, A. Gozalian1, and R. Neghabat3
1 Materials and Energy Research Center (MERC), Tehran, Iran
2 Faculty of Veterinary Medicine, Tehran University, Tehran, Iran
3 Islamic Azad University, Science & Research Branch, Tehran, Iran
Abstract–– The aim of this study was to produce absorbable BCP scaffolds from tricalcium phosphate (TCP) and hydroxyapatite (HA). Naphthalene particles were used as a pore-forming agent. Conventional pressing and sintering were used to prepare disc-shaped samples; sintering was carried out at 1000, 1100 and 1200 °C. Phase composition and chemical structure were analysed by XRD and FTIR. The morphological aspects of the scaffolds, such as pore size, pore distribution and interconnectivity, were studied by SEM. The sintered samples were composed of TCP and HA phases, without any unwanted impurities remaining from the additives. Samples pressed at 220 MPa and then sintered at 1200 °C had pores between 50 and 300 μm in size with rough inner surfaces. In-vivo experiments showed that new bone was well formed in the bone defect when the porous BCP scaffolds had been implanted for a period of three months. Keywords–– Scaffold, Hydroxyapatite, Tricalcium phosphate, Tissue engineering, in vivo.
I. INTRODUCTION Calcium phosphate bioceramics have drawn worldwide attention as important substitute and scaffolding materials in hard tissue engineering, owing to their biocompatibility and bioactivity. This stems from the chemical similarity of these materials, especially carbonated hydroxyapatite, to the mineral constituent of bone [1-6]. Following the implantation of HA, no negative biological responses such as toxicity or swelling are detected in the body [7]. Nevertheless, HA is significantly more chemically and structurally stable than other calcium phosphate phases, such as β-TCP, under both in-vitro and in-vivo conditions [8]. Calcined dense HA implants, especially those of high crystallinity, are therefore not considered biodegradable. There are even reports of the formation of a thick fibrous layer, considered a negative response of the surrounding tissue to the implant [2].
From the point of view of bioactivity, the substitute material should degrade inside the defect simultaneously with the formation of new bone. A biphasic composition containing HA and β-TCP (BCP) is therefore considered more efficient for the repair of bone defects. In recent years, BCP scaffold materials have been widely investigated for cellular as well as growth factor-based tissue engineering [9, 10]. Apart from the chemical and biological characteristics of the material, the presence of pores and their size and distribution are considered the principal parameters influencing the in-vitro and in-vivo performance of scaffolds [9, 10]. There are reports on the effect of pore characteristics of calcium phosphate scaffolds on bone formation and osseointegration [11-16]. Accordingly, a pore size ranging from 100 to 500 μm, depending on the scaffolding material, with adequate interconnection seems essential for tissue growth throughout biodegradable scaffolds. More chemically stable calcium phosphate scaffolds, containing a higher amount of HA, would need bigger pores to ensure their success. Given the undesirable effect of porosity on mechanical properties, it must be compensated by the strength of the pore walls. The effect of sintering on phase development and the microstructure of the pore walls is therefore studied in this research. The in-vivo response of bone tissue was evaluated by comparing the radiographic and pathological aspects of porous scaffolds and dense bodies implanted for 1 and 3 months in rabbits.
II. MATERIALS AND METHOD HA and tricalcium phosphate powders (Merck) were used as raw materials, and naphthalene (Merck) as the pore-forming agent. The calcined TCP (70%) and as-received HA (30%) powders were mixed in a planetary mill, giving an overall Ca/P ratio of 1.53. Naphthalene powder with a particle size of 100-300 μm was then added to the HA/TCP mixed powder. After adding the CMC binder, disc-shaped samples were formed by uniaxial
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 823–826, 2011. www.springerlink.com
pressing at 75 and 220 MPa. The samples were dried at 75 °C for 72 hours and finally heat treated at 1000, 1100 or 1200 °C. Phase composition and chemical structure were analysed by XRD and FTIR. The microstructural characteristics of the sintered bodies, the pore size and distribution, pore interconnectivity and the irregularities of the pore inner surfaces were studied by SEM. To evaluate the effect of the BCP scaffolds on bone repair, 12 adult male New Zealand white rabbits were used. The rabbits were divided into two groups: dense BCP bodies (group A) and porous BCP scaffolds (group B). Each group was divided into two subgroups of 3 samples for the two implantation periods of 1 and 3 months. Under general anesthesia, a bone segment 10 mm in length was removed from the middle of the right radius shaft. During implantation, the animals were evaluated clinically, and bone regeneration and BCP resorption were assessed qualitatively via X-ray examination. After sacrificing the animals, callus samples were collected and stained with Haematoxylin and Eosin (H&E) and Masson's Trichrome for histomorphological interpretation.
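The overall Ca/P ratio quoted for the 70:30 (wt%) TCP:HA blend can be sanity-checked from the phase stoichiometries. A quick sketch assuming ideal Ca3(PO4)2 and Ca10(PO4)6(OH)2; the small difference from the reported 1.53 presumably reflects the fact that the actual powders need not be exactly stoichiometric:

```python
# Sanity-check of the overall Ca/P molar ratio for a 70:30 (wt%) TCP:HA blend,
# assuming stoichiometric beta-TCP, Ca3(PO4)2, and HA, Ca10(PO4)6(OH)2.
M_PO4 = 30.97 + 4 * 16.00                       # phosphate group, g/mol
M_TCP = 3 * 40.08 + 2 * M_PO4                   # ~310.2 g/mol
M_HA = 10 * 40.08 + 6 * M_PO4 + 2 * (16.00 + 1.008)  # ~1004.6 g/mol

def ca_p_ratio(wt_tcp=0.70, wt_ha=0.30):
    mol_tcp, mol_ha = wt_tcp / M_TCP, wt_ha / M_HA
    ca = 3 * mol_tcp + 10 * mol_ha   # moles of Ca per gram of blend
    p = 2 * mol_tcp + 6 * mol_ha     # moles of P per gram of blend
    return ca / p

print(round(ca_p_ratio(), 3))  # ~1.55 for ideal stoichiometry
```

The result lies, as expected, between the Ca/P of pure β-TCP (1.50) and pure HA (1.67).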
Fig. 1 XRD patterns of dense samples sintered at temperatures of 1000°C (a), 1100°C (b) and 1200°C (c) The chemical structure of the HA-TCP mixed powder (BCP), sintered samples containing the binder (BCP-B) and both binder and pore forming agent (BCP-CN) was studied by FTIR (Fig. 2).
III. RESULTS The samples fell into two groups: porous scaffolds and dense bodies, the porous samples being produced with the pore-forming agent. The porous samples shaped at either pressure and sintered at 1000 °C were not perfectly intact and were damaged during handling. Samples sintered at 1100 °C were intact even when compacted at 75 MPa. The densities of the green bodies pressed at 75 and 220 MPa were 1.42 and 1.47 g/cm3, respectively. Taking into account the density of the starting materials, the average porosities of the green bodies were 55% and 53% at 75 and 220 MPa, respectively. Densities of 1.52 and 1.62 g/cm3 and porosities of 53% and 45% were measured for samples sintered at 1100 and 1200 °C, respectively, with compressive strengths of 16 ± 3 and 25 ± 3.5 MPa. XRD analysis of dense samples sintered at 1000, 1100 and 1200 °C revealed the presence of β-TCP and HA in all samples, and a little α-TCP phase at 1100 and 1200 °C (Fig. 1). The amount of each phase was determined by semi-quantitative XRD, comparing the peak intensities of HA (211), β-TCP (0210) and α-TCP (034).
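Porosities of this kind follow from the measured bulk density and the theoretical (fully dense) density of the blend. A minimal sketch, assuming a rule-of-mixtures theoretical density from literature phase densities (these are assumptions; the paper's slightly different porosities reflect the actual starting-material densities used):

```python
# Porosity estimate from measured bulk density, assuming a theoretical
# density for a 70:30 beta-TCP:HA blend by rule of mixtures.
RHO_BTCP, RHO_HA = 3.07, 3.16  # g/cm^3, literature values (assumptions)
RHO_THEORETICAL = 0.70 * RHO_BTCP + 0.30 * RHO_HA  # ~3.10 g/cm^3

def porosity(rho_bulk):
    """Fraction of the body that is pore space."""
    return 1.0 - rho_bulk / RHO_THEORETICAL

# Reported bulk densities: green bodies (75, 220 MPa) and sintered (1100, 1200 C)
for rho in (1.42, 1.47, 1.52, 1.62):
    print(f"{rho} g/cm3 -> {porosity(rho):.0%} porous")
```

The computed values land close to the reported 55%, 53%, 53% and 45%, consistent with densification on sintering at 1200 °C.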
Fig. 2 FT-IR spectra of BCP-CN (a), BCP-B (b) and BCP (c) samples The characteristic P-O bands of the TCP and HA phases as well as the characteristic O-H bands of the HA phase were identified. No other peaks arising from the addition of the binder or pore-forming agent were found in the sintered samples, though the P-O peak intensities appeared to decrease on addition of the binder. The SEM image presented in Figure 3, taken from the fracture surface of a porous body shaped at 220 MPa and
Fig. 3 SEM image of a porous BCP sample sintered at 1200°C
sintered at 1200 °C, shows that the average pore size and porosity were about 50-250 μm and 55%, respectively. The pores were distributed almost uniformly throughout the sample. As can be deduced from Figure 4a, the attachment of the grains in the pore walls of the porous scaffold is similar to that obtained in the dense sample. The rough inner surface and granular microstructure of a pore are shown in Figure 4b. It is clear from this image that the grain size difference between the TCP and HA phases caused the roughening of the inner surfaces.
Fig. 4 SEM images depict a) the pore wall microstructure and b) the pore inner surface irregularities of the porous sample sintered at 1200 °C
microstrucure of the sintered hydroxyapatite. The microporosities of a few micrometers size distributed throughout the microstructure caused by hydroxil radicals release [16], additive evaporation and or shrinkages occurred during sintering stage. The amount of HA phase was remained unchanged after adding the naphthalene. It seems only a slight increase in αTCP and decrease in β- TCP amounts were happened through the sintering. Rossi [12] reported a significant increasing of α-TCP in the presence of naphthalene for the starting materials with Ca/P ratio equal to 1.58. It should be noted that the microstructure contained more microporosities when the sample was made porous by the pore former. It could be due to partial deformation of pore former during compaction of the naphthalene-added samples. The rough inner pore surface morphology consisted of the absorbable tricalcium phosphate phase along with the stable HA phase can provide the proper conditions for the seeding and proliferation of the bone cells [23]. The formation of these uneven surfaces are also attributed to the presence of pore former [4] which is not provable in this study. In vivo experiments
IV. DISCUSSION Higher sintering temperature (1000-1250) and lower Ca/P ratio and particle size of powders toward nano scale range favor the formation of less chemically stable phase components than HA [10, 14, 17]. A mean particle size of 1-3 μm was measured for the mixed powder. As reported elsewhere [3, 4] the HA to β-TCP transformation temperature decreases for samples with a Ca/P ratio less than 1.67. As shown in figure 1, no significant changes were observed in the amount of HA and β-TCP phases with the increasing of temperature from 1000 to 1200 °C. The β-TCP→α-TCP transformation must have occurred at the temperatures 1100 and 1200 °C. The α-TCP phase might have been slightly transformed to β-TCP phase (lower than 5%) during furnace cooling and remained metastable at room temperature. SEM evaluation of sample sintered at 1200°C revealed a microstructure without any microcrack. Volume increasing of about 7% due to β-TCP to α-TCP transformation is considered as an undesirable phenomenon leading to phase boundary cracking [18-20]. Some authors [16, 21] have also reported it might not be so considerable to cause significant microstructural cracking. The strengthening of material due to compressive stresses emanating from this expansion is even reported elsewhere [22]. SEM analysis of the fracture surface of the dense sample sintered at the temprature of 1200 °C revealed typical
Based on radiographic findings, there was no displacement, but some periosteal reaction was observed at both ends of the radial fragment of the group A samples after one month of implantation. The group B samples showed a reduction of opacity due to callus formation after the same implantation period, but no endosteal or intercortical bone reaction was detected. Three months after implantation, most of the BCP scaffolds of group B (Fig. 5) had been extensively resorbed and no traces of periosteal or intercortical callus were observed anymore. The samples of group A, however, remained almost intact, though they displayed interfacial adhesion to bone. Histomorphological studies of group A specimens at one month revealed that despite the presence of
Fig. 5 Radiograph of the radius implanted by the samples of group B after 3 months implantation and related histomorphologic image
IFMBE Proceedings Vol. 35
A. Behnamghader et al.
some granulation tissue, no significant traces of new bone formation were found. However, a little bone formation and thicker granulation tissue were observed in group B specimens at the same period. Thin layers of newly formed bone distributed in the form of bony islands were detected in group A specimens after 3 months of implantation. There were no signs of inflammation or infection. After the same period, the formation of new bone in group B specimens had increased significantly. There were also signs of remodeling from woven to lamellar bone structure. The significant difference between the results of the in vivo experiments on dense bodies and porous scaffolds can be attributed to the biodegradable nature of the BCP constituents and to the sizes and rough inner surfaces of the porosities.
V. CONCLUSION In this study, BCP scaffolds were produced from a TCP and HA mixed powder in a 70:30 ratio. The scaffolds prepared by compaction at 220 MPa and sintering at 1200 °C were composed of HA and TCP phases and contained pores with sizes of about 50-250 μm and a porosity of about 55%. The in vivo experiments revealed that the use of porous BCP scaffolds led to the formation of new bone in bone defects after three months of implantation. The increased osteoneogenesis is generally attributed to the stimulatory effect of BCP on local cells involved in bone regeneration, such as osteoblasts and bone progenitor cells. This behavior could also be further enhanced by the presence of rough pore inner surfaces throughout the porous scaffolds. Our findings suggest a potential use of the porous BCP scaffolds in the treatment and repair of bone defects.
REFERENCES
1. R.Z. LeGeros. Properties of osteoconductive biomaterials: calcium phosphates. Clin. Orthop. Rel. Res. 2002;395: 81–95.
2. C.A. van Blitterswijk, J.J. Grote, W. Kuijpers, W.Th. Daems, K. de Groot. Macroporous tissue ingrowth: a quantitative and qualitative study on hydroxyapatite ceramics. Biomaterials 1986;7: 137–143.
3. M. Jarcho. Biological aspects of calcium phosphates: properties and applications. Dent. Clin. North Am. 1986;30: 25–47.
4. R.W. Bucholz, A. Carlton, R.E. Holmes. Hydroxyapatite and tricalcium phosphate bone graft substitutes. Orthop. Clin. North Am. 1987;18: 323–334.
5. L.L. Hench. Bioceramics. J. Am. Ceram. Soc. 1998;81: 1705–1728.
6. L.L. Hench. Bioceramics: from concept to clinic. J. Am. Ceram. Soc. 1991;74: 1487–1510.
7. V. Shiny, P. Ramesh, M.C. Sunny, H.K. Varma. Extrusion of hydroxyapatite to clinically significant shapes. Mater. Lett. 2000;46: 142–146.
8. C.P. Klein, J.M. de Blieck-Hogervorst, J.G. Wolke, K. de Groot. Studies of the solubility of different calcium phosphate ceramic particles in vitro. Biomaterials 1990;11: 509–512.
9. S. Oh, N. Oh, M. Appleford, J.L. Ong. Bioceramics for tissue engineering applications – a review. Am. J. Biochemistry and Biotechnology 2006;2: 49–56.
10. P. Quinten Ruhé. Calcium phosphate ceramics for tissue engineering. In: Tissue Engineering and Artificial Organs. Taylor & Francis Group, LLC, 2006.
11. S.V. Dorozhkin, M. Epple. Biological and medical significance of calcium phosphates. Angew. Chem. Int. Ed. 2002;41: 3130–3146.
12. J.F. de Oliveira, A.M. Rossi. Effect of process parameters on the characteristics of porous calcium phosphate ceramics for bone tissue scaffolds. Artificial Organs 2004;27(5): 406–411.
13. S. Raynaud, E. Champion, D. Bernache-Assollant, P. Thomas. Calcium phosphate apatites with variable Ca/P atomic ratio I: synthesis, characterization and thermal stability of powders. Biomaterials 2002;23: 1065–1072.
14. K. de Groot. Effect of porosity and physicochemical properties on the stability, resorption, and strength of calcium phosphate ceramics. Annals New York Academy of Science 1998; 227–233.
15. G. Gregori, H.-J. Kleebe, H. Mayr, G. Ziegler. EELS characterization of β-tricalcium phosphate and hydroxyapatite. Journal of the European Ceramic Society 2006;26: 1473–1479.
16. E. Peon, G. Fuentes, J.A. Delgado, L. Morejon. Preparation and characterization of porous blocks of synthetic hydroxyapatite. Latin American Applied Research 2004;34: 225–228.
17. A. Behnamghader, N. Bagheri, B. Raissi, F. Moztarzadeh. Phase development and sintering behavior of biphasic HA-TCP materials prepared from hydroxyapatite and bioactive glass. Journal of Material Science: Materials in Medicine 2008;19: 197–201.
18. K. Wang, C.P. Ju, J.H. Chern Lin. Effect of doped bioactive glass on structure and properties of sintered hydroxyapatite. Mater. Chem. Phys. 1998;53: 138–149.
19. J.C. Knowles. Development of hydroxyapatite with enhanced mechanical properties. Br. Ceram. Trans. 1994;93: 100–103.
20. F.N. Oktar, G. Göller. Sintering effects on mechanical properties of glass-reinforced hydroxyapatite composites. Ceram. Int. 2002;28: 617–621.
21. D.C. Tancred, B. McCormack, A.J. Carr. A quantitative study of the sintering and mechanical properties of hydroxyapatite/phosphate glass composites. Biomaterials 1998;19: 1735–1743.
22. M.A. Lopes, F.J. Monteiro, J.D. Santos. Glass-reinforced hydroxyapatite composites: secondary phase proportions and densification effects on biaxial bending strength. J. Biomed. Mater. Res. Appl. Biomater. 1999;48: 734–740.
23. B.D. Boyan, T.W. Hummert, D.D. Dean, Z. Schwartz. Role of material surfaces in regulating bone and cartilage cell response. Biomaterials 1996;17: 137–146.
Author:
Aliasghar BEHNAMGHADER
Institute: Biomaterials Group, Materials and Energy Research Center
Street: Alvand (Argentin Sq.)
City: Tehran (P.O. Box 14155-4777)
Country: Iran
Email: [email protected]
Fabrication of Porous Ceramic Scaffolds via Polymeric Sponge Method Using Sol-Gel Derived Strontium-Doped Hydroxyapatite Powder
I. Sopyan1, M. Mardziah2, and Z. Ahmad1
1 Department of Manufacturing and Materials Engineering, International Islamic University of Malaysia, Kuala Lumpur, Malaysia
2 Faculty of Mechanical Engineering, University of Technology MARA Malaysia, Shah Alam, Malaysia
Abstract–– Recently, the development of porous calcium phosphate ceramics has raised considerable interest. A porous structure promotes cell attachment and proliferation and provides pathways for biofluids; therefore, a high porosity with an interconnected pore structure generally favors tissue regeneration. In this work, 0, 2, 5, 10 and 15% SrHA (strontium-doped hydroxyapatite) porous scaffolds were replicated via the polymeric sponge method using sol-gel derived SrHA powders. To prepare the porous samples, the synthesized SrHA powders were mixed with distilled water and an appropriate amount of dispersing agent, followed by drying in ambient air and a specific sintering process. Morphological evaluation by FESEM revealed that the SrHA scaffolds were characterized by macro-micro interconnected porosity, which replicates the morphology of cancellous bone. Compression tests on the porous scaffolds revealed that doping 10 mol% of strontium into HA doubled the compressive strength compared to the undoped HA, reaching 1.81±0.26 MPa at 41% porosity.
of these methods, the polymeric sponge method, which involves impregnation of a polymeric sponge with slurries containing ceramic particles and appropriate additives, has received particular attention. The increasing interest in porous ceramics is associated mainly with some specific properties, such as high specific surface area, high permeability, low density and low thermal conductivity. These scaffolds are expected to have good interconnection between pores, biocompatibility and a controllable degradation rate to promote bone ingrowth and to support bone-cell attachment [2]. In this work, powders of 0, 2, 5, 10 and 15 mol% SrHA were converted into porous bodies and their chemical and physical characteristics were investigated.
Keywords— Porous calcium phosphates, interconnected pore structures, ceramic scaffolds, SrHA porous scaffolds.
A. Materials
I. INTRODUCTION With the growing demand for bioactive materials in orthopedic applications as well as in maxillofacial surgery, the use of bioceramics such as hydroxyapatite (HA) and tricalcium phosphate (TCP) as porous scaffolds has increased, primarily because of their biocompatibility, bioactivity, and osteoconduction characteristics with respect to bone tissue. For tissue regeneration in medicine, three-dimensional scaffolds with specific characteristics are required. A very important property is a high, interconnecting porosity to enable tissue ingrowth into the scaffold [1]. Pore size distribution and pore geometry should be adapted to the respective tissue. Additionally, the scaffolds should have a basic stability for handling during implantation, which is provided by ceramic scaffolds. Thus, a number of fabrication methods for the production of porous ceramic materials have been developed. Among all
II. MATERIALS AND METHODS
• Cellulosic sponge
Cellulosic sponge (Testafabrics Inc., USA) was used in this experiment to replicate the porous structure of biological bone. A polymeric sponge must be selected with suitable properties. These sponges have interconnected pores, which ensure penetration of the suspension during impregnation of the sponge. The cellulosic sponge has an inhomogeneous pore size distribution, which replicates the morphology of actual bone. A polymeric sponge was chosen because it is partly hydrophilic, which allows it to adhere to the water-based slurry containing SrHA. The pore size of the sponge determines the pore size of the final ceramic after firing. The sponge must also be able to recover its original shape and convert into a gas at a temperature below that required to fire the ceramic. Cellulose sponge satisfies these requirements, with pore sizes ranging from 100-1000 µm.
• Dispersing agent
In this experiment, an ammonium salt of polyacrylic acid (Duramax D-3005, Rohm and Haas) was used as a
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 827–830, 2011. www.springerlink.com
I. Sopyan, M. Mardziah, and Z. Ahmad
dispersing agent. Polymeric dispersing agents play a critical role in determining the properties of a wide range of materials. Many industrial processes depend on dispersants, such as industrial preparations involving slurries or suspensions. While the nature of the dispersed systems varies, the function of the dispersing agent is always to control the interaction between fine particles, often solids, suspended in a second medium, often water. Polymeric dispersing agents function by adsorbing at the surface of the suspended particles, thereby providing a protective layer that hinders particle attraction. They reduce the attractive forces between fine particles, which allows them to be more easily separated and dispersed. B. Experimental Procedures An optimized condition for preparing the porous bodies was studied. This covers the optimal conditions for HA slurry preparation, impregnation of the slurry into the cellulosic sponges, as well as thermal cycles for organic component volatilization. The procedure for the preparation of the porous samples was as follows. SrHA powder, synthesized in our laboratory by a sol-gel method, was stirred vigorously in distilled water. A proper amount of dispersing agent (Duramax D-3005, Rohm and Haas) was then added to homogenize the solution, and stirring was continued for 24 hours. Cellulosic sponge templates (Testafabrics Inc., USA) were cut to appropriate dimensions and immersed in the slurry. After soaking in the slurry, the sponges were dried in ambient air for 72 hours. The obtained green body was heat-treated to burn out the sponge at 600°C for 3 hours at a heating rate of 5°C/min. Subsequent sintering of the porous body was performed at 1300°C, at a heating rate of 5°C/min with a 3 hour holding time, followed by cooling in the furnace. Afterward, the porous scaffolds were physically and chemically characterized using FTIR, SEM and XRD. Their porosity and mechanical strength were also evaluated.
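The two-stage thermal cycle described above (burnout at 600 °C, sintering at 1300 °C, both ramped at 5 °C/min with 3 h holds) can be laid out as a simple timeline. The sketch below is illustrative only; the helper name and the 25 °C starting temperature are assumptions, not part of the original procedure.

```python
# Illustrative timeline of the two-stage thermal cycle described above.
# Each stage: ramp from the previous temperature at a fixed rate, then hold.

def thermal_cycle(stages, start_temp=25.0):
    """Return (stage_name, ramp_minutes, hold_minutes, end_temp) tuples."""
    timeline = []
    temp = start_temp
    for name, target, rate_c_per_min, hold_min in stages:
        ramp = abs(target - temp) / rate_c_per_min  # minutes to reach target
        timeline.append((name, ramp, hold_min, target))
        temp = target
    return timeline

# Burnout: 600 °C, 3 h hold; sintering: 1300 °C, 3 h hold; both at 5 °C/min.
stages = [("burnout", 600.0, 5.0, 180.0), ("sinter", 1300.0, 5.0, 180.0)]
for name, ramp, hold, end in thermal_cycle(stages):
    print(f"{name}: ramp {ramp:.0f} min to {end:.0f} °C, hold {hold:.0f} min")
```

At 5 °C/min the burnout ramp alone takes roughly two hours, which is why slow heating rates are chosen to volatilize the sponge without cracking the green body.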
The relative density of the porous scaffolds was measured by the Archimedes method. The porosity of each porous sample was determined from its actual and relative densities, with the relative density calculated by taking the theoretical density of HA as 3.156 g cm-3. The porous scaffold specimens were cylindrical (11 mm height × 11 mm diameter), i.e. with a length to diameter ratio of 1:1. A Lloyd LR10K mechanical tester with a 10 kN load cell was used for the compression tests. The crosshead speed was set at 0.5 mm/min, and the load was applied until the scaffold was crushed completely. During the compressive strength tests, the stress and strain responses of the samples were monitored. The compressive strength was calculated as the maximum load registered during the test divided by the original cross-sectional area. Five samples of each type were tested, and the average strength together with the standard deviation was calculated.
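The two calculations described above (porosity from actual vs. theoretical density, and compressive strength as maximum load over the original cross-section) can be sketched in a few lines. The 200 N load below is a hypothetical placeholder; the 1.51 g/cm3 density is the 0% SrHA value quoted in the results, used here only as a worked example.

```python
import math

# Sketch of the porosity and compressive-strength calculations above.
RHO_HA_THEORETICAL = 3.156  # g/cm^3, theoretical density of HA (as in the text)

def porosity_percent(actual_density, theoretical_density=RHO_HA_THEORETICAL):
    """Porosity from actual vs. theoretical density: (1 - rho/rho_th) * 100."""
    return (1.0 - actual_density / theoretical_density) * 100.0

def compressive_strength_mpa(max_load_n, diameter_mm):
    """Maximum load divided by the original cross-section (N/mm^2 = MPa)."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return max_load_n / area_mm2

# Worked example: an 11 mm diameter cylinder crushed at a hypothetical 200 N,
# with a measured density of 1.51 g/cm^3 (the 0% SrHA value quoted later).
print(f"porosity: {porosity_percent(1.51):.0f}%")
print(f"strength: {compressive_strength_mpa(200.0, 11.0):.2f} MPa")
```

The 1.51 g/cm3 density reproduces the ~52% porosity reported for the undoped sample, which is a useful sanity check on the density-to-porosity conversion.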
III. RESULTS A. XRD Analysis After sintering, the samples were crushed and examined by X-ray diffraction (XRD). All SrHA porous samples sintered at 1300°C showed a typical pattern of nanocrystalline apatite containing mainly the HA phase, with each sample exhibiting high crystallinity (Fig. 1). No impurities or undesired phases such as α-TCP, CaO or SrO were detected in the scaffolds. This shows that the HA phase remains stable even after being sintered at high temperature.
Fig. 1 XRD analysis of 0, 2, 5, 10 and 15% SrHA porous bodies sintered at 1300°C
B. FESEM Evaluation The pore structure of the SrHA porous samples was examined closely by image analysis using field emission scanning electron microscopy (FESEM). These porous samples were obtained after drying for 72 hours and subsequent sintering at 1300°C for 3 hours. Sintering at 1300°C may cause substantial grain growth to occur, which strongly affects the mechanical properties. Macropores of 100-900 µm in diameter exist in the body. Similar morphologies have previously been demonstrated to allow
Fig. 2 FESEM macroporosity images of 0 mol% SrHA porous bodies after sintering at 1300°C
sample 10% SrHA has a higher density compared to the rest of the samples. It should be noted that the size of the internal voids might be large enough to provide a favorable environment for bone ingrowth within the voids. From this result, it can be concluded that after doping HA with Sr, the particles densify more progressively, as shown by 10% Sr doping. In this case, Sr acts as a sintering additive that improves the sintering behavior of HA, leading to improvement in mechanical strength. On the other hand, the 15% SrHA porous structure is no longer interconnected, indicating a very low degree of fusion between the individual particles and leading to higher porosity and lower density.
Fig. 3 FESEM macroporosity images of 10 mol% SrHA porous bodies after sintering at 1300°C
osteoblast penetration and physiological fluid permeation throughout the whole porous apatite scaffold [3]. The images of the porous 0 and 10% SrHA samples are shown in Fig. 2 and Fig. 3, respectively; the 10% SrHA sample shows more completely densified pore walls, which contributed to its improved mechanical behavior. On the contrary, the FESEM micrograph of porous 0% SrHA shows that the sample has bigger, roundish pores due to poor densification behavior during the sintering process. As a result, the density was also low (1.51 g/cm3) and the percentage of porosity higher (52%). Bigger particles result in poorer contacts between the particles and, as a consequence, impede the sintering process, which leads to lower fracture strength of the material. It is well known that the mechanical strength of a scaffold generally decreases as its porosity increases. A high-porosity implant, however, will favor faster bone ingrowth; thus an optimum balance between porosity and strength must be achieved. Fig. 4 shows that the undoped HA porous sample has more voids compared to 10% SrHA. The same observation can be made for sample 2% SrHA. This also establishes that
Fig. 4 FESEM micrographs of (a) 0, (b) 2, (c) 10 and (d) 15 mol% SrHA porous bodies after sintering at 1300°C
C. Compressive Strength of the SrHA Porous Scaffolds
The compressive strength was measured on the porous samples at a crosshead speed of 0.5 mm/min. The highest compressive strength recorded was 2.11 MPa for 10% SrHA, while the lowest was 0.23 MPa for 15% SrHA. The results show that incorporation of 10% Sr into HA gives a higher average compressive strength of 1.81±0.26 MPa compared to that of 0% SrHA with 0.87±0.15 MPa. According to Teixeira [4], improved compressive strength can be achieved by the addition of elements such as bioglasses or polymers into the initial ceramic slurry, contributing to a more resistant scaffold. Even though the compressive strength achieved in this study is still low compared to human cancellous bone, it has been reported that bone ingrowth after 3 months of implantation could increase the mechanical strength from 2-
20 MPa [5], resulting in improved in vivo mechanical performance. Thus, it would be appropriate to say that the 10% SrHA porous bodies produced in this experiment show acceptable compressive strength for human cancellous bone implants while retaining considerable porosity, obtained by the simplified sponge method. On the other hand, 15 mol% incorporation of Sr induced a further loss in crystallinity, which may be attributed to the destabilizing effect of higher Sr additions on the apatite structure. This explains the dramatic reduction of the mechanical strength of the 15% SrHA porous bodies to 0.48±0.16 MPa, as shown in Fig. 5. Its calculated percentage of porosity was also higher compared to the other samples, consistent with the fact that higher porosity tends to give lower compressive strength.
since they have less ability to support bone ingrowth and therefore bond with living tissue only with difficulty. The samples were evaluated under compressive mechanical testing, with an average compressive stress of 1.81±0.26 MPa for 10% SrHA. In this work, it was found that incorporation of 10 mol% Sr increased the compressive strength of the porous bodies by a factor of about two compared to the undoped HA at 41% porosity. This is attributed to the substitution of smaller Ca atoms by bigger Sr atoms, resulting in a reduced crystal size that promotes better densification of the porous bodies throughout the sintering process. However, further increasing the Sr content to 15% was no longer desirable, since this high Sr concentration had a destabilizing effect on the apatite structure, which subsequently weakens the porous scaffold.
REFERENCES
Fig. 5 Graph of compressive strength vs mol% of Sr concentration
1. Miao X, Sun D (2010) Graded/gradient porous biomaterials. Materials 3: 26–47.
2. Jo IH, Shin KH, Soon YM, Koh YH, Lee JH, Kim HE (2009) Highly porous hydroxyapatite scaffolds with elongated pores using stretched polymeric sponges as novel template. Materials Letters 63: 1702–1704.
3. Landi E, Logroscino G, Proietti L, Tampieri A, Sandri M, Sprio S (2007) Biomimetic Mg-substituted hydroxyapatite: from synthesis to in vivo behaviour. J Mater Sci: Mater Med 19: 239–247.
4. Teixeira S, Rodriguez MA, Pena P, De Aza AH, De Aza S, Ferraz MP, Monteiro FJ (2009) Physical characterization of hydroxyapatite porous scaffolds for tissue engineering. Materials Science and Engineering C 29: 1510–1514.
5. Sopyan I, Mel M, Ramesh S, Khalid KA (2007) Porous hydroxyapatite for artificial bone applications. Science and Technology of Advanced Materials 8: 116–123.
IV. CONCLUSION
Porous scaffolds were prepared via the polymeric sponge method using 0, 2, 5, 10 and 15 mol% SrHA sol-gel derived powders as the starting material and subsequent sintering at 1300°C. All porous samples showed good pore interconnectivity except for 15% SrHA, with macropores of 100-900 µm in average size and ~40-55% porosity. For medical applications, this is important since porosity represents one of the main factors defining bioactivity. Implants with low porosity have lower osteoconductivity
Author: Dr. Iis Sopyan
Institute: International Islamic University Malaysia, Faculty of Engineering, PO Box 10, Gombak
City: Kuala Lumpur
Country: Malaysia
Email: [email protected]
Hydrogel Scaffolds: Advanced Materials for Soft Tissue Re-growth
Z.A. Abdul Hamid1, A. Blencowe2, J. Palmer3, K.M. Abberton3, W.A. Morrison3, A.J. Penington3, G.G. Qiao2, and G. Stevens2
1 School of Materials & Mineral Resources Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia
2 Department of Chemical and Biomolecular Engineering, The University of Melbourne, Victoria 3010, Australia
3 The O’Brien Institute of Microsurgery, Melbourne, Australia
[email protected]
Abstract–– Hydrogels have been extensively investigated for use in tissue engineering applications as a result of their unique characteristics, including their hydrophilic nature, high affinity for water and characteristic macromolecular gel structure. In this study we have successfully synthesised novel biodegradable hydrogel scaffolds using the diepoxide poly(ethylene glycol) diglycidyl ether (PEGDGE) and the cross-linker cystamine. These components were then covalently cross-linked with a hydrophobic polymer that acts as a ‘macromolecular spring’. The incorporation of this hydrophobic α,ω-diamino polycaprolactone (PCL) secondary cross-linker led to significant increases in the mechanical strength of the hydrogels. Fused salt templates were utilized to improve the interconnectivity of the resulting pores in the hydrogels. In-vivo subcutaneous implantation revealed that the covalently cross-linked hydrogel scaffolds show enormous potential for soft tissue re-growth as a result of promising tissue regeneration and limited foreign body responses.
Hydrogels have been widely used as biomaterials due to their unique properties: they form 3D polymer networks that are insoluble but swell in water and are able to retain a large amount of water within their structures, similar to the extracellular matrices (ECM) of many tissues [2]. A major challenge has been the development of hydrogels that have high levels of porosity whilst maintaining good mechanical properties, as the pore morphology plays an important role in promoting cell growth within the scaffold. Hydrogel properties such as mechanical strength (e.g., compressive modulus (CM)), pore morphology and toxicity were investigated in order to determine their suitability for use as scaffolds for tissue engineering applications. In-vivo subcutaneous implantation of these hydrogels in rats was also conducted to evaluate any ‘foreign body response’ (FBR) towards the hydrogels.
Keywords–– Hydrogel scaffolds, soft-tissue engineering, in-vivo subcutaneous implantation.
II. MATERIALS AND METHODS
I. INTRODUCTION In designing scaffolds for tissue engineering applications, the material needs to fulfil various criteria; it should: (i) possess interconnecting pores of appropriate size to enhance tissue integration and vascularisation, (ii) have controlled biodegradability or bioresorbability so that tissue will eventually replace the scaffold, (iii) have appropriate surface chemistry to favour cellular attachment, differentiation and proliferation, (iv) possess adequate mechanical properties to match the specific site of implantation and handling, (v) be non-toxic to the biological environment and not induce a foreign body response, and (vi) be easily fabricated into various shapes and sizes [1]. The use of three-dimensional (3D) scaffolds provides the framework for cells to attach, proliferate and form extracellular matrix (ECM).
Polyethylene glycol diglycidyl ether (PEGDGE), cystamine (Cys), ε-caprolactone (ε-CL), p-toluenesulfonic acid monohydrate (TSA), 4-nitrophenyl chloroformate (NPC), methanol (MeOH), pyridine, stannous octanoate, tetrahydrofuran (THF), dimethylsulfoxide (DMSO), dichloromethane (DCM), sodium chloride (NaCl) and 1,4-cyclohexanediol were all purchased from Sigma Aldrich and used as received. Toluene was distilled from CaH2. A. Synthesis of α,ω-dihydroxyl PCL: ε-CL (20.00 g, 175.22 mmol), 1,4-cyclohexanediol (0.14 g, 1.17 mmol), stannous octanoate (0.47 g, 1.17 mmol) and anhydrous toluene (45 ml) were placed in a round-bottom flask fitted with a condenser under an argon atmosphere. The reaction mixture was then heated at 110 °C for 24 h before being diluted with THF and precipitated into cold MeOH. The precipitated polymer was collected by filtration and dried in vacuo (0.05 mmHg) to afford a white powder (19.44 g, 97%).
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 831–835, 2011. www.springerlink.com
Z.A. Abdul Hamid et al.
B. Synthesis of TSA-EDA salt: TSA (5.02 g, 26.29 mmol) and ethylenediamine (EDA) (1.58 g, 26.29 mmol) were dissolved in DCM (20 mL) and stirred at room temperature for 2 h. The resulting precipitate was collected by filtration, washed with DCM and dried in vacuo (0.05 mmHg) to afford a white powder (4.61 g, 92%). C. Synthesis of α,ω-dinitrophenylcarbonate PCL: The procedure was carried out in identical fashion to that reported in the literature [3], using α,ω-dihydroxyl PCL (2.60 mmol, 1 equiv.), NPC (3 equiv.) and pyridine (5 equiv.) dissolved in DCM. The reaction mixture was precipitated into cold MeOH, and the resulting precipitate was collected by filtration and dried in vacuo (0.05 mmHg) to afford a white powder (92%). D. Synthesis of α,ω-diamino PCL: α,ω-Dinitrophenylcarbonate PCL (10.01 g, 1.82 mmol) and the TSA-EDA salt (0.85 g, 3.64 mmol) were dissolved in THF and stirred for 24 h at room temperature. The reaction mixture was then filtered through basic alumina (Al2O3). The filtrate was then precipitated into cold MeOH and the resulting precipitate was collected by filtration and dried in vacuo (0.05 mmHg) to afford a white powder (8.16 g, 82%). E. Synthesis of hydrogel: Different amounts of α,ω-diamino PCL (0.002, 0.004, 0.008 and 0.016 mmol) were added to the hydrogel formulation to investigate the effect upon the mechanical properties of the hydrogel. Hydrogels were prepared at Cys/PEGDGE mole ratios of 0.5 and 0.75 and a fixed PEGDGE concentration (1.2 M). PEGDGE, Cys and α,ω-diamino PCL were dissolved in DMSO in a glass vial. The mixture was then heated at 70 °C for 24 h to afford the hydrogel, which was removed from the glass vial. Porous hydrogels were also produced using fused salt templates with 100-300 μm and 300-600 μm salt sizes, using the same conditions as for the non-porous hydrogels [4,5]. F. Compression test of bulk hydrogel: Hydrogels were completely swollen in water and then cut into 6 × 8 mm (diameter × height) discs.
The hydrogel discs were compressed until they fractured using an Instron Microtester 5848 to determine the compressive modulus of the hydrogels. The testing speed was 0.1 mm/s and a 100 N static load cell was used. G. Swelling Measurements: The dried hydrogels were immersed in distilled water for 48 h and removed periodically for weighing until the mass remained constant. Equation (1) was used to determine the percentage equilibrium swelling content (%ESC) of the hydrogels:

%ESC = ((Ws − Wd) / Wd) × 100   (1)

where Ws is the swollen weight and Wd is the dry weight.
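Equation (1) above translates directly into code. The sketch below is illustrative; the 0.20 g and 1.70 g weights are made-up example values chosen so that the result matches the 750% ESC reported later for the large-pore hydrogels.

```python
# Direct implementation of Eq. (1); the weights below are made-up examples.

def esc_percent(swollen_weight, dry_weight):
    """Percentage equilibrium swelling content: ((Ws - Wd) / Wd) * 100."""
    return (swollen_weight - dry_weight) / dry_weight * 100.0

# A hypothetical dry disc of 0.20 g swelling to 1.70 g at equilibrium
# corresponds to a 750% ESC.
print(round(esc_percent(1.70, 0.20)))
```

Note that %ESC is normalised by the dry weight, so values well above 100% simply mean the gel holds several times its own dry mass in water.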
H. Pore Morphology: The pore morphology of the non-porous and porous hydrogels was examined using cryo-SEM. I. In-vivo subcutaneous implantation: The experiments were conducted by implanting the hydrogels into rats. The implants were left in place for 2 or 8 weeks before being removed. The removed implants were subsequently analysed histologically (Scheme 1).
Scheme 1 In-vivo subcutaneous implantation procedures of the hydrogels in rats
III. RESULTS AND DISCUSSION Scheme 2 illustrates the synthetic route used to synthesize the α,ω-diamino PCL cross-linker.
Scheme 2 Synthetic route used to synthesize the α,ω-diamino PCL cross-linker
Figure 1(A), (B) and (C) show the 1H NMR spectra obtained for the α,ω-dihydroxyl PCL, α,ω-dinitrophenylcarbonate PCL and α,ω-diamino PCL cross-linker, respectively, which confirm the successful end-group transformations. Typical resonances for the PCL backbone are present in all the spectra and are labeled as a, b and c. A
small resonance at δH 3.6 ppm in the α,ω-dihydroxyl PCL spectrum (see d, Figure 1(A)) corresponds to the methylene protons adjacent to the hydroxyl end-groups. In the spectrum of the α,ω-dinitrophenylcarbonate PCL (Figure 1(B)), three new resonances are observed at δH 8.3, 7.4, and 4.3 ppm (e, f and g, respectively) corresponding to the aromatic and methylene protons present on the end-groups [3]. In the spectrum of the α,ω-diamino PCL (Figure 1(C)), the resonances due to the nitrophenyl groups have been replaced by resonances at δH 8.1, 3.0 and 2.8 ppm (h, i and j, respectively) corresponding to carbamate N-H protons and methylene protons adjacent to amides and amines present in the end-groups.
Fig. 2 % ESC for porous Cys/PEGDGE hydrogels prepared at Cys/PEGDGE ratios of 0.5 and 0.75 ([PEGDGE] = 1.2 M) using small (100-300 µm) and large (300-600 µm) salt sized templates
The incorporation of α,ω-diamino PCL (0.004 mmol) into the hydrogel network at a Cys/PEGDGE ratio of 0.5 resulted in slightly lower %ESC values compared to hydrogels prepared using identical formulations in the absence of PCL (Figure 3). The decrease in %ESC can be accounted for by the water repulsion caused by the hydrophobic PCL component incorporated throughout the hydrogel network. Porous Cys/PEGDGE/PCL hydrogels with small pore sizes possessed lower %ESC values than those with large pore sizes (380 and 720%, respectively). In both cases, large-pore hydrogels, with or without addition of the diamino PCL, swelled more as a result of their higher internal void volumes.
Fig. 1 1H NMR spectra of (A) α,ω-dihydroxyl PCL, (B) α,ω-dinitrophenylcarbonate PCL and (C) α,ω-diamino PCL
A series of non-porous and porous hydrogels were then synthesized by reacting the α,ω-diamino PCL (with Mn = 5.5 kDa (P1) and Mn = 11 kDa (P2)) with PEGDGE and cystamine (Cys) (the primary cross-linker), using Cys/PEGDGE mole ratios of 0.5 and 0.75. Various amounts (0.002, 0.004, 0.008, 0.016 mmol) of P1 and P2 were added in order to investigate their effect upon the mechanical strength of the resulting hydrogels. The %ESC results for porous hydrogels made at a Cys/PEGDGE mole ratio of 0.5 give lower values compared to those prepared at a ratio of 0.75, as expected (Figure 2). This is due to the higher cross-linking density of hydrogels prepared at a ratio of 0.5, which restricts the swelling [6]. The larger pore sizes allow more water to penetrate and be retained by the hydrogel scaffolds. At a 0.5 mole ratio, larger pores gave higher swelling (750%) compared to hydrogels with small pores (400%).
Fig. 3 Comparison of %ESC for porous Cys/PEGDGE hydrogels (Cys/PEGDGE = 0.5, [PEGDGE] = 1.2M) prepared without and with the addition of PCL (0.004 mmol)
Investigation of the mechanical strength of the hydrogels was carried out using an Instron Microtester. The results (Figure 4) show that the addition of α,ω-diamino PCL into the hydrogel networks generally leads to an increase in the compressive modulus of the resulting hydrogels. The highest compressive modulus (K) was obtained when 0.004 mmol of P1 or P2 was used with a Cys/PEGDGE mole ratio of 0.5. This suggests, along with the swelling results (Figure 4), that the optimum addition of α,ω-diamino PCL is 0.004 mmol. When greater than 0.004 mmol is used it is
hypothesized that aggregation of the α,ω-diamino PCL occurs during hydrogel formation, thereby causing a reduction in the mechanical strength. The hydrogels prepared with P1 gave higher K values than those prepared with P2, which was attributed to the difference in molecular weight. At higher molecular weights the chance of aggregation occurring increases, leading to a decrease in the mechanical strength of the resulting hydrogels (Figure 4).

Fig. 5 Compressive modulus (K) values of porous hydrogels prepared at various PCL loadings using small (100–300 µm) and large (300–600 µm) salt templates; [PEGDGE] = 1.2 M, Cys/PEGDGE = 0.5, PCL Mn = 5.5 (P1) and 11 kDa (P2)
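The compressive modulus K is commonly taken as the slope of the initial linear region of the engineering stress-strain curve from such compression tests. A sketch under that assumption (the fitting window and specimen geometry below are hypothetical; the paper does not report its analysis settings, and the data here are synthetic):

```python
import numpy as np

def compressive_modulus_kpa(force_n, displacement_mm, area_mm2, height_mm,
                            strain_window=(0.05, 0.15)):
    """Slope of the stress-strain curve over an initial strain window.

    Assumption: K is a linear fit over `strain_window`; the paper does
    not specify its fitting range.
    """
    force = np.asarray(force_n, dtype=float)
    disp = np.asarray(displacement_mm, dtype=float)
    stress_kpa = force / area_mm2 * 1e3   # N/mm^2 (MPa) -> kPa
    strain = disp / height_mm             # dimensionless
    lo, hi = strain_window
    mask = (strain >= lo) & (strain <= hi)
    slope, _ = np.polyfit(strain[mask], stress_kpa[mask], 1)
    return slope

# Synthetic linear data with stress = 315 kPa * strain (illustrative only)
strain = np.linspace(0, 0.2, 50)
area, height = 100.0, 5.0                 # hypothetical specimen, mm^2 and mm
force = 315e-3 * strain * area            # back-computed force in N
print(round(compressive_modulus_kpa(force, strain * height, area, height), 1))
```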
Fig. 4 Compressive modulus (K) values of non-porous hydrogel prepared at various PCL loadings; [PEGDGE] = 1.2 M, Cys/PEGDGE = 0.5 and 0.75, PCL Mn = 5.5 (P1) and 11 kDa (P2)
Pore morphology modification of the hydrogels is important in tissue engineering applications, especially for cell proliferation and growth. Pore sizes in the range of 100–600 μm are needed to improve the transport of nutrients and waste within the hydrogel scaffold, thereby enhancing cell proliferation. In this research, pore morphology modification was carried out using fused salt templates with large (300–600 μm) and small (100–300 μm) salt sizes. As for the non-porous Cys/PEGDGE hydrogels, the same trends were observed under increased PCL loading for the porous hydrogels, with the K values increasing initially up to a PCL loading of 0.004 mmol and then gradually decreasing for higher loadings (Figure 5). The optimum K value (315 kPa) was obtained for hydrogels prepared with small sized salt templates, using PCL P1 at a loading of 0.004 mmol; relative to the same hydrogel prepared in the absence of PCL, this corresponds to a 35 % increase in K. Compared to the non-porous hydrogels, the K values of the porous hydrogels were found, on average, to be ca. 20 times lower, whereas the %ε had increased by ca. 2 times (Figure 6). Evidently, modification of the pore structure of the hydrogels with fused salt templates results in softer hydrogels compared to the rigid and stiff non-porous hydrogels. This provides materials with higher elasticity under compressive force and therefore higher %ε values. Thus, the porous hydrogels are more flexible and more likely to withstand the physical stress put upon them when implanted.
Fig. 6 % Strain at break (%ε) values of porous hydrogels prepared at various PCL loadings using small (100–300 µm) and large (300–600 µm) salt templates; [PEGDGE] = 1.2 M, Cys/PEGDGE = 0.5, PCL Mn = 5.5 (P1) and 11 kDa (P2)
The pore morphology of the swollen hydrogels was studied using Cryo-SEM, which revealed evidence of a macroporous structure throughout the hydrogels (Figure 7). In addition, interconnected micropores within the scaffold walls were also observed. The Cryo-SEM images show that the swollen pore sizes are in the range of 300–600 µm. Hydrogels with sufficient pore size and suitable pore morphology can ultimately improve the transport of nutrients and wastes within the hydrogels, thus enhancing cell proliferation.
Fig. 7 Cryo-SEM images of swollen porous Cys/PEGDGE/PCL hydrogel scaffold (HG2L) prepared with large sized salt template (300–600 µm)
Hydrogel Scaffolds: Advanced Materials for Soft Tissue Re-growth
From the two-week in vivo subcutaneous implantation results (Figure 8), it was evident that no major inflammation had occurred. The formation of tissue and blood vessels within the interconnected pores of the hydrogels was observed. This indicates that the hydrogels with improved and interconnected pores have promoted the regeneration of new tissue.
ACKNOWLEDGEMENTS
ZAAH acknowledges the SLAB Fellowship (The Ministry of Higher Education, Malaysia and Universiti Sains Malaysia). The authors also thank Melbourne Ventures Pty. Ltd., The University of Melbourne, for funding (Growing Innovation Fund (GIF)), Dr Andrea O'Connor (Particulate Fluids Processing Centre (PFPC), The University of Melbourne) for use of facilities, and Sabina Zahirovic (Melbourne Ventures Pty. Ltd.) for helpful discussion.
Fig. 8 Cystamine/PEGDGE/PCL hydrogels stained with Hematoxylin and Eosin (H&E)
IV. CONCLUSION

In this work, we have successfully synthesized novel biocompatible and biodegradable hydrogel scaffolds with improved mechanical properties and enhanced pore morphology (interconnected macro- and micro-porous structures). An elastic polymer component, PCL [7], was covalently incorporated into the network upon hydrogel formation and provided significant improvement in the mechanical properties. Optimization of the hydrogel formulation revealed that the addition of only 0.004 mmol of α,ω-diamino PCL (Mn = 5.5 kDa) was sufficient to enhance the mechanical properties without significantly compromising the hydrophilic nature of the hydrogels. Hydrogels prepared with small sized salt templates (100–300 µm) gave better mechanical properties than those prepared with large sized salt templates (300–600 µm). The hydrogel scaffolds also showed a limited foreign body response (FBR) and therefore have potential for soft-tissue engineering applications.
REFERENCES
1. Sachlos, E.; Czernuszka, J.T. (2003) Making tissue engineering scaffolds work. Review on the application of solid freeform fabrication technology to the production of tissue engineering scaffolds. European Cells and Materials, 5:29-40.
2. LaPorte, R.J. (1997) Hydrophilic Polymer Coatings for Medical Devices. Technomic Publishing Company, Lancaster, PA.
3. Prime, E.L.; Cooper-White, J.J.; Qiao, G.G. (2006) Coupling hydrophilic amine-containing molecules to the backbone of poly(ε-caprolactone). Aus. J. Chem., 59:1-5.
4. Murphy, W.L.; Dennis, R.G.; Kileny, J.L.; Mooney, D.J. (2002) Salt fusion: an approach to improve pore interconnectivity within tissue engineering scaffolds. Tissue Eng., 8:43-52.
5. Gao, J.; Crapo, P.M.; Wang, Y. (2006) Macroporous elastomeric scaffolds with extensive micropores for soft tissue engineering. Tissue Eng., 12:917-925.
6. Kwok, A.Y.; Qiao, G.G.; Solomon, D.H. (2004) Interpenetrating amphiphilic polymer networks of poly(2-hydroxyethyl methacrylate) and poly(ethylene oxide). Chem. Mater., 16:5650-5658.
7. (a) Fromstein, J.D.; Woodhouse, K.A. (2002) Elastomeric biodegradable polyurethane blends for soft tissue applications. J. Biomater. Sci. Polymer Edn., 13:391. (b) Skarja, G.A.; Woodhouse, K.A. (2001) In vitro degradation and erosion of degradable, segmented polyurethanes containing an amino acid-based chain extender. J. Biomater. Sci. Polymer Edn., 12:851.
Process Optimization to Improve the Processing of Poly (DL-lactide-co-glycolide) into 3D Tissue Engineering Scaffolds
M.E. Hoque1,*, Y.L. Chuan1, I. Pashby1, A.M.H. Ng2, and R. Idrus2
1 Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham Malaysia Campus
2 Tissue Engineering Centre, Universiti Kebangsaan Malaysia
Abstract— Rapid Prototyping (RP) technology is widely used in diverse areas including tissue engineering scaffold (TES) manufacturing. However, the quality of manufacturing is significantly affected by the properties of the materials used and the process parameters of the RP system. This study investigates and optimizes the process parameters of the desktop robot-based rapid prototyping (DRBRP) technique for processing biopolymer into 3D scaffolds. TES can be freeform fabricated according to various design configurations at room temperature with defined lay-down angle, filament diameter and filament distance. To achieve the most accurate scaffold dimensions, the extrusion pressure, liquefier temperature and deposition speed were controlled. Light optical microscopy was employed to investigate and optimize suitable process parameters. The process parameters were found to influence the scaffold morphology; however, the pore diameters of the built scaffolds were in the viable range for tissue engineering applications. An optimal set of process parameters was determined from this study.

Keywords— Tissue engineering, scaffold, process parameter, rapid prototyping, extrusion.
I. INTRODUCTION

The loss or failure of an organ or tissue is one of the most frequent, devastating, and costly problems in healthcare. Current treatment modalities include transplantation of organs, surgical reconstruction, use of mechanical devices, or supplementation of metabolic products [1, 2]. Tissue engineering (TE) applies the principles and methods of engineering, materials science, and cell and molecular biology towards the development of viable substitutes for lost tissue or organ function: constructs containing specific populations of living cells that restore, maintain, or improve the function of human tissues. TE holds the promise of replacing malfunctioning in vivo tissue with regenerated tissue that is designed and constructed to meet the needs of each individual patient [3, 4]. There is always a great challenge associated with the modeling, design and fabrication of TE scaffolds (TES) to meet various biological and biophysical requirements in
regenerating tissues, e.g. designing load-bearing scaffolds for bone and cartilage tissue applications [5, 6]. Bone and cartilage tissue scaffolds usually have complex architecture, porosity, pore size and shape, and interconnectivity in order to provide the needed structural integrity, strength, transport, and an ideal micro-environment for the growth of cells and in-growing tissue [7]. Scaffolds play a significant role in the success of TE: they are porous structures that provide a temporary platform for cells to adhere to [5]. A scaffold is a suitable carrier structure that needs to be combined with the cells to hasten the healing process of damaged tissues [8]. From a biomechanical point of view, the fabricated scaffold should mimic the biomechanical properties of the organ or tissue to be replaced. The scaffold behaves as a synthetic extracellular matrix, permitting the cell adhesion, proliferation and differentiation required for surgical implantation [9]. There is a broad list of materials currently used in the fabrication of TES. A TES must be biocompatible and biodegradable, and its material must be processable into porous structures. The selection of the material used in the manufacture of TES depends on the proposed tissue type, the processing technique employed and the intended application [10-12]. Polymers offer great benefits in the design and processing [13, 14] of TES. With a polymer scaffold, cells do not interact with proteins that attach to some material surfaces, particularly natural polymers [15]. Moreover, synthetic polymers can be chemically modified to match a wide range of properties required in biomedical applications, such as mechanical properties, diffusivity, density and hydrophilicity. RP techniques are particularly useful for TE since they allow excellent reproducibility and the production of almost any kind of structure, within the limitations of each technique used. It is possible to design a structure that mimics the natural tissue to be replaced [16].
RP techniques include selective laser sintering, fused deposition modeling, stereolithography, electron beam melting and three-dimensional (3D) printing [17]. RP techniques offer rigorous control over porosity, pore size, stiffness and permeability. RP scaffolds are usually designed to have a fully interconnected pore structure. RP techniques also allow the investigation of the effect of scaffold geometry on
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 836–840, 2011. www.springerlink.com
cell behavior for further optimization of the scaffold design [18]. It is here that RP offers the possibility of reconciling such different requirements in one scaffold, because it provides the freedom to vary structural parameters independently of the non-variable bulk mechanical properties of the material used [19, 20].
II. MATERIALS AND METHODS

A desktop-robot-based rapid prototyping (DRBRP) melt-extrusion technique was used to fabricate the scaffolds. This system provides three-axis movement (Fig. 1) along the x-, y- and z-directions. The scaffold processing starts with generating a 3D computational model, which is then sliced into 2D layers by the slicing software. Each 2D sliced layer is composed of filaments with user-defined lay-down angle (θ), filament diameter (D) and filament distance (L) (Fig. 2). Thermoplastic polymer was melted in a metallic chamber by electrical heating and extruded through a minute nozzle by means of compressed air pressure to build the scaffold. The deposition of molten polymer forms a strand
Fig. 1 DRBRP system demonstrating the coordinate directions of scaffold fabrication
of solid polymer, called a filament, that ultimately builds a layer of the designed scaffold. The DRBRP technique changes the deposition direction of every alternate layer by θ, which produces a woven-type pattern. The extrusion process is repeated until the building of the scaffold is completed. Poly(DL-lactide-co-glycolide) (PLGA) with molecular weight (Mw) of approx. 50,000–75,000 g/mol and a lactide:glycolide ratio of 85:15 (Fig. 3) was used to build scaffolds of 20.0 mm × 20.0 mm × 5.0 mm dimensions with two lay-down patterns (0/90° and 0/60/120°).
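The layer-by-layer construction described above can be sketched as a simple path plan: each layer is a set of parallel filament lines at that layer's lay-down angle, spaced by the filament distance L, and the angle cycles through the pattern from layer to layer. This is a hypothetical illustration of the geometry, not the DRBRP control software:

```python
import math

def layer_angles(pattern_deg, n_layers):
    """Lay-down angle for each layer, cycling through the pattern.

    E.g. pattern [0, 90] -> 0, 90, 0, 90, ...; [0, 60, 120] cycles in threes.
    """
    return [pattern_deg[i % len(pattern_deg)] for i in range(n_layers)]

def filament_lines(angle_deg, size_mm, spacing_mm):
    """Endpoints of parallel filaments covering a square layer of side size_mm.

    Lines run at angle_deg and are separated by spacing_mm (the
    centre-to-centre filament distance L). Simplified: overhang at the
    corners is not clipped to the square.
    """
    a = math.radians(angle_deg)
    d = (math.cos(a), math.sin(a))      # direction along a filament
    n = (-math.sin(a), math.cos(a))     # normal, stepping between filaments
    half = size_mm * math.sqrt(2) / 2   # cover the square's diagonal
    cx = cy = size_mm / 2
    lines, offset = [], -half
    while offset <= half:
        px, py = cx + n[0] * offset, cy + n[1] * offset
        lines.append(((px - d[0] * half, py - d[1] * half),
                      (px + d[0] * half, py + d[1] * half)))
        offset += spacing_mm
    return lines

print(layer_angles([0, 90], 4))            # [0, 90, 0, 90]
print(len(filament_lines(0, 20.0, 1.0)))   # filament count for one 20 mm layer
```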
Fig. 3 Poly(DL-lactide-co-glycolide) PLGA

The porous morphologies of the scaffolds (filament diameter and pore size) (Fig. 4) were studied by light optical microscopy (×25).
Fig. 4 Pore size (P) definition for lay-down patterns of 0/90° and 0/60°
Inappropriate process parameter settings may lead to the formation of scaffolds of poor quality and may waste material and building time. The fabrication process parameters of the DRBRP system are as follows: (1) Liquefier temperature: the temperature attained in the molten polymer at the time of extrusion. It is controlled by the band heater and can be increased or decreased as necessary. (2) Extrusion pressure: the pressure with which the molten polymer is extruded. It is controlled by the pneumatic system. (3) Deposition speed: the speed at which the extruding nozzle moves. It is controlled by the DRBRP system.

Fig. 2 The construction pattern used by the DRBRP technique
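The three parameters above can be captured in a small configuration record. The defaults below are the optima reported later in this paper (165 ˚C, 4 bar, 4 mm/s); this is an illustrative sketch, not part of the DRBRP system software, and the field names are my own:

```python
from dataclasses import dataclass

@dataclass
class ExtrusionParams:
    """The three DRBRP process parameters varied in this study.

    Defaults are the reported optima for PLGA; treat them as starting
    points for a new material, not universal values.
    """
    liquefier_temp_c: float = 165.0       # band-heater set point
    extrusion_pressure_bar: float = 4.0   # pneumatic pressure
    deposition_speed_mm_s: float = 4.0    # nozzle travel speed

p = ExtrusionParams()
print(p.liquefier_temp_c, p.extrusion_pressure_bar, p.deposition_speed_mm_s)
```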
III. RESULTS AND DISCUSSION

A. Descriptive Preliminary Data on PLGA Deposition

Theoretically, the extruded filament diameter should accord with the diameter of the extruding nozzle orifice. In practice, however, the filament diameter is determined not only by the nozzle orifice but also by process parameters such as the liquefier temperature, extrusion pressure and deposition speed. This means that the accuracy of the filament diameter depends on the appropriate setting of these parameters. This study focused on the fabrication of scaffolds with filaments of the same dimension as the extruding nozzle orifice.
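The dependence of filament diameter on deposition speed follows from simple volume conservation: the volumetric flow Q forced through the nozzle equals the filament cross-section times the nozzle travel speed v, so d = √(4Q/(πv)) and slower travel yields a thicker filament. A sketch of this idealized relation (the flow rate below is a hypothetical value chosen so that 4 mm/s reproduces a 300 µm nozzle; real extrusion also involves die swell and spreading):

```python
import math

def filament_diameter_um(flow_mm3_per_s: float, speed_mm_per_s: float) -> float:
    """Predicted filament diameter from volume conservation.

    Q = v * pi * (d/2)^2  =>  d = sqrt(4Q / (pi * v)).
    Idealized: ignores die swell and post-extrusion spreading.
    """
    d_mm = math.sqrt(4.0 * flow_mm3_per_s / (math.pi * speed_mm_per_s))
    return d_mm * 1000.0  # mm -> micrometres

# Hypothetical flow rate chosen so that 4 mm/s gives a 300 um filament:
q = 4.0 * math.pi * (0.300 / 2) ** 2  # mm^3/s
for v in (3.0, 4.0, 5.0):
    print(v, round(filament_diameter_um(q, v)))  # slower speed -> thicker filament
```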
Fig. 6 Surface morphology of the extruded filament at different extrusion pressure of (a) 4 bar, (b) 5 bar and (c) 6 bar while the liquefier temperature and dispensing speed were remained constant as 160˚C and 4 mm/sec, respectively
B. Construction of TES

The initial research was to develop the filament deposition model to determine the parameters involved in filament extrusion. One of the important parameters is the polymer's liquefier temperature. When the liquefier temperature was set too high, the molten polymer was too fluid to dispense, which caused excessive dispensing of the polymer and further spreading. It might even lead to the deposition of the molten polymer as a solid object rather than a porous scaffold. Conversely, the polymer was difficult to extrude when the liquefier temperature was set too low. Therefore, appropriate setting of the liquefier temperature is essential to build a good scaffold. Through intensive iteration, the optimal liquefier temperature for the polymer PLGA was found to be 165 ˚C. Fig. 5 shows the surface morphology of the extruded filament at different liquefier temperatures. At lower liquefier temperatures the filament diameters were smaller than those obtained at the optimal temperature.
Fig. 7 Surface condition of filaments deposited at deposition speeds of (a) 3 mm/sec, (b) 4 mm/sec and (c) 5 mm/sec, under a liquefier temperature of 165 ˚C and an extrusion pressure of 4 bar
Fig. 8 The effect of process parameters on scaffold pore size; (a) liquefier temperature, (b) extrusion pressure and (c) deposition speed

Fig. 5 Surface morphology of extruded filament at liquefier temperatures of (a) 150 °C, (b) 160 °C and (c) 165 °C under extrusion pressure of 4 bar and deposition speed of 5 mm/sec

The extrusion pressure and the dispensing speed were also varied while maintaining the optimal liquefier temperature constant. Both the extrusion pressure and the dispensing speed also influenced the filament diameter. Fig. 6
shows the surface morphology of filament extruded at various extrusion pressures. The optimal extrusion pressure was found to be 4 bar. The optimal dispensing speed was found to be 4 mm/s. When the dispensing speed was slow (e.g. 3 mm/s), a large amount of molten polymer was dispensed, producing a thicker filament, while at a faster speed (e.g. 5 mm/s) a thinner filament was produced. It was observed that at the slower speed the
diameter of the filament was significantly larger (385 μm) than the actual nozzle diameter (300 μm), as shown in Fig. 7(a). The image shown in Fig. 7(b) reveals that the diameter of the filaments extruded at the dispensing speed of 4 mm/s is almost the same as the nozzle diameter. Figs. 8 and 9 show the effect of the set parameters in controlling the accuracy of the filament diameter.
high, as this can lead to over-dispensing, which interrupts the interconnectivity of the scaffold architecture by filling the pores with material (Fig. 10). Continuous layer bonding is essential to build a scaffold: the first deposited layer of the polymer must adhere to the platform, and each subsequent layer of the scaffold must then bond with the previous layer, and so on, until the scaffold building is completed [21].
IV. CONCLUSIONS
Fig. 9 The effect of process parameters on scaffold filament diameter; (a) liquefier temperature, (b) extrusion pressure and (c) deposition speed
The influences of the most important process parameters, namely the liquefier temperature, extrusion pressure and deposition speed, that control the extrusion of the polymer into a 3D scaffold were successfully investigated in this preliminary study. The results of the analysis show that these parameters had a direct influence on the extruded filament diameter, which ultimately affected the scaffold quality. The optimal values of the process parameters were found to be a liquefier temperature of 165 ˚C, an extrusion pressure of 4 bar and a dispensing speed of 4 mm/sec. Through optimization of the process parameters, scaffolds with the required characteristics could be produced. This study provides a useful and effective guideline for extrusion-based TE scaffold fabrication.
REFERENCES
Fig. 10 Imperfect filaments due to (a) over-dispensing and (b) under-dispensing of material

The solidification process of the extruded material also influences the filament diameter. Materials that solidify quickly do not allow the molten material to spread, i.e. the dimension of the deposited filament remains the same as the diameter of the filament as extruded. The appropriate selection and setting of process parameters is crucial for the successful extrusion of polymer scaffolds. Many different factors contribute to successful deposition. The extruded material must flow consistently through the nozzle when pressure is applied, to avoid inadequate dispensing. However, it is also essential that the pressure is not set too
1. Lalan, S., Pomerantseva, I., and Vacanti, J.P. (2001). Tissue Engineering and Its Potential Impact on Surgery. World Journal of Surgery 25, 1458–1466.
2. Lalan, S. (2001). Tissue engineering and its potential impact on surgery. World Journal of Surgery 25, 1458-1466.
3. Langer, R., and Vacanti, J. (1993). Tissue engineering. Science 260, 920–926.
4. Joseph, P.V., and Robert, L. (1999). Tissue engineering: the design and fabrication of living replacement devices for surgical reconstruction and transplantation. Lancet 354, 32-35.
5. Badylak, S.F. (2007). The extracellular matrix as a biologic scaffold material. Biomaterials 28, 3587-3593.
6. Drury, J.L., and Mooney, D.J. (2003). Hydrogels for tissue engineering: scaffold design variables and applications. Biomaterials 24, 4337-4351.
7. Sun, W., Starly, B., Nam, J., and Darling, A. (2005). Bio-CAD modeling and its applications in computer-aided tissue engineering. Computer-Aided Design 37, 1097-1114.
8. Chuenjitkuntaworn, B., Inrung, W., Damrongsri, D., Mekaapiruk, K., Supaphol, P., and Pavasant, P. Polycaprolactone/hydroxyapatite composite scaffolds: Preparation, characterization, and in vitro and in vivo biological responses of human primary bone cells. Journal of Biomedical Materials Research Part A 94A, 241-251.
9. Lebourg, M., Sabater Serra, R., Más Estellés, J., Hernández Sánchez, F., Gómez Ribelles, J., and Suay Antón, J. (2008). Biodegradable polycaprolactone scaffold with controlled porosity obtained by modified particle-leaching technique. Journal of Materials Science: Materials in Medicine 19, 2047-2053.
10. Cima, L.G., Vacanti, J.P., Vacanti, C., Ingber, D., Mooney, D., and Langer, R. (1991). Tissue engineering by cell transplantation using degradable polymer substrates. J Biomech Eng-T ASME 113, 143-151.
11. Thomson, R., Wake, M., Yaszemski, M., and Mikos, A. (1995). Biodegradable polymer scaffolds to regenerate organs. In Biopolymers II, pp. 245-274.
12. Leong, K.F., Cheah, C.M., and Chua, C.K. (2003). Solid freeform fabrication of three-dimensional scaffolds for engineering replacement tissues and organs. Biomaterials 24, 2363-2378.
13. Melchels, F.P.W., Feijen, J., and Grijpma, D.W. (2010). A review on stereolithography and its applications in biomedical engineering. Biomaterials 31(24), 6121-6130.
14. Harley, B.A.C., Kim, H.-D., Zaman, M.H., Yannas, I.V., Lauffenburger, D.A., and Gibson, L.J. (2008). Microarchitecture of Three-Dimensional Scaffolds Influences Cell Migration Behavior via Junction Interactions. Biophysical Journal 95, 4013-4024.
15. Tanaka, M., and Sackmann, E. (2005). Polymer-supported membranes as models of the cell surface. Nature 437, 656-663.
16. Daily, S. (2010). 3-D Scaffold Provides Clean, Biodegradable Structure for Stem Cell Growth. In Science News (University of Washington: Science Daily).
17. Landers, R., Hübner, U., Schmelzeisen, R., and Mülhaupt, R. (2002). Rapid prototyping of scaffolds derived from thermoreversible hydrogels and tailored for applications in tissue engineering. Biomaterials 23, 4437-4447.
18. Starly, B., Lau, W., Bradbury, T., and Sun, W. (2006). Internal architecture design and freeform fabrication of tissue replacement structures. Computer-Aided Design 38, 115-124.
19. Moroni, L., Schotel, R., Sohier, J., de Wijn, J.R., and van Blitterswijk, C.A. (2006). Polymer hollow fiber three-dimensional matrices with controllable cavity and shell thickness. Biomaterials 27, 5918-5926.
20. Moroni, L., de Wijn, J.R., and van Blitterswijk, C.A. (2006). 3D fiber-deposited scaffolds for tissue engineering: Influence of pores geometry and architecture on dynamic mechanical properties. Biomaterials 27, 974-985.
21. Maher, P.S., Keatch, R.P., Donnelly, K., and Mackay, R.E. (2009). Construction of 3D biological matrices using rapid prototyping technology. Rapid Prototyping Journal 15, 204–210.
Corresponding author:
Author: Yong Leng Chuan
Institute: The University of Nottingham Malaysia Campus
Street: Jalan Broga, Semenyih
City: Selangor
Country: Malaysia
Email: [email protected]
The Fabrication of Human Amniotic Membrane Based Hydrogel for Cartilage Tissue Engineering Applications: A Preliminary Study
I.H. Hussin1, B. Pingguan-Murphy1, and S.Z. Osman2
1 Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
2 Department of Obstetrics and Gynecology, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
Abstract— Abundant and easily obtained, the human amniotic membrane can be an important source of material for tissue engineering applications. This study aims at the production of a new 3D scaffold synthesized from known biodegradable and biocompatible materials, namely the human amniotic membrane and fibrin. Previously, several studies have been conducted using human amniotic membranes and fibrin separately as scaffolds, but none has used them in combination and in 3D form. This newly developed 3D scaffold is further investigated and evaluated with bovine chondrocytes cultured and grown on it. These constructs were then used in several tests, which include H&E and Safranin-O staining, in vitro biodegradation tests and quantitative evaluation of total DNA and glycosaminoglycan contents. The tests were evaluated at 7 and 14 days. The current results indicate good biodegradation characteristics and show that this new scaffold can support chondrocyte proliferation in 3D form. This confirms the suitability of this new scaffold for cartilage tissue engineering applications.

Keywords— Cartilage tissue engineering, 3D scaffold, fibrin, human amniotic membrane, osteoarthritis.
I. INTRODUCTION

A. Background of Research

Osteoarthritis (OA) is a chronic inflammation of the synovial joint that can lead to a permanent degenerative joint disease. It is characterized by the deterioration of the hyaline cartilage and the subchondral bone. The ability of articular cartilage to repair itself is very limited due to its avascular nature. The increase in cases of this disease worldwide has spurred many scientists and medical teams to find a rapid healing solution [1, 2]. Prosthetics and artificial organs may be introduced to replace damaged cartilage joints; however, these are not without problems, as there are risks of foreign body rejection, material degradation and infection upon implantation. Artificial joints are also unreliable as they have a limited lifespan, which causes patients to undergo repeated future surgeries to replace affected parts or implants [3]. Therefore new
innovations and alternatives are required; hence Cartilage Tissue Engineering has been introduced. Articular Cartilage Implantation (ACI) is a clinical approach to repair focal lesions on articular cartilage. This is most successful in younger patients with higher healing capabilities: aging affects the ability of chondrocytes to proliferate, and their density decreases over time [4-6]. The rapidly advancing discipline of tissue engineering has become one of the most promising solutions for tissue replacement and repair. As new as it is, cartilage tissue engineering (TE) is still under ongoing development and improvement. Effective tissue engineering requires appropriate selection of cells and scaffolds, where the latter serves as a mechanical and biological support for cell growth and functionality [7-9]. The overall deficit of organs available for transplantation has largely prompted these remarkable developments to meet the growing demand for techniques to repair and replace damaged tissues. The main goal of such applications is to create a cell-based substitute designed on a three-dimensional (3D) scaffold seeded with cells. The dimensionality offered by the scaffold is critical, as it promotes cell-to-cell interactions, enhancing cell assembly and overall tissue function. The three main components of a Tissue Engineering strategy are the cells, the scaffold/cell carrier and the bioactive molecules/signals. These components work together in producing neo-tissue [9, 10]. Commonly used scaffolds include synthetic biodegradable porous polymers or natural materials in the form of sponges, fibres or hydrogels, serving as the required 3D support for cell growth, differentiation and organization [11-13]. The advantage of synthetic scaffolds is the ability to control their properties (e.g. mechanical properties and degradation rate), while natural scaffolds are more efficient at fostering cell adherence.
Over the years, a number of clinical applications have been identified for the human amnion membrane (HAM) [14-19]. Biomaterial studies of HAM show a high concentration of collagen and hyaluronan, as well as small quantities of proteoglycans, in the matrix [20]. Although several studies have been done on the human amniotic membrane, none has explored the feasibility of fabricating it into a 3D scaffold.
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 841–844, 2011. www.springerlink.com
The human amniotic membrane possesses firm biological advantages as it is biocompatible, bioresorbable, anti-microbial, anti-fibrotic and anti-angiogenic, with good biomechanical properties [16-19, 21]. Further, it also has the special properties of being able to inhibit scar formation, being anti-inflammatory and helping in wound healing - all of which are absent in other natural or synthetic biomaterials. The aim of this study is to evaluate the potential of this newly developed scaffold to support cell proliferation in vitro, to evaluate its seeding efficiency, and to assess its ability to maintain the cells' original phenotype in vitro.

B. Challenges to the Research

A current breakthrough in cartilage tissue engineering has encouraged researchers to work harder at perfecting the results. A degradable scaffold is incorporated to act as a temporary supporting matrix/fixture for transplanted cells to restore, maintain or improve tissues. Scaffolds can be made from various types of materials, including polymers [12]. There are two main classes of polymers: synthetic and natural. The design of a scaffold plays a significant role in propagating cell growth within the scaffold. Therefore, there are several important properties that must be considered in designing them, among which are fabrication, structure, biocompatibility, biodegradability and mechanical strength [11, 13].
carefully peeled from the chorionic membrane, and the placenta was discarded afterwards. The amniotic membrane was then washed several times with sterile 0.9% NaCl solution to eliminate blood and mucus. The HAM was then cut into smaller pieces (5 × 5 cm), stored in sterile PBS with antibiotic-antimycotic solution (10,000 units Penicillin, 10,000 mg/ml Streptomycin and 25 µg/ml Amphotericin B) at 4 °C and cryopreserved. These processes are shown in Figure 1(a-e). The HAM was then pulverized into smaller pieces using a BioPulverizer (Biospect Products Inc., USA) and further processed using a Tissue Tearor (Biospect Products Inc., USA). The homogenized HAM was spun at high speed (18,000 × g) in a high-speed centrifuge, and the supernatant was gradually transferred to lower temperatures (-20 °C to -80 °C) before being processed in a freeze-dryer (Labconco, USA).
A
D
B
C
E
C. Scope of the Research There are several approaches in Tissue engineering to develop a 3D scaffold. The three main approaches are to seed the chondrocytes onto inert substrates, onto acellular matrix graft or onto a bioresorbable scaffold. The concept of a bioresorbable scaffold is a scaffold that is able to degrade over a period, preferably once the cells are conformed to integrate into native tissue of interest. However, the scope of this paper only focuses on the preliminary stage, which is creating the scaffold, seeding it with bovine chondrocytes and evaluating the degradation rate and matrix formation.
II. METHODS A.
Harvest and Preparation of HAMs
Human placentas were obtained from inform consent mothers of selected elective Caesarean-sectioned deliveries in University Malaya Medical Center. This study was approved by the Medical Ethics Committee of University Malaya Medical Center (Malaysia). The placentas were processed under stringent sterile conditions. The HAM was
Fig. 1 Preparation process of HAM extract

B. Harvesting and Culturing Cells

The primary chondrocytes were aseptically harvested from the metacarpal joints of a 12-24 month-old Bhramal Cross bovine. The cartilage chips were placed in DMEM + 20% FBS supplemented with 10% Glutamax, 10% Penstrep and 10% Hepes buffer solution prior to sequential enzymatic digestion with 20 U/ml Pronase for 1 hour and 200 U/ml Collagenase II for 16 hours. The digested cartilage was repipetted, filtered through a 70 µm filter and washed with DMEM + 20% FBS, and the resulting cell suspension was centrifuged at 2000 rpm for 5 minutes at room temperature. After the supernatant was discarded and the wash repeated twice, cells were counted with a hemacytometer. Cell viability was determined using the Trypan Blue dye (Sigma Aldrich, USA) exclusion test.
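The trypan blue count described above reduces to the standard hemacytometer calculation: each large square holds 0.1 µl, so the mean count per square times 10^4, corrected for dilution, gives cells per ml. The sketch below uses hypothetical counts and dilution factor purely for illustration.

```python
def hemacytometer_count(viable, dead, squares, dilution):
    """Viable cell density (cells/ml) and viability fraction from a
    trypan blue hemacytometer count. Each large square holds 0.1 ul,
    hence the factor of 1e4 to convert counts to cells/ml."""
    viable_per_ml = viable / squares * dilution * 1e4
    viability = viable / (viable + dead)
    return viable_per_ml, viability

# Hypothetical count: 180 unstained (viable) and 20 blue-stained (dead)
# cells over 4 large squares, sample diluted 1:2 in trypan blue.
density, viability = hemacytometer_count(viable=180, dead=20,
                                         squares=4, dilution=2)
print(f"{density:.2e} cells/ml, {viability:.0%} viable")
```

With these hypothetical counts the sketch reports 9.00e+05 cells/ml at 90% viability.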
IFMBE Proceedings Vol. 35
The Fabrication of Human Amniotic Membrane Based Hydrogel for Cartilage Tissue Engineering Applications
C. Preparation of HAM-Fibrin Construct

The new three-dimensional scaffold was fabricated by mixing the two materials in different compositions (Table 1). The fibrin gel was formed by injecting a mixture of 50 mg/ml of HAM and 50 mg/ml of fibrinogen suspension simultaneously with 10 U of thrombin-CaCl2 solution into a 70 mm x 50 mm custom-made 316L mold (Figure 2). The mold was incubated for 45 minutes (37 °C, 5% CO2) before the formed construct was transferred to a 24-well plate (Figure 3).

E. Macroscopic Observation and Histological Evaluation

The relevant constructs (n=5) were fixed with 10% neutral-buffered formalin solution overnight, followed by paraffin wax embedding. Sections of 5 µm thickness were sliced from the paraffin-embedded construct with a microtome. Histological evaluation was carried out after the sections were deparaffinized, rehydrated and stained with Hematoxylin and Eosin (H&E). In addition, biodegradation was observed over a period of 30 days: constructs (n=5) were placed in a multi-well plate and incubated at 37 °C with 5% CO2, with the growth medium changed every 2 days (Figure 5).
Table 1 Scaffold material composition

Materials                        Composition (w/v)
HAM-Fibrin *                     1:1
HAM-Fibrin                       1:0.5
HAM-Fibrin                       0.5:1
HAM-Fibrin (Negative Control)    0:1
F. Total DNA and Glycosaminoglycans (GAG) Content

Biochemical assays for the total DNA and glycosaminoglycan (GAG) contents of each sample construct (n=5) were performed. The total DNA content was measured using Hoechst 33258 dye, while the total GAG content was evaluated using the dimethylmethylene blue (DMB) assay [22]. In brief, the constructs were digested with Papainase (3.2 U/ml) for 18-24 hours at 60 °C prior to DNA and GAG content evaluation. For the total DNA content, the digested constructs were mixed with the Hoechst dye solution and the fluorescence emissions were read with an absorbance reader. For the sulphated glycosaminoglycan (GAG) content, the digested solution was mixed with the DMB reagent and read with the absorbance reader at 540 and 595 nm.
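Assays of this kind convert the raw Hoechst and DMB readings to DNA and GAG masses via standard curves, and then normalize GAG to DNA. The sketch below illustrates that calculation; the standard concentrations, signal values and sample readings are hypothetical assumptions, not data from this study.

```python
def fit_standard_curve(conc, signal):
    """Least-squares line signal = m*conc + b; returns a function that
    maps a measured signal back to a concentration."""
    n = len(conc)
    mean_c = sum(conc) / n
    mean_s = sum(signal) / n
    m = (sum((c - mean_c) * (s - mean_s) for c, s in zip(conc, signal))
         / sum((c - mean_c) ** 2 for c in conc))
    b = mean_s - m * mean_c
    return lambda s: (s - b) / m

# Hypothetical chondroitin sulfate standards (ug/ml) vs. DMB absorbance,
# and DNA standards (ug/ml) vs. Hoechst 33258 fluorescence (arbitrary units).
gag_from_a = fit_standard_curve([0, 10, 20, 40], [0.05, 0.25, 0.45, 0.85])
dna_from_f = fit_standard_curve([0, 1, 2, 4], [2.0, 52.0, 102.0, 202.0])

# Hypothetical readings for one papain-digested construct.
gag = gag_from_a(0.45)   # approx. 20 ug/ml
dna = dna_from_f(102.0)  # approx. 2 ug/ml
print(f"GAG/DNA = {gag / dna:.1f} ug/ug")
```

Reporting the result as GAG/DNA (µg/µg), as in Fig. 6, normalizes matrix production to cell number.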
Fig. 2 3D scaffold mold
III. PRELIMINARY RESULTS

A. Macroscopic Observation and Histological Evaluation

H&E staining showed positive tissue formation, observed microscopically as the density of tissue growth (Figure 4).
Fig. 3 3D constructs in a 24-well plate

D. Seeding Cells

The standard cell seeding method was followed: the cells were mixed into the optimized fibrinogen-HAM (*) suspension together with the thrombin-CaCl2 solution. The cells were seeded at a density of 3-5 x 10^6 cells per mold construct. The constructs were observed and analysed at days 7 and 14.
Fig. 4 H&E staining: (A, B) HAM-Fibrin construct at days 7 and 14; (C, D) Fibrin construct at days 7 and 14
I.H. Hussin, B. Pingguan-Murphy, and S.Z. Osman
Fig. 4 (continued)

The in-vitro degradation test showed a good degradation rate, with partial degradation observed at day 30 of the experiment (Figure 5).
A
B
Fig. 5 Partial degradation of HAM-Fibrin scaffold (A) and Fibrin (B) at day 30
B. Total DNA and Glycosaminoglycans (GAG) Content

The total DNA and glycosaminoglycan (GAG) contents increased with incubation time, with a significant difference (p < 0.05) between the latest incubation point (Day 14) and the initial incubation point (Day 7).
Fig. 6 Sulphated GAG contents of Fibrin construct and HAM-Fibrin Construct reported as GAG/DNA (µg/µg). Values are mean ± SEM of measurements obtained from the experiment
IV. DISCUSSION

In the newly formed construct we can observe the formation of lacunae containing chondrocytes with round morphological features similar to those of native cartilage. H&E staining indicates similarity to hyaline cartilage, with some positive staining of proteoglycans. These preliminary results indicate that the HAM-Fibrin construct successfully secreted cartilage-specific extracellular matrix whilst preserving the cells in their original phenotype. The initial observations relating to the degradation rate of the Fibrin and HAM-Fibrin constructs show good biodegradation at day 30. The results also showed that over a one-week period the HAM-Fibrin constructs produced a significant amount of glycosaminoglycans.
REFERENCES

[1] E. Kheir, D. Shaw, Orthopaedics and Trauma, 23 (2009) 266-273.
[2] W. Swieszkowski, B.H.S. Tuan, K.J. Kurzydlowski, D.W. Hutmacher, Biomolecular Engineering, 24 (2007) 489-495.
[3] K.C. Chan, G.S. Gill, J. Arthroplast., 14 (1999) 300-304.
[4] T.S. Hamby, S.D. Gillogly, L. Peterson, Operative Techniques in Sports Medicine, 10 (2002) 129-135.
[5] O.S. Schindler, Orthopaedics and Trauma, 24 (2010) 107-120.
[6] S. Trattnig, A. Ba-Ssalamah, K. Pinker, C. Plank, V. Vecsei, S. Marlovits, Magnetic Resonance Imaging, 23 (2005) 779-787.
[7] L. Cui, Y. Wu, L. Cen, H. Zhou, S. Yin, G. Liu, W. Liu, Y. Cao, Biomaterials, 30 (2009) 2683-2693.
[8] N. Nakamura, T. Miyama, L. Engebretsen, H. Yoshikawa, K. Shino, Arthroscopy: The Journal of Arthroscopic & Related Surgery, 25 (2009) 531-552.
[9] M. Sittinger, D.W. Hutmacher, M.V. Risbud, Current Opinion in Biotechnology, 15 (2004) 411-418.
[10] S. Levenberg, N.F. Huang, E. Lavik, A.B. Rogers, J. Itskovitz-Eldor, R. Langer, Proceedings of the National Academy of Sciences of the United States of America, 100 (2003) 12741-12746.
[11] D.W. Hutmacher, Biomaterials, 21 (2000) 2529-2543.
[12] D.W. Hutmacher, Polymers from Biotechnology, in: K.H.J. Buschow, W.C. Robert, C.F. Merton, I. Bernard, J.K. Edward, M. Subhash, V. Patrick (Eds.), Encyclopedia of Materials: Science and Technology, Elsevier, Oxford, 2001, pp. 7680-7683.
[13] D.W. Hutmacher, M. Sittinger, M.V. Risbud, Trends in Biotechnology, 22 (2004) 354-362.
[14] L.K. Branski, D.N. Herndon, M.M. Celis, W.B. Norbury, O.E. Masters, M.G. Jeschke, Burns, 34 (2008) 393-399.
[15] E. Bujang-Safawi, A.S. Halim, T.L. Khoo, A.A. Dorai, Burns, 36 (2010) 876-882.
[16] R.E. Horch, M.G. Jeschke, G. Spilker, D.N. Herndon, J. Kopp, Burns, 31 (2005) 597-602.
[17] J. Leon-Villapalos, M.G. Jeschke, D.N. Herndon, Burns, 34 (2008) 903-911.
[18] T. Maral, H. Borman, H. Arslan, B. Demirhan, G. Akinbingol, M. Haberal, Burns, 25 (1999) 625-635.
[19] R.L. Sheridan, C. Moreno, Burns, 27 (2001) 92-92.
[20] M. Meinert, G.V. Eriksen, A.C. Petersen, R.B. Helmig, C. Laurent, N. Uldbjerg, A. Malmstrom, American Journal of Obstetrics and Gynecology, 184 (2001) 679-685.
[21] S.-D. Lin, C.-S. Lai, M.-F. Hou, C.-C. Yang, Burns, 11 (1985) 374-378.
[22] C.D. Hoemann, Molecular and Biochemical Assays of Cartilage Components, in: F. De Ceuninck, M. Sabatini, and P. Pastoureau (Eds.), Methods in Molecular Medicine, Vol. 101: Cartilage and Osteoarthritis, Volume 2: Structure and In Vivo Analysis, Humana Press Inc., Totowa, NJ, pp. 127-152.
Artificial Oxygen Carriers (Hemoglobin-Vesicles) as a Transfusion Alternative and for Oxygen Therapeutics

H. Sakai 1,2
1 Consolidated Research Organization, Waseda University, Tokyo, Japan
2 Waseda Bioscience Research Institute in Singapore (WABIOS), Biopolis, Singapore, Republic of Singapore
Abstract— The most abundant protein in blood is hemoglobin (Hb, 12-15 g/dL), because oxygen is the molecule most crucial for life activity. Hb is compartmentalized in RBCs, and the intracellular Hb concentration is 35 g/dL. In spite of its abundance in blood, Hb becomes toxic once it is released from RBCs. Hemoglobin-vesicles (HbV) are artificial oxygen carriers that mimic the cellular structure of RBCs to replace blood transfusion. One injection of HbV in place of a blood transfusion is estimated as equivalent to a massive dose, such as several hundred milliliters or a few liters of normal blood content. The fluid must therefore contain a sufficient amount of Hb, the binding site of oxygen, to carry oxygen like blood. Encapsulation of Hb can shield the toxicity of molecular Hbs. HbV is much smaller than an RBC (250 vs. 8000 nm), but it recreates the functions of RBCs: (i) the oxygen-unloading of HbV is slower than that of a cell-free Hb solution; (ii) the colloid osmotic pressure is zero, so for a massive dosage HbV has to be co-injected with a plasma substitute such as albumin; (iii) HbV is finally captured by the RES, and then degraded and excreted promptly; (iv) co-encapsulation of an allosteric effector regulates oxygen affinity; (v) hemolysis is minimal during circulation and the lipid bilayer membrane prevents direct contact between Hb and the vasculature; (vi) Hb encapsulation retards the reaction with NO, and HbV does not induce vasoconstriction. The obvious advantages of HbV are that it is pathogen-free and blood-type-antigen-free; moreover, it can withstand long-term storage for stockpiling. HbV has a variety of potential applications not only as a transfusion alternative but also as an oxygen therapeutic fluid, uses that cannot be attained by present RBC transfusion. Keywords— Blood Substitutes, Liposome, Targeted Oxygen Delivery, Nitric Oxide, Transfusion Alternative.
I. INTRODUCTION

Hemoglobin (Hb) is the most abundant protein in blood (12-15 g/dL), and arguably the most essential. However, Hb becomes toxic once it is released from red blood cells (RBCs), which is evident in some pathological hemolytic diseases. Chemically modified cell-free Hb-based oxygen carriers (HBOCs), such as intra-molecularly crosslinked, polymerized, and polymer-conjugated Hbs, have been synthesized to prevent the toxic effect of cell-free Hbs (Fig. 1). However, no product is commercially available yet: some safety issues arose during the final stage of clinical trials [1]. It seems difficult to completely eliminate the side effects of cell-free Hbs by chemical modification. Now is the time to reconsider the physiological importance of the cellular structure of RBCs: why Hb is compartmentalized in RBCs with such a complicated corpuscular structure. Hb-vesicles (HbV) are artificial oxygen carriers encapsulating a concentrated Hb solution (35 g/dL) within a phospholipid bilayer membrane. HbV is designed to mimic or surpass the function of RBCs. In this chapter, we focus on the concept of Hb encapsulation and recent topics of HbV, especially reactions with gaseous molecules (O2, NO, CO) that are closely related to its safety and a new application.
Fig. 1 Various kinds of Hb-based oxygen carriers
II. HB-ENCAPSULATION IN LIPOSOMES

Hb encapsulation was first performed by Chang in the 1950s using a polymer membrane. Some Japanese groups also tested Hb encapsulation with gelatin, gum Arabic, silicone, etc. Nevertheless, it was extremely difficult to regulate the particle size to be appropriate for blood flow in the capillaries and to obtain sufficient biocompatibility. Djordjevich and Miller in the 1970s prepared a liposome-encapsulated Hb (LEH). Since then, many groups have tested encapsulated Hbs using liposomes. Some failed initially, and some are progressing with the aim of clinical usage. The U.S. Naval Research Laboratory presented remarkable progress on LEH, but it suspended development about 10 years ago. What we call Hb-vesicles (HbV), with high-efficiency production processes and improved properties, have been established by our group, based on nanotechnologies of molecular assembly and on pharmacological and physiological aspects [2]. In spite of such a large number of studies of HBOCs, no product is tested clinically because of
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 845–848, 2011. www.springerlink.com
the difficulty of the production method. Chemically modified cell-free HBOCs are much easier to produce; therefore, more researchers have tested the cell-free types, and they have advanced further than the cellular type in entering clinical trials. However, during the long history of R&D, some unexpected problems arose for cell-free HBOCs, presumably due to the direct exposure of Hb to the vasculature. Our Hb-vesicles (HbV) are artificial oxygen carriers encapsulating a concentrated Hb solution (35 g/dL) within a phospholipid bilayer membrane. The concentration of the HbV suspension is extremely high ([Hb] = 10 g/dL, [lipids] = 6 g/dL, volume fraction ca. 40 vol%) and it has an oxygen-carrying capacity comparable to that of blood. HbV is much smaller than an RBC (250 vs. 8000 nm), but it recreates the functions of RBCs, as confirmed by many animal experiments testing its effectiveness as a resuscitative fluid for hemorrhagic shock, hemodilution, and a prime for cardiopulmonary bypass [3]. Other characteristics similar to those of RBCs are: (i) the rate of O2 unloading is slower than that of an Hb solution; (ii) the colloid osmotic pressure is zero at [Hb] = 10 g/dL, so it has to be co-injected with or suspended in a plasma substitute such as albumin or HES; (iii) the resulting viscosity of an HbV suspension is adjustable to that of blood; (iv) HbV is finally captured by the RES and the components are degraded and excreted; the HbV particle itself is not eliminated through the glomeruli; (v) co-encapsulation of PLP (pyridoxal 5'-phosphate) as an allosteric effector, instead of 2,3-diphosphoglyceric acid, regulates oxygen affinity; (vi) no hemolysis occurs during circulation and the lipid bilayer membrane prevents direct contact between Hb and the vasculature; and (vii) the reaction with NO is retarded to some extent by an intracellular diffusion barrier, and HbV does not induce vasoconstriction. In the next section we focus on the gas reactions of cell-free Hb and cellular HbV.
III. HB ENCAPSULATION RETARDS GAS REACTIONS

The major remaining hurdle before clinical approval of this earliest generation of HBOCs is vasoconstriction and the resulting hypertension, which are presumably attributable to the high reactivity of Hb with endothelium-derived nitric oxide (NO) [1]. It has been suggested that small molecular Hbs permeate across the endothelial cell layer to the space near the smooth muscle, and inactivate NO there. However, cellular HbV induces neither vasoconstriction nor hypertension [4]. A physicochemical analysis using stopped-flow rapid-scan spectrophotometry [5] clarified that Hb encapsulation in vesicles retards NO binding in comparison with molecular Hb because an intracellular diffusion barrier to NO is formed. The requisites for this diffusion barrier are i) a
more concentrated intracellular Hb solution, and ii) a larger particle size. Even though various kinds of liposome-encapsulated Hb have been studied by many groups, our HbV encapsulates a highly concentrated Hb solution (> 35 g/dL) with a regulated large particle diameter (250-280 nm) and attains a 10 g/dL Hb concentration in the suspension. The absence of vasoconstriction in the case of intravenous HbV injection might be related to the lowered NO-binding rate constant and the lowered permeability across the endothelial cell layer in the vascular wall. The proposed mechanism of vasoconstriction induced by HBOCs in relation to gaseous molecules is not limited to NO scavenging. For example, endogenous carbon monoxide (CO) is produced by constitutive heme oxygenase-2 in hepatocytes; it serves as a vasorelaxation factor in the hepatic microcirculation. Small molecular Hb permeates across the fenestrated endothelium, scavenges CO, induces constriction of the sinusoids and augments peripheral resistance. An oversupply of O2 induces autoregulatory vasoconstriction to regulate the O2 supply. Injection of small HBOCs reportedly induces vasoconstriction, probably because of facilitated O2 transport. We used gas-permeable narrow tubes made of perfluorinated polymer to study not only O2-release but also NO- and CO-binding profiles [6]. We examined these gas reactions when Hb-containing solutions of four kinds were perfused through artificial narrow tubes at a practical Hb concentration (10 g/dL). Purified Hb solution, polymerized bovine Hb (PolyBHb), encapsulated Hb (Hb-vesicles, HbV, 279 nm), and RBCs were perfused through a gas-permeable narrow tube (25 µm inner diameter) at 1 mm/s centerline velocity. The level of reaction was determined microscopically based on the visible-light absorption spectrum of Hb.
When the tube was immersed in NO and CO atmospheres, both the NO-binding and CO-binding of deoxygenated Hb and PolyBHb in the tube were faster than those of HbV and RBCs, while HbV and RBCs showed almost identical binding rates. When the tube was immersed in an N2 atmosphere, oxygenated Hb and PolyBHb showed much faster O2 release than did HbV and RBCs. The diffusion process of the particles was simulated using the Navier–Stokes and Maxwell–Stefan equations. The results clarified that small Hb (6 nm) diffuses laterally and mixes rapidly, whereas the large-dimension HbV shows no such rapid diffusion. The NO and CO molecules that diffuse through the tube wall and enter the lumen would immediately react with the Hb-containing solutions at the interface; therefore, fast mixing is effective in creating more binding sites for these gas molecules. In the case of O2 release, O2 can be removed more easily at the tube wall, where the O2 concentration gradient is greatest. Fast mixing would create a higher concentration gradient and fast O2 transfer. The
IFMBE Proceedings Vol. 35
Artificial Oxygen Carriers (Hemoglobin-Vesicles) as a Transfusion Alternative and for Oxygen Therapeutics
purely physicochemical differences in diffusivity of the particles and the resulting reactivity with gas molecules are one factor inducing biological vasoconstriction of HBOCs.
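The diffusivity contrast invoked in this section can be estimated from the Stokes–Einstein relation, D = kT/(3πηd). This is a rough order-of-magnitude sketch under assumed conditions (37 °C, a plasma-like viscosity of 1.3 mPa·s), not the Navier–Stokes/Maxwell–Stefan simulation used in [6].

```python
import math

def stokes_einstein_diffusivity(diameter_nm, temp_k=310.0,
                                viscosity_pa_s=1.3e-3):
    """Stokes-Einstein diffusion coefficient D = kT / (3*pi*eta*d)
    for a sphere of diameter d (equivalent to kT / (6*pi*eta*r)).
    Temperature and viscosity defaults are assumed, plasma-like values."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    d_m = diameter_nm * 1e-9
    return k_b * temp_k / (3 * math.pi * viscosity_pa_s * d_m)

d_hb = stokes_einstein_diffusivity(6)     # cell-free Hb, ~6 nm
d_hbv = stokes_einstein_diffusivity(250)  # HbV particle, ~250 nm
print(f"D(Hb)  = {d_hb:.2e} m^2/s")
print(f"D(HbV) = {d_hbv:.2e} m^2/s  (~{d_hb / d_hbv:.0f}x slower)")
```

Because D scales inversely with diameter, the roughly 40-fold lower diffusivity of the 250 nm HbV particle is consistent with the slower lateral mixing, and hence the retarded gas reactions, described above.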
IV. IN VIVO OXYGEN TRANSPORT BY HB-VESICLES

One important point when the HbV fluid is used as a transfusion alternative is that the fluid shows no colloid osmotic pressure (COP); neither do RBCs. Therefore, a large dosage of HbV resulting in a high level of blood substitution requires the addition or co-injection of a plasma expander. For example, when HbV particles are suspended in 5% recombinant human serum albumin (rHSA), the resulting suspension shows a COP of 20 Torr and a viscosity nearly identical to that of blood (3-4 cP). These properties are important for the homeostasis of the blood circulation. The small HbV particles are dispersed homogeneously in the plasma phase. Ninety percent blood exchange with HbV suspended in albumin can sustain hemodynamic and blood gas parameters and tissue oxygen tension in a rat model [7]. Moreover, a clinically relevant model of 40% blood exchange revealed that the reduced hematocrit returned to the original level in one week, and HbV captured in the RES was completely degraded and excreted promptly [8], indicating that HbV is useful for preoperative blood exchange or perioperative infusion in the event of hemorrhage, to prevent or minimize homologous blood transfusion [9]. Although miniaturization of the cardiopulmonary bypass (CPB) circuit has reduced the priming volume, it remains insufficiently low to achieve an acceptable level of hemodilution in small patients. Homologous blood use is considered the gold standard for CPB priming in infants despite the exposure of patients to potential cellular and humoral antigens. A recent experimental study of HbV suspended in rHSA as a priming solution for CPB in a rat model demonstrated that HbV protects neurocognitive function by transporting O2 to brain tissues, even when the hematocrit is markedly reduced [10]. The results indicate that the use of HbV for CPB priming might prevent the neurocognitive decline in infants caused by considerable hemodilution.
An important advantage of Hb-vesicles is that the physicochemical properties of HbV are adjustable, such as oxygen affinity (P50, the oxygen partial pressure at which Hb is half-saturated with oxygen) and rheological properties. Historically, it has been widely believed that the O2 affinity should be regulated similarly to that of RBCs, at about 25-30 Torr, using an allosteric effector or by direct chemical modification of the Hb molecules. Theoretically, this enables sufficient O2 unloading during blood microcirculation, as can be inferred from the arterio-venous difference in the levels of O2 saturation in accordance with the O2 equilibrium curve. It has been
expected that decreasing the O2 affinity (increasing P50) increases O2 unloading. Regarding blood viscosity, a lowered viscosity is believed to increase cardiac output and facilitate peripheral blood flow. However, these beliefs are undergoing review and revision in the field of blood substitute research. The suspension of HbV provides unique opportunities to modify these physicochemical properties easily and to observe their physiological impacts. Blood flow is much lower in ischemic tissues; consequently, the O2 tension is very low, e.g. 5 Torr. Normal RBCs are expected to have already released O2 before they reach the ischemic tissue. A left-shifted curve (lower P50) means that Hb does not release O2 even on the venous side under normal conditions. Thus an RBC or an HBOC with a P50 lower than usual can carry O2 to an ischemic tissue: so-called "Targeted Oxygen Delivery" [11]. Dr. Erni and colleagues at the Inselspital Hospital of the University of Berne developed a hamster skin flap model in which the blood flow of one branch is blocked completely, so the tissue becomes completely ischemic. Exchange transfusion was performed using low- and high-P50 HbV, which revealed improved oxygenation of the ischemic part, especially with the low-P50 HbV. Collateral blood flow is expected to occur even to the ischemic part; the HbV conveys O2 to the ischemic part via the collateral arteries. This is the first reported example demonstrating the effectiveness of HbV for an ischemic tissue, implying its applicability to other ischemic diseases [12]. In addition to the lower P50, the viscosity of the HbV suspension is expected to contribute to the improvement of microcirculation. The combination of HbV with a dextran solution or hydroxyethyl starch solution induces flocculation of HbV, thereby rendering the suspension non-Newtonian and viscous [13,14]. The higher viscosity of the circulating fluid would increase shear stress on the vascular wall, thereby inducing vasorelaxation.
A viscous fluid also pressurizes the capillaries homogeneously to improve the functional capillary density, which is beneficial for microvascular perfusion. The representative organ preservation fluid is the University of Wisconsin (UW) solution, which comprises not only crystalloids but also a plasma expander. One idea is to use HbV as an intra-arterial perfusion fluid to carry oxygen, nutrients, and metabolites. We have in fact tested perfusion of the liver, heart, and intestine with HbV, although the purpose of those studies was to clarify its safety for these organs [15]. We confirmed the preservation of organ functions for a few hours. Our next step will be to prolong the perfusion period to the greatest extent possible. In fact, HbV can be re-oxygenated easily by perfusion through an artificial lung device. We must design the composition of the HbV suspension to provide not only oxygen but also nutrients and a homogeneous fluid distribution to all capillaries, which
would presumably require a certain level of viscosity. Tissue reconstruction and tissue regeneration have become popular. Cell culture requires not only a supply of oxygen and nutrition but also the removal of metabolites, which can be achieved by replacing the culture medium periodically in the case of two-dimensional cell culture. However, constructing a bulky three-dimensional tissue would require perfusion with a fluid that can serve the functions of blood, in addition to angiogenesis in the tissue regenerating on a scaffold. Such functionality would necessitate the design of the HbV composition described above. Consequently, HbV can provide unique opportunities to manipulate physicochemical properties in ways that cannot be provided by RBCs.
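The role of P50 discussed in this section can be made concrete with the Hill equation for Hb-O2 saturation, S = PO2^n / (PO2^n + P50^n). The sketch below is purely illustrative: the Hill coefficient n = 2.5 and the specific P50 values are assumed round numbers, not parameters from the studies cited.

```python
def hill_saturation(po2_torr, p50_torr, n=2.5):
    """Fractional O2 saturation of Hb from the Hill equation.
    n = 2.5 is an assumed, typical cooperative Hill coefficient."""
    return po2_torr ** n / (po2_torr ** n + p50_torr ** n)

# Compare a normal-affinity carrier (P50 = 28 Torr, RBC-like) with a
# left-shifted, high-affinity carrier (P50 = 9 Torr, assumed value).
for p50 in (28, 9):
    s_venous = hill_saturation(40, p50)   # venous PO2 ~40 Torr
    s_ischemic = hill_saturation(5, p50)  # ischemic tissue, e.g. 5 Torr
    print(f"P50 = {p50:2d} Torr: {s_venous:.0%} saturated at 40 Torr, "
          f"unloading {s_venous - s_ischemic:.0%} of capacity by 5 Torr")
```

Under these assumptions the low-P50 carrier is still nearly fully saturated at venous tension and releases its O2 only where PO2 is very low, which is the essence of targeted oxygen delivery.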
V. CONCLUSION

The development of HbV as a cellular-type HBOC lags far behind that of cell-free HBOCs. Nevertheless, it is quite noteworthy that the side effects of molecular Hb and the physiological importance of the cellular structure of RBCs have been recognized through the R&D of artificial oxygen carriers. The main difference between a conventional liposomal drug delivery system and HbV is the dosage. In fact, HbV is categorized as a new drug, and its safety must be scrutinized carefully and guaranteed by an appropriate method for its realization. Despite some difficulties in industrialization, the research continues, aimed at the eventual realization of artificial oxygen carriers that will benefit human health and welfare.
ACKNOWLEDGMENT This work was supported in part by Health and Labour Science Research Grants (Health Science Research Including Drug Innovation), Ministry of Health, Labour and Welfare, Japan, Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (B19300164; B22300161), and a Supporting Project to Form Strategic Research Platforms for Private Universities: Matching Fund Subsidy from Ministry of Education, Culture, Sports, Science and Technology.
REFERENCES

1. Natanson C, Kern SJ, Lurie P et al (2008) Cell-free hemoglobin-based blood substitutes and risk of myocardial infarction and death: a meta-analysis. JAMA 299:2304-2312.
2. Sakai H, Sou K, Tsuchida E (2009) Hemoglobin-vesicles as an artificial oxygen carrier. Methods Enzymol 465:363-384.
3. Tsuchida E, Sou K, Nakagawa A et al (2009) Artificial oxygen carriers, hemoglobin vesicles and albumin-hemes, based on bioconjugate chemistry. Bioconjug Chem 20:1419-1440.
4. Sakai H, Hara H, Yuasa M et al (2000) Molecular dimensions of Hb-based O2 carriers determine constriction of resistance arteries and hypertension. Am J Physiol Heart Circ Physiol 279:H908-H915.
5. Sakai H, Sato A, Masuda K et al (2008) Encapsulation of concentrated hemoglobin solution in phospholipid vesicles retards the reaction with NO, but not CO, by intracellular diffusion barrier. J Biol Chem 283:1508-1517.
6. Sakai H, Okuda N, Sato A et al (2010) Hemoglobin encapsulation in vesicles retards NO and CO binding and O2 release when perfused through narrow gas-permeable tubes. Am J Physiol Heart Circ Physiol 298:H956-H965.
7. Sakai H, Takeoka S, Park SI et al (1997) Surface modification of hemoglobin vesicles with poly(ethylene glycol) and effects on aggregation, viscosity, and blood flow during 90% exchange transfusion in anesthetized rats. Bioconjug Chem 8:23-30.
8. Sakai H, Horinouchi H, Yamamoto M (2006) Acute 40 percent exchange-transfusion with hemoglobin-vesicles (HbV) suspended in recombinant human serum albumin solution: degradation of HbV and erythropoiesis in a rat spleen for 2 weeks. Transfusion 46:339-347.
9. Sakai H, Seishi Y, Obata Y et al (2009) Fluid resuscitation with artificial oxygen carriers in hemorrhaged rats: profiles of hemoglobin-vesicle degradation and hematopoiesis for 14 days. Shock 31:192-200.
10. Yamazaki M, Aeba R, Yozu R, Kobayashi K (2006) Use of hemoglobin vesicles during cardiopulmonary bypass priming prevents neurocognitive decline in rats. Circulation 114(1 Suppl):I220-I225.
11. Sakai H, Tsuchida E (2007) Hemoglobin-vesicles for a transfusion alternative and targeted oxygen delivery. J Liposome Res 17:227-235.
12. Plock JA, Tromp AE, Contaldo C et al (2007) Hemoglobin vesicles reduce hypoxia-related inflammation in critically ischemic hamster flap tissue. Crit Care Med 35:899-905.
13. Sakai H, Sato A, Takeoka S, Tsuchida E (2007) Rheological properties of hemoglobin vesicles (artificial oxygen carriers) suspended in a series of plasma-substitute solutions. Langmuir 23:8121-8128.
14. Sakai H, Sato A, Takeoka S, Tsuchida E (2009) Mechanism of flocculate formation of highly concentrated phospholipid vesicles suspended in a series of water-soluble biopolymers. Biomacromolecules 10:2344-2350.
15. Verdu EF, Bercik P, Huang XX et al (2008) The role of luminal factors in the recovery of gastric function and behavioral changes after chronic Helicobacter pylori infection. Am J Physiol Gastrointest Liver Physiol 295:G664-G670.
Corresponding author:
Author: Hiromi Sakai
Institute: Waseda Bioscience Research Institute in Singapore
Street: 11 Biopolis Way, #05-01/02 Helios
City: Singapore
Country: Singapore
Email: [email protected]
Enzymatic Synthesis of Soybean Oil-Based Fatty Amides

E.A. Jaffar Al-Mulla
Department of Chemistry, College of Science, University of Kufa, An Najaf, Iraq
Abstract— Ammonolysis of soybean oil into fatty amides is efficiently catalyzed by lipase (Lipozyme). In this study, the enzymatic synthesis of fatty amides by ammonolysis of soybean oil with thiourea in an organic solvent was investigated. The highest conversion percentage (94%) was obtained when the process was carried out for 24 h using a thiourea to soybean oil ratio of 5.0:1.0 at 40 ºC. The fatty amides were characterized using Fourier transform infrared (FTIR) spectroscopy, proton nuclear magnetic resonance (1H NMR) spectroscopy and elemental analysis. The use of immobilized lipase as the catalyst for the preparation reaction allows easy isolation of the enzyme from the products and other components in the reaction mixture. The method employed offers several advantages, such as renewability of the raw material, a simple reaction procedure, an environmentally friendly process and a high product yield. Keywords— soybean oil, thiourea, fatty amides, Lipozyme.
I. INTRODUCTION

Fatty amides with the general formula RCONH2, where R is an alkyl chain of a fatty acid, are an important class of oleochemical materials. They are important for many chemical industries as they are derived from renewable, biodegradable, environmentally friendly, easily available and low-toxicity raw materials. They are analogous to petrochemicals, which are chemicals derived from petroleum. Fatty amides have attracted a lot of attention due to their biological activities [1] and potential industrial applications, for example as surfactants, lubricants and anti-blocking agents in the plastics processing industry [2-4]. Fatty amides are semi-commodities, produced in thousands of metric tons every year by treatment of fatty acids with anhydrous ammonia at approximately 200 °C and 345-690 kPa [5]. Enzymes represent an important breakthrough in the biotechnology industry because they are nature's catalysts. The enzymes responsible for the breakdown of fats into fatty acids and glycerol in biological systems form a group called triacylglycerol lipases. The industrial application of lipase in the oleochemical industry is attractive due to the high energy cost of the conventional chemical process and the anticipated lower price of enzymes [6]. Lipase
also offers many other advantages, such as technical simplicity, a high percentage of conversion and easy isolation of the enzyme from the products and other components of the reaction mixture [5]. Lipases are powerful tools for catalyzing hydrolysis, esterification, aminolysis, hydroxaminolysis and alcoholysis reactions [7-9]. Many studies have been carried out on the use of lipase as a catalyst for the synthesis of different fatty compounds from palm oil, such as fatty amides [5], hydrazides [10] and hydroxamic acids [11]. The enzymatic synthesis of fatty amides from fatty acids or their esters by treating them with amine compounds has also been reported [12-14]. In the present study, fatty amides are synthesized from soybean oil and thiourea by a one-step lipase-catalyzed reaction. The presence of long-chain fatty acids from soybean oil (mainly 16 and 18 carbon atoms) containing O and N atoms suggests that the fatty amides should be very useful as organic reagents for the extraction and separation of metal ions from aqueous solution [15,16]. Additionally, the fatty amides are expected to find application as surfactants in many areas, including clay modification to produce polymer nanocomposites [17-20].
II. EXPERIMENTAL A. Materials Soybean oil was obtained from Nacalai Co., Kyoto, Japan. Lipozyme was obtained from Novo Nordisk, Denmark. Thiourea, sodium hydroxide and hexane (Merck, Germany) were purchased through local suppliers. B. Reaction Procedure Fatty amides were synthesized by reacting 3.84 g of commercial soybean oil with 1.38 g of thiourea in 20 ml of hexane as the solvent. The pH was adjusted to 7 by adding about 5 ml of 0.1 M NaOH. The reaction was carried out in the presence of the lipase catalyst in a 100 ml stoppered flask. The mixture was incubated in a shaking water bath (125 rpm) at 40 °C for 24 h. Hot hexane was then added to the reaction to dissolve the product, and the organic phase was separated from the water
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 849–853, 2011. www.springerlink.com
Scheme 1 Fatty amides synthesis from soybean oil
phase using a separation funnel. To obtain the solid fatty amides, the hexane fraction was cooled in a refrigerator at 5 °C for 5 h, filtered and then reprecipitated from hexane. The product was dried in an oven at 50 °C. The preparation reaction is shown in Scheme 1.
C. Conversion Measurement The following equation (Eq. 1) was used to calculate the conversion percentage of soybean oil:

Conversion (%) = (mmol of fatty amide product / mmol of fatty acid in the soybean oil) × 100    (1)

The amount (mmol) of product was determined from the nitrogen content of the dry fatty amides measured by elemental analysis. The amount (mmol) of fatty acid in the soybean oil was calculated from the mmol of potassium hydroxide required to saponify 1 g of soybean oil [10]. D. Analytical Methods The presence of the amide group in the fatty amides was determined from FTIR spectra. FTIR spectra of the products were recorded on an FTIR spectrophotometer (Perkin Elmer FTIR-Spectrum BX, USA) using the KBr disc technique. The presence of the amide (CONH) proton in the fatty amides was also determined by 1H nuclear magnetic resonance (Jeol, Japan) using CD3COOD as the solvent. An elemental analyzer (LECO CHNS-932) was used for quantitative analysis of the nitrogen content; the determination was carried out under an N2 atmosphere using sulfamethazine as the standard.
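The conversion calculation of Eq. 1 can be sketched numerically as follows; the function name and the sample numbers are hypothetical illustrations, not measured values from this study:

```python
def conversion_percent(mmol_amide_product, mmol_fatty_acid_in_oil):
    """Eq. 1: percentage conversion of soybean oil into fatty amides.

    mmol_amide_product     -- mmol of fatty amide, determined from the
                              nitrogen content of the dry product
                              (elemental analysis)
    mmol_fatty_acid_in_oil -- mmol of fatty acid in the oil charged,
                              calculated from the mmol of KOH needed to
                              saponify 1 g of oil
    """
    return 100.0 * mmol_amide_product / mmol_fatty_acid_in_oil

# Hypothetical illustration only (not data from the paper):
print(round(conversion_percent(10.2, 10.85), 1))  # → 94.0
```

The ratio is dimensionless (mmol over mmol), so only the relative amounts matter, not the absolute scale of the batch.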
III. RESULTS AND DISCUSSION

A. Reaction Mechanism

The proposed mechanism of conversion of soybean oil into fatty amides is shown in Scheme 2. The lipase catalyst creates an active position at a carbonyl group of the triglyceride in the substrate-lipase complex and allows attack on the first or second carbonyl by the -NH2 group of a thiourea molecule to form an alkoxide ion, as proposed in Scheme 2A. The alkoxide ion is less acidic than a water molecule; it therefore withdraws a proton from H2O to form the alcohol and allows attack on the carbonyl group of thiourea by the hydroxyl of water, forming 2-hydroxypropane-1,3-difatty ester or 3-hydroxypropane-1,2-difatty ester, carbamothioic O-acid and one molecule of fatty amide. Carbamothioic O-acid is unstable under normal conditions and rapidly decomposes to ammonia and carbonyl sulfide [21], as illustrated in Scheme 2B. After two consecutive reactions in the same manner, two molecules of fatty amide, glycerol, ammonia and carbonyl sulfide are formed (Scheme 2C).

Scheme 2 Proposed mechanism of the synthesis of fatty amides from soybean oil: (A) 1st amidation, (B) decomposition of carbamothioic O-acid and (C) 2nd & 3rd amidations

B. Screening of Enzyme

In order to select the most efficient enzyme for this reaction, two commercial enzymes, Lipozyme and Novozyme, were screened by studying the effect of temperature on the conversion of soybean oil into fatty amides. The reaction catalyzed by Lipozyme gave the higher percentage of conversion over the whole temperature range of 30-60 ºC (Table 1), indicating that Lipozyme is more active for this reaction than Novozyme; Lipozyme was therefore used as the lipase in this study. Lipozyme was most efficient at 40 ºC. The percentage of conversion to fatty amides decreases above 40 ºC because thermal energy destroys the catalytic activity of the enzyme [22], so a reaction temperature of 40 ºC was used in further studies.

Table 1 Effect of temperature on the conversion of soybean oil into fatty amides by two enzymes: (a) Lipozyme and (b) Novozyme. Other reaction conditions: Thiourea/soybean oil molar ratio = 5.0: 1.0, Hexane = 20 ml, Reaction time = 24 h and pH = 7

Temperature (ºC) | Conversion % (a) | Conversion % (b)
30 | 61 | 24
40 | 94 | 54
50 | 72 | 41
60 | 59 | 32
C. Effect of the Amount of Enzyme The results obtained with different amounts of Lipozyme are shown in Fig. 1. The percentage of conversion to fatty amides increased as the lipase: soybean oil ratio was increased. The highest conversion was obtained at a lipase: soybean oil ratio of 37.0 mg to 1.0 mmol. Further increase of the lipase to soybean oil ratio did not significantly increase the conversion, because the additional active sites of the enzyme are not fully exposed to the substrate but remain inside the bulk of the enzyme particles without contributing significantly to the reaction [23,24].
Under the above reaction conditions, the highest conversion of soybean oil into fatty amides was around 94%. Table 3 shows the optimum reaction conditions for the conversion of soybean oil into fatty amides.
Fig. 1 Effect of catalyst loading (Lipozyme (mg): soybean oil (mmol)) on the conversion of soybean oil into fatty amides. Other reaction conditions: Thiourea/soybean oil molar ratio = 5.0: 1.0, Hexane = 20 ml, pH = 7 and Reaction temperature = 40 °C

Table 3 Optimum Reaction Conditions for the Conversion of Soybean Oil into Fatty Amides

Parameter | Condition
Lipase | Lipozyme
Solvent | Hexane
Reaction time | 24 h
Temperature | 40 °C
Ratio Lipozyme (mg): Soybean oil (mmol) | 37.0: 1.0
Ratio thiourea (mmol): Soybean oil (mmol) | 5.0: 1.0
D. Effect of Organic Solvents

Various organic solvents were used to study the effect of the organic solvent on the amidolysis of soybean oil. Table 2 shows that hexane (log P = 3.5) was the best solvent for the amidolysis: as the reaction medium changed from hydrophilic (log P < 3) to hydrophobic (log P > 3) organic solvents, the overall efficiency of the enzyme changed [25]. The polarity of the different solvents, expressed by their log P values, thus played a crucial role, and hexane was chosen as the solvent for the subsequent experiments.

Table 2 Effect of organic solvent on the conversion of soybean oil into fatty amides. Other reaction conditions: Thiourea/soybean oil molar ratio = 5.0: 1.0, Solvent = 20 ml, Reaction temperature = 40 °C, pH = 7 and Lipozyme = 0.05 mg

Organic solvent | Conversion, %
Heptane | 81
Hexane | 94
Toluene | 78
Chloroform | 59

E. Optimal Conditions

The percentage of conversion into fatty amides increased with increasing reaction time. The highest conversion was obtained when the process was carried out for 24 h; the change in reaction rate after this time was small, as the reaction had reached equilibrium. The conversion of soybean oil into fatty amides was highest when the ratio of thiourea to soybean oil was 5.0 mmol: 1.0 mmol.

F. Characterization of Fatty Amides

(i). FTIR Spectra

Characteristic bands of soybean oil were observed at 2912, 2854 and 1744 cm-1, resulting from C–H asymmetric stretching of CH2, C–H symmetric stretching of CH2 and C=O stretching of the ester (glyceride), respectively [10]. The fatty amide spectra show absorption bands at 3310, 1621 and 1043 cm-1, attributed to -NH2 group stretching, C=O stretching and C–N stretching of the amide, respectively. The disappearance of the peak at 1744 cm-1 and the presence of the peaks at 3310, 1621 and 1043 cm-1 indicate that fatty amides have been formed [5].

(ii). 1H NMR Spectra of Fatty Amides

(400 MHz) (CD3COOD): δ 0.88 (t, J = 8.2 Hz, 3H, CH3), 1.17 (m, H, CH2), 1.62 (2H, 2 × CH2CH2CONH2), 2.03 (4H, 2 × CH2CH=CH), 2.41 (t, J = 10.3 Hz, 2H, CH2CONH2), 5.33 (2H, CH=CH), 6.04 (br s, 2H, CONH2).

IV. CONCLUSIONS

In this study, a new environmentally friendly and abundant raw material was used for the synthesis of fatty amides. The reaction was carried out by treating soybean oil with thiourea over immobilized lipase in hexane as the organic solvent. The method offers technical simplicity, a high percentage of conversion and easy isolation of the enzyme from the products and other components of the reaction mixture. The product can be used as an organic reagent for the extraction of metal ions from aqueous solution and for clay modification to produce polymer nanocomposites.
REFERENCES
1. Omura, S.; Katagiri, M.; Awaya, J.; Furukawa, T.; Umezawa, I.; Oi, N.; Mizoguchi, M.; Aoki, B.; Shindo, M. Relationship between the structures of fatty acid amide derivatives and their antimicrobial activities. Antimicrob. Agents Chemother. 6, 207–215 (1974).
2. Biermann, U.; Friedt, W.; Lang, S.; Luhs, W.; Machmuller, G.; Metzger, O. New syntheses with oils and fats as renewable raw materials for the chemical industry. Angew. Chem. Int. Ed. 39, 2206–2224 (2000).
3. Maag, H. Fatty acid derivatives: important surfactants for household, cosmetics and industrial purposes. J. Am. Oil Chem. Soc. 61, 259–267 (1984).
4. Kuo, T.M.; Kim, H.; Hou, C.T. Production of novel compound 7,10,12-trihydroxy-8(E)-octadecenoic acid from ricinoleic acid by Pseudomonas aeruginosa PR3. Curr. Microbiol. 43, 198–203 (2001).
5. Al-Mulla, E.A.J.; Yunus, W.M.Z.; Ibrahim, N.A.; Abdul Rahman, M.Z. Enzymatic synthesis of fatty amides from soybean oil. J. Oleo Sci. 59, 59–64 (2010).
6. Chen, C.H.; Sih, C.J. General aspects and optimization of enantioselective biocatalysis in organic solvents: the use of lipases. Angew. Chem. Int. Ed. 28, 695–707 (1989).
7. Medina, A.R.; Cerdan, L.E.; Gimenez, A.G.; Paez, B.C.; Gonzales, M.J.; Grima, E.M. Lipase-catalyzed esterification of glycerol and polyunsaturated fatty acids from fish and microalgae oils. J. Biotech. 70, 379–391 (1999).
8. Mittelbach, M. Lipase catalyzed alcoholysis of sunflower oil. J. Am. Oil Chem. Soc. 67, 168–176 (1990).
9. Hacking, M.A.; Akkus, H.; van Rantwijk, F.; Sheldon, R.A. Lipase and esterase-catalyzed acylation of hetero-substituted nitrogen nucleophiles in water and organic solvents. Biotech. Bioeng. 68, 84–91 (2000).
10. Mohamad, S.; Yunus, W.; Haron, M.; Abdul Rahman, M.Z. Enzymatic synthesis of fatty hydrazides from palm oils. J. Oleo Sci. 57, 263–267 (2008).
11. Dedy, S.; Yunus, W.M.Z.; Jelas, H.; Sidik, S. Enzymatic synthesis of fatty hydroxamic acids from palm oil. J. Oleo Sci. 54, 33–38 (2004).
12. de Zoete, M.C.; Kock-van Dalen, A.C.; van Rantwijk, F.; Sheldon, R.A. Lipase-catalyzed ammoniolysis of lipids: a facile synthesis of fatty acid amides. J. Mol. Catal. B: Enz. 1, 109–113 (1996).
13. Litjens, M.J.; Sha, M.; Straathof, A.J.; Jongejan, J.A.; Heijnen, J.J. Competitive lipase catalyzed ester hydrolysis and ammoniolysis in organic solvents: equilibrium model of a solid-liquid-vapor system. Biotechnol. Bioeng. 65, 347–356 (1999).
14. Levinson, W.E.; Kuo, T.M.; Kurtzman, C.P. Lipase catalyzed production of novel hydroxylated fatty amides in organic solvents. Enz. Microb. Technol. 37, 126–130 (2005).
15. Al-Mulla, E.A.J.; Yunus, W.M.Z.; Ibrahim, N.A.; Abdul Rahman, M.Z. Synthesis and characterization of N,N'-carbonyl difatty amides from palm oil. J. Oleo Sci. 58, 467–471 (2009).
16. Al-Mulla, E.A.J.; Al-Janabi, K.W.S. Chin. Chem. Lett. (2010), in press.
17. Hoidy, H.W.; Ahmad, M.B.; Al-Mulla, E.A.J.; Yunus, W.M.Z.; Ibrahim, N.A. Chemical synthesis of palm oil-based difatty acyl thiourea. J. Oleo Sci. 59, 229–133 (2010).
18. Al-Mulla, E.A.J.; Yunus, W.M.Z.; Ibrahim, N.A.; Abdul Rahman, M.Z. Difatty acyl thiourea from corn oil: synthesis and characterization. J. Oleo Sci. 59, 157–160 (2010).
19. Hoidy, W.H.; Al-Mulla, E.A.J.; Al-Janabi, K.W. Mechanical and thermal properties of PLLA/PCL modified clay nanocomposites. J. Polym. Environ. 18, 608–618 (2010).
20. Al-Mulla, E.A.J.; Suhail, H.S.; Aowda, S.A. New biopolymer nanocomposites based on epoxidized soybean oil plasticized poly(lactic acid)/fatty nitrogen compounds modified clay: preparation and characterization. Ind. Crops Prod. 33, 23–29 (2011).
21. Benini, S.; Rypniewski, W.R.; Wilson, K.S.; Miletti, S.; Ciurli, S.; Mangani, S. A new proposal for urease mechanism based on the crystal structures of the native and inhibited enzyme from Bacillus pasteurii: why urea hydrolysis costs two nickels. Structure 7, 205–216 (1999).
22. Romero, M.D.; Calvo, L.; Alba, C.; Daneshfar, A.; Ghaziaskar, H.S. Enzymatic synthesis of isoamyl acetate with immobilized Candida antarctica lipase in n-hexane. Enzyme Microb. Technol. 37, 42–48 (2005).
23. Gandhi, N.N.; Sawant, S.B.; Joshi, J.B. Studies on the lipozyme catalysed synthesis of butyl laurate. Biotechnol. Bioeng. 46, 1–12 (1995).
24. Al-Mulla, E.A.J.; Yunus, W.M.Z.; Ibrahim, N.A.; Abdul Rahman, M.Z. Enzymatic synthesis of palm olein-based fatty thiohydroxamic acids. J. Oleo Sci. 59, 569–573 (2010).
25. Gunawan, E.R.; Basri, M.; Rahman, M.B.; Salleh, A.B.; Rahman, R.N. Lipase-catalyzed synthesis of palm-based wax ester. J. Oleo Sci. 53, 471–477 (2004).
Rat Model of Healing the Skin Wounds and Joint Inflammations by Recombinant Human Angiogenin, Erythropoietin and Tumor Necrosis Factor-α
A. Gulyaev1 and V. Piven2,*
1 National Centre for Biotechnology of Kazakhstan/Laboratory of Toxicology, 010000, 13/1, Valikhanova Str., Astana, Kazakhstan
2 University Technology Petronas/Fundamental and Applied Science Department, Bandar Seri Iskandar, 31750 Tronoh, Malaysia
Abstract— This study is part of an effort to develop a new generation of wound healing medicines based on recombinant human peptides and cytokines, namely angiogenin (rhu-ANG), erythropoietin (rhu-EPO) and tumor necrosis factor alpha (rhu-TNF-α). Several drug matrixes were screened, and high-purity polyethylene oxide was found to be the most appropriate one. Three gel forms of the peptides were prepared and investigated in the study. Four rat models of surgical skin muscle wounds and two of acute and adjuvant arthritic joint inflammations were developed and tested with the aforesaid gels in comparison with the conventional gel Solcoseryl and Butadion ointment. A wealth of morpho-cytological and hematological data was produced for all tested materials, and improvement of the wound/inflammation healing process with all peptides compared to Solcoseryl and Butadion was demonstrated. Keywords— Wound healing, rhu-angiogenin, rhu-TNF-α, rhu-erythropoietin.
I. INTRODUCTION Wound healing in the skin and muscle tissues involves the concerted interplay of several cell types, including keratinocytes, fibroblasts, endothelial cells, platelets and macrophages. The proliferation, migration, infiltration and differentiation of these cells result in the formation of new tissue and finally wound closure [1]. This process is controlled by a signaling network of growth factors, cytokines and chemokines. At present, the most important of these are considered to be the families of epidermal growth factor (EGF), fibroblast growth factor (FGF), vascular endothelial growth factor (VEGF), transforming growth factor beta (TGF-beta), granulocyte macrophage colony stimulating factor (GM-CSF), platelet-derived growth factor (PDGF), connective tissue growth factor (CTGF), interleukins (ILs), and tumor necrosis factor-alpha [1]. Only three of them (PDGF, bFGF and GM-CSF) are being used for the treatment of patients, whereby only PDGF has been tested in clinical trials [2]. Meanwhile, a wealth of data has recently been reported on the angiogenic potency of ANG, EPO and TNF-α [3]. There is an opinion that their roles in wound management are underestimated and that more emphasis should be
given to the development of wound healing pharmaceutical products based on recombinant ANG, EPO and TNF-α. The present study deals with rat models of wound cure based on these recombinant substances in comparison with the traditional gel Solcoseryl, which promotes angiogenesis via an increase of oxygen uptake by cells, stimulation of ATP synthesis, collagen formation and improvement of glucose transport. The cure of inflammatory arthritis was compared with Butadion ointment. A few drug matrixes were screened, and high-purity polyethylene oxide was found to be the most appropriate for the said purpose. It seems logical to compare the specific roles of the aforesaid substances and their efficacy in the wound healing process.
II. MATERIALS AND METHOD Recombinant human cytokines. Rhu-ANG of clinical grade was obtained by culturing a highly productive E. coli strain BL21(DE3)pZZSA employing the innovative BIOK-process based on a gas-vortex bioreactor [4]. For this purpose the recombinant pET-21a-d(+)-based vector system carrying the synthetic gene of hu-ANG was cloned into an ordinary E. coli plasmid vector. A rhu-ANG 0.01% gel in polyethylene oxide was manufactured for the study by ZAO Sajany (Novosibirsk). Rhu-EPO was produced from the cultured biomass of the recombinant strain CHOpE-9, developed by transformation of CHO-Kldhfr cells with the constructed recombinant plasmid pKEP-9 obtained from Chinese hamster ovary cells. For that purpose the fragment BstEII-BglII (2383 bp) of the plasmid pSV-Ep-gpt, modified by the human-EPO gene, was inserted into the SmaI site of the vector plasmid pNUT (5733 bp) containing the murine metallothionein promoter MT-I and the gene of dihydrofolate reductase [5]. Rhu-TNF-α was manufactured using the recombinant E. coli strain SG20050 carrying the plasmid pTNF311Δ, which encodes the expression of hu-TNF-α from the 5th to the 157th amino acid under the control of a pair of early promoters of the phage T7 [6]. Both rhu-EPO and rhu-TNF-α in gel form in polyethylene oxide were provided by the State Research Centre for Virology and Biotechnology
N.A. Abu Osman et al. (Eds.): BIOMED 2011, IFMBE Proceedings 35, pp. 854–857, 2011. www.springerlink.com
VECTOR (Novosibirsk). The commercial gel Solcoseryl and Butadion ointment were used in the study. Rat surgical wound model. 25 inbred rats (250 g to 300 g) were involved in the healing of four kinds of surgical wound models produced under ether anesthesia: A) Flat skin muscle excision at the cervical-scapular area, 3 sq cm; B) Skin incision on the mid dorsum down to the plantar fascia, 3 cm; C) Acute arthritis (induced by injection of 0.1 ml of 2% formalin solution into the sole at the back of the left hind ankle; the right leg is a control); D) Adjuvant arthritis (induced by injection of 0.1 ml of FCA (Freund's Complete Adjuvant) into the ankle much as in C). The gels were administered once a day topically onto a wound (TOP, for A & B) or by rubbing into the skin followed by applying a plaster (RUBP, for C & D) onto the area of inflammation on the rat's left hind leg, with the right hind leg serving as a control (RLControl). Blood samples were collected for the hematological tests (HT) from the caudal vein before the start of the experiment and then on a weekly basis. Materials for morphological investigation of the wound (MIW) were obtained from sacrificed animals (by decapitation) on days 3, 7, 10, 14 and 21. The tissue specimens were taken from the wound areas contiguous to normal skin and subjected to routine histological treatment [7]. Data were processed with Student's t-test. Cytological investigations (CI) were conducted on Kamaev's smears [8].
III. RESULTS AND DISCUSSION A flat skin muscle excision model (Fig. 1). The wound healing was characterized by the time of primary scab seizure; the time of occurrence and quality of granulation; the epithelium development and maturation (assessed visually); and morphological data.
Fig. 1 FSME model (Flat skin muscle excision model) The wound healing was estimated by the time of primary scab seizure; the term of occurrence and quality of granulation; epithelium development and maturation (visually); and morphological data.
Table 1 Materials and methods of the study. Columns give the wound model (Rats No./Groups No.), the application mode and the assessment mode

Experimental Group | A (25/5) | B (25/5) | C (20/4) | D (20/4)
CONTROL | w/o gel | w/o gel | RLControl | RLControl
Gel ANG | TOP; MIW, HT, CI | TOP; MIW, HT, CI | RUBP; oncometry, amputation | RUBP; oncometry, amputation
Gel EPO | TOP; MIW, HT, CI | TOP; MIW, HT, CI | RUBP; oncometry, amputation | RUBP; oncometry, amputation
Gel TNF-α | TOP; MIW, HT, CI | TOP; MIW, HT, CI | RUBP; oncometry, amputation | RUBP; oncometry, amputation
Gel Solcoseryl | TOP; MIW, HT, CI | TOP; MIW, HT, CI | RUBP; oncometry, amputation | RUBP; oncometry, amputation
Butadion cream | - | - | RUBP; oncometry, amputation | RUBP; oncometry, amputation
Fig. 2 FSME model. CONTROL (without a gel application), 5th day. x200, stained with HE (hematoxylin/eosin) Morpho-cytological data: Traumatic inflammation; neutrophilic&serofibrinous exudation (X ); blood vessel dilation (Y ); and inflammatory infiltration in derma (Z ); formation of feeble epithelium at the wound periphery.
The detailed analysis of the histological, cytological and morphological findings from the A, B, C and D model studies revealed evident improvement of the wound healing process with separate application of the tested cytokines
Fig. 6 FSME model. Gel-Solcoseryl, 5th day, x200, stained with HE
Fig. 3 FSME model. Gel-ANG, 5th day. x200, stained with HE Morpho-cytological data: Swelling of the fibrinous/leucocytal layer due to gel matrix absorption (X); neutrophil shrinkage (Y); reduced dermal inflammatory infiltration; extensive hemorrhages with numerous new fibroblast formations around the blood vessels (Z).
Morpho-cytological data: characteristic features are swollen fibrous formations (X); numerous purulent inclusions observed, though less extensive than in the CONTROL (Y); inflammatory infiltration in the derma reduced.
compared to the CONTROL and Solcoseryl groups. A few preliminary experiments, carried out in the same way as the foregoing ones, also revealed improved wound healing with blended gels of all three peptides in question. Brief data on the cure of acute and adjuvant arthritis with the cytokines versus the conventional anti-inflammatory ointment Butadion (phenylbutazone) are presented below.
Fig. 4 FSME model. Gel-EPO, 5th day. x200, stained with HE Morpho-cytologic data: more matured granulation tissue with an elaborated network of blood vessels (X) and macrophages; shrinking of the fibrinous/leucocytal layer; extensive deep-laid edematous areas and plasma absorption occurred (Y)
Fig. 5 FSME model. Gel-TNF-α, 5th day, x200, stained with HE Morpho-cytological data: The healing pattern is generally similar to gel-EPO; only a thin detached superficial fibrinous/leucocytal layer was observed (X); matured granulation tissue had developed (Y).
Fig. 7 Manifestations of the acute (left) and adjuvant (right) arthritis

The efficacy of healing of acute and adjuvant arthritis was measured by the Index of Inflammation (IoI):

IoI = [(Mi – M0) / M0] × 100%,

where Mi is the stump mass (amputated leg) of a rat cured with the respective substance and M0 is the stump mass of an uncured animal. Anti-inflammatory efficiency appears to be still higher with Butadion. The most potent cytokine in both cases proved to be rec-EPO. However, the morpho-cytological findings show that the pattern of healing of the adjuvant wound was considerably better with the recombinant cytokines than with Butadion, the best of them again being rec-EPO. This might indicate the prevailing importance of direct oxygen delivery in arthritis compared to blood capillary growth in cases A, B, C and D.
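The IoI formula can be sketched numerically as follows; the function name and the stump masses below are hypothetical illustrations, not data from the study:

```python
def index_of_inflammation(stump_mass_treated_g, stump_mass_control_g):
    """IoI = [(Mi - M0) / M0] * 100 %.

    Mi -- stump (amputated leg) mass of a rat cured with the substance
    M0 -- stump mass of the uncured control animal
    A negative IoI means the treated stump is lighter (less swollen)
    than the control.
    """
    m_i, m_0 = stump_mass_treated_g, stump_mass_control_g
    return (m_i - m_0) / m_0 * 100.0

# Hypothetical masses in grams (not data from the study):
print(round(index_of_inflammation(2.10, 2.00), 2))  # → 5.0
```

Because the measure is normalized by the control mass, values from animals of different sizes are directly comparable, which is why Table 2 can report group means with standard deviations.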
Table 2 Index of Inflammation in model arthritis

Experimental Group | Acute arthritis, IoI (%) after 48 h cure | Adjuvant arthritis, IoI (%) after 48 h cure
CONTROL | - | -
Gel ANG | 5.27 ± 4.08 | 12.11 ± 7.82
Gel EPO | 0.09 ± 0.11.46 | -2.57 ± 4.22
Gel TNF-α | 3.46 ± 6.41 | 3.17 ± 5.40
Butadion | -10.10 ± 3.73 | -2.49 ± 8.60
IV. CONCLUSIONS This phenomenological study has shown that rhu-angiogenin, rhu-erythropoietin and rhu-TNF-α hold promise for the development of individual and, likely, complex blended drugs for the management of skin lesions. The matrix that most easily releases an active substance was found to be the polyethylene oxide gel. The greatest improvement in the cure of adjuvant wounds was found with rhu-erythropoietin. This study suggests a possibility in principle of developing novel gene-therapeutic products for wound management in humans; however, detailed investigations of the mechanisms of the healing effects of recombinant human peptides and cytokines at the molecular level should be conducted beforehand.
REFERENCES
1. Barrientos S, Stojadinovic O, Golinko MS, et al. (2008) Growth factors and cytokines in wound healing. Wound Repair Regen. Sept-Oct., 16(5):585-601
2. Shen JT, Falanga V (2003) Incorporating medical and surgical dermatology. J. Cutaneous Medicine and Surgery 7(3), 217-224
3. Shraddha VB, Bhoomika RG, Mayur MP (2010) Angiogenic targets for potential disorders. Fundamental and Clinical Pharmacology 17:1, 29-47. DOI: 10.1111/j.1472-8206.2010.00814.x
4. Mertvetsov N, Stephanovich L (1997) Angiogenin and mechanism of angiogenesis. Novosibirsk, Nauka Publisher, p. 77 (in Russian)
5. Patent of Russian Federation #2118662, application #97108266/13, priority of 10.09.1998
6. Patent of Russian Federation #2236433, application #2001132071/15, priority of 28.11.2001
7. Gabitov V, Beisembaev A, Akramov E, Piven V, et al. (2005) Efficacy of rhu-angiogenin in the regenerative process in an aseptic surgical wound: a morpho-cytological rat model. Proc. 6th Natl. Congr. on Genetics, Kuala Lumpur
8. Sarkisov DS, Remezov PI (1960) Experimental modeling of human diseases, 320-322
Author: Vladimir Piven Institute: University Technology Petronas Street: Bandar Seri Iskandar Country: Malaysia E-mail: [email protected]
ACKNOWLEDGMENT We thank the State Research Centre for Virology and Biotechnology VECTOR and ZAO SAJANY (both Novosibirsk) for providing the recombinant substances.
Author Index
A Ab Aziz, M.F. 283, 348 Abass, H. 659 Abbah, S. 11 Abbas, A.A. 73, 815 Abbasi-Asl, R. 157 Abbas, Siti Fathimah 674 Abberton, K.M. 831 Abd Razak, N.A. 743 Abdul Hamid, Z.A. 831 Abdul Jamil, M.M. 105, 447, 596, 781, 785 Abdul Latif, L. 739, 762, 765 Abdul Nasir, A.S. 40 Abdul Wahab, A.K. 20, 704 Abdulhadi, L.M. 29, 33, 659 Abdullah, A.R.W. 398 Abdullah, J.M. 480, 548 Abdullah, M.A. 37 Aboodarda, S.J. 241 Abu Osman, N.A. 125, 161, 167, 175, 179, 182, 197, 222, 241, 728, 732, 735, 739, 743, 755, 758, 762, 765 Abu-Bakar, S.A.R. 604 Abusara, Z. 3 Adibpour, F. 708 Adzila, S. 97, 108 Ahmad, M.S. 781 Ahmad, S.A. 121, 556 Ahmad, Z. 827 Ahmed, A.L. 480, 548 Ahsan, Md.R. 536 Akbarzadeh, A. 694 Akhlagpour, S. 47 Akiyama, Y. 237 Alam, S. Imran 245 Alhady, S.S.N. 398 Ali, M.E. 384 Ali, S. 728, 758, 762 Ali, S.H. 121, 556 Ali, Z. 463 Alias, Norma 720 Alizadeh, M. 215, 439 Almejrad, A. 143 Alqap, A.S.F. 108
Chang, H.S.W. 380 Chang, S.H. 153, 789 Che Azemin, M.Z. 655 Che Harun, F.K. 283, 305, 348, 507 Che Man, Y.B. 384 Chee, P.S. 305 Chen, C.C. 69, 375 Chen, C.H. 380 Chen, G.C. 367 Chen, H.C. 356 Chen, H.S. 225, 262 Chen, J.M. 148 Chen, W.C. 380 Cheng, C.C. 367 Cheong, J.P.G. 222 Chia, F.K. 674 Chiu, C.C. 266 Cho, J.M. 258, 344, 769 Choi, D.H. 516, 793 Choi, H.H. 801 Chong, S.S. 674 Chong, Y.Z. 139, 193 Chou, J.C. 69, 375 Chua, Y.P. 112 Chuah, S.Y. 631 Chuan, Y.L. 836 Chung, E.J. 769 Chung, K.C. 225, 262 C´ orcoles, E.P. 275 Cuong, N.V. 84
Amani, T.I. 398 Ambar, R. 781 Amiriyan, M. 51, 80, 102 Anas, S.A. 690 Ang, C.T. 112 Arabshahi, Z. 432 Ariff, A.K. 560 Ariff, F.H.M. 207 Arjmand, E.S. 340 Arof, A.K. 55 Arof, H. 484 Arsalan, A. 292 Arshad, L. 635 Ashofteh-Yazdi, A.R. 130 Ateeq, Ijlal Shahrukh 245 Attaran, A. 503 Augustynek, M. 320, 532 Avolio, A. 1 Ay, M.R. 47, 694, 708, 712 Azad, A.M. 292 Azmy, H. 507 B Baba, R. 393 Babusiak, B. 16 Bader, D. 10 Bahari-Kashani, M. 130 Bajuri, M.N. 773 Bani Hashim, A.Y. 739, 765 Bau, J.G. 356 Bayat, M. 167, 175, 179 Begum, T. 480, 548 Behnamghader, A. 823 Bhaskar, S. 548 Bhullar, A.S. 203 Bin Dato Abdul Kadir, M. Rafiq 215, 432, 439, 773 Binh, N.H. 279 Blencowe, A. 831 Boutelle, M.G. 275 C Cerny, M. 16, 320 Chang, C.C. 450
D
210,
Daisu, M. 237 Darzi, A. 275 Datta, R.S. 292 Davoodi, M.M. 167, 175, 179 Dawal, S.Z.M. 578 Deeba, S. 275 Desai, M.D. 611 Desai, M.R. 611 Deshpande, A.V. 308 Dhahi, T.S. 388 Dixit, V.V. 308 Doi, S. 187 Duy, L.H. 328
860 E Egawa, A. 187 Emami, A. 47 Eshraghi, A. 728, 758, 762, 778 Eslami, A. 47 Esteki, A. 200, 270 F Fallahi, A. 439 Fallahiarezoodar, A. 215 Farahpour, M. 823 Farzampour, S. 157 Fathi Kazerooni, A. 458 Fatoyinbo, H.O. 582 Federico, S. 182 Felix Yap, B.B. 635 Fong, K.M. 139 Forati, T. 823 Fujii, T. 663 Fujimoto, T. 724 Fukuyama, N. 663 G Gala, M. 16 Ganage, D. 308 Ganesan, P. 407 Gan, K.B. 424 Ge, S. 336 Ghadiri, H. 47, 694 Ghafarian, P. 47 Ghani, N.A.A. 207 Ghazanfari, N. 712 Gholizadeh, H. 728, 758, 762 Goh, J.C.H. 11 Gozalian, A. 823 Gulyaev, A. 854 Guo, L.Y. 190 Gym, L.K. 755 H Hagiwara, Y. 151 Hamdi, M. 80, 97, 108 Hameed, Kamran 245 Hamzaid, N.A. 20, 112, 735 Han, H.Z. 367 Han, S.K. 3, 182 Han, Y.H. 623
Author Index Hand, J.W. 332 Hani, A.F.M. 393, 635 Hanna, G.B. 275 Harfiza, A. 643 Harizam, M.Z. 755 Harun, N.H. 617 Haseeb, A.S.M.A. 73 Hashemi, B. 436 Hashim, U. 384, 388 Hashishin, Y. 296 Hasikin, K. 20 Hau, N.V.D. 591 He, S. 407 Heidari, M. 215 Hema, C.R. 287 Herzog, W. 3, 182 Hieda, I. 300 Higashi, Y. 724 Hirata, H. 332 Hirunviriya, S. 750 Hoa, L.M. 229 Hoa, N.V. 229 Hoque, M.E. 836 Houssein, H.A.A. 315, 463 Hsiao, T.C. 797 Hsieh, M.F. 84 Huan, .N. 229 Huang, J.S. 747 Huang, M.T. 225 Huang, T.H. 747 Huang, Y.Y. 574 Hughes, M.P. 582 Hussain, A. 527 Hussain, H. 560 Hussin, I.H. 841 Huy, H.Q.M. 678 I Ibrahim, Arsmah 650, 720 Ibrahim, F. 125, 241, 578 Ibrahimy, M.I. 536 Ibrahim, Z. 484 Ichisawa, S. 420 Idrus, R. 836 Iizuka, T. 187 Ikeya, Y. 663 Inoue, H. 249 Inoue, Y. 187 Inthavong, K. 467
Iramina, K. 336, 492, 519 Ishak, A.J. 121, 556 Ismail, A.H. 312, 315 Ismail, M.Y. 785 Iwahashi, M. 12, 492 J Jaafar, H. 667 Jaafar, M.S. 312, 315, 463 Jaffar Al-Mulla, E.A. 849 Jamil, A. 635 Janckulik, D. 363 Jang, M.Y. 811 Jason Chen, J.J. 367, 380, 600 Jayanthy, A.K. 443 Jeong, W.J. 500, 516 Jhong, G.H. 219 Jitvinder, H.S.D.S. 690 Joung, S. 116 Jumadi, A.M. 686 Jumadi, N.A. 424 Jung, W.B. 623 K Kadri, N.A. 582 Kahar, M.K.B.A. 578 Kamalanand, K. 411 Kamarudin, M.F. 283, 348 Kamarulafizam, I. 560 Kamarul, T. 815 Kamiya, S. 724 Kamyab, M. 728 Kang, D.H. 500, 516 Kang, J.H. 801 Karami, N. 270 Karimi, M.T. 758, 778 Karman, S. 20 Kashani, J. 210, 432, 439 Katayama, Y. 336, 519 Kawakami, M. 187 Kawamura, Y. 552 Kawanishi, H. 324 Kelnar, M. 363 Khai, L.Q. 328 Khalifa, O.O. 536 Khalil, I. 476 Khan, Sana H. 245 Khoa, T.Q.D. 229, 279, 328, 591, 678
Khoo, Y.J. 139 Khorsandi, R. 157 Kim, J.K. 488 Kim, K.H. 819 Kim, K.S. 801 Kim, N.H. 258, 344, 769 Kim, Y.S. 258 Kitawaki, T. 187 Kobayashi, A. 249 Kobayashi, E. 116 Kodabashi, A. 724 Koloor, S.S.R. 210 Krejcar, O. 363 Kumar, D.K. 655 L Labeed, F.H. 582 Lai, K.A. 225 Lee, A.L. 631 Lee, B.W. 488, 793 Lee, C.G. 488 Lee, C.K. 793 Lee, C.Y. 574 Lee, J.H. 623 Lee, M.H. 488, 500, 516, 793 Lee, P.F. 60 Lee, S.J. 500, 516 Lee, Y.B. 488, 500, 516 Lee, Y.H. 801 Lee, Y.K. 542 Leonard, T.R. 3 Leong, X.J. 193 Li, T.H. 262 Li, T.Y. 266 Li, X.P. 2 Li, Y.L. 84 Liao, H. 716 Lim, C.L. 587 Lim, E. 20 Lim, K.C. 569, 690 Lim, K.S. 197 Lim, W.K. 682 Limsakul, C. 233, 750 Lin, C.W. 797 Lin, G.W. 797 Lin, K.P. 403 Lin, M.H. 627 Lin, S.L. 134 Lin, W.C. 403
Lin Wang, Y.Y. 148 Linoby, A. 283, 348 Liu, C.H. 69 Liu, H.H. 403 Liu, P.H. 219, 747 Liu, R.S. 403 Liza, S. 73 Loerakker, S. 10 Loudos, G. 708, 712 Low, C.S. 631 Low, J.H. 139 Low, Y.F. 569 Lu, C.Y. 600 Lúvíksdóttir, A.G. 728 M Madou, M. 578 Madzin, H. 698 Mahmood, N.H. 283, 348, 496, 686 Mahmoud, A. 33 Mahmoud, H.L. 29, 33 Mahmud, R. 650 Mak, A.F.T. 8 Malarvili, M.B. 415 Malik, A.S. 635 Manaf, Y.A. 37 Manazir Hussain, S. 245 Mansor, M.M. 686 Mansor, M.S.F. 352 Mansor, W. 542 Mardziah, M. 827 Mariam, Mai 472 Mashor, M.Y. 40, 617, 667 Masjuki, H.H. 73 Mat Safri, N. 496, 507 Matsumaru, N. 552 Matsunaga, A. 492 Mazlan, M.H. 161 Md Ali, U.S. 735 Md Zin, H. 735 Merican, A.M. 815 Mineta, H. 371 Mirghami, S.E. 88 Miskon, Azizi 805 Misran, M. 60 Miswan, M.F. 596 Miyashita, T. 170 Miyata, M. 324
Miyazaki, T. 237 Mizushina, S. 332 Moghavvemi, M. 503 Mohamad, D. 596 Mohamed, N.S. 698 Mohamed Saaid, M.F. 484 Mohammed, H.A. 29, 33 Mohd Addi, M. 305 Mohd Ali, A.M. 785 Mohd Ali, M.A. 424 Mohd Kassim, N. 447 Mohd Noor, N. 393 Mohd Nordin, I.N.A. 305 Mohd Saad, N. 604 Mohd Taib, N.A. 340 Mohd Zain, N. 55 Mohd Zaman, M.H. 527 Mok, K.L. 682 Mokhtar, N. 484 Mokji, M.M. 604 Moo, E.K. 182 Moon, C.S. 801 Moon, J.W. 344 Mori, H. 663 Morisaki, A. 12 Morrison, W.A. 831 Moshrefpour Esfahani, M.H. 503 Mousavi, M.E. 200 Muda, S. 604 Mun, C.W. 623, 801, 811 Mustafa, F.H. 312, 315, 463 Mustafa, M.M. 527 Mustafa, S. 384 Muzaimi, M. 480 N Nagai, H. 237 Nagata, S. 324 Nakayama, H. 12 Nakayama, T. 296 Nam, H.Y. 815 Nam, K.C. 300 Nassereldeen, K.A. 88 Najafi Darmian, A. 694 Nazib Adon, M. 447 Nazwa, T. 388 Neghabat, R. 823 Nemati, R. 823 Neuman, M.R. 7
Ng, A.M.H. 836 Ng, S.C. 20, 511, 587 Ngadi, M.A. 596 Ngah, U.K. 398 Nguyen, D.H.T. 591 Nicolay, K. 10 Noh Dalimin, M. 447 Nojima, K. 336, 492 Nozari, A.A. 578 Nozari, H. 704 Nugroho, H. 393 Nuidod, A. 750 Nukman, Y. 755 Numata, K. 420 Numata, T. 187 O Ohashi, T. 151 Ohnishi, I. 116 Ohya, T. 716 Oka, H. 187 Okada, Y. 371 Okawai, H. 420 Omair, S.M. 245 Omar, A.F. 315 Omar, H. 480, 548 Omar, N.F. 92 Omar, Sarimah 674 Ong, M.K. 639 Ooi, S.N. 732 Oomens, C. 10 Oshkour, A.A. 167, 175, 179 Osman, M.K. 667 Osman, S.Z. 841 Othman, M.A. 496, 507 P Palmer, J. 831 Panchal, L. 611 Paraskeva, P. 275 Park, S.H. 769 Pashby, I. 836 Paul, A.D. 292 Paulraj, M.P. 287 Penhaker, M. 16, 320, 532 Penhakerova, P. 320 Penington, A.J. 831 Phinyomark, A. 233, 750 Phukpattaranont, P. 233, 750 Pingguan-Murphy, B. 182, 197, 815, 819, 841
Piven, V. 37, 854 Poon, C.T. 819 Pouladian, M. 694 Pourmajidian, M. 704 Purbolaksono, J. 80 Q Qiao, G.G. 831
R Rabbani, M. 458 Radiman, S. 92 Rahim, K.F. 393 Rahman, W.E.Z.W.A. 650 Ramakrishnan, S. 411 Ramasubba Reddy, M. 443 Rambely, A.S. 207 Ramesh, S. 51, 80, 102, 108 Ramli, R. 170, 484 Ramli, S.N. 496 Ranjit, S.S.S. 690 Ranu, H.S. 143, 203 Rasani, M.R. 467 Raveendran, P. 511, 587 Razman, R. 222 Razmjoo, A. 436 Reza, F. 480, 548 Rezaei, A. 823 Rezai Rad, G.A. 704 Rezayat, E. 359 Rifa’t, H.H. 755 Rosline, H. 40, 617 Rouhi, G.A. 130 Rozalina, A.H. 639 Ryu, Y.S. 488, 500, 793 S Sabtu, N.H. 105 Safaeepour, Z. 200 Safee, M.K.M. 125 Sahak, R. 542 Sakai, H. 845 Sakuma, Ichiro 9, 116, 716 Sakuragawa, S. 371 Salahuddin, L. 604 Salim, A.J. 690 Salleh, N.W. 88 Salleh, S.H. 560 Samra, K.A. 578 Samuel Lai, K.L. 102
Sano, S. 296 Sarkar, S. 708, 712 Saw, A. 112 Sawatsky, A. 3 Sayed, I.S. 643 See, E.Y.S. 11 Sekine, M. 724 Semkovic, J. 320 Seong, H.S. 258, 344, 769 Sepehri, B. 130 Shah, S.R. 611 Shamsudin, S.A. 92 Sharif, J.M. 596 Sharifi, D. 823 Shia, H.W. 134 Shiga, A. 187 Shimamoto, Y. 324 Shinozaki, Y. 663 Shirai, K. 552 Shirazi, A. 694 Sidek, K.A. 476 Sim, K.S. 631, 639, 674, 682 Soin, N. 578 Solomonidis, S. 778 Soo, Y.G. 569 Sopyan, I. 97, 108, 827 Srinivasan, S. 411 Stevens, G. 831 Strauss, D.J. 569 Stula, T. 532 Su, F.C. 190 Su, J.L. 627 Sudirman, R. 496 Sugimoto, K. 237 Sugiura, T. 332, 371 Sujatha, N. 443 Sulaiman, Hanifah 720 Suraya, R.A. 560 Suzuki, T.A. 371 Syed Shikh, S. 116 Sze, W.K. 148 T Tabatabaei, F. 200 Taghizadeh, S. 47 Tahir, Aisha 245 Takada, D. 237 Takahashi, K. 249 Takezawa, S. 12 Tamura, T. 724 Tan, C.K. 639
Tan, C.Y. 51, 80, 102 Tan, S.T. 674 Tanabe, T. 663 Tang, C.K. 55 Tang, W.C. 428 Tavakkoli, J. 359 Tengku Ibrahim, T.N. 105 Tham, L.K. 197 Thanh, N.T.M. 328 Tharakan, J.T.K.J. 548 Thien, D.D. 229, 279 Thio, T. 578 Thongpanja, S. 233 Thung, K.H. 587 Timimi, Z.A. 315, 463 Ting, C.M. 560 Ting, H.N. 20, 340, 523, 565 Ting, H.Y. 631, 674, 682 Ting, Y.T. 190 Toh, S.L. 11 Toi, V.V. 229, 279, 328, 591, 678 Tolouei, R. 51, 80, 102, 823 Torii, T. 492 Trinh, N.N.P. 279 Tsai, T.C. 367 Tso, C.P. 631, 639 Tu, J.Y. 467 Tukimin, R. 253 Tzeng, M.J. 574
U Umetani, K. 663 Umimoto, K. 324 Urzoshi, K.R. 292 Uslama, Jatendra 805 Uyop, N. 283, 686 V Vaghefi, S.E. 25 Verdan, P.M. 762 W Wan Abas, W.A.B. 20, 125, 161, 167, 179, 197, 222, 352, 484, 728, 739, 765, 819 Wan Abdullah, W.A.T. 60 Wan Mahadi, W.N.L. 253, 352 Wan Mahmud, W.M.H. 415 Wan Zaki, W.S. 105 Wang, J. 716 Wang, W.K. 148 Wong, H.C. 428 Wong, H.K. 11 Wu, C. 627 Wu, C.W. 600 Wu, H.F. 225, 262 Wu, M.S. 375 Wu, Y.J. 380 X Xu, H. 407 Y Yahya, M.Y. 773 Yamanaka, M. 170
Yanagida, J. 324 Yang, F.M. 600 Yang, F.S. 225 Yang, S.H. 262 Yang, Y.A. 769 Yap, B.K. 51, 80, 102 Yasiran, S.S. 650 Yassin, I.M. 542 Yatim, A.H.M. 682 Yau, Y.H. 167, 179, 407 Yazdchi, M.R. 458 Yeh, S.J. 134, 266 Yip, C.H. 340 Yokota, Y. 552 Yong, B.F. 565 Yong, K.P. 682 Yoshitake, S. 12 Yoto, T.Y. 371 Yu, N.Y. 153, 789 Yunus, J. 139, 496 Yusof, A. 241 Yusoff, N. 735 Yusop, M.H.M. 384 Z Zabidi, A. 542 Zahedi, E. 157, 359, 424 Zainuddin, R. 698 Zakaria, A. 203 Zamani, M.K. 170 Zeinali, A. 436 Zeraatkar, N. 712 Zourmand, A.R. 523 Zulkarnain, N. 348
Keyword Index
3 3-D foot print device 143 3D joint angle 161 3D joint power 161 3D scaffold 841 3-Dimensional 167 4 400-Series thermistor 245 A A/D converter 363 Accelerometer 732 Accuracy 262 Acetylcholine 663 Actinic keratoses 315 Activity monitor 732 Acute Leukaemia 40 Acute leukemia blood images 617 Adjuvant therapy 380 Adsorption 88 Affine moment invariants 668 Air pressure monitoring 344 Air pump/motor control 344 Airflow 134 Ambulation 732 Amperometry 367 Anastomosis 275 Angiogenin 37 Animal PET 713 Ankle 112 Ankle joint 200 ANOVA 88, 352 ANT+ 283 Anthropometric 193 Antimicrobial 55 Apnea 467 Arithmetic 279 Arm rehabilitation 781 Arm Swing 222 Arrhythmia 292, 552 Arthroplasty 773 Articulate patellar 755 Artifact 504 Artificial hand gripper 785
Artificial intelligence 542 Artificial knee cap 755 Artificial neural network 536, 667 Atheromatous disease 663 Atherosclerosis 411 Attenuation correction 643 Auditory late response 569 Auditory selective attention 569 Automatic speech recognition 565 Autonomic nervous system 371, 420 Autonomic neuropathy 266 Available chlorine 324 B Bactericidal activity 324 Bacterial colonies 105 Ball size 190 Bayesian inference 450 BCI 507 Beat 320 Benign 340 Bio-amplifier Bioceramic 103 Bioengineering 25 Biological tissue 296 Biomarker 37 Biomechanics 222, 229 Biomedical CT image 706 Biomedical engineering 16 Biomedical instrumentation 139 Biomedical measurement 303 Biomedical signal processing 542 BioMEMS 578 Biometric field 686 Biometrics 476 Bio-sensor 245, 275, 388 Block of Interest 686 Block positioning 686 Block-based 686 Blood cell 596 Blood disorder 596 Blood flow 403 Blood perfusion 444 Blood pressure 678 Blood substitutes 845 Blood vessel 411
Bluetooth 363 BMI 503 Body motion wave (BMW) 420 Body powered prosthetics 743 Bone marrow 815 Bone mechanics 130 Bone mineral density 47 Bone tissue engineering Boundaries 650 Bowel ischemia 275 Bowing 126 Bradford 682 Brain 237 Brain activities 328 Brain computer interface 488, 501, 516, 591 Brain machine interfaces 287 Brain temperature 332 Brain wave frequency 480 Breast cancer 37, 674 Breast tumor 340, 722 Brittle cracking 214 Bubble technology insoles 147 Burn wounds 443 C C# 363 Calcination 51 Calcium phosphate 108 Calibration 359, 716 Cancellous bone 440 Cancer cells 582 Capacitance 356 Capillary valve 578 Cartilage tissue engineering 841 CdS QDs 92 CdS-lysozyme conjugates 92 Cell death 182 Cell growth 812 Cell mechanics 182 Cell migration 428 Cell orientation 815 Cell viability 811 Cell-ECM interactions 428 Cells 447 Center of pressure 797 Cepstrum 528
Ceramic scaffolds 836 Change point detection 484 Children speech 523, 564 Chin 29 Chitin 55 Chitosan 55 Chondrocyte 182 Chondrocyte signaling 3 Circuit flow rate 12 Circuit pressure 12 Circuit priming volume 12 Classification 40, 121, 556, 611 Closed-loop control 789 Cluster index 750 Clustering 617, 621 CNT-GAS 88 Cognitive task 279 Color 639, 698 Color coded 283 Combined cue 516 COM-COP inclination angles 190 Common average reference 511 Common-mode noise reduction 258 Communication 320 Comparative analysis 488 Complete blood count 596 Complexity 270 Compression 587 Compressive failure 436 Compressive stress 167, 179 Computational modeling 25 Computer aided design (CAD) 735 Computerized evaluation 153 Consistency 222 Contact pressure 167 Continuous passive motion 112 Contractile phenotype 151 Contrast 643 Contrast normalization 631 Contrast stretching 604 Conventional lithographic 388 Core stability ball exercise 190 Correct rate 724 Correlation analysis 724 CR-39 NTDs 312 Cross evaluation 500 Cross-approximate entropy (Co-ApEn) 266 Cross-correlation 655 CST EM STUDIO® 448 CT 694 Current source density 511 CW CO2 laser 296 Cyclic uniaxial loading 815 Cyclic voltammetry 357, 367 Cytochrome C oxidase 602
D Damage mechanics 210 Data preprocessing 415 Decompressive craniotomy 237 Dental post 219 Detectability 643 Detector 534 Developing countries 574 Developmental coordination disorder 789 DEXA 47 DHS 225 Diabetic and normal subjects 144 Diabetics 266 Dielectric constant 340 Dielectrophoresis 582 Differentiate Diffusion 60 Digital camera 33 Digital filter 258 Digital image analyzer 659 Digital mammograms 720 Digital scanner 33 Discrete wavelet transform 536 Disinfect 324 Donations 574 Dopamine 367 Dorsolateral prefrontal cortex 492 Double layer 356 Doxorubicin 84 Drowsiness detection 308 Drowsiness monitoring 308 DWI 604 Dynamic air pressure sensor 420 E Early infarct 640 ECM degradation 430 Edge Detection 720 Education 16, 25 EEG signal processing 287 Elastic tubing 241 Elbow joint 215 Electric field intensity 301 Electric fields 447 Electric loading 459 Electrical conductivity 324 Electrical double layer 60 Electrocardiogram (ECG) 283, 320, 476, 532 Electrochromic 69 Electroencephalography (EEG) 484, 488, 500, 503, 507, 511, 516, 519, 548 Electrogastrography (EGG) 249 Electrolyzed water 324
Electromagnetic field radiation 352 Electromyogram 801 Electromyographic control 556 Electromyography (EMG) 352, 536, 750 Electromyography (EMG) signal 121, 125, 157, 233, 536, 750 Electrophoretic mobility 60 Electrostatic Interaction 93 Endodontically treated teeth 210, 432 Endoscopy 631 Energy absorption 175 Enhance distance active contour 650 Epilepsy 548 Equal error rate 562 Equiangular tight frame (ETF) 705 Equivalent circuits 519 Ergonomics 155 ERP 492 Ethernet 245 Euler/Cardanic angle 161 Eumelanin 393 European union 16 Evaluation 517 Evoked 549 Exhalation 470 Explicit dynamics procedure 213 Extracellular matrix 151 Extrusion 837 eZ430-Chronos 305 F Face 659 Face detection 308 Fast fourier transform (FFT) 507 Fatty amides 849 FEA 171 Feature analysis 698 Feature extraction 41, 557, 751 Feature selection 750 Feedforward neural networks 542 Ferrule effect 432 Fibrin 841 Filtration 533 Fine motor 789 Finite element 130 Finite element analysis 169, 179, 180, 219, 432, 747, 776 Finite element method 179, 436 Finite element model 411 Finite element modeling 182 Finite element study 439 Finite integration technique 447 FIR filter 258 Flap valve 578
Flex sensor 785 Flexible array sensor 375 Flexion relaxation 125 Flowrate 407 Fluid-structure interaction 398, 467 Flux peak 463 fMRI 724 Foot slide 222 Force estimation 157 Formant frequency 523 Forsterite 102 Fracture reduction force 116 Fracture reduction path 116 Fracture reduction robot 116 Freehand ultrasound 716 Frequency compression 527 Front 30 Full extension landing 175 Full-body suit 187 Function Fitting 519 Fundamental frequency 523 Fusing IT technology 769 Fuzzy K-means 617 Fuzzy Logic 121, 398 G Gait analysis system 139 Gamma-law 604 Gastroesophageal reflux disease (GERD) 249 Gastrostomy 249 GATE 694, 708, 712 Gaussian mixture model 560 Gendarussa vulgaris 819 Gender identification in children 523 Genetic algorithm 704 GMISS 574 GNU radio 300 Gold nanoparticles 380 Granulation tissue 635 Graph 739, 765 Graphical user interface 686 Graying 769 Ground reaction force 765 H Habituation 472 Haemodialysis 12 Hair color 315 Hammerstein-Wiener model 157 Hand prosthesis 735 Handwriting 153 Heart disease 116 Heart period 415 Heart rate 305, 415, 532 Heart rate monitor 283, 348 Heart rate variability 249, 371, 415, 552 Heating 108 Hematocrit (HCT) 463 Hemodynamic response 600 Hemoglobin 635 He-Ne laser 463 Herbal extract 819 High frequency hearing loss 527 Histogram normalization 631 Homeostasis 797 Hounsfield units 639 Human amniotic membrane 841 Human computer interaction 536 Human pupil measurement 686 Human spine 203 Human vertebra 436 Hybrid nanobioprobe 384 Hybridization kinetics 384 Hydrogel scaffolds 831 Hydroxyapatite 51, 80, 97, 108, 823 Hypothermia 332 Hypothyroid 542 Hysteresis 200 I ICA 635 IIR filter 258 Ilizarov ring 112, 139 Image recognition 631 Impact 130, 182 Impedance 356 In vivo 312, 315, 823 Inclination 797 Induced artifact 519 Inexpensive 139 Infants 332 Integrin expression 151 Interactive game 801 Interconnected pore structures 827 Inter-crystal scattering 708, 712 Interfacing technique 245 Internal bone remodeling 458 Inverse dynamics 161 In-vivo subcutaneous implantation 831 Ion resonance frequency 811 IQ test 678 Iris recognition 686 Iron deposition 623 IronCad 735 Ischemia 332 Ischemic stroke 639 Isometric contraction 496 Isotonic contraction 233 K Kane’s method 207 Knee biomechanics 3 Knee joint 175
L Lab-on-chip 582 LabVIEW 305 LAN 245 Laser percussion 296 Laser speckle 443 Laser-induced sound 296 Late auditory evoked potential 472 Lateral nanogap 388 Latex glove 682 LB agar 105 Legendre 587 Lesion recognition 674 Liner 728, 758 Liposome 845 Lipozyme 849 Live cell imaging 3 Loading 203 Longitudinal stress 148 Long-term 532 Look up table 292 Loss factor 340 Loss tangent 340 Low pass filtered speech 527 Low perfusion index 258 Low power portable medical equipment 332 Low-cost 732 LSCI 443 Lumbar interbody fusion 439 Lung cancer 170, 312 M Magnetic fields 805 Malay vowel 523 Malay vowel recognition 565 Malaysian university student 193 Malignant 340 Mammogram Manometer 262 Manometry 262 Mathematical model 428 Matlab distributed computing server 720 MCF-7 811 Mean frequency 233 Mechanical design 735 Mechanical properties 51, 80 Mechanical pump Mechanically ventilated patients 262 Mechanochemical 97 Median frequency 233 Medical robotic 785 Medical-grade silicone 29, 33 MEG 480, 548 Melanin 393 Melanoma 270
Memory biomaterials 143 Mercury 88 Mesenchymal stem cell 815 Method evaluating rehabilitation status 793 MG-63 811 MGF 704 MgO 80 Microangiography 663 Microbubbles 359 Microcalcifications 643 Microcirculation 25 Microcontroller programming 112, 139 Microdialysis 275 Microfluidic compact disc 578 Microphone 296 Microprocessor 320 Microsoft visual basic software 105 Microwave radiometry 332 Microwave sintering 827 Milling 108 Milling Speed 97 Miswak (Kayu Sugi) 480 MMV 682 Modified Beer-Lambert law 393 Monitoring 552 Monitoring device 781 Monte carlo simulation 424 Morphological analysis 627 Morphological data 407 Morphometric relation 659 Motion analysis 197 Motion assists 187 Motion changes 690 Motion-defined pattern 724 Motor control 496, 789 Motor cortex 229 Motor execution 591 Motor imagery 488, 500, 516 Motor imaging 591 Movement analysis 157 Moving K-means 617, 631 Mozart effect 678 MRI 674 MTT assay 811 Multilayer perceptron 40, 476 Multi-modality medical images 698 Murine retina 407 Muscle 125 Muscle activation 190 Muscle activity 352 Muscle weakness 3 Muscular activity 187 Musculoskeletal Model 116
N Nasogastric tube placement 262 Near-infrared spectroscopy 229, 591, 600 Negative feedback 245 Network model 407 Network topology 407 Neural network 287, 398, 476 Neurofeedback 500 Neuron activity 591 Neurorehabilitation 480 Neuroscience 678 Neurovascular coupling 600 NIRS machine 678 Nitric oxide 845 Noise robustness 587 Non-rapid eyes movement (NREM) Nonlinear 170, 266 Nonlinear feature extraction 270 Nonlinear finite element analysis 210 Nonlinear resonance 359 Non-stationary 484 Novelty detection 472 Numerical profile 739 O Ocular lens 25 OpenCV 308 Open-loop control 789 Optical properties 92, 424 Optimal performance 488 Optimization 134 Orthogonal moments 587 Orthosis 778 Orthotics 762 Osteoarthritis 3, 841 Osteoblast 819 Osteoporosis 225, 778 Oxidation 73 Oxide semiconductor 388 Oxygen saturation 305 P P300 492 Parallax 712 Parallel processing 720 Parametric dictionary (PD) 704 Parietal region 279 Parkinson's disease 367 Partial directed coherence 496 Particle swarm optimization 484, 542 Patellar 755 Patellar tendon reflex 197
Pattern recognition 556 PDT 348 Penetration 708 Perceptual reversal 336 Permittivity 300, 340 pH sensitivity 375 pH sensor 71 Phalange 735 Phantom 47 Phase stability Phase transformation 108 Pheomelanin 393 Photothermal therapy 380 Physical agent 480 Physical fatigue 511 Pistoning 758 Pixelated scintillator 708 Plantar foot pressures 143 Plaque 144 Plasmapheresis 12 PLEDs 158 Plethysmography 320 Point-of-care 578 Poisson process 450 Polyethylenimine (PEI) 92 Polymeric micelle 84 Polynomial 476 Polysilicon 389 Polysomnography 328 Porous calcium phosphates 827 Positioning subtraction 690 Post 432 Posture 797 Posture balance 190 Power spectrum 233 Pressure distribution 744 Pressure mat 797 Process parameter 837 Proliferation 815, 819 Prosthesis 556, 732, 735, 750 Prosthesis control 121 Prosthetic foot 739, 741 Prosthetics 765 Prostration 126 Psoriasis 315 Pulse oximeter 320 Pulse oximetry 258, 260 Pulse wave 320 Pupil recognition 686 PWM 293 Q QALY 576 Quantitative computed tomography 47, 436
Quasi-stiffness 200 Quincunx wavelet 615 R Rabbit 312 Radio frequency (RF) 367 Radio frequency sputtering 376 Radium-226 312 Random dot kinematogram 724 RANSAC 716 Rapid eyes movement (REM) 328 Rapid prototyping 837 Rat BMSC 819 Recurrent breast cancer 451 Reference analysis 451 Reflectance optical sensor array 424 Rehabilitation 241, 785, 801 Respiratory mechanics 135 Retinal vessel diameter 655 Retrieval study 73 RF module 306 Rheumatoid arthritis 773 Rhu-angiogenin 854 Rhu-erythropoietin 854 Rhu-TNF-α 854 Root fracture 432 RR Interval 532 RS-232 interface 321 rTMS 336, 492 S Safety 116 Satisfaction 420 Saturation 320 Saturation component 618, 621 Scaffold 823, 836 Scatter 694 Screen-printing 375 SDR 302 Segmentation 597, 604, 720 Self-organizing map (SOM) 627 Semi-infinite tissue 424 Senior-friendly industry 769 Septic shock 552 Serum 37 Shape 698 Short-time fourier transform (STFT) 250 Shoulder 207 Sigmoidal relationship 384 Signal enhancement 674 Silver ion 55 Silver sulfate 55 Simulation 459 Single sweeps 570
Sinterability 54, 80 Sintering 97 Sinusoidal 484 Skin and hair optics 315 Skin chromophores 393 Small animal imaging 708 Smooth muscle cells 151 Socket 758 Soft-tissue engineering 831 Solenoid valve control 344 Sol-Gel method 69 Somatosensory evoked potential 600 Soybean oil 849 Spatial pressure 407 Speaker verification 560 Species specific nanobiosensor 384 SPECT 643 Spectral envelope 527 Speech intelligibility 527 Spinal column 203 Spinal cord injury 778 SPR 696 SrHA porous scaffolds 829 Stability 775 Stainless-steel 356 Stair ascending 161 Star topology 245 Statistical test 463 Stem cells 815 Stiffness 237 Strength training 241 Stress 371 Stress concentration 219 Stress distribution 215, 439, 747 Stress-relaxation 203 Stretching exercise 148 Strip-type conductive fabric sensor 793 Structural similarity 631 Study 16 Superior parietal lobule 336 Support vector machine 270 Supramarginal gyrus 492 Surface charge 60 Surface plasmon resonance 380 Suspension 728, 758 SWI 623 Synchrotron radiation 663 Synthesis 102 Synthetic oligo-targets 384 Synthetic phenotype 151 T Tactile 237 Targeted delivery 84
Targeted oxygen delivery 845 Tchebichef 587 Technomedicum 69 Template matching 398 Texture analysis 627 Texture characterization 611 Texture descriptor 698 Thiourea 849 Thoracoscopy 170 Thresholding 604 Tibia 130 Tibiofemoral joint 179 Time delay neural network 565 Time dependent 203 Tissue engineering 811, 823, 836 Tissue phantom 443 Tissue sections 667 Titanium dioxide 69, 105 Tooth 659 Torques 207 Total elbow arthroplasty 215 Total knee replacement 73, 755 Total temporomandibular joint 747 Traction 428 Training system 641 Transabdominal 424 Transcranial magnetic stimulation 519 Transfusion alternative 847 Transitional cell carcinoma 380 Transmittance 69 Transradial prosthetics 743 Transtibial prosthesis 728 Trend 552 Tricalcium phosphate 823 Tuberculosis bacilli detection 667 U UHMWPE 73 Ulcers 635 Ultrasound 359 Ultrasound image 627 UMMC 762 Unconscious response 420 Unstable fracture 225 Urine 340 V Variability 222 Variable BPM option 293 Variable external resistance training 241 Velocity 407 Ventilator 134 Vessel profile 656 Vickers microhardness 98
Viscoelastic stiffness 201 Viscoelasticity 237 Viscosity 60 Visual processing 499 Visualization 363 VL-spectrometer 29, 30 Volumetry 624 W Wall shear stress 408 Warning 309 Wavelet 569 Wavelet coherence 473 Wavelet decomposition 613
Wavelet transform 398 Wavelet transform modulus maxima 720 Wavelet-phase stability 569 Wear 73 Wearable computing technology 795 Wearable monitoring 371 Wearable system 140 Weight bearing 728 Weight reduction Welfare equipments 771 Welfare of Korea and Japan 769 White blood cells 40 Wire 226 Wireless heart rate 283 Wireless sensor network 375 Wound healing 854 Wrist 773 Writer’s cramp 153 WSN 348 Z ZigBee wireless sensor technology 348 Zygoma 30