VOLTAGE-SENSITIVE ION CHANNELS
Biophysics of Molecular Excitability
by
H. RICHARD LEUCHTAG
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-1-4020-5524-9 (HB) ISBN 978-1-4020-5525-6 (e-book)
Published by Springer, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. www.springer.com
Printed on acid-free paper
All Rights Reserved © 2008 Springer No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.
To Alice, Clyde, Penny, Jeremy, Joshua and Ilana Leuchtag, and to the memory of my parents, Käthe (Wagner) Leuchtag and Rudolf Wilhelm Leuchtag, who with United States citizenship became Kathe and Rudolph Leuchtag Light
Contents

Preface
Ch. 1 EXPLORING EXCITABILITY
  1. NERVE IMPULSES AND THE BRAIN
    1.1. Molecular excitability
    1.2. Point-to-point communication
    1.3. Propagation of an impulse
    1.4. Sodium and potassium channels
    1.5. The action potential
    1.6. What is a voltage-sensitive ion channel?
  2. SEAMLESS NATURE, FRAGMENTED SCIENCE
    2.1. Physics
    2.2. Chemistry
    2.3. Biology
  3. THE INTERDISCIPLINARY CHALLENGE
    3.1. Worlds apart
    3.2. Complex systems
    3.3. Interdisciplinary sciences bridge the gap
Ch. 2 INFORMATION IN THE LIVING BODY
  1. HOW BACTERIA SWIM TOWARD A FOOD SOURCE
  2. INFORMATION AND ENTROPY
  3. INFORMATION TRANSFER AT ORGAN LEVEL
    3.1. Sensory organs
    3.2. Effectors: Muscles, glands, electroplax
    3.3. Using the brain
    3.4. Analyzing the brain
  4. INFORMATION TRANSFER AT TISSUE LEVEL
  5. INFORMATION TRANSFER AT CELL LEVEL
    5.1. The cell
    5.2. Cells of the nervous system
    5.3. The neuron
    5.4. Crossing the synapse
    5.5. The “psychic” neuron
    5.6. Two-state model "neurons"
    5.7. Sensory cells
    5.8. Effector cells
  6. INFORMATION TRANSFER AT MEMBRANE LEVEL
    6.1. Membrane structure
    6.2. G proteins and second messengers
  7. INFORMATION TRANSFER AT MOLECULAR LEVEL
    7.1. Chirality
    7.2. Carbohydrates
    7.3. Lipids
    7.4. Nucleic acids and genetic information
    7.5. Proteins
  8. INFORMATION FLOW AND ORDER
    8.1. Information flow and time scales
    8.2. The emergence of order
Ch. 3 ANIMAL ELECTRICITY
  1. DO ANIMALS PRODUCE ELECTRICITY?
    1.1. Galvani’s “animal electricity”
    1.2. Volta’s battery
    1.3. Du Bois-Reymond’s "negative variation"
  2. THE NERVE IMPULSE
    2.1. Helmholtz and conduction speed
    2.2. Pflüger evokes nerve conduction
    2.3. Larger fibers conduct faster – but not always
    2.4. Refractory period and abolition of action potential
    2.5. Solitary rider, solitary wave
  3. BIOELECTRICITY AND REGENERATION
    3.1. Regeneration and the injury current
    3.2. Bone healing and electrical stimulation
    3.3. Neuron healing
  4. MEMBRANES AND ELECTRICITY
    4.1. Bernstein's membrane theory
    4.2. Quantitative models
    4.3. The colloid chemical theory
    4.4. Membrane impedance studies
    4.5. Liquid crystals and membranes
  5. ION CURRENTS TO ACTION POTENTIALS
    5.1. The role of sodium
    5.2. Isotope tracer studies
    5.3. Hodgkin and Huxley model the action potential
    5.4. Membrane noise
    5.5. The patch clamp and single-channel pulses
  6. GENETICS REVEALS CHANNEL STRUCTURE
    6.1. Channel isolation
    6.2. Genetic techniques
    6.3. Modeling channel structure
  7. HOW DOES A CHANNEL FUNCTION?
    7.1. The hypothesis of movable gates
    7.2. The phase-transition hypothesis
    7.3. Electrodiffusion reconsidered
    7.4. Ferroelectric liquid crystals as channel models
Ch. 4 ELECTROPHYSIOLOGY OF THE AXON
  1. EXCITABLE CELL PREPARATIONS
    1.1. A squid giant axon experiment
    1.2. Node of Ranvier
    1.3. Molluscan neuron
  2. TECHNIQUES AND MEASUREMENTS
    2.1. Space clamp
    2.2. Current clamp
    2.3. Voltage clamp
    2.4. Internal perfusion
  3. RESPONSES TO VOLTAGE STEPS
    3.1. The current–voltage curves
    3.2. Step clamps and ramp clamps
    3.3. Repetitive firing
    3.4. The geometry of the nerve impulse
  4. VARYING THE ION CONCENTRATIONS
    4.1. The early current
    4.2. The delayed current
    4.3. Divalent ions
    4.4. Hydrogen ions
    4.5. Varying the ionic environments
  5. MOLECULAR TOOLS
    5.1. The trouble with fugu
    5.2. Lipid-soluble alkaloids
    5.3. Quaternary ammonium ions
    5.4. Peptide toxins
  6. THERMAL PROPERTIES
    6.1. Effect of temperature on electrical activity
    6.2. Effect of temperature on conduction speed
    6.3. Excitation threshold, temperature and accommodation
    6.4. Stability and thermal hysteresis
    6.5. Temperature effects on current–voltage characteristics
    6.6. Temperature pulses modify ion currents
    6.7. Temperature and membrane capacitance
    6.8. Heat generation during an impulse
  7. OPTICAL PROPERTIES
    7.1. Membrane birefringence
    7.2. Ultraviolet effects
  8. MECHANICAL PROPERTIES
    8.1. Membrane swelling
    8.2. Mechanoreception
Ch. 5 ASPECTS OF CONDENSED MATTER
  1. THE LANGUAGE OF PHYSICS
    1.1. The Schrödinger equation
    1.2. The Uncertainty Principle
    1.3. Spin and the hydrogen atom
    1.4. Identical particles—why matter exists
    1.5. Tunneling
    1.6. Quantum mechanics and classical mechanics
    1.7. Quantum mechanics and ion channels
  2. CONDENSED MATTER
    2.1. Liquids and solids
    2.2. Polymorphism
    2.3. Quasicrystals
    2.4. Phonons
    2.5. Liquid crystals
  3. REVIEW OF THERMODYNAMICS
    3.1. Laws of thermodynamics
    3.2. Characteristic functions
  4. PHASE TRANSITIONS
    4.1. Phase transitions in thermodynamics
    4.2. Transitions of first order
    4.3. Chemical potentials, metastability and phase diagrams
    4.4. Transitions of second order
    4.5. Qualitative aspects of phase transitions
  5. FROM STATISTICS TO THERMODYNAMICS
    5.1. Phase space
    5.2. The canonical distribution
    5.3. Open systems
    5.4. Thermodynamics of quantum systems
    5.5. Phase transitions in statistical mechanics
    5.6. Structural transitions in ion channels
Ch. 6 IONS IN THE ELECTRIC FIELD
  1. REVIEW OF ELECTROSTATICS
    1.1. Forces, fields and media
    1.2. The laws of electrostatics
  2. MOVEMENT OF IONS IN AN ELECTRIC FIELD
    2.1. Current
    2.2. Ohm's law
    2.3. Capacitance and inductance
    2.4. Circuits and membrane models
  3. CABLE THEORY
    3.1. The cable equations
    3.2. Application to the squid axon
  4. THERMODYNAMICS OF DIELECTRICS
    4.1. Electrochemical potential
    4.2. The Nernst-Planck equation
    4.3. Thermodynamics of electric displacement and field
    4.4. Electrets
  5. MOTIONS OF CELLS IN ELECTRIC FIELDS
    5.1. Dielectrophoresis
    5.2. Electrorotation
  6. MOVEMENT OF IONS THROUGH MATTER
    6.1. Movement of ions through liquid solutions
    6.2. Surface effects
    6.3. Movement of ions through solids
    6.4. Ionic switches
    6.5. Ionic polarons and excitons
  7. SUPERIONIC CONDUCTION
    7.1. Sodium-ion conductors
    7.2. Superionic conduction in polymers and elastomers
    7.3. Are ion channels superionic conductors?
Ch. 7 IONS DRIFT AND DIFFUSE
  1. THE ELECTRODIFFUSION MODEL
    1.1. The postulates of the model
    1.2. A mathematical membrane
    1.3. Boundary conditions
  2. ONE ION SPECIES, STEADY STATE
    2.1. The Nernst-Planck equation
    2.2. Electrical equilibrium
  3. THE CONSTANT FIELD APPROXIMATION
    3.1. Linearizing the equations
    3.2. The current-voltage relationship
    3.3. Comparison with data
  4. AN EXACT SOLUTION
    4.1. One-ion steady-state electrodiffusion
    4.2. Finite current
    4.3. Reclaiming the dimensions
    4.4. Electrical equilibrium
    4.5. Applying the boundary conditions
    4.6. Equal potassium concentrations
Ch. 8 MULTI-ION AND TRANSIENT ELECTRODIFFUSION
  1. MULTIPLE SPECIES OF PERMEANT IONS
    1.1. Ions of the same charge
    1.2. Ions of different charges
    1.3. The Goldman–Hodgkin–Katz equation
  2. TIME-DEPENDENT ELECTRODIFFUSION
    2.1. Scaling of variables
    2.2. The Burgers equation
    2.3. A simple case
  3. CRITIQUE OF THE CLASSICAL MODEL
Ch. 9 MODELS OF MEMBRANE EXCITABILITY
  1. THE MODEL OF HODGKIN AND HUXLEY
    1.1. Ion-current separation and ion conductances
    1.2. The current equation
    1.3. The independence principle
    1.4. Linear kinetic functions
    1.5. Activation and inactivation
    1.6. The partial differential equation of Hodgkin and Huxley
    1.7. Closing the circle
  2. EXTENSIONS AND INTERPRETATIONS
    2.1. The gating current
    2.2. Probability interpretation of the conductance functions
    2.3. The Cole–Moore shift
    2.4. Mathematical extensions of the Hodgkin-Huxley equations
    2.5. The propagated action potential is a soliton
    2.6. Action potential as a vortex pair
    2.7. Catastrophe theory version of the model
    2.8. Beyond the squid axon
  3. EVALUATION OF THE HODGKIN-HUXLEY MODEL
    3.1. Current separation
    3.2. Voltage dependence of the conductances
    3.3. Time variation of the conductances
    3.4. The separation of ion kinetics
    3.5. We’re not out of the woods yet
  4. THE CONCEPT OF AN ION CHANNEL
    4.1. Pore or carrier – or what?
    4.2. "Pore" and "channel": Shifting meanings
    4.3. Limitations of the phenomenological approach
Ch. 10 ADMITTANCE TO THE SEMICIRCLE
  1. OSCILLATIONS, NORMAL MODES AND WAVES
    1.1. Simple pendulum
    1.2. Normal modes
    1.3. The wave equation
    1.4. Fourier series
    1.5. The Fourier transform of a vibrating string
  2. MEMBRANE IMPEDANCE AND ADMITTANCE
    2.1. Impedance decreases during an impulse
    2.2. Inductive reactance
    2.3. A simple circuit model
  3. TIME DOMAIN AND FREQUENCY DOMAIN
    3.1. Fourier analysis
    3.2. The complex admittance
    3.3. Constant-phase-angle capacitance
  4. DIELECTRIC RELAXATION
    4.1. The origin of electric polarization
    4.2. Local fields affect permittivity
    4.3. Dielectric relaxation and loss
    4.4. Cole–Cole analysis
  5. FREQUENCY-DOMAIN MEASUREMENTS
    5.1. Linearizing the model of Hodgkin and Huxley
    5.2. Frequency response of the axonal impedance
    5.3. Pararesonance
    5.4. Impedance of the Hodgkin–Huxley axon membrane
    5.5. Generation of harmonics
    5.6. Data fits to squid-axon sodium system
    5.7. Admittance under suppressed ion conduction
Ch. 11 WHAT’S THAT NOISE?
  1. STOCHASTIC PROCESSES AND STATISTICAL LAWS
    1.1. Stochastic processes
    1.2. Stationarity and ergodicity
    1.3. Markov processes
  2. NOISE MEASUREMENT AND ANALYSIS TECHNIQUES
    2.1. Application of Fourier analysis to noise problems
    2.2. Spectral density and autocorrelation
    2.3. White noise
  3. EFFECTS OF NOISE ON NONLINEAR DYNAMICS
    3.1. An aperiodic fluctuation
    3.2. The Langevin equation
  4. NOISE IN EXCITABLE MEMBRANES
    4.1. A nuisance becomes a technique
    4.2. Fluctuation phenomena in membranes
    4.3. 1/f noise
    4.4. Lorentzian spectra
    4.5. Multiple Lorentzians
    4.6. Nonstationary noise
    4.7. Light scattering spectra
  5. IS THE SODIUM CHANNEL A LINEAR SYSTEM?
    5.1. Sodium-current characteristics
    5.2. Admittance and noise
  6. MINIMIZING MEASUREMENT AREA
    6.1. Patch clamping
    6.2. Elementary stochastic fluctuations in ion channels
Ch. 12 ION CHANNELS, PROTEINS AND TRANSITIONS
  1. THE NICOTINIC ACETYLCHOLINE RECEPTOR
  2. CULTURED CELLS AND LIPOSOMES
    2.1. Sealing the pipette to the membrane
    2.2. Reconstitution of channels in bilayers
    2.3. Reconstitution of sodium channels
  3. SINGLE-CHANNEL CURRENTS
    3.1. Unitary potassium currents
    3.2. Unitary sodium currents
  4. MACROSCOPIC CURRENTS FROM CHANNEL TRANSITIONS
    4.1. The two-state model
    4.2. Ohmic one-ion channels
    4.3. Time dependence
    4.4. Critique of the methodology
  5. PROTEIN STRUCTURES
    5.1. Amino acids: Building blocks of proteins
    5.2. Primary structure
    5.3. Levels of structural organization
    5.4. The alpha helix
    5.5. The beta sheet
    5.6. Domains and loop regions
    5.7. Structure classifications and representations
    5.8. Alpha-domain structures
    5.9. Alpha/beta structures
    5.10. Antiparallel beta structures: jelly rolls and barrels
  6. METALLOPROTEINS
    6.1. Metalloproteins in physiology and toxicology
    6.2. Voltage-sensitive ion channels as metalloproteins
  7. MEMBRANE PROTEINS
    7.1. Membrane-spanning protein molecules
    7.2. Crystallization of membrane proteins
    7.3. Biosynthesis of membrane proteins
  8. TRANSITIONS IN PROTEINS
    8.1. Vibrations and conformational transitions
    8.2. Allosteric transitions in myoglobin and hemoglobin
    8.3. Allostery in ion channels
Ch. 13 DIVERSITY AND STRUCTURES OF ION CHANNELS
  1. THE ROLE OF STRUCTURE
  2. FAMILIES OF ION CHANNELS
    2.1. Molecular biology
    2.2. Evolution of voltage-sensitive ion channels
  3. MOLECULAR BIOLOGY PROBES CHANNEL STRUCTURE
    3.1. Genetic engineering of ion channels
    3.2. Obtaining the primary structure
    3.3. Hydropathy analysis
    3.4. Site-directed mutagenesis
  4. CLASSIFICATION OF ION CHANNELS
    4.1. Nomenclature
    4.2. Classification criteria
    4.3. Toxins and pharmacology
    4.4. Voltage-sensitive ion channels and disease
  5. POTASSIUM CHANNELS: A LARGE FAMILY
    5.1. Shaker and related mutations of Drosophila
    5.2. Diversity of potassium channels
    5.3. Three groups of K channels
    5.4. Voltage-sensitive potassium channels
    5.5. Auxiliary subunits
    5.6. Inward rectifiers
    5.7. Potassium channels and disease
  6. VOLTAGE-SENSITIVE SODIUM CHANNELS: FAST ON THE TRIGGER
    6.1. Neurotoxins of VLG Na channels
    6.2. Types of VLG Na channels
    6.3. Positively charged membrane-spanning segments
    6.4. Proton access to channel residues
    6.5. Mutations in sodium channels
  7. CALCIUM CHANNELS: LONG-LASTING CURRENTS
    7.1. Function of VLG Ca channels
    7.2. Structure of VLG Ca channels
    7.3. Types of VLG Ca channels
    7.4. Calcium-channel diseases
  8. H+-GATED CATION CHANNELS: THE ACID TEST
  9. CHLORIDE CHANNELS: ACCENTUATE THE NEGATIVE
    9.1. Structure and function of chloride channels
    9.2. Chloride-channel diseases
  10. HYPERPOLARIZATION-ACTIVATED CHANNELS: IT’S TIME
  11. CYCLIC NUCLEOTIDE GATED CHANNELS
  12. MITOCHONDRIAL CHANNELS
  13. FUNGAL ION CHANNELS–ALAMETHICIN
  14. THE STRUCTURE OF A BACTERIAL POTASSIUM CHANNEL

Ch. 14 MICROSCOPIC MODELS OF CHANNEL FUNCTION
  1. GATED STRUCTURAL PORE MODELS
    1.1. Structural gated pores
    1.2. Selectivity filter and selectivity sequences
    1.3. Independence of ion fluxes
    1.4. Gates
    1.5. A "paradox" of ion channels
    1.6. Bacterial model pores and porins
    1.7. Water through the voltage-sensitive ion channel?
    1.8. Molecular dynamics simulations
  2. MODELS OF ACTIVATION AND INACTIVATION
    2.1. Armstrong model
    2.2. Barrier-and-well models of the channel
    2.3. The inactivation gate
    2.4. Beyond the gated pore
  3. ORGANOMETALLIC CHEMISTRY
    3.1. Types of intermolecular interactions
    3.2. Organometallic receptors
    3.3. Supramolecular self-assembly by π interactions
  4. PLANAR ORGANIC CONDUCTORS
  5. ALTERNATIVE GATING MODELS
    5.1. The theories of Onsager and Holland
    5.2. Ion exchange models
    5.3. Hydrogen dissociation and hydrogen exchange
    5.4. Dipolar gating mechanisms
    5.5. A global transition with two stable states
    5.6. Aggregation models
    5.7. Condensed state models
    5.8. Coherent excitation models
    5.9. Liquid crystal models
  6. REEXAMINATION OF ELECTRODIFFUSION
    6.1. Classical electrodiffusion – what went wrong?
    6.2. Are the "constants" constant?
  7. ORDER FROM DISORDER?
Ch. 15 ORDER FROM DISORDER
  1. COMPLEXITY AND CRITICALITY
    1.1. The emergence of complexity
    1.2. Power laws and scaling in physical statistics
    1.3. Universality
    1.4. Emergent phenomena
  2. FRACTALS
    2.1. Self-similarity
    2.2. Scaling and fractal dimension
    2.3. Fractals in time: 1/f noise
    2.4. Fractal transport in superionic conductors
    2.5. Self-organized criticality
  3. ORDER, DISORDER AND COOPERATIVE BEHAVIOR
    3.1. Temperature and entropy
    3.2. The perfect spin gas
    3.3. Thermodynamic functions of a spin gas
    3.4. Spontaneous order in a real spin gas
  4. FLUCTUATIONS, STABILITY, MACROSCOPIC TRANSITIONS
    4.1. Fluctuations and instabilities
    4.2. Convective and electrohydrodynamic instabilities
    4.3. Spin waves and quasiparticles
    4.4. The phonon gas
    4.5. The spontaneous ordering of matter
  5. PHASE TRANSITIONS
    5.1. Order variables and parameters
    5.2. Mean field theories
    5.3. Critical slowing down and vortex unbinding
  6. DISSIPATIVE STRUCTURES
    6.1. Thermodynamics of irreversible processes
    6.2. Evolution of order
    6.3. Synergetics
    6.4. A model of membrane excitability

Ch. 16 POLAR PHASES
  1. ORIENTATIONAL POLAR STATES IN CRYSTALS
    1.1. Piezoelectricity
    1.2. Pyroelectricity
    1.3. The strange behavior of Rochelle salt
    1.4. Transition temperature and Curie-Weiss law
    1.5. Hysteresis
    1.6. Ferroic effects
  2. THERMODYNAMICS OF FERROELECTRICS
    2.1. A nonlinear dielectric equation of state
    2.2. Second order transitions
    2.3. Field and pressure effects
    2.4. Chirality and self-bias
    2.5. Admittance and noise in ferroelectrics
  3. STRUCTURAL PHASE TRANSITIONS IN FERROELECTRICS
    3.1. Order-disorder and displacive transitions
    3.2. Spontaneous electrical pulses
    3.3. Soft lattice modes
    3.4. Hydrogen-bonded ferroelectrics
  4. FERROELECTRIC PHASE TRANSITIONS AND CONDUCTION
    4.1. Tris-sarcosine calcium chloride
    4.2. Betaine calcium chloride dihydrate
    4.3. Dielectric relaxation in structural transitions
    4.4. Cole-Cole dispersion; critical slowing down
    4.5. From ferroelectric order to superionic conduction
    4.6. Ferroelectric semiconductors
  5. PIEZO- AND PYROELECTRICITY IN BIOLOGICAL TISSUES
    5.1. Pyroelectric properties of biological tissues
    5.2. Piezoelectricity in biological materials
  6. PROPOSED FERROELECTRIC CHANNEL UNIT IN MEMBRANES
    6.1. Early ferroelectric proposals for membrane excitability
    6.2. The ferroelectric–superionic transition model
    6.3. Field-induced birefringence in axonal membranes
    6.4. Membrane capacitance versus temperature
    6.5. Surface charge
    6.6. Field effect and the function of the resting potential
    6.7. Phase pinning and the action of tetrodotoxin
  7. THE CHANNEL IS NOT CRYSTALLINE
Ch. 17 DELICATE PHASES AND THEIR TRANSITIONS
  1. MESOPHASES: PHASES BETWEEN LIQUID AND CRYSTAL
    1.1. Nematics and smectics
    1.2. Calamitic and discotic liquid crystals
    1.3. Helical structures: Cholesterics and blue phases
    1.4. Columnar liquid crystals
  2. STATES AND PHASE TRANSITIONS OF LIQUID CRYSTALS
    2.1. Correlation functions in liquid crystals
    2.2. Symmetry, molecular orientation and order parameter
    2.3. Free energy of the inhomogeneous orientational structure
    2.4. Modulated orientational structure
    2.5. Free energy of a smectic liquid crystal of type A
    2.6. Stability of the smectic phase
    2.7. Phase transitions between smectic forms
    2.8. Inversions in chiral liquid crystals
  3. ORDER PARAMETERS UNDER EQUILIBRIUM CONDITIONS
    3.1. Biaxial smectics
    3.2. The role of fluctuations
    3.3. Effect of impurities
  4. FIELD-INDUCED PHASE TRANSFORMATIONS
    4.1. Dielectric permittivity of liquid crystals
    4.2. Unwinding the helix
    4.3. The Fredericks transition
  5. POLARIZED STATES IN LIQUID CRYSTALS
    5.1. Flexoelectric effects in nematics and type-A smectics
    5.2. Flexoelectric deformations
    5.3. The flexoelectric effect in cholesterics
    5.4. Polarization and piezoelectric effects in chiral smectics
    5.5. The electroclinic effect
    5.6. The electrochiral effect
  6. THE FERROELECTRIC STATE OF A CHIRAL SMECTIC
    6.1. Behavior of a liquid ferroelectric in an external field
    6.2. Polarization and orientational perturbation
    6.3. Surface-stabilized ferroelectric liquid crystals
Ch. 18 PROPAGATION AND PERCOLATION IN A CHANNEL
  1. SOLITONS IN LIQUID CRYSTALS
    1.1. Water waves to nerve impulses
    1.2. Korteweg-deVries equation
    1.3. Nonlinear Schrödinger equation
    1.4. The sine-Gordon equation
    1.5. Three-dimensional solitons
    1.6. Localized instabilities in nematic liquid crystals
    1.7. Electric-field-induced solitons
    1.8. Solitons in smectic liquid crystals
  2. SELF-ORGANIZED WAVES
    2.1. The broken symmetries of life
    2.2. Autowaves
    2.3. Catastrophe theory model based on a ferroelectric channel
    2.4. The action potential as a polarization soliton
  3. BILAYER AND CHANNELS FORM A HOST–GUEST PHASE
    3.1. Protein distribution by molecular shape
    3.2. Flexoelectric responses in hair cells
  4. PERCOLATION THEORY
    4.1. Cutting bonds
    4.2. Site percolation and bond percolation
    4.3. Two conductors
    4.4. Directed percolation
    4.5. Percolation in ion channels
  5. MOVEMENT OF IONS THROUGH LIQUID CRYSTALS
    5.1. Chiral smectic C elastomers
    5.2. Metallomesogens
    5.3. Ionomers
    5.4. Protons, H bonds and cooperative phenomena

Ch. 19 SCREWS AND HELICES
  1. THE SCREW-HELICAL GATING HYPOTHESIS
  2. ORDER AND ION CHANNELS
    2.1. Threshold responses in biological membranes
    2.2. Mean field theories of excitable membranes
    2.3. Constant phase capacitance obeys a power law
    2.4. The open channel is an open system
    2.5. Self-similarity in currents through ion channels
  3. FERROELECTRIC BEHAVIOR IN MODEL SYSTEMS
    3.1. Ferroelectricity in Langmuir-Blodgett films
    3.2. Observations in bacteriorhodopsin
    3.3. Ferroelectricity in microtubules
  4. SIZING UP THE CHANNEL MOLECULE
    4.1. The size problem in crystalline ferroelectrics
    4.2. Size is a parameter
  5. THE DIPOLAR ALPHA HELIX
    5.1. Structure of the helix
    5.2. Helix–coil transition
    5.3. Dipole moment of the helix
    5.4. α-Helix solitons in protein
    5.5. Temperature effects in Davydov solitons
  6. ALPHA HELICES IN VOLTAGE-SENSITIVE ION CHANNELS
    6.1. The α-helical framework of ion channels
    6.2. Channel gating as a transition in an α-helix
    6.3. Water in the channel—again?
Ch. 20 VOLTAGE-INDUCED GATING OF ION CHANNELS
  1. ION CHANNEL: A FERROELECTRIC LIQUID CRYSTAL?
    1.1. Electroelastic model of channel gating
    1.2. Cole-Cole curves in a ferroelectric liquid crystal
    1.3. A voltage-sensitive transition in a liquid crystal
  2. ELECTRIC CONDUCTION ALONG THE ALPHA HELIX
    2.1. Electron transfer by solitons
    2.2. Proton conduction in hydrogen-bonded networks
    2.3. Dynamics of the alpha helix
  3. ION EXCHANGE MODEL OF CONDUCTION
    3.1. Expansion of H bonds and ion replacement
    3.2. Can sodium ions travel across an alpha helix?
    3.3. Relay mechanism
    3.4. Metal ions can replace protons in H bonds of ion channels
  4. GATELESS GATING
    4.1. How does a depolarization change an ion conductance?
    4.2. Enzymatic dehydration of ions
    4.3. Hopping conduction
  5. INACTIVATION AND RESTORATION OF EXCITABILITY
    5.1. Inactivation as a surface interaction
    5.2. Restoration of excitability
Ch. 21 BRANCHING OUT
  1. FERROELECTRIC LIQUID CRYSTALS WITH AMINO ACIDS
    1.1. Amino acids with branched sidechains
    1.2. Relaxation of linear electroclinic coupling
    1.3. Electrical switching near the SmA*–SmC* phase transition
    1.4. Two-dimensional smectic C* films
  2. FORCES BETWEEN CHARGED RESIDUES WIDEN H BONDS
    2.1. Electrostatics and the stability of S4 segments
    2.2. Changes in bond length and ion percolation
    2.3. Replacement of charged residues with neutrals
  3. MICROSCOPIC CHANNEL FUNCTION
    3.1. Tilted segments in voltage-sensitive channels
    3.2. Segment tilt and channel activation
    3.3. Chirality and bend
  4. CRITICAL ROLES OF PROLINE AND BRANCHED SIDECHAINS
    4.1. The role of proline
    4.2. The role of branched nonpolar amino acids
    4.3. Substitution leads to loss of voltage sensitivity
    4.4. Whole channel experiments
  5. NEW DATA, NEW MODELS
    5.1. Amino acids dissociate from the helix
    5.2. A twisted pathway in a resting channel
    5.3. A prokaryotic voltage-sensitive sodium channel
    5.4. Interactions with bilayer charges
  6. TOWARD A THEORY OF VOLTAGE-SENSITIVE ION CHANNELS
    6.1. The hierarchy of excitability
    6.2. Block polymers
    6.3. Coupling the S4 segments to the electric field
    6.4. A new picture is emerging
INDEX
PREFACE

The goal of this book is to explore the complexity of a microscopic bit of matter that exists in a myriad of copies within our bodies, the voltage-sensitive ion channel. We seek to investigate the way in which these macromolecules make it possible for the long fibers of our nerve and muscle cells to conduct impulses. These integral components of cell membranes are marvels of nature's evolutionary adaptation. To understand them we must probe the boundaries of physics and chemistry. Since function is intimately related to structure, we examine the molecular structure of channels, focusing on physical principles that govern all matter. With the application of genetic methods, our knowledge of ion channels has broadened and deepened. In the hope that research can help ameliorate suffering, we discuss the diseases that arise from channel malfunctions due to genetic mutations.

This book is intended for students and scientists who are willing to travel into uncharted waters of an interdisciplinary science. We approach the subject of voltage-sensitive ion channels from various points of view. This book seeks to give voice to the viewpoints of the physical and the biological scientist, and to bridge gaps in terminology and background. Readers may find this book to have both elementary and advanced aspects: For the reader trained in the biological sciences, it reviews background in physics and chemistry; for the reader trained in the physical sciences, it reviews background in physiology and biochemistry. Beyond the introductory chapters, we follow up concepts that may be as new and challenging to you, the reader, as at first they were to me.

Ten years or so ago at a Biophysical Society meeting I was talking to a fellow channel scientist, one considerably younger than I. I happened to mention that, in my opinion, voltage-sensitive ion channels will eventually have to be investigated by quantum mechanical methods.
“It’ll take a hundred years before that happens,” was his response before dashing off. This book is, in a sense, directed to that scientist. He and I are older now, and while I have learned that many things take longer than we expect, I would like him to consider that some things may take less long. While his estimate may well be right for a completely worked out solution to the problem of molecular excitability, there is no better time to begin working toward that goal than now.

This book refers to results condensed-state physicists have obtained in materials that exhibit structural and behavioral properties similar to those of membranes containing voltage-sensitive ion channels. I hope that this book, by bringing together molecular excitability and condensed state physics, will confirm that biology and physics are parts of the same world.

For this work I am indebted to many people. At UCLA, my professors Robert Finkelstein, David Saxon, Marcel Verzeano and Jean Bath stand out. James Swihart, my graduate adviser at the Indiana University Physics Department, taught me to sail the choppy seas of research; while in Europe, he discussed my thesis with Alan Hodgkin. Other influential professors at Indiana included Alfred Strickholm, Ludvik Bass (then a visiting professor from the University of Queensland, Australia) and
Walter J. Moore. Helpful during my postdoctoral work at the New York University Physics Department were Morris Shamos, Robert Rinaldi, Abraham Liboff and Charles Swenberg, as well as Rodolfo Llinas and Charles Nicholson at the New York University Medical Center. By convincing me that classical electrodiffusion is inadequate as a mathematical model of excitable membrane currents, Fred Dodge and James Cooley prodded me into looking for the reason for that inadequacy. Harvey Fishman was my mentor and collaborator at the University of Texas Medical Branch in Galveston and the Marine Biological Laboratory at Woods Hole; he remains my friend. At Woods Hole I met and was inspired by Kenneth S. “Casey” Cole. At Texas Southern University, Floyd Banks, Sunday Fadulu, Debabrata Ghosh, Oscar Criner and Mahmoud Saleh were research collaborators. Discussions with Fred Cummings, Rita Guttman, Lee Moore, Tobias “Toby” Schwartz, Gabor Szabo, David Landowne, Malcolm Brodwick, Susan Hamilton, Arthur “Buzz” Brown, Richard “Spike” Horn, Tony Lacerda, Sidney Lang, Georg Zundel and others helped keep me focused. Donald Chang was instrumental in turning my focus from the membrane to the channel. Ichiji Tasaki has been a friend and colleague. They, together with William J. Adelman Jr., collaborated with me in organizing a conference and editing a book on structure and function in excitable cells, a precursor to this volume. Stewart Kurtz, Robert Newnham and other members of the Materials Research Laboratory of Pennsylvania State University provided valuable insights into ferroelectricity. I was fortunate in meeting Vladimir Fridkin, as our discussions have been fruitful. Vladimir Bystrov, my collaborator and friend, has applied his knowledge of physics and his boundless energy to research, writing, translating and organizing conferences. Hervé Duclohier invited me to his lab to put predictions of my channel model to an experimental test with his collaborators. 
Said Bendahhou and his colleagues extended the test from parts of channels to whole channels. Michael Green, a friend and colleague, and Fishman have read parts of this book and provided valuable criticism; any remaining errors are of course my own.

I thank the many scientists on whose work I have depended, both those I have cited and—with sincere apologies—those I have not. My special gratitude goes to the authors whose illustrations provide figures in this volume, as well as to the permissions staffs of publishing houses and the Copyright Clearance Center. Jane Richardson kindly provided me with an updated version of a figure. The librarians who supplied me with research materials, particularly at the Butt–Holdsworth Memorial Library in Kerrville, Texas, deserve special mention. I appreciate the skill, patience and thoroughness of Springer editors Peter Butler, Tanja van Gaans and André Tournois, and typesetters Bhawna Narang and Nidhi Waddon.

My wife and intellectual companion, Alice Leuchtag, has been a constant source of support and encouragement throughout the writing of this book.

It is my hope that scientists will maintain an awareness of the outcomes of their research, applying science only to the building of a more just and peaceful world, in harmony with our planet.

H. R. L.
CHAPTER 1
EXPLORING EXCITABILITY
Voltage-sensitive ion channels are macromolecules that act as electrical components in the membranes of living organisms. While we know that these molecules carry out important physiological functions in many different types of cells, scientists first became aware of them in the study of the impulses that carry information along nerve and muscle fibers.

1. NERVE IMPULSES AND THE BRAIN

Our species, Homo sapiens, is unique among animals in its abilities to manipulate symbols, having developed languages and conceptualized space, time, matter, life, ethics and our place in the universe. These abilities are localized in the brain, about 1.4 kilograms of pink-gray organ. The complexity of the brain extends from macroscopic to microscopic—from its highly convoluted surface, through a labyrinth of lobes, tracts, nuclei and other anatomic structures, through a dense tissue of interconnected cells, through a rich mosaic of membranes, to the large molecules that make up those membranes and the membrane-spanning helical strands within them. It is remarkable that, despite the vast differences in human behavior from even that of our closest primate relatives, the molecular structures in our brains differ only in minor details from those of other mammals. Even more remarkable is the fact that such seemingly primitive forms as bacteria possess complex molecules that are shedding light on the details of related molecules in our brains.

1.1. Molecular excitability

The human body, like the bodies of other living organisms, is a tumult of electrical activity. Just as an electrocardiogram shows us that the heart is a powerful generator of electric currents emanating from the coordinated action of its nerve and muscle cells, so an electroencephalogram demonstrates that the brain likewise generates electricity. The cells of the heart, brain and other organs produce electric currents in the form of transient ion flows across the membranes that cover them.
The membranes are mosaic sheets of lipid and protein molecules. While the lipids form effective electrical
insulators, proteins of a particular class are capable of dispatching pulses of rapid ion conduction. These switchable protein macromolecules are called ion channels.1

Ion channels of one type, ligand-gated ion channels, recognize and react to specific molecules in their environment. When these ligand molecules attach, the ion channel changes its conformation and starts (or stops) conducting ions. Examples of ligand-gated ion channels include receptors for tastes and odors, and the macromolecules that receive elementary messages from other cells in the form of chemical messenger molecules. Among these we find hormones, such as thyroxine and insulin, and neurotransmitters, such as dopamine and acetylcholine. Ion channels of another type switch their conductivity in response to a change of the voltage across the membrane. These channels make it possible for impulses to travel along nerve and muscle fibers; it is to these voltage-sensitive ion channels that this book is devoted. Hybrid channels exhibit both voltage and ligand sensitivity.

The problem of the way ion channels respond to changes in membrane voltage—the problem of molecular excitability—has not been solved, although much progress has been made in this direction. This book will report on the background, history and ongoing efforts being made toward a solution of this problem. We will approach the problem from different directions, concerning ourselves not only with recent results, but also with earlier data and concepts.

1.2. Point-to-point communication

To make large, multicellular organisms, evolution has had to solve the important problem of communication within the body. For stationary organisms such as plants, that problem was essentially solved by sending signal molecules, hormones, in the fluids that move up and down the body.
Hormones also play an important part in communication within animals, but the information that can be sent by this endocrine system is limited in specificity by the number of different hormones that can be synthesized and recognized, and in speed by the circulatory system that transports them. To generate a system of communication capable of controlling the muscles of the body, producing visual images and performing other sophisticated tasks in fast-moving organisms, the “blind watchmaker,” evolution, had to do better.2 A rapid point-to-point communication system was required. The solution, which appeared early during the evolution of such invertebrates as jellyfish, was for certain specialized cells to grow fibers of great length and to send waves of electrical and chemical energy along them. These nerve impulses are complex examples of solitary waves. They travel along a vast network of nerve fibers, the nervous system. Data about the external environment and the status of the body are fed into this point-to-point communications network from sense receptors. In vertebrates, the sense data are processed into responses and memories in the central nervous system, which consists of the brain and the spinal cord. Their outputs signal muscles to contract by way of neuromuscular junctions, and stimulate the endocrine system by activating glands.

Nerve impulses are wonderful and mysterious. Every perceived sound, sight, smell and taste reaches our brains, and our consciousness, by nerve impulses. Every muscular movement, whether an eyeblink, a uterine contraction or a heartbeat, is
controlled by nerve impulses. Even the release of chemical messengers such as adrenaline or testosterone is stimulated by nerve impulses. The nerve impulse is an integral part of what we mean by cellular excitability, the ability of living cells to respond to their environment. Nerve impulses are waves that move along axons, the long tubular fibers of neurons. One convenient way to study them is to record their electrical signatures, called action potentials. Action potentials can also be recorded from muscle, gland and other cells. Vast numbers of experiments on a great variety of animal and plant cells have been carried out by biological scientists to study the electrical responses of excitable membranes; see Figure 1.1.3 It is only by information from such experiments that our ideas regarding the underlying basis of excitability can be tested.
Figure 1.1. Dedication of a book on voltage-sensitive ion channels by Susumu Hagiwara. From Hagiwara, 1983.
1.3. Propagation of an impulse

The question “What is the scientific basis of excitability?” has intrigued scientists for centuries. Isaac Newton evidently had a strong interest in this question as he posed these two Queries in his book Opticks4:
Qu. 23. Is not Vision perform'd chiefly by the Vibrations of [the Aether], excited in the bottom of the Eye by the Rays of Light, and propagated through the solid, pellucid and uniform Capillamenta of the optick Nerves into the place of Sensation? And is not Hearing perform'd by the Vibrations either of this or some other Medium, excited in the auditory Nerves by the Tremors of the Air, and propagated by the solid, pellucid and uniform Capillamenta of those Nerves into the place of Sensation? And so of the other Senses.

Qu. 24. Is not Animal Motion perform'd by the Vibrations of this Medium, excited in the Brain by the power of the Will, and propagated from thence through the solid, pellucid and uniform Capillamenta of the Nerves into the Muscles, for contracting and dilating them? ...
Much has been learned since Newton's time (including the facts that the ether doesn't exist and that nerve impulses can make muscles contract but not dilate), yet the core of the question of excitability remains unresolved; a fundamental understanding of the molecular basis of the action potential still eludes us.

The neuron's nerve fiber, the axon, is a long cylinder, which in some neurons of vertebrates has a fatty covering of myelin over it. The myelin sheath speeds up the action potential. For simplicity let us begin by considering an unmyelinated axon. The axon is bounded by a membrane called the axolemma, and it contains a watery, fibrous gel called the axoplasm. The axon is bathed in a body fluid that is essentially blood plasma, an aqueous solution rich in sodium ions, like seawater. The axoplasm has a much lower concentration of sodium ions, but a much higher concentration of potassium ions than the exterior solution. The anions, cations and neutral molecules present in the two solutions are distributed so as to make the solutions electrically neutral and at the same osmotic pressure. The sodium and potassium concentration differences represent two independent sources of energy.

A voltage measurement can tell us that a healthy axon, ready for a nerve impulse, is electrically charged. As a result of surface charges on the membrane, the axoplasm is negative relative to the external solution. The internal potential relative to the external solution in the inactive cell is called the resting potential. In a typical nerve axon, the potential of the axoplasm relative to the external medium (which serves as ground potential) is about −70 mV.

Sending a message requires a source of energy. Supplying energy only at the transmitting point would be inadequate, because the message would then diminish and be lost in the background noise. Energy sources therefore must be distributed all along the communication line.
In this way, as in a line of carefully placed upright dominoes, there is no limit to the length of the path. However, a line of dominoes can send only one message—until energy is provided to set the dominoes up again. Thus for ongoing communication two sources of energy are needed: one to transmit the message (an impulse sufficient to knock the dominoes down) and another to restore the metastable order of the system (work to set them up again). Such a strategy is used in the body to propagate nerve and muscle impulses. The nerve or muscle fiber is maintained in a high-energy state far from thermal equilibrium, known as the resting state. This term is a misnomer because the “resting” membrane is highly charged by a strong electric field. The resting voltage across a nerve membrane is usually about 70 mV, with the inside negative. Combining this with
the membrane thickness L of about 5 nm (1 nm = 10⁻⁹ m = 10 Å), we see that the average resting electric field, E = V/L, is of the order of 10⁷ V/m, a very high field. So the “rest” of a resting membrane is a tense one indeed! In this state, only a small stimulus is needed to initiate a wave in which the fiber rapidly falls to a state of lower energy. Part of the energy made available in this process must be passed along to a neighboring section to carry the wave on. This is the fast system, often called the sodium system for the current of sodium ions involved in it. After the impulse has passed, the high-energy resting state—perhaps better called the excitable state—is restored to ready the system for the next impulse. This is accomplished by the slow or delayed system, often called the potassium system.

1.4. Sodium and potassium channels

The high concentration of sodium ions on the outside relative to that inside the cell would tend to drive them in. In addition, their positive charge attracts them toward the negative interior of the axon. For these two reasons, the external sodium ions are at a high electrochemical potential energy relative to the axoplasm, which would drive them across the membrane through any available pathway. Macromolecules called sodium channels within the axonal membrane provide such pathways under certain conditions. When these conditions are met, the channels are said to be open; otherwise, they are closed. The terms “open” and “closed” are convenient labels but, as we shall see in the following chapters, should not be taken too literally. The voltage-sensitive sodium channels are pathways for sodium ions only when the membrane is partially depolarized.
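The field-strength arithmetic above, and the electrochemical “push” on sodium and potassium described in this section, can be checked with a few lines of Python. The 70 mV and 5 nm figures come from the text; the Nernst equation and the squid-axon concentrations are standard physiology brought in for illustration, not values from this chapter:

```python
import math

# Figures quoted in the text.
V_rest = 0.070   # magnitude of the resting potential, volts
L_mem = 5e-9     # membrane thickness, meters

# Average resting field, E = V/L.
E_field = V_rest / L_mem
print(f"resting field ~ {E_field:.1e} V/m")   # 1.4e+07 V/m

# Nernst (equilibrium) potential, E_ion = (RT/zF) ln(c_out / c_in).
# The concentrations are ballpark squid-axon values (an illustrative
# assumption, not figures from this chapter).
R, T, F = 8.314, 291.0, 96485.0   # J/(mol K); kelvins (18 C); C/mol

def nernst(c_out, c_in, z=1):
    return (R * T) / (z * F) * math.log(c_out / c_in)

E_Na = nernst(440.0, 50.0)   # mM outside / mM inside
E_K = nernst(20.0, 400.0)
print(f"E_Na ~ {E_Na * 1e3:+.0f} mV, E_K ~ {E_K * 1e3:+.0f} mV")
```

With these concentrations the sodium equilibrium potential comes out near +55 mV and the potassium one near −75 mV, which is why sodium is driven inward and potassium outward at the resting voltage.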
It takes only a rather modest depolarization (lowering of the absolute value of the voltage from resting) to reach the threshold at which the probability becomes high for the sodium channel to remodel itself into a different configuration, in which it becomes a selective ion conductor. Not all the sodium channels in an axon open to allow sodium ions to enter the axoplasm, however, and within a brief period of about 0.7 ms most of them close again, even while the depolarization is maintained. Now the sodium system is said to be inactivated.

Restoring the excitable state—setting the dominoes upright again—is the job of the potassium channels. Like the sodium channels, these are glycoprotein molecules embedded in the fatty membrane. The probability that the axonal potassium channels will open increases upon depolarization, but only after a brief delay. We emphasize an important point: The opening and closing of voltage-sensitive ion channels is not rigidly controlled by the membrane voltage. These are stochastic events, so that only their probability is voltage-dependent, as we will explore in Chapter 11.

1.5. The action potential

Now we can begin to see how a nerve impulse travels: Suppose a group of sodium channels in a region of an axon were to open. Then external sodium ions there would quickly enter the axon, pushed by the concentration difference and pulled by the
electrostatic force. As they carry their positive charges into the axoplasm, they drive the internal voltage toward zero and beyond, to positivity. As this depolarization spreads out within a local region surrounding the group of channels, neighboring Na channels sense it, respond and stochastically open, carrying the action forward. Like the line of dominoes, the array of sodium channels exists in a metastable situation; its destabilization spreads by local interactions. Thus the signal is carried from channel to channel and down the axon to its terminal. Because of inactivation, sodium channels close after a brief opening. After a delay the potassium ions flow outward, driven by their electrochemical potential difference. Because the K+ concentration is higher inside the cell, this current is oppositely directed to that of the sodium ions. The outward current restores the resting potential difference. It will take a little longer for that patch of axon to become excitable again; this refractory period is due to inactivation. The voltage-sensitive channels are restored to their excitable configurations by a shift of their molecular configurations, and the axon is ready to conduct another impulse.
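The stochastic character of gating stressed in the previous section can be sketched with a toy two-state model in which each channel opens independently with a voltage-dependent probability. Everything here, the Boltzmann form, the −40 mV midpoint and the 7 mV slope, is invented for illustration and describes no real channel:

```python
import math
import random

def p_open(v_mv, v_half=-40.0, slope=7.0):
    """Boltzmann open probability: a generic sigmoid of voltage.
    The midpoint and slope are invented, not fitted to any channel."""
    return 1.0 / (1.0 + math.exp(-(v_mv - v_half) / slope))

def count_open(v_mv, n_channels=1000, seed=1):
    """Each channel opens independently with probability p_open(v)."""
    rng = random.Random(seed)
    p = p_open(v_mv)
    return sum(rng.random() < p for _ in range(n_channels))

for v in (-70.0, -40.0, 0.0):
    print(f"{v:+.0f} mV: {count_open(v)} of 1000 channels open")
```

At the resting voltage almost all channels stay closed; depolarizing toward 0 mV does not force them open, it only makes opening overwhelmingly probable, which is the point the text emphasizes.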
Figure 1.2. The action potential rises from the level of the resting potential to a positive peak, then drops at a slower rate to the resting potential. It may “undershoot” the resting level and approach it from below. The time marker, 500 Hz, shows that the action potential is complete in about 2 ms. This figure, published by Hodgkin and Huxley in 1939, is one of the first pictures of a complete action potential. From Smith, 1996. Reprinted by permission from MacMillan Publishers Ltd: Nature 144:710-711 copyright 1939.
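The “row of dominoes” picture of propagation, including the one-way travel enforced by refractoriness, can be animated as a toy one-dimensional excitable medium. This is a cartoon of the logic, not a biophysical model:

```python
# Toy one-dimensional excitable medium ("row of dominoes").
# Each site is resting (.), active (*) or refractory (-):
# an active site excites resting neighbors, then becomes
# refractory for one step before it can fire again.
REST, ACTIVE, REFR = ".", "*", "-"

def step(sites):
    out = []
    for i, s in enumerate(sites):
        if s == ACTIVE:
            out.append(REFR)   # just fired -> refractory
        elif s == REFR:
            out.append(REST)   # recovers -> excitable again
        else:                  # resting: fires if a neighbor is active
            out.append(ACTIVE if ACTIVE in sites[max(i - 1, 0):i + 2] else REST)
    return "".join(out)

axon = ACTIVE + REST * 19      # stimulate the left end
for _ in range(8):
    print(axon)
    axon = step(axon)
```

Running it shows a single pulse marching left to right; the refractory site trailing the pulse is what keeps the excitation from re-igniting backward, just as inactivated sodium channels do behind a real impulse.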
An action potential, then, is a traveling electric wave normally initiated by a threshold depolarization, a sufficiently large lessening of the resting potential. (Alternatively, it may be initiated by heating or injuring the axon or muscle fiber.) The entire action potential at a given point is completed in two to three milliseconds; it
propagates along the axon, which may be as short as a millimeter or as long, for example, as a giraffe's leg. Figure 1.2 shows the time course of an action potential.5

The action potential is not a localized phenomenon. As the inward current flows, the ions spread in both directions, depolarizing adjacent regions. This activates neighboring sodium channels, moving the impulse ahead. If the depolarization has been applied, experimentally, to an excitable region of an excised axon, the action potential may start off in either direction, depending on electrode placement. However, once the impulse has started, it will only continue in the forward direction, since the sodium channels in the backward direction are inactivated. In the living organism the anatomy of the cell ensures that the impulse travels in only one direction, from cell body to axon terminal.

We have seen that a useful way to look at a nerve axon is that it is a system of metastable units extended along a line, like a row of dominoes. The only thing that keeps the sodium ions from flowing in and the potassium ions from flowing out until electrical and diffusional equilibrium is attained is the impermeability of the membrane. Any breach in that impermeability will initiate an ion current. Evolution has found a way to harness that metastability by breaking the membrane's impermeability with two separate sets of molecules, permeable to different ions, thereby creating an efficient and adaptable system of information transfer. One type of ion channel is necessary to permit the signaling current to flow, and another to carry an opposing current to restore the membrane to its excitable condition. In many nerve and muscle membranes, the sodium channel plays the first, and the potassium channel the second role.
This is not always the case; for example, calcium channels take the place of sodium channels at the axon terminal; the calcium ions they import into the cell trigger transmission of the signal across the synapse. Necessary as well is the energy-requiring job of maintaining the different ion concentrations inside and outside the cell; this job is carried out by metabolically driven membrane molecules called ion pumps.

This brief (and incomplete) description shows us in general terms how an action potential works and what a voltage-sensitive ion channel does. What is missing from this simple picture is an understanding of the way the ion channels themselves work. That is the riddle of molecular excitability. Here begins the trail that we will seek to follow in this book.

1.6. What is a voltage-sensitive ion channel?

The electric currents that are measured in experiments on axons are due to the movement of positive ions across the axolemma. The major part of the membrane area is impermeable to ions; it is occupied by a double layer of lipid molecules. Lipids are amphiphilic molecules, arranged with their polar, hydrophilic heads facing outward to the aqueous phases. Because the inner regions of the membranes are composed of the hydrophobic tails, ions lack the energy to enter, much less traverse them. It is by way of the ion channels, glycoprotein molecules that extend through the lipid bilayer, that ions may, under certain conditions, pass. The relationship between the bilayer and the protein molecules intrinsically embedded within it has been explored by the
freeze–fracture technique; see Figure 1.3.6 The carbohydrate chains of the glycoproteins are seen extending outward into the extracellular phase.
Figure 1.3. Schematic sketch of a cell membrane, showing the relation of intrinsic protein molecules to the lipid bilayer. From C. U. M. Smith, 1996, after B. Safir, 1975.
Among the various types of membrane proteins we shall focus on the ones directly involved in excitability. We have already mentioned the sodium channel and the calcium channel, rapidly switching conductors of their respective ions, and the slower potassium channel. These glycoprotein molecules are called voltage-sensitive (or voltage-dependent)7 ion channels, because it is the voltage across the membrane that controls their ion conductance. Because the ion concentrations inside the axon are different from those outside, the concentration differences act, along with the potential difference, to move the ions. The voltage plays two roles: Its decrease impels a change in the conformation of the molecules in their ionic environment, and it helps to drive the ions across.

In later chapters of this book we will review how these channels behave in various circumstances, that is, their function, and how they are put together, their structure. We will seek to answer the questions:

- How do the ions pass so rapidly through the voltage-sensitive ion channel?
- How does the channel manage to select specific types of ions to carry?
- What transformations does the conformation of the channel undergo that convert it from nonconducting to conducting and back?
- How are the opening and closing transformations coupled to the electric field?
- How does the structure of channels determine their function?
These are difficult questions and, although various models have been proposed, the full answers to them are not yet known. We can expect the answers to be rather subtle, and that it will require a great deal of fundamental knowledge to understand them. For this reason let us now take a brief tour through some aspects of the sciences of physics, chemistry and biology and their interdisciplinary combinations.

2. SEAMLESS NATURE, FRAGMENTED SCIENCE

One of the fundamental tenets of science is that nature is a seamless unity. Yet a survey of science as it is actually carried on shows that, in practice, science is divided into disciplines represented by departments with little communication between them. This division into physics, chemistry, biology and other branches, historically necessary though it was, has resulted in a fragmented science.

2.1. Physics

Physics is a set of general concepts that deal with space, time, force, motion, electricity, magnetism, sound, light and the fundamental structure of matter. These concepts are as important to living as to nonliving things, to “the trees and the stones and the fish in the tide.”8

Newton's mechanics is the flagship theory of classical physics. Classical mechanics allows us to isolate a problem from its environment. Newton's three laws are sufficient for many applications but fail in two realms: the fast-moving and the microscopic. The two revolutions that dealt with these realms are relativity and quantum mechanics.

In solving a mechanical problem, the direct application of Newton's laws is usually not the easiest way to proceed. Instead of analyzing forces, the concept of energy gives us a more convenient approach, because of the important law that energy is conserved. The concept of energy conservation extends far beyond mechanics, because energy takes many forms, including heat, electrical, magnetic, elastic and chemical—even mass, as relativity shows, is a form of energy.
Energy is not necessarily associated only with particles, but can be found in space, in the form of fields—electric, magnetic and gravitational. One branch of physics directly pertinent to voltage-sensitive ion channels is electrodynamics, which deals with electricity and magnetism. While mechanics describes a world of three independent dimensions, length, time and mass, nature provides another dimension: electric charge. This dimension adds some interesting phenomena: Resting charges produce electrostatic attractions and repulsions; when charges move, they also produce magnetic fields, perpendicular to the velocity or current. Electric and magnetic fields in space produce electromagnetic waves. Thus
electrodynamics includes optics, a subject that deals not only with light, but with the entire electromagnetic spectrum, from gamma rays to radio waves.

Mechanics and electrodynamics contain the basic laws governing the behavior of individual particles and their interactions with each other, but because matter consists of enormously large numbers of particles, new behavior emerges from their aggregations. Laws such as the gas laws describe matter in bulk. Studies of heat engines led to the concept of entropy, a measure of the disorder of a system, and to the formulation of the laws of thermodynamics. The kinetic theory of gases uses statistical techniques to sum the effects of many individual collisions into statements about pressure, volume and temperature.

An ion channel is composed of tens of thousands of atoms; the number of electrons runs into the millions. How can we expect to see any sort of ordered behavior from such a vast assemblage of particles? Since the channel does exhibit regular responses to electric field changes, the particles must clearly be acting together in some sort of collective behavior. The branch of physics and chemistry that has been developed to apply the laws of mechanics to large numbers of atoms and molecules is statistical mechanics. Statistical mechanics allows us to understand phase transitions, collective changes in molecular ordering such as the melting of ice. The loss, when heated, of magnetic polarization in a ferromagnet and of electric polarization in a ferroelectric material are other examples of phase transitions.

A closed system is one in which neither energy nor matter is allowed to enter or leave. Such a system is subject to the first law of thermodynamics, that the energy of the system remains constant, and the second law, that the entropy of the system cannot decrease. These laws cannot be simply applied to open systems, which exchange energy and matter with their environment. Living organisms are open systems.
The growth of a tree from a seed is an example of a system in which entropy decreases. However, if the tree together with the air, water and soil surrounding it and the light source illuminating it are isolated, the entropy of the entire closed system will increase. The decrease of the tree's entropy is more than compensated by the increase in entropy of the other components of the system.

The quantum revolution originated in some seemingly minor phenomena that could not be explained by classical physics. One of these was the distribution of wavelengths of light given off by heated bodies, such as the resistance wire in a toaster or light bulb. With increasing temperature a heated object glows red, then orange, then white. The frequency of the light most strongly emitted rises with temperature in a way that classical physics was unable to explain. Max Planck took the bold step of postulating that radiant energy could be given off only in discrete packets of a magnitude that was directly proportional to the frequency of the emitted light. With this unprecedented assumption he was able to obtain a perfect fit to the data for wavelength distribution of energy emitted from a black body.

Planck named these packets of energy quanta, and we call his proportionality constant h Planck's constant. The energy of a quantum is E = hν, where ν is the frequency of the radiant wave emitted. Planck's constant h has dimensions of action, energy times time, or momentum times displacement. Its value is 6.63 × 10⁻³⁴ joule second. We shall also use the constant ħ = h/2π = 1.054 × 10⁻³⁴ J s. In terms of
the angular frequency ω = 2πν, the energy of a quantum is written

E = ħω.    (2.1)
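As a quick numerical check of the quantum-energy relations E = hν and Eq. (2.1), here is the energy of a quantum of green light; the 5.5 × 10¹⁴ Hz frequency is an illustrative choice, not a value from the text:

```python
import math

h = 6.626e-34             # Planck's constant, J s
hbar = h / (2 * math.pi)  # "h-bar", about 1.054e-34 J s
nu = 5.5e14               # frequency of green light, Hz (illustrative)

E = h * nu                            # E = h * nu
omega = 2 * math.pi * nu              # angular frequency
assert math.isclose(E, hbar * omega)  # same energy via Eq. (2.1)

print(f"E = {E:.2e} J = {E / 1.602e-19:.2f} eV")   # E = 3.64e-19 J = 2.27 eV
```

A few times 10⁻¹⁹ joules per quantum is tiny on the everyday scale, which is why the granularity of light escaped notice until Planck.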
Planck's discovery led to the resolution of other riddles that classical physics was unable to solve, but also raised new questions. Einstein used Planck's quanta to explain the way electrons are emitted by metal surfaces illuminated by ultraviolet light, but his successful theory of the photoelectric effect required light to be composed of particles, photons. This seemed either to contradict numerous experiments showing the wave nature of light, or to require light to have a dual nature, both wave and particle. Since we cannot deny experimental data, we must accept the duality of light.

To account for the stability of the atom, Niels Bohr postulated that only a discrete set of orbits could be permitted and that electrons had to jump from orbit to orbit to emit or absorb quanta of light. The energy of a photon would be Planck's constant times the difference between the frequencies of the orbits. Louis de Broglie explained the discrete patterns of the spectral lines of atoms by assuming that the electron acts as a standing wave wrapped around the nucleus. The number of nodes in the standing wave became the principal quantum number n, giving the electronic structure an energy Eₙ = nhν, where ν is the orbital frequency. Erwin Schrödinger wrote the equation governing this wave. It became clear that light is not the only thing with a dual nature: Electrons, and indeed all objects, have both particle and wave properties. The predictions of quantum electrodynamics have been shown to be accurate to more than 12 significant figures.

Bohr came to the realization that quantum mechanics would have profound implications for biology, requiring us to renounce a completely deterministic account of life processes in favor of a probabilistic description. These concerns were later taken up by Schrödinger, Max Delbrück and others.9

2.2. Chemistry

Chemistry is the study of matter and its interactions.
It emerged from its alchemical beginnings when it began to study the qualitative and quantitative properties of pure substances, which could be separated into elements and compounds. Compounds can be broken down into their elements, which they contain in definite proportions by weight. These facts validated the Greek concept of atoms, which may be seen as bonding together to form molecules. One of the major goals of chemistry was to understand the nature of the bonds that connect atoms into molecules. Early pictures of this chemical bond showed atoms as balls with hooks that could engage one another. The pictures served to express in symbolic language the idea of the connection of two atoms to each other. The hook picture was discarded when the chemical bond was finally understood in the 1920s as a consequence of electromagnetism and quantum mechanics. The study of reactions among elements, with analyses of weights, volumes, temperatures and other quantities, revealed remarkable periodicities, the exploration of which led to the Periodic Table. A proper alignment of the columns of the Table
required the concept of atomic number, which turned out to be the number of protons in the atom's nucleus. The number of neutrons varies; isotopes have the same atomic number but a different atomic mass number, the number of protons and neutrons. The rows are numbered by periods, or principal quantum numbers, while the columns, numbered by Roman numerals, indicate the number of electrons in the outer, valence, shell.

The atomic weight is the average mass number of the isotopes present, weighted by their relative abundance on Earth. Molecular weight is the sum of the atomic weights in a molecule. Its unit is the dalton (Da); for macromolecules, the kilodalton is a more practical unit. For example, the molecular weight of the blood protein hemoglobin is 64.65 kDa and that of the sodium channel about 250 kDa.

In the study of ion channels we will focus on the atoms that form the structures of living organisms and the metal ions that course through the channels. The element carbon is the core component of living matter. The importance of carbon derives from its four valence electrons, which form a tetrahedral bond pattern. The boundaries between organic chemistry—the study of complex carbon-containing molecules—and inorganic chemistry are not sharp, and the study of ion channels requires attention to both. Other elements prevalent in living organisms are hydrogen, oxygen, nitrogen, phosphorus and sulfur. These form the compounds that form the bodies of organisms and underlie their metabolism, reproduction and other life activities. Still other elements, including calcium, potassium, sodium, chlorine, magnesium, iron, iodine, cobalt, zinc, molybdenum, copper, manganese and selenium, are involved in such vital protein activities as oxygen transport, information processing and enzymatic reactions.

Alkali metals, the elements in Group I, are important as ions: starting from the top of the Table, we have lithium, sodium, potassium, rubidium, cesium and francium.
Above them lies hydrogen, which has unique properties. Sodium, in the form of ions, is highly concentrated in seawater and blood. As we saw, it plays an important role in nerve conduction, mediated by sodium channels, which preferentially but not exclusively conduct sodium and lithium ions. Potassium is a larger, heavier atom than sodium, with chemical properties similar to those of sodium but sufficiently different that the two elements can be easily separated. Potassium channels play important physiological roles including the restoration of the resting state by the outward current. The alkaline earths of Group II include beryllium, magnesium, calcium and strontium. Calcium and magnesium both have important physiological functions, including their role in ion channels, particularly the ubiquitous calcium channels. The nonmetals in Group VII, the halogens fluorine, chlorine, bromine and iodine, form negative ions. Chlorine plays the role of counterion to cations such as sodium, potassium and calcium. Chloride channels, like potassium channels, help to restore the resting potential after a depolarization. The rare gases, helium, neon, etc., have no direct relevance to living organisms because of their inertness. However, their stable configuration of filled outer shells makes elements of Groups VI and VII electron acceptors, and elements of Groups I and II electron donors. The readiness of these elements to ionize, together with the electrostatic attraction of the ions, accounts for ionic bonding.
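The additivity of atomic weights described above is easy to illustrate numerically. The sketch below is not from this text: the atomic weights are standard tabulated values, and the helper function is ours, but it shows how a molecular weight in daltons is simply a weighted sum over the composition.

```python
# Approximate atomic weights in daltons (Da) for a few biologically
# important elements (standard tabulated values).
ATOMIC_WEIGHT = {
    "H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999,
    "Na": 22.990, "K": 39.098, "Cl": 35.453,
}

def molecular_weight(formula):
    """Sum atomic weights over a composition given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

# Water, H2O: two hydrogens and one oxygen.
print(molecular_weight({"H": 2, "O": 1}))           # about 18.02 Da
# Glucose, C6H12O6.
print(molecular_weight({"C": 6, "H": 12, "O": 6}))  # about 180.16 Da
```

The same bookkeeping, extended over thousands of residues, gives the 64.65 kDa quoted for hemoglobin and the roughly 250 kDa of the sodium channel.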
EXPLORING EXCITABILITY
13
Closed outer shells can be formed also by the sharing of electrons, which accounts for covalent bonds. The nature of these bonds was explained quantum mechanically by constructive interference of the electron waves of the valence electrons; a pair of electrons of opposite spin from the two bonded atoms is shared between them. As a result of the overlapping of orbitals, the electron density between the nuclei increases, leading to a net attraction. The carbon atom, for example, may share one, two or three pairs of electrons with another atom, thereby forming single, double or triple bonds, respectively. The forces between atoms to form molecules, and between molecules, are electrical in nature. A measure of the attraction an atom has for the electrons it shares with another atom is its electronegativity. In each row of the periodic table, electronegativity increases from left to right; the alkali metals are the most electropositive, followed by the alkaline earths. Within any column, electronegativity decreases downward, with increasing atomic number. Covalent bonds are formed between atoms of equal or nearly equal electronegativities. Unequal values of electronegativity lead to ionic bonds. Covalent bonds with unequally shared electrons are said to have a partial ionic character, producing a polar covalent bond. This type of bond is seen in the water molecule, H2O, in which the oxygen atom, with eight protons, attracts electrons more strongly than the lone proton of the hydrogens. This leaves the oxygen atom with a partial negative charge and the hydrogens with partial positive charges. As a consequence, water has a high electric dipole moment, which accounts for its many unusual properties. Water is a good solvent for ionic compounds, but nonpolar compounds, such as oils, do not dissolve in water; they are hydrophobic. The electrostatic repulsion between covalent bonds determines the structure of the molecule formed by them. 
Thus the methane molecule, CH4, in which each of the four valence electrons of the carbon atom pairs covalently with the single electron of a hydrogen atom, forms a tetrahedron; each C-H bond angle is 109.5°. A third important type of bond is the hydrogen bond, a relatively small attraction between a hydrogen atom and an electronegative atom such as oxygen or nitrogen. The hydrogen bond is weak but when present in large numbers can determine the structure of a molecule. Water, the compound essential to terrestrial life, is highly polar because of the strong electronegativity of the oxygen atom; hydrogen bonds of type O–H···O are prevalent in liquid water and ice. Of primary importance are the dipole moment of the water molecule and the hydrogen bonds linking adjacent molecules together. Strong electrolytes in water solution break up into ions, which become hydrated with a water shell. Living organisms have evolved in water, and organisms that live on land retain water within their cells and body fluids. While the properties of water are extremely important to the processes of living, they do not necessarily determine the properties of the membranes that separate aqueous compartments. Chemical reactions are reversible, although in practice the backward reaction may be too small to be detectable. In a redox reaction, one molecule is oxidized, losing electrons, while another is reduced, gaining them. The speeds of reactions are greatly increased by catalysts, substances that work by contact or by intermediate reactions. Practically all biological catalysts, enzymes, are proteins, although a class of nucleic acids, RNA, also has enzymatic properties.
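The 109.5° bond angle quoted for methane follows purely from geometry: the four hydrogens sit at alternating corners of a cube centered on the carbon, and the angle between any two C-H directions is arccos(-1/3). A quick check (an illustrative calculation, not from the text):

```python
import math

# Two alternating corners of a unit cube centered on the origin,
# standing in for two C-H bond directions in methane.
v1 = (1, 1, 1)
v2 = (1, -1, -1)

dot = sum(a * b for a, b in zip(v1, v2))        # -1
norm_sq = sum(a * a for a in v1)                # 3; both vectors have length sqrt(3)
angle = math.degrees(math.acos(dot / norm_sq))  # arccos(-1/3)
print(round(angle, 1))  # 109.5
```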
The large and complex molecules of living organisms are polymers built from subunits bearing functional groups, such as methyl (-CH3), amino (-NH2), carboxyl (-COOH), sulfhydryl (-SH) and phosphate (-PO4). The polymers may have linear or branched structures. Four major types of biochemical compounds have evolved: Carbohydrates, sugars and their polymers; lipids, including oils, fats and waxes; nucleic acids, such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA) and adenosine triphosphate (ATP); and proteins, which serve as structural components, enzymes, or membrane proteins—such as ion channels. 2.3. Biology Biology studies the way in which living things develop, grow, adapt, reproduce, change their environment, and die. Life is a systemically maintained nonequilibrium state. The living system can maintain itself far from equilibrium for only a limited time, returning to equilibrium in death. The ordered chemical pathways of life, its metabolism, can only exist at temperatures low enough that their functional order is not destroyed, but not so low that fluids crystallize. Life on Earth is adapted to conditions on our planet, which themselves are modulated by living processes. The maintenance of an ordered system in a far-from-equilibrium state requires a steady input of energy into the system, which therefore must be an open system. Because the second law of thermodynamics does not apply directly to open systems, the growth of order in organisms undergoing respiration in no way contradicts this law. Living systems are composed of cells, which maintain their essential components within a membrane that allows for a controlled exchange of matter and information with the environment. The functions of cells are based on the chemistry and physics of structures composed of nucleic acids, proteins, lipids and carbohydrates. 
Living organisms contain a genetic code chemically expressed in deoxyribonucleic acid (DNA), by which they reproduce, allowing the species to survive the death of the individual. Single-celled organisms without formed nuclei, prokaryotes, reproduce by cell division. The bacteria, the only living organisms in existence for three billion of the 3.8 billion years of life on Earth, established many of the life-favoring conditions of the Earth, including the composition of the atmosphere: photosynthetic bacteria generated the oxygen of the air. Mitochondria and chloroplasts, descendants of bacteria, live within the cells of eukaryotic organisms, which have complex cells with defined nuclei, and maintain a symbiotic relationship with them. The prokaryotic bacteria and the eukaryotic protists are single-celled; the other eukaryotes—fungi, plants and animals—are multicellular. To maintain homeostasis, all organisms, prokaryotic and eukaryotic, must exchange not only matter and energy but also information with their environment. The single cell of a unicellular organism contains all the structures necessary for independent existence: DNA; the complex machinery to divide the cell and synthesize biomolecules; organelles containing enzymes for energy conversion and other functions; receptors and effectors to interact with the environment, and many other systems to maintain a stable existence in a changing environment. Because the
surface-to-volume ratio shrinks as size increases, and enough surface area is required for the efficient exchange of nutrients and waste products with the environment, cells are necessarily small. This limitation was overcome in multicellular organisms, but at a price: New mechanisms were required for communication within the body in order to maintain homeostasis. In plants, a system of hormones suffices. In animals, the demands of mobility require, in addition to the endocrine system, a system of point-to-point communication—the nervous system. The study of structure, anatomy, is a prerequisite to the search for explanation of the functions of the parts of the body, physiology. Clearly, physiology also requires a knowledge of physics—indeed, the two names are derived from the same root. General physiology seeks to elucidate the basic principles underlying the behavior of many biological systems. One of these pervasive phenomena is cellular excitability, the ability of cells to respond to stimuli. The detailed examination of conduction in nerve cells and associated cells is the task of neurophysiology. Neurobiology traces its history far back to the ancient Egyptians and Incas, who carried out brain operations on humans, as shown by archeological studies of skulls and documented in Egyptian papyri; knowledge of the electrical nature of nerve and muscle phenomena is much more recent. Charles Darwin, on a trip around the world as naturalist aboard the Beagle, saw finchlike birds that differed from finches in their beaks, claws and lifestyle; tortoises in the Galapagos; octopi changing color. A vast array of species—why so many? Where do they all come from? They appear to be related.10 Organisms have many offspring, but many of these die before reproducing. Which ones survive? That is mostly decided by accident, but the offspring are not entirely the same. 
They possess a variety of traits, and the ones with favorable traits—faster, stronger, more adaptable or more resilient—have a better chance to survive by escaping starvation, predation, disease or other dangers. And, surviving, to reproduce and to pass the characteristics of their heredity, the information we now call their genes, to their offspring. The variety of living species as well as the traces of bygone forms, the fossils, give us a map to past development. That the continuing development of new species has indeed been possible is due to the fact that the genetic code of DNA is, while remarkably stable, subject nevertheless to occasional chance errors induced by chemical alteration or radiation. These mutations lead to offspring that are in most cases less suited to survive and so frequently die without progeny. However, in occasional, rare, cases, a mutation favors survival. Organisms with such favorable attributes have a higher probability of surviving and passing these attributes—now known to be due to their mutated DNA—to their offspring, increasing their representation in the gene pool. Competition and isolation, along with further favorable mutations, contribute to the development of new species. If a population with a new mutation becomes isolated from its parent population, an entirely new species may arise, no longer able to breed with the parent species. That, Darwin concluded, must be the answer to the question of the variability of species. Remarkably, another man had this idea at the same time; the theory of
evolution by organic adaptation is the product of Darwin and Alfred Russel Wallace jointly. A new science was born, but many details remained to be filled in by others. To focus on one detail, the production of new species is not, as Darwin thought, a steady process. The fossil record shows that it proceeds in spurts more accurately described as a series of punctuated equilibria, as discovered by Stephen J. Gould and Niles Eldredge.11 The ability of living organisms to give rise to new forms, such as feathers from scales, is due to chance alterations of the DNA molecule, coupled with natural selection. These mutations arise from attacks on the DNA by free radicals, causing errors to creep into the genetic code. Most mutations reduce the individual's ability to survive; some are neutral. The likelihood of favorable changes, such as those that converted scales into feathers, is minuscule in a single generation, but accumulates with time, because they provide a higher probability of survival to the individual lucky enough to inherit them. The advantages that an organism derives from a mutation can transfer: Feathers served as heat insulators before enabling flight. This benefit is enjoyed by subsequent heirs of the new gene. Fossil studies allow the process of evolution to be traced back to the first living organisms, the bacteria. In these one-celled organisms, 1–2 μm in diameter, we see the first examples of metabolism, photosynthesis, locomotion and reaction to external stimuli. The biochemical and biophysical capabilities of present-day protists, fungi, animals and plants have their origins in bacterial evolution. An evolutionary “tree” can be drawn that connects living and fossil forms. In it, branches not only diverge (as when bilaterally symmetrical animals split away from those with radial symmetry), but they also merge. An interesting example of this is the formation of protists from bacteria. 
The protists and other eukaryotes possess organelles that are modified bacteria, with their own DNA. Strictly speaking, the principles of evolution apply only to the entire organism, since parts of organisms, such as organs and cells, must die when the organism dies. Nevertheless, it is possible to speak of the evolution of proteins. Evolutionary trees have been established for cytochrome c and many other proteins. The oxygen-carrying molecules hemoglobin and myoglobin evolved into their many present forms from the divergence of a single ancestral globin gene.12 In the same way evolutionary trees have been established for voltage-sensitive ion channels, as we will see in Chapter 13. The way in which offspring inherit specific characteristics from each of their parents was a riddle to Gregor Mendel, the founder of genetics. The properties of hybridized plants in crossing experiments showed that traits were segregated in alternate forms, now called alleles. Mendel’s quantitative investigations on plants showed the influence of two “factors,” today called genes, one from each parent. The genes combine randomly and (in most cases, as we now know) independently of other genes. Often the effect of one gene on the organism will be masked by another, dominant, gene. If the dominant allele is present, it will be expressed in the organism regardless of whether the DNA contains one or two copies of it. The recessive gene will be expressed in the phenotype—the structure, physiology and behavior of the organism—only if it is present in both copies of the gene. Mendel's rules provided statistical laws by which the probabilities of the outcomes of hybrid crossings could be calculated.
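Mendel's statistical laws can be made concrete by enumerating a single monohybrid cross. The sketch below (the allele labels are our own illustrative choice, not from the text) crosses two heterozygous parents and recovers the classical 3:1 ratio of dominant to recessive phenotypes:

```python
from collections import Counter
from itertools import product

# Monohybrid cross of two heterozygous parents, Aa x Aa ("A" dominant).
# By segregation, each gamete carries one allele of the pair with equal
# probability.
gametes1 = ["A", "a"]
gametes2 = ["A", "a"]

genotypes = Counter(
    "".join(sorted(g1 + g2)) for g1, g2 in product(gametes1, gametes2)
)
print(genotypes["AA"], genotypes["Aa"], genotypes["aa"])  # 1 2 1

# Phenotype: the dominant allele "A" masks "a", so only "aa" shows the
# recessive trait.
dominant = genotypes["AA"] + genotypes["Aa"]
print(dominant, ":", genotypes["aa"])  # 3 : 1
```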
The genes are present in the DNA of the chromosomes. During meiosis, the production of haploid gametes, the two alleles of the diploid parent cell segregate from each other; each gamete produced has the same probability of carrying one or the other member of the pair of alleles. In experiments in which he crossed plants that were hybridized for two different characteristics, such as seed shape and seed color, Mendel found that the results could be interpreted by his law of independent assortment. Since the genes that led to this result were on different chromosomes, we can state that genes located on different chromosomes assort independently of each other. Genes on the same chromosome, however, do not assort independently, because of the phenomenon of crossing over, whereby the parent chromosomes exchange segments with each other. Since nearby genes on a chromosome will tend to cross over together while genes farther apart will have a greater probability of becoming separated, data from cross-over experiments can be used to construct a genetic map. Such a map shows the location of the identified genes on the linear chromosome. While reproduction may be a simple cell division, it frequently requires an exchange of the genetic material, DNA, between organisms, as in sexual reproduction, the fusion of two gametes. The exchange of DNA requires complex cooperative behavior, which presupposes the exchange of information between the individuals. Social behavior has evolved as an important feature of animals. In humans, this has led to the development of formal languages, science, trade and government. The way the simple DNA molecule, with only four different nucleotide components, is able to code for polypeptide chains composed of 20 amino acids was resolved by the realization that a group of three nucleotides was required to code for one amino acid residue. This is the triplet code. 
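The arithmetic behind the triplet code can be verified directly: with four nucleotides, doublets give only 4² = 16 combinations, too few for 20 amino acids, while triplets give 4³ = 64. A brief check (illustrative only):

```python
from itertools import product

bases = "ACGU"  # the four RNA nucleotides

# A doublet code would give 4**2 = 16 combinations -- too few for the
# 20 amino acids -- while a triplet code gives 4**3 = 64, more than
# enough, with redundancy (several codons per amino acid).
doublets = ["".join(p) for p in product(bases, repeat=2)]
codons = ["".join(p) for p in product(bases, repeat=3)]
print(len(doublets), len(codons))  # 16 64
```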
X-ray diffraction studies by Rosalind Franklin in the laboratory of Maurice Wilkins, and modeling by Francis Crick and James Watson, yielded the structure of DNA: an antiparallel double helix with complementary bases paired and connected by hydrogen bonds. Separation of the two strands allows enzymes to replicate the DNA, an essential precursor to cell division. A more indirect approach is required to read the code and convert it into polypeptide chains. This process requires an intermediate nucleic acid, RNA, which serves in messenger and transfer roles and in the synthesis of the polypeptide. The code is transferred to messenger RNA by a process called transcription. The triplets, transcribed to complementary RNA, are called codons. The genetic code is a “dictionary” between the 64 codons and the 20 amino acids. The cellular process of translation builds the polypeptide chain according to the code in the messenger RNA. Genetic methods are highly effective ways of studying biological systems. A study can focus on a particular phenotype, such as a cell shape or protein structure. It is remarkable that many of the genes and proteins in neurons have already been identified in such simple organisms as the nematode Caenorhabditis elegans and the fruitfly Drosophila melanogaster. The Human Genome Project is sketching the entire gene map of humans. Ecology tells us that life is a vast multidimensional hierarchical organization, from molecule to supramolecular structure such as a membrane, to organelle, cell, tissue, organ, organ system and organism. Organisms form populations, which combine
with populations of other species to form communities, which in turn interact with their environment to establish an ecosystem, a part of the Earth's biosphere. 3. THE INTERDISCIPLINARY CHALLENGE The above review of the elements of these sciences suggests that physicists, chemists and biologists traverse orbits in fairly well separated universes. A vast gulf still remains between physics and physiology. 3.1. Worlds apart A need for repeatability and precision, and a practical desire to solve the simplest problems first, caused physics to separate from its beginnings, in which time was measured by one’s pulse, temperature calibrated by human body temperature and electricity measured by the strength of shocks. While biology was exploring and describing the vast complexity of the living world, physics discovered the power of mathematics as a language to express its reasoning and laws. At least on the experimental side, biology is close to physics. It has long used physical instruments, beginning with the microscope and the centrifuge, to delve into the world of the cell. Electronic, optical, acoustical and nuclear instruments are routine sights in biological laboratories, along with isotopes, ultracold temperatures and other experimental techniques developed in physical and chemical laboratories. In the realm of theory, on the other hand, biology and physics remain worlds apart. The division of science into discrete compartments, once a convenient simplification, is now a barrier to progress. 3.2. Complex systems In recent years science has taken upon itself the task of dealing with complex systems. The laws of physics are simple, so why is the world, particularly the biological world, so complicated? In clouds, sand dunes, ocean waves and biological molecules we see a tendency of nature to form structures. But the outcomes of physical processes are also sensitively dependent on initial conditions, a dependence called chaos. 
Organization and chaos coexist in complex systems, which are characterized by forces between particles, conservation of quantities such as particle number and angular momentum, and symmetries. Problems become simpler when nature provides us with well separated scales of space, time or energy. In biological systems, however, such convenient scale separations are lacking; organisms are characterized by order at many levels. The computer modeling of complex systems in many cases develops into a pattern in which the behavior is dominated by abrupt jumps. We see such intermittency in the sudden onset of stormy weather, ice ages and plagues. Such emergent properties are also seen in phase transitions such as melting, boiling and sublimation.13 The competition between order and complexity may also govern biological systems, such
as the opening and closing of ion channels.14 Chapter 15 deals with these concepts of critical phenomena as they bear upon the problems of cellular and molecular excitability. These new concepts are helping to bridge the culture gap between the physical and biological sciences. 3.3. Interdisciplinary sciences bridge the gap The insights and perspectives of either one of the classical disciplines alone will not be enough to solve the problems of molecular excitability. The emergence of interdisciplinary sciences—biophysics, biological physics, biochemistry and biophysical chemistry—has been helpful in bridging the chasm between physics and biology. New journals and new departments have appeared, providing intermediate territories. But problems remain. Different branches of science speak languages with different vocabularies and grammars and so view the world differently. Jargons separate the sciences. Scientists know that they must communicate their ideas precisely, so they must invent new words and define the meaning of existing words more precisely. In bringing together two fields of endeavor with overlapping vocabularies, we must watch for ambiguities in meaning. The process of combining the perspectives of physics and chemistry, or physics and biology, is sometimes referred to as “reduction,” and the term “reductionism” is often used in a derogatory sense. This unfortunate association hides an important fact: When it is discovered that certain chemical phenomena can be explained by quantum and statistical physics, the chemical content is not lost. There is a tightening up as pieces of the puzzle fit together, but the chemical and the physical insights are still there (to the extent that they were correct in the first place) when they are merged into a single picture. We can expect similar simplifications from the joining of physical and biological concepts. 
We can portray biophysics as an attempt to bridge two ways of thinking, two languages, two literatures and two styles of research. The reconciliation of these differences can be very productive, as the history of interdisciplinary science demonstrates. However, the attempt to bridge two disciplines as different as physics and biology requires us to stretch between widely separated conceptual bases. NOTES AND REFERENCES 1. Bertil Hille, Ion Channels of Excitable Membranes, Third Edition, Sinauer Associates, Sunderland, MA, 2002. 2. R. Dawkins, The Blind Watchmaker, Penguin, New York, 1988. 3. S. Hagiwara, Membrane Potential-Dependent Ion Channels in Cell Membrane: Phylogenetic and Developmental Approaches, Raven, New York, 1983. Reprinted by permission from Wolters Kluwer Health. 4. Isaac Newton, Opticks, Dover Publications, New York, 1952. 5. C. U. M. Smith, Elements of Molecular Neurobiology, Second Edition, John Wiley, Chichester, 1996, 276; A. L. Hodgkin and A. F. Huxley, Nature 144:710-711, 1939. 6. Smith, 130; B. Safir, Scientific American. 234(4):29-37, 1975.
7. The alternative term “voltage-dependent ion channels,” although commonly used, is not precise. It is not the channels but their conformation that is dependent on voltage. These molecules are sensitive to, but not dependent on, the potential difference across the membrane. 8. From Poem in October by Dylan Thomas. 9. Erwin Schrödinger, What is Life?, Cambridge University, Cambridge, 1955; Max Delbrück, Mind from Matter? An Essay on Evolutionary Epistemology, Blackwell Scientific, Palo Alto, 1986. 10. Charles Darwin, The Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life, Avenel Books, New York, 1979. 11. Stephen J. Gould, Wonderful Life, Norton, New York, 1989. 12. Richard E. Dickerson and Irving Geis, Hemoglobin: Structure, Evolution, and Pathology, Benjamin/Cummings, Menlo Park, California, 1983. 13. Nigel Goldenfeld and Leo P. Kadanoff, Science 284:87-89, 1999; J. A. Krumhansl, in Nonlinear Excitations in Biomolecules, edited by M. Peyrard, Springer, Berlin, and Les Editions de Physique, Les Ulis, 1995, 1-9. 14. S. A. Kauffman, The Origin of Order, Oxford University, Oxford, 1993; ___, At Home in the Universe, Oxford University, 1995.
CHAPTER 2
INFORMATION IN THE LIVING BODY
Information streams from our surroundings, enters our sense organs and converges on the brain. There it is processed, with information previously stored, to issue an outgoing stream of commands to our muscles and glands. In touch with our surroundings, we use information to carry out our life activities. To promote their survival, organisms need information: from which direction the sun is shining, where food and water can be found, how to avoid predators. The negative entropy of the organism's metabolism can be applied to make that information available. The problem, then, is to convert this information into survival-enhancing responses to environmental stimuli. This requires information processing, which occurs at all the hierarchical levels of the organism. The detection of information from the outside of a cell is carried out by receptor molecules embedded in the bounding plasma membranes. The response to the information received is carried out by effectors that either react mechanically or emit light, electric fields or chemical substances into the environment. Mechanical reactions may propel the organism or a part of it through space or emit an acoustical wave. In bacteria, mechanical reactions include motions of flagella or cilia. Information processing also takes place at time scales much longer than the lifetime of an individual organism. The species itself adapts, by organic evolution, to long-term changes in the environment. Because of the relationships between species, cooperative as well as competitive and predational, changes in one species often lead to changes in other species. New species arise when habitats become separated. Species become extinct from loss of habitat, catastrophes or other causes. In this chapter we will examine some of the systems of biological information processing and their basis in informational macromolecules. 1. 
HOW BACTERIA SWIM TOWARD A FOOD SOURCE The manner in which organisms extract information from their environment and process it to promote their survival is illustrated by chemosensitivity in bacteria. Motile bacteria are sensitive to chemical substances in their environment, swimming up gradients of attractants, and down gradients of repellents. Although bacteria have existed for more than 2.5 billion years, this process, bacterial chemotaxis, is far from simple.
The molecular biology of bacterial transduction has been studied in the rod-shaped cell Escherichia coli, which resides in our guts.1 These bacteria are propelled by the rotation of 8-10 flagella, each filament of which consists of a single array of subunits of a protein called flagellin. The tubular array, 0.02 μm in diameter by 5-20 μm long, is twisted into a helix. Although rotary motion is uncommon as a physiological adaptation, the flagellum is rotated at about 100 rev/sec by a cellular mechanism energized by a transmembrane H+ gradient. When the flagella all rotate in a counterclockwise direction, they form a single bundle that propels the bacterium forward smoothly. Clockwise rotation, on the other hand, pulls the flagella outward and the bacterium tumbles irregularly; see Figure 2.1.
Figure 2.1. Rotary motion in bacterial flagella. A. Flagellar motion on counterclockwise rotation. B. Flagellar motion on clockwise rotation. From C. U. M. Smith, 1996. Copyright John Wiley & Sons Limited. Reproduced with permission.
The motion of the cell resulting from alternation of the flagellar rotation is a three-dimensional random walk consisting of runs of smooth swimming interspersed with periods of chaotic tumbling. When the cell is placed into a gradient of a chemical attractant, its swimming runs are found to be longer when directed toward the source than in any other direction. Thus the bacterium homes in on a source of attraction and, conversely, moves away from a source of repulsion. This system features sensory adaptation: a prolonged immersion in an attractant or repellent leads to desensitization. The attractant or repellent molecules are sensed by transmembrane molecules called receptor–transducer (R-T) proteins. Depending on the particular type of chemical, the interaction is either direct, by binding to the R-T protein, or indirect, through separate receptor molecules. The genes for the R-T proteins have been isolated and their DNA codes determined. From this sequence, their membrane-spanning protein segments are identified. Each R-T protein responds to a specific set of molecules. One of these, Tsr, is sensitive to the amino acids serine, an attractant, and leucine, a repellent. The site that binds the attractant or repellent molecule is in a part of the sequence that projects out of the membrane; the domains to which signaling groups attach lie on the inner side of the membrane, in the cytoplasm. The arrival of a repellent or attractant molecule generates a
chemical signal that modulates the direction of rotation of the flagellar motor, as in Figure 2.2.
Figure 2.2. Interaction of attractant molecules with receptor and receptor-transducer molecules in a bacterial membrane. From C. U. M. Smith, 1996. Copyright John Wiley & Sons Limited. Reproduced with permission.
The signaling pathway that connects the flagellum to the R-T protein has been traced by genetic analysis to involve four proteins, called CheA, CheW, CheY and CheZ. When the Tsr R-T protein accepts the repellent leucine, it undergoes a conformational transition, which travels to the signaling domain, where CheW and CheA are activated. CheA accepts a phosphate group from an energy carrier, adenosine triphosphate (see Section 7.4), and passes it on to CheY, which diffuses through the cytoplasm to the flagellar motor. There the phosphorylated CheY induces clockwise rotation with subsequent cell tumbling. The role of CheZ is to desensitize the system by dephosphorylating CheY, terminating its influence.2 The binding of the attractant serine induces a conformational change in Tsr that inactivates CheA and CheW. The flagellar motor then resumes its counterclockwise rotation and the cell swims forward. The ability of biological systems to process information was established during the evolution of the bacteria, in the first billion years after the formation of the Earth.
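The run-and-tumble strategy described above can be caricatured with a one-dimensional toy model in which the tumbling probability drops when the cell has just moved up the attractant gradient. The sketch below is a deliberately crude illustration of the resulting biased random walk, not of the actual Che signaling kinetics; all parameter values are invented.

```python
import random

random.seed(1)  # reproducible run

# 1-D toy model: attractant concentration increases with x, so a step
# in the +x direction is a run "up the gradient".
x, direction = 0.0, 1
for _ in range(10_000):
    x += 0.01 * direction  # a short segment of smooth swimming
    # Sensory bias (invented numbers): tumble rarely after a run up
    # the gradient, often after a run down it.
    p_tumble = 0.1 if direction > 0 else 0.5
    if random.random() < p_tumble:
        direction = random.choice([-1, 1])  # tumbling randomizes direction

# Although each tumble picks a new direction at random, the biased
# tumbling rate produces a net drift toward the attractant source.
print(x > 0)  # True
```

Longer runs up the gradient and shorter runs down it are all that is needed; the cell never has to know which way the source lies.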
The process of bacterial chemotaxis is a complete sensory–motor system in a single cell. In multicellular organisms, sensory and motor systems become separated, and in animals a central nervous system intervenes between them. It is interesting to note, however, that evolutionary traces of bacterial systems can be detected in mammalian macromolecules.

2. INFORMATION AND ENTROPY

Bacterial chemotaxis is an example of the importance of information processing in the everyday (and every-instant) activities of living organisms. But what is information? If a system can occupy a discrete number N of states, we may not know which of these states it will occupy at a given time. The probability of a particular state or choice, on the assumption that they are equally likely, is 1/N. For a tossed coin, N = 2, so the probability of a "tail" is ½; for a rolled die, N = 6, so the probability of a particular face is 1/6. The probability of the random appearance of a particular outcome is the reciprocal of the number of choices. We can use the simplest case, N = 2, as our canonical example. If we know that the coin has landed heads up, we have one bit of information. When n coins are tossed, the number of possible outcomes is P = 2^n. The information gained by learning the outcome is n bits, the number of binary digits needed to specify it. In general, information in bits is defined as the logarithm to base 2 of the number of possibilities:3

I = log2 P        (2.1)

The probability that today is your birthday is, to someone who doesn't know it, 1/365, since, neglecting leap years, it is one of 365 (= 101101101 in binary) possibilities. If you tell me your birthday, you are giving me log2 365 ≈ 8.51 bits of information. By the properties of the logarithm function, information relating to independent events is additive. This basic unit, the bit, measures information of all kinds, from the information on this page to the information transmitted by a bee in its dance.
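Equation (2.1) is easy to verify numerically. The short Python sketch below is illustrative; the function name is invented for the example.

```python
import math

def bits(n_choices):
    """Information, in bits, gained by learning which of n_choices
    equally likely possibilities occurred: log2 of the number of choices
    (Eq. 2.1 with P = n_choices)."""
    return math.log2(n_choices)

# One coin toss carries 1 bit, one die roll about 2.58 bits,
# and a birthday (ignoring leap years) about 8.51 bits.
print(bits(2))
print(round(bits(6), 2))
print(round(bits(365), 2))
```

Note also the additivity for independent events: bits(2) + bits(6) equals bits(12), the information in a coin toss plus a die roll.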
Entropy can be roughly described as the disorder to which closed systems tend; for a more precise definition see Chapter 5. Information is negative entropy. Every living organism is an island of information in a sea of growing entropy. This information is processed within the organism at the organ, tissue, cell and molecular levels. It is moved from organism to organism by means ranging from the transfer of plasmids to the publication of journal articles. Nonliving objects undergoing irreversible processes, such as marble statues in acid rain, gain entropy and lose their ordered forms. Living organisms are different; they are open systems. They assimilate energy from their environment—from photons or from food—and lose entropy in discarding waste matter and heat. During their lifetimes they grow and become highly ordered. Information transfer from the environment requires signal transduction, the conversion of the energy carried by an incoming signal (sound, light, odor molecules, etc.) to a form useful to the organism. This information is generally processed in ways that help the organism adapt by acting upon its environment.
In single-celled organisms, this information is converted fairly directly into a response, as we saw in the case of bacterial chemotaxis. In multicellular organisms, however, a great deal of coding and processing is necessary. While in plants this signaling is limited to the transport of hormones (with some exceptions, such as the sensitive plant Mimosa pudica and the Venus flytrap), in animals it also involves the propagation of nerve impulses and the release of neurotransmitters. A signal received as, for example, an image on the retina is converted by receptor molecules into nerve impulses that travel to specific regions of the brain. The processed information is then sent via efferent neurons to the appropriate muscular or glandular effectors. This may result in a muscular movement or secretion, requiring a transduction of the train of impulses into a contraction or biochemical synthesis. These processes thus form a sequential system, from sensory transduction to the coding and inward conduction of impulses, to processing and memory in ganglion or brain, and then outward conduction to muscle or gland.
3. INFORMATION TRANSFER AT ORGAN LEVEL

While the laws that govern natural processes are assumed to be the same whether they refer to stars, horses or atomic nuclei, their application is quite different. These are all ordered structures, but their levels of complexity and their environments are quite different. Living things exhibit properties not generally seen in nonliving things, such as metabolism, genetic reproduction and organic evolution. Nonliving matter, such as a crystal, can often be described by two length scales, macroscopic (crystal dimensions) and microscopic (size of the molecular unit cell). Living matter, on the other hand, is ordered at many hierarchical levels, each with its own length scale:

ORGANISM — ORGAN — CELL — MEMBRANE — MOLECULE
e.g., animal — brain — neuron — axolemma — ion channel

Levels higher than organismic and lower than molecular, as well as the tissue level between organ and cell, have been omitted for simplicity.

The nervous system is closely related to all the other systems; it, along with the endocrine system, transfers information throughout the body. The central nervous system receives information from every organ and sends information to every organ. The command and control structure embodied in this communication network is responsible for unifying the body into a coordinated system.

3.1. Sensory organs

Information comes to us through our external senses: vision, hearing, the chemical senses of taste and smell and the skin senses of heat, cold and touch. There are also
internal senses that tell us the orientation of the head relative to the gravitational field, and the locations, orientations and stresses of body parts.

The light-receiving part of the eye is the retina; the rest of the structure essentially serves to position and focus the image on it. The light-absorbing cells in the retina, the photoreceptors, are the highly sensitive rods and the color-receiving cones. In these cells, information from incident photons stimulates the formation of nerve impulses. The retina is in a sense part of the brain, to which it is attached by the optic nerve; neurons in the retina begin the processing of the information that becomes the representation of the image in the brain proper.

The organs of audition and balance are embedded in the solid skull. Sound, received at the eardrum and impedance-matched by the bones of the middle ear from air to water acoustics, enters a spiral structure, the cochlea. A delicate resonating membrane, the basilar membrane, fringed by hair cells, divides the aqueous cochlea into two pathways, traversed by the sound so that incoming and returning waves interfere to form standing waves. As different locations along the basilar membrane resonate at different frequencies, the cilia of the hair cells are agitated and the cells convert acoustic information into nerve impulses, which travel to the brain via the acoustic nerve. Balance information is recorded by the vestibular organ of the ear, consisting of three semicircular canals lined with hair cells that detect acceleration, and two sacs, each containing calcium carbonate crystals that respond to the gravitational field. The hair cells are examples of mechanoreceptors.

Smell and taste are closely related senses, which depend on the recognition of molecules by sensory systems based on chemoreceptors.
The vomeronasal organ, present in almost all terrestrial vertebrates, sends neural information to brain structures controlling reproductive behavior.4 Sensations of heat, cold, sharp or blunt, heavy or light contact are subserved by special sensory cells in the skin. Additional senses include the ability to detect electric fields (by electroreceptors in the lateral organs of fish) and infrared radiation (by photoreceptors in the pit organs of certain snakes).

In addition to these external senses, internal senses monitor the body. Muscle tension is signaled by sensory neurons whose terminals are wrapped around muscle fibers. They help control muscular responses by a feedback system, exemplified by the knee-jerk reflex. Reflexes are mediated by synapses in the spinal cord. The reflex arc consists of an afferent neuron that carries sensory information to the spinal cord and a motor neuron with which it synapses, directly or by way of an interneuron. The efferent motor neuron processes both excitatory and inhibitory stimuli to form responses that promote the contraction of one muscle while deterring that of another. Pain receptors are nerve endings located throughout the body that respond to cellular disturbance or injury.

3.2. Effectors: Muscles, glands, electroplax

Muscle cells, myocytes, are contractile fibers of three types: Skeletal muscles carry out voluntary movements; smooth muscles line the arteries, uterus and digestive, urinary and other tracts; cardiac cells form the highly coordinated tissues of the atria and
ventricles of the heart. All myocytes are under the control of nerve fibers that synapse with them. Muscle cells are also excitable cells in their own right, conducting impulses along their length. This excitation is coupled to the contractile mechanism by specialized membranous structures.

Glands are organs that contain secretory cells. Exocrine glands, such as salivary, mammary and sweat glands, secrete fluids through ducts. Endocrine glands controlled by the nervous system, such as the adrenal medulla, posterior pituitary and pineal gland, secrete their hormones into the bloodstream, which carries them to target cells elsewhere in the body. Some organs, such as the pancreas, ovary and testes, have both endocrine and exocrine functions.

3.3. Using the brain

The brain is neither a hydraulic device (as the philosopher René Descartes speculated) nor a computer (as some contemporary writers assert), even though moving fluids (blood and cerebrospinal fluid) and information-processing capabilities are both important aspects of it. The brain is an organ of the body, playing an essential role in the maintenance and survival of the organism. Like other organs, it has a metabolism, requiring a steady supply of nutrients and oxygen as well as the removal of wastes and excess heat. Unlike a computer, the brain is not assembled from parts; it grows by cell division. The brain–computer analogy also breaks down on a deeper level: Unlike a computer, the brain programs itself, by conditioning and learning. The brain is not a fixed structure. Although a computer may be modified by a replacement of components, its configuration—the hardware—normally remains constant, while the software and the contents of its memory are variable. Not so for the brain, which is fully functional even in childhood, a period during which it grows rapidly. The brain's structure is changeable, within limits. If a part is injured, another part often takes over its functions.
Learning, a requirement for mental growth, proceeds incrementally, new knowledge being integrated into previous knowledge. When the new knowledge conflicts with earlier knowledge, emotions may be aroused. Despite the fact that brains and digital computers are quite different, they are similar in one respect: They contain components that can exist in two or more discrete states, between which they switch in response to an input. In neuronal membranes these are the ion channels.

The brain, along with the spinal cord, is part of the body's rapid information-processing system, the central nervous system. Its inputs are our senses and its outputs are mediated by our muscles and glands. The information acquired from the external environment is subject to further processing in the brain. Learned experiences allow us to recognize seen objects, heard voices, the touch of a familiar hand, the movement of an elevator in which we are standing. This interpreted sensation is perception. Our perception can be fooled; the fooling of our visual perception by optical illusions is as common as watching a movie. An image may be ambiguous, capable of more than one interpretation. A black-and-white figure may be seen as a vase or two faces. In processing such an ambiguous image, the mind can select one interpretation
or the other, but not both simultaneously—even when we know that both exist. We can be misled not only by optical illusions, but also by auditory ones and those of other modalities. Since understanding is analogous to perception in its dependence on brain processes, we may expect that cognition itself can be fooled.5

3.4. Analyzing the brain

Let us now look at the brain as the object of our analyses. A human brain, prepared for dissection, has roughly the size and shape of a large cauliflower. In its natural state in the living animal, however, the brain is a soft, throbbing mass enclosed in the skull and covered by its glistening protective meninges. Of course, it is not the human brain that is mainly used in neurophysiological research. If "curiosity killed the cat," it is often human, not feline, curiosity that did it. We owe a great debt of gratitude to the animals—mammals, birds, amphibians, reptiles, fish and others—that involuntarily sacrifice their lives for scientific research.

The brain receives the lion's share of the body's blood supply. A complex distribution of arteries and veins provides the blood on which the brain depends. However, the blood is filtered by the structural complex known as the blood–brain barrier, so that its cells, except in the case of trauma, come into contact only with the cerebrospinal fluid. Pools of cerebrospinal fluid occupy the central canal of the spinal cord and the ventricles of the brain. The distribution of blood supply shifts toward brain areas that are currently active—a property used in imaging techniques for locating a particular brain function and for diagnosis.

The study of the human brain in all its complexity would be impossible without some system. We have to pick up the thread somewhere, and traditionally this is done by starting with an embryo and following its development, or ontogeny.
This not only gives us a systematic scheme to understand brain structure but, remarkably, also helps us understand how the brain developed in the evolution of the species, or phylogeny. The parallelism between phylogenetic evolution and ontogenetic development, albeit imperfect, allows us to compare the human brain to the brains of other animals, living or extinct: The development of the embryonic brain follows many of the stages that the brain followed in its billion-year evolution.

Brain and skin derive from the same embryonic layer, the outer cell layer called ectoderm. A strip of ectoderm curls into a gutter, which closes to form the neural tube. Starting as three bumps on one end of the hollow neural tube in the embryo, the brain grows in complexity and size. In lower vertebrates, the front bump, the forebrain, plays a role in olfaction; the midbrain, in vision; and the hindbrain, in the detection of vibration and pressure. In mammals, things are far more complex; the forebrain and hindbrain each further divide into two additional parts. The embryonic mammalian brain is divided into the telencephalon and diencephalon, which make up the forebrain; the mesencephalon, the midbrain; and the metencephalon and myelencephalon, the hindbrain. The forebrain grows out of all proportion to the rest to develop into the cerebral hemispheres, which cover the brain except for the front part of the hindbrain, from which emerges the cerebellum, important in the coordination of bodily motions. Each hemisphere communicates with the
opposite side of the body, and the two hemispheres send messages to each other over a great bridge, the corpus callosum. The rest of the neural tube becomes the spinal cord, which together with the brain makes up the central nervous system. From it, branching bundles of nerve fibers, the cranial and spinal nerves, extend outward to the skin and all other organs of the body. When our ancestors, more than 3 million years ago, began to walk upright, the brain had to adapt, making a 90-degree bend: The bipedal posture that freed their hands imposed a new anatomical structure upon their brains.

The study of the brain and its many functions—regulation, sensation, perception, control of muscles and glands, memory, emotion, cognition—is too deep to engage in here, except for a few generalizations. One of these is the dynamism of the brain. Like the respiratory, circulatory and female reproductive systems, the nervous system operates on an intrinsic rhythm, the sleep–wakefulness cycle. Constantly active even during sleep, the brain is a busy organ indeed during the waking hours. The structure responsible for controlling the sleep–wakefulness cycle is the ascending reticular activating system, which originates in the brainstem, where it receives information from virtually all sensory pathways. Acting on the cerebral cortex by way of groups of cells called nuclei in the thalamus, it maintains sleep or consciousness.6

The brain's functions are highly localized. The brainstem contains important centers that regulate breathing, heartbeat and other vital functions. From the floor of the thalamus extends the hypothalamus, which connects with the pituitary, the master gland that controls the endocrine system. Special locations have been mapped for emotions, long-term memory, and the serial processing of information from the eyes and ears. One area of the human cerebrum is devoted to understanding language, another to forming words.
Within the convoluted surface of the cerebral hemispheres are motor and sensory domains, with distorted maps of the bodily regions they subserve neatly laid out on them.

The brain is subject to injuries and diseases, both functional and structural. Brain injuries have given scientists many clues to the relation between brain structure and function. The case of Phineas Gage, a worker much of whose prefrontal cortex was destroyed in an industrial accident, is famous for the insight it provides into that structure.7 Probing the brains of patients with brain tumors has given us functional maps of the cerebral hemispheres. Drugs have provided insights into the function of receptors and ion channels. Some pathologies have been traced to mutations in ion channels; these channelopathies will be discussed in Chapter 13.

4. INFORMATION TRANSFER AT TISSUE LEVEL

If we look at a thin slice, suitably stained, of any organ through the optical microscope, we see an arrangement of cells. Remarkably, in the vicinity of any cell, we find a blood capillary and a nerve fiber. In this way, both of the great information systems, hormonal and neural, reach into the neighborhood of every living cell of the body. These systems connect the body into a unified whole.
Because of its proximity to a capillary, every cell receives a blood supply. Thus it has access to the gases, nutrients and hormones the blood contains at any moment. Because of its proximity to a neuron, each cell is connected to the brain or spinal cord. Muscle and skin tissue are innervated by excitatory and inhibitory motor fibers, as well as by sensory fibers. Other tissues may contain only sensory nerve fibers. Every living tissue of the body, except the brain itself, contains pain fibers.

5. INFORMATION TRANSFER AT CELL LEVEL

Cells are basic units of living organisms. In one-celled organisms, they carry out all the functions of the organism. Some of this autonomy remains even in the cells of multicellular organisms, as these cells can often be grown in artificial media as cell cultures.

5.1. The cell

The earliest known cells, Archaebacteria, were free-living and contained within their outer membrane all the functions necessary to life: respiration, digestion, excretion, reproduction. Many new forms evolved from these. Some developed the ability to photosynthesize, splitting water molecules and releasing oxygen molecules as a byproduct. In these cells we find photoreceptors that transduce sunlight into the main source of biological energy. The oxygen that entered the environment from photosynthesis, although lethal to many organisms, became the basis for a new form of respiration, aerobic, far more efficient than the earlier, anaerobic, form. Eventually cells developed a peculiar form of cooperation: Cells were infected by predatory cells but tamed them, using their special talents for the host's own benefit. Cells that could respire aerobically became subunits of other cells in the process of endosymbiosis. These subunits are organelles called mitochondria. The endosymbiotic organelles that conferred the ability to photosynthesize are chloroplasts.
Other organelles are also believed to have endosymbiotic origins.8 Endosymbiosis changed cells in a fundamental way. The new cells, eukaryotes, look quite different under the microscope from the earlier prokaryotes. They are larger (5–30 μm, as compared to the bacteria's 1–10 μm) and far more complex. The characteristic for which they are named (eukaryote = "good nucleus") is the nucleus, clearly demarcated by an envelope consisting of a porous double membrane.

The matrix material of which a cell is composed is a gel called the cytoplasm. Gels have interesting properties associated with the sol–gel transition. In the transition from sol to gel, new bonds form between long polymer chains to form a three-dimensional structure. An example of the importance of sol–gel transformations is the method of movement of the ameba. Ameboid locomotion occurs in many types of cells, including our white blood cells, which destroy pathogens.

Enclosed in the eukaryotic cell's bounding plasma membrane are the nucleus and other organelles, including the mitochondria, which oxidize food molecules to produce molecular energy packages such as adenosine triphosphate (ATP). The ribosomes are organelles that carry out protein synthesis. Other organelles include the
endoplasmic reticulum, Golgi body, chloroplast (in algae and plants), vesicles, vacuoles, cilia and flagella. The study of ion channels in organellar membranes has opened new avenues of research.

The cytoskeleton, a network of protein fibers in the cytoplasm, helps determine the shape of the cell. The types of fibers in the cytoskeleton of a neuron are microfilaments, neurofilaments and microtubules (see Chapter 19, Section 3.3). The neurofilaments, intermediate in diameter between actin filaments (about 5 nm) and microtubules (about 20 nm), are also called intermediate filaments. In Alzheimer's disease, neurofilaments form chaotic tangles in the brain. The maintenance of cell shape by the cytoskeleton has been studied in great detail in red blood cells, partly because of its relation to the genetic disease sickle-cell anemia. The cytoskeleton is anchored to the cell membrane by specific protein molecules.

The cell is enclosed by a fluid two-dimensional structure, the plasma membrane, 6–10 nm thick. In addition to the plasma membrane, some cells, including the cells of algae, plants and fungi, are also surrounded by a cell wall, which provides rigidity to the cell. Composed of a phospholipid bilayer with embedded proteins, the plasma membrane maintains the integrity of the cell by controlling inflows and outflows of materials. The cell membrane normally supports an electric potential difference and is traversed by ionic currents.

5.2. Cells of the nervous system

Nervous tissue contains two types of cells, neurons and glia. Neurons carry messages along their fibers and transmit them from their terminals to other cells, and so are involved in information processing. Glial cells maintain the mechanical, chemical and electrical conditions necessary for the neurons' functioning; of various types and sizes, they outnumber the neurons ten to one and play important supporting roles in the nervous system.

5.3. The neuron

Neurons are the primary information-processing cells of the nervous system. Figure 2.3 shows several types of neurons.9 Motor neurons send excitatory and inhibitory commands from the spinal cord to muscles. The mitral cell is a sensory neuron that decodes an odor message and transmits it to the brain. The pyramidal cell undergoes modifications at its synapses; see Section 5.5. Purkinje cells are highly branched neurons of the cerebellum. The human brain contains some 10^11 neurons, each making roughly one thousand synaptic connections. Thus the estimated number of synapses in the brain is the astronomical figure of 10^14.

A simple motor neuron, which sends information from the central nervous system to a muscle cell, consists of a perikaryon or soma, the cell body that contains the nucleus and many organelles, a number of branching dendrites, and a single axon, which may divide into branches. Each branch ends in a terminal bouton or terminal containing fluid-filled vesicles. The terminal
Figure 2.3. Types of neurons: motor, sensory, pyramidal and Purkinje. From Nicholls, Martin and Wallace, 1992.
communicates with another cell, called the postsynaptic or subsynaptic cell. This may be another neuron or a muscle or gland cell. The region of apposition of the two cells is called the synapse. Much information processing takes place at the synapse, where information is transmitted from one cell to another. Although there are simple electrical synapses, called gap junctions, most synapses are chemical. In these, information is passed by the transfer of special substances called neurotransmitters. At chemical synapses the presynaptic terminal emits this transmitter substance, which diffuses across the synaptic cleft to bind to chemoreceptor molecules in the postsynaptic membrane. Neurons are classified and named by the neurotransmitter they release: a neuron that releases acetylcholine is cholinergic; one that releases adrenaline is adrenergic. Synapses exhibit a great deal of variation; a simple form is shown in Figure 2.4.
Figure 2.4. Schematic chemical synapse. From Nicholls, Martin and Wallace, 1992.
Information enters the neuron at the dendrites and soma, which typically are covered with hundreds of synapses, some at spines. Some are excitatory, driving the cell voltage in a positive direction and so promoting the formation of an impulse at the subsynaptic axon, and some are inhibitory, making the cell more negative and thereby resisting the formation of an action potential. The pattern of action potentials, or spikes, forms as a result of an integration of these impulses at the axon hillock that joins the axon to the soma. These spikes travel as electrochemical waves along the axon and its branches, sending coded messages to the subsynaptic cells. Since the spike can be considered an all-or-nothing event, a neuron carrying, say, eight spikes per second is transmitting information at a rate of 8 bits per second. According to Donald Hebb's rule, if neuron A repeatedly contributes to the firing of neuron B, the efficiency of A in firing B increases. This forms the neural basis of Pavlovian conditioning and associative learning.10

Since the axonal terminals may be more than one meter away from the cell body, the question arises as to how proteins and other materials are transported to the terminal. Axonal transport has been studied by radioactive and fluorescent tracers and by video microscopy. These studies show that vesicles containing proteins and other materials are driven along microtubules by vesicle-associated proteins, both outward from the soma and back toward it (orthograde and retrograde transport). The vesicles move by a rapid process at several hundred millimeters a day, and by slow processes at 1-10 mm per day.11

Neurons are classified as either myelinated or unmyelinated, according to whether or not their axons are wrapped in periodically interrupted insulating regions of
myelin. The tiny gaps between the regions of the axon covered with myelin are called nodes of Ranvier, or simply nodes. Because the transmembrane flow of ions is limited to the nodes, conduction of the action potential along myelinated axons is called saltatory, i.e., jumping, conduction. When axons of the same diameter are compared, conduction speed is much greater in myelinated than in unmyelinated axons. To put it another way, an organism can gain impulse speed without sacrificing compactness by choosing myelination. Myelination is only seen in vertebrate animals.

Neurons function in receiving information at various sites on the dendrites and soma, integrating excitatory and inhibitory inputs to form a train of action potentials, conducting the impulses to one or more target locations, and releasing neurotransmitters at the terminals to transmit the information to postsynaptic cells. In addition, some neurons have feedback loops in their structure; others release neurohormones into the interstitial fluid.

5.4. Crossing the synapse

Briefly, synaptic transmission works like this: When an action potential arrives, it spreads over the terminal. Because the terminal has ion channels that are different from those in the rest of the axon, the inward current is not of sodium but of calcium ions. When the calcium ions reach the vesicles in the terminal, they stimulate them to discharge their neurotransmitters into the synaptic cleft in the process of exocytosis; see Section 5.8. After diffusing across, the neurotransmitter molecules bind to receptors on the postsynaptic membrane, which are ligand-gated channels. In addition to this ionotropic action, neurotransmitters may also act in a metabotropic response, as discussed in Section 6 of this chapter.

Neurotransmitters provide a new dimension in brain complexity, far from the outworn telephone-exchange metaphor of the brain, in which synapses are viewed as mere switches.
While motor neurons of the peripheral nervous system secrete the same transmitter, acetylcholine, central nervous system neurons paint with a rich palette of neurotransmitters. There are some fifty different neurotransmitters (and neuromodulators, which exert a regulatory effect over a larger distance), of different chemical families. In addition to acetylcholine, there are amino acids (glutamate, aspartate, glycine), amino acid derivatives (serotonin, dopamine, noradrenaline, γ-aminobutyric acid), nucleosides and nucleotides (adenosine and its phosphates) and peptides (endorphin, oxytocin, substance P, bombesin). Even as small a molecule as nitric oxide (NO) acts as a neuroactive molecule, although it is not sequestered into vesicles; it diffuses, affecting neurons as far as 100 μm away from its site of synthesis.12 The existence of a variety of neurotransmitters greatly increases the capacity for information transmission.

Synapses provide two very important functions in information processing: plasticity, the ability to change the structure of the nervous response by the growth of new synapses and the decay of unused ones, and memory, to which they contribute.
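Hebb's rule, introduced in Section 5.3, can be written as a one-line weight update. The Python sketch below is a toy illustration, not a model from the text; the function name, learning rate and activity values are invented for the example.

```python
def hebb_update(w, pre, post, eta=0.1):
    """Hebb's rule: when presynaptic activity (pre) accompanies
    postsynaptic firing (post), the synaptic weight w grows by
    eta * pre * post. If either cell is silent, w is unchanged."""
    return w + eta * pre * post

w = 0.5
for _ in range(10):          # repeated paired firing of neurons A and B
    w = hebb_update(w, pre=1.0, post=1.0)
print(w)                     # the synapse has strengthened from 0.5 toward 1.5
```

Repeated paired activity drives the weight upward; a fuller model would also let unused synapses decay, the counterpart of the synaptic plasticity described above.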
5.5. The “psychic” neuron

One of the most remarkable neurons is the pyramidal cell of the cerebral cortex, dubbed by the neuroanatomist Santiago Ramón y Cajal the “psychic” neuron of the brain. This name is apt, and not only because of its impressive pyramid shape, with a long axon, an extended dendrite studded with spines at the neuron’s apex, and a bushy dendritic arbor at its base; see Figure 2.3. It connects its own region of the cortex with distant cortical regions as well as with subcortical effectors. It integrates thousands of afferent inputs; through its efferent connections it regulates skeletal and smooth muscles. Unlike that of other cortical neurons, the pyramidal cell’s information output is transmitted by way of the neurotransmitter glutamate. Its reactions are modulated by a number of other neurotransmitters. The prefrontal cortex of primates carries out higher functions, such as the “working memory,” by which an item of information can be held “in mind” for several seconds and be updated moment by moment. Prefrontal pyramidal neurons exhibit tonic activity triggered by the brief presentation of a stimulus, giving a neural basis for the observations of “out of sight—out of mind” behavior in patients with prefrontal lesions. Specific neurons are coded to items of information in object space, such as a person’s face. Pyramidal neurons are compartmentalized, in the sense that some regions possess dopamine receptors and other regions have serotonin receptors. The receptors have been shown to modulate memory fields and cognitive processes. Pyramidal cells represent vast stretches of the brain’s “information highway,” being privy to the current cortical environment as well as repositories of stored knowledge.13

5.6. Two-state model “neurons”

Now that we have seen something of the marvelous complexity of neurons, we must acknowledge that the word “neuron” has been adopted by a group of engineers, computer scientists and mathematicians for another, much simpler, structure. These model neurons are treated as simple on–off components, comparable to transistors. As we have seen, biological neurons are highly evolved structures, possessing an enormous number of inputs at the dendritic tree and perikaryon and multiple outputs due to branching of the axon at the terminals. The rich variety of synaptic types and neurotransmitters makes the neuron a structure of great versatility and power, far more complex than a simple two-state device. The confusion between real and binary neurons is compounded when the binary model is used to assess the complexity of the brain. According to some model calculations, the brain is comparable to a computer consisting of 10¹¹ transistors, one per neuron.14 By underestimating the variety and complexity of the human brain by many orders of magnitude, such calculations embolden unrealistic speculations about technological “brains.” The most important application of binary neurons is their assembly into neural nets. Although far from describing biological brains, such neural networks have been used to solve difficult problems, becoming a growing trend in computer science.
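The contrast between biological and binary neurons can be made concrete. A minimal sketch of a two-state model neuron follows; the weights and threshold are illustrative values only, not biological data:

```python
# A two-state ("binary") model neuron: it fires (output 1) when the
# weighted sum of its inputs reaches a threshold, and stays silent
# (output 0) otherwise. Weights and threshold are illustrative only.

def binary_neuron(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three synaptic inputs: two excitatory (positive weights) and one
# inhibitory (negative weight).
weights = [0.6, 0.5, -0.4]

print(binary_neuron([1, 1, 0], weights, threshold=1.0))  # 1: fires
print(binary_neuron([1, 1, 1], weights, threshold=1.0))  # 0: inhibition silences it
```

Assemblies of such units, with weights adjusted by a learning rule, constitute the neural nets mentioned above; the sketch also shows how little of the biological neuron’s richness the binary model retains.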
Furthermore, the insights they provide into the way changing brain connections underlie learning in simple models may give us useful clues to the organizational principles of the brains of living organisms. For example, Per Bak and Dimitris Stassinopoulos have studied a “toy model” of a brain, the computer results of which suggest that brains operate in a self-organized critical state.15 The concept of self-organized criticality will be discussed in Chapter 15 of this book.

5.7. Sensory cells

Sensory cells function in detecting events in the body and in the external environment, and transducing this information into the language of action potentials. Mechanoreceptors, cells that detect mechanical pressure, include stretch receptors in muscle, hair cells in the inner ear and touch receptors in the skin. Hair cells are present both in the basilar membrane, where they react to sounds of specific frequencies, and in the vestibular apparatus, where they react to the position of the otoliths. Taste and smell depend on chemoreceptors, which react sensitively to molecular structures. We have already seen an example of bacterial chemoreception in the first section of this chapter. A particular substance may have drastically different effects when transduced by different detectors. For example, there are two types of receptors sensitive to acetylcholine: the nicotinic acetylcholine receptor, a ligand-gated ion channel found in skeletal muscles, where it reacts to acetylcholine emitted by motor neurons, and the muscarinic acetylcholine receptor, which is found on smooth and cardiac muscle. Many chemoreceptors utilize systems of interacting proteins, including the G proteins discussed in Section 6.2. Figure 2.5 illustrates the olfactory transduction pathway. An odorant molecule is carried through the mucus layer and binds to a G-protein-coupled receptor, causing the G protein to release its alpha subunit.
This stimulates adenylyl cyclase to produce elevated levels of cAMP, opening cyclic nucleotide gated channels. The ionic current leads to a generator potential, which may result in the formation of an action potential in the axon.16 The lateral organs of certain fish, such as rays and skates, are highly sensitive to electric fields. They contain electroreceptors known as ampullae of Lorenzini, which respond to stimuli as small as 3 μV.17 Some bacteria, insects, birds and whales exhibit sensitivity to magnetic fields.18 While magnetic fields produced by electric currents in the human heart, brain and skeletal muscle have been measured,19 the question, “How do organisms detect magnetic fields?” is still controversial. However, the use of the mineral magnetite by magnetotactic bacteria for orientation provides a biophysical mechanism for magnetoreception and suggests a role for this sense modality in early evolution.20 Migrating birds and fish navigate by following the Earth’s magnetic lines of force.21
Figure 2.5. An olfactory receptor neuron. The transduction of the binding of an odorant molecule by an odor receptor to a second messenger system occurs in a fine dendrite called a cilium. The depolarization of the ciliary membrane spreads to the membrane of the soma, where it produces action potentials that travel along the axon to the olfactory bulb. From Broillet and Firestein, 1996.
Electromagnetic radiation arriving at the retina is detected by the light receptor structures, rods and cones. In the embryo, these appear first as cilia, which specialize to form stacks of disks containing visual pigment, rhodopsin in the case of rods. Rod cells have been shown to be sensitive to single photons. Ion channels play an important role in rod photoreception: In the dark, there is a current of sodium ions between outer and inner segments of the rod; illumination stops this current. The pit organs of rattlesnakes are lined with receptors sensitive to infrared radiation. Noxious thermal, mechanical and chemical stimuli are detected by sensory neurons known as nociceptors, located on pain fibers. They are also characterized by their sensitivity to capsaicin, the main pungent ingredient of “hot” chili peppers.22
5.8. Effector cells

The information received by receptor cells is shunted to various parts of the brain, where it is processed by interneurons receiving information from many sources. For example, information from the retinal rods and cones may lead to changes in the ciliary muscles controlling the tension of the lens, and hence focus; the external recti controlling eye rotation; and, by way of a series of intermediate structures, radiate to the visual projection area in the occipital lobes of the cerebral hemispheres. Further processing, involving also the cerebellum, may for example lead to coordinated motions such as the movements of head, arms and legs to swing a racket in a return of a served tennis ball. The pupillary light reflex requires the flow of information through the optic nerves and tracts, and synapses in various ganglia and nuclei. Eventually responses are generated by the effectors, muscles and glands. Excitability is by no means limited to the cells of the nervous system. Although the primary function of muscle cells, or myocytes, is to contract, they also conduct impulses. Many of the types of ion channels we will encounter in this book reside in the membranes of myocytes as well as neurons. Myocytes that effect the movement of one bone relative to another are skeletal muscle cells, known, because of their striped appearance in the microscope, as striated cells. Striated muscles are under conscious control of the nervous system. A second form of myocyte is the smooth muscle cell; smooth muscle cells raise hairs, contract arteries, transport food through the digestive tract, and perform many other autonomic functions in the body. The third type of myocyte is the cardiac cell, which is highly adapted to its function of lifelong coordinated rhythmic contraction.
An interesting modification of muscle cells is found in the electric organs of electric eels and other electric fish, which deliver discharges of as much as 600 volts to repel predators and stun prey. The electroplax of these organs contains nicotinic acetylcholine receptors and voltage-sensitive sodium channels; see Chapter 13. Glandular cells have the ability to secrete materials to the outside. This process, exocytosis, is the reverse of endocytosis, the engulfing of an object or bit of fluid. The material to be secreted, such as milk, a hormone or a neurotransmitter, is produced at the endoplasmic reticulum and gathered into a vesicle at the Golgi apparatus. The vesicle drifts to the plasma membrane, the membranes fuse together to form an opening, and the vesicle contents are expelled. While the ability to secrete is the specialty of glandular cells, it is present in many other types of cells, particularly the neuron. In fact, the brain itself can be viewed as an immense gland.23 Protein secretions are synthesized according to RNA coding at ribosomes attached to the endoplasmic reticulum. There the polypeptide may enter the lumen or—if destined to be a membrane protein—thread itself across the membrane. Processing occurs at the Golgi body, where secretory vesicles bud off. The vesicles may be transported down the length of the axon to the terminal, where they are ready to be released when an action potential arrives.
6. INFORMATION TRANSFER AT MEMBRANE LEVEL

The cell membrane is the bounding structure of the cell, separating and distinguishing it from its environment. For the cell to remain alive, it must be an open system, so the membrane must be “leaky.” Leakiness is, however, not a very appropriate term, since the requirement for survival is a highly selective permeability: Water must equilibrate, oxygen and nutrients must enter freely, carbon dioxide and other metabolic products must leave freely, but toxic materials must be denied entry. Furthermore, the excitable membrane must be electrically insulating when not conducting an impulse, but carry substantial currents when it is.

6.1. Membrane structure

These requirements are elegantly met by the biological membrane, which not only covers the cell, but also forms intracellular organelles. It consists of a fluid lipid bilayer with intrinsically located proteins, including ion channels, pumps and receptors. The lipid molecules form two sheets, with their hydrophobic tails facing inward and their hydrophilic heads facing outward, half toward the cell's aqueous environment and the other half toward the cell's aqueous interior. As we saw in Figure 1.3, this bilayer is pierced by protein structures, which may be limited to the inner or outer leaflet of the bilayer, or span the membrane between the aqueous media.

6.2. G proteins and second messengers

The excitability of membranes in heart, smooth muscle and secretory cells, and the somata and dendrites of neurons, is subject to modulation. Their electrical properties constantly adjust to the varying needs of the organism. The complexity of these adaptations is well illustrated by the function of receptors, such as chemoreceptors. This process utilizes regulatory membrane proteins called G proteins. These are members of a class of guanine-nucleotide-binding proteins.
G proteins function as switches capable of turning on or off the activity of other molecules, and they do so for specific time durations. The G proteins operate by a process called collision coupling. The function of G proteins is to transmit messages from receptors to effectors within the cell membrane, such as enzymes and ion channels. Some enzymes send out second messengers, which trigger biochemical changes elsewhere in the cell; this is called the metabotropic response. Figure 2.6 shows a schematic diagram of the induction of a second messenger, inositol 1,4,5-trisphosphate (IP3), from an incoming first messenger, not shown, which binds with the G-protein-coupled receptor R. The activated receptor causes the G protein to split into two parts, the Gα subunit and the Gβγ complex. The Gα subunit, activated by a guanosine triphosphate (GTP) energy carrier, migrates along the membrane to dock with and activate the effector phospholipase C (PLC). The PLC then catalyzes a reaction that yields diacylglycerol (DAG) and the IP3 second messenger.24
Figure 2.6. Schematic diagram of the induction of a second messenger, IP3, from an incoming first messenger that activates the G-protein-coupled receptor R. This causes the Gα subunit to migrate along the membrane to dock with and activate the effector phospholipase C (PLC). From C.U.M. Smith, 1992. Copyright John Wiley & Sons Limited. Reproduced with permission.
The multiplicity of steps in this process has the valuable consequence of amplifying the signal. The production of a different second messenger, in this case cyclic adenosine 3',5'-monophosphate (cAMP), is catalyzed by the enzyme adenylyl cyclase.
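The amplification can be illustrated with a back-of-the-envelope calculation. The per-stage gains below are hypothetical round numbers chosen only to show how multiplicative catalytic steps compound; they are not measured values:

```python
# Toy model of amplification in a G-protein/second-messenger cascade:
# each catalytic stage multiplies the molecule count. The gains are
# hypothetical illustrative values, not measured ones.

def cascade_output(first_messengers, stage_gains):
    """Propagate a molecule count through successive catalytic stages."""
    count = first_messengers
    for gain in stage_gains:
        count *= gain
    return count

# One bound first messenger -> activated G proteins -> enzyme catalytic
# turnovers -> second-messenger molecules such as cAMP or IP3.
print(cascade_output(1, [10, 100, 1000]))  # 1000000
```

Three modest per-stage gains turn a single receptor-binding event into a million second-messenger molecules, which is the point of the multistep design.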
7. INFORMATION TRANSFER AT MOLECULAR LEVEL

Descending the hierarchy of size, we arrive at the molecular level. Here quantum effects become important, although classical physics remains adequate to explain many phenomena. Broken symmetries, exemplified at the organism level by the positions of the heart and liver, which break the bilateral symmetry of the body, become even more important in molecules. As we shall see in Chapter 18, broken symmetry may be considered a fundamental characteristic of life.

7.1. Chirality

If you raise your left hand in front of a mirror, your image's right hand will go up, and it will have the shape of your right hand turned the other way, not that of your left hand. The property of handedness is called chirality. Lord Kelvin defined this concept in 1884: “I call any geometrical figure or group of points chiral and say it has chirality if its image in a plane mirror, ideally realized, can not be brought to coincide with itself.”25 A helical screw is an example of a chiral object: The mirror image of a right-hand screw is a left-hand screw. Highly symmetrical molecules such as O2, H2O and
benzene are nonchiral, while others, such as alanine and all other amino acids except glycine, are chiral; see Figure 2.7. Alanine comes in two forms, L-alanine and D-alanine, where L and D stand for left-handed (levo) and right-handed (dextro) respectively. Every solid made of chiral molecules—unless it is an equal mix of right-handed and left-handed ones, called a racemic mixture—is chiral. A chiral crystal may be made of nonchiral molecules; e.g., the SiO2 molecules of α-quartz are helically ordered.26
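Kelvin's mirror test can be expressed numerically: for a tetrahedral center, the scalar triple product of the vectors from the central atom to three of its ligands changes sign under reflection, so its sign serves as a handedness label. A sketch with idealized coordinates, chosen only to illustrate the sign flip and not taken from any real molecule:

```python
# Handedness as a sign: the scalar triple product a . (b x c) of
# vectors from a tetrahedral center to three of its ligands is
# negated by a mirror reflection. Coordinates are idealized.

def triple_product(a, b, c):
    """Scalar triple product a . (b x c)."""
    b_cross_c = (b[1] * c[2] - b[2] * c[1],
                 b[2] * c[0] - b[0] * c[2],
                 b[0] * c[1] - b[1] * c[0])
    return sum(ai * bi for ai, bi in zip(a, b_cross_c))

# Three ligand positions around a center at the origin.
a, b, c = (1, 1, 1), (1, -1, -1), (-1, 1, -1)
reflect = lambda p: (p[0], p[1], -p[2])  # mirror in the xy-plane

original = triple_product(a, b, c)
mirrored = triple_product(reflect(a), reflect(b), reflect(c))
print(original, mirrored)  # 4 -4
```

The two signs are opposite: no rotation can turn the mirrored arrangement back into the original, which is exactly Kelvin's criterion for chirality.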
Figure 2.7. Chiral objects: quartz crystals, snails, screws, S- and R-alanine, hands and a spinning particle. From Kitzerow and Bahr, 2001.
Some living organisms possess (at least approximate) mirror symmetry, but not all. In Bourgogne, France, where snails are cultivated for food, a million right-handed snails are found for every left-handed one. Alice, in Through the Looking-Glass, said, “Perhaps looking-glass milk isn't good to drink.” It isn't; protein molecules are composed of left-handed amino acids. Many organic molecules are chiral because they contain a carbon atom with four different ligands. The ligands are not in the same plane but are situated at the corners of a tetrahedron surrounding the central carbon atom. The absolute configuration of the molecule is designated rectus (R) or sinister (S).27 For molecular chirality in amino acids and carbohydrates, the configuration is designated by the prefix “L” or “D” instead of the absolute configuration, R or S. We observe two facts: 1. The two configurations are not equally distributed; biological nature has a preference in handedness even at the molecular level. L-amino acids occur much more
frequently than D-amino acids, and proteins are made entirely of L-amino acids. Similarly, there is a natural preference for D-sugars. 2. Chirality has important physiological effects, as the taste, odor and drug activities of the enantiomers can differ greatly. While the amino acid (S)-asparagine has a bitter taste, the R enantiomer has a sweet taste. A disastrous example of the importance of chiral differences was seen in the drug thalidomide, prescribed in the 1960s as an antinausea agent for pregnant women. While the R enantiomer had the desired properties, the S enantiomer, also present in the racemic mixture, acted as a teratogen, causing serious birth defects in the babies. Since then, the chirality of drugs has been carefully controlled. A question that arises from these observations is whether the natural preference in chirality of biomolecules arose as a consequence of an accidental fluctuation that was amplified by evolutionary processes, or whether it arose from a systematic chiral perturbation at a more fundamental level.28 In the 1950s physicists demonstrated that left-handed and right-handed structures are not energetically equivalent in the nuclear weak force. Because of this so-called parity violation, neutrinos are left-handed, while antineutrinos are right-handed. The electrons of an atom may be left- or right-handed, and the symmetry of a stable atom may be broken by the absorption of light. The coupling due to electromagnetic forces between the electron's spin and its orbital motion causes a preference in the handedness of helical molecules. Quantum mechanical calculations show that the L-enantiomers of naturally occurring amino acids and the D-enantiomers of sugars do have slightly lower energy than their unpreferred enantiomers.
Kinetic studies of model systems in which fluctuations and an external chiral influence (such as circularly polarized light) are present show that a period of 15,000 years is sufficient for a systematic chiral influence to determine which enantiomer will dominate. The enantiomeric homogeneity in nature thus may well have been caused by the asymmetry of the weak interactions, but this hypothesis cannot be proved within a time scale of human societies.

7.2. Carbohydrates

Carbohydrates, sugars and their polymers, consist of carbon, hydrogen and oxygen. Simple sugar units, monosaccharides, often assume ring configurations. In the polymerization process, the rings may form linear chains or branched structures called polysaccharides. These units are frequently found to be attached to proteins, forming glycoproteins, discussed in Section 7.5 below.

7.3. Lipids

Lipids are a great variety of molecules containing carbon, hydrogen, oxygen, nitrogen and phosphorus atoms. Simple constituents of lipids are the fatty acids, which are chains of hydrocarbons with a carboxyl group at one end. Typical of all lipids, they are nonpolar at the hydrocarbon end and polar at the carboxyl terminal; thus their long
hydrocarbon tails are at home in a nonpolar environment, while the polar carboxyl groups find their lowest energy position in a polar environment, such as a water surface: The polar heads of the lipids are hydrophilic, while the nonpolar tails are hydrophobic. This amphipathic property of lipids determines many of the structures they form, including membranes. The lipids of a membrane form its matrix, to which proteins and carbohydrates confer specific properties. Three groups of lipids are of major importance to biological membranes: phospholipids, glycolipids and steroids. Phospholipids consist of two long fatty acid chains (nonpolar tails) attached via glycerol and a phosphate group to a hydrophilic characterizing group (polar head) of choline, ethanolamine, serine or inositol. Glycolipids, such as sphingomyelin and gangliosides, form the second group of membrane lipids. Steroids differ from the others by their characteristic flat ring structure. Biomembrane structures are based on the separation of two aqueous phases by a lamellar bilayer consisting of a mixture of lipids containing various types of protein molecules. The bilayer is fluid, allowing both lipid and protein molecules to migrate within the membrane. The unusual combination of order and fluidity exhibited by lipids is characteristic of liquid crystals; see Chapter 17.29

7.4. Nucleic acids and genetic information

Nucleic acids carry out essential functions in reproduction and energy transformation. Deoxyribonucleic acid (DNA) encodes genetic information in a readable form; while it is highly stable in the cell nucleus, its relatively rare changes constitute the important mutations that allow organisms, and species, to adapt to changing conditions. Its form is the famous double helix. Ribonucleic acid (RNA) is a highly adaptable molecule that carries the genetic message, forms part of the ribosomes that build proteins, shuttles amino acids and even has catalytic capabilities.
Nucleic acids are polymers of nucleotides, which consist of a purine or pyrimidine base, a pentose sugar (ribose for RNA and deoxyribose for DNA) and a phosphate unit. In DNA, the purine bases are the two-ring adenine and guanine, abbreviated A and G; the pyrimidines are the single-ring cytosine and thymine: C and T. Since the set {A, G, T, C} can be written {00, 01, 10, 11}, each nucleotide can store and transmit two bits of information. In RNA, T is replaced by U. Adenosine triphosphate (ATP) is an important nucleotide molecule in metabolic transformations; it functions as an energy carrier because of the repulsions of its negatively charged phosphate groups. Energy from ATP is coupled to other reactions by the transfer of one or two phosphate groups. Analogous compounds, such as the GTP discussed in Section 6.2, have similar properties.

7.5. Proteins

Proteins are polymers of amino acids, attached end to end. To make an amino acid, we would start with a central (alpha) carbon and attach to its four bonds an amino group,
a hydrogen atom, a carboxyl group and one additional group, called the side group or R group, that determines the type of the amino acid. For example, if R is a methyl group (the nonpolar side group CH3) the amino acid is alanine. For the simplest amino acid, glycine, the R group is another hydrogen and the molecule is nonchiral, as already mentioned. For any other choice of R, there are two possible structures, D and L. Biological amino acids have the L configuration. The polymerization of amino acids to form a polypeptide is due to a connection by covalent bonds called peptide bonds. Twenty different amino acids are specified by the DNA code; each is translated from a sequence of three RNA nucleotides, known as a codon. Since each RNA nucleotide contains one of four bases, U, C, A and G, there are 4³ = 64 triplet codons, enough for two or more representations of each of the 20 amino acids. Other amino acids are produced from these 20 by enzymatic action. Proteins are formed from one or more polypeptides by posttranslational processing, which involves folding and formation of intramolecular and intermolecular bonds. We discuss the hierarchical structure of proteins in Chapter 12, Section 5. Proteins serve many functions: as structural building blocks, as enzymes that catalyze metabolic reactions, and as informational molecules such as the ion channels that constitute the theme of this book. Proteins may be found dissolved in blood or other body fluids or, as in the case of ion channels, embedded in membranes. Short proteins, called peptides, serve as messenger molecules. Proteins are often found in combination with lipids, forming lipoproteins. Glycoproteins possess carbohydrate side chains. The attachment of these oligosaccharide units, generally anionic, occurs at specific locations of a few amino acids. This process, called glycosylation, is carried out in the lumen of the endoplasmic reticulum and the Golgi body.
Carbohydrate groups extending outward from a cell's membrane play an important role in the recognition of other cells.
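The two-bit encoding and codon arithmetic described in Sections 7.4 and 7.5 can be checked in a few lines. The bit assignment follows the {00, 01, 10, 11} mapping suggested in the text, with U standing in for T in RNA:

```python
# Each RNA base carries two bits, and triplet codons give 4**3 = 64
# possibilities for the 20 amino acids. The bit assignment mirrors the
# {A, G, T, C} -> {00, 01, 10, 11} mapping in the text (U replaces T).

from itertools import product

BITS = {"A": "00", "G": "01", "U": "10", "C": "11"}

codons = ["".join(triplet) for triplet in product("UCAG", repeat=3)]
print(len(codons))  # 64 triplet codons

# A codon therefore carries 6 bits, comfortably more than the
# log2(20), roughly 4.32, bits needed to pick one of 20 amino acids:
# room for the degeneracy (multiple codons per amino acid) noted above.
print("".join(BITS[base] for base in "AUG"))  # 001001
```

The surplus of 64 codons over 20 amino acids is what makes the genetic code degenerate, with two or more codons available for most amino acids.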
8. INFORMATION FLOW AND ORDER

Information flows rapidly through the body by the propagation of impulses along axons and nerve fibers. At synapses, information is transmitted by formation of vesicles containing a variety of neurotransmitters and their release into the synaptic cleft. More slowly, the convection of chemical messengers conveys hormonal information. Axonal transport moves materials, and with them information, in both directions, away from and toward the perikaryon. Information also flows by the growth of nerve and muscle fibers and other structures. Within the cell, information is copied from its DNA archive into various forms of RNA, to be used in the synthesis of proteins. These in turn have many informational jobs, including the speeding of chemical reactions as enzymes, and so helping produce other substances in the body. Membrane proteins control the traffic across the boundaries of organelles and cells, where they direct energetic and informational processes. Voltage-sensitive ion channels play key roles in the control of bodily processes, thought and consciousness.
8.1. Information flow and time scales

Information flows at many levels and time scales. Interaction with the environment yields information in the hard language of survival. The probability of survival is enhanced by adaptation. Survival patterns help determine favorable genes transmitted to offspring, thus shaping changes in the gene pool of the DNA of a species. The information flow in evolution spans many generations. It sometimes results in the origin of new species and the extinction of others.30 In the growth of the organism, information flows from the stored genetic code of DNA to RNA, particularly messenger RNA, which provides the template for the synthesis of polypeptides. These, singly or in combination, become processed into proteins. Among these proteins are the enzymes that speed the synthesis of carbohydrates and lipids, enabling the biosynthesis of the body. Thus information flows from nucleic acids to proteins, and from there to lipids and carbohydrates. A third way in which information flows is by signaling within the body. The flow of information from the environment via sense organs to the nervous system, and thence to the body's effectors, proceeds at a much higher speed than body growth. This form of information flow is a consequence of cellular excitability. Faster yet is the passage of information from molecule to molecule, or even within a single macromolecule. The explanation of this molecular excitability, as displayed in the voltage-sensitive ion channels, is a major goal of biophysics.

8.2. The emergence of order

In the ordered structures of life, from the molecular level to the organismic level, we see again and again the emergence of new kinds of order from increasing complexity. From the patterns of atoms arise molecules; from the interactions of molecules come supramolecular aggregates such as membranes and organelles; from the coordination of these emerge cells; and from them, tissues, organs and organisms.
Each level appears to be understandable within its proper concepts, at its own scale, but is of no help in the understanding of the next “lower” level. Understanding the way a reflex arc works is of no help in understanding the basis of the action potential, and that knowledge in turn is of little use in trying to understand the way a voltage-sensitive ion channel works. On the other hand, the understanding of a more fundamental level can be of considerable help in making sense of the next higher level. For example, knowledge of atomic structure helps us understand the formation of molecules and crystals. The way properties at a more complex level of organization can arise from the properties of its simpler components is called emergence; the new properties are emergent properties. Dealing with emergence is not simple or direct; special mathematical techniques such as those of statistical mechanics are required.
CHAPTER 2 NOTES AND REFERENCES

1. C. U. M. Smith, Elements of Molecular Neurobiology, Second Edition, Wiley, 1996, 252-256.
2. R. B. Bourrett, K. A. Borkovich and M. I. Simon, Ann. Rev. Biochem. 60:401-441, 1991.
3. Leon Brillouin, Science and Information Theory, Academic, 1962.
4. A. Cavaggioni, Carla Mucignat-Caretta, G. Sartor and R. Trindelli, in Neurobiology: Ionic Channels, Neurons and the Brain, edited by Vincent Torre and Franco Conti, Plenum, New York, 1996, 165-173.
5. See, e.g., Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds, John Wiley & Sons, New York, 1994.
6. John Nolte, The Human Brain: An Introduction to its Functional Anatomy, Mosby-Year Book, St. Louis, 1993, 386f.
7. Nolte, 382.
8. Lynn Margulis and Dorion Sagan, Microcosmos: Four Billion Years of Evolution from Our Microbial Ancestors, Summit Books, New York, 127-136.
9. John G. Nicolls, A. Robert Martin and Bruce G. Wallace, From Neuron to Brain: A Cellular and Molecular Approach to the Function of the Nervous System, Third Edition, Sinauer Associates, 1992.
10. J. A. Scott Kelso, Dynamic Patterns: The Self-Organization of Brain and Behavior, MIT Press, Cambridge, MA, 1995.
11. Irwin B. Levitan and Leonard K. Kaczmarek, The Neuron: Cell and Molecular Biology, Oxford University, New York, 1991, 16-19.
12. See, e.g., Smith, 318-349.
13. Patricia S. Goldman-Rakic, Ann. N. Y. Acad. Sci. 868:13-26, 1999.
14. Roger Penrose, The Emperor's New Mind, Oxford University, New York, 1989.
15. Per Bak, How Nature Works: The Science of Self-Organized Criticality, Springer, New York, 1996, 175-182.
16. Marie-Christine Broillet and Stuart Firestein, in Neurobiology: Ionic Channels, Neurons, and the Brain, edited by Vincent Torre and Franco Conti, Plenum, New York, 1996, 155-164. With kind permission of Springer Science and Business Media.
17. Jin Lu and Harvey M. Fishman, Biophys. J. 67:1525-1533, 1994.
18. Ulrich Warnke, in Bioelectrodynamics and Biocommunication, edited by Mae-Wan Ho, Fritz-Albert Popp and Ulrich Warnke, World Scientific, Singapore, 1994, 365-386.
19. B. N. Cuffin and D. Cohen, J. Appl. Physics 48:3971-3980, 1977.
20. R. P. Blakemore, Science 190:377-379, 1975.
21. J. L. Kirschvink, M. M. Walker and C. E. Diebel, Current Opinion in Neurobiology 11:462-467, 2001.
22. M. Tominaga and D. Julius, in Control and Diseases of Sodium Dependent Transport Proteins and Ion Channels, edited by Y. Suketa, E. Carafoli, M. Lazdunski, K. Mikoshiba, Y. Okada and E. M. Wright, Elsevier Science, Amsterdam, 2000, 119-122.
23. Smith, 289-317.
24. Smith, 158.
25. Kelvin, Baltimore Lectures.
26. H. Kitzerow and C. Bahr, in Chirality in Liquid Crystals, edited by H. Kitzerow and C. Bahr, Springer, 2001, 1-27. With kind permission of Springer Science and Business Media.
27. Kitzerow and Bahr, 3.
28. Kitzerow and Bahr, 5ff.
29. D. Chapman, Biological Membranes, Academic Press, 1968; Liquid Crystals & Plastic Crystals, Vol. 1: Physico-Chemical Properties and Methods of Investigation, edited by G. W. Gray and P. A. Winsor, Ellis Horwood Limited, Chichester, 1974, 288-307; in Liquid Crystals: Applications and Uses, Vol. 2, edited by B. Bahadur, World Scientific, Singapore, 1991.
30. Charles Darwin, The Origin of Species, Avenel Books, New York, 1979.
CHAPTER 3
ANIMAL ELECTRICITY
It is a fundamental tenet of science that the laws of nature are universal. Thus the task of biophysicists is to account for the workings of living organisms in terms of physical laws. This book seeks to trace this endeavor in the area of voltage-sensitive ion channels. Like all subjects, electrophysiology has a history. In this chapter we will briefly review the early history of bioelectricity.1 The common view of science as an edifice that is built up steadily, with no retreats, is not correct, as pointed out by Thomas Kuhn.2 Sometimes a “brick” inserted into the structure doesn't fit right, and the parts built on top of it become ragged and uneven. Experimental results become puzzling and inexplicable. Eventually the problem becomes so obvious that something has to be done about it. Part of the building has to be dismantled, the misfit brick removed and replaced, and rebuilding started from there. So the history of science, in addition to its phases of steady growth, also has its revolutionary changes, which Kuhn called paradigm shifts. These shifts require a readjustment in our concepts, which is difficult at best. The field of cellular and molecular excitability has had its share of both successful and proposed paradigm shifts. Revolutionary changes have often begun with phenomena that could not be explained by existing theories. Experimental data may suggest new theories, but to be successful a theory must be consistent with all observed phenomena.

1. DO ANIMALS PRODUCE ELECTRICITY?

The frog's leg, dissected from the animal, twitched. Luigi Galvani and his wife, Lucia, were at the dissecting table while an electrostatic machine was charging nearby. With the scalpel touching the frog’s leg at the instant a spark leaped across the terminals of the machine, the leg muscles contracted.
The intimate relation between electricity and living processes had been demonstrated.3 The existence of bioelectricity in the electric catfish has been known since about 2600 BC, according to Egyptian records.4 Electric fish have electric organs, which they use to repel predators, stun their prey and attract mates. Today, molecular biologists use these electric organs to study the molecular basis of bioelectricity; see Chapter 12.
Electricity received its name from the Greeks, who knew that rubbing a piece of amber (elektron) gave it the property of picking up pieces of lint and other small objects. Like lodestone's attraction for iron, it mysteriously acted at a distance. The science of electricity developed over many centuries. The Leyden jar, invented in the 1740s, consists of a glass jar covered inside and out with metal foils. An electrode passing through a rubber stopper connects to the inner foil with a chain. The jar was charged with a hand-operated electric machine that produced a separation of charge by frictional electricity. The amount of charge stored in the Leyden jar was estimated by the strength of the electric shock it gave the experimenter—the first electric measuring instrument was the human body! Benjamin Franklin used a Leyden jar in his famous, and dangerous, kite experiment to demonstrate the electric nature of lightning in 1752.

1.1. Galvani's "animal electricity"

Galvani carried out an important series of experiments, the results of which he published in 1791. He used a frog's hind leg, attached to its sciatic nerve, a bundle of nerve fibers sheathed in connective tissue that connects to the spinal cord. Galvani discovered a new property of electricity, electric current, which became known as galvanic electricity. When he hung a frog leg on a hook on his balcony, he saw the leg twitch when it touched the metal railing. Since he could see no external source of electricity, Galvani declared that it came from the frog itself. This, he asserted, demonstrated the existence of animal electricity, electric charge generated by the organism itself. Although we know today that such bioelectric charge separation exists, his experiment had not demonstrated it.5

1.2. Volta's battery

Alessandro Volta found a fallacy in Galvani's argument: noting that the hook and railing were made of dissimilar metals, brass and iron, Volta showed in 1800 that the junction of unequal metals was sufficient to generate electricity. His voltaic pile of dissimilar metals and cloth soaked in salt solution served as a continuous source of electric current, opening a new era in chemistry and physics. Humphry Davy used it to isolate a number of elements, including sodium, potassium, calcium and magnesium, by electrolysis in 1808. Hans Christian Oersted discovered in 1820 that a steady electric current produces a magnetic field, which rotated a magnetized needle. The development of the galvanometer, a sensitive device for measuring current, followed shortly thereafter. Michael Faraday expressed the chemical effect of an electric current in his laws of electrolysis, published in 1834. Among his many other contributions, Faraday described an unusual silver compound, today recognized as the first known example of a superionic conductor; see Chapter 6, Section 7.
1.3. Du Bois-Reymond's "negative variation"

Galvani's hypothesis of animal electricity received new support when it was shown, by Galvani and Alexander von Humboldt, that a nerve–muscle preparation could be stimulated by contact with a freshly cut muscle. In the 1830s Carlo Matteucci proved in a series of experiments that a wound gives rise to a current of injury at its surface, vindicating Galvani's belief in animal electricity. In 1843 Emil du Bois-Reymond found with a galvanometer that a steady current flowed from the intact portion to the cut end of a frog nerve. This current decreased when the nerve was excited. This "negative variation," as du Bois-Reymond called it, was later called the action current. In 1848, du Bois-Reymond developed a theory of nerve excitation on the basis of his observations on stimulation of a frog nerve–muscle preparation. He proposed that the excitatory effect is a function only of the rate of rise of the applied current. After several decades of influence, his theory was contradicted by new data on the amplitude of the threshold pulse (the lowest current strength that excited the nerve) and was abandoned.6

2. THE NERVE IMPULSE

New measurements, based on new techniques, shed light on the nature of the nerve impulse, preparing the ground for quantitative models.

2.1. Helmholtz and conduction speed

Hermann von Helmholtz in 1850 made the first measurement of the speed of nervous conduction, overthrowing the notion that nerve conduction was so rapid that its propagation velocity could never be measured. Helmholtz dissected a frog's leg, leaving several centimeters of sciatic nerve intact. In one of his experiments, he attached the free end of the muscle to a lever that made a mark on a rotating cylindrical surface. The time from stimulation to the muscle twitch was less when the nerve was stimulated near its attachment to the muscle than when the stimulating electrode was farther back on the nerve.
The length of nerve between the two electrode positions divided by the time difference gave a mean conduction speed of 27.3 meters per second. We now know that the sciatic nerve is a bundle of many nerve fibers, with conduction speeds ranging from 10 to 100 m/s.

2.2. Pflüger evokes nerve conduction

Edouard F. W. Pflüger in 1859 made a series of studies on the stimulation of nerve–muscle preparations with steady currents. He reported that nerve propagation can be evoked with weak currents by stimulation with a negative electrode (cathode), and that, with larger currents, a nerve impulse can be initiated by either starting or stopping the current; stimulation with a positive electrode (anode) can block the conduction of the impulse.
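Helmholtz's two-point method reduces to simple arithmetic: the extra nerve length traversed divided by the extra latency. A minimal sketch; the distance and latency values below are hypothetical, chosen only to reproduce a speed near his 27.3 m/s figure:

```python
def conduction_speed(extra_length_m, extra_latency_s):
    """Mean conduction speed from Helmholtz's two-point method:
    extra nerve length traversed divided by the extra twitch latency."""
    return extra_length_m / extra_latency_s

# Hypothetical illustration: stimulating 60 mm farther from the muscle
# delays the twitch by 2.2 ms.
v = conduction_speed(0.060, 0.0022)
print(f"{v:.1f} m/s")  # about 27.3 m/s
```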
The development of artificial solutions such as Ringer's solution allowed cells to remain active for longer periods of time. This, along with more sophisticated circuits and microelectrodes, provided the basis for new experiments.

2.3. Larger fibers conduct faster—but not always

An important milestone was passed when experiments were first carried out on single fibers rather than bundles of nerve fibers. This allowed the relation between fiber diameter and conduction rate of simple axons to be studied.7 An important discovery was that some axons are covered with a myelin sheath interrupted by periodic gaps about every 2 mm. These gaps, the nodes of Ranvier, speed the nerve impulse by the jumping of action currents from node to node, a process called saltatory conduction. A nerve such as the optic or sciatic may contain thousands of motor and sensory fibers, both myelinated and unmyelinated.

2.4. Refractory period and abolition of action potential

In muscle or nerve, a second shock delivered shortly after the start of an action potential fails to evoke a response. This absolute refractory period was first found in cardiac muscle and nerve trunks. It is markedly prolonged by lowering the temperature. E. D. Adrian and Keith Lucas found that a subnormal response could be evoked by a strong second shock during a recovery time called the relative refractory period. The absolute refractory period coincides with the duration of the action potential. The action potential can be terminated by a brief pulse of inward current. After a weak pulse the action potential resumes its course, but after a sufficiently strong pulse the membrane potential falls to its resting value; the action potential is abolished in a roughly all-or-none manner.8

2.5. Solitary rider, solitary wave

In 1834, John Scott Russell, while riding on a horse, studied the motion of a small boat in a canal. He observed that a lump of water formed in front of the boat when it was suddenly stopped.
This lump, the first recorded example of a solitary wave or soliton, moved forward with constant speed and shape. Experimentation in a tank of water showed that solitons in shallow water could be generated by the movement of a piston or the insertion of a block into the water. In 1895, Diederik Korteweg and G. de Vries derived a partial differential equation that described the shallow-water waves Russell had observed. Two features of this equation are a dispersive term and a nonlinear term. The tendency of dispersion to spread the energy spectrum of the pulse is balanced by the tendency of nonlinearity to distort its shape. From this dynamic balance the solitary wave emerges as an independent entity. In another type of solitary wave, dissipative effects are in dynamic balance with nonlinear effects. A burning candle is a common example of nonlinear diffusion. The power radiated by the flame is equal to the downward speed of the candle's surface times the rate at which energy is released by the burning vaporized wax.
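In one standard normalization (not the original 1895 variables, but equivalent to them), the Korteweg–de Vries equation reads

```latex
\frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x}
  + \frac{\partial^{3} u}{\partial x^{3}} = 0 ,
```

where $u(x,t)$ is the wave amplitude; $6\,u\,u_x$ is the nonlinear term and $u_{xxx}$ the dispersive term. A single-soliton solution is $u(x,t) = \tfrac{c}{2}\,\mathrm{sech}^{2}\!\left[\tfrac{\sqrt{c}}{2}\,(x - ct)\right]$, a pulse whose speed $c$ and amplitude are linked, just as in Russell's observations.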
In 1900, Wilhelm Ostwald described experiments on metal wires in acids. When the oxide layer of the metal was disturbed by scratching, a reaction propagated along the wire at a speed of about 1 m/s. This phenomenon was interpreted as a model of nerve propagation.9 The word soliton was coined in 1965 by Norman J. Zabusky and M. D. Kruskal, who studied the decomposition, collision and particle-like properties of these waves by computation. The concept of soliton propagation was applied by Alexander S. Davydov to molecular systems such as the helical structures in proteins. In Chapters 18 and 19 we will apply this concept to both action potentials and the surge of ions across the membrane through the ion channel.

3. BIOELECTRICITY AND REGENERATION

The story of electricity and life is intimately connected with studies of regeneration.10 Plants and primitive animals possess the ability to regrow lost parts, as laboratory studies of regeneration in, for example, planarian flatworms illustrate.

3.1. Regeneration and the injury current

The scientific study of regeneration began in the 18th century. Crayfish, lobsters and crabs were shown to regrow legs and claws that had been amputated. Hydra, a small animal living in ponds, showed remarkable powers of regrowth. Lazzaro Spallanzani discovered the ability of the salamander to regrow its limbs, tail, jaw, and the lenses of its eyes. After the discovery of the cell, scientists concerned themselves with the process of differentiation, by which a cell becomes specialized. A fertilized egg divides by mitosis and proliferates. The growing embryo separates into three tissue layers: the endoderm, whose cells make glands and viscera; the mesoderm, which gives rise to muscle, bone and the circulatory system; and the ectoderm, which grows into skin, sense organs and the nervous system. Once a cell has differentiated, it appears incapable of reverting to the neo-embryonic form that can convert to a different type of cell.
If such dedifferentiation were impossible, there could be no regeneration of one type of cell—bone, muscle, skin—from another. In a salamander that has had a limb cut off, regeneration begins with a proliferation of epidermal cells to form an apex over the stump. In about a week, this apical cap encloses a ball of undifferentiated cells. The so-called totipotent cells of this blastema then differentiate into bone, cartilage, blood vessel and skin cells. Grafting experiments show that, as the new limb forms, its shape emerges from a pattern of organization that depends on its location on the body surface. Information therefore must pass from the body to the blastema, which also has information in the DNA of its cell nuclei. Where do the undifferentiated cells of the blastema come from? Experiments showed that the formation of the blastema depended on the arrival of regrowing nerve
fibers. These branch and form terminal buds that make synaptic connections with the epidermal cells. The interactions at these neuroepidermal junctions do not depend on action potentials or acetylcholine. The clue to the question of what passes across the gap came from A. M. Sinyukhin, who related the regeneration of tomato plants to the current of injury at the wound. This electric current initially flows inward but reverses to outward after the callus is formed. As the outward current increased, cells more than doubled their metabolic rate, became more acidic and produced more vitamin C. Externally applied currents in the same direction augmented these effects.

3.2. Bone healing and electrical stimulation

Robert O. Becker sought to solve the problem of nonhealing bone fractures. He compared injury currents in salamanders, which regenerated limbs, with those in frogs, which did not regenerate. While the salamander's injury current reversed during regeneration, the frog's only returned gradually to its preamputation baseline. The passage of electric currents through the aquarium water accelerated regeneration in larval salamanders. In the 1920s, Elmer J. Lund found that the polarity of regeneration could be controlled by small currents. Electric potentials were found on the surfaces of slime molds, worms, hydras, salamanders and mammals, including humans. Did the electric field gradient control the distribution of growth inhibitors and stimulators? Studies of planarians showed that these flatworms possess an electric potential gradient that controls the regrowth of cut parts. The polarity of a planarian could be controlled by passing a current through it. When these experiments showed that external electric fields could be used to determine whether a head or a tail would grow from a cut piece, interest grew in the healing effects of direct currents.
These findings, applied to the problem of nonhealing bone fractures in humans, led to the discovery of piezoelectricity in bone. Piezoelectricity, discussed in Chapter 16, transduces a mechanical stress into an electric polarization. This polarization plays an important role in the organization of bone cells along lines of maximum stress, allowing bone shapes to adapt to their function.11 Implantation of synthetic biopolymers that generate piezoelectric currents has been shown to induce the growth of bone.12 Recent studies deal with the exposure of humans to electromagnetic waves, ever more prevalent in our technological society. Studies suggest that exposure to extremely low frequency electromagnetic radiation increases stress and the incidence of disease.13

3.3. Neuron healing

W. E. Burge found in 1939 that the voltage between the head and other body surfaces became more negative during physical activity, diminished in sleep and became positive under general anesthesia. This suggested that an analog system of information transfer is present in addition to the digital information transfer by action potentials. Since the
threshold of an action potential depends on the resting potential, any change in external potential can affect the level of neural activity. In the vicinity of an injured cell, such a change would depress the activity of a neuron as a result of the injury potential. Harvey M. Fishman and collaborators studied the sealing of injured axons. Their surprising finding was that the giant axons of squids, crayfishes and earthworms respond to cutting by endocytic formation of vesicles that fuse with each other to yield axosomes with diameters up to 5 μm in the axoplasm of the injured region. The axosomes move to the cut ends and accumulate to form a temporary plug that restores a barrier sufficient to allow recovery of axonal electrical function until a permanent seal (a continuous plasma membrane) is restored.14

4. MEMBRANES AND ELECTRICITY

In 1883 Svante Arrhenius showed that certain compounds, strong electrolytes, dissociate into charged fragments, ions, in water, a finding that prepared the way for the membrane hypothesis.

4.1. Bernstein's membrane theory

The concept that the nerve impulse was due to ionic effects at the membrane bounding the nerve fiber, the axolemma, was developed by L. Hermann, M. Cremer, Julius Bernstein, Jacques Loeb and others.15 Bernstein's membrane theory of 1902 stated that living cells consist of electrolytes contained within a poorly permeable membrane, which supports a potential difference with the inside negative, and that during activity the membrane becomes so permeable to ions as to abolish the potential. The theory was based on measurements with the newly developed capillary electrometer and string galvanometer, which allowed him to estimate cell phase boundary potentials of 0.02 V up to 0.08 V. However, Bernstein's depolarization was only a decline toward zero potential; the discovery of the sign reversal of the action potential was to come later.

4.2. Quantitative models

Ostwald in 1890 studied artificial precipitation membranes and compared them to the membranes that enclose living protoplasm, pointing out their similarity as semipermeable membranes. The quantitative basis for this hypothesis had already been begun for the special case of an electrically neutral membrane in 1889 by Walther Nernst and in 1890 by Max Planck, whose quantum theory we have already discussed. U. Behn extended these results in 1897, anticipating later developments by including the role of sodium. We will study the equation of Nernst and Planck in Chapters 7 and 8, which are devoted to electrodiffusion, a theory that has had a profound influence on membrane biophysics. The laying of a telegraph cable across the Atlantic Ocean required a careful analysis of current flows, which was carried out by William Thomson (Lord Kelvin). The application of his cable equation to nerve and muscle fibers led to a quantitative model of the axon, which will be discussed in Chapter 6, Section 3.
4.3. The colloid chemical theory

Loeb noted that salt solutions containing only sodium ions cannot maintain excitability, but that this poisonous effect can be antagonized by calcium ions. Since these ions exist partly in combination with proteins, he argued, the substitution of one ion for another changes the physical properties of the protein. Irritability depends on the presence of Na+, K+, Ca2+ and Mg2+ in the right proportions, and a change in these proportions can generate or inhibit activity. Loeb's theory was amplified in 1905 by R. Höber, who found that the effect of ions on excitability follows the lyotropic series, the series of anions discovered by Hofmeister as critical for "salting out" proteins in water. He also showed that dyes stain a nerve soaked in a potassium-rich solution, but less so in a calcium-rich solution. He explained the parallelism between stainability and excitability by a loosening of the colloidal membrane in the former case and a compaction in the latter. An important link between membranes and bioelectrogenesis was made clear in 1911 by F. G. Donnan, who studied the equilibrium distribution of ions between two solutions separated by a semipermeable membrane when one of the solutions contained a colloidal suspension.16 Torsten Teorell in 1935 extended the Donnan theory to explain membrane permeability in terms of the partition effects due to differential solubility. K. H. Meyer and J. F. Sievers in 1936 published an independent version of this "fixed charge" theory. Teorell built and analyzed a porous membrane system that exhibited oscillations due to coupling between ion and water fluxes.

4.4. Membrane impedance studies

Hugo Fricke in 1923 determined the capacitance of the red blood cell membrane to be 0.81 μF/cm². With the additional assumption that the membrane might have the dielectric constant of an oil, 3, he estimated the thickness of the membrane to be about 3.3 nm—a molecular dimension! Kenneth S. Cole extended Fricke's work, carrying out a series of studies on the impedance of sea urchin eggs. The analyses required for the interpretation of these measurements led Cole, together with his brother Robert H. Cole, to develop a powerful new mathematical formulation of impedance problems. We will discuss this formulation, now a part of condensed-state physics theory, in Chapter 10. The rediscovery by J. Z. Young that squid have axons of unusually large diameter revolutionized the study of neurobiology. With these "giant" axons it became possible to insert electrodes into a single fiber. In 1939, K. S. Cole and Howard Curtis made impedance measurements that demonstrated the fall of membrane resistance during excitation.17 Their results on squid axon, prompted by their earlier experiments on the large cylindrical single cell of the alga Nitella, showed that an action potential is accompanied by a large increase in membrane conductance. Figure 3.1 shows that the voltage change comes first, suggesting that membrane depolarization precedes the change in membrane conductance.
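Fricke's thickness estimate treats the membrane as a parallel-plate capacitor, d = ε₀εᵣ/C, where C is the capacitance per unit area. A quick check of the arithmetic, using his published values:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def membrane_thickness(c_per_cm2, eps_r):
    """Thickness (m) of a parallel-plate capacitor with specific
    capacitance c_per_cm2 (F/cm^2) and relative permittivity eps_r."""
    c_per_m2 = c_per_cm2 * 1e4  # convert F/cm^2 -> F/m^2
    return eps_r * EPS0 / c_per_m2

# Fricke's values: 0.81 uF/cm^2 and an oil-like dielectric constant of 3.
d = membrane_thickness(0.81e-6, 3.0)
print(f"{1e9 * d:.1f} nm")  # about 3.3 nm
```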
Figure 3.1. Superposition of a band measuring membrane resistance decrease on a line showing the action potential during the passage of an impulse across a squid giant axon. The marks indicate 1 ms time intervals. From Cole and Curtis, 1939. Reproduced from The Journal of General Physiology, 1939, 22:649-670. Copyright 1939 The Rockefeller University Press.
The study of membrane impedance was an important step in the electrical modeling of excitable membranes; see Chapter 10.

4.5. Liquid crystals and membranes

In 1854 R. Virchow, a biologist known for his fundamental contributions to cell theory, described myelin figures. These lipid–water systems were the first known representatives of a state of matter intermediate between liquid and solid, the liquid crystals.18 The first systematic description of liquid crystals was made in 1888 by the botanist Friedrich Reinitzer, who studied the phase transitions of cholesteryl benzoate. On heating the solid, Reinitzer observed the formation of a turbid liquid at 145.5°C, which became transparent on further heating to 178.5°C, thus showing three distinct phases: solid, liquid crystal and liquid.
The physicist Otto Lehmann, who coined the term liquid crystal, found that many organic compounds exhibited a liquid crystalline phase marked by the mechanical properties of a liquid but the optical properties of a crystalline solid.
56
CHAPTER 3
Since these phases are neither liquids nor crystals, G. Friedel in 1922 proposed the term mesomorphic states. He separated them into three classes, smectic, nematic and cholesteric; these are described in Chapter 17. Liquid crystals, sometimes called paracrystalline materials, have long been recognized as important aspects of biological structures and their functions. Joseph Needham wrote:19

[L]iving systems actually are liquid crystals, or, it would be more correct to say, the paracrystalline state undoubtedly exists in living cells... The paracrystalline state seems the most suited to biological functions, as it combines the fluidity and diffusibility of liquids while preserving the possibilities of internal structure characteristic of crystalline solids.
F. Rinne, in 1933, and J. D. Bernal, in 1933 and 1951, pointed out the intimate connection between naturally occurring liquid crystals and life processes. Nowhere is this connection more compelling than in the structures and functions of biological membranes, which are a type of liquid crystal.
5. ION CURRENTS TO ACTION POTENTIALS

Let us now turn our attention to the ions that cross the excitable membrane and the currents they carry.

5.1. The role of sodium

E. Overton in 1902 showed that frog muscles became inexcitable when immersed in solutions with less than one-tenth the normal concentration of sodium chloride. Of the two ions of NaCl, the chloride ion was not the essential one; it could be replaced by nitrate, bromide and other anions without inducing a loss of excitability. That left the sodium ion; it could be replaced without loss of excitability only by its close relative, the lithium ion. In 1943, David Goldman published an analysis of electrodiffusion applied to nerve membranes.20 Here he simplified the mathematical model by what became known as the constant-field approximation. The Goldman equation was extended by Hodgkin and Bernhard Katz to provide an equation that gave a value for the equilibrium voltage across a membrane with multiple species of ions, positive and negative. In 1949 Hodgkin and Katz carried out experiments on squid giant axon, in which they replaced the external sea water with solutions deficient or enriched in
Figure 3.2. Demonstration by Hodgkin and Katz that the height and rate of rise of the action potential depend on sodium-ion concentration. Record 1 shows an action potential (with respect to resting potential) in seawater. Records 2-8 show the slowing and diminution of the action potential 30, 46, 62, 86, 102, 107 and 118 seconds after an isotonic dextrose solution without sodium was applied to the axon. Traces 9 and 10 show the return of the action potential 30 and 90-500 seconds after the reapplication of seawater. From C. Hammond, 1996.
sodium ions. They showed that both the height and the rate of rise of the action potential depended strongly on sodium-ion concentration; see Figure 3.2.21 Variation of the potassium-ion concentration resulted mainly in a change in the resting potential. These findings supported their sodium hypothesis: that the permeability of the axonal membrane in the active state was primarily to sodium ions, while its permeability in the resting state was mostly to potassium ions.

5.2. Isotope tracer studies

Experiments on epithelial membranes, such as toad bladder and frog skin, provided new techniques and data. Isotope tracer studies of the fluxes of individual ion species were carried out by Hans Ussing, who also developed mathematical equations to analyze them.

5.3. Hodgkin and Huxley model the action potential

During World War II many biophysicists were involved in military activities. At the end of the war, Cole established a Department of Biophysics at the University of
Chicago. With George Marmont, an electrical engineer, he devised methods and circuits to "tame" the action potential. A long internal wire electrode created an isopotential region within the axon by greatly lowering the axon's internal resistance. This space clamp effectively prevented the action potential from propagating, allowing its time variation to be examined without spatial complications. To establish firm electrical control of the membrane, Cole developed a feedback system, the voltage clamp. This system allowed the experimenter to study the currents elicited by an imposed voltage stimulus. Joined by Andrew Huxley, Hodgkin and Katz then modified the Goldman equation to describe the electrical behavior of squid axon under voltage clamp. Hodgkin and Huxley conducted a series of incisive experiments on voltage-clamped squid axons. By replacing the sodium ions with impermeant ions, they were able to separate the membrane current into its ionic components and gained enough information from their experiments to describe these ion currents quantitatively. This work reached its completion in four papers published in 1952 by Hodgkin and Huxley, in which they generated a set of model equations on the basis of their measurements.22 Remarkably, these equations allowed them to reproduce a close approximation to the shape of the action potential. As the underlying mechanism of excitability, Hodgkin and Huxley proposed that the voltage dependence of the conductance change is due to movements of a charged or dipolar component of the membrane; see Chapter 9. An enormous expansion of electrophysiology occurred as these ideas and techniques were applied to many different types of cells of different species. One important new development was the internal perfusion of the axon. Another was the use of toxins, such as the pufferfish toxin tetrodotoxin, for ion separations. M. F. Schneider and W.
Knox Chandler, and separately Clay Armstrong and Francisco Bezanilla, and R. D. Keynes and E. Rojas, took up Hodgkin and Huxley's speculation that the opening of an ion channel is due to the response of an ionic or dipolar unit within it, a concept suggesting that the motion of this unit could be detected if ionic currents were suppressed. These groups observed gating currents, presumably corresponding to such movements.23

5.4. Membrane noise

Suppose you insert a tiny electrode into the brain of a snail, goldfish or cat. As you push the probe deeper into the brain, you will observe regions where spikes are far apart in time and other places where they are more frequent. But one thing you will notice at all locations: the time between spikes is random; the patterns seldom repeat. This randomness is seen at the cell level and at the membrane level as well. In 1966, Hans E. Derksen and Alettus A. Verveen reported that the resting potential of nerve membranes is not steady but is subject to spontaneous fluctuations.24 The analysis of these spontaneous electrical fluctuations plays an important role both in the isolation of ion channels and in our understanding of excitability. We will discuss noise measurements and their physical interpretation in Chapter 11.
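For reference, the membrane equation of the 1952 Hodgkin–Huxley papers discussed above can be written in modern notation (the empirical voltage-dependent rate functions $\alpha$ and $\beta$ are omitted here):

```latex
C_m \frac{dV}{dt} =
  -\bar g_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}})
  - \bar g_{\mathrm{K}}\, n^{4}\,(V - E_{\mathrm{K}})
  - g_L\,(V - E_L) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x,
\quad x \in \{m, h, n\}.
```

The voltage-dependent rates $\alpha_x$ and $\beta_x$ embody the charged or dipolar gating component whose movement was later detected as gating current.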
5.5. The patch clamp and single-channel pulses

The need for accurate noise measurements spotlighted the importance of electrode area: since random noise averaged out over the surface of large electrodes, its measurement required small "patch" electrodes. In the hands of Alfred Strickholm and Harvey Fishman, the patch clamp took form.25 It was brought to a high point of development by Bert Sakmann and Erwin Neher, whose technique made it possible to see currents from single channel molecules.26 Other studies showed that these molecules were proteins. These new techniques ushered in a new world of single-channel pulse studies that are helping decipher the structures and functions of ion channels.

6. GENETICS REVEALS CHANNEL STRUCTURE

The investigation of the structure of ion channels required the isolation of the proteins and a means of characterizing them.

6.1. Channel isolation

The nerve poison tetrodotoxin served as a powerful tool in setting apart the sodium channel. By incorporating proteins into bilayers in vesicles it became possible to study their electrical properties. Raimundo Villegas and colleagues were able to show that the structures responsible for excitability are single protein molecules, to isolate these channel molecules and to estimate their molecular weight.27 After these accomplishments, the science of genetics was called into play.

6.2. Genetic techniques

The discovery of the structure of DNA opened the door to a great variety of techniques for characterizing and altering genetic materials, known as genetic engineering. A group led by Masaharu Noda in the laboratory of Shosaku Numa used recombinant DNA techniques to clone the nicotinic acetylcholine receptor and, in 1983, the sodium channel from electric eel.28 From the complementary DNA they were able to deduce the primary structures of these molecules. From the primary structures and the known properties of the amino acids, channel structures were modeled.
The Noda group found that the sodium-channel molecule has a mass of about 260 kDa and consists of four roughly equal homologous repeats. A membrane-spanning segment in each repeat, S4, was found to have a pattern of repeated, positively charged residues. The S4 segments were identified as the voltage sensors of voltage-sensitive ion channels. Studies in fruit-fly genetics later provided a wedge by which the potassium channel could be approached: a mutation in these flies, Shaker, that led to tremors was found to originate in a certain potassium channel. Many ion channels have since been identified, and their structures are emerging. Research is also clarifying the relationship between channel malfunction and disease. The classification of ion channels will be discussed in Chapter 13.
CHAPTER 3
6.3. Modeling channel structure
The hydrophobicity of the constituent amino acids gave an indication of their location in the protein relative to the lipid bilayer: amino acid residues within the bilayer tend to be more hydrophobic than those near the aqueous surfaces. This property made it possible to begin building models of the channel's physical structure from its amino acid sequence. Recent successes in crystallizing certain bacterial channels have given renewed impetus to the modeling process.
7. HOW DOES A CHANNEL FUNCTION?
As knowledge of the structure and function of ion channels grew, one question remained: What is the relation between structure and function? More simply put, how does an ion channel work? Experiments based on naive interpretations of the predictions of Hodgkin and Huxley continued to be useful, but the question of the structure–function relationship remained an open problem.

7.1. The hypothesis of movable gates
The discovery of single-channel currents helped establish a scientific ideology in which the channel was viewed as a gated pore. The problem appeared to be one of finding a movable part of the channel that could occlude or clear the aqueous pathway over which ions traverse the channel. The arrival of molecular engineering seemed to promise that these structures would become apparent. That promise has not been fulfilled; molecules do not behave like macroscopic devices.

7.2. The phase-transition hypothesis
As one alternative explanation, Ichiji Tasaki proposed a model of two stable states, with a phase transition between them.29 Other proposals included an analogy between channels and semiconductors, and dipolar mechanisms.30 Microscopic models of channel function are discussed in Chapter 14.

7.3. Electrodiffusion reconsidered
In 1973 I reconsidered the electrodiffusion model, developing exact solutions to its equations and using them to generate current–voltage curves. Although the solutions to the nonlinear problem were qualitatively different from the linear solutions, the I–V curves were quite close to those obtained from the constant-field approximation. The exact solutions turned out to be no better descriptors of the behavior of excitable membranes than the conventional approximations.31 Thus the problem appeared to lie not in the approximations but in the formulation and application of the model itself.
ANIMAL ELECTRICITY
Since some assumption or assumptions of the model had to be wrong, I was drawn to the assumption that the parameters of the equations are constants. The parameter that stood out particularly was the dielectric permittivity, the factor by which the force of an electrostatic interaction is reduced by a particular medium. What if it, or the ionic mobility, or both, depended on the electric field? That is not a far-fetched idea; after all, Hodgkin and Huxley had made their conductances depend on voltage. The problem of the electric-field dependence of the dielectric permittivity had already been worked out by condensed-matter physicists studying materials called ferroelectrics. In 1920, crystals of sodium potassium tartrate tetrahydrate, commonly known as Rochelle salt, were found to have electrical properties unlike those of any material seen before.32 Rochelle salt was the first known example of this class of materials; others were discovered only slowly. Since the phase transitions these materials undergo in an electric field are strikingly analogous to the transitions of ferromagnetic materials such as iron in a magnetic field, they were named ferroelectrics.

7.4. Ferroelectric liquid crystals as channel models
The understanding of ferroelectrics has grown rapidly, and many new ferroelectric materials, including polymers and liquid crystals, have been discovered. In the 1960s and 1970s several suggestions of possible ferroelectric behavior in nerve and muscle membranes were made. A. R. von Hippel wrote33 in 1969 that "True relations may exist between ferroelectricity, the formation of liquid crystals, and the generation of electric impulses in nerves and muscles." A class of liquid crystals, smectic C*, exhibits phase transitions controlled by the electric field. These ferroelectric liquid crystals are made of elongated molecules that form layers in which the molecules are tilted with respect to the layer normal. Furthermore, the molecules must be chiral.
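The two-state, field-driven switching behind the ferroelectric analogy can be stated compactly in the Landau expansion standard in the ferroelectrics literature (e.g., Lines and Glass); the coefficients a and b here are generic placeholders, not values from this text. Near the Curie temperature the free energy is expanded in the polarization P under an applied field E:

```latex
% Landau free-energy expansion of a ferroelectric in an applied field E
F(P) = F_0 + \tfrac{1}{2}\, a\,(T - T_c)\, P^2 + \tfrac{1}{4}\, b\, P^4 - E P,
\qquad a, b > 0.
```

Setting the derivative with respect to P to zero gives a(T − Tc)P + bP³ = E. For T < Tc and E = 0 there are two stable states, P = ±[a(Tc − T)/b]^{1/2}, and a sufficient field E switches the system between them; it is this field-controlled bistability that motivates comparing ferroelectric materials with voltage-sensitive channels.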
From the literature I, later joined by Vladimir Bystrov, noted a number of similarities between the behavior of ferroelectric liquid crystals and that of the ion channels of excitable membranes.34 A certain class of amino acids present in ion channels, those with branched nonpolar sidechains, was found to be particularly effective in these transitions35; see Chapter 21.

NOTES AND REFERENCES
1. Kenneth S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1968, 1972; A. L. Hodgkin, Conduction of the Nervous Impulse, Charles C. Thomas, Springfield, 1964; William J. Adelman, Jr., in Structure and Function in Excitable Cells, edited by William J. Adelman, Jr., Van Nostrand Reinhold, New York, 1971, 274-319; Torsten Teorell, in Structure and Function in Excitable Cells, edited by Donald C. Chang, Ichiji Tasaki, William J. Adelman, Jr. and H. Richard Leuchtag, Plenum, New York, 1983, 321-334.
2. Thomas S. Kuhn, The Structure of Scientific Revolutions, University of Chicago, Chicago, 1962.
3. Robert O. Becker and Gary Selden, The Body Electric: Electromagnetism and the Foundation of Life, William Morrow, New York, 1985.
4. Cole, 1.
5. Mary A. B. Brazier, A History of Neurophysiology in the 19th Century, Raven, New York, 1988.
6. Ichiji Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, 1982, 22ff.
7. Tasaki, 87.
8. Tasaki, 65-69.
9. Alwyn Scott, Nonlinear Science: Emergence and Dynamics of Coherent Structures, Second Edition, Oxford University, 2003, 1-6.
10. Becker and Selden, 25-76.
11. Becker and Selden, 118-149.
12. Maurice V. Cattaneo, in Electrical and Optical Polymer Systems, edited by Donald L. Wise, Gary E. Wnek, Debra J. Trantolo, Thomas M. Cooper and Joseph D. Gresser, Marcel Dekker, Inc., New York, 1998, 1213-1222.
13. Becker and Selden, 271-329.
14. H. M. Fishman, K. P. Tewari and P. G. Stein, Biochim. Biophys. Acta 1023:421-435, 1990; H. M. Fishman and G. D. Bittner, NIPS 18:115-118, 2003.
15. Cole, 54.
16. W. J. Moore, Physical Chemistry, Third Edition, Prentice-Hall, Englewood Cliffs, N.J., 1962, 760.
17. Kenneth S. Cole and Howard J. Curtis, J. Gen. Physiol. 22:649-670, 1939.
18. Glenn H. Brown and Jerome Wolken, Liquid Crystals and Biological Structures, Academic, New York, 1979.
19. Joseph Needham, Biochemistry and Morphogenesis, Cambridge, 1942, 661.
20. D. E. Goldman, J. Gen. Physiol. 27:37-60, 1943.
21. A. L. Hodgkin and B. Katz, J. Physiol. 108:37-77, 1949. By permission of Wiley-Blackwell Publishing. Reprinted from Constance Hammond, Cellular and Molecular Neurobiology, Academic, San Diego, 1996, 119, with permission from Elsevier.
22. A. L. Hodgkin and A. F. Huxley, J. Physiol. (Lond.) 116:449-472; 116:473-496; 116:497-506; 117:500-544, 1952.
23. M. F. Schneider and W. K. Chandler, Nature (Lond.) 242:244-246, 1973; C. M. Armstrong and F. Bezanilla, Nature (Lond.) 242:459-461, 1973; J. Gen. Physiol. 63:533-552, 1974; R. D. Keynes and E. Rojas, J. Physiol. (Lond.) 239:393-434, 1974.
24. H. E. Derksen and A. A. Verveen, Science 151:1388-1389, 1966.
25. H. M. Fishman, Proc. Natl. Acad. Sci. USA 70:876-879, 1973; H. M. Fishman, J. Membrane Biol. 24:265-277, 1975.
26. O. P. Hamill, A. Marty, E. Neher, B. Sakmann and F. J. Sigworth, Pflügers Arch. 391:85-100, 1981.
27. Raimundo Villegas, Gloria M. Villegas, Zadila Suárez-Mata and Francisco Rodriguez, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman Jr. and H. R. Leuchtag, Plenum, New York, 1983, 453-469.
28. M. Noda, S. Shimizu, T. Tanabe, T. Takai, T. Kayano, T. Ikeda, H. Takahashi, H. Nakayama, Y. Kanaoka, N. Minamino, K. Kangawa, H. Matsuo, M. A. Raftery, T. Hirose, S. Inayama, H. Hayashida, T. Miyata and S. Numa, Nature 312:121-127, 1984.
29. I. Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, New York, 1982.
30. L. Y. Wei, Bull. Math. Biophys. 31:39-, 1969; Ann. N.Y. Acad. Sci. 227:285-, 1974.
31. H. Richard Leuchtag, thesis; H. R. Leuchtag and J. C. Swihart, Biophys. J. 17:27-46, 1977.
32. M. E. Lines and A. M. Glass, Principles and Applications of Ferroelectrics and Related Materials, Clarendon, Oxford, 1977, 1.
33. A. R. von Hippel, J. Phys. Soc. Japan 28 (suppl.):1-6, 1970.
34. H. R. Leuchtag, J. Theor. Biol. 127:321-340, 1987; 127:341-359, 1987; H. R. Leuchtag and V. S. Bystrov, Ferroelectrics 220:157-204, 1999.
35. O. Helluin, M. Beyermann, H. R. Leuchtag and H. Duclohier, IEEE Trans. Dielect. Elect. Insul. 8(4):637-643, 2001.
CHAPTER 4
ELECTROPHYSIOLOGY OF THE AXON
Now that we have defined the problem of biological excitability, reviewed the informational structures of living organisms and perused the early history of animal electricity, it is time for us to get down to the specifics of electrophysiology, the study of the electrical behavior of cells, tissues and organs. We must follow this trail before we can descend to the molecular level.
1. EXCITABLE CELL PREPARATIONS
A squid can squirt ink and swim forward and backward by jet propulsion, expelling water backward or forward through its siphon. The anatomical studies of these marine cephalopods by L. W. Williams and J. Z. Young showed that they possess a pair of remarkable structures, which turned out to be axons almost a millimeter in diameter.1 A single axon of the squid occupies a cross section that in a rabbit leg nerve would contain hundreds of nerve fibers. With the discovery of an axon so large that it has come to be referred to as a "giant" axon came the ability to study the properties of a single axon in isolation. This experimental preparation has given us great insight into nerve conduction. Although squid come in many sizes, even up to "sea monsters" several meters long, a typical squid used for neurophysiological experiments at the Marine Biological Laboratory in Woods Hole, Massachusetts is about 30 cm long. Connecting the squid's cerebral ganglion to its mantle is a pair of giant axons, about 600–800 μm in diameter; see Figure 4.1.2
Figure 4.1. Squid, showing location of giant axons in the stellate nerves. Drawing by T. Inoué, from Guevara 2003.
These axons can be impaled with “piggyback” double electrodes (one for current and one for voltage). They are also suitable for internal perfusion experiments, in which the axoplasm is rolled or flushed out and replaced with artificial solutions. This gives the experimenter the ability to control ion concentrations both internal and external to the membrane. When the axoplasm is extruded and analyzed, we learn that its ion concentrations are quite different from those of the fluid bathing the axon; see Table 4.1. The membrane potential difference Vm (intracellular relative to extracellular) is determined by the Nernst potentials VI of ions I, shown in the Table for the concentrations listed and temperature 20ºC.
Table 4.1. Intracellular and extracellular (seawater) ion concentrations and Nernst potentials of squid giant axon at 20ºC

Ion     In (mM)   Out (mM)   VI (mV)
K+      400       10          -93.1
Na+     50        460          56.0
Ca2+    0.0001    10          145.3
Cl-     100       540         -42.6
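The Nernst potentials in Table 4.1 follow directly from VI = (RT/zF) ln([I]out/[I]in). A quick check in Python, using standard values of R and F and T = 293 K; small differences from the table reflect rounding:

```python
import math

R, F = 8.314, 96485.0   # J/(mol K), C/mol
T = 293.15              # 20 degrees C in kelvin

def nernst_mV(z, c_out, c_in):
    """Nernst equilibrium potential in mV for valence z; concentrations in mM."""
    return 1000.0 * R * T / (z * F) * math.log(c_out / c_in)

# (valence, in, out) taken from Table 4.1
ions = {"K+": (1, 400, 10), "Na+": (1, 50, 460),
        "Ca2+": (2, 0.0001, 10), "Cl-": (-1, 100, 540)}
for name, (z, c_in, c_out) in ions.items():
    print(f"{name:5s} {nernst_mV(z, c_out, c_in):7.1f} mV")
```

Note that the divalence of Ca2+ halves the factor RT/zF, while the negative valence of Cl- reverses the sign of the logarithmic term.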
1.1. A squid giant axon experiment
A squid is a thing of beauty, a slim foot-long cylindrical body swimming serenely backward and forward in the seawater tank. Its large eyes are on the sides of its head; from the front, a thick bundle of tentacles emerges, always in motion. Its body is an ever-changing display of colored spots, chromatophores, which enlarge and contract, revealing the squid's moods to its neighbors swimming alongside it. We select a large male, catching it carefully because it has strong suckers on its tentacles and a sharp beak in its mouth. In its struggle to avoid the net it releases a cloud of ink. On a light table in the lab, the squid is quickly decapitated with scissors. The skinned mantle, illuminated from below, shows two faint lines diverging diagonally backward: the hindmost giant axons. Tied with fine threads into plastic dishes of seawater and dissected out, one axon is refrigerated for later use and the other carefully cleaned under the dissecting microscope to remove its connective tissue layer. Its diameter measured and recorded, the axon is mounted in a plastic chamber filled with artificial seawater and placed in a Faraday cage—to shield it from extraneous electric fields—on a vibration-free table. The solutions have been prepared so as to avoid gradients of osmotic pressure. The temperature of the preparation is carefully controlled. Platinized platinum electrodes are inserted into the axon and the external solution. A stimulator provides the signal, and the voltage or current data being acquired are displayed on an oscilloscope and captured on a computer disk. In an experiment on a carefully dissected squid giant axon, the resting potential is about −55 to −60 mV. Rapid depolarization of the membrane by about 10 mV stimulates an impulse. The action potential, measured from the resting potential, is about 100 mV, making the voltage inside the axon relative to the external solution roughly +45 mV at the peak of the action potential; see Figure 1.1 of Chapter 1. This region of positivity, called the overshoot, showed that Bernstein's potassium model was inadequate, demanding a revision of the accepted concept that the action potential is simply the elimination of the potassium potential. It made it necessary to consider the effects of sodium, as well as potassium, current through the membrane.3

1.2. Node of Ranvier
As we saw in Chapter 3, the study of electrobiology began with the sciatic nerve of the frog. Vertebrate nerve is composed of many myelinated and unmyelinated fibers, enclosed in a sheath of connective tissue. In 1928, E. D. Adrian and D. W. Bronk recorded action potentials from individual nerve fibers by dissection of a rabbit nerve. Further experiments by Kaku (J. Kwak) in the laboratory of G. Kato developed the technique of recording the electrical activity of isolated nerve and muscle fibers. In motor nerve fibers of the toad, 10–15 μm in diameter, the axon is covered by a myelin sheath interrupted about every 2 mm by a node of Ranvier. The fiber thins to about 1 μm at the node, providing a narrow ring of contact between the axon and the external aqueous medium.
By moving a stimulating electrode along a myelinated fiber, Ichiji Tasaki showed that the threshold for stimulating an action potential was a minimum at each node and rose steeply in the internodal myelinated regions. While electric current flows ohmically from node to node, the myelin sheath acts as an insulator, with extremely low dc conductance. Although myelinated axons are much smaller in diameter than squid axons, their geometry can be used to advantage in electrophysiological experiments. Since the myelin sheath is an effective insulator, current and voltage measurements can be made by isolating the nodes of Ranvier. A single axon dissected from a frog sciatic nerve is placed on a specially constructed chamber with three solution pools. The pools are insulated with petroleum jelly or air gaps, and electrodes are inserted into them. When an anesthetic solution, such as cocaine–Ringer’s, was introduced into the middle pool, the threshold of the middle node became unmeasurably high. Even though the node was inexcitable, however, action potentials traveled through the fiber. In some cases, action potentials were able to cross two inexcitable nodes, but never three. The interpretation given by Alan Hodgkin was that the action current at a node was strong enough to excite the unanesthetized nodes beyond the anesthetized zone. When the current drops below the threshold level for the nearest excitable node, conduction is blocked. The current pathway therefore must include the external fluid.
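Hodgkin's interpretation can be caricatured with a passive attenuation model: if the local-circuit current falls by a constant factor per internode (the numbers below are assumed, illustrative values, not measurements from this text), an action current that starts well above threshold can still excite a node two internodes away but not three.

```python
def conduction_blocked(i0_ratio, attenuation, n_inexcitable):
    """True if the action current, attenuated over n_inexcitable anesthetized
    nodes, falls below threshold at the first excitable node beyond them.

    i0_ratio:    action current / threshold current at the source node
    attenuation: fraction of current surviving each internode (assumed)
    """
    return i0_ratio * attenuation ** (n_inexcitable + 1) < 1.0

# Illustrative numbers only: a safety factor of 5 and 60% survival per internode
for n in (1, 2, 3):
    print(n, "blocked" if conduction_blocked(5.0, 0.6, n) else "conducts")
```

With these assumed values the current crossing one or two inexcitable nodes still exceeds threshold, while three internodes of attenuation drop it below threshold, reproducing the qualitative observation in the text.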
By using glass pipette microelectrodes devised by Gilbert Ling and R. W. Gerard in 1949, W. L. Nastuk and Hodgkin in 1950 recorded action potentials from the interior of frog muscle fibers. This method is also used to record from nerve fibers, even in nerve trunks or the brain, but its application is limited by the injury done to the fiber by the penetration of the electrode. Studies of the node of Ranvier helped clarify the all-or-none law, according to which a small stimulus can produce a powerful response. When the stimulus is above threshold, the response is independent of stimulus strength—just as the power of a gunpowder explosion is independent of the size of the flame that touches it off. That the all-or-none law does not apply to the node when the duration of the stimulating current pulse is varied was shown by Ichiji Tasaki in 1956. With a new method of recording action potentials, Tasaki found that longer pulses produce smaller action potentials; see Figure 4.2.4 As the Figure shows, the shape of the nodal action potential differs from that of the squid axon: it is wider, with a shoulder at the end of the absolute refractory period, beyond which the voltage drops more steeply. Figure 4.2 also illustrates the bifurcation of the voltage traces into electrotonic and active responses, depending on slight differences in the stimuli. Studies of threshold stimulation by linearly rising voltage pulses in isolated myelinated fibers made it possible to investigate the process of accommodation. As du Bois–Reymond had noted, a slowly rising current fails to produce excitation even when its intensity rises well above the level that would excite the nerve if suddenly initiated or terminated. Action potentials are observed when the time rate of rise of the voltage exceeds a critical rate.

1.3. Molluscan neuron
A great deal of progress has been achieved by the study of the nervous systems of invertebrates, chiefly annelids, arthropods and molluscs.
Molluscan species in particular have the double advantage of a simple nervous system—as few as 10,000 neurons—and giant, easily impaled neurons. Neurons of snails and other molluscs showed that the inward current evoked by depolarization is not always carried only by Na+ but may, in addition, be carried by divalent ions. Action potentials with prominent Ca2+ components are found in effector processes such as secretion, contraction and bioluminescence. Susumu Hagiwara and his coworkers found a transient potassium current in a mollusc in 1961. This current, called the A current, IA, is distinctly different from the delayed potassium current of the squid axon. Found in arthropods and vertebrates as well as molluscs, IA appears in encoder neurons, which transduce stimulus voltage into repetitive spikes.5

2. TECHNIQUES AND MEASUREMENTS
The foundation of all research in biophysics is experiment. Theoretical models and hypotheses suggest experiments, but only the experimental results, properly interpreted,
can validate or reject them. Repeated experiments are necessary for reliability of results.

Figure 4.2. The action potential evoked in a node of Ranvier by a pulse of current decreases in amplitude (upper right of panels) as the duration of the pulses (lower right) decreases. The method is shown at the top. Excitable node N1 is located between inexcitable nodes N0 and N2, used as stimulating (S) and recording (V) electrodes. Note that the action potential has a shoulder, unlike that of a squid axon. From Tasaki, 1982.

2.1. Space clamp
The axon, with instabilities that complicated interpretation of the data, was tamed by techniques devised by George Marmont and Kenneth Cole. To analyze the behavior of
the membrane, the traveling action potential had to be stopped. This was accomplished by the space clamp: an axial wire pushed into the axon to carry current, together with an external electrode to measure the voltage. Guard electrodes, maintained on both sides of the central measuring electrode at equal potential to it by electronic feedback, disposed of the troublesome time-varying currents at the electrode boundaries. The space clamp simplified the problem of measuring the action potential by holding the impulse fixed in space.

2.2. Current clamp
The next step was to control the current through the membrane. This was accomplished with a feedback circuit known as a current clamp. Small current stimuli, inward or outward, produced linear, electrotonic, responses in the voltage, while outward currents displayed threshold behavior. A short-duration pulse of outward current above threshold led to a large positive voltage excursion with no external current passing through the membrane, essentially a stationary action potential; see Figure 1.2 of Chapter 1. But while propagation had been removed from the measurement, the membrane voltage still refused to stand still for a measurement.6

2.3. Voltage clamp
A technique that came to be of great importance in electrophysiology was devised to control the potential. Cole developed the voltage clamp, a feedback circuit that permitted the experimenter to vary the membrane voltage at will while monitoring the current.7 It was hoped that this would yield an experimental preparation without the threshold behavior and instability associated with the current clamp. The voltage clamp was intended to answer the question, "What current will produce a given change of potential?" To accomplish this, the membrane voltage was connected to one input of an operational amplifier.
A command voltage step was connected to the other input, and the difference between it and the actual membrane voltage, the so-called error voltage, was brought to zero by negative-feedback adjustment of the applied membrane current. The new techniques revolutionized neurophysiology and allowed the excitable membrane to be analyzed on the basis of an equivalent electrical circuit. In spite of these improvements, spatial and temporal instabilities sometimes reassert themselves in the form of what has been called the "abominable notch"8 and oscillation. Although improvements in amplifier bandwidth and electrode design have helped, the only remedy for the notch found to be reliable is to cut the external sodium concentration in half.

2.4. Internal perfusion
The axoplasm can be removed by applying mechanical pressure with a roller, and replaced with an artificial solution. An alternative method of intracellular perfusion is
the double cannulation technique. A narrow inlet cannula and a wider outlet cannula are introduced into a length of squid axon from opposite directions. The inlet cannula is inserted into the outlet cannula. The perfusion solution washes out the axoplasm and flows continuously during the experiment. If the pH, osmolarity and electrolyte composition of the internal and external solutions are properly chosen, the axon can maintain its ability to conduct action potentials for 10 hours or longer.9 The technique of internal perfusion makes it possible to study the axon under complete control of both the intracellular and the extracellular ion environment. It has been applied to muscle and other cells as well as to squid and other giant axons. Substitution of anions has relatively small effects, while there is a high sensitivity to the substitution of cations.

3. RESPONSES TO VOLTAGE STEPS
Measurements in which the response of the excitable membrane is observed to unfold in time (as opposed to frequency) are called time-domain measurements. The frequency-dependent impedance of the membrane can also help us distinguish between competing theories of channel function. This work will be discussed in Chapter 10.

3.1. The current–voltage curves
The response of the potassium system to voltage variations is shown by a steady-state current–voltage curve. The squid axon exhibits outward (delayed) rectification: depolarizing voltages produce steadily increasing K+ currents, while voltage changes in the hyperpolarizing direction increase the current only slightly. Voltage-clamp current responses yield an I–V curve, also designated the I(V) curve to emphasize that V is the independent, and I the dependent, variable. For the early current, this curve contains a region of "negative resistance," in which the slope of the I–V curve becomes negative. As the voltage is increased from the resting potential to about 60 mV above it, the inward current increases.
This is due to the increasing permeability of the axonal membrane to sodium ions.
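The negative-resistance region can be reproduced with a toy steady-state sodium current: a Boltzmann activation curve multiplying an ohmic driving force. The activation parameters below are illustrative choices, not fitted values from this text; only the sodium Nernst potential comes from Table 4.1.

```python
import math

E_NA = 56.0                   # sodium Nernst potential, mV (Table 4.1)
G_NA = 1.0                    # maximal conductance, arbitrary units
V_HALF, SLOPE = -30.0, 7.0    # illustrative activation parameters, mV

def m_inf(v):
    """Steady-state activation (Boltzmann curve, 0 to 1)."""
    return 1.0 / (1.0 + math.exp(-(v - V_HALF) / SLOPE))

def i_na(v):
    """Quasi-steady-state sodium current (negative = inward)."""
    return G_NA * m_inf(v) * (v - E_NA)

# Between rest and about -20 mV, depolarization recruits conductance faster
# than the driving force shrinks: inward current grows, so dI/dV < 0 there.
print(i_na(-60.0), i_na(-40.0), i_na(-20.0), i_na(0.0))
```

Past the region of negative slope the driving force term dominates, the curve turns upward again, and the current crosses zero at the sodium Nernst potential.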
3.2. Step clamps and ramp clamps
Although the voltage-clamp method allows the experimenter to impose any voltage function on the axon, a conventional technique soon developed: the voltage is initially maintained at a holding potential close to the resting potential. From there it is stepped discontinuously to one of a series of steady potentials, both hyperpolarizing and depolarizing. Each step is maintained long enough to bring the axon to a new steady state, and then the axon is returned to the holding potential. The constant voltage during a step eliminates voltage as a variable at each clamp level. The currents induced by the return step are called tail currents.
Variations on the simple step clamp include brief prepulses, which change the subsequent membrane response, and other programmed steps.
Figure 4.3. Squid axon response to a slow (0.5 V/s) and fast (50 V/s) rising ramp clamp. (A) The succession of curves shows the effect of adding 50 mM tetraethyl ammonium to the internal perfusion solution of potassium fluoride. (B) The upward succession of curves shows the effect of replacing the external seawater solution with seawater plus 100 nM tetrodotoxin. From Fishman, 1970.
Another useful voltage function is the ramp clamp, a voltage rising or falling linearly with time. Harvey Fishman pointed out that continuous current–voltage characteristics of both the early and late currents could be obtained directly by means of a ramp clamp.10 Figure 4.3 shows that this method can be used to follow the time course of the slow and fast currents without pharmacological separation by the specific inhibitors of IK, tetraethylammonium (TEA), and of INa, tetrodotoxin (TTX).

3.3. Repetitive firing
In many physiological activities, neurons fire at regularly repeated intervals, like a dripping faucet. Repetitive firing in excitable cells occurs in such rhythmic activities as the heartbeat, respiration and circadian rhythms, but is also associated with skeletal movement, peristalsis and sensory reception. These effects in pacemaker cells are endogenous to the cell, arising from membrane feedback mechanisms. Beating pacemaker neurons fire single spikes at regular intervals, whereas bursting pacemaker neurons fire bursts of spikes in a regular pattern during the depolarized phase of their membrane potential. The ionic currents responsible for repetitive neuronal activity, first studied in molluscan neurons, have also been examined in crab axons and in mammalian neurons such as the cerebellar Purkinje cells of guinea pigs.11
Figure 4.4. Data from a bursting pacemaker neuron, R15, in the mollusc Aplysia californica. In voltage-clamp data (A) voltage steps from holding potential Vh are shown as light lines and current traces as heavy lines. Voltage calibration is 45 mV; current calibrations are 500, 200, 100 and 50 nA for graphs 1, 2, 3 and 4-5 respectively; time calibration is 2.5 s. The quasi-steady-state I–V curve (B) shows a region of negative slope. The inset shows bursting pacemaker potential oscillations in the unclamped cell, with calibrations 50 mV, 26 s. From T. G. Smith, Jr., 1975, 1980. Reprinted by permission from Macmillan Publishers Ltd: Nature 253:450-452, copyright 1975.
The current–voltage relationship of pacemaker cells under voltage clamp shows a region of negative slope, in which depolarization leads to an increase in inward current. This current, which tends to depolarize the cell, is regenerative between about -50 mV and -35 mV; see Figure 4.4.12 A normally silent muscle cell or axon fires spontaneously if the external calcium concentration is lowered. The different behavior of pacemaker and nonpacemaker neurons in Aplysia has been shown to be related to differences in the accumulation of potassium ions at the external membrane surface.13 The ability of an excitable membrane to generate repeated pulses makes it comparable to an electrical oscillator. A depolarizing current that excites the membrane can produce repetitive firing until the process of accommodation occurs; in toad motor fibers the time constant of accommodation varies from 10 to 300 ms.14
The repeat intervals are determined primarily by the intensity of the stimulus and the refractory period of the cell. Repetitive firing has been modeled as a relaxation oscillator, which is quite different from a resonant circuit. One simple model of a relaxation oscillator is a block sliding on a rough surface, dragged by a soft spring whose other end moves at constant speed. The phase of the rhythm may be reset by a brief electric pulse above threshold.15 In the neuronal pacemaker cycle of a molluscan neuron, the K+ flux increases with increasing Ca2+ concentration.16
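The relaxation-oscillator character of repetitive firing (slow charging, rapid discharge, interval set by stimulus strength and refractoriness) can be sketched with a leaky integrate-and-fire cell. All parameters below are illustrative, not measurements from this text:

```python
def spike_count(i_stim, t_total=1000.0, dt=0.1):
    """Count spikes of a leaky integrate-and-fire cell over t_total ms.

    dV/dt = (-(V - V_rest) + R*I) / tau; on reaching threshold the voltage
    resets and an absolute refractory period follows. Illustrative parameters.
    """
    v_rest, v_th, v_reset = -60.0, -50.0, -60.0   # mV
    tau, r_m, t_ref = 10.0, 10.0, 5.0             # ms, M-ohm, ms
    v, refractory, spikes = v_rest, 0.0, 0
    t = 0.0
    while t < t_total:
        if refractory > 0.0:
            refractory -= dt                      # silent during refractoriness
        else:
            v += dt * (-(v - v_rest) + r_m * i_stim) / tau
            if v >= v_th:
                spikes += 1
                v, refractory = v_reset, t_ref    # fast discharge and reset
        t += dt
    return spikes

# A subthreshold current gives no spikes; stronger depolarizing currents
# shorten the interspike interval and raise the firing rate.
print(spike_count(0.5), spike_count(1.5), spike_count(3.0))
```

The model also shows why the interval cannot shrink below the refractory period: however strong the stimulus, the reset and the enforced silent time set a ceiling on the firing rate.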
Figure 4.5. The electric currents of a nerve impulse, traveling from left to right, form a pair of toroidal patterns riding on the axon. The schematic cross section shows sodium ions separating from the inner and outer membrane surfaces and permeating open sodium channels. The potassium currents are not shown. From C. Hammond, 1996.
3.4. The geometry of the nerve impulse
The description of an axon or muscle fiber as a cylindrical tube is in many cases inadequate. In muscle fibers, the membrane is folded inward to form T tubules, in which the excitation wave couples to the contraction mechanism. The membrane of the ribbon-shaped giant axon of the mollusc Aplysia is also riddled with folds. By increasing the surface area, such infoldings slow the electrical response of the fiber to a stimulating pulse. Nevertheless, the picture of an axon as a circular cylinder is often a useful approximation. If we take a snapshot of the electric currents of the nerve impulse along a cylindrical axon, the sodium currents of the action potential can be described as a pair of toroids riding on the axon. At the leading edge, sodium ions are rushing back into
the excited patch of the membrane externally and forward internally. Behind the region of open channels we see sodium currents flowing forward externally and backward internally; see Figure 4.5.17 Another pair of toroids, delayed and with directions reversed, would represent the potassium currents.
4. VARYING THE ION CONCENTRATIONS
Under a given set of conditions, some species of ions permeate the membrane while others are excluded. Understanding this property of selectivity is, along with that of gating, a central problem of membrane excitability. The effects of varying ion concentrations on voltages can be explained quantitatively by an equation derived from electrodiffusion theory under the assumption of a uniform electric field across the membrane. This equation, the Goldman–Hodgkin–Katz equation, derived in Chapter 8, allows permeabilities to be calculated for Na+, K+ and Cl-. Let us review some of the experimental findings for the various current components of the action potential.

4.1. The early current
The action potential in squid axon is completely abolished by the removal of external sodium ions. When Hodgkin and Katz changed the external solution from seawater to a dextrose solution free of Na+, the action potential flattened out within two minutes, returning slowly after the reapplication of seawater; see Figure 3.2 of Chapter 3.18 Note that the loss of Na+ also lengthens the time for the voltage to rise to its peak. In 1949, when Hodgkin and Katz made these measurements, the effect of varying the internal Na+ concentration could not be determined, but they assumed that the action potential overshoot is determined solely by the Na+ concentration ratio and given by the Nernst equation for sodium ions, VNa = (RT/zF) ln ([Na]o/[Na]i). Since the internal sodium-ion concentration was known to be approximately one-tenth that of seawater, the overshoot expected from the Nernst relation is about 58 mV, roughly consistent with the overshoot observed in seawater.
Later experiments with internally perfused axons showed that, when part of the internal potassium was replaced with sodium, the action potential overshoot, while decreasing as [Na]i was increased, remained well above the values predicted by the Nernst relation.19 When lithium ions are substituted for the sodium ions, the currents remain almost identical for periods of up to an hour. Tasaki and collaborators also found that sodium ions could be replaced by a number of nitrogenous univalent cations in intracellularly perfused squid axons without loss of excitability. These include the hydrazinium ion, H2N-NH3+, hydroxylamine, guanidine and aminoguanidine. Experiments with these ions require increased external divalent ion concentrations to prevent membrane depolarization.20
74
CHAPTER 4
4.2. The delayed current Changes in the potassium concentration of the external solution affect the resting potential primarily. In a K+-free solution the resting potential is more negative than in seawater, while the peak of the action potential remains the same, thus increasing the height of the action potential from its base. If the K+ concentration is doubled, from 10 to 20 mM, the action potential shrinks both at the bottom and at the top. Increases in external K+ render axons inexcitable long before they are completely depolarized. The different selectivities of the early and delayed currents pointed to two separate mechanisms. This was later confirmed with the discovery of different ion channels. The ammonium ion is unusual in that it permeates both the early and delayed channels. The substitution of Rb or Cs ions for K greatly prolongs the action potential. This suggests that the mobilities of these ions in the K channel are lower than that of K+ and therefore, according to a macromolecular interpretation, that they are more strongly bound by negatively charged sites within the channel.21 4.3. Divalent ions A number of different types of calcium currents have been found in vertebrate as well as invertebrate cells, indicating the presence of different calcium channels. These channels are also permeable to other divalent ions, such as Ba2+ and Sr2+. The study of calcium channels is complicated by the low intracellular Ca2+ concentration, 10-7 M or less.22 An unusual property of calcium channels is their sensitivity to changes in the internal Ca2+ concentration. Increases in [Ca2+]in reduce the calcium conductance, decreasing the amplitude of subsequent calcium action potentials.23 A further complication is a potassium current activated by intracellular injection of calcium.24 Because of the high sensitivity of many intracellular enzymatic reactions to [Ca2+]in, a rise in it may result in cell death.
Thus it is not surprising that cells have a high buffering capacity at physiological levels of [Ca2+]in, binding 99.95% of imposed Ca loads. Most of this buffering capacity appears to reside on the mitochondria and endoplasmic reticulum.25 4.4. Hydrogen ions Since metabolic processes in the cell produce acid, cells must have a mechanism for regulating pH. The mechanism regulating the internal H+ concentration is similar to, but more complicated than, the sodium–potassium pump system.26 The early current in frog node is strongly dependent on hydrogen-ion concentration, but in a manner that is contrary to expectations. Since the membrane permeability calculated from reversal potentials for H+ is much greater than that for Na+, one might expect from the Independence Principle (see Chapter 9) an increase in external [H+] (lower pH) to increase the Na+ conductance, gNa. Actually, as Figure 4.6 shows, both gNa and gK are greatly reduced at low pH. In a model that approximately
ELECTROPHYSIOLOGY OF THE AXON
agrees with the data, Na channels are “blocked” by protonation of a single acid group with a pKa of 5.2 and K channels by a group with a pKa of 4.4.27
Figure 4.6. Titration of sodium (filled circles) and potassium (open circles) conductances of frog node of Ranvier as the pH of the external solution is varied. From Hille, 2001. Reproduced from the Journal of General Physiology 1968, 51:221-226 and 1973, 61:669-686. Copyright 1968, 1973 The Rockefeller University Press.
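The single-site titration model just described can be made concrete. In this sketch the pKa values (5.2 for Na channels, 4.4 for K channels) come from the fits quoted in the text; the logistic form is the standard Henderson–Hasselbalch relation for a single protonatable acid group, with the unprotonated fraction taken as the conducting fraction.

```python
def fraction_conducting(pH, pKa):
    """Fraction of channels whose acid group is unprotonated
    (single-site titration, Henderson-Hasselbalch form)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# pKa values for frog node quoted in the text:
for pH in (7.0, 6.0, 5.2, 4.4, 3.0):
    g_na = fraction_conducting(pH, pKa=5.2)  # Na channels
    g_k = fraction_conducting(pH, pKa=4.4)   # K channels
    print(f"pH {pH}: gNa fraction {g_na:.2f}, gK fraction {g_k:.2f}")
```

At pH equal to the pKa, exactly half the channels are protonated, reproducing the midpoints of the titration curves in Figure 4.6.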
Contrary to the permeability calculated from the reversal potential and the Goldman–Hodgkin–Katz equation, the measured permeability of sodium channels to protons is minute, consistent with the slow dissociation of the acid groups with which the protons associate as they move through the channel. Possible roles for protons in the conduction mechanism have been proposed; see Chapters 14 and 20. 4.5. Varying the ionic environments To reduce the complexity of the axon system, the question arose, How simple can the electrolyte solutions be made without losing the axon’s ability to develop action potentials? The answer is that the salt of a single divalent cation outside and a single monovalent cation inside is sufficient. Calcium, strontium or barium are favorable external cations; the internal cation can be any alkali metal, tetramethylammonium, tetraethylammonium, choline or hydrazine, among others. The complete replacement of both intracellular potassium and extracellular sodium ions does not suppress axonal excitability. Magnesium cannot be used as the external cation without some calcium. The explanation of this is that the tendency of magnesium toward hydration causes the membrane colloid to swell. The difference in threshold concentration among ions of alkali metals is similarly attributed to their ability to loosen the compact structure of the membrane macromolecules (the voltage-sensitive ion channels). These cations form the following sequence according to their “depolarizing power”: K Rb > Cs > NH4 > Na > Li
The similarity in the stereochemical properties of K+ and Ca2+ may account for the great depolarizing power of K+. Both have a coordination number of 8, while Na+, with a coordination number of 6, would be less effective in displacing the Ca2+ in the membrane protein. The “bi-ionic” action potentials, with only two cation species, are characterized by long duration, abrupt termination and high resistance. A molecular interpretation is that, in the resting state, the membrane macromolecules are cross-linked by calcium ions; at a depolarization, some of these calcium bridges are broken and the membrane swells, reducing its resistance.28 5. MOLECULAR TOOLS Neurotoxins are useful tools for identifying and isolating the Na+ and other ion channels. During the course of evolution, certain organisms have developed these molecules as specialized weapons for defense. 5.1. The trouble with fugu One neurotoxin is tetrodotoxin (TTX), which is found in the liver and gonads of the pufferfish. The pufferfish is eaten as a delicacy in Japan, where it is called fugu. Although the fish are prepared in restaurants by specially trained cooks, fatal accidents occasionally happen. Even at micromolar concentrations, TTX effectively destroys the ability of sodium channels to conduct ions. Its action is reversible; when the TTX is washed off, the channels conduct normally again. A marine microorganism produces a similar toxin, saxitoxin, and a Central American tree frog produces yet another, chiriquitoxin. These toxins may have a common evolutionary origin—they may be synthesized in symbiotic bacteria within these hosts.29 TTX, STX and CTX inhibit the sodium current only when applied to the outside of the axon. Their structures are similar in that they all contain a guanidinium group, based on H2N+=C(NH2)2, a highly resonant, planar, positive ion. 
Because TTX and its congeners bind 1:1 to the voltage-sensitive sodium channel, they are used to measure the density of channels in various membranes. For example, the nodal regions of a myelinated rat axon have 700 Na channels per μm2 and the (unmyelinated) squid axon has 330 Na channels per μm2.30 These neurotoxins have been called “channel blockers” because in the approach to channels that considers them to be water-filled pores, the toxins are interpreted simply as plugs. In this approach the TTX molecule is thought of as binding to the channel and mechanically blocking it. Experiments show, however, that the action of these toxins depends critically on their molecular structure.31 The guanidinium group apparently has a special role, probably involving its positive charge. An alternative explanation is presented in Chapter 16. Figure 4.7 depicts the chemical structures of TTX and STX, as well as those of the local anesthetic procaine and the tetraethylammonium ion, a quaternary ammonium ion; the latter two ions inhibit ion conduction in potassium channels.32
Figure 4.7. Molecular structures of tetrodotoxin, saxitoxin, procaine and tetraethyl ammonium ion. From Hille, 2001.
5.2. Lipid-soluble alkaloids There are other neurotoxins with various effects on the Na channel. The lipid-soluble alkaloids aconitine and veratridine, found in species of the buttercup and lily families respectively, are sodium-channel poisons. Another, batrachotoxin, from Colombian arrow-poison frogs, eliminates the inactivation response from the membrane. Other classes of lipid-soluble activators of Na channels are grayanotoxins and pyrethroids such as allethrin; see Figure 4.8.33 Because these toxins facilitate the opening and delay the closing of Na channels, they are called Na-channel agonists.34 5.3. Quaternary ammonium ions No specific neurotoxin has been discovered for the potassium channel. However, certain ions impede conduction in the K+ channel fairly effectively when they replace the potassium inside the axon. They include quaternary ammonium ions such as tetraethyl ammonium, TEA+, and tetramethyl ammonium, TMA+. These ions compete for ion sites within the permeation pathway, forming bonds that are more stable than those formed by the permeant K+ ions.
Figure 4.8. The alkaloid toxins batrachotoxin, veratridine, aconitine and grayanotoxin that cause Na channels to remain open. From Conley and Brammar, 1999.
The TEA+ ion prolongs the falling phase of action potentials by interfering with the potassium current. When enough sites are filled with TEA+, the K+ permeation pathway no longer exists.35 Other agents that interfere with the potassium current are Cs+, Ba2+, 4-aminopyridine and other organic cations with quaternary nitrogen atoms. The aqueous pore model describes the action of these ions as “blocking” or “plugging” the channel.36 These are macroscopic, not molecular, concepts. These substances give us the means of suppressing one or both channels at will. When both ion conductances are suppressed, there remains a tiny displacement current, which because of its transient asymmetrical nature is assumed to be related to the dipolar shifts associated with the opening of the channel, and so is called the gating current; see Section 2.1 of Chapter 9. 5.4. Peptide toxins The venoms of scorpions and sea anemones contain mixtures of polypeptide toxins. A painful sting can lead to paralysis, cardiac arrhythmia and even death. When purified and sequenced, the toxins are found to be single polypeptide chains held in compact structures by internal disulfide bonds. The scorpion peptides are 60-76 amino acids long, and the coelenterate peptides, 27-51. One class of these toxins, the α-NaTx toxins, slows the inactivation of Na channels, leading to action potentials of much longer duration. Another class, the β-NaTx toxins, shifts the voltage dependence of activation. By causing the channels to remain open at the normal resting potential, they produce long, repetitive trains of action potentials. A third class, members of the α-KTx family, block potassium channels.37
6. THERMAL PROPERTIES Modeling the action potential in terms of electrical circuits has its limitations. To gain more information about macromolecular changes occurring in axon membranes, electrical studies must be supplemented by experiments on nonelectrical signs of nervous excitation. These include thermal, optical and mechanical effects. Interest in the effect of temperature on bioelectric potentials goes back to Bernstein, who in 1902 assumed that ion permeability varied with temperature.38 Many investigators since Bernstein had studied the effect of temperature on electrical activity, but while they agreed that the resting potential has a low temperature coefficient, the data on spike amplitude were in conflict. Conventional studies of the temperature dependence of axon parameters were limited to two temperatures. This suffices to calculate the quantity Q10, defined as the ratio by which a dependent variable increases when the temperature is raised by 10°C. However, in a number of studies the temperature is treated as a continuous variable. 6.1. The effect of temperature on electrical activity To resolve the question of temperature dependence, Hodgkin and Katz examined the effect of temperature on the giant axon of the squid.39 Changing the temperature of the seawater in which the axon was bathed, they found that their measurements were reversible as long as the temperature did not exceed 35°C, above which “the axons tended to fail progressively.” Propagation in squid axon is abolished at the heat-block temperature of about 38°C and may be restored when the fiber is cooled. The lowest temperature they achieved without risk of damaging the fiber was 1°C. In their experiments, the resting potential diminished only slightly while the spike amplitude dropped with increasing rapidity as the temperature was raised from 3 to almost 40°C. The positive phase of the action potential increases to a maximum at 20–25°C and decreases again.
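The Q10 defined above can be computed from any two temperatures, not just a 10-degree pair; the exponent rescales the measured ratio to a 10 °C interval. A minimal sketch (the rate values are hypothetical illustrations, not data from the text):

```python
def q10(rate1, rate2, t1, t2):
    """Temperature coefficient Q10: the factor by which a rate
    increases per 10 degree C rise, computed from measurements
    of the rate at two temperatures t1 and t2 (degrees C)."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# Hypothetical example: a process that triples between
# 6 and 16 degrees C has Q10 = 3 by construction.
print(q10(1.0, 3.0, 6.0, 16.0))
```

Measuring at only two temperatures, as in the conventional studies mentioned, yields a single Q10; treating temperature as a continuous variable reveals whether Q10 itself varies with temperature.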
The variation of action potential shapes shows a narrowing of the spikes with increasing temperature. The rate of fall of the action potential has a larger temperature coefficient than the rate of rise, although the Q10s of the conductances are similar. A. Krogh suggested that the transfer of sodium through cell membranes involves a transient reaction with membrane molecules, not a simple diffusion through pores.40 Thus, as Hodgkin and Katz have pointed out, the temperature coefficient for sodium ions may be larger than that for potassium and chloride. Spyropoulos found a cold-block temperature, the lower limit of excitability at which voltage spikes are abolished, at about 1°C in the squid Loligo vulgaris.41 However, cold block is species-dependent. In the squid Dorytheutis bleekeri, F. Kukita and S. Yamagishi found reversible cold block to occur below -20°C.42
6.2. The effect of temperature on conduction speed The effect of temperature on the conduction velocity of axons in a nerve was studied in 1967 by R. A. Chapman. Figure 4.9 shows that conduction speed in Loligo vulgaris varies continuously, rising from cold block to a maximum at about 32 °C and decreasing sharply at the approach of heat block.43
Figure 4.9. Effect of temperature on conduction speed. The solid and dashed line is a model fit to the data. From R. A. Chapman, 1967. Reprinted by permission from MacMillan Publishers Ltd: Nature 213:1143-1144 copyright 1967.
6.3. Excitation threshold, temperature and accommodation One set of temperature measurements was of the threshold for excitation. Study of the threshold showed that the product of the stimulating current I and the duration of the pulse, t, is reasonably constant. This is the total charge Q0 = I t. For squid axon, Q0 was found by Rita Guttman to be 1.4 × 10-8 C/cm2, nearly independent of the temperature.44 Experiments with slowly rising depolarizations of squid axons in external media with lowered divalent ion concentrations show a relationship between accommodation and repetitive firing.45
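The constant-charge relation Q0 = I t implies that the threshold current varies inversely with pulse duration, at least for brief pulses where the relation holds. A sketch using Guttman's value from the text (the 100-microsecond pulse duration is an illustrative choice):

```python
Q0 = 1.4e-8  # threshold charge density, C/cm^2 (Guttman, squid axon)

def threshold_current(pulse_duration):
    """Threshold current density (A/cm^2) for a brief rectangular
    stimulus of the given duration (s), assuming Q0 = I * t."""
    return Q0 / pulse_duration

# A 100-microsecond pulse requires about 1.4e-4 A/cm^2:
print(threshold_current(100e-6))
```

For long pulses this simple reciprocal law breaks down, since accommodation and the membrane's leak pathways let charge escape during the stimulus.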
6.4. Stability and thermal hysteresis Internal perfusion of giant axons made possible the further exploration of membranes.
Figure 4.10. Cyclic temperature changes induce hysteresis in membrane potential. The top diagram shows the arrangement of inlet and outlet cannulas, ground (E) and recording (R) electrodes, and the thermocouple (TC). A cycle of temperature variation took about 3 minutes. Electrolyte compositions are given for loops A and B. From Y. Kobatake, 1975.
Under ionic conditions in which the cations of the external solution were exclusively divalent and the internal cations monovalent, stable action potentials were evoked. The gradual introduction of sodium ions into the external solution resulted in giant oscillations, culminating in jumps between the active and resting states.46 When the temperature of these axons was varied slowly, hysteresis loops were observed in the membrane potential; see Figure 4.10. These results suggest that a phase transition was taking place in the membrane system. The implications of this will be discussed in Chapters 14 to 21. 6.5. Temperature effects on current–voltage characteristics Voltage ramp clamp recordings of squid axon responses show the changes in current–voltage characteristics for slow and fast ramps as the temperature is varied from 5 to 25°C; see Figure 4.11. Slow-ramp speeds of 0.5, 1, 2 and 5 V/s and varying fast-ramp speeds, as labeled, were used. For comparison, step clamp data are also shown for the 10°C experiments.47
Figure 4.11. Effect of temperature and ramp rate (marked) on axonal I(V) curves. The voltage was ramped (left) from resting potential RP to 150 mV above RP, and (right) from a hyperpolarization of -30 mV to 120 mV; see insets. From Fishman, 1970.
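The ramp clamp underlying these records (and the capacitance determination in Section 6.7) rests on the capacitive relation I = C dV/dt: under a linear ramp, the capacitive current is a constant offset proportional to the ramp rate. A minimal sketch with illustrative numbers (not data from the figure):

```python
def capacitance(cap_current_uA, ramp_rate_V_per_s):
    """Membrane capacitance (uF) from the steady capacitive current
    offset (uA) observed under a linear voltage ramp: C = I / (dV/dt)."""
    return cap_current_uA / ramp_rate_V_per_s

# Illustrative: 1 uA/cm^2 of capacitive current on a 1 V/s ramp
# corresponds to the classic ~1 uF/cm^2 membrane capacitance.
print(capacitance(1.0, 1.0))
```

Comparing slow- and fast-ramp records thus separates the nearly rate-proportional capacitive component from the ionic currents, which depend on voltage and time rather than on ramp rate directly.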
6.6. Heat pulses modify ion currents A perturbed molecular system will adjust to its thermodynamic equilibrium in a process called relaxation. Studies of rate phenomena can often be described by a linear relaxation process characterized by one or more relaxation times. However, nonlinear and even irreversible processes can be investigated by relaxation methods, such as those induced by temperature jumps.48
Figure 4.12. Response of the Na+ current in frog node to a temperature jump induced by a laser-generated heat pulse. The perturbed current is superimposed on a control record. The temperature jump occurred 0.7 s after the depolarizing step of 25 mV. From Moore, 1975.
L. E. Moore and collaborators investigated ion-current changes in frog node induced by a laser-generated heat pulse applied to the nodal region.49 A sudden change in temperature defines a new state to which the ion conductances relax. The temperature jumps were estimated to be about 2-3 C°. The data were interpreted by linear relaxation theory applied to the Hodgkin–Huxley formalism, in which ion conductances depend on electric field, temperature, pressure and calcium concentration (see Chapter 9). Relaxation due to temperature changes may be expected to have the same time constants as relaxation due to voltage steps. Figure 4.12 shows the current relaxation induced by a temperature jump during the peak of the inward Na+ current.50 In steady-state measurements, the effect of heating was to increase the delayed currents. The inward transient currents increased at small depolarizations but decreased at large depolarizations. In the relaxation experiments, the sodium currents initially increased, then decreased, as Figure 4.12 shows. The responses suggest a process involving multiple relaxation times. Together with the results of other experiments, these findings are consistent with a membrane system dependent on voltage, temperature and Ca2+ concentration.
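The linear relaxation description used here can be written, for a single relaxation time, as an exponential approach to the new steady state; the data above suggest a sum of such terms. A sketch with illustrative values (the conductances and time constant are placeholders, not Moore's measurements):

```python
import math

def relax(t, g_start, g_end, tau):
    """Single-exponential relaxation of a conductance from g_start
    toward the new steady state g_end with time constant tau."""
    return g_end + (g_start - g_end) * math.exp(-t / tau)

# Illustrative values: after a temperature jump, the conductance
# covers ~63% of the gap to the new steady state in one tau.
tau = 0.5e-3  # assumed 0.5 ms
print(relax(0.0, 1.0, 2.0, tau))  # starts at the old value, 1.0
print(relax(tau, 1.0, 2.0, tau))  # ~1.63, one time constant later
```

A response that first rises and then falls, as in Figure 4.12, cannot be fit by one such term; it requires at least two relaxation times of opposite sign, consistent with the multi-process interpretation given in the text.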
6.7. Temperature and membrane capacitance Yoram Palti and William J. Adelman, Jr. used the ramp clamp as a method for determining the capacitance of the excitable membrane.51 From the ramp rate and the measured current they obtained data on the dependence of membrane capacitance on temperature in squid axon. Their results showed that membrane capacitance increases with temperature, rising steeply as the temperature is raised to near 40°C. An interpretation of this remarkable result in terms of a ferroelectric phase transition model is discussed in Chapter 16, Section 6.4. 6.8. Heat generation during an impulse Excitation in both invertebrate and vertebrate nerve exhibits a diphasic variation in temperature. Spyropoulos found that heat is produced during an action potential and absorbed after the membrane potential had returned to its resting value.52 Richard D. Keynes and collaborators showed that the excitation process of a nervous tissue involves an exothermic reaction.53 The observations that an action potential is accompanied by first a generation and then an absorption of heat54 suggest that membrane structures undergo a transition to a more highly ordered state during an action potential. Similar patterns occur in transitions of ferroelectric materials from a paraelectric to a ferroelectric state and the helix–random coil transitions of macromolecules. 7. OPTICAL PROPERTIES Since the conductance changes associated with an action potential must be associated with conformational changes in ion channels, it is useful to explore nonelectrical signs that may reflect these changes. Optical studies can provide independent information on these macromolecular transformations.55 Here we will discuss voltage-sensitive birefringence in excitable membranes and their sensitivity to ultraviolet light.
Hervé Duclohier has reviewed techniques that combine fluorescence measurements with simultaneous electrical measurements in cell membranes and reconstituted systems.56 Light scattering spectroscopy experiments will be discussed in Chapter 11, Section 4.7. 7.1. Membrane birefringence The refractive index n of a material is the ratio of the speed of light in vacuum to that in the material. In birefringence or double refraction, the value of n depends on the direction of polarization of the light. Experiments by L. B. Cohen and collaborators demonstrated changes in turbidity and birefringence in excitable membranes during an action potential.57 The birefringence responses are diphasic: a decrease in light intensity is followed by an increase. The detection of optical signals from vitally stained nerve fibers made it possible to record optical signals simultaneously with electrical signals.58
The layer of the axon directly under the axolemma consists of longitudinally oriented filamentous material, including neurofilaments and microtubules. Dye studies show that these filaments exhibit positive uniaxial birefringence, which undergoes rapid changes during the action potential. An interpretation of these results is that an initial depolymerization of filaments into monomers is followed by its reverse, polymerization.59 7.2. Ultraviolet effects Irradiation of frog node of Ranvier with ultraviolet light in the wavelength range of 240–310 nm raises the stimulation threshold and diminishes the action current. The fractional change of the action potential decreases linearly with the duration of irradiation. Irradiation of the internode was ineffective. The effect was ascribed to a specific interference with a UV-sensitive excitation process.60 Voltage clamp experiments showed that this process follows the exponential kinetics of a first-order reaction. Ultraviolet radiation produces an irreversible shift of the steady-state sodium inactivation curve toward more negative voltages. The spectral sensitivity peaks at about 280 nm. The effect of UV irradiation does not alter the ionic selectivity or sensitivity to tetrodotoxin, and does not depend on temperature. The steady-state potassium current is left almost unchanged. The interference of UV with sodium channels is an all-or-none process in which a single photon disables one channel, while the remaining channels function normally. The effect of UV on the channel, referred to as a “selective blocking,” is subject to modification by external ion concentration changes. The UV sensitivity of the sodium system is increased by an increase of the external Ca2+ or H+ concentration, while hyperpolarizing the membrane reduces the UV sensitivity.61 UV irradiation cumulatively and irreversibly reduced the asymmetrical displacement (gating) currents of the sodium system without affecting its time constants.
Increases in leakage and capacity currents became pronounced ~5 min after high doses of UV were applied.62 8. MECHANICAL PROPERTIES We have seen above that observations of light scattering have suggested a swelling of the axon during the passage of an action potential. Conversely, mechanical stimulation of an axon can lead to the production of action potentials. 8.1. Membrane swelling Tasaki and Iwasa recorded the mechanical response of the axon surface with a piezoelectric probe made of polyvinylidene fluoride (PVDF), a synthetic polymer. A narrow sheet of the piezofilm, maintained in tension by a nylon thread, contacted the axon by way of a few attached bristles (B). The pressure sensor was placed immediately above the internal wire that recorded the action potential; see Figure 4.13.63
The transient pressure rise at a squid axon produced an electric signal in the piezofilm that was averaged over 500-2000 trials to improve the signal-to-noise ratio. The pressure increase of about 0.1 Pa (1 dyn/cm2) was immediately followed by a decrease, representing a shrinkage of the axon.
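The trial averaging described above works because, for independent zero-mean noise, the noise amplitude in the average falls as the square root of the number of trials while the signal is unchanged. A sketch of the resulting SNR gain for the trial counts quoted in the text:

```python
import math

def snr_gain(n_trials):
    """SNR improvement factor from averaging n_trials sweeps,
    assuming independent, zero-mean noise in each sweep."""
    return math.sqrt(n_trials)

print(round(snr_gain(500), 1))   # 22.4, for 500 averaged trials
print(round(snr_gain(2000), 1))  # 44.7, for 2000 averaged trials
```

Averaging 500 to 2000 sweeps thus improves the signal-to-noise ratio by a factor of roughly 22 to 45, enough to resolve a sub-pascal pressure transient.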
Figure 4.13. Mechanical response of a squid giant axon during passage of an action potential. (Left) Schematic diagram of the experimental setup, showing the stimulating (S) and recording (R) electrodes. (Right) The top trace shows the rise and subsequent fall of the axonal pressure. The bottom trace shows the action potential (peak about 110 mV above resting potential). From Tasaki and Iwasa, 1983.
Membrane swelling was also observed by optical means, in which light reflected from gold particles on the axon surface was measured. The surface displacement was about 0.5 nm.64 8.2. Mechanoreception Cells such as arterial baroreceptors and the vestibular hair cells of the ear are sensitive to external pressure and vibration. These mechanoreceptor cells contain stress-activated (or stress-inactivated) channels, in which the probability of channel opening is a function of the applied mechanical stress. The resulting receptor potential is translated by the cell into action potentials firing at variable frequencies. The adaptation found in the molecular mechanisms of some mechanoreceptors maximizes their sensitivity over a broad stimulus domain.65
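A common way to model the stress dependence of open probability described above is a two-state Boltzmann relation. This sketch is illustrative only; the half-activation stress and slope are hypothetical placeholders, not measured values from any mechanoreceptor.

```python
import math

def p_open(stress, stress_half, slope):
    """Two-state Boltzmann model for a stretch-activated channel:
    open probability as a function of applied membrane stress."""
    return 1.0 / (1.0 + math.exp(-(stress - stress_half) / slope))

# Hypothetical parameters: half-activation at 5 (arbitrary stress
# units), slope factor 1.
print(p_open(5.0, 5.0, 1.0))            # 0.5 at the half-activation stress
print(round(p_open(9.0, 5.0, 1.0), 3))  # saturates toward 1 at high stress
```

Adaptation can be represented in such a model by letting the half-activation stress shift slowly toward the prevailing stimulus, recentering the steep part of the curve on the current operating point.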
NOTES AND REFERENCES 1. L.W. Williams, The Anatomy of the Common Squid, Loligo Pealii (Leseur), Leiden, 1909; John Z. Young, Cold Spring Harbor Symposia on Quantitative Biology 4: 1-6, 1936. 2. Michael R. Guevara, in Nonlinear Dynamics in Physiology and Medicine, edited by Anne Beuter, Leon Glass, Michael C. Mackey and Michèle S. Titcombe, 2003, 87-121. With kind permission of Springer Science and Business Media. 3. Kenneth S. Cole, Membranes, Ions, and Impulses, University of California, Berkeley, 1972, 145-148. 4. Reprinted from Ichiji Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic Press, New York, 1982, 37-64, by permission from Elsevier. 5. John A. Connor, in Molluscan Nerve Cells: From Biophysics to Behavior, edited by John Koester and John H. Byrne, Cold Spring Harbor Laboratory, 1980, 125-133. 6. G. Marmont, J. Cell Comp. Physiol. 34: 351-382, 1949. 7. K.S. Cole, Arch. Sci. Physiol. 3:253-258, 1949.
8. Cole, ref. 3, 325ff. 9. Irwin Singer and Ichiji Tasaki, in Biological Membranes: Physical Fact and Function, vol. 1, Academic Press, London, 1968, 347-410. 10. H. M. Fishman, Biophys. J. 10:799-817, 1970. 11. Thomas G. Smith, Jr., in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 135-143; Rodolfo Llinás, in Koester and Byrne, 145-155. 12. Thomas G. Smith, Jr., Jeffery L. Barker and Harold Gainer, Nature 253:450-452, 1975; Smith, op. cit., 136. 13. Douglas Junge, Nerve and Muscle Excitation, Sinauer Associates, Inc., Sunderland, MA, 1981, 115-132. 14. J. J. B. Jack, D. Noble and R. W. Tsien, Electric Current Flow in Excitable Cells, Clarendon Press, Oxford, 1983, 305-378. 15. Tasaki, 114-129. 16. Anthony L. F. Gorman, Anton Hermann and Martin V. Thomas, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 169-180. 17. Reprinted from Constance Hammond, Cellular and Molecular Neurobiology, Academic Press, San Diego, 1996, 150, with permission from Elsevier. 18. A. L. Hodgkin and B. Katz, J. Physiol. 108:37-77, 1949; Constance Hammond, Cellular and Molecular Neurobiology, Academic Press, San Diego, 1996, p. 119. 19. Tasaki, 140-143 and 216-218. 20. Tasaki, 211f. 21. Tasaki, 273. 22. S. Hagiwara and G. Yellen, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 33-40. 23. D. Tillotson, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 41-48. 24. H. Dieter Lux, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 105-114. 25. Floyd J. Brinley, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 73-80. 26. Roger C. Thomas, in Koester and Byrne, Molluscan Nerve Cells: From Biophysics to Behavior, Cold Spring Harbor Laboratory, 1980, 65-72. 27. Bertil Hille, Ion Channels of Excitable Membranes, Sinauer, Sunderland, Mass., 2001, 476f. 28. Tasaki, 232-255. 29. Hille, 63. 30. Hille, 396. 31. C. Y. Kao and S. E. Walker, J. Physiol. 323, 619, 1982. 32. Hille, 63. 33. Reprinted from Edward C. Conley and William J. Brammar, The Ion Channel FactsBook: Voltage Gated Channels, Academic, San Diego, 1999, 822, with permission from Elsevier. 34. Hille, 641 ff. 35. C. M. Armstrong, J. Gen. Physiol. 50:491-503, 1966; 54:553-575, 1969; 58:413-437, 1971. 36. Hille, 65, 503-537. 37. Hille, 635-645. 38. J. Bernstein, Pflüg. Arch. ges. Physiol. 92:521, 1902. 39. A. L. Hodgkin and B. Katz, J. Physiol. 109:240-249, 1949. 40. A. Krogh, Proc. Roy. Soc. B 133:140, 1946. 41. C. S. Spyropoulos, J. Gen. Physiol. 48:49-53, 1965. 42. F. Kukita and S. Yamagishi, Biophys. J. 35:243, 1981. 43. R. A. Chapman, Nature 213:1143-1144, 1967. 44. Rita Guttman, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman Jr., Van Nostrand Reinhold, New York, 1971, 320-336. 45. Eric Jacobsson and Rita Guttman, in The Biophysical Approach to Excitable Systems, edited by William J. Adelman, Jr. and David E. Goldman, Plenum, New York, 1981, 197-211. 46. Y. Kobatake, in Membranes, Dissipative Structures, and Evolution, edited by G. Nicolis and R. Lefever, John Wiley, New York, 1975, 319-340. 47. Harvey M. Fishman, Biophys. J. 10:799-817, 1970.
48. M. Eigen and L. de Maeyer, in Investigation of Rates and Mechanisms of Reactions, Part II, edited by S. L. Friess, E. S. Lewis and A. Weissberger, Interscience, New York, 1963, 895-1054. 49. L. E. Moore, J. P. Holt, Jr. and B. D. Lindley, Biophys. J. 12:157-174, 1972. 50. Reprinted from L. E. Moore, Biochim. Biophys. Acta 375:115-123, 1975, with permission from Elsevier. 51. Y. Palti and W. J. Adelman, Jr., J. Membr. Biol. 1:431-458, 1969. 52. C. S. Spyropoulos, J. Gen. Physiol. 48:49-53, 1965. 53. J. V. Howarth, R. D. Keynes and J. M. Ritchie, J. Physiol. 194:745-793, 1968. 54. I. Singer and I. Tasaki, in Biological Membranes: Physical Fact and Function, volume 1, edited by Dennis Chapman, Academic, London, 1968, 347-410. 55. L. B. Cohen and D. Landowne, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman Jr., Van Nostrand Reinhold, New York, 1971, 247-263. 56. Hervé Duclohier, J. Fluorescence 10:127-134, 2000. 57. L. B. Cohen, R. D. Keynes and B. Hille, Nature (London) 218:438-441, 1968. 58. Tasaki, 305. 59. Tasaki, 308-310. 60. Hans-Christoph Lüttgau, Pflügers Archiv 262:244-255, 1956. 61. J. M. Fox, Pflügers Archiv 351:287-301; 303-314, 1974. 62. J. M. Fox, B. Neumcke, W. Nonner and R. Stämpfli, Pflügers Archiv 364:143-145, 1976. 63. I. Tasaki and K. Iwasa, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 307-319. With kind permission of Springer Science and Business Media. 64. K. Iwasa and I. Tasaki, Biochem. Biophys. Res. Comm. 95:1328-1331, 1980. 65. Owen P. Hamill and Don W. McBride, Jr., NIPS 9:53-59, 1994.
CHAPTER 5
ASPECTS OF CONDENSED MATTER
The electric behavior of excitable membranes reflects the properties of the ion channels embedded in their lipid matrix and the composition of the aqueous media. To further our goal of explaining the behavior of voltage-sensitive ion channels, we will review the physics of condensed materials with an eye to possible clues to the understanding of these channels. If we are to have a fair start in understanding their behavior in basic physical terms, we should be sure we understand how the behavior of simpler forms of matter is explained by condensed-matter physics. We will clarify our ideas about the quantum basis of matter and review the way material structures undergo cooperative changes, phase transitions, in their large-scale conformation. From there we will explore the way these macroscopic phenomena arise from the atomic structure of matter. These elementary principles should help us apply the principles of physics to the configurational transitions that ion channels undergo.

1. THE LANGUAGE OF PHYSICS

Newton's classical mechanics has now been supplanted by quantum mechanics, which has become the accepted formulation for the analysis of microscopic systems and even some large systems, such as superconductors, superfluids and quantum interference devices. As B. S. Chandrasekhar1 put it, "Just as English is the language of Shakespeare, quantum mechanics is the language of physics." Quantum mechanics applies to the microscopic aspects of ion channels along with all other forms of matter.

1.1. The Schrödinger equation

We saw in Chapter 1 that the application of Planck's relation, E = ħν, to the electron's motion saved atomic physics from the deep contradictions of the classical theory. The electronic orbitals were quantized, and energy could be emitted from (or absorbed by) an atom only in whole quanta. Quantum mechanics developed from the realization that particles of matter have wave properties, analogous to light.
Erwin Schrödinger wrote a wave equation for the wavefunction Ψ of a nonrelativistic particle in a potential U(r),
iħ ∂Ψ/∂t = HΨ,    (1.1)
where the Hamiltonian operator, representing the total energy of the system, is given by

H = −(ħ²/2m)∇² + U(r).    (1.2)
The Laplacian operator ∇² reduces to d²/dx² in one dimension. The time-independent Schrödinger equation for a stationary system with energy E is

HΨ = EΨ.    (1.3)
The first step in solving any dynamical problem in quantum mechanics is usually to write down the appropriate Hamiltonian for the system. The first term on the right of Equation 1.2 corresponds to the kinetic energy of the system, the second to the potential energy. Since Schrödinger's equation is a differential equation, its solution involves integrations. Integration operations in quantum mechanics are often represented in the form of diagrams known as Feynman diagrams. In any bounded system, Schrödinger's equation has solutions for only discrete values of E, the eigenvalues En. To each eigenvalue corresponds a function that solves Schrödinger's equation, called an eigenfunction. Different eigenfunctions may have the same eigenvalue, often as a result of the symmetry of the system; such functions are called degenerate. The number of degenerate eigenfunctions generally increases as the energy of the system rises.

1.2. The Uncertainty Principle

Werner Heisenberg arrived at an alternative formulation of quantum mechanics by asking, What happens to the electron between two discrete states such as Bohr orbits? He showed that any attempt to follow the electron by, say, shining a light beam on it, would severely perturb the position of the electron, perhaps even jar it completely out of the atom. Since we can never follow the electron's path, we should be willing to admit that the concept of a path between states is without foundation. Heisenberg proceeded to bracket the limits of our ignorance in his famous Uncertainty Principle: The position x and momentum p of a particle cannot be measured precisely, but only to within uncertainties Δx and Δp, which must be large enough that their product is no smaller than a quantity of the order of Planck's constant. A similar relationship exists between the uncertainties in energy and time.
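The discreteness of the eigenvalues En in a bounded system can be illustrated numerically. The sketch below is not from the text; it assumes, for simplicity, units with ħ = m = 1 and an infinite square well of unit width, and finds the two lowest energies by a shooting method: the wavefunction is integrated across the well and the energy is adjusted by bisection until the wavefunction vanishes at the far wall.

```python
import math

def psi_at_wall(E, n_steps=2000):
    # Integrate psi'' = -2*E*psi across a unit-width infinite well
    # (hbar = m = 1, V = 0 inside), starting from psi(0) = 0, psi'(0) = 1.
    h = 1.0 / n_steps
    psi, dpsi = 0.0, 1.0
    acc = -2.0 * E * psi
    for _ in range(n_steps):
        psi += h * dpsi + 0.5 * h * h * acc   # velocity-Verlet step
        new_acc = -2.0 * E * psi
        dpsi += 0.5 * h * (acc + new_acc)
        acc = new_acc
    return psi

def find_level(lo, hi, tol=1e-10):
    # Bisect on the energy until psi vanishes at x = 1: an eigenvalue.
    f_lo = psi_at_wall(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = psi_at_wall(mid)
        if f_lo * f_mid <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

E1 = find_level(3.0, 7.0)    # analytic value: pi^2/2  ~ 4.9348
E2 = find_level(15.0, 25.0)  # analytic value: 2*pi^2 ~ 19.7392
```

Only discrete energies make the wavefunction vanish at both walls; at any energy in between, the integrated wavefunction overshoots or undershoots at the boundary. That boundary condition is the numerical face of quantization.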
The interpretation of the wavefunction Ψ is still controversial, but it is agreed that the square of the absolute value of this complex number, |Ψ|², must correspond to the probability of finding the electron in a given quantum state, as Max Born found.

1.3. Spin and the hydrogen atom

Spectral and magnetic measurements showed that another number, beyond x and p, was needed to describe the electron's motion. This is the electron's spin angular momentum. Measured in units of ħ = h/2π, angular momentum had the unexpected property of taking on half-integer as well as integer values, such as 0, ±½, ±1 and ±3/2; the spin component of the electron is ±½. The exclusion principle of Wolfgang Pauli stated that only one electron could exist in a state described by a given set of quantum numbers. In this way quantum mechanics provided a way of explaining the hydrogen atom, and eventually all the atoms of the periodic table. The hydrogen atom is described by four quantum numbers, n, l, m and s, called respectively the principal, azimuthal, magnetic and spin quantum numbers. The Pauli exclusion principle states that no two electrons can exist in the same quantum state, that is, with the same values of n, l, m and s. The lowest energy state consistent with the exclusion principle, its most stable state, called the ground state, is used to characterize not only hydrogen but all atoms. With the conventional notation of s, p, d and f to designate states with l = 0, 1, 2 and 3, respectively, an atom's electronic configuration can be described compactly. A 1s orbital is a spherically symmetric distribution with n = 1 and l = 0. A 2s orbital, with n = 2, has the same symmetry but has a larger radius and more energy. As an example, we can write the ground state of a sodium atom as 1s²2s²2p⁶3s¹, meaning that the n = 1 "shell" contains two electrons (of opposite spin), the n = 2 shell has two electrons with l = 0 and six electrons (two in each of the three space directions) with l = 1, and the n = 3 shell has one electron.
Note that the removal of the n = 3 electron by ionization leaves the very stable neon configuration, 1s²2s²2p⁶. The transfer of an electron between two atoms and the electrostatic attraction of the resulting ions account for ionic bonding. Consider, for example, the sodium chloride molecule. Since the electron configuration of sodium in the ground state is 1s²2s²2p⁶3s¹, it has a single 3s electron outside a closed subshell, which is weakly bound to the atom. It takes only 5.1 eV to remove this electron,2 leaving the atom a positive ion, Na+. The chlorine atom in its ground state has the configuration 1s²2s²2p⁶3s²3p⁵; the neutral atom lacks one electron to fill the 3p subshell. The addition of one electron lowers the atom's energy by 3.8 eV. Thus the addition of only 1.3 eV suffices to transfer an electron from an Na to a Cl atom, and this energy is easily supplied by the electrostatic attraction, since the Coulomb potential energy equals −3.6 eV at a separation of 4.0 Å.3 By 1927 the concepts of quantum mechanics were applied to molecules. The atomic orbitals overlap to form a new orbital called a molecular orbital, which like an atomic orbital contains two electrons of opposite spin. The description of diatomic molecules includes a principal quantum number n and a quantum number λ, which gives the component of angular momentum along the axis joining the two nuclei. In analogy to the atomic terminology, states with λ = 0, 1 and 2 are designated σ, π and δ, respectively.
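The energy bookkeeping for NaCl fits in a few lines. The sketch below is illustrative only; it takes the 5.1 eV and 3.8 eV figures from the text and the standard Coulomb constant for unit charges, e²/4πε₀ ≈ 14.40 eV·Å, and computes the net cost of the electron transfer and the separation below which the ionic attraction pays for it.

```python
K = 14.40  # e^2 / (4*pi*eps0) in eV*angstrom, for unit charges

ionization_na = 5.1   # eV to remove the 3s electron from Na (from the text)
affinity_cl = 3.8     # eV gained when Cl accepts an electron (from the text)
transfer_cost = ionization_na - affinity_cl   # net cost: 1.3 eV

def coulomb_energy(r):
    """Potential energy, in eV, of a +e/-e ion pair separated by r angstroms."""
    return -K / r

# The transfer becomes energetically favorable once the magnitude of the
# Coulomb attraction exceeds the 1.3 eV cost, i.e. at separations below:
r_break_even = K / transfer_cost   # roughly 11 angstroms
```

With these constants the attraction already exceeds the transfer cost well before the ions reach bonding distance, which is why the ionic bond forms spontaneously.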
A covalent bond in which the overlap is concentrated along the internuclear axis, as in the hydrogen molecule H2, is called a sigma (σ) bond. The combination of different s and p orbitals forms hybrid orbitals. The combination of one 2s orbital and two 2p orbitals, called sp² hybridization, forms three lobes lying in a plane, leaving an unhybridized 2p orbital perpendicular to the plane. These orbitals may form double bonds: a sigma bond lying in the plane of the two nuclei, and a bond, called a pi (π) bond, formed by the overlapping p orbitals. The concept of quantum mechanical resonance is of great importance in determining the structure of multiatomic molecules. One of the most important applications of quantum mechanics is the study of condensed (i.e., non-gaseous) matter. In later chapters we will see that a number of topics of condensed-matter physics are of special significance to the study of ion channels.

1.4. Identical particles—why matter exists

One of the surprising ways in which quantum mechanics differs from classical mechanics is in the way particles interact when they bounce off each other in scattering experiments. Suppose a stream of α particles, which are helium nuclei, is fired at a target of oxygen nuclei. It is convenient to think of this collision as happening in a moving frame of reference, so that the particles move toward each other while the center of mass remains fixed. After the collision they will be moving in opposite directions again, but at some angle from the original directions, which we may take to be 0° for the α particle and 180° for the oxygen nucleus. Suppose detectors mark the arrival of each particle without distinguishing whether it is an α particle or an oxygen nucleus.
Then the probability of a particle arriving at a particular one of the detectors is simply the sum of two probabilities: the probability that the particle is scattered through an angle θ (and is an α particle) plus the probability that the particle is scattered through an angle 180° − θ (and is an oxygen nucleus). The same thing happens when the target is hydrogen, carbon, or anything else—except helium. When helium nuclei bounce off helium nuclei, the results are different, by as much as a factor of two when θ is 90°. That is because with identical particles there is no way in principle to tell which of the particles entered the detector. So quantum mechanics tells us that we must add the wavefunction amplitudes Ψ, not the probabilities |Ψ|². When we do that, we obtain results that agree with experiment. However, the particles can interfere with either the same phase or with opposite phase. This means that there are two kinds of particles, called Bose and Fermi particles. With Bose particles, such as photons and α particles, the amplitude of the particle that is exchanged simply adds to the amplitude of the particle that goes directly into the detector. However, with Fermi particles, such as electrons and sodium ions, the amplitude of the particle that is exchanged subtracts from the amplitude of the particle that goes directly into the detector. Thus bosons and fermions, as these particles are called, act in completely different ways. Fermions obey the Pauli exclusion principle, as we have already seen for electrons. Without fermions, matter could not exist at all. Without the exclusion principle, matter would simply collapse: electrons would be drawn into the nucleus,
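The three counting rules can be checked with a toy amplitude. In the sketch below, f(θ) is an arbitrary, hypothetical complex scattering amplitude, not a real nuclear calculation; the point is only how the three recipes (add probabilities, add amplitudes, subtract amplitudes) differ at θ = 90°.

```python
import cmath, math

def f(theta):
    # A toy, hypothetical scattering amplitude; any smooth complex
    # function of the angle serves to illustrate the counting rules.
    return cmath.exp(1j * theta) / (1.0 + theta)

def p_distinct(theta):
    # Distinguishable particles: the probabilities add.
    return abs(f(theta)) ** 2 + abs(f(math.pi - theta)) ** 2

def p_bosons(theta):
    # Identical bosons: the amplitudes add, then we square.
    return abs(f(theta) + f(math.pi - theta)) ** 2

def p_fermions(theta):
    # Identical fermions: the exchanged amplitude subtracts.
    return abs(f(theta) - f(math.pi - theta)) ** 2

right_angle = math.pi / 2
# At 90 degrees f(theta) = f(pi - theta), so identical bosons arrive at
# twice the distinguishable rate, and identical fermions (in the same
# spin state) not at all.
```

The factor of two for helium on helium, and the vanishing fermion rate at 90°, drop straight out of the amplitude arithmetic.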
making the existence of atomic matter impossible. Two fermions are never found in the same state—you might say they are antisocial. The bosons, on the other hand, are very social. Any number of Bose particles can exist in the same state. The result is a phenomenon called Bose condensation, in which bosons congregate in large numbers in the same positional or motional state. The interaction of large numbers of particles can lead to correlated motions, called cooperative (or collective) phenomena. Such behavior is seen in superconductors and in helium at low temperatures, as well as in phase transitions. The two isotopes of helium, of mass 3 and mass 4, present an example of the difference on collective behavior of fermions and bosons. Helium at low temperatures is nature’s simplest, most orderly liquid. Liquid helium-4 has a phase transition at 2.19 K, below which it is superfluid, and the motion of the Bose particles is quantized. Helium-3, a fermion, becomes superfluid at a much lower temperature, a few thousandths of a degree, because a pair of fermions behaves like a single boson. 1.5. Tunneling If a particle is held in place by walls that rise so high that the particle's energy is insufficient to scale them, it will be trapped, at least within the scheme of classical mechanics. However, quantum mechanics provides an exception to this rule: the probability of passage depends not on the height but on the thickness of the wall. We recall that the particle can be described as a wave, of wavelength h/mv. If the wall, or more precisely the potential barrier, is thin in comparison with the wavelength of the particle, the particle can diffract through it, so that there is some probability of its appearing on the other side. This phenomenon, called tunneling, has been observed to occur, particularly in low-mass particles such as electrons and protons. Quantum tunneling is considered a limiting factor in the development of information technology. 
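The dependence on barrier thickness is brutally exponential. The sketch below is a rough opaque-barrier estimate, T ≈ exp(−2κd) with κ = √(2mΔE)/ħ; the 1 eV barrier height is an assumed illustrative figure, not a measured value for any device or membrane.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electron volt, J

def transmission(delta_e_ev, width_nm):
    """Opaque-barrier estimate T ~ exp(-2*kappa*d) for an electron
    tunneling through a rectangular barrier delta_e_ev above its energy."""
    kappa = math.sqrt(2.0 * M_E * delta_e_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * kappa * width_nm * 1e-9)

t_transistor = transmission(1.0, 1.0)  # 1 nm barrier: roughly 4e-5
t_membrane = transmission(1.0, 5.0)    # 5 nm barrier: ~ 1e-22
```

Each added nanometer of barrier costs about four orders of magnitude in transmission at this barrier height, which is why tunneling matters for nanometer gate lengths and, within a channel protein, only across its thinnest energy barriers.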
Transistors have been developed with key dimensions as small as 50 nm, but the shrinking of nanostructures is expected to reach the physical limit of quantum tunneling at "gate lengths" of 10-20 nm.4 When we compare this to the 5-nm thickness of a biological membrane, we see that quantum tunneling effects in ion channels cannot be ruled out.

1.6. Quantum mechanics and classical mechanics

Quantum mechanics has a totally different look from classical mechanics, as Richard Feynman pointed out:5 "Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like ... anything that you have ever seen. ... We know how large objects will act, but things on a small scale just do not act that way."
Rather than dealing with measurable real quantities, quantum mechanics deals with complex wavefunctions that are not measurable quantities. By taking the absolute square of a wavefunction, |Ψ|², we obtain the probability of finding the system in a
given state. While classical mechanics yields dynamical quantities, quantum mechanics yields only probabilities. Another important difference between classical and quantum mechanics is that in quantum mechanics the object of study cannot be isolated from the rest of the world. In particular, when we wish to determine some property of an object, that is, make a measurement, we interact with it. Thus both we and the object are changed in the process. In the quantum world there are no events, only probabilities. Events only come into being in the process of measurement, the interaction of a quantum system with a classical detector. The laws of classical mechanics can be obtained from quantum mechanics by taking the limit in which Planck's constant goes to zero. (Of course, the value of h does not change, but quantum effects in most cases become negligible for large systems, for which the action terms are much greater than h.) This concept is called the Correspondence Principle. It states that the classical theory accounts for phenomena in the limit at which quantum discontinuities may be considered negligibly small. Accordingly, a formal analogy must exist between quantum mechanics and classical mechanics. In an abstract formulation of quantum mechanics, the wave functions are viewed as vectors in a mathematical function space. These vectors are acted upon by operators such as the Hamiltonian. In distinction to classical mechanics, particles (or quasiparticles) are not necessarily conserved in quantum mechanics. Operators called creation and annihilation operators can change the number of particles in a system. An example of this process is the formation of an electron–positron pair coupled with the annihilation of a photon. 1.7. Quantum mechanics and ion channels If we could solve Schrödinger's equation for an ion channel, we should be able to derive all its properties from that solution. But there is a difficulty. 
Even if we knew the exact structure of a voltage-sensitive ion channel—and at this time we don't—we could not solve Schrödinger's equation for a molecule with hundreds of amino acid residues, tens of thousands of atoms and millions of electrons. Moreover, even if we had such a solution, we would be so inundated with data that we could not make sense of it. Although the only problems physicists have been able to solve exactly are two-particle problems such as the hydrogen atom, they have developed powerful approximation methods. These usually take as their starting point the exact solution known for a simpler system and treat the additional complexity as a small perturbation of it. Among the systems that have been successfully studied in this way are crystals, which can be greatly simplified because of their symmetries. At a much lower level of order than crystals are glasses. Glasses possess properties of both liquids and solids in that they appear to be rigid but are disordered and capable of flowing, although extremely slowly. Paradoxically, these amorphous solids exhibit many of the same properties as crystalline solids. The fact that many of the concepts developed for crystals apply quite well to glasses shows that this behavior is robust, and not dependent on the structural details.
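The perturbation strategy can be shown in miniature. The sketch below uses hypothetical numbers: units with ħ = m = 1, a unit-width box, and a linear perturbation V(x) = λx, chosen only for illustration. It computes the first-order energy shift ⟨n|V|n⟩ for particle-in-a-box eigenstates, which by symmetry equals λL/2 for every level.

```python
import math

def first_order_shift(n, lam=0.1, L=1.0, steps=20000):
    """First-order perturbation shift <n|V|n> for V(x) = lam*x applied to
    particle-in-a-box eigenstates psi_n(x) = sqrt(2/L) sin(n*pi*x/L)."""
    dx = L / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx                                  # midpoint rule
        prob = (2.0 / L) * math.sin(n * math.pi * x / L) ** 2
        total += prob * (lam * x) * dx
    return total

# Every level shifts by lam*L/2 = 0.05, since <x> = L/2 in each eigenstate:
shifts = [first_order_shift(n) for n in (1, 2, 3)]
```

The unperturbed solution does all the work; the perturbation enters only through an average over the known eigenstates, which is exactly the economy that makes the method powerful.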
Biological molecules are highly complex objects, products of billions of years of adaptation to changing environments. They are neither highly symmetrical like crystals nor totally disordered like glasses. Yet the fact that the same principles apply to both of those systems gives us reason to hope that these principles may be applicable to ion channels.

2. CONDENSED MATTER

The traditional phases of matter, solid, liquid and gas, are all present in living organisms. Solids maintain their shapes under stress; they may be crystalline, their atoms forming regularly repeated patterns in space. Crystals in the human body give rigidity and strength to the teeth and bones in which they lodge. Liquids in the body include blood, lymph, cerebrospinal fluid, saliva, urine and stomach acid. Gases are found in the ears, lungs and abdomen, as well as the swim bladders of fish and the bones of birds. However, the bodies of living organisms clearly contain many tissues that do not fall into the three traditional categories, so a classification scheme containing only the three classical states of matter would not go far toward describing the complexity of the body. Fortunately we need not look far: physics recognizes intermediate phases between the long-range order of crystals and the disorder that characterizes liquids. These phases, between liquids and crystalline solids, are called mesophases or liquid crystals. A molecular liquid crystal, when heated, may lose long-range order in one or two dimensions while retaining it in the remaining dimensions. Solids, liquid crystals and liquids belong to the category of condensed matter.

2.1. Liquids and solids

Liquids are disordered phases that flow freely when not confined but tend to maintain a constant volume. Liquids frequently act as solvents, and when their solutes are ions, they are good electrical conductors.
Further heating destroys even the short-range order that holds the molecules of a liquid together, leading to the gas phase. In a liquid, molecules attract each other strongly enough to cohere, but not enough to snap into a tight crystalline order. Thus, in a liquid as in a gas, all directions are equivalent; these phases are highly symmetrical. A crystal, on the other hand, has specific symmetry properties involving a set of translation vectors. In a molecular crystal the centers of mass of the molecules are located on a three-dimensional lattice. This means that there are three vectors such that the molecule will be exactly reproduced by moving integer steps along any of these three directions. If a is a vector from a specific point of a molecule (in its mean configuration) to the corresponding point of a neighboring molecule, we can (disregarding the boundaries of the crystal) move by a finite number of a-translations without changing any property of the crystal. Because the crystal has three dimensions, there are two other independent vectors, b and c, not necessarily orthogonal to each other or to a, with the same translation invariance, so that we write
f(r + la + mb + nc) = f(r),    (2.1)
where f is any quantitative property of the crystal, such as the quantum wavefunction or electric field, r a displacement from an arbitrary origin, and l, m and n arbitrary integers. The study of crystalline solids is one of the most productive applications of quantum mechanics. The symmetry of a crystal allows us to think of it as a single unit cell repeated over and over in three directions. This periodicity of the crystal—and of all its physical properties—allows us to study it with the help of Fourier analysis.6 Since this symmetry property is a subset of the much broader symmetry of a liquid or a gas, we can say that a solid has lower symmetry than a liquid or gas. In the formation of a crystal from a liquid by freezing, the liquid's symmetry is said to be broken. Broken symmetries are interesting because they result in the formation of new directions that were selected from the continuum of possible directions. A biological example of a broken symmetry is the development of the animal and vegetal poles in a spherical egg by the process of fertilization. Phase transitions from higher to lower symmetry always involve broken symmetries. A symmetry of the crystalline type is said to exhibit long-range order. Heating a crystal to the melting point destroys this order. It converts the material to a liquid, which displays only short-range order. Metallic solids are good electrical conductors because their valence electrons are not localized but are free to travel through the metal. Nonmetals are generally insulators, while elements such as germanium and silicon are semiconductors. Solids may also be ionic conductors, as we will see in Chapter 6. The structure of real crystals is not always a perfect lattice, but may be interrupted by defects. The simplest type is localized at one atom and is called a point defect. The atom may be missing, or an additional atom inserted, or the atom may be replaced by one of a different type (an impurity).
A defect that involves a line of atoms is called a line defect.

2.2. Polymorphism

A solid that is a lattice of atoms of one type is an atomic crystal; a lattice of molecules is a molecular crystal. More than one structure is possible in either case. The term polymorphism is used in crystallography to denote the possibility of at least two different arrangements of the molecules of a compound in the solid state. The different forms are characterized by different crystal habits, melting points and other properties. For a material with two forms, one will be stable at low temperature and the other at high; the forms are separated by a specific transition temperature.7 An example of polymorphism in an atomic crystal is carbon, which, under different conditions, can crystallize as graphite or diamond. Many examples of polymorphism exist in molecular crystals; it is particularly prevalent in macromolecular crystals, including proteins such as the variant forms of human hemoglobin. Conformational transitions in proteins will be discussed in Chapter 12.
2.3. Quasicrystals

Quasicrystals are solids with a forbidden symmetry, such as the icosahedral form, which cannot exhibit translational symmetry. Such substances have been found, for example among alloys of certain metals. The growth of quasicrystals cannot be based on the local pattern of atoms, but must be nonlocal. The mechanism taking place must be the evolution of a linear superposition of alternative arrangements of many atoms, followed, when the difference between energy levels reaches one quantum, by the singling out of one arrangement that becomes actualized in the quasicrystal. Roger Penrose has suggested that brain plasticity is based on the growth and contraction of dendritic spines, causing activation and deactivation of synapses. He further speculates that this growth (or contraction) is governed by quantum-mechanical processes such as those involved in quasicrystal growth.8

2.4. Phonons

A simple solid of neutral atoms such as argon can be described as a regular lattice of equilibrium positions about which the atoms vibrate in thermal motion. These vibrations are quantized, so that the crystal is characterized by an excitation spectrum. The analysis of this system is greatly simplified by the symmetry relation 2.1. Let us visualize the system as a lattice of masses connected by springs to their nearest neighbors. Fourier analysis yields the types of wave motions that the solid can sustain. From this analysis, physical properties of the solid, such as its specific heat, can be calculated. The coherent motion of large numbers of particles is called a collective excitation. A sound wave propagating in a background of thermally vibrating molecules is an example of a collective excitation. The unceasing motion of the lattice may be described as a system of waves traveling through the crystal, with different amplitudes, directions and frequencies.
These waves are characterized by the relation between the frequency of monochromatic waves and their wavelengths, called the dispersion relation. This relation often displays two branches, describing two modes of motion. The distinction may be illustrated by a linear diatomic chain, in which pairs of ions of unequal mass constitute a primitive unit cell. The optical branch of the dispersion relation is higher in frequency than the acoustic branch. In the acoustic mode the ions within a primitive cell move together as a unit, as in a sound wave; in the optical mode they move 180° out of phase, and their vibrations can couple with an electromagnetic wave. If we send energy into the crystal, for example by the inelastic scattering of x rays or neutrons, the lattice can absorb it only in discrete quanta. These quantized lattice excitations are called phonons; they may be acoustic or optical. The absorption and emission of a phonon can be visualized as a particle collision, with the significant exception that the law of conservation of momentum does not hold, since the crystal as a whole can absorb momentum.
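The two branches of the diatomic chain follow from a standard calculation. The sketch below evaluates the textbook result for the angular frequencies ω±(k); the spring constant C, masses m1 and m2, and lattice constant a are arbitrary illustrative values.

```python
import math

def chain_branches(k, a=1.0, C=1.0, m1=1.0, m2=2.0):
    """Acoustic and optical angular frequencies of a 1-D diatomic chain:
    omega^2 = C*(s -/+ sqrt(s^2 - 4*sin^2(k*a/2)/(m1*m2))), s = 1/m1 + 1/m2,
    with the minus sign giving the acoustic branch."""
    s = 1.0 / m1 + 1.0 / m2
    root = math.sqrt(s * s - 4.0 * math.sin(0.5 * k * a) ** 2 / (m1 * m2))
    acoustic = math.sqrt(C * (s - root))
    optical = math.sqrt(C * (s + root))
    return acoustic, optical

# As k -> 0 the acoustic frequency vanishes (a long sound wave), while the
# optical branch starts at sqrt(2*C*(1/m1 + 1/m2)) and lies above the
# acoustic branch across the whole zone.
```

Evaluating at the zone boundary k = π/a shows the frequency gap between the branches: the two ω values there are √(2C/m2) and √(2C/m1).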
2.5. Liquid crystals

Liquid crystals are phases more ordered than liquids and less ordered than solids. They are formed by rod-shaped or disk-shaped molecules. Common liquid crystalline phases include the nematic, cholesteric and smectic; the cholesteric form may be considered a modified form of the nematic. Transitions from one phase to another, such as melting, freezing, evaporation and condensation, are usually accompanied by the release or absorption of energy. Such energy changes are also observed in transitions involving the mesophases. Liquid crystals undergo transitions at definite temperatures, such as

isotropic liquid ⇌ nematic ⇌ smectic ⇌ crystalline solid

These delicate phases of matter are of importance to the biophysics of membranes and ion channels.
Figure 5.1. A lipid bilayer enters a liquid crystalline phase at transition temperature Tc. From Adam et al., 1977.
Lipid bilayers exhibit transition temperatures at which phase transitions occur. For example, lecithin with a saturated C18 fatty acid chain undergoes a transition at Tc = 41°C. Below that temperature the bilayer is fairly rigid, with its hydrocarbon chains extended in parallel. Above 41°C, the order of the chains is disturbed and the bilayer is fluid; see Figure 5.1.9 A compound that exists in different phases due to its interaction with a solvent is lyotropic. If its phase depends on both temperature and solvent concentration, as in excitable membranes, it is called amphotropic. Lyotropic mesogens are molecular species capable of forming a lyotropic mesophase. In addition to their high molecular mass and nonspherical shape, properties that they share with thermotropic mesogens, lyotropic mesogens are amphiphilic: The molecule is separated into a hydrophilic part (soluble in polar solvents such as water) and a hydrophobic part (soluble in nonpolar solvents but poorly soluble in water).10 Lipids, with polar heads and nonpolar tails, are examples of lyotropic mesogens. Molecules with both hydrophilic and hydrophobic parts, called amphiphilic, act as surface active agents or surfactants. The molecules may be anionic (with negative headgroup), cationic (positive) or zwitterionic (both charges). The shapes of the molecules determine the configurations of the aggregates they form, including micelles and bilayers, as shown in Figure 5.2.11 Bulky headgroups favor the formation
of micelles, spherical aggregates with strong surface curvature. The formation of different aggregate shapes from the shape of the monomer is modeled by the concept of a packing parameter, introduced by J. Israelachvili.12

3. REVIEW OF THERMODYNAMICS

Thermodynamics uses macroscopic concepts to discuss the large-scale behavior of matter. Its statistical foundations are discussed in statistical mechanics, reviewed later in this chapter and in Chapter 15.

3.1. Laws of thermodynamics

The sensation of warmth is the basis of our physical concept of temperature. To make it quantitative, we construct a thermometer consisting of a mass of gas in a cylinder fitted with a piston. The position of the piston tells us the volume V of the gas, and the force on it divided by its area is the gas pressure P. If two bodies, insulated from their environment, are brought into contact, heat flows from the warmer to the colder. After they have been in contact for a long time, they are in thermal equilibrium and we say they are at the same empirical temperature t. If body A is in thermal equilibrium with body B, tA = tB. If, for three bodies A, B and C, it is true that tA = tB and tB = tC, then the zeroth law of thermodynamics tells us that tA = tC. This property allows us to construct a scale of temperature with an arbitrary zero point. Monatomic gases tend to behave similarly at high temperatures, giving us the notion of an ideal gas, which obeys the law

PV = NkT,    (3.1)

where N is the number of molecules of gas and k is Boltzmann's constant. The temperature at which the volume of an ideal gas tends to zero at constant pressure is defined as the absolute zero of temperature, and T is called the absolute temperature; it is measured in kelvins. An equation such as (3.1), which gives the temperature as a function of the state variables of the system, is called a thermal equation of state. Induction from a large number of careful experiments shows that energy is conserved.
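The ideal-gas relation of Equation 3.1 can be anchored to a familiar number. The sketch below uses the standard values of Boltzmann's and Avogadro's constants to recover the molar volume of an ideal gas at 0 °C and 1 atm.

```python
K_B = 1.380649e-23    # Boltzmann's constant, J/K
N_A = 6.02214076e23   # Avogadro's number, molecules per mole

def ideal_gas_volume(N, T, P):
    """Volume from the thermal equation of state PV = NkT, in SI units."""
    return N * K_B * T / P

# One mole at 0 degrees C (273.15 K) and 1 atm (101325 Pa):
v_molar = ideal_gas_volume(N_A, 273.15, 101325.0)  # ~ 0.0224 m^3, i.e. 22.4 L
```

The familiar 22.4 liters per mole is nothing but Equation 3.1 evaluated at one particular state point.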
Heat Q, work W and internal energy U are three forms of energy, and for a closed system the first law of thermodynamics can be expressed as

dQ = dU + dW,    (3.2)

where dW is the differential of work done by the system.
Figure 5.2. The shapes of lyotropic liquid crystal molecules determine the configurations of the aggregates they form. The concept of a packing parameter of J. Israelachvili explains the origin of aggregate shape in the space requirements of hydrophobic and hydrophilic moieties. SDS, sodium dodecyl sulfate; CTAB, hexadecyltrimethylammonium bromide. From K. Hiltrop, 2001, after Israelachvili, 1985.
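Israelachvili's packing parameter (Figure 5.2) lends itself to a small calculation. The sketch below uses the standard definition p = v/(a0·lc), with v the hydrophobic chain volume, a0 the headgroup area and lc the chain length; the numerical values are rough illustrative estimates, not data taken from the figure.

```python
def packing_parameter(v_nm3, a0_nm2, lc_nm):
    """p = v / (a0 * lc): chain volume over headgroup area times chain length."""
    return v_nm3 / (a0_nm2 * lc_nm)

def aggregate_shape(p):
    # Conventional (approximate) geometric thresholds for the aggregate form.
    if p < 1.0 / 3.0:
        return "spherical micelle"
    if p < 0.5:
        return "cylindrical micelle"
    if p <= 1.0:
        return "bilayer or vesicle"
    return "inverted phase"

# Rough illustrative values: a single-chain surfactant with a bulky headgroup
# versus a two-chain phospholipid with a similar headgroup area.
p_single = packing_parameter(0.35, 0.70, 1.67)  # roughly 0.30
p_double = packing_parameter(0.70, 0.65, 1.75)  # roughly 0.62
```

Doubling the chain volume at nearly constant headgroup area pushes the monomer from cone-like to cylinder-like, which is why single-chain surfactants tend to form micelles while two-chain lipids form the bilayers of membranes.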
An equation that gives the internal energy U in terms of the state variables of the system is called a caloric equation of state. A simple thermodynamic system of one chemically pure component has two degrees of freedom, say P and V, with dW = PdV. It is characterized by a thermal and a caloric equation of state. Such simple systems also differ in the nature of their phase transitions: A mass of hydrogen may exist in three kinds of phase, solid, liquid and gas, while for helium there exists an additional, superfluid, phase. A molecular substance may also exist in one or more liquid crystalline phases. The internal energy function U has the useful property that, for a process carried out in a sequence of infinitesimally small steps for which the system can be assumed to be in equilibrium, reversing the process will bring it back to its original value. Such processes are said to be quasi-static. Thus, in a reversible cyclic process, the internal energy returns to its initial value. This is not true in general for the functions Q or W; unlike dU, dQ and dW are not perfect differentials. However, mathematical analysis suggests that an integrating factor exists that will turn these imperfect differentials into perfect ones. For Q, that factor was shown by Sadi Carnot's analysis of heat engines and subsequent experiments to be 1/T. Thus we can write the first law in the form
(3.3)
We define (3.4)
where S is the entropy. Unlike dQ or dW, dS is a perfect differential. From the way entropy has been defined, we know that quasi-static processes leave the entropy unchanged. For all other processes in a closed system, experiment shows that the entropy always increases. That is the second law of thermodynamics.13 As V or T becomes large, the system behaves like a perfect gas; classical mechanics works well here. This is not true near absolute zero. According to the second law and the definition of entropy, Equation 3.4, the T = 0 isotherm is singular. In the neighborhood of absolute zero, where the energy of the system is low, the fact that energy is quantized leads to significant effects. It is observed that, when the temperature is lowered to zero with other variables held constant, the value approached by entropy is independent of all other variables and may be equated to zero. The isentropic (constant-entropy) and isothermal characteristics approach coincidence as the temperature tends toward zero; this is the third law of thermodynamics. It is also expressed in the statement that T = 0 cannot be attained in any finite number of steps.
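The distinction between the perfect differential dS and the imperfect differential dQ can be made concrete numerically. The sketch below (an illustration with arbitrarily chosen states of one mole of a monatomic ideal gas) carries the same system between two states along two different quasi-static paths: the heat absorbed depends on the path, while the entropy change does not.

```python
import math

# One mole of a monatomic ideal gas taken between the same two states
# along two different quasi-static paths (all state values are arbitrary).
R = 8.314            # gas constant, J/(mol K)
Cv = 1.5 * R         # molar heat capacity at constant volume
n = 1.0              # moles

T1, V1 = 300.0, 0.010    # initial state (K, m^3)
T2, V2 = 600.0, 0.020    # final state

# Path A: heat at constant volume T1 -> T2, then expand isothermally V1 -> V2.
Q_A  = n*Cv*(T2 - T1) + n*R*T2*math.log(V2/V1)
dS_A = n*Cv*math.log(T2/T1) + n*R*math.log(V2/V1)

# Path B: expand isothermally at T1 first, then heat at constant volume.
Q_B  = n*R*T1*math.log(V2/V1) + n*Cv*(T2 - T1)
dS_B = n*R*math.log(V2/V1) + n*Cv*math.log(T2/T1)

print(Q_A, Q_B)     # heat absorbed: path-dependent, dQ is not a perfect differential
print(dS_A, dS_B)   # entropy change: identical, dS is a perfect differential
```

The isothermal legs run at different temperatures on the two paths, so the heats differ even though the endpoints coincide.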
From the statistical viewpoint that entropy measures the disorder of a system, the third law indicates that matter becomes more highly organized as the temperature is lowered. The shift toward higher order may occur continuously or by way of a discrete phase transition.

3.2. Characteristic functions

Thermodynamic states of the system correspond to definite values of certain characteristic functions, whose changes depend only on the initial and final states of the system and not on the details of the process connecting them. Such characteristic functions include V, P, S, U and T. Other characteristic functions may be defined, of which the most useful to us here is the Gibbs function G,

G = U + PV − TS.    (3.5)

Note that the products PV and TS pair one extensive variable (V or S), which scales with the size of the system, with one intensive variable (P or T), which does not scale with size. Since the first and second laws may be combined to give

TdS ≥ dQ = dU + dW,    (3.6)

we can write

dU ≤ TdS − PdV    (3.7)

and, by differentiating 3.5 and substituting 3.7,

dG ≤ −SdT + VdP.    (3.8)

For reversible processes, in which the equals sign holds, Equations 3.7 and 3.8 imply

T = (∂U/∂S)V,  P = −(∂U/∂V)S    (3.9)

S = −(∂G/∂T)P,  V = (∂G/∂P)T.    (3.10)
For each characteristic function there is a thermal and a caloric equation. By taking second derivatives, we can obtain the conditions of compatibility known as the Maxwell relations. For example, from Equation 3.10 we obtain
(∂S/∂P)T = −(∂V/∂T)P.    (3.11)
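Maxwell relations of this kind can be verified numerically for any system with known equations of state. A sketch for one mole of a monatomic ideal gas, using central finite differences (the entropy expression is written only up to an additive constant, which the derivatives do not see):

```python
import math

# Ideal-gas check of the Maxwell relation (dS/dP)_T = -(dV/dT)_P,
# using molar quantities and central finite differences.
R = 8.314
Cp = 2.5 * R   # monatomic ideal gas

def S(T, P):
    # Molar entropy up to an additive constant (dropped by the derivative)
    return Cp * math.log(T) - R * math.log(P)

def V(T, P):
    # Thermal equation of state, V = RT/P
    return R * T / P

T, P, h = 350.0, 2.0e5, 1e-4
dS_dP = (S(T, P*(1 + h)) - S(T, P*(1 - h))) / (2 * P * h)
dV_dT = (V(T*(1 + h), P) - V(T*(1 - h), P)) / (2 * T * h)

print(dS_dP, -dV_dT)   # both sides equal -R/P for the ideal gas
```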
When T and P are the independent variables it is convenient to choose G as the characteristic function. The extension of thermodynamics to open systems, linear irreversible thermodynamics, is discussed in Chapter 15, Section 6.

4. PHASE TRANSITIONS

For homogeneous materials, we divide the total energy, total mass, total magnetic or electric moment by the volume V to obtain the energy density, mass density, magnetization or electric polarization. These quantities, mechanical variables, are mostly continuous as external fields, such as pressure, temperature, magnetic field or electric field, are varied. There are regions in which a mechanical variable is not uniquely determined but has a choice between options. We are familiar with the fact that, at T = 373 K and P = 1 atm, the density ρ of H2O may be high (water phase) or low (steam phase). This happens on a subset of points of the PT plane, a line that extends from the triple point, where ice, water and steam are in equilibrium, to the critical point, Tc = 647 K and Pc = 218 atm, at which water and steam have the same density.14 A second example is the ferromagnetic state of the metals iron, cobalt and nickel. The magnetization vector M is not fixed at low temperature when the applied magnetic field H is zero; it may point in different directions. This free choice of directions ceases when the temperature T exceeds Tc, the critical temperature known as the ferromagnetic Curie temperature. A third example is the ferroelectric state of dielectric crystals, such as Rochelle salt and triglycine sulfate, or certain liquid crystals, such as some chiral organic polymers; see Chapter 17. In this case the polarization vector P is not fixed when the applied electric field E is zero; it may point in different directions. Polarization vanishes at zero field when the temperature T exceeds Tc, the critical temperature known as the ferroelectric Curie temperature.
Thermodynamic effects due to electric fields will be considered in the next chapter. Phenomena that occur near a critical point are called critical phenomena; these are discussed in Chapter 15.

4.1. Phase transitions in thermodynamics

The Gibbs function G is a smooth function over most of the PT plane, but, as we just saw, there may be discontinuities along certain lines in its first derivatives, S and V, given by Equation 3.10. These are called first-order phase transitions. If S and V are continuous as well as G, but the second derivatives are discontinuous, the transition is said to be of second order.
Systems are, in practice, often found that are not in equilibrium but are in so-called metastable states. These states are near, but not on, the equilibrium curves. For example, a vapor can be compressed to a pressure beyond its vapor pressure if nuclei for initiating condensation are lacking. Such a metastable vapor, called supersaturated, is unstable against disturbances such as the passage of an airplane, which may leave a “vapor trail.” Similarly, a liquid may be heated to a temperature above the boiling point, becoming superheated. Metastable states play essential roles in phase transitions of real systems.

4.2. Transitions of first order

We can now derive equations for a first-order transition. For changes at constant pressure and temperature, dP = dT = 0, Equation 3.8 shows that the Gibbs function must be nonincreasing,

dG ≤ 0,    (4.1)

where the equals sign applies to reversible changes. In an irreversible change, G will tend to move toward the minimum value consistent with the constraints on the system. We assume that the necessary mechanisms for lowering the Gibbs function are available, so that the process will not stop at a metastable point, as in supercooling. For a system consisting of two phases characterized by specific Gibbs functions g1 and g2, we can write
G = m1g1 + m2g2,  M = m1 + m2

for the Gibbs function G and the mass M of the total system. The only constraint on the system is the conservation of mass. If g1 < g2, the value of G will be lowered to its minimum Gmin if phase 2 converts to phase 1. Then
Gmin = Mg1.

The system will be stable with arbitrary amounts m1 and m2 only if the specific Gibbs functions are equal, so the condition for phase stability is

g1 = g2.    (4.2)

From the differential form of Equation 4.2, dg1 = dg2, we can write

(∂g1/∂T)P dT + (∂g1/∂P)T dP = (∂g2/∂T)P dT + (∂g2/∂P)T dP.    (4.3)
Using the notation Δ( ) = ( )1 − ( )2, we obtain from Equation 4.3

Δ(∂g/∂T)P dT + Δ(∂g/∂P)T dP = 0.

With Equations 3.10, this becomes the Clausius–Clapeyron relation

dP/dT = ΔS/ΔV = L/(TΔV),    (4.4)

where L = TΔS is the latent heat of the transition per molecule and ΔV indicates a packing difference between the two phases. The Clausius–Clapeyron equation determines the rate at which the equilibrium pressure varies with the equilibrium temperature in the P-T diagram. For example, if phase 1 is liquid and phase 2 solid, ΔS and ΔV will typically be positive, so that dP/dT (and its reciprocal, dT/dP) is positive. One exception is water, since the specific volume of the liquid is less than that of the solid at melting, and ice floats. Then the Clausius–Clapeyron equation implies that the freezing point of water decreases with increasing pressure, a fact that makes ice-skating possible. The specific volumes of the liquid and gas phases may be made equal by heating at constant P or increasing the pressure at constant T. At a critical point, labeled (Pc, Tc), the two phases become indistinguishable, and since ΔV = 0 there, dP/dT cannot be determined from the Clausius–Clapeyron equation, Equation 4.4.

4.3. Chemical potentials and phase diagrams

In an equilibrium between two phases, such as a vapor–liquid equilibrium, a substance is not necessarily homogeneous, even if it is chemically pure. The liquid droplets and the vapor phase are homogeneous parts of a condensing system, characterized by separate Gibbs potentials G1 and G2. Each phase is characterized by the number of its molecules, N1 and N2, as well as the common variables P and T. The total Gibbs potential is the sum of the Gibbs potentials for each phase, G = G1 + G2, and it will be a minimum for equilibrium. The total number of molecules, N = N1 + N2, remains constant as the numbers in each phase, N1 and N2, vary. The phase equilibrium is specified by
(∂G1/∂N1)P,T = (∂G2/∂N2)P,T.

The Gibbs potential per molecule, μ = (∂G/∂N)P,T, is called the chemical potential.15 With this definition, the equilibrium between two phases can be written

μ1(P, T) = μ2(P, T).    (4.5)
This equation defines a relation between P and T. The relation is a curve dividing the PT plane into areas occupied by the different phases. This plot is called a phase diagram, which can be thought of as a section through a three-dimensional μPT graph, with the phases separated by intersecting surfaces. The intersection of the surfaces of μ1 and μ2 above the PT plane defines the line along which the phases are in equilibrium.
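The slope of such an equilibrium line follows from the Clausius–Clapeyron relation, Equation 4.4. A numerical sketch for the liquid–vapor line of water near its normal boiling point (the latent heat and liquid volume are rounded handbook values, and treating the vapor as an ideal gas is an assumption of this sketch):

```python
# Clausius-Clapeyron estimate of the vapor-pressure slope of water at the
# normal boiling point. Latent heat and liquid volume are rounded handbook
# values; the vapor is treated as an ideal gas (an assumption of this sketch).
R = 8.314
T = 373.15               # normal boiling point, K
P = 101325.0             # 1 atm, Pa
L_vap = 40.66e3          # molar latent heat of vaporization, J/mol
v_liq = 1.88e-5          # molar volume of liquid water, m^3/mol
v_gas = R * T / P        # ideal-gas estimate of the molar volume of steam

dP_dT = L_vap / (T * (v_gas - v_liq))   # Eq. 4.4 with L = T*dS
print(dP_dT)             # ~3.6e3 Pa/K
print(P / dP_dT)         # ~28 K: linear estimate of the boiling-point rise per atm
```

Because the vapor volume dwarfs the liquid volume, the slope is dominated by v_gas; the same formula run in reverse gives the familiar pressure-cooker effect.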
Figure 5.3. A phase diagram, showing the triple point B, at which the solid, liquid and gas states are in equilibrium, and the critical point, C, at which the liquid and vapor phases become indistinguishable. From Finkelstein, 1969.
In an isotropic substance with three phases, solid, liquid and vapor, the surfaces may intersect pairwise to give three equilibrium curves: solid–vapor, solid–liquid and liquid–vapor. At a point at which all three surfaces intersect, the triple point, all three phases are in equilibrium. A phase diagram can help us see the way the chemical potentials of two coexisting phases behave in the vicinity of equilibrium; see Figure 5.3.

4.4. Transitions of second order

In a second-order phase transition, not only G but also its first derivatives are continuous. For such a transition, we know from (3.10) that

V1 = V2    (4.6)

S1 = S2.    (4.7)
Expressing equation 4.6 in differential form, we have
Vα1 dT − VK1 dP = Vα2 dT − VK2 dP,

from which we obtain

dP/dT = Δα/ΔK,    (4.8)
where α is the expansion coefficient and K the compressibility,

α = (1/V)(∂V/∂T)P    (4.9)
K = −(1/V)(∂V/∂P)T.    (4.10)

Similarly, taking the differential form of Equation 4.7, we have

(∂S1/∂T)P dT + (∂S1/∂P)T dP = (∂S2/∂T)P dT + (∂S2/∂P)T dP.    (4.11)
With 4.11, the Maxwell relation 3.11, Equation 4.9 and the definition of CP, the specific heat at constant pressure,
CP = T(∂S/∂T)P,

we have

dP/dT = ΔCP/(TVΔα).    (4.12)
Equations 4.8 and 4.12 are called the Ehrenfest equations. Unfortunately, they only apply when the discontinuities are finite, which is often not the case. However, modified forms are useful even when the specific heat becomes infinite at the transition.16
4.5. Qualitative aspects of phase transitions

Matter becomes more highly organized as its temperature is lowered. This tendency summarizes the statistical viewpoint that entropy measures the "disorder" of a physical system. The molecules of a gas are not as highly correlated as those of a liquid, and those of a liquid are less ordered than those of a solid. As we saw in Section 2 above, liquid crystals undergo transitions between these extremes, the degree of order increasing with transitions from isotropic liquid to nematic, then to smectic and finally to crystalline solid. As independent variables such as temperature and pressure are changed, the degree of organization usually changes continuously. However, if we visualize horizontal or vertical lines cutting across the PT plane of Figure 5.3, we see that discontinuous changes appear in the state of aggregation for a first-order transition. The change of organization that occurs during a transition is associated with nonanalytic behavior of the thermodynamic functions. These functions measure the difference in the symmetry of the separate phases. Solution of this difficult problem requires, first, relating the quantum mechanical energy spectrum with its degeneracies to the symmetry of the corresponding phase and, second, relating the thermodynamic functions to this spectrum by the principles of statistical mechanics.

5. FROM STATISTICS TO THERMODYNAMICS

A number of experimental facts, particularly about phase transitions, cannot be explained by the laws of thermodynamics alone. They require a consideration of the actual dynamical behavior of the system in equilibrium. Whereas thermodynamics describes a closed system in thermodynamic equilibrium by a small number of variables such as P and V, the microscopic point of view tells us that the number of degrees of freedom of a system of macroscopic size is of the order of Avogadro's number.
For a macromolecule such as a sodium channel of 250 kDa, the number of degrees of freedom is of the order of 10^5. Clearly, the thermodynamic description is an enormous abbreviation of the microscopic system. The connection between the microscopic and macroscopic points of view, studied by Ludwig Boltzmann, Josiah Gibbs and others, is called statistical mechanics.

5.1. Phase space

Mechanics describes a system of particles by their generalized coordinates and momenta, whose equation of motion is given in terms of the energy operator, the Hamiltonian. Each of the N particles can be described as moving in a space of six dimensions, three for the coordinates and three for the momenta. More abstractly, the entire system can be described as a single point in a phase space of 6N dimensions. Within its constraints, the system can occupy many such points; the set of such points is called the Gibbs ensemble. The laws of mechanics, classical or quantum, tell us the way in which the Gibbs ensemble moves in phase space. A number of integrals of the motion of
individual points may be obtained, such as energy, electric charge and angular momentum. When energy is conserved, each point must move on a hypersurface called the ergodic surface. Except for special symmetries or constraints, there is no reason to believe that the system will prefer one region of the ergodic surface over another. Such unrestricted motions, which come arbitrarily close to every point of the ergodic surface, are termed quasi-ergodic. We can postulate that, in the subspace of the Gibbs ensemble in which energy and other integrals of the motion are constant, the density of points is constant. In other words, all parts of the phase space allowed by the given energy and external constraints represent equally probable states of the system. Two important expressions for ensembles that depend only on the energy integral are the canonical and the grand canonical ensembles. We can divide the phase space into cells for comparison to experiment. A mathematical analysis permits us to equate the statistical mechanical terms to their corresponding thermodynamic terms, with the following results: For a closed system in equilibrium, the density ρi of the ith cell is given by the canonical ensemble,17
ρi = (1/Q) e^(−Ei/kT),    (5.1)

where k is Boltzmann's constant, T the absolute temperature, and

Q = Σi e^(−Ei/kT).    (5.2)
The quantity Q, called the partition function, may in principle be calculated from the exact microscopic Hamiltonian by Equation 5.2. The link between statistical mechanics and thermodynamics is clearly seen through the free energy A, the work done in a reversible isothermal process,

A = U − TS,    (5.3)

which is related to the partition function by

A = −kT ln Q.    (5.4)

When A is known, the thermal and caloric equations of state of the system can be obtained from the formulas

P = −(∂A/∂V)T    (5.5)

S = −(∂A/∂T)V.    (5.6)
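The chain from Q to A to the equations of state (Equations 5.2, 5.4 and 5.6) can be traced numerically for a toy two-level system; the energies and temperature below are arbitrary illustrative choices. The internal energy obtained as U = A + TS must agree with the ensemble average of the energy taken directly from the canonical distribution, Equation 5.1.

```python
import math

# Toy two-level system: trace Q -> A -> S -> U and compare with the
# ensemble-average energy from the canonical distribution, Eq. 5.1.
# Energies and temperature are arbitrary illustrative values.
k = 1.380649e-23           # Boltzmann constant, J/K
energies = [0.0, 1.0e-21]  # two nondegenerate levels, J

def Q(T):                  # partition function, Eq. 5.2
    return sum(math.exp(-E/(k*T)) for E in energies)

def A(T):                  # free energy, Eq. 5.4
    return -k * T * math.log(Q(T))

T, h = 300.0, 1e-3
S = -(A(T + h) - A(T - h)) / (2 * h)    # S = -(dA/dT)_V, Eq. 5.6 by finite difference
U = A(T) + T * S                        # caloric equation of state

U_avg = sum(E * math.exp(-E/(k*T)) for E in energies) / Q(T)
print(U, U_avg)    # the two routes to the internal energy agree
```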
5.2. The canonical distribution

In the present analysis the physical system has been represented by a single point in phase space. It is nevertheless possible to obtain information about the distributions of individual molecules from this approach. From the canonical distribution we can obtain the spread in energy of a gas by integration. The probability that the system as a whole has energy E is

P(E) = g(E) e^(−E/kT)/Q,    (5.7)

where g(E) is the number of states with energy E. To find the distribution of one particular molecule, say molecule 1, we must integrate over the position and momentum coordinates of the other molecules. Under reasonable assumptions, this integration gives for the probability that the first molecule has momentum between p1 and p1 + dp1

f(p1) dp1 = (2πmkT)^(−3/2) e^(−p1²/2mkT) dp1.    (5.8)

For a canonical Gibbs ensemble, the molecules thus are distributed in momentum space by the Maxwell–Boltzmann distribution. Correlations between positions, momenta or other variables of molecules can be obtained by appropriate integrations of the probability function.

5.3. Open systems

For an open system, one that can exchange particles, mass, energy and momentum with its surroundings, the grand canonical distribution is the appropriate formalism. The probability that the system is characterized by the energy E and the number of particles N is proportional to

e^((μN − E)/kT),    (5.9)

where μ is the chemical potential.
This equation is comparable to the canonical distribution of Equation 5.1. Open systems differ from closed systems in internal equilibrium in their time behavior. While in the equilibrium system every flux is compensated by an opposite flux and every gain cancelled by a corresponding loss, this is not true of the open system. Here the symmetry of the system is broken by fluxes that pass through the system, moving it to a state far from equilibrium. We recall that biological systems are open systems. Another open system is the laser, in which an active material such as ruby held between two parallel mirrors is illuminated by light coming from an ordinary lamp. When the power of the lamp is greater than a certain threshold, the light in the cavity forms a coherent wave. When one of the mirrors is semi-transparent, a beam of
coherent light is emitted.18 A simpler example is a flute or recorder, in which a stream of air hitting a sharp edge resonates in an open tube to produce a sound of discrete frequencies determined by the geometry of the tube.

5.4. Thermodynamics of quantum systems

Josiah Gibbs, the American physicist who developed the principles of statistical mechanics, applied them to classical mechanics and hence was unsuccessful in describing real physical systems. However, the deep correspondence between quantum and classical mechanics (Section 1.6 of this Chapter) allows us to apply the principles of statistical thermodynamics to quantum systems with little change to the formalism. The quantum partition function may be obtained from the matrix formulation of quantum mechanics; here, following Finkelstein's book, we will simply postulate it.19 The Hamiltonian operator for a system of structureless particles may be written
H = Σi pi²/2m + Σi<j Uij,    (5.10)
where the momentum operator pi = −iℏ∇i, and the potential energy Uij is a function of the magnitude of the displacement vector between the ith and jth particles. The time-independent Schrödinger equation is given by Equation 1.3, Hψ = Eψ. If a system is fully described by a set of quantum numbers a1, a2, ..., the energy spectrum of the solution to Schrödinger's equation can be written as a function of the quantum numbers E(a1, a2, ...). We now define the partition function as

Q = Σ e^(−E(a1, a2, ...)/kT),    (5.11)
where the sum extends over the complete spectrum. According to the Correspondence Principle, we postulate that the connection with thermodynamics is given by Equations 5.4-5.6, which give the thermal and caloric equations of state. Remarkably, these equations, together with the knowledge of the atomic structure of the hydrogen atom, and taking account of the Coulomb field between electrons and protons, make it possible in principle to determine the equations of state of all the phases and phase transitions of hydrogen. The partition function Q may be written in a more abbreviated form than Equation 5.11 if we symbolize the set of states with quantum numbers a1, a2, ... as α.
Q = Σα e^(−Eα/kT).    (5.12)

The set of states often has symmetry, so that different states possess the same
energy. The number of states gn with energy level En is called the degeneracy of that level. The degeneracy generally increases with energy. In terms of the distinct energy levels En, the partition function may be written

Q = Σn gn e^(−En/kT).    (5.13)

5.5. Phase transitions in statistical mechanics

In statistical mechanics, all thermodynamic properties are supposed to be deduced from the atomic structure by the application of Schrödinger's equation and the partition function. This general formalism should yield the phase transitions and the equations of state of the separate phases. The detailed structure of the condensed phases should fall out of this formalism. The degeneracy of the energy states determines the dimensionality of the system's representation. The degeneracy is high in the gas phase, but decreases as the energy is decreased. We can illustrate this problem by considering a mass of hydrogen at a pressure such that only the solid and vapor forms exist. The qualitative features of the spectrum are shown in Figure 5.4.20
Figure 5.4. Spectrum of a system that may exist in either a gas or solid phase. From Finkelstein, 1969.
The spectrum shows that at low temperature the hydrogen is frozen into a solid, in which only rotational states of the macroscopic crystal appear. At somewhat higher
temperatures, the solid exhibits the universal Debye spectrum, with its specific heat proportional to the cube of the temperature. With rising temperature, the detailed crystal structure with its symmetries is observed. Above the sublimation temperature Ts, hydrogen is a gas. At very high temperature, T4, the degeneracy is so high that the spectrum is well approximated by the states of an ideal gas.

5.6. Structural transitions in ion channels

We have seen that quantum mechanics is the correct formulation for the analysis of microscopic systems such as ion channels. A condensed matter system can exist in different phases, characterized by different physical properties. Which of its possible phases a region of matter occupies depends on its temperature, pressure and the magnitude and direction of any fields applied to it. We will discuss the application of these concepts to fluctuations in Chapter 11 and to structural transitions such as the open–close transitions of voltage-sensitive ion channels in the chapter on critical phenomena, Chapter 15.

NOTES AND REFERENCES

1. B. S. Chandrasekhar, Why Things Are the Way They Are, Cambridge University, 1998, 5.
2. The electron volt (eV) is the change of potential energy of an electronic charge moved against the potential difference of one volt.
3. R. T. Weidner and R. L. Sells, Elementary Modern Physics, Allyn and Bacon, Boston, 1960, 426f.
4. T. N. Theis and P. M. Horn, Physics Today 56(7):44-49, July 2003.
5. Richard P. Feynman, Robert B. Leighton and Matthew Sands, The Feynman Lectures on Physics, vol. III, Addison-Wesley, Reading, Mass., 1965, 1-1.
6. Fourier analysis is discussed in Chapter 10.
7. Joel Bernstein, Polymorphism in Molecular Crystals, Clarendon, Oxford, 2002.
8. Roger Penrose, The Emperor's New Mind, Oxford, 1989, 437f.
9. G. Adam, P. Läuger and G. Stark, Physikalische Chemie und Biophysik, Springer, Berlin, 1977, 274f. With kind permission of Springer Science and Business Media.
10. Alexander G. Petrov, The Lyotropic State of Matter: Molecular Physics and Living Matter Physics, Gordon and Breach, Amsterdam, 1999.
11. K. Hiltrop, in Chirality in Liquid Crystals, edited by Heinz-Siegfried Kitzerow and Christian Bahr, Springer, New York, 2001, 447-480. With kind permission of Springer Science and Business Media.
12. Reprinted from J. Israelachvili, Intermolecular and Surface Forces, Academic, London, 1985, with permission from Elsevier.
13. This law has been expressed in several ways: A transformation whose only final result is to transform into work heat extracted from a source which is at the same temperature throughout is impossible (Kelvin). A transformation whose only final result is to transfer heat from a body at a given temperature to a body at a higher temperature is impossible (Clausius).
14. S. K. Ma, Modern Theory of Critical Phenomena, W. A. Benjamin, Inc., 1976, 2ff.
15. Minoru Fujimoto, The Physics of Phase Transitions, Springer, New York, 1997, 5.
16. Robert J. Finkelstein, Thermodynamics and Statistical Physics: A Short Introduction, W. H. Freeman, San Francisco, 1969, 49-52.
17. See, e.g., Finkelstein, 101-129.
18. See, e.g., Giorgio Careri, Order and Disorder in Matter, Benjamin/Cummings, Menlo Park, 1984, 98-103.
19. Finkelstein, 122.
20. Finkelstein, 188.
CHAPTER 7
IONS DRIFT AND DIFFUSE
Coming to the heart of membrane science, we now focus on electrodiffusion, a theoretical model that has played a central role in the development of our understanding of ion currents through membranes.

1. THE ELECTRODIFFUSION MODEL

Simplicity is a powerful idea, as important in science as in music, art and lifestyle. As the Shaker tune has it, 'Tis a gift to be simple. Physicists in particular treasure the simple: “Try simplest cases first” is a common admonition. So it comes as no surprise that the first theory on the behavior of ions in membranes was a simple one. This chapter is devoted to the mathematical development of this model, the electrodiffusion model. An equivalent model in solid-state physics is called drift–diffusion. Electrodiffusion deals with problems of ion flow from a macroscopic perspective, that is, without worrying about the details of the environments of the individual ions; it treats the membrane as a continuous medium with two bounding surfaces. While electrodiffusion is important as one of the principal starting points of excitable-membrane theory, it is by no means an endpoint. It may be thought that modeling a membrane as a featureless wall cannot be a valid approach, since we know that the membrane has a richly complex structure, as we saw in Chapter 2. To be sure, if we could isolate each component and study it separately, it would seem preferable to do so. Unfortunately, this cannot be done for the protein components. Their conduction properties can only be studied when they are embedded in a lipid bilayer, and not in isolation. The technique of patch clamping has taken us part of the way toward the goal of channel isolation, but it must be remembered that even in a micrometer-size patch, the channels only occupy a small fraction of the membrane area. So we really can't study the function of protein channels separately.
And even if we could, the channel properties would still have to be recombined with the lipid properties to obtain a holistic model of a membrane’s behavior between two aqueous phases. It is perfectly valid to apply a continuum approach to the study of membranes, as long as we are aware of the limitations of the method. After all, we use the gas laws, which treat gases as continua, even though we are aware that gases consist of individual
molecules colliding with each other and the walls of the container. The gas laws are accurate for macroscopic measurements, and continuum models of membrane conduction, if we get them right (and that's a big if!), should be accurate for membrane measurements with large (not micropatch) electrodes. We just have to realize that the results will give us an average in which the individual channel behavior will be diluted and smoothed out. This dilution and smoothing is seen in macroelectrode experiments, which give records in which single-channel currents cannot be distinguished.

1.1. The postulates of the model

An ion's motion, as postulated by Nernst and Planck, is ruled by two tendencies. One is the tendency of a particle to diffuse, bouncing drunkenly from a region of high concentration to one of low concentration. The other tendency of the ion is to drift, like a tumbleweed on a windy prairie, migrating in the direction of the force that the electric field exerts on its electric charge. The ion's movement, according to the electrodiffusion model, is made up of these two components, diffusion and migration (or drift). Electrodiffusion is one of the key concepts of membrane science; it plays a central role in the science of voltage-sensitive ion channels because of its mathematical simplicity and consistency. As we already saw in Chapter 3, it led to the development of the membrane hypothesis in 1890-1902. Also, as we will see in the next Chapter, it was the starting point for a paper by Hodgkin and Katz that led to the important 1952 model of Hodgkin and Huxley, which provided the first mathematical description of an action potential. And, as we will see in Chapter 14, electrodiffusion provides a clue that connects ion-channel science to a branch of condensed-state physics, ferroelectricity. Like all models, electrodiffusion is a simplification of the real situation.
We will be making some rather unrealistic assumptions, but that is the price we have to pay for the insights we can obtain from a mathematical model. Diffusion and electrodiffusion are macroscopic models, models that deal with matter as a continuum. The results of analyses of macroscopic models are validly applied to measurements at a scale that includes a huge number of molecules, so that the measured variables can be treated as continuous functions. As mentioned above, one example of a continuum model is the gas laws, which treat pressure as a continuous variable. It took a long time after these macroscopic laws were discovered by Boyle and Charles before the kinetic theory of Boltzmann, Maxwell, Clausius and others explained the nature of pressure as an emergent property of numerous discrete molecular collisions. Application of the electrodiffusion theory is sometimes criticized because of its disregard of the molecular structure of the membrane. However, since the structure of the molecules that constitute the membrane is still largely unknown, it is just this feature that makes it useful. The application of the electrodiffusion model to a membrane does not mean that one is asserting that the actual membrane is a featureless wall; rather, it means that one is taking a macroscopic view of the membrane.
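The smoothing that a macroscopic measurement performs over many discrete channels can be illustrated with a short simulation (purely illustrative: the channel number, open probability and unitary current are invented, and the channels are treated as independent two-state units):

```python
import random

random.seed(7)
n_channels = 2000   # channels under the electrode (assumed)
p_open = 0.2        # open probability of each channel (assumed)
i_single = 1.0      # unitary current, arbitrary units

# At each time point, count how many independent channels happen to be open.
currents = []
for _ in range(100):
    n_open = sum(random.random() < p_open for _ in range(n_channels))
    currents.append(n_open * i_single)

mean_current = sum(currents) / len(currents)
# The summed record hovers near n_channels*p_open*i_single; the discrete
# single-channel steps are diluted and smoothed out, as described in the text.
print(mean_current / (n_channels * i_single))
```

A single channel in this model is a binary on/off record; only the sum over thousands of channels looks like the continuous current a macroelectrode reports.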
1.2. A mathematical membrane

Although electrodiffusion has been used in various ways, we will begin with the simplest case, that of a uniform membrane bounded by planar surfaces. Electrodiffusion is based on certain simple model assumptions. The real membrane is of course nonuniform and its surface may well be quite uneven; so, since we know these assumptions are not true, why bother with electrodiffusion at all? The answer is that electrodiffusion is a simple, powerful tool to examine a number of features that a real membrane must possess. It gives us a mathematical representation of an idealized membrane with the microscopic details blurred out by an implicit averaging. These assumptions need not be true, but the inexorable logic of mathematical reasoning endows the model with a consistent structure. That means that, if the conclusions of our model turn out to be contradicted by experimental fact, we will know that the source of the error must be one or more of the assumptions. That can lead us to the next step in our reasoning. Our model consists of two aqueous phases, separated by a membrane of uniform material. In three dimensions, the membrane will be in the xy plane, and the z axis will be along the outward membrane normal. For simplicity we consider the membrane to be bounded by parallel planar surfaces. The membrane thickness is L. On one side, labeled I, is the internal, cytoplasmic solution; on the other side, II, is an aqueous solution corresponding to the external fluid bathing the axon. We will follow the convention that the external potential is zero and the internal potential is the membrane voltage Vm. We assume that the aqueous solutions are electrically neutral, at least in the bulk.1 Since the membrane is thin and not necessarily a good electrical conductor, it need not be electrically neutral.

1.3. Boundary conditions

Within the membrane, the number density of ions, N, will vary continuously from its value at the left boundary, N(0) = NI, to its value at the right boundary, N(L) = NII; these boundary values are collectively called the boundary conditions. We must not think, however, that the ion concentration just inside the membrane at the boundary will be equal to that just outside. After all, the solubility of an ion or molecule will be different in different media. The ratio of solubility of the membrane to that of water is called its partition coefficient and given the symbol β.2 The partition coefficient of a membrane is not directly measurable. We can write
(1.1) where Ni and No are the concentrations of the ion in the aqueous solutions inside and outside the cell, respectively. It should be noted that we are here assuming that the aqueous media inside and out are “well stirred,” so that their boundary values are equal to their bulk values. This
is more likely to be the case when the axon is internally perfused than in vivo. Ion accumulation and depletion may play significant roles in electrical behavior, as we saw in Chapter 6.

2. ONE ION SPECIES, STEADY STATE

We shall begin with several additional simplifications. Of the various ions in the system of a real membrane, we shall focus on only one ionic species that can permeate the membrane. We shall also assume that the system is in a steady state, so that the flow of ions is not changing in time. Note that a steady state does not imply equilibrium, since at equilibrium the flow of ions would cease. Time dependence and multi-ion electrodiffusion will be treated in Chapter 8.

2.1. The Nernst–Planck equation

Let us temporarily disregard the charge of the ion and consider the diffusion of a neutral molecule, such as a molecule of sugar. Our particle is repeatedly colliding with other particles of its own kind, which in general are not uniformly distributed. The majority of the collisions are with particles arriving from the direction of highest concentration. These tend to impel the particle in the other direction, so the average movement is from higher to lower concentration, down the concentration gradient. In one dimension, we can say that the flux Φ due to diffusion is proportional to the negative derivative of concentration with distance,

    Φ = −D dN/dz        (2.1)

This equation is Fick's first law. The derivative is a one-dimensional component of the concentration gradient; the scalar flux is a component of a vector. N is the number density of the ions.3 Flux is the number of ions crossing a unit area in unit time, so that its mks units are m⁻²s⁻¹. The proportionality constant D is called the diffusion coefficient. Since the units of N are m⁻³, the units of D are m²s⁻¹, square meters per second. An empirical relation, valid when the boundary concentrations are not too dissimilar, is

    Φ = P (N_i − N_o)        (2.2)

where P is the permeability of the given ion in the membrane.
Now let us restore the ion's charge and compute the migration component. The force F on an ion of charge q due to the electric field E at its location is qE.4 The field is the negative gradient of the electric potential V; in one dimension,

    E = −dV/dz        (2.3)
Averaging over many collisions, we say that the ion is dragged by the force F through a viscous medium, so that its velocity v is proportional to the force, v = uF. The proportionality constant u is called the mechanical mobility5; its units are m N⁻¹s⁻¹. Therefore, the average drift velocity of an ion is uqE. The flux is the ion concentration times the average velocity, so that

    Φ = quEN        (2.4)

Adding the two contributions, Equations 2.1 and 2.4, we obtain the net flux due to both diffusion and migration.
Since each ion carries charge q, the current density J is q times the net flux,

    J = −qD dN/dz + q²uEN        (2.5)

The units of current density are amperes per square meter, A m⁻². In the stationary state, J is uniform across the membrane, as we will show in Section 8.2, so we can treat J as a constant here. The charge q of the ion is the product of its (signed) valency Z times the proton charge e,

    q = Ze        (2.6)

The constants D and u both describe the motion of the ion, so it is not surprising that they are related. As Einstein showed, the diffusion coefficient equals the mobility times the temperature, expressed in energy units,

    D = ukT        (2.7)

Using the boundary conditions, Equation 1.1, with Equations 2.2 and 2.7, we can show that the empirical permeability P is

    P = βukT/L = βD/L        (2.8)

Using Einstein's relation to eliminate D from (2.5) and rearranging, we obtain

    J = qu(−kT dN/dz + qEN)        (2.9)
This is the Nernst–Planck equation, introduced in Chapter 2. Using the values of the constants, we can calculate the value of kT/e; see the Box on Chemical Notation on page 146. The Nernst–Planck equation describes the ion current as made up of two contributions, the first depending on the gradient of the ion concentration and the second depending on the product of field and concentration. The first term, containing the concentration gradient, will be negative if N_I is greater than N_II, as is usually the case for the potassium ion. Because of the negative sign in front of this term, the diffusion contribution to the current will therefore be positive, representing an outward flux. Because N is always positive, the direction of the migration current will be determined by E and the sign of the ionic charge. We note that E = −dV/dz and the voltage across the membrane is the potential inside relative to that outside,

    V_m = V_I − V_II        (2.10)

because the outside potential is by convention set equal to zero. Thus, for the membrane at its resting potential, when the potential inside is negative relative to the outside, the membrane voltage will be negative and the field will be inward. Because the two negative signs cancel, the migration contribution to the ion current for a positive ion will be inward at resting potential.

2.2. Electrical equilibrium

We see that the diffusion and migration components of the current density may be in opposite directions, as for example for the potassium ion at resting potential. When they cancel out exactly, the membrane system is in electrical equilibrium. No net current flows, J = 0, so that Equation 2.9 becomes

    kT dN/dz = qEN        (2.11)
Rearranging, we find that

    d(ln N)/dz = qE/kT        (2.12)

where we have used the fact that the derivative of the natural logarithm of the dependent variable is the reciprocal of that variable times its derivative. Using (2.3), we obtain

    d(ln N)/dz = −(q/kT) dV/dz
We integrate both sides to obtain

    kT ln(N_II/N_I) = −q(V_II − V_I)        (2.13)

Using V_m = V_I − V_II, we rearrange this equation to find

    V_m = (kT/q) ln(N_II/N_I) = (kT/q) ln(N_o/N_i)        (2.14)

This is the equation for the Nernst potential; notice that the units of concentration cancel, as does the partition coefficient. It is often convenient to write it in terms of the common logarithm, with Equation 2.6,

    V_m = (2.303 kT/Ze) log₁₀(N_o/N_i)        (2.15)

Taking the exponential of both sides of (2.14) yields

    N_o/N_i = exp(qV_m/kT)        (2.16)

This equation can also be derived directly from the partition function; see Section 5 of Chapter 5. Here is an example of the use of the Nernst equation; the answers are given in Table 4.1 of Chapter 4: Calculate V_m for (a) N_i = 400 mM, N_o = 10 mM, Z = 1; (b) N_i = 50 mM, N_o = 460 mM, Z = 1; (c) N_i = 0.0001 mM, N_o = 10 mM, Z = 2; (d) N_i = 100 mM, N_o = 540 mM, Z = −1. Assume a temperature of 20°C. These values are typical for potassium, sodium, calcium and chloride respectively in squid axon membranes.

How are the boundary conditions applied? Let us suppose, for concreteness, that the internal solution (i) is 400 mM KCl and the external solution (o) is 10 mM KCl. To keep things simple, let us suppose that the membrane is permeable to K+ ions but not to Cl- ions. If we ignore electrical forces for now, it is clear that the potassium ions will tend to diffuse outward. Since the KCl is completely dissociated, there are 10 × 10⁻³ × Avogadro's number of potassium ions in a liter, 10³ cubic centimeters, of external solution. Therefore the number density of potassium ions in o is

    N_o = 10 × 10⁻³ × 6.02 × 10²³ / (10³ cm³) = 6.02 × 10¹⁸ cm⁻³

In mks (meter-kilogram-second) units we can write

    N_o = 6.02 × 10²⁴ m⁻³
    N_i = 40 × 6.02 × 10²⁴ m⁻³ = 2.41 × 10²⁶ m⁻³
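As a check on this worked example, the four Nernst potentials can be computed directly. The following short script is not from the text; the function name and layout are ours, and the physical constants are the standard CODATA values. The results can be compared with Table 4.1 of Chapter 4.

```python
# Nernst potentials for the four squid-axon cases in the text.
# Concentrations are in mM; the units cancel in the ratio N_o/N_i.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
e = 1.602177e-19   # proton charge, C
T = 293.15         # 20 deg C, in kelvin

def nernst_mV(N_i, N_o, Z, T=T):
    """Nernst potential V_m = (kT/Ze) ln(N_o/N_i), in millivolts."""
    return 1e3 * (k * T / (Z * e)) * math.log(N_o / N_i)

ions = [("K+",   400,  10,  1),
        ("Na+",   50, 460,  1),
        ("Ca2+", 1e-4, 10,  2),
        ("Cl-",  100, 540, -1)]
for name, N_i, N_o, Z in ions:
    print(f"{name:5s} V_m = {nernst_mV(N_i, N_o, Z):+7.1f} mV")
```

Note that the valency enters only through q = Ze in the denominator, which is why the calcium potential comes out at roughly half the slope of the monovalent ions.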
CHEMICAL NOTATION

Chemists and biophysicists often count in units of the Avogadro number, N_A = 6.02 × 10²³ mol⁻¹. In these units, the Boltzmann constant k is replaced by the molar gas constant R = N_A k = 8.31 J mol⁻¹K⁻¹, and the proton charge e is replaced by the Faraday constant F = N_A e = 9.65 × 10⁴ C mol⁻¹. Instead of the number density N, the concentration c is used, in units of moles per liter; 1 mol l⁻¹ = 10³ N_A m⁻³. Since kT/e = RT/F, and the units of concentration cancel, we can write the Nernst equation V = (RT/ZF) ln (c_II/c_I). However, because the number of ions crossing a channel is small compared to N_A and the additional constant complicates the equations, chemical notation generally will not be used in this chapter. A useful rule of thumb is that kT/e = RT/F is approximately equal to 25 mV at room temperature.
If we knew the values of the ionic current density and the concentration at the left and right boundaries, we could use the Nernst–Planck and Gauss equations to find the concentration a short distance to the right of the left boundary, say one hundredth of the membrane thickness, and then use these values to go another short distance, and so on. Such numerical methods are often useful.6 Starting from one end and working toward the other, called a shooting method, has the disadvantage that it may be difficult to end up at the correct boundary value on the other side. When both boundary conditions are specified, the problem is a two-point boundary value problem. We will not use numerical analysis here, but seek a general solution. This solution will have arbitrary constants, and we can set these to values for which the boundary conditions are satisfied.

3. THE CONSTANT FIELD APPROXIMATION

The Nernst–Planck equation 2.9 and Gauss's law together form a system of differential equations. We solve the problem exactly below, but for now we can simplify the mathematics by making a reasonable assumption based on an examination of the physical situation. We are dealing with a very thin membrane, and while it is true that the charges within the membrane will change the electric field, this contribution to the field will often be small compared to the external field across the membrane. Thus the field E can be taken as uniform along the space variable z. As a function of z, therefore, the field will be assumed to be constant.
3.1. Linearizing the equations

Within this constant-field approximation, Gauss's law is replaced by the simple equation

    dE/dz = 0        (3.1)

With this substitution, the Nernst–Planck equation (2.9) becomes

    J = qu(−kT dN/dz + qEN),  E constant        (3.2)

where E has been absorbed into the constant coefficient in parentheses. This differential equation is linear, unlike the situation in the more general equation 2.9, which is made nonlinear by the term in EN. Because of the linearity of this equation, its solution is straightforward. Let

    y = N − J/(q²uE)        (3.3)
Then, since J is constant,

    dy/dz = dN/dz

so that Equation 3.2 becomes

    dy/dz = (qE/kT) y        (3.4)

This equation can be solved for y in the same way we solved for the Nernst equation in Equation 2.12. Integrating, we have

    ln y = (qE/kT) z + c
where c is a constant. Using (3.3) to eliminate y, we obtain

    ln [N(z) − J/(q²uE)] = (qE/kT) z + c

We can evaluate c by applying the left-hand boundary condition, N = N_I at z = 0:

    c = ln [N_I − J/(q²uE)]

Substituting and rearranging, we find that

    ln { [N(z) − J/(q²uE)] / [N_I − J/(q²uE)] } = qEz/kT        (3.5)

where the dependence of N on z is explicitly indicated. Taking the exponential of both sides and simplifying,

    N(z) − J/(q²uE) = [N_I − J/(q²uE)] exp(qEz/kT)

Rearranging, we obtain

    N(z) = J/(q²uE) + [N_I − J/(q²uE)] exp(qEz/kT)        (3.6)
Equation 3.6 shows that under the constant field assumption the ion concentration has an exponentially rising or falling profile across the membrane, and that the sign of the prefactor of the variable term depends on whether J is less than or greater than q²uEN_I. If they are equal, N is independent of z.

3.2. The current–voltage relationship

To calculate the important current–voltage relationship, we apply the right-hand boundary condition, N(L) = N_II, to Equation 3.6,
and solve for J to obtain

    J = q²uE [N_I exp(qEL/kT) − N_II] / [exp(qEL/kT) − 1]

Since the membrane voltage V_m = EL, we can write after some rearrangement

    J = (q²PV_m/kT) [N_i exp(qV_m/kT) − N_o] / [exp(qV_m/kT) − 1]        (3.7)

where we have used Equations 1.1 and 2.8. According to Equation 3.7, the J–V curve approaches linear asymptotes as the membrane voltage attains large positive and negative values. As V_m → +∞, J/V_m → q²uN_I/L and, as V_m → −∞, J/V_m → q²uN_II/L. The ratio of these quantities, known as the rectification ratio, thus is equal to N_I/N_II. By examining the behavior of Equation 3.7 as we approach zero membrane voltage, we can compare the results to the flux of a neutral molecule through a membrane. Noting that, for small values of x, eˣ ≈ 1 + x, we see that Equations 2.2 and 2.8 can be recovered.

3.3. Comparison with axonal membrane data

Now that we have an approximate solution to the stationary electrodiffusion problem for a single ion, we can compare it with data. Figure 7.1 shows steady state and peak inward current–voltage characteristics measured at 5, 10 and 20°C.7 Stationary theory of course does not apply to the peak inward case, but even the steady state data show conflict with the electrodiffusion results. This is seen primarily in the rectification ratio, which is much greater in the axon than in the mathematical model. The results of the comparison are disappointing: the squid axon is a much better rectifier than the classical electrodiffusion model predicts. The electrodiffusion model in this approximation does not do justice to the potassium system.
Figure 7.1. Current-voltage characteristics for squid axon membrane under voltage clamp at 5, 10 and 20 °C. Steady state currents are labeled Iss and early inward peak currents, Ip. From Cole, 1972.
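The asymptotic behavior of the constant-field current–voltage relation, Equation 3.7, is easy to check numerically. In the sketch below (not from the text), all dimensional constants are lumped together, so the current is computed in arbitrary units as a function of the scaled voltage x = qV_m/kT; the function name and parameter values are ours.

```python
# Constant-field J-V relation in scaled form: j = x*(nI*exp(x) - nII)/(exp(x) - 1),
# where x = q*V_m/kT and nI, nII are the boundary concentrations (arbitrary units).
import math

def j_cf(x, nI, nII):
    """Dimensionless constant-field current at scaled voltage x."""
    if abs(x) < 1e-9:
        return nI - nII          # the x -> 0 limit of the expression
    return x * (nI * math.exp(x) - nII) / (math.exp(x) - 1.0)

# Asymptotic slopes j/x recover the boundary concentrations, so the
# rectification ratio is nI/nII:
print(j_cf(50, 400, 10) / 50)    # approaches nI = 400
print(j_cf(-50, 400, 10) / -50)  # approaches nII = 10

# With equal concentrations the relation collapses to j = n*x: ohmic.
for x in (-2.0, -1.0, 1.0, 2.0):
    print(j_cf(x, 100, 100) / x)
```

The last loop illustrates the symmetric-solution case taken up next: when the inside and outside concentrations are equal, the exponentials cancel and the model membrane is a linear resistor.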
When the inside and outside concentrations are equal, electrodiffusion theory predicts a straight line for the current–voltage curve: In theory, this system is ohmic. It turns out, however, that, while the I–V characteristic is linear over a limited region near the origin, the nonlinearity of the membrane asserts itself in bumps elsewhere along the voltage axis.8 When we consider the sodium system, the failure of the model is even more pronounced, as it fails to explain the negative resistance of the sodium current. In a 1965 review assessing electrodiffusion, Cole wrote that “such simple models and such elementary analyses ... are not adequate.”9 By that time, many electrophysiologists had already abandoned electrodiffusion in favor of the Hodgkin–Huxley model; see Chapter 9. But have we really learned all we can from the electrodiffusion model? What features of the solution have we lost in the linearization inherent in the constant field approximation? Let us
proceed to the exact solutions, which have been known since 1936.10

4. AN EXACT SOLUTION

The simple assumption that the electric field is a constant across the membrane cannot be valid. The ions are, after all, charged, and so they must contribute to the field. The relationship that expresses this fact is Gauss's law, which we have already seen in Chapter 6.

4.1. One-ion steady-state electrodiffusion

In three dimensions, in rationalized mks (SI) units, Gauss's law is

    ∇·D = qN        (4.1)

where D is the electric displacement vector and qN is the charge density. The relation between D and the electric field E can be written, as in Equation 1.4 of Chapter 6,

    D = εε₀E        (4.2)

where the relative permittivity ε is assumed, in classical electrodiffusion, to be a constant. Inserting (4.2) into (4.1), we have

    ∇·E = qN/εε₀        (4.3)

In one dimension, this becomes

    dE/dz = qN/εε₀        (4.4)
This becomes the companion equation to the Nernst–Planck equation (2.9). There are two dependent variables, N and E, and two equations relating them, so the problem is, as mathematicians say, well posed. These equations can be solved simply for the special case J = 0, but we will proceed to the general case of a finite current first.11

4.2. Finite current

Equations 2.9 and 4.4 form a complete system of differential equations, in the sense that they contain enough information to specify a solution, given the boundary conditions. We could solve them just as they stand, but they look a little cluttered. What if the constants could all be made to disappear? Of course, we can't just set them equal to
one, but we can get the same effect by absorbing the constants into the variables. Let N₀ be an arbitrary unit of concentration. Then we can define a distance unit

    s₀ = [εε₀kT/(q²N₀)]^½        (4.5)

a quantity closely related to the Debye length. We will replace E, N and J by their lower-case equivalents, and convert z to s, by using the following equations for our transformation of variables:

    s = z/s₀        (4.6)
    n = N/N₀        (4.7)
    e = (qs₀/kT) E        (4.8)
    j = [s₀/(qukTN₀)] J        (4.9)
    v = (q/kT) V        (4.10)

Substituting (4.6) to (4.10) into Equations (2.9), (4.4) and (2.3), we arrive at the dimensionless equations

    j = −dn/ds + en        (4.11)
    de/ds = n        (4.12)
    e = −dv/ds        (4.13)

We can see that they are like the earlier equations but with the constants set equal to one (or two), and it was done quite legitimately! When we are finished solving them, we can simply use the scaling transformation (4.6)–(4.10) in reverse to put the constants back in. Now let's solve the equations. Substituting (4.12) into (4.11) to eliminate n, we obtain

    j = −d²e/ds² + e de/ds        (4.14)
a second-order differential equation. We integrate it to obtain the first-order equation

    de/ds = e²/2 − js − g        (4.15)

where g = e_I²/2 − n_I is a constant. Since the value of s₀ in Equation 4.6 has not yet been assigned, we now set it so that

    n_I = e_I²/2        (4.16)

With this assignment, the constant g in (4.15) vanishes, and we have

    de/ds = e²/2 − js        (4.17)

This is a form of Riccati's equation. To simplify it, we substitute

    y = exp(v/2),  so that  e = −(2/y) dy/ds        (4.18)

where we have used (4.13). Differentiating the first of these equations twice, we obtain

    dy/ds = −(e/2) y        (4.19)

    d²y/ds² = (1/2)(e²/2 − de/ds) y        (4.20)

Since from Equation 4.17 the quantity in parentheses equals js, we arrive at

    d²y/ds² = (js/2) y        (4.21)

We assume that j ≠ 0; the zero-current case will be solved below. To convert Equation 4.21 to a standard form, we absorb the constant j by the substitution

    ζ = (j/2)^⅓ s        (4.22)

Then, by the chain rule,

    d²y/ds² = (j/2)^⅔ d²y/dζ²        (4.23)
so that (4.21) becomes

    d²y/dζ² = ζ y        (4.24)

This is the Airy equation. Since the Airy equation is an equation of second order, it has two linearly independent solutions; these are called the Airy functions and written Ai(ζ) and Bi(ζ). They and their first derivatives Ai′(ζ) and Bi′(ζ) are tabulated, and their properties
Figure 7.2. The Airy function Ai(x). From Wolfram, 2002.
listed, in mathematical tables and handbooks.12 Computer routines for them are available. Higher derivatives can be calculated from these by using (4.24); e.g., Ai″(ζ) = ζ Ai(ζ). The Airy functions are oscillatory for negative arguments. When the argument ζ is positive, Ai(ζ) decays and Bi(ζ) grows rapidly. Figure 7.2 shows the function Ai(x).13 The general solution to Equation 4.24 is

    y = a Ai(ζ) + b Bi(ζ)        (4.25)

where a and b are arbitrary constants to be determined by the boundary conditions. From (4.18) we obtain for the electric potential

    v = 2 ln [a Ai(ζ) + b Bi(ζ)]        (4.26)

Factoring out the a and substituting (4.22) results in

    v = 2 ln {Ai[(j/2)^⅓ s] + R Bi[(j/2)^⅓ s]} + 2 ln a        (4.27)

where R = b/a. The constant a is determined by the reference potential.
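Since computer routines for the Airy functions are indeed widely available, the properties just quoted are easy to verify. The sketch below (not part of the text) uses SciPy's `airy` routine, which returns the tuple (Ai, Ai′, Bi, Bi′) at a given argument.

```python
# Numerical checks of the quoted Airy-function properties, using SciPy.
from scipy.special import airy

z = 2.0
ai, aip, bi, bip = airy(z)   # Ai(z), Ai'(z), Bi(z), Bi'(z)

# The Airy equation y'' = z*y lets higher derivatives be computed from
# tabulated values. Check Ai'' = z*Ai against a central difference of Ai'.
h = 1e-5
ai_second = (airy(z + h)[1] - airy(z - h)[1]) / (2 * h)
print(abs(ai_second - z * ai))   # ~0: the Airy equation is satisfied

# For positive argument, Ai decays while Bi grows rapidly:
print(airy(5.0)[0] < airy(1.0)[0] < airy(0.0)[0])
print(airy(5.0)[2] > airy(1.0)[2] > airy(0.0)[2])

# For negative argument the functions oscillate; Ai changes sign
# between -1 and -3 (its first zero lies near -2.34):
print(airy(-1.0)[0] > 0 > airy(-3.0)[0])
```

The sign change for negative arguments is what produces the zeros of the denominator in the solutions below, and hence the singular points discussed in the next subsection.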
For the dimensionless field e we obtain from (4.19)

    e = −2(j/2)^⅓ [Ai′(ζ) + R Bi′(ζ)] / [Ai(ζ) + R Bi(ζ)]        (4.28)

where the a's have canceled. This result could be obtained more directly by differentiating the potential (4.27). For the ion concentration we use, from (4.12) and (4.17),

    n = de/ds = e²/2 − js        (4.29)

a parabolic relation, which with (4.25) results in

    n = 2(j/2)^⅔ { [Ai′(ζ) + R Bi′(ζ)]² / [Ai(ζ) + R Bi(ζ)]² − ζ }        (4.30)
Figure 7.3. Exact solutions of the electrodiffusion equations: plots of ion concentration n, electric field e and voltage v, as functions of distance s, for the positive dimensionless current density j = 1. The electric field at s = 0 is e_I = 2, −2, as labeled. Left: g = −2; Middle: g = 0; Right: g = 2. Adapted from Leuchtag and Swihart, 1977.
As often happens with nonlinear differential equations, these solutions are not well behaved. From Equations 4.27, 4.28 and 4.30 we see that singularities occur at points at which

    Ai(ζ) + R Bi(ζ) = 0        (4.31)

These singular points separate the functions into branches or domains, as in the Maxwell–Wagner effect; see Figure 6.3(c), page 128. Figure 7.3 shows a set of these functions for j = 1 and g = −2, 0 and 2.14 The graphs display the logarithmic singularity of the potential, the positive and negative infinities of the field and the positive infinity of the concentration. Whether these singularities have any physical interpretation is not clear. When we use the boundary conditions to apply these equations to a membrane at a stationary state, we accept only solutions that do not contain singularities. Negative values of n, which are unphysical but appear in the solutions, must also be rejected.

4.3. Reclaiming the dimensions

These equations contain the dimensionless quantities js and ζ = (j/2)^⅓ s. When we apply the transformation equations (4.5)–(4.10) back to the dimensionful solutions, the argument of the Airy functions becomes

    ζ = z/λ,  λ = (2/j)^⅓ s₀        (4.32)

where λ and s₀ are units of length. The field equation (4.28) becomes

    E(z) = −(2kT/qλ) [Ai′(z/λ) + R Bi′(z/λ)] / [Ai(z/λ) + R Bi(z/λ)]        (4.33)

and the dimensionful equations for N and V can be obtained similarly from 4.32, 4.8, 4.30, 4.10 and 4.27.

4.4. Electrical equilibrium

Since the case of zero current density was excluded from these solutions, we must deal with it separately. We can call it the condition of electrical equilibrium.
When J = 0, Equation 4.4 can be used to eliminate N from Equation 2.11, resulting in the second-order differential equation

    kT d²E/dz² = qE dE/dz        (4.34)

This can be integrated to the first-order equation

    dE/dz = (q/2kT)(E² ± A²)        (4.35)

where ±A² is a constant of integration. With (4.4), this becomes

    N = (εε₀/2kT)(E² ± A²)        (4.36)

Note that the E–N relationship is parabolic. The vertex of the parabola may be above, on or below the E-axis, depending on whether the constant, ±A², is positive, zero or negative. Evaluating (4.36) at the left boundary, we see that

    ±A² = (2kT/εε₀) N_I − E_I²        (4.37)

Thus the upper sign applies when the diffusion term is dominant in the current density, Equation 2.5, while the lower sign applies when the drift effect is dominant. Equation 4.35 can be rearranged to read

    dE/(E² ± A²) = (q/2kT) dz        (4.38)

For the case in which diffusion and drift effects are balanced, A² = 0, this is integrated to

    1/E_I − 1/E = qz/2kT        (4.39)

giving for this case the solution
    E(z) = E_I / (1 − qE_I z/2kT)        (4.40)

Note that this solution has a singularity at z = 2kT/qE_I. In the osmodominant case, i.e., with the upper sign, Equation 4.38 has the multibranched solution

    E(z) = A tan(qAz/2kT + φ_I),    φ_I = arctan(E_I/A)        (4.41)

For the electrodominant case, with the lower sign in Equation 4.36, a physically meaningful solution can only exist when E² > A², since N must be positive. This solution is

    E(z) = A coth(θ_I − qAz/2kT),    θ_I = arcoth(E_I/A)        (4.42)
Equations 4.41 and 4.42 describe functions composed of branches separated
by singularities. They are thus qualitatively different from those obtained by the constant field assumption. Singularities appear because of the limitations of the model as a descriptor of the physical system. However, even for a valid model, singularities are difficult to observe experimentally, due to instrumental resolution.15

4.5. Applying the boundary conditions

Applying the boundary conditions (1.1) to the exact solutions of the one-ion steady-state classical electrodiffusion equations, we obtain membrane profiles of voltage, electric field and ion concentration for a given current. The resulting I–V curve is continuous if the current distribution in the membrane remains continuous; however, for certain sets of the dimensionless thickness and the boundary concentrations n_I and n_II, the electrodiffusion equations have no continuous solution. From the physical point of view, the membrane is too thick or the ion concentrations too low to carry the required current. It is not possible to write out an explicit current–voltage relation, since the current enters the solution in an involved way. Instead, current–voltage curves are obtained by applying the boundary conditions for each value of the current separately and iterating.16 Figure 7.4 shows data of Cole and Moore fitted by this method.17 For positive currents, the fit is good up to about 50 mV. At higher outward currents, saturation occurs. This is qualitatively explainable by ion accumulation in the space between the axon and the Schwann-cell layer. Below the potassium reversal potential of about −50 mV, the fit becomes very poor. Although Cole and Moore give no data points for negative currents, rectification is known to be very high, so that the current density would remain close to the negative voltage axis. Thus we must conclude that single-ion electrodiffusion is unable to fit the rectification data. This is further discussed in reference 9, a review of electrodiffusion by Cole.
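The iteration described here can be illustrated with a toy shooting calculation: for a given current, march the coupled equations across the membrane from the left boundary and adjust the unknown initial field until the concentration at the right boundary is matched. The sketch below is not from the text; the scaled equations dn/ds = en − j and de/ds = n are an assumed nondimensional form of the Nernst–Planck and Gauss equations, and all names and parameter values are ours.

```python
# Toy shooting method for scaled one-ion steady-state electrodiffusion:
#   dn/ds = e*n - j   (Nernst-Planck),   de/ds = n   (Gauss's law).
# Given n(0) and n(l), bisect on the unknown initial field e(0).

def integrate(n0, e0, j, ell, steps=4000):
    """Forward-Euler march of (n, e) from s = 0 to s = ell."""
    h = ell / steps
    n, e = n0, e0
    for _ in range(steps):
        n, e = n + h * (e * n - j), e + h * n
    return n, e

def shoot(n0, nL, j, ell, lo=-5.0, hi=5.0):
    """Bisect on e(0) until the far-side concentration n(ell) hits nL."""
    f = lambda e0: integrate(n0, e0, j, ell)[0] - nL
    assert f(lo) * f(hi) < 0, "bracket does not straddle a solution"
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

e0 = shoot(n0=2.0, nL=1.0, j=1.0, ell=0.2)
nL, _ = integrate(2.0, e0, 1.0, 0.2)
print(f"e(0) = {e0:.4f},  n(l) = {nL:.6f}")   # n(l) matches the target 1.0
```

Repeating this for a range of j values, and integrating e to get the voltage drop, traces out an I–V curve point by point, which is essentially the iteration used to produce fits like the one in Figure 7.4.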
However, it is necessary to point out that this model is not appropriately applied to squid axon in seawater, since it neglects the other ions present. Calcium ions in particular can interfere with monovalent ions, as we saw in Chapter 6, Figure 6.10. This would be a factor in inward currents but not outward, because external [Ca2+] is substantial while internal [Ca2+] is negligible. This consideration alone is qualitatively sufficient to explain why outward currents are well fitted by the model and inward currents are not. In the profiles, the field e increases from left to right. The concentration profiles drop down sharply at the left boundary for large negative (inward) currents, then shift toward the outer membrane surface as the currents become outward and increase. The consistency between the profiles is demonstrated by the steepening of the field profiles as the concentration of positive ions within the membrane increases: More force is required to push an ion against the repulsion of the ions already present in the membrane.
Figure 7.4. Exact single-ion electrodiffusion applied to 1960 squid axon data of Cole and Moore. (Top) Current density versus membrane potential difference. The dimensionless fitting parameters and temperature are given on the graph. Note that the model does not fit the expected rectification. (Bottom) Membrane profile of electric field (left) and ion concentration (right) for dimensionless current densities varying from 1.5 to 9.0.
4.6. Equal potassium concentrations An attempt to fit a series of current–voltage data obtained by Eduardo Rojas and Gerald Ehrenstein18 was again only partially successful; see Figure 7.5. Rojas and Ehrenstein obtained data from a squid axon intracellularly perfused with 600 mM potassium fluoride in seawater (open circles), then immersed in 600 mM KCl (squares), and lastly again in seawater (filled circles). The steady-state current–voltage curve for equal internal and external potassium-ion concentrations was nearly linear. The data points were fitted with exact electrodiffusion using parameters shown in the Figure.
Figure 7.5. Computed fit of current–voltage data obtained from squid axon by Rojas and Ehrenstein (1965) by exact solutions for single-ion electrodiffusion. The observed rectification in seawater is much greater than that calculated by single-ion electrodiffusion. From H.R. Leuchtag, unpublished calculations.
The calculated fits did not properly account for the rectification property in artificial seawater; although no data points are given for the seawater runs below -60
mV, the currents for these membrane potentials would be expected to be much smaller than those predicted by single-ion electrodiffusion because of the rectification problem mentioned above. As we saw, the failure to fit the seawater data for inward currents should not be surprising, since seawater contains the divalent ion Ca2+, which may affect monovalent ion conductivity. It is significant that the rectification dilemma does not occur in the axon immersed in 600 mM KCl, a solution without divalent cations. Separate calculations with the constant field approximation gave fits almost identical to those given by the exact electrodiffusion computations. Although of course the field profiles are different, the current–voltage curve for the exact solution is very similar to the one obtained from the constant-field approximation. Like the approximate solution, the exact solution of the classical electrodiffusion equations does not resolve the problem of the rectification ratio. We will discuss this failure in Section 6 of Chapter 14.
NOTES AND REFERENCES
1. As we saw in Chapter 6, aqueous solutions are not neutral all the way to the membrane or other boundary. There is a double layer, of counterions and coions, within a few tenths of a nanometer of the surface.
2. A. L. Hodgkin and B. Katz, J. Physiol. (Lond.) 108:37-77, 1949.
3. We have here assumed that no interaction exists between the ion and the membrane; such interactions are often accounted for by replacing N by γN, where γ is the activity coefficient.
4. In some works, the symbol E is used to stand for voltage.
5. The mechanical mobility is the reciprocal of the coefficient of friction.
6. See, e.g., R. P. Feynman, R. B. Leighton and M. Sands, The Feynman Lectures on Physics, Addison-Wesley, Reading, MA, 1963, vol. I, Chapter 9.
7. Kenneth S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1972, 450. By permission of University of California Press.
8. G. Ehrenstein and H. Lecar, Ann. Rev. Biophys. Bioeng. 1:347, 1972.
9. K. S. Cole, Physiol. Rev. 45:340-379, 1965.
10. F. Borgnis, Z. Physik 100:117, 478, 1936.
11. H. R. Leuchtag and J. C. Swihart, Biophys. J. 17:27-46, 1977.
12. G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University, 1962; H. A. Antosiewitz, in Applied Math. Series 55: Handbook of Mathematical Functions, edited by M. Abramowitz and I. A. Stegun, U. S. National Bureau of Standards, Washington, 1964.
13. Stephen Wolfram, A New Kind of Science, Wolfram Media, Inc., Champaign, Ill., 2002, 145.
14. H. R. Leuchtag and J. C. Swihart, Biophys. J. 17:27-46, 1977.
15. Nigel Goldenfeld, Lectures on Phase Transitions and the Renormalization Group, Addison-Wesley, Reading, MA, 1992, 133.
16. N. Sinharay and B. Meltzer, Solid-State Electron. 7:125, 1964.
17. K. S. Cole and J. W. Moore, J. Gen. Physiol. 43:971-980, 1960; details of the data fitting are in H. Richard Leuchtag, Indiana University Ph.D. thesis, University Microfilms International, Ann Arbor, Mich., 1974, 142-162.
18. E. Rojas and G. Ehrenstein, J. Cell. Comp. Physiol. 66 (Suppl. 2):71-77, 1965.
CHAPTER 8
MULTI-ION AND TRANSIENT ELECTRODIFFUSION
In Chapter 7, we reviewed the theory of electrodiffusion as applied to a membrane traversed by a steady current of a single species of ions. Let us now consider two complications: multi-ion electrodiffusion and time-dependent electrodiffusion. We will also note that the system of electrodiffusion equations obeys a set of scaling rules. We close this chapter with a critique of the classical electrodiffusion model as applied to membranes containing voltage-sensitive ion channels.
1. MULTIPLE SPECIES OF PERMEANT IONS, STEADY STATE

We now extend the discussion of the classical electrodiffusion model to the multi-ion case, which is important because membranes are permeated by multiple species of ions, and because of the role multiple ions play in ion-channel theory. We see from Equations 2.9 and 4.4 of Chapter 7, to be designated 7.2.9 and 7.4.4, that we have characterized ions by two properties, charge q (or valency Z) and mobility u. Let us begin by considering two ion species of the same charge; of course they will generally have different mobilities, say u_1 and u_2. We can say that they belong to the same charge class, and that they have the same valency. For example, Na+, K+, Li+ and NH4+ belong to the charge class with valency +1. Ions with valency -1, e.g. Cl- and Br-, form a different charge class, and those with valency +2, e.g. Ca2+ and Mg2+, a different one yet. The electrodiffusion problem of a system of ions in the same charge class, albeit of different mobilities, is much simpler than the more general problem in which valency also differs.

1.1. Ions of the same charge

We can write the generalizations of Equations 7.2.9 and 7.4.4 for two ion species of the same charge class by the equations
    J_1 = qu_1(−kT dN_1/dz + qEN_1)        (1.1)

    J_2 = qu_2(−kT dN_2/dz + qEN_2)        (1.2)

    dE/dz = q(N_1 + N_2)/εε₀        (1.3)

Adding Equations 1.1 and 1.2, with N = N_1 + N_2, we obtain

    J_1 + J_2 = q[−kT d(u_1N_1 + u_2N_2)/dz + qE(u_1N_1 + u_2N_2)]        (1.4)

Thus we end up with essentially the same Equations as 7.2.9 and 7.4.4, provided we define J = J_1 + J_2 and u′ such that

    u′N = u_1N_1 + u_2N_2        (1.5)
We can conclude that for two (and, by extension, any number of) ion species of the same charge, the forms of the solutions will be the same as those we already worked out for a single charge, although the problem is complicated by the fact that the boundary conditions for each ion species will be different. 1.2. Ions of different charges The situation is much more complex if both the valency and the mobility are different. We have derived the Nernst–Planck equation for this case from thermodynamics, Equation 4.9 of Chapter 6, (1.6)
A detailed mathematical analysis shows that, instead of the first-order Equation 7.4.17, we obtain a second-order equation when two charge classes are present, a third-order with three charges, and so on.1 The multi-ion problem is simpler when we consider the situation at electrical equilibrium. This problem was solved in the constant field approximation by David
Goldman,2 with later modifications by Alan Hodgkin and Bernhard Katz.3 1.3. The Goldman–Hodgkin–Katz equation For the multi-ion case, the current equation for a uniform field, Equation 7.3.7, can be written
(1.7)
Let us consider the case in which the membrane is permeated by ions of only two charges, q1 = e and q2 = -e. There will be two ions of the first class, Na+ and K+, labeled 11 and 12 respectively, and one of the second class, Cl-, ion 21. Then
(1.8)
where we have used the definition of the partition function in Equation 7.1.1 and that of permeability in 7.2.8. Similarly, for the potassium ion,
(1.9)
For the chloride ion, we find
(1.10)
For the net current density J = JNa + JK + JCl , we obtain from Equations 1.8-
1.10, using the definition of permeability P given in Equation 7.2.8,
For the equilibrium case, J = 0, this equation is solved for Vm to obtain the Goldman–Hodgkin–Katz equation,
Vm = (kT/e) ln{(PK[K]o + PNa[Na]o + PCl[Cl]i) / (PK[K]i + PNa[Na]i + PCl[Cl]o)}   (1.11)
We recall from the Box on Chemical Notation (page 146) that the factor kT/e can be replaced with RT/F. Figure 8.1 shows a current–voltage characteristic calculated by Douglas Junge4 based on data obtained by Douglas Eaton and colleagues for an Aplysia giant neuron at 6°C.5 The ion concentrations given by Junge for the solutions, in mM, are [K]i = 280, [Na]o = 485, [Na]i = 61, [Cl]o = 485 and [Cl]i = 51, and the permeability ratios are PNa/PK = 0.12 and PCl/PK = 1.44. In these calculations, the external 10 mM Ca2+ concentration was neglected. In Equation 1.11, the absolute values of the permeabilities are not required; it is sufficient to specify their relative proportions. For a resting axonal membrane, the proportions PK : PNa : PCl = 1 : 0.04 : 0.45 were established by fitting data to the equation. At the peak of the action potential, the proportions become 1 : 20 : 0.45, showing a great increase in sodium ion permeability while the other permeabilities remain unchanged. Note that the application of Equation 1.11 to the active membrane is contrary to the assumption of a steady state. While the first time derivative of the membrane voltage is zero at the peak, its second derivative is negative; the membrane is therefore not in a steady state, and the application of Equation 1.11 to the active membrane is, strictly speaking, invalid. Nevertheless, it has become commonplace, and we shall use it in the following example: For an axon (Table 4.1) with an internal solution of 400 mM K+, 50 mM Na+ and 100 mM Cl-, and an external solution of 10 mM K+, 460 mM Na+ and 540 mM Cl-, and with the permeabilities given in the text, the resting and action potentials may be calculated from the Goldman equation.6

2. TIME-DEPENDENT ELECTRODIFFUSION

We now extend our discussion of simple electrodiffusion models to include time variation, particularly transient responses to currents.
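The closing exercise of Section 1.3 above can be sketched numerically. This is a minimal implementation of Equation 1.11, assuming a temperature of 20°C (the book does not state the temperature used in note 6, so the results here differ slightly from the quoted -58.7 and +43.6 mV):

```python
import math

def ghk_potential(PK, PNa, PCl, K_o, K_i, Na_o, Na_i, Cl_o, Cl_i, T=293.15):
    """Goldman-Hodgkin-Katz voltage equation (Eq. 1.11), in millivolts.
    Only the relative proportions of the permeabilities matter.  The
    chloride terms are inverted relative to the cation terms because
    of chloride's -1 valency."""
    kT_over_e = 8.617e-5 * T  # Boltzmann constant in eV/K, times T -> volts
    num = PK * K_o + PNa * Na_o + PCl * Cl_i
    den = PK * K_i + PNa * Na_i + PCl * Cl_o
    return 1000.0 * kT_over_e * math.log(num / den)

# Squid-axon concentrations from Table 4.1 (mM)
conc = dict(K_o=10, K_i=400, Na_o=460, Na_i=50, Cl_o=540, Cl_i=100)
V_rest = ghk_potential(1.0, 0.04, 0.45, **conc)   # resting proportions
V_peak = ghk_potential(1.0, 20.0, 0.45, **conc)   # peak of action potential
print(f"resting ~ {V_rest:.1f} mV, peak ~ {V_peak:.1f} mV")
```

At 20°C this gives a resting potential near -55 mV and a peak near +44 mV; the twenty-fold increase in PNa alone drives the sign reversal.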
Time-dependent electrodiffusion in excitable membrane systems has been studied by a number of authors.7 For the time-dependent case, we have to specify initial conditions as well as boundary conditions.
Figure 8.1. Current–voltage curves predicted by the constant-field approximation for four external potassium ion concentrations. The ordinate variable E is the membrane voltage. Based on data by Eaton et al., 1975, for the Aplysia giant neuron. From Junge, 1981.
We will limit our discussion to the case of a single permeant species of ions. The Nernst–Planck equation now becomes, in partial derivative form,

Jion = q²uNE − qukT ∂N/∂z   (2.1)

Ions entering or leaving the region of interest change the number of ions enclosed within; this fact is summarized by the equation of continuity,

q ∂N/∂t = −∂Jion/∂z   (2.2)

We can verify that the ionic current density is uniform in the steady state: when ∂N/∂t = 0, ∂Jion/∂z must also vanish. Gauss's law, Equation 7.4.4, describes the effect of the ionic charge on the electric field.
2.1. Scaling of variables

While linear homogeneous differential equations, such as Equation 3.4, have the property that any solution multiplied by a constant, or any sum of solutions, is also a solution, this is not the case for nonlinear equations. However, these also exhibit certain symmetries. The system of electrodiffusion equations, Equations 6.1-6.3, obeys a set of rules according to which new solutions can be generated from existing ones. Given a real nonzero number λ, a new solution can be generated by the scaling rules:
z → λz,  t → λ²t,  N → λ⁻²N,  E → λ⁻¹E,  J → λ⁻³J   (2.3)
We can verify the validity of these rules by substituting them into the equations: the factors of λ cancel. Note that the voltage V is invariant under transformation 2.3; so are combinations such as Jz³ and Nt. Such groupings have appeared as the arguments of functions in our exact solutions, such as Equation 4.27. The transformation keeps t and N positive even for negative values of λ. The scaling rules can be used, for example, to represent the solution for an electrodiffusion membrane of a certain thickness with that for a thicker or thinner one.

2.2. The Burgers equation

If we substitute Gauss's law, Equation 7.4.4, into the equation of continuity, 2.2, we obtain

∂/∂z (Jion + εε0 ∂E/∂t) = 0   (2.4)
which can be integrated over z to give

Jion + εε0 ∂E/∂t = J(t)   (2.5)

Since we are dealing with partial derivatives, instead of a constant of integration we have an arbitrary function of time, J(t). Relation 2.5, due to Maxwell, states that the ionic current plus the displacement current8 equals the total current J(t), which must be equal to the external current applied to the electrodes. This formulation is therefore directly applicable to a current-clamp experiment.
MULTI-ION AND TRANSIENT ELECTRODIFFUSION
169
When we substitute Equations 2.1 and 2.2 into Equation 2.5 and simplify, we obtain

εε0 (∂E/∂t + quE ∂E/∂z − ukT ∂²E/∂z²) = J(t)   (2.6)
As usual in classical electrodiffusion, we assume the mobility u and the relative permittivity ε to be constant. Equation 2.6, in dimensionless form, is a well-known partial differential equation. Its homogeneous form, with J(t) = 0, called Burgers's equation,9 has been used to model turbulence, shock waves and other nonlinear dissipative phenomena. The full equation, with a nonzero forcing function J(t), called the forced or driven Burgers equation, has been applied to wind-driven water waves.10

2.3. A simple case

Because the membrane boundary conditions and initial condition specify values of N, essentially the first derivative of E, rather than E itself, the problem is complicated. The problem in which the variable itself is specified at the two boundaries is called a Dirichlet problem; the problem in which its first derivative is specified is a Neumann problem. To examine some of the features of Equation 2.6, we will explore a low-temperature approximation with no forcing function. When we look at Equation 2.6, we see that the second-order, diffusive term is the only one containing the temperature T. As T → 0, that term becomes negligible compared to the other terms, so it is useful to explore the equation without the second-order term. To simplify matters further, we will consider the case in which a finite membrane current is switched to zero at time t = 0, so that J(t) = 0, leaving

∂E/∂t + quE ∂E/∂z = 0   (2.7)

This equation has a simple general solution

E = F(z − quEt)   (2.8)

where F is an arbitrary analytic function, as can be verified by differentiation of Equation 2.8 with respect to z and t. Remarkably, this equation represents a wave of electric field E in which the wave velocity v is directly proportional to E; if the quantity in parentheses is written as z − vt, we see that v = quE. The wave advances forward across the membrane when qE is positive and backward in the opposite case. The wave is dispersive, in the sense that it moves more rapidly the larger the field becomes.
Because different parts of the waveform therefore travel at different speeds, the waveform tends to sharpen as the wave travels. The property of dispersion is seen in the breaking of ocean waves on a beach.
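The steepening described above can be sketched by the method of characteristics; this toy illustration lumps q and u into a single factor set to 1, and uses a hypothetical Gaussian hump as the initial field profile:

```python
import math

QU = 1.0                          # q*u lumped and set to 1 for illustration

def characteristic(z0, t, F):
    """Along a characteristic of dE/dt + qu*E*dE/dz = 0 the field is
    constant, so the point launched at z0 moves at speed v = qu*F(z0)."""
    return z0 + QU * F(z0) * t

F = lambda z: math.exp(-z * z)    # hypothetical initial field profile

# Sample the leading face of the hump: the crest (largest E) travels
# fastest, so it gains on the points ahead and the front face steepens.
z_starts = [0.0, 0.5, 1.0]        # crest, mid-face, foot
z_end = [characteristic(z, 1.0, F) for z in z_starts]
gaps_before = [b - a for a, b in zip(z_starts, z_starts[1:])]
gaps_after = [b - a for a, b in zip(z_end, z_end[1:])]
print(gaps_before, gaps_after)    # gaps on the leading face shrink
```

The shrinking gaps show the front face compressing; if the characteristics are followed long enough they cross, which is where the restored diffusive term of Equation 2.6 would intervene to smooth the profile.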
The characteristics of Equation 2.8 illustrate the properties of the homogeneous Burgers equation. If we restore the diffusion term, its effect is to soften the sharp bends in the ion profile. We will see equations of this type again in the next chapter and in Chapter 18, which discusses solitons in liquid crystals.

3. INADEQUACY OF THE CLASSICAL MODEL

We began Chapter 7 with the Shaker quotation, “'Tis a gift to be simple.” Yet the simple premise that the ion flux is the sum of diffusion and migration fluxes has, through the magic of mathematical analysis, turned into a web of considerable intricacy, which we have here only begun to explore. The tree of electrodiffusion has borne many fruits: such important results as the Nernst potential, the Goldman and Goldman–Hodgkin–Katz equations, the strange singularities of the exact solutions, the unexpected similarity of time-dependent electrodiffusion to shock waves, and other insights into the way ions might move through membranes. But something is missing. With a few bold assumptions we have been able to obtain a deep insight into the problem of an abstract electrodiffusion membrane. But the price we have had to pay is to lose touch with the real membrane. We found that out when we saw that the predictions of the model do not agree with the experimental data. As pointed out by Cole in 1965, classical electrodiffusion has been unable to explain many of the properties of real excitable membranes, such as the negative differential resistance of the sodium channel and the unusual rectification characteristics of the potassium channel.11 We have seen that some of these difficulties arise from the inappropriate application of single-ion theory to a multi-ion experiment. In particular, the data fits attempted have not accounted for the presence of calcium ions in the solutions.
Nevertheless, all the difficulties may not be resolvable simply by accounting for the interaction of the calcium ions with the other ions in the membrane. While classical electrodiffusion fails to account for the behavior of excitable membranes, we know that it rests on special assumptions justified only by the simplicity of the resulting analysis. These include the assumed constancy of the dielectric permittivity ε and the ionic mobility u. Because these assumptions greatly narrow the scope of classical electrodiffusion, we cannot peremptorily reject the electrodiffusion approach as a whole. Rather, we must call upon our intuitive skills and background knowledge to alter the assumptions so as to steer our model closer to reality. Where have we gone wrong? Which assumptions are faulty? We will leave these questions for the reader to ponder as we move on, but we will take them up again in Section 6 of Chapter 14.

NOTES AND REFERENCES
1. The two-charge case was analyzed by L. Bass, Trans. Farad. Soc. 60:1656-1663, 1964, and L. J. Bruner, Biophys. J. 5:867-886, 1965. H. R. Leuchtag, J. Math. Phys. 22:1317-1321, 1981, generalized the analysis to an arbitrary number of charge classes.
2. D. E. Goldman, J. Gen. Physiol. 27:37-60, 1943.
3. A. L. Hodgkin and B. Katz, J. Physiol. (London) 108:37-77, 1949.
4. Douglas Junge, Nerve and Muscle Excitation, Second Edition, Sinauer Associates, Sunderland, 1981, 38.
5. D. C. Eaton, J. M. Russell and A. M. Brown, J. Membrane Biol. 21:353-374, 1975. With kind permission of Springer Science and Business Media.
6. Resting potential = -58.7 mV; action potential = 43.6 mV.
7. See Kenneth S. Cole, Physiol. Rev. 45:340-379, 1965, Jarl V. Hägglund, J. Membrane Biol. 10:153-170, 1972, and references cited therein.
8. The vector quantity D = εε0E is called the electric displacement.
9. This equation belongs to the quasilinear parabolic class of partial differential equations. It appeared in a 1915 paper by Harry Bateman (Monthly Weather Review 43:163-170) and a 1950 paper by J. M. Burgers (Proc. Kon. Ned. Akad. 53:247-261), and is extensively discussed in Burgers's book, The Nonlinear Diffusion Equation: Asymptotic Solutions and Statistical Problems, D. Reidel, Dordrecht, 1974. Solutions to the Burgers equation are listed in E. R. Benton and G. W. Platzman, Quart. Appl. Math. 30:195-212, 1972.
10. H. R. Leuchtag and H. M. Fishman, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. A. Adelman Jr. and H. R. Leuchtag, 415-434, Plenum, New York, 1983.
11. K. S. Cole, Physiol. Rev. 45:340-379, 1965.
CHAPTER 9
MODELS OF MEMBRANE EXCITABILITY
In this chapter we will examine models of the onset and propagation of a traveling action potential, including the powerful phenomenological model of Hodgkin and Huxley (HH).1 This milestone work, based on quantitative interpretation of experimental data, has determined the direction of progress in electrophysiology for decades, led to numerous experimental discoveries and spawned many related models. We will review its assumptions, the way its equations are interpreted and the predictions this model has produced. While a successful phenomenological model is a great step forward, it is not the solution of the problem. The HH and related models, based as they are on macroscopic measurements, cannot be expected to explain what is happening at the molecular level. Still, it was not long before speculative models of the workings of the ion channel molecules began to appear. A scientific problem is solved only when the behavior of the system can be derived from a priori principles, and that still remains to be done for the processes underlying nerve and muscle excitability.

1. THE MODEL OF HODGKIN AND HUXLEY

In Chapter 4 we noted the development of electronic instrumentation for the squid axon, particularly the voltage clamp, which gave the experimenter control over the membrane voltage. Hodgkin visited Cole at the University of Chicago to acquaint himself with the new device, and returned to Cambridge to improve the voltage clamp and pursue a highly focused series of measurements with his colleagues, Bernhard Katz and Andrew Huxley. It had become clear to them that the electrodiffusion theory (at least as then formulated) was inadequate to explain the behavior of nerve membranes (see Chapter 8).
Hodgkin and Katz showed that the sodium ion was responsible for the excitation and overshoot of the action potential in squid axon.2 They became convinced that the action potential, which not only neutralized the resting potential but actually became positive, was expressing an increased membrane permeability to sodium ions. This was a new concept, because up to that time Bernstein's idea that only potassium ions played a role in nerve conduction had prevailed.3
1.1. Ion-current separation and ion conductances

By replacing Na+ with an impermeant ion in the external solution, Hodgkin, Huxley and Katz were able to separate the current measured with the voltage clamp into its ionic components. This approach was later supplanted by the use of toxins and other channel blockers. With the picture of classical electrodiffusion in confusion owing to its failure to fit axonal data, the Cambridge group tried an eclectic approach. Borrowing from electrodiffusion, chemical kinetics and circuit theory, they developed a new formalism, which they could adjust to the demands of the data they were generating. Hodgkin and Huxley developed techniques for breaking the problem down into smaller steps; at each step, they examined their voltage-clamp data to see what mathematical forms were needed to provide a fit. At the culmination of their model-building, they arrived at a partial differential equation that predicted a traveling wave. The profile of their computed wave provided a good fit to the measured action potential, a remarkable achievement that provided a sense of closure to the problem.

1.2. The current equation

Hodgkin and Huxley observed that the electric current crossing a membrane could be described as a sum of ionic components, of which the currents carried by sodium, INa, and potassium, IK, were the most important. With the other components lumped together as IL, called the leakage current, the net ionic current was

Iion = INa + IK + IL   (1.1)

In a series of careful experiments, they found that these currents depended on voltage approximately linearly for an instantaneous response over a limited range, according to the relationship

Ii = gi (V − Vi)   (1.2)

where the subscript i refers to Na, K or L. Because gi is a current divided by a finite voltage increment, it is called a chord conductance, as opposed to a slope conductance dI/dV. Slope and chord conductances are in general different and need not even be of the same sign.
Since the ith ion current is zero and the current changes sign when V = Vi, the constant Vi is called the reversal potential. This is roughly equal to the Nernst potential of the ion at equilibrium, Equation 7.2.14. Equation 1.2 states that the current through the channels permeable to ion i in a given patch of membrane is the product of the ion chord conductance gi and the driving force V - Vi. Although the equation has an ohmic form, it is not linear: The conductances are not constant, but were found to be functions of V as well as calcium ion concentration and even time. The dielectric properties of the bilayer were modeled by a membrane capacitance c, so that the net ion current was described by the equation
I = c ∂V/∂t + gNa (V − VNa) + gK (V − VK) + gL (V − VL)   (1.3)
Current, capacitance and conductances are usually referred to unit area of the axonal membrane. Hodgkin and Huxley took gL to be a constant; it was later found to be voltage-dependent. Equation 1.3 can be represented by a schematic circuit of parallel capacitive and conductive branches, in which the sodium and potassium conductances are represented as variable resistors in series with their ionic reversal voltages; see Figure 9.1, from Hille’s adaptation4 of the Hodgkin and Huxley diagram.5
Figure 9.1. Equivalent circuit of the Hodgkin–Huxley current equation. The symbol E stands for the reversal voltage of the respective branch. The variable resistances are controlled by membrane voltage. From Hille, 2001, after Hodgkin and Huxley, 1952d.
1.3. The independence principle

Hodgkin and Huxley treated the axonal currents carried by different ions as traversing separate entities acting in parallel. This concept, based on isotopic flux measurements analyzed in an electrodiffusion framework by Hans Ussing,6 became known as the independence principle. Ussing's calculations predict that the ratio of the simultaneous influx and efflux of an ion at a given potential is equal to the ratio of the external to internal electrochemical activities of the ion.
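Ussing's flux-ratio criterion can be sketched as follows. Here the ratio is written explicitly as efflux over influx in terms of concentrations (standing in for electrochemical activities) and the membrane voltage; the squid-axon K+ concentrations are those of Table 4.1, and the 20°C temperature is an assumption:

```python
import math

R, F = 8.314, 96485.0            # gas constant J/(mol K); Faraday C/mol
T = 293.15                       # assume 20 degrees C

def ussing_flux_ratio(c_in, c_out, Vm, z=1):
    """Flux-ratio criterion for independent ion movement:
    efflux/influx = (c_in/c_out) * exp(z*F*Vm/(R*T)),
    with Vm the inside-minus-outside voltage in volts and concentrations
    standing in for electrochemical activities."""
    return (c_in / c_out) * math.exp(z * F * Vm / (R * T))

# Squid-axon K+ concentrations (Table 4.1): 400 mM inside, 10 mM outside.
V_nernst = (R * T / F) * math.log(10.0 / 400.0)   # Nernst potential, volts
r_eq = ussing_flux_ratio(400.0, 10.0, V_nernst)   # unity: no net flux
r_depol = ussing_flux_ratio(400.0, 10.0, 0.0)     # depolarized: net efflux
print(r_eq, r_depol)
```

At the Nernst potential the ratio is exactly one, as the independence principle requires; depolarizing toward 0 mV drives the ratio well above one, i.e. a net potassium efflux.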
1.4. Linear kinetic functions

In their squid axon experiments, Hodgkin and Huxley demonstrated that the instantaneous current–voltage relationship is linear, and separated the fast (Na) and slow (K) chord conductances as nonlinear functions of first-order kinetic functions of membrane voltage. In these experiments, the state of the ion-conducting pathways was altered by prepulses of different size and sign. For the fast current, the rise and spontaneous fall of the conductance was modeled by separate linear kinetic functions for activation (m) and inactivation (h). The delayed current, carried by potassium ions, did not appear to inactivate substantially and so was modeled by the single activation function n. The value of this function is given by the linear differential equation

dn/dt = αn (1 − n) − βn n   (1.4)

The solution to this first-order linear equation can be obtained directly. We rewrite it in the form

dn/dt = (n∞ − n)/τn   (1.5)
where

n∞ = αn/(αn + βn)   (1.6)
τn = 1/(αn + βn)   (1.7)
Equation 1.5 is solved to obtain

n = n∞ − (n∞ − n0) exp(−t/τn)   (1.8)

where n0 is the value of n at zero time, when the membrane potential is displaced from an equilibrium state.

1.5. Activation and inactivation

The early current declines from its peak during prolonged depolarizations. A conditioning pulse of depolarization prior to the depolarizing clamp shows that this inactivation is delayed by several milliseconds (like the rise of the potassium current). The sodium current exhibits both activation m and inactivation h, given by the equations
dm/dt = αm (1 − m) − βm m   (1.9)

dh/dt = αh (1 − h) − βh h   (1.10)
The functions m and h have solutions of the same form as Equation 1.8. The actual functions differ, however, because of the different values of the parameters. As these linear first-order functions were not adequate to fit the voltage-clamp data, Hodgkin and Huxley generated nonlinear functions from their powers and products. For the K+ current, data fitting gave the relationship
gK = ḡK n⁴   (1.11)

between the potassium conductance and its activation function; ḡK is the maximum potassium conductance. The fast current was fitted by the equation

gNa = ḡNa m³h   (1.12)

The coefficients α and β in Equations 1.4, 1.6 and 1.7 are functions of membrane voltage, calcium-ion concentration and temperature; ḡNa is the maximum sodium conductance. The parameters m0, τm, m∞, h0, τh and h∞ are defined analogously to Equations 1.6 to 1.8. The voltage dependence of the parameters was modeled by modifying the Goldman–Hodgkin–Katz equation to obtain
(1.13)
where V equals membrane voltage minus resting voltage. The values of the HH parameters may be adjusted for different experimental preparations. Note that τm is an order of magnitude smaller than either τh or τn.
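The first-order kinetics of Section 1.4 can be checked numerically. In this sketch the rate constants, initial value and integration time are illustrative choices (not fitted HH values): a forward-Euler integration of Equation 1.4 is compared against the closed-form solution, Equation 1.8:

```python
import math

# Illustrative (not fitted) rate constants after a depolarizing step, ms^-1
alpha, beta = 0.5, 0.1
n_inf = alpha / (alpha + beta)   # steady-state value, Eq. 1.6
tau = 1.0 / (alpha + beta)       # time constant, Eq. 1.7

def n_exact_at(t, n0):
    """Closed-form solution of dn/dt = alpha*(1 - n) - beta*n (Eq. 1.8)."""
    return n_inf - (n_inf - n0) * math.exp(-t / tau)

# Forward-Euler integration of Eq. 1.4 should track the analytic solution
n, dt, n0 = 0.05, 0.001, 0.05    # n0 from the pre-step equilibrium
for _ in range(int(5.0 / dt)):   # integrate 5 ms
    n += dt * (alpha * (1.0 - n) - beta * n)

n_exact = n_exact_at(5.0, n0)
gK_rel = n_exact ** 4            # relative K conductance, Eq. 1.11
print(n, n_exact, gK_rel)
```

Raising the relaxing n to the fourth power, as in Equation 1.11, is what converts the simple exponential rise of n into the sigmoidal onset seen in the potassium conductance.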
These functions are given a probability interpretation similar to that for n, given in Section 2.2 below. Three simultaneous events of probability m are assumed to be necessary to open the sodium channel, while a single event of probability (1 − h) suffices to block it.

1.6. The partial differential equation of Hodgkin and Huxley

When Equations 1.11 and 1.12 are substituted into 1.3, we obtain

I = c ∂V/∂t + ḡK n⁴ (V − VK) + ḡNa m³h (V − VNa) + gL (V − VL)   (1.14)

From our discussion of cable theory in Chapter 6, particularly Equation 3.5 there, the current density of a nerve fiber in a large volume of external conducting fluid can be written as
(1.15)

A voltage impulse V traveling at constant speed θ must be describable by a function V(x − θt). Therefore ∂V/∂t = −θ ∂V/∂x and ∂²V/∂t² = θ² ∂²V/∂x², so that we obtain
(1.16)
This, Hodgkin and Huxley’s differential equation, determines the time course of the model action potential. Note that the voltage dependence in this equation is both explicit and implicit; n = n(t, V, [Ca2+]), and similarly for m and h.

1.7. Closing the circle

The final paper of the four Hodgkin and Huxley published in 1952 contains a calculation of the wave profile obtained by the data fit, closing the circle.7 The calculation gave a remarkably good fit to a measured action potential, providing a worthy conclusion to their work. Surprisingly, Huxley’s later calculations revealed that in addition to the propagated action potential the HH equations also predict an unstable slow wave.8 Figure 9.2, from a book by Alwyn Scott, shows both.9
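A minimal numerical sketch of the space-clamped (membrane) action potential can make the model concrete. This uses the conventional published 1952 parameter set in the modern voltage convention (resting potential near -65 mV), not the notation of this chapter; the cable term of Equation 1.16 is dropped, and the stimulus amplitude and duration are illustrative choices:

```python
import math

# Standard Hodgkin-Huxley rate functions (modern convention, Vm in mV)
def an(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def bn(V): return 0.125 * math.exp(-(V + 65) / 80)
def am(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def bm(V): return 4.0 * math.exp(-(V + 65) / 18)
def ah(V): return 0.07 * math.exp(-(V + 65) / 20)
def bh(V): return 1.0 / (1 + math.exp(-(V + 35) / 10))

gNa, gK, gL = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387  # reversal potentials, mV
C = 1.0                             # membrane capacitance, uF/cm^2

V = -65.0
m = am(V) / (am(V) + bm(V))         # start the gates at steady state
h = ah(V) / (ah(V) + bh(V))
n = an(V) / (an(V) + bn(V))

dt, trace = 0.01, []                # time step in ms
for step in range(2000):            # simulate 20 ms
    t = step * dt
    I_stim = 10.0 if t < 1.0 else 0.0   # brief suprathreshold pulse, uA/cm^2
    I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V += dt * (I_stim - I_ion) / C      # membrane equation (cf. Eq. 1.3)
    m += dt * (am(V) * (1 - m) - bm(V) * m)   # gate kinetics, Eqs. 1.4,
    h += dt * (ah(V) * (1 - h) - bh(V) * h)   # 1.9 and 1.10
    n += dt * (an(V) * (1 - n) - bn(V) * n)
    trace.append(V)

print(f"peak of action potential: {max(trace):.1f} mV")
```

The membrane fires once in response to the pulse, overshooting 0 mV and then undershooting toward EK, reproducing the familiar spike-and-afterhyperpolarization shape.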
Figure 9.2. A propagating action potential and an unstable threshold impulse generated by the HH partial differential equation. From Scott, 2003.
2. EXTENSIONS AND INTERPRETATIONS OF THE HH MODEL

By providing a quantitative framework for analyzing excitability in nerve and muscle cells, the Hodgkin and Huxley model revolutionized membrane biophysics. While it does not answer all questions about ion currents across membranes, it became the basis for a host of new measurements and theoretical developments.

2.1. The gating current

The HH equivalent circuit tells us that there are specialized structures within the membrane that respond to electric fields and selectively conduct specific ions: the voltage-sensitive ion channels. We noted in Chapter 1 that voltage plays two roles, one in impelling a change in the conformation of the channel molecules, and the other in helping to drive the ions across the membrane. This suggests that the conformational change may induce a tiny current, called the gating current, even when no ions are available to cross the membrane. (The word gating in this traditional term is used here in a metaphorical sense and should not be interpreted as implying the presence of material “gates” in the channel; see the discussion in Chapter 14.) As the underlying mechanism of the open–close transition, Hodgkin and Huxley proposed that the voltage dependence of the conductance change is the result of movements of a charged or
dipolar component of the membrane.12 Such movements, apparently necessary for the transition to occur, can be expected to induce displacement currents (see Chapter 8). The prediction stimulated a search for a membrane current not due to the crossing of ions. Of course, these tiny currents normally would be masked by the much larger currents of ion transfer. Hodgkin and Huxley predicted that when the permeant ions are removed from the system, the charges or dipoles that control the gating process would become detectable, making possible an examination of the underlying processes. This experiment was done by three groups, who sought to detect these gating currents by eliminating permeant ions from the membrane or by blocking conducting systems.10 Any shift in the distribution of charges or dipole moments could elicit a gating current. The measurement of the gating current, which required rather sophisticated protocols, showed that there were indeed minute currents, equivalent to positive charges moving outward. By applying equations based on certain assumptions, the experimenters determined that the gating current was equivalent to a small number of elementary charges flowing outward; the number of these gating charges was determined to be about six. The interpretation of these findings is clearly critical to the understanding of channel function. One current view of the way channels respond to electrical stimulation is that charge movements drive conformational changes in the channel.11 This driving was often interpreted as an electrostatic force or torque that caused a hypothetical cluster within the channel, called the gate, to move in such a way as to relieve an obstruction of the supposed aqueous pore. This hypothetical mechanism will be discussed in Chapter 14.

2.2. Probability interpretation of the conductance functions

Equation 1.11 has been interpreted to mean that four charged particles are required to move to a certain region of the membrane under the influence of the electric field to allow a path for potassium ions to form. If n is the probability for one n particle to be in place, the probability for all four to be in place is n⁴. This has been modeled by the following sequential reaction scheme:
        4α      3α      2α       α
   0  ⇌  1  ⇌  2  ⇌  3  ⇌  4
         β      2β      3β      4β
CLOSED                        OPEN

(forward rate constants above the arrows, backward rate constants below; state 4, with all four n particles in place, is the open state)
An analogous probability interpretation of Equation 1.12 requires three m particles to be in place, and the inactivating particle absent, for the sodium channel to be open.

2.3. The Cole–Moore shift

In 1960, Kenneth S. Cole and John W. Moore performed a series of experiments with results that conflicted with the Hodgkin–Huxley equations. The squid axon was initially
voltage clamped to a hyperpolarizing potential, ranging from -52 to -212 mV, before being raised to its depolarizing potential of +60 mV. To inhibit the activation of sodium channels, Cole and Moore set the depolarizing potential at a value equal to the equilibrium potential of the sodium system, V = VNa, so that no sodium current would flow. The residual current after the capacitance transients was the sigmoidal potassium current. As Figure 9.3 shows, the negative preconditioning voltage pulses produce a series of records shifted in time, parallel to one another. The greater the hyperpolarizing prepulse, the longer the delay period.12
Figure 9.3. Transient current density across squid giant axon membrane after a voltage clamp to a potential near the sodium reversal potential from seven different holding potentials, as indicated. In the data of Cole and Moore (1960), the current rise is delayed by an interval that increases with the magnitude of the hyperpolarization. From Cole, 1972.
While Cole and Moore found their results in agreement with the HH assumption that the potassium current is a function of a single variable of state, n, they found the HH fourth power inadequate to fit the observed delays. Instead, they found that a 25th-power function fitted the data well, and proposed to replace Equation 1.11 with the equation

gK = ḡK n²⁵   (2.1)

The sequential scheme of Section 2.2 turned out to be inadequate to model the parallel rises of the potassium current together with the measured inductive delays.13 Alternative theoretical models proposed to overcome these difficulties will be discussed in Section 5 of Chapter 14.

2.4. Mathematical extensions of the Hodgkin and Huxley equations

Among the many influences of the Hodgkin and Huxley model were the mathematical developments spawned by its equations. The model made it possible to simulate the response of an axon on a digital computer. The effects of temperature were dealt with by scaling the capacitance and time, while leaving the αs and βs (or τs) unchanged.14 At a Celsius temperature T, C becomes φC and the unit of time becomes 1/φ, where, for Q10 = 3, φ = 3^((T − 6.3)/10). The HH equations belong to the class of reaction–diffusion systems,15 as does an early model of impulse conduction by Franklin Offner and collaborators.16 Simplified versions of the HH equations were studied by Richard FitzHugh,17 by Jin-Ichi Nagumo and collaborators,18 and by V. S. Markin and Y. A. Chizmadzhev.19 Catherine Morris and Harold Lecar adapted the HH model to membrane switching by calcium rather than sodium ions in their study of voltage oscillations in barnacle muscle.20 The Burgers equation, which we studied in the section on time-dependent electrodiffusion of Chapter 8, is a related nonlinear diffusion equation.

2.5. The propagated action potential is a soliton

Solitary waves described by nonlinear partial differential equations are studied in mathematical physics under the name of solitons.
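The delay behavior of the Cole–Moore shift (Section 2.3) can be illustrated with the closed-form relaxation of Equation 1.8. The n∞, τ and prepulse values here are illustrative, not fitted to data; the sketch computes the time for gK ~ n^p to reach half of its final value:

```python
import math

def half_rise_time(n0, p, n_inf=0.9, tau=1.0):
    """Time for gK ~ n^p to reach half of its final value, from the
    closed-form relaxation n(t) = n_inf - (n_inf - n0)*exp(-t/tau)
    (Eq. 1.8), solving n(t)**p = 0.5 * n_inf**p for t."""
    n_half = n_inf * 0.5 ** (1.0 / p)
    return -tau * math.log((n_inf - n_half) / (n_inf - n0))

# Stronger hyperpolarizing prepulses leave a smaller initial n0, which
# lengthens the lag before the conductance rises; raising the power from
# 4 to 25 lengthens the lag further, which is what Cole and Moore needed
# in order to fit the delays they observed.
for p in (4, 25):
    delays = [half_rise_time(n0, p) for n0 in (0.3, 0.1, 0.01)]
    print(p, [round(d, 3) for d in delays])
```

Note that in this single-variable picture the spacing between records for different prepulses depends only on the initial values n0, not on the power p; the power controls how long the sigmoidal lag lasts before the conductance rises.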
Rigorous solitons are characterized as stable propagated waves with finite energy that are continuous, bounded and localized in space, with certain special properties: They collide elastically, keeping their identity after a collision, and a soliton may decompose into two or more solitons. Nonlinear waves in the real world do not usually possess these special properties, and rigorous solitons are considered a special case of solitons.21 Thus we can say that the propagated action potential is a soliton. It is not a rigorous soliton, since two colliding action potentials, far from passing through each other, will destroy one another because of the property of inactivation. But it is a soliton, quantitatively described by the Hodgkin and Huxley partial differential equation. We will further explore the concept
of solitons in Chapter 18, in connection with solitons in liquid crystals.

2.6. Action potential as a vortex pair

The action potential consists of a pair of vortical rings (“smoke rings”), oppositely directed; see Figure 9.4.22 Vortices are of interest in certain areas of physics, such as Bénard cells and superfluidity, where microscopic quantized vortices have been postulated; see Chapter 15.
Figure 9.4. The nerve impulse as a vortex pair. a) An impulse travels toward the depolarized end on the left of a squid axon. The membrane potential shows a positive overshoot. b) The current (only shown for the upper membrane surface) flows in toroidal closed circuits, with a double toroid at the action potential and a single vortex, the injury current, at the cut end of the axon. From Cole, 1965.
2.7. Catastrophe theory version of the Hodgkin and Huxley model An interesting mathematical development that grew out of the Hodgkin and Huxley model is an analysis based on a general theory of nonlinear differential equations, catastrophe theory, developed by French mathematician René Thom.23 Thom's ideas were applied to the nerve impulse, as well as the heartbeat, by E. C. Zeeman, whose analysis provides a mathematical insight into the Hodgkin and Huxley equations.24 Thom recognized that certain essential characteristics of differential equations could be seen by a geometrical approach. He categorized equations according to their topology into groups with a fairly small set of characteristic shapes. One of these is the cusp, the sharp change of direction of a point on the rim of a rolling wheel. The key features of a differential equation can be seen from a consideration of its display as a sheet in three-dimensional space, suspended over a table. The surface of the table is
called the control plane. The folds of the sheet will determine the motion of a point moving along it. Zeeman applied catastrophe theory to the nerve impulse as described by a simplified version of the Hodgkin and Huxley formalism. He models the current flow along the axon and the flow of ions through the membrane by two separate equations, rather than combining them into a single differential equation, as Hodgkin and Huxley do. This is because the propagation wave and the repolarization wave are physically different: the former can be stopped by cutting the axon but the latter continues, determined by local events. In the analogy of Chapter 1, a domino would continue to fall after tipping, even if the line of dominoes were disrupted by a gap. Starting with the Hodgkin–Huxley equations, Zeeman obtained simpler equations that will not be reproduced here. These equations may be plotted on the control plane to look like Figure 9.5.25 The state of the axon is represented by a point in three-dimensional space (a, b, x). Variables a and b determine the lower, control, plane, where a is a linear function of the potassium conductance gK and b is a linear function of the voltage above resting potential, V. The vertical coordinate x represents the negative sodium conductance, −gNa. The system point moves on the upper sheet, which is a single-valued attractor outside the cusp region of the control plane, and triple-valued within that region, with two attractors separated by a repellor. The membrane is in equilibrium at the resting potential, V = 0. In a threshold depolarization, the system jumps off the upper attractor to land on the lower attractor. This fast action is followed by a smooth return to equilibrium as the potassium conductance rises and then falls slowly. The application of a step by a voltage clamp raises V to Vc, displacing the state from equilibrium, point E, to position F in the clamp plane.
The fast equation carries the state to point G on the slow manifold and the slow equation moves it to H, where the slow vector is perpendicular to the clamp plane. The return follows the dotted flow lines slowly to equilibrium. These concepts give us a new, pictorial way to look at the action potential. Their application qualitatively fits the data. The application of catastrophe theory to the HH description of the action potential suggests that the HH equations have captured certain important aspects of the problem, and that the actual form of the HH equations is not very critical to their ability to provide this description. The fitting of the action potential by the HH equations does not prove that they are correct in every detail. Catastrophe theory has also been applied to a molecular model of excitability, as we will see in Chapter 18. While the correctness of Thom’s mathematical analysis is undisputed, the application of catastrophe theory to the action potential and other biological and social science problems has been severely criticized.26 The model of the nerve impulse is faulted for disagreeing with the HH data, denying “universally accepted concepts” and leading to the wrong propagation speed for the action potential. Whether the application of catastrophe theory to the nerve impulse is a “blind alley” or a challenging approach is left for the reader to decide.
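The geometry behind this analysis can be illustrated with the canonical cusp, in which a fast variable x settles to an equilibrium satisfying x³ + ax + b = 0 over the control plane (a, b). The sketch below is the generic cusp of catastrophe theory, not Zeeman's specific equations (which are not reproduced here):

```python
import numpy as np

# Equilibria of the canonical cusp: x**3 + a*x + b = 0 over the
# control plane (a, b). Three real roots (two attractors and a
# repellor) occur where the discriminant 4*a**3 + 27*b**2 < 0.
def num_equilibria(a, b):
    roots = np.roots([1.0, 0.0, a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-6))

print(num_equilibria(-3.0, 0.0))  # 3: inside the cusp region
print(num_equilibria(1.0, 0.0))   # 1: outside it
```

Crossing the fold line, where one attractor and the repellor merge and vanish, is what produces the sudden jump of the system point from one attractor to the other.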
Figure 9.5. Catastrophe theory model of the nerve impulse. From Zeeman, 1977.
2.8. Beyond the squid axon

After the successful description of squid-axon data by the HH formalism, the question naturally arose, Will it work for other preparations? On the basis of Ichiji Tasaki’s observation that the node of a myelinated axon is already fitted with two useful connections, the insulated internodes, innovative methods were developed for voltage clamping the node.27 A Swedish investigator, Bernhard Frankenhaeuser, and his collaborators extended the method to the frog node of Ranvier. Frankenhaeuser, first alone and then with Dodge, used negative feedback to prevent longitudinal current flow into a node so as to measure the resting and action potentials.28 In 1963, he used this method to develop a quantitative description of potassium currents in the toad Xenopus by equations closely related to those of Hodgkin and Huxley. As in those equations, the
potassium permeability PK was assumed to be a power of a variable obeying a first-order kinetic equation, but the data gave an exponent of 2 for the potassium activation n:

PK = P̄K n². (2.2)

Here P̄K is a constant. They found that the potassium current also displays inactivation, although much slower than that of the sodium current. Because of the slowness of the inactivation k, it can be absorbed into the constant prefactor for short pulses (<200 ms).29 Yves Pichon and collaborators, using cockroach axon, obtained a best fit30 with

(2.3)

The set of exponents chosen by Hodgkin and Huxley is clearly not unique.

3. EVALUATION OF THE HODGKIN AND HUXLEY MODEL

The great advantage of the HH model, and perhaps the reason it is so popular among axonologists, is the fact that it works. The HH model provides a quantitative basis for explaining the results of experiments and for suggesting new experiments. It also provides us with a language for describing an observed effect. These advantages provide experimentalists with an arsenal of useful tools but, while we have here emphasized the phenomenological nature of the HH model, it has been interpreted much more literally by some investigators. The HH model is not a microscopic model in which interactions at the molecular level are considered. Electrical effects are described by integrated quantities such as the voltage, the integral of the electric field across the membrane, and ionic currents, which are the summed results of the motions of ions through many molecular channels. It has this in common with mean-field theories, such as the van der Waals theory of liquid–gas transitions. These models, and their limitations, are discussed in Chapter 14. As a macroscopic model, the HH model clearly cannot predict fluctuations. It came as a surprise that the space-clamped action potential predicted by Hodgkin and Huxley was not all-or-none; responses of intermediate size appeared in the vicinity of threshold.
This gradedness becomes significant at high temperature, and had already been observed in squid axons above 30 °C. Further heating produces large currents and eventual heat block. Computer simulation of the HH axon also showed that the model will not accommodate to slowly rising potentials without producing action potentials. Inactivation and the opening of potassium channels make this finding counterintuitive: one might expect the threshold to rise and stay ahead of the rising applied current. Stability theory, however, predicts a limit cycle and repetitive firing; see Section 6.3 of Chapter 4.
Squid axons in lowered external divalent cations fire repetitively, and when a slowly rising clamp current is applied they break into repetitive activity rather than accommodating. Computer simulations show subthreshold oscillations with a frequency of about 100 Hz and amplitude of the order of a millivolt.31 The Hodgkin–Huxley work is an example of a traditional theory, which aims to construct a faithful representation of a physical system. To explain the results of further experimentation, it fine-tunes the parameters or adds new ones. In a computer program for membrane currents, 17 parameters can be counted.32 An alternative approach would be to start with a minimal model that contains only the essential physics. 3.1. Current separation The HH partial differential equation contains the implicit assumption that the capacitive behavior of the excitable membrane can be separated from its resistive behavior. This assumption is troubling, on both theoretical and experimental grounds. Theoretically, the resistive properties and capacitive properties of matter are considered in electrodynamics to be part of a single complex function, with resistance conventionally shown along the horizontal axis and capacitance along the vertical. Integral relations connect the real and imaginary components. Experimentally, this correlation between conductive and capacitive behavior is found in impedance and noise studies. While a component of capacitance, attributed to the bilayer, can be usefully separated from the channel behavior, this requires care, to avoid the lumping of capacitive effects of channels (and other membrane proteins) with those of the bilayer. We will return to this point in Chapters 10 (impedance modeling by circuit elements) and 16 (Curie–Weiss behavior of channels). Hodgkin and Huxley further assumed that the resistive behavior can be separated into components specific to ion species, along with the catchall leakage. 
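A classic example of such a minimal model is the FitzHugh–Nagumo system mentioned in Section 2.4, which keeps only a cubic restoring force and one slow recovery variable. The sketch below uses the standard textbook parameter values (a = 0.7, b = 0.8, ε = 0.08; an assumption, not values taken from this chapter) and simple Euler integration:

```python
# Minimal excitable dynamics in the spirit of FitzHugh and Nagumo.
# dv/dt = v - v^3/3 - w,  dw/dt = eps*(v + a - b*w)
def fhn(v0, steps=4000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    v, w = v0, -0.624          # w starts near its resting value
    vmax = v
    for _ in range(steps):
        v += dt * (v - v ** 3 / 3 - w)
        w += dt * eps * (v + a - b * w)
        vmax = max(vmax, v)
    return vmax

# A subthreshold displacement of v decays back toward rest (v near -1.2);
# a suprathreshold one triggers a full excursion (a "spike").
print(fhn(-1.1) < 0.0)   # True: stays near rest
print(fhn(-0.4) > 1.0)   # True: large excursion
```

With two variables and a handful of parameters, the model still reproduces threshold behavior, and with a constant applied current it exhibits the limit-cycle (repetitive firing) behavior discussed above.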
The separation of ionic currents into Na+ and K+ currents has proven to be the most brilliantly successful of the HH assumptions, as it predicted the existence of distinct ion channels, each traversed by specific ions. Nevertheless, it is not without its drawbacks, as the fast channel conducts a set of ions including K+ and the slow channel conducts an overlapping set including Na+. Except for the ammonium ion, which permeates both channels, these crossover effects are relatively small. Less useful is the leakage branch, for which there is little need with carefully dissected axons. The behavior of both fast and slow systems also requires that the conductance functions depend on the Ca2+ concentration. This provides a significant hint that the ions of different species and valencies interact with one another within a channel. 3.2. Voltage dependence of conductances The assumption of ohmic forms for the voltage dependences of the ionic currents because of their ohmic behavior in the neighborhood of the resting potential led to the introduction of the conductance functions gNa and gK. However, the fact that these conductance functions are not ohmic is clearly shown by the need to make these
quantities functions of V. The dependence on voltage of these conductances was illustrated in unforgettable fashion by Kenneth S. Cole, who personified the gNa and gK parameters as Nat and Kal, two characters whose job it is to measure the voltage across the membrane and set their particular ion conductance accordingly; see Figure 9.6.33 Nat, forceful and impulsive, applies the early current; Kal patiently restores the resting conditions. How do Nat and Kal do their job within the narrow confines of a molecule? That is one of the central questions left unanswered by the HH formalism.
Figure 9.6. Kal and Nat control the potassium and sodium conductances. From Cole, 1972.
Further studies have shown that better data fits are obtained by replacing the single parameter h by two (slow and fast inactivation) or more factors. It has also been shown that activation and inactivation are not independent but coupled, even though Hodgkin and Huxley implicitly assumed that m and h are independent. To have them “coupled” undercuts the method of the separation of variables. Actually, this coupling is to be expected, since the variables m and h describe the same system, although under different conditions, as defined by the regime of prepulses and test pulses employed. Not all sets of measurements agreed with the m³h, n⁴ scheme and, in the spirit of phenomenology, researchers have tinkered with it, as we have seen in Equations 2.1 to 2.3. Frankenhaeuser found that, for the node of Ranvier of a frog axon, the function m²h provided a better fit than the HH form m³h. As we saw above, Cole and Moore discovered that an n²⁵ function fitted the potassium data better than the Hodgkin and Huxley n⁴. Furthermore, it is not clear that anything is gained by splitting gNa into separate factors for activation and inactivation. It may be that these processes are not separate, but part of a single nonlinear motion. This question will be discussed in Chapter 14.
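The freedom in the choice of exponent can be made concrete with a small numerical sketch. The data below are generated, not measured: a hypothetical conductance of the form ḡnᵖ with first-order activation n(t) is produced with p = 2, and the exponent is then recovered as the slope of a log-log fit:

```python
import math

# Hypothetical conductance data g = n(t)**p_true, with the activation
# n(t) = 1 - exp(-t/tau) rising from zero. p_true, tau and the time
# points are assumptions made for illustration.
tau, p_true = 1.0, 2
ts = [0.1 * k for k in range(1, 30)]
data = [(1.0 - math.exp(-t / tau)) ** p_true for t in ts]

# In log-log coordinates, log g = p * log n + const, so the exponent p
# is the slope of a straight-line (least-squares) fit.
xs = [math.log(1.0 - math.exp(-t / tau)) for t in ts]
ys = [math.log(g) for g in data]
m = len(xs)
mean_x, mean_y = sum(xs) / m, sum(ys) / m
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
print(round(slope))  # recovers the exponent: 2
```

With noisy data, of course, quite different exponents can fit comparably well, which is why such disparate values as 2, 4 and 25 have all been defended.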
3.3. Time variation of the conductances Another troubling aspect of the conductance functions is their time variation. The explicit appearance of the variable t in the functional forms of the ion conductances implies that they are not invariant to time translations. Since systems that conserve energy are invariant to time translations,34 we may conclude that the conductances represent nonconservative systems. To illustrate this property, I have modified Cole's figure to give watches to Nat and Kal, which they had to consult along with the voltage since their action is time-dependent; see Figure 9.7.35
Figure 9.7. Kal and Nat are aware of the time as well as the voltage.
It is inconsistent to introduce time into the theory at two different levels: first, in the conductance-setting response of Nat and Kal, and then in the C dV/dt term of the HH partial differential equation.

3.4. The separation of ion kinetics

The separation of ion kinetics into separate terms, activation and inactivation, has led to controversies regarding their possible coupling and uncoupling. An analogy can be made between an action potential and the equation for a falling body. When you throw a ball straight up, it rises, stops for an instant, and falls. It may seem that you can simplify analysis of the motion by dividing it into a rising and a falling phase, but that doesn't help. The law for the entire motion is d²y/dt² = −g, which describes both the up and down movements, given the initial conditions. In a uniformly ascending or descending frame of reference, the instant at which the motion stops changes with the speed of the frame. The description of sodium current is more complicated, because the underlying laws, approximated by the HH equations, are nonlinear. But the division into activation and inactivation may be just as artificial as dividing free fall into a rising and a falling phase would be. Hodgkin and Huxley set out to describe the nonlinear kinetics of INa in terms of linear laws. They devised variables m and h for the normalized current conductance,
each obeying a linear kinetic equation. Then they formed a nonlinear function of m and h; after trial and error, they found that m³h gave them an acceptable fit. Certain combinations of prepulses and test pulses bring out one variable strongly, and certain other combinations emphasize the other, so these have become the standard operational definitions of m and h. The statement that m and h become uncoupled for a certain channel mutation is equivalent to saying that for the mutated channel under the given conditions the Hodgkin and Huxley assumption of m and h independence is not too bad. Nevertheless, the assumption that Na-channel activation and inactivation are separate and kinetically independent processes has been challenged.36 A later formulation under new operational definitions further divided h into slow and fast inactivation. If the underlying kinetics is nonlinear, it clearly would take more terms to get a better description based on linear kinetics. What complicates the picture is the fact that INa is a statistical combination of individual channel openings and closings, which are molecular events dependent on local fields, ion occupations and bonds. This of course applies also to the other ion-current systems.

3.5. We’re not out of the woods yet

Nothing in the HH equations suggests the presence of noise in the membrane voltage. Yet noise is, as we shall see in Chapter 11, a ubiquitous finding in excitable membranes. To be sure, the HH model can be, and has been, expanded by the addition of a random noise term. But this is only more data fitting. In a physical model, the noise would come out of, not be put into, the model. We are not at that point yet. Hodgkin, Katz and Huxley began with electrodiffusion and linear kinetics, and ended up with a phenomenological theory. By seeking to force linear relations on a nonlinear system they had to pay the price of a cumbersome system with many arbitrary constants.
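The construction criticized here, linearly relaxing variables combined into a nonlinear product, can be sketched numerically. The time constants below are illustrative round numbers chosen only so that activation is much faster than inactivation; they are assumptions, not the Hodgkin and Huxley values:

```python
import math

# Two linearly relaxing variables combined into a nonlinear product.
# tau_m and tau_h are illustrative round numbers (activation much
# faster than inactivation), not fitted values.
def mh_product(t, tau_m=0.2, tau_h=5.0):
    m = 1.0 - math.exp(-t / tau_m)   # activation rises from 0 toward 1
    h = math.exp(-t / tau_h)         # inactivation falls from 1 toward 0
    return m ** 3 * h

vals = [mh_product(0.01 * k) for k in range(1, 2000)]
peak = max(vals)

# The product is transient: near zero at the start, rising to a peak,
# then decaying -- the shape of I_Na under a step depolarization.
print(vals[0] < peak and vals[-1] < peak)  # True: rise then fall
print(peak > 0.5)                          # True: a sizable transient
```

The transient shape arises even though m and h each relax exponentially; the nonlinearity lies entirely in the combination m³h.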
Despite its drawbacks, it has the advantage of being a quantitative model that fits the data. This tremendous advance has been widely accepted, appreciated and applied.

4. THE CONCEPT OF AN ION CHANNEL

With the demonstration of separate functions for the fast inward current, carried by sodium ions, and the slower outward current carried by potassium ions, emphasis shifted to the study of the ion-selective pathways that give rise to these currents, thereby bridging to microscopic models. The names we give these pathways are important, as they influence the way we think about them. As we will see in Chapter 12, these ion currents are now known to pass through specialized protein molecules.

4.1. Pore or carrier—or what?

A historical dichotomy that has muddied the waters of ion conduction through membranes is the assumption that ions must pass through either a pore or a carrier. A pore is a fixed
hydrophilic pathway within a protein molecule or interacting aggregate of molecules. A carrier (transporter in current usage) is a movable structure that binds an ion at one membrane–solution interface and, after migrating to the opposite interface, releases it into the other solution. Valinomycin, a cyclic molecule that is a highly specific K+ transporter, is an example. One example of a carrier that transports sodium ions across a thick membrane is a redox-active ferrocene crown ether molecule; see Chapter 12. Tetsuo Saji and Iwao Kinoshita used 0.5 M pentaoxa[13]ferrocenophane (1) as a carrier and 0.5 mM tetrabutylammonium hexafluorophosphate as the supporting electrolyte in a liquid CH2Cl2 membrane.37 In a U-type electrochemical cell, the IN aqueous phase was 0.5 M NaCl and the OUT phase, 0.5 M NH4Cl. The transport scheme is shown in Figure 9.8. The metal ion M+ complexes at the left, high-concentration interface with (1) to form (1)M+, which crosses the membrane. At the right interface the complex is oxidized at electrode W1 and the ion is lost to the aqueous phase. The (1)+ diffuses back to the left interface, where it is reduced at electrode W2 to continue the cycle. The Na+ current rise time is measured in hours.
Figure 9.8. Electrochemical ion transport of sodium ions by a ferrocene crown ether. W1, W2, C1, and C2 are working electrodes, and R1 and R2 are reference electrodes. From Saji and Kinoshita, 1986.
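The time scales involved in the choice between the two mechanisms can be compared with a quick order-of-magnitude estimate. The channel figure of 10⁸ ions per second is taken from the discussion below; the carrier turnover of about 10⁴ ions per second, typical of valinomycin-class carriers, is an assumed round figure supplied here for contrast:

```python
# Channel turnover from the text vs an assumed carrier turnover.
channel_rate = 1e8   # ions per second through one open channel (text)
carrier_rate = 1e4   # ions per second for a mobile carrier (assumption)

time_per_ion_channel = 1.0 / channel_rate    # ~10 ns available per ion
time_per_cycle_carrier = 1.0 / carrier_rate  # ~100 us per shuttle cycle

# A shuttling carrier would need about four orders of magnitude more
# time per ion than the observed channel flux allows.
print(time_per_ion_channel)   # 1e-08
print(round(time_per_cycle_carrier / time_per_ion_channel))  # 10000
```

The mismatch of roughly four orders of magnitude is the quantitative core of the back-transport argument discussed next.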
A test that distinguishes the two types is the presence of back transport of the empty carrier. According to investigators who believe in the dichotomy, if the sodium channel fails that test, it “must be” a pore. Since the rate of 10⁸ ions per second known to traverse this channel does not allow sufficient time for back transport, the pore hypothesis is supposedly established. But the concepts of pore and carrier need not be viewed
as mutually exclusive cases. Indeed, Peter Läuger has called them “two limiting cases.”38 That suggests that intermediate cases exist, and that channels may turn out to have some properties of both types.

4.2. “Pore” and “channel”: Shifting meanings

Two names have been used for the ion-selective molecular pathway, pore and channel. As Stevens39 has pointed out, the term “pore” was once dropped in favor of the less specific term “channel” because the permeability properties did not appear to match those expected from macroscopic water-filled pores; however, when the term “channel” became applied to the protein molecule, the “hole in the protein” through which ions were believed to pass again became labeled the “pore.” The need for such a “hole” is implicitly assumed. Ion permeation in a random medium, called percolation, does not require a pore; it will be discussed in Chapter 18. By determining that the voltage-sensitive ion channel is not a carrier, we have not established that it is a pore in the structural sense, a fixed hole of definite shape, with walls and a lumen filled with water. If the word “pore” is applicable to the voltage-sensitive ion channel at all, it will have to be a functional one, a region through which ions may permeate, without any implications of a fixed structure; see the diagram of superionic conduction in Figure 6.4 of Chapter 6 and the discussion of percolation in Chapter 18. Because of the connotations of the word “pore,” we will use the terms structural pore and functional pore, or the more neutral word, “pathway.”

4.3.
Limitations of the phenomenological approach The model developed by Hodgkin and Huxley was viewed by a number of workers as unsatisfactory in that it lacks a description of the microscopic mechanism of excitation, and a number of alternatives were proposed.40 These models of excitability were reviewed in 1971 by Goldman.41 While the usual interpretation of the Hodgkin and Huxley model is that the channel has distinct mechanisms for gating, conductance and driving force, evidence from kinetics, noise and admittance studies indicates that gating and transport are not independent. The kinetics and conductance of ion channels are affected by ion concentrations, particularly of divalent ions, suggesting that gating is not a separate process from conductance.42 Although the model of Hodgkin and Huxley is not the endpoint of the quest for a theory of molecular excitability, it has greatly advanced this endeavor. By the application of the inductive method, it has provided a quantitative language for the description of excitable behavior and helped to usher in the era of molecular approaches to voltage-sensitive ion channels. But there are questions you can't ask of the HH model: What is the origin of channel noise? How does the temperature dependence arise? How can we deal with the molecular physics of the channel? A deductive theory remains to be developed. For this, we will have to examine alternative models that originate in the principles of chemistry and physics. We will begin to explore these molecular models in Chapter 14.
NOTES AND REFERENCES
1. A. L. Hodgkin and A. F. Huxley, J. Physiol. (Lond.) 116:449-472, 1952a; 116:473-496, 1952b; 116:497-506, 1952c; 117:500-544, 1952d.
2. A. L. Hodgkin and B. Katz, J. Physiol. (Lond.) 108:37-77, 1949.
3. K. S. Cole, Membranes, Ions and Impulses, Univ. of California Press, Berkeley, 1972, 267.
4. Bertil Hille, Ion Channels of Excitable Membranes, Third Edition, Sinauer, Sunderland, MA, 2001, 41.
5. Hodgkin and Huxley, 1952d. By permission of Wiley-Blackwell Publishing.
6. H. H. Ussing, Acta Physiol. Scand. 13:43-56, 1949.
7. Hodgkin and Huxley, 1952d. By permission of Wiley-Blackwell Publishing.
8. A. F. Huxley, J. Physiol. (Lond.) 148:80P-81P, 1959. By permission of Wiley-Blackwell Publishing.
9. Alwyn Scott, Nonlinear Science: Emergence and Dynamics of Coherent Structures, Second Edition, Oxford University, Oxford, 2003, 126.
10. M. F. Schneider and W. K. Chandler, Nature (Lond.) 242:244-248, 1973; C. M. Armstrong and F. Bezanilla, Nature (Lond.) 242:459-461, 1973; J. Gen. Physiol. 63:533-552, 1974; R. D. Keynes and E. Rojas, J. Physiol. (Lond.) 239:393-434, 1974.
11. B. Hirschberg, A. Rovner, M. Lieberman, and J. Patlak, J. Gen. Physiol. 106:1053-1068, 1995.
12. K. S. Cole and J. W. Moore, Biophys. J. 1:1-14, 1960; K. S. Cole, Membranes, Ions and Impulses, Univ. of California, Berkeley, 1972, 449. By permission of University of California Press.
13. Michael E. Starzak, The Physical Chemistry of Membranes, Academic, New York, 1984, 320-324.
14. Cole, 297.
15. Scott, 110-175.
16. F. Offner, A. Weinberg and C. Young, Bull. Math. Biophys. 2:89-103, 1940.
17. R. FitzHugh, Biophys. J. 1:445-466, 1961.
18. J. Nagumo, S. Arimoto and S. Yoshizawa, Proc. IRE 50:2061-2070, 1962.
19. V. S. Markin and Y. A. Chizmadzhev, Biophysics 12:1032-1040, 1967.
20. C. Morris and H. Lecar, Biophys. J. 71:192-213, 1981.
21. L. Lam, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, 1992, 9-50.
22. Kenneth S. Cole, in Theoretical and Mathematical Biology, edited by T. H. Waterman and H. J. Morowitz, Blaisdell, New York, 1965, 136-171.
23. An elementary approach to catastrophe theory is given in H. Haken, Synergetics: An Introduction, Third Edition, Springer, Berlin, 1983, 133-146.
24. E. C. Zeeman, Catastrophe Theory, Addison-Wesley, Reading, Massachusetts, 1977, 81-140.
25. Zeeman, 123.
26. Raphael S. Zahler and Hector J. Sussman, Nature 269:759-763, 1977.
27. B. Hille, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman Jr., Van Nostrand Reinhold, New York, 1971, 230-246.
28. B. Frankenhaeuser, J. Physiol. (Lond.) 135:550-559, 1957; F. A. Dodge and B. Frankenhaeuser, J. Physiol. (Lond.) 143:76-90, 1958.
29. B. Frankenhaeuser, J. Physiol. (Lond.) 169:424-430, 1963; ___, 169:445-457, 1963.
30. Yves Pichon, Denis Poussart and Graham V. Lees, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 211-226.
31. Eric Jakobsson and Rita Guttman, in The Biophysical Approach to Excitable Systems, edited by W. J. Adelman, Jr. and D. E. Goldman, Plenum, New York, 1981, 197-211.
32. Y. Palti, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman Jr., Van Nostrand Reinhold, New York, 1971, 183-193.
33. Cole, 1972, 272. By permission of University of California Press.
34. L. D. Landau and E. M. Lifshitz, Mechanics, Addison-Wesley, 1960, 13f.
35. H. R. Leuchtag, Bull. Am. Physical Soc. 29:930, 1984.
36. R. W. Aldrich, D. P. Corey and C. F. Stevens, Nature 306:436-441, 1983.
37. Tetsuo Saji and Iwao Kinoshita, J. Chem. Soc., Chem. Commun. 1986:716-717. Reproduced by permission of The Royal Society of Chemistry.
38. P. Läuger, in Membranes, Dissipative Structures and Evolution, edited by G. Nicolis and R. Lefever, John Wiley Interscience, New York, 1975, 309-318.
39. C. F. Stevens, in Proteins of Excitable Membranes, edited by B. Hille and D. M. Fambrough, Wiley, New York, 1987, 99-108.
40. K. S. Cole, 285-291; I. Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, New York, 1982; D. C. Chang, Bull. Math. Biol. 39:1-22, 1977; D. C. Chang, Physiol. Chem. Phys. 11:263-288, 1979; G. Baumann, Math. Biosci. 46:107-115, 1979; H. R. Leuchtag, Biophys. J. 62:2224, 1982.
41. D. E. Goldman, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman Jr., Van Nostrand Reinhold, New York, 1971, 337-358.
42. H. M. Fishman, Prog. Biophys. and Molec. Biol. 46:127-162, 1985.
CHAPTER 10
ADMITTANCE TO THE SEMICIRCLE
We now turn to the frequency-dependent impedance and its reciprocal, the admittance, of excitable membranes. Although these are linear variables—while many of the interesting features of voltage-sensitive ion channels are nonlinear—measurement of these quantities is important because they can help us distinguish between competing theories of ion channel function. As mentioned in Chapters 3 and 4, much pioneering work in this field was done by Kenneth S. Cole and his collaborators, who made impedance measurements on sea-urchin eggs, red blood cells and squid axons. These studies showed that, while membrane resistances vary greatly, all these cells have membrane capacitances of about 1 μF/cm². In 1939, Cole and Howard Curtis recorded the action potential and the impedance of the squid-axon membrane simultaneously, showing that the conductance of the axon membrane increases during the time the action potential passes over it.1 This suggests that impedance measurements can give us valuable clues to the conformational changes in membrane molecules and the ion-conduction process, varying nonlinearly with both time and voltage. One such clue is a phenomenon called constant-phase capacitance, which is characterized by certain semicircular plots.

1. OSCILLATIONS, WAVES AND NORMAL MODES

All waves involve vibrations, and the action potential is no exception. The movement of metallic ions across membranes can be analyzed in terms of oscillatory phenomena.

1.1. Simple pendulum

An appropriate place to begin a discussion of oscillations is the simple pendulum, a nonlinear oscillator. When the bob is directly below its support, the pendulum tends to return spontaneously from a small displacement: it is at a stable equilibrium. With the bob directly above the point of support, a small perturbation tends to increase the motion, and the system is in an unstable equilibrium. For simplicity we neglect friction and assume the swings of the pendulum are small.
When the bob, of mass m, is displaced by an angle θ, its weight mg produces a restoring force −mg sin θ tangential to the arc s of its swing.
By Newton's second law, the acceleration along s is
d²s/dt² = −g sin θ (1.1)
Since the pendulum length L is constant, s = Lθ with θ in radians, the motion of the pendulum is governed by the nonlinear equation
d²θ/dt² = −(g/L) sin θ (1.2)

The pendulum has a stable equilibrium point at θ = 0 and an unstable one at θ = π. For small θ, we may linearize this differential equation by the approximation sin θ ≈ θ. The equation of motion then simplifies to

d²θ/dt² = −ω²θ (1.3)
where we have introduced the angular frequency ω ≡ 2πf = (g/L)^½, a constant. The period of the oscillations, τ = 1/f = 2π/ω, is therefore independent of amplitude for small oscillations, a fact that was already known to Galileo, who timed the swings of a church chandelier with his own pulse. It is convenient to use the imaginary exponential solution to this linear differential equation,

θ = A e^i(ωt + φ) (1.4)

Equation 1.3 is a second-order differential equation, with two constants of integration, the phase φ and the amplitude A, both assumed real. The constants are determined by the initial conditions. Solution 1.4 is complex, since2

e^i(ωt + φ) = cos(ωt + φ) + i sin(ωt + φ) (1.5)

Because the angle θ is a real quantity, we elect to use its real part, Re[θ],

θ = A cos(ωt + φ) (1.6)

If the pendulum is initially displaced by an angle θ0 and released from rest at time t = 0, the constants become A = θ0 and φ = 0. A similar analysis can be made of an oscillatory electric circuit, such as an inductance and a capacitance in parallel. In the LC circuit, we may say by analogy to
the pendulum that the potential energy of the charged capacitor is converted to the kinetic energy of the current in the inductor, and back.

1.2. Normal modes

Not all oscillation problems are as simple as the simple pendulum and the LC circuit, but linear systems can often be reduced to a combination of simple oscillations by a change of variables. To illustrate this technique we will consider a double pendulum. Suppose we have a second bob suspended from the first. The two rods are assumed massless and the system is frictionless. For this system we obtain a set of two equations of motion. For small oscillations in a plane, these can be simplified by defining new variables, which allow us to separate the complex oscillation of the double pendulum into two motions that may be excited independently. One is the in-phase oscillation of the common center of mass and the other is the out-of-phase movement of the two bobs about the fixed center of mass. These independent motions, with two normal frequencies, are called the normal modes of the system. Even in more complicated linear problems, the number of normal modes is equal to the number of degrees of freedom of the system. It is, of course, possible that the normal frequencies of two or more distinct modes are the same. In this case the modes are referred to as degenerate modes. Nonlinear systems, which do not obey the principle of superposition, cannot be analyzed by the simple technique of separation of variables.
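For the double pendulum described above, the small-oscillation analysis can be carried through explicitly in the special case of equal masses and equal rod lengths (an assumed simplification). The linearized equations of motion take the matrix form Mθ̈ = −Kθ, and the normal frequencies follow from the eigenvalues of M⁻¹K:

```python
import numpy as np

# Linearized double pendulum, equal masses m and equal lengths L.
# The common factors m*L**2 and m*g*L are divided out, leaving g/L.
# Small-angle equations of motion: M @ theta'' = -K @ theta.
g, L = 9.8, 1.0
M = np.array([[2.0, 1.0],
              [1.0, 1.0]])
K = (g / L) * np.array([[2.0, 0.0],
                        [0.0, 1.0]])

# Normal frequencies: omega^2 are the eigenvalues of M^-1 K.
omega2 = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K))
expected = (g / L) * np.array([2.0 - np.sqrt(2.0), 2.0 + np.sqrt(2.0)])

print(np.allclose(omega2, expected))  # True: omega^2 = (g/L)(2 -+ sqrt 2)
```

The lower frequency, with ω² = (g/L)(2 − √2), belongs to the in-phase mode; the higher, with ω² = (g/L)(2 + √2), to the out-of-phase mode.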
∂²y/∂t² = v² ∂²y/∂x² (1.7)
where v is the wave velocity. This equation is subject to initial and boundary conditions, as when it is applied to a vibrating string. The solution of this linear equation is

y(x, t) = A exp[i(kx − ωt)] (1.8)

Here k is called the wavenumber or, in problems involving more than one dimension, the wavevector. In terms of the wavelength λ, k = 2π/λ. If the wave velocity v = ω/k depends on the frequency ω, the medium is said to be dispersive. Dispersion is a nonlinear distortion of the wave profile such as that observed in water waves breaking on a beach.
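The normal-mode analysis of Section 1.2 can be sketched numerically. In this illustrative sketch (unit mass, rod length and gravitational acceleration are assumed), the linearized double pendulum is reduced to a generalized eigenvalue problem whose eigenvalues are the squared normal frequencies:

```python
import numpy as np

# Small-oscillation normal modes of a double pendulum with equal
# masses m and rod lengths L (illustrative sketch; m = L = g = 1).
m, L, g = 1.0, 1.0, 1.0
M = m * L**2 * np.array([[2.0, 1.0],
                         [1.0, 1.0]])   # kinetic-energy (mass) matrix
K = m * g * L * np.array([[2.0, 0.0],
                          [0.0, 1.0]])  # potential-energy (stiffness) matrix

# Generalized eigenvalue problem K v = w^2 M v: the eigenvalues are the
# squared normal frequencies, one per degree of freedom.
w2, modes = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(w2)
w2, modes = w2[order], modes[:, order]

# Analytical result for this system: w^2 = (g/L)(2 -/+ sqrt(2)).
assert np.allclose(w2, (g / L) * np.array([2 - np.sqrt(2), 2 + np.sqrt(2)]))
# Slow mode: both bobs swing in phase; fast mode: out of phase.
assert modes[0, 0] * modes[1, 0] > 0 and modes[0, 1] * modes[1, 1] < 0
```

The slow eigenvector has both angles of the same sign (the in-phase mode) and the fast one opposite signs (the out-of-phase mode), as described above.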
CHAPTER 10
1.4. Fourier series

The pendulum is a resonant system. When driven at different frequencies (as in pushing a child on a swing), the energy transferred to the pendulum will be a maximum at its natural frequency, ω1 ≡ 2πf = (g/L)½. A vibrating string stretched between fixed supports, on the other hand, has many natural frequencies. When the entire string is moving in phase, it is vibrating at its fundamental frequency; the only stationary points, nodes, are at the fixed ends. In this, the fundamental or first harmonic, the wavelength is twice the length of the string. In the second harmonic, a node appears in the middle, the wavelength is halved (to equal the string length) and the frequency is doubled. The third harmonic has two nodes and its frequency is three times the fundamental, and so on. Thus the frequencies form the sequence ω1, ω2 = 2ω1, ω3 = 3ω1, ... When the amplitudes of the vibrations are small, the vibrating string can be considered a linear system. As a consequence, any motion of the string can be considered to be a linear superposition of its harmonics. This motion can be described by the Fourier series
y(x) = Σn bn sin(nπx/l) (1.9)

The Fourier coefficients may be obtained by integration over the period 2l, twice the length l of the string.
bn = (2/l) ∫₀ˡ y(x) sin(nπx/l) dx (1.10)
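Assuming the standard sine-series form for a string fixed at both ends, the coefficients can be computed numerically. The triangular "plucked" profile below is an illustrative choice:

```python
import numpy as np

# Fourier sine series for a string of length l fixed at both ends:
# a triangular plucked profile is expanded in the harmonics
# sin(n*pi*x/l) and then reconstructed from the series.
l = 1.0
x = np.linspace(0.0, l, 2001)
dx = x[1] - x[0]
y = np.where(x < 0.5 * l, x, l - x)            # string plucked at its midpoint

def coefficient(n):
    # b_n = (2/l) * integral over 0..l of y(x) sin(n*pi*x/l) dx
    f = y * np.sin(n * np.pi * x / l)
    return (2.0 / l) * np.sum(0.5 * (f[:-1] + f[1:])) * dx   # trapezoidal rule

b = np.array([coefficient(n) for n in range(1, 40)])
y_rec = sum(bn * np.sin(n * np.pi * x / l) for n, bn in enumerate(b, start=1))

# Only odd harmonics survive for a midpoint pluck; b1 = 8h/pi^2 with h = l/2.
assert np.max(np.abs(b[1::2])) < 1e-6
assert abs(b[0] - 4 / np.pi**2) < 1e-4
# The truncated series reconstructs the plucked shape.
assert np.max(np.abs(y - y_rec)) < 1e-2
```

The vanishing even harmonics reflect the symmetry of the midpoint pluck; each retained term is one normal mode of the string.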
1.5. The Fourier transform of a vibrating string

Although the set of frequencies in the Fourier series is infinite, it is discrete and does not represent all possible frequencies. For a continuous rather than discrete system, Equation 1.9 is converted from a series to an integral. An integral representation of an arbitrary function is given by the Fourier transform and its inverse,3

F(k) = ∫ y(x) exp(−ikx) dx;  y(x) = (1/2π) ∫ F(k) exp(ikx) dk (1.11)

where the integrals run over all x and all k, respectively.
The Fourier transform is one of a number of integral transforms that are useful
in solving differential equations and other applications. Although we have used it here to describe the motion of a stretched string in space, we will be using the Fourier transform mainly to describe events in time.

2. MEMBRANE IMPEDANCE AND ADMITTANCE

Current and voltage measurements across excitable membranes are used to model the ion-conduction process.4

2.1. Impedance decreases during an impulse
Figure 10.1. Impedance of Nitella. (a) Complex plot of impedance at various frequencies (kc = kHz) at rest (solid line) and during passage of impulses (broken line). (b) An equivalent circuit with constant membrane capacitance Cm and variable membrane resistance Rm. From Cole, 1965.
In early experiments, Cole and Curtis used the plant cell Nitella, about the size of a squid axon but with the slow conduction rate of a few centimeters per second. They measured the complex impedance at various frequencies. The complex impedance is defined as
Z(ω) = V(ω)/I(ω) = R + iX (2.1)

where ω = 2πf is the angular frequency, V the voltage, I the current, R the resistance and X the reactance. While there are a number of ways to plot the frequency response, the one that shows this behavior most clearly is the complex-plane plot, in which the imaginary part of the impedance, Im(Z) = X, is plotted as a function of the real part of the impedance, Re(Z) = R. The data appear as one or more semicircular arcs, the centers of which may be depressed below the R axis. The impedance of the Nitella membrane decreased in magnitude and recovered as the impulse went past. The impedance was measured at several frequencies, and the parameters of a circuit model were fitted to the data. A plot of the imaginary component of the impedance versus the real component is shown in Figure 10.1. The results were interpreted by an equivalent circuit in which the membrane was represented by a constant capacitance and a variable resistance in parallel; the membrane and an access resistance form a branch in parallel with a shunt resistance.5 The impedance loci of the resting and active membrane were found to lie on semicircular arcs with centers lying on a line tilted below the R axis. Cole and Curtis fitted the data with a model in which the membrane capacitance was constant but the resistance varied with frequency by a power function. The significance of this will be explored in Section 3.3 below.

In 1939, Cole and Curtis carried out the historically important experiment mentioned in Chapter 3. They simultaneously recorded the action potential and the impedance of the squid-axon membrane; see Figure 3.1. The results demonstrated that the electrical conductance of the membrane increases during the time the action potential passes over it. This implies that the membrane becomes more permeable to ions during the few milliseconds that the action potential passes over the patch of axon sampled by the electrodes. The peak conductance averaged about 40 times the resting conductance.
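The equivalent-circuit interpretation can be illustrated numerically. The sketch below (with illustrative values of Rm, Cm and an access resistance Ra, not fitted Nitella parameters) computes the impedance locus of a constant membrane capacitance in parallel with a membrane resistance and verifies that it traces a semicircle in the complex plane:

```python
import numpy as np

# Impedance locus of a membrane modelled as a constant capacitance Cm
# in parallel with a resistance Rm, in series with an access resistance
# Ra.  All parameter values are illustrative.
Rm, Cm, Ra = 1000.0, 1e-6, 100.0          # ohms, farads, ohms
f = np.logspace(0, 5, 400)                 # frequencies, Hz
w = 2 * np.pi * f
Z = Ra + Rm / (1 + 1j * w * Rm * Cm)

R, X = Z.real, Z.imag
# The locus is a semicircle of radius Rm/2 centred at Ra + Rm/2 on the R axis.
center, radius = Ra + Rm / 2, Rm / 2
assert np.allclose((R - center)**2 + X**2, radius**2)
# The reactance is capacitive (negative imaginary part) at all frequencies.
assert np.all(X <= 0)
# The arc peaks at the characteristic frequency w = 1/(Rm*Cm).
k = np.argmax(-X)
assert abs(w[k] * Rm * Cm - 1) < 0.05
```

Depressing the center of such an arc below the R axis, as in the measured data, requires replacing the ideal capacitance by a constant-phase-angle element, taken up in Section 3.3.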
These experiments were carried out with external electrodes. However, when an electrode was inserted inside the axon, a considerable overshoot surprisingly appeared; see Figure 9.4. While the previous results had confirmed Bernstein’s hypothesis that the permeability to potassium ions increases in excitation, the rise of the potential into the positive region could not be so explained; it required the inflow of sodium ions. The current flow through the membrane was calculated by cable theory, which showed the local circuit currents sketched in Figure 9.4b. These toroidal trajectories suggest that the current flow is nonlinear. Measurements at short and long times (high
and low frequencies) showed radical changes in conductance consistent with a high degree of nonlinearity. 2.2. Inductive reactance In 1941, K. S. Cole and R. F. Baker measured the longitudinal impedance of a squid axon for a range of frequencies, beginning at high frequencies. “However,” wrote Cole,6 “since frequency had long been our most powerful parameter and the axon and the bridge were connected, it was inevitable that we go to lower frequencies and make a complete series of measurements over the available range. Below 200 [Hz] the bridge could not be balanced, and only after reviewing the data did I think to add capacit[ance] to the unknown arm and obtain a balance. This showed that the capacit[ance] of the preparation was negative, and negative it continued to be down to the lower limit of 30 [Hz].” The appearance of inductive reactance in the axon membrane “was shocking to the point of being unbelievable.” Figure 10.2 shows the membrane impedance locus, measured with external electrodes and corrected for the cable impedance.7
Figure 10.2. Squid axon membrane impedance, showing capacitive and inductive reactances in relative units. The frequencies are shown in kHz. From Cole, 1965.
Hodgkin and Huxley8 likewise pointed out that the n and h processes produced inductive reactances. As the inductance was replaced by a capacitance when the membrane potential dropped below the potassium reversal potential, they concurred with Cole’s earlier conjecture in ascribing it primarily to potassium.9 The crossing of the horizontal axis to give an inductive reactance, like that of a coil of wire wrapped around an iron armature, clearly came as a surprise. Some electrophysiologists interpreted this positive reactance literally as a magnetic-field process, but it should be understood that, since these currents are perturbations on an
imposed dc voltage, the terms “capacitive” and “inductive” are inappropriate and confusing. The phase relations of the outward K+ currents at low frequencies must necessarily be opposite to those of the inward Na+ currents at high frequencies. While a half-wave in the depolarizing direction will increase the inward INa at 1 kHz, it will require a half-wave in the hyperpolarizing direction to increase the inward IK at 100 Hz. Therefore the sign of the reactance is attributable to the sign of the relevant electrochemical potential gradient across the axonal membrane. 2.3. A simple circuit model As a simple (although, as we have seen, unphysical) model of inductive behavior in the frequency domain, consider an inductance L in series with a conductance g; see Figure 10.3.10
Figure 10.3. A series conductance–inductance circuit can model the behavior of a time-varying conductance. From Fishman et al., 1981.
The differential equation in the time domain for the gL circuit is, from Equations 2.5 and 2.11 of Chapter 6,

v = L di/dt + i/g (2.2)

which can be written

τ di/dt + i = gv (2.3)

where τ = gL. For constant v, this is comparable to the Hodgkin–Huxley formulation for the n parameter associated with K+ conductance, Equation 1.5 of Chapter 9. For a constant voltage step, Equations 10.2.3 and 9.1.5 are formally identical. Therefore a linear gL circuit can duplicate the linear behavior of K+ conductance at any constant voltage.
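As a sketch of this equivalence (with illustrative values of g and L), the gL equation τ di/dt + i = gv can be integrated for a voltage step and compared with the exponential relaxation it predicts:

```python
import numpy as np

# Step response of the series g-L circuit of Figure 10.3: for a
# constant voltage step the current relaxes exponentially toward g*V0,
# just as the linearized Hodgkin-Huxley n parameter does at a fixed
# voltage.  Parameter values are illustrative.
g, L, V0 = 0.02, 50.0, 10.0     # siemens, henries, volts
tau = g * L                      # = 1.0 s

dt, T = 1e-4, 5.0
t = np.arange(0.0, T, dt)
i = np.zeros_like(t)
for k in range(1, len(t)):
    i[k] = i[k-1] + dt * (g * V0 - i[k-1]) / tau   # forward Euler step

analytic = g * V0 * (1 - np.exp(-t / tau))
assert np.max(np.abs(i - analytic)) < 1e-3   # matches i = gV0(1 - exp(-t/tau))
assert abs(i[-1] - g * V0) < 1e-2            # settles to the final value g*V0
```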
3. TIME DOMAIN AND FREQUENCY DOMAIN

In the description of time-varying impedances and admittances, Fourier analysis is an important tool. The fluctuations in the time domain are converted into frequency distributions, which allow us to study the system in the frequency domain. The time variation of a periodic function can be described by the Fourier series11
y(t) = Σn cn exp(i2πnt/T) (3.1)
The Fourier coefficients cn may be obtained by integration over the period T,
cn = (1/T) ∫₀ᵀ y(t) exp(−i2πnt/T) dt (3.2)
The sum of a Fourier series is periodic with period T. The reciprocal of the period, 1/T, is called the fundamental frequency, and n/T is the frequency of the nth harmonic. 3.1. Fourier analysis As the length of the period is increased, the fundamental frequency approaches zero while the harmonic frequencies become closer. In the limit as T becomes infinite, the harmonics may be considered as distributed continuously along the frequency scale. The infinite series may then be replaced by an infinite integral, as given by the following pair of equations:
y(t) = ∫ C(f) exp(i2πft) df;  C(f) = ∫ y(t) exp(−i2πft) dt (3.3)

where both integrals run over the entire real axis.
The Fourier integral may be regarded physically as the sum of elementary sinusoidal oscillations, exp(i2πft) = exp(iωt), multiplied by the complex amplitude factor C(f) df. The function y(t) is a wave in the time domain, which may be used to describe functions that are not periodic, such as noise voltages or currents. The function C(f), defined as the Fourier transform of y(t), is an equivalent description of the wave in the frequency domain.
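A discrete version of this time-frequency equivalence can be sketched with the fast Fourier transform (numpy's FFT stands in for the integrals here; the pulse parameters are illustrative):

```python
import numpy as np

# A pulse y(t) is carried to the frequency domain and back, and the two
# descriptions are checked for equivalence.
dt, N = 1e-3, 4096
t = np.arange(N) * dt
y = np.exp(-((t - 1.0) / 0.05)**2) * np.cos(2 * np.pi * 40 * t)  # 40 Hz burst

C = np.fft.fft(y) * dt                  # discrete approximation to C(f)
f = np.fft.fftfreq(N, dt)
y_back = np.fft.ifft(C) / dt            # inverse transform

assert np.allclose(y, y_back.real)      # the two domains are equivalent
# The spectrum peaks near the 40 Hz carrier of the burst.
peak = abs(f[np.argmax(np.abs(C))])
assert abs(peak - 40.0) < 1.0
# Parseval: the total energy agrees in both domains (see Equation 3.8).
df = 1.0 / (N * dt)
assert np.allclose(np.sum(y**2) * dt, np.sum(np.abs(C)**2) * df)
```

The last assertion anticipates Parseval's formulas of the next subsection: |C(f)|² acts as an energy density on the frequency scale.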
Basic combining rules simplify Fourier analysis. The Fourier transform of the sum of two functions is the sum of transforms of the functions. The Fourier transform of the product of two functions, say
y(t) = y1(t) y2(t) (3.4)
is
C(f) = C1(f) * C2(f) (3.5)

where

C1(f) * C2(f) = ∫ C1(f′) C2(f − f′) df′ (3.6)
The latter integral is defined as the convolution of the functions C1 and C2. In words, Equation 3.6 states that the Fourier transform of the product of two functions of time is the convolution of their individual Fourier transforms. The application of this product rule, by substituting Equation 3.6 with f = 0 into 3.5, yields the equation
∫ y1(t) y2(t) dt = ∫ C1(f) C2*(f) df (3.7)

where the asterisk refers to the complex conjugate, in which i is replaced by −i. If y1(t) = y2(t) = y(t), with Fourier transform C(f), Equation 3.7 becomes
∫ y²(t) dt = ∫ |C(f)|² df (3.8)

The physical significance of Equations 3.7 and 3.8, called Parseval's formulas, becomes clear if we let y1(t) represent a voltage and y2(t) a current. Then the integral on the left of 3.7 represents the total energy over all time; this can be computed
by integrating the product of the Fourier transform of one and the conjugate of the Fourier transform of the other over all frequencies. The function |C(f)|² therefore represents an energy density on the frequency scale. The Fourier transform of the time derivative of a function is iω times the Fourier transform of the function. The Fourier transform of the integral of a function is 1/(iω) times the transform of the function.12

3.2. The complex admittance

The complex admittance Y(ω),

Y(ω) = I(ω)/V(ω) = 1/Z(ω) (3.9)

is the reciprocal of the impedance. To obtain the admittance of the gL circuit we take the Fourier transform of differential equation 2.3, in which the d/dt operator is replaced by iω,

(1 + iωτ) I(ω) = g V(ω) (3.10)

Some caution is required to avoid confusion between the imaginary unit i = (−1)½ and the current in the time domain, i(t). Solving for V(ω), we have from (2.5),

V(ω) = (1 + iωτ) I(ω)/g (3.11)
where we have again used τ = gL. Therefore the admittance of the gL circuit is, from (2.4) and (2.6),

Y(ω) = g/(1 + iωτ) (3.12)
Equation 2.7 can be expressed in the rectangular form

Y(ω) = G + iB (3.13)
where G is the conductance and B the susceptance, or in a polar form
Y(ω) = |Y| exp(iφ) (3.14)
where the magnitude of the admittance is

|Y| = (G² + B²)½ (3.15)
and its phase angle is

φ = tan⁻¹(B/G) (3.16)

There are three common ways to present admittance data, shown for the gL circuit in Figure 10.4. In the complex plane plot, also called the Cole–Cole plot, the real and imaginary parts of the admittance are shown on the complex plane, with frequency ω as a parameter. The Bode plots present log magnitude |Y(ω)| and phase φ versus log frequency. The real and imaginary parts of the admittance may also be plotted as separate functions of frequency, as shown in Figure 10.4c.

3.3. Constant-phase-angle capacitance

The admittance of the sodium or potassium systems in a membrane, when plotted on the R-X plane, produces semicircular arcs, as in Figure 10.1. These plots were analyzed by Kenneth S. Cole and his brother, Robert H. Cole, and are known as Cole–Cole curves.13 In red blood cells, muscle, egg and other cells, these semicircles exhibit the feature that their centers are depressed below the R axis; it appears to be a ubiquitous property of biological membranes. This behavior has been related to the capacitive reactance of the system. In the analysis of these experiments, a constant-phase-angle capacitance C*, modeled by a power law with a fractional exponent, was discovered by Kenneth S. Cole. This law has also been found to apply to other systems studied in physics and chemistry, characterized as exhibiting critical phenomena, which we discuss in Chapter 15. Analogies with these systems may help us understand the basis of gating and selectivity in channels. The constant phase angle in squid axon membrane has been determined to be about 75-80°, with a membrane capacitance of about 0.7 μF/cm² over a range of 10-70 kHz.14 To analyze the constant-phase-angle behavior of systems simpler than excitable membranes, let us review the theory of dielectric relaxation.
Figure 10.4. Three ways of plotting complex admittance data for a gL circuit. (a) The complex plane plot, also known as the Cole–Cole plot. (b) The Bode plots of magnitude and phase versus frequency; fc is called the corner frequency. (c) The real and imaginary parts of the admittance vs. frequency. From Fishman et al., 1981.
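The three presentations of Figure 10.4 can be reproduced numerically for the gL admittance Y(ω) = g/(1 + iωτ); the values of g and L below are illustrative:

```python
import numpy as np

# The three presentations of the g-L admittance Y(w) = g/(1 + i*w*tau),
# with tau = g*L.  Parameter values are illustrative.
g, L = 0.01, 100.0
tau = g * L                      # 1.0 s
w = np.logspace(-2, 2, 500)      # angular frequency, rad/s
Y = g / (1 + 1j * w * tau)

# (a) Complex-plane (Cole-Cole) plot: the locus is a semicircle of
#     diameter g with its center on the real axis.
G, B = Y.real, Y.imag
assert np.allclose((G - g / 2)**2 + B**2, (g / 2)**2)

# (b) Bode plots: the magnitude falls to g/sqrt(2) and the phase to
#     -45 degrees at the corner frequency w_c = 1/tau.
k = np.argmin(np.abs(w - 1.0 / tau))
assert abs(np.abs(Y[k]) - g / np.sqrt(2)) < 1e-3
assert abs(np.degrees(np.angle(Y[k])) + 45.0) < 1.0

# (c) Real and imaginary parts versus frequency: G falls monotonically
#     from g toward zero, while -B peaks at the corner frequency.
assert np.all(np.diff(G) <= 0)
assert abs(w[np.argmax(-B)] * tau - 1.0) < 0.05
```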
4. DIELECTRIC RELAXATION

The dielectric response of a condensed-state system may be studied in terms of its response to a uniform sinusoidally varying electric field.15 The electric permittivity introduced in Chapter 6 reflects the extent to which localized charge distributions can be distorted by an external electric field. This process of polarization is linear only for field strengths up to about 10⁵ V/m. Since fields across excitable membranes are of the order of 10⁷ V/m, we must consider the case of a nonlinear relation between the electric field E and the electric induction D.

4.1. The origin of electric polarization

Materials with spherically symmetric molecules are called nonpolar. In these materials, polarizability due to an applied electric field arises from displacement of the electron
cloud relative to the nuclei (electronic polarization) and displacement of the nuclei with respect to each other (atomic polarization). In ionic crystals, the displacement of the positive and negative ions causes ionic polarization. In molecules that are not symmetrical, a displacement between the centers of positive and negative charge exists, giving them a permanent dipole moment. In these polar molecules, the polarization depends on the orientation of the dipoles. The total polarizability αT of a molecule is the sum of electronic, atomic and orientational terms. The magnitude of dipole moments is traditionally measured in debye (D) units; 1 debye = 3.33 × 10⁻³⁰ coulomb meter. The displacement of an electronic charge through 0.1 nm (= 1 Å) gives a dipole moment of 4.8 D. The permanent dipole moment of water, for example, is 1.8 D. In static electric fields or alternating fields of frequency less than 10⁹ Hz, water exhibits a relative permittivity of about 80. At frequencies above 10¹¹ Hz, ε drops to about 4.5, because the orientational polarizability is absent at ultrahigh frequencies due to the inertia of the molecule.16

4.2. Local fields affect permittivity

When a dielectric material is polarized by the application of an electric field, the dipole moments of the constituent molecules change. The dipole effect induced by a local field may be a rotation of the permanent dipole moment or a variation of the atomic or electronic polarization. The local microscopic field El acting on specific molecular sites depends on the action of the macroscopic field E on the dipole moment of the molecule and its neighbors. We recall from Chapter 6 that

D = ε0E + P (4.1)

where the polarization P is the induced dipole moment per unit volume of the dielectric.
The dielectric susceptibility χ is defined by the relation

P = ε0χE (4.2)

Since the dipole moment of each molecule is the total polarizability αT times the local field El, the polarization of a uniform material is given by

P = NαT El (4.3)

where N is the number of dipoles per unit volume. From Equations 4.1-4.3 we obtain

χ = (NαT/ε0)(El/E) (4.4)
To obtain the macroscopic permittivity from the polarizability, the ratio of the local field to the macroscopic field is required. In a gas at low pressure, where the mean distance between nearest-neighbor molecules is large enough that the dipolar interaction is negligible, El = E and χ = NαT/ε0. For dense gases, liquids, dilute solutions and solids, more difficult analyses are required.17

4.3. Dielectric relaxation and loss

As the frequency of the applied electric field is increased, some of the motions that lead to specific components of the polarization are unable to keep up. The slowest polarization mechanism is often that of dipole reorientation. When the dipoles are unable to rotate fast enough to stay in alignment with the field, the polarization decreases. The consequent reduction in permittivity, accompanied by energy absorption, is called dielectric relaxation or dispersion. The fall in orientation polarization occurs at frequencies varying from 10⁻¹ Hz for large hindered macromolecules to 10¹² Hz for small molecules. Dispersions due to atomic and electronic polarizations occur at higher frequencies. The orientational relaxation process may be modeled by assuming that the magnitude of the polarization P is composed of one part, P1, arising from atomic and electronic displacements, and a second part, P2, due to the slower dipolar reorientation. The rapid response of P1 to the applied field E implies that P1 = χ1E. The slower P2 approaches its final value χ2E at a rate proportional to the difference,
dP2/dt = (χ2E − P2)/τ (4.5)

where τ is the relaxation time. In the study of isotropic materials, it is more convenient to use the complex dielectric permittivity ε* than the complex capacitance C*, which depends on the dimensions of the sample. ε* describes the phase lag of the displacement D = ε*ε0E with respect to the external field E. In an alternating field E = E0 exp(iωt), the polarization becomes

P = [χ1 + χ2/(1 + iωτ)] E0 exp(iωt) (4.6)
where ω is the angular frequency and τ is the relaxation time for the macroscopic relaxation mechanism. The complex permittivity corresponding to this polarization is given by the Debye equation,
ε* = ε∞ + (εs − ε∞)/(1 + iωτ) (4.7)
where εs = ε(0) and ε∞ = ε(∞) are the permittivities at the limiting low and high frequencies, respectively. Equation 4.7 applies to simple molecular structures with only a single orientational process. For liquids with dipolar molecules, εs is much greater than ε∞. In water, for example, εs = 81 but ε∞ = 1.78. In nonpolar liquids, the dielectric permittivity equals the square of the optical refractive index. It is customary to define the real and imaginary components of the complex permittivity, ε1 and ε2, by the equation

ε* = ε1 − iε2 (4.8)

where the real and imaginary components are given by
ε1 = ε∞ + (εs − ε∞)/(1 + ω²τ²);  ε2 = (εs − ε∞)ωτ/(1 + ω²τ²) (4.9)
The frequency spectra of ε1 and ε2 are shown in Figure 10.5.18 The ratio of ε2 to ε1 determines the phase angle δ of the dielectric losses,

tan δ = ε2/ε1 (4.10)
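The Debye components of Equation 4.9 can be evaluated numerically; the sketch below uses the water-like limiting permittivities quoted in the text together with an assumed relaxation time:

```python
import numpy as np

# Frequency spectra of the Debye components eps1 and eps2.  The limits
# eps_s = 81 and eps_inf = 1.78 are the water values quoted in the
# text; the relaxation time tau is an illustrative assumption.
eps_s, eps_inf, tau = 81.0, 1.78, 1e-11
w = np.logspace(8, 14, 600)                  # angular frequency, rad/s
u = w * tau

eps1 = eps_inf + (eps_s - eps_inf) / (1 + u**2)
eps2 = (eps_s - eps_inf) * u / (1 + u**2)

# eps1 steps down from eps_s to eps_inf as the dipoles fail to follow the field.
assert abs(eps1[0] - eps_s) < 0.01 and abs(eps1[-1] - eps_inf) < 0.01
# The loss eps2 peaks at w = 1/tau, where it equals (eps_s - eps_inf)/2.
k = np.argmax(eps2)
assert abs(u[k] - 1.0) < 0.05
assert abs(eps2[k] - (eps_s - eps_inf) / 2) < 0.05
# The loss tangent eps2/eps1 is positive at every frequency.
assert np.all(eps2 / eps1 > 0)
```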
The dielectric losses give rise to an active component of the electric current even in a purely insulating medium, which has no free charge carriers. The magnitude of the conductivity caused by dielectric losses is equal to ωε0ε2.19 From Equations 4.9 and 4.10, we can use trigonometric identities to obtain
(4.11)
These equations can be rewritten to express ε2 and ε1 in a parametric
Figure 10.5. Frequency variation of the dielectric parameters ε1 and ε2 for a Debye relaxation process with τ = 10⁻⁶/2π sec. From Pethig, 1979.
representation of a circle,
[ε1 − (εs + ε∞)/2]² + ε2² = [(εs − ε∞)/2]² (4.12)
Figure 10.6(a) shows that this circle, based on the Debye equation 4.7, has its center located on the ε1 axis.20 Experimental data such as those of Figure 10.1 show, however, that the center may be depressed below the ε1 axis. This case, shown in Figure 10.6(b), has been analyzed by Kenneth S. Cole and Robert H. Cole.21

4.4. Cole–Cole analysis

A useful mathematical representation of the depressed semicircle is the empirical Cole–Cole form, in which the constant-phase-angle capacitance is fitted by a capacitance proportional to a fractional power of the imaginary unit i times the angular frequency ω,
Figure 10.6. The relation between the imaginary and real parts of the dielectric permittivity, as modeled by (a) Debye theory and (b) Cole–Cole theory. From Blinov, 1983.
ε* = ε∞ + (εs − ε∞)/[1 + (iωτ)^(1-h)] (4.13)
where h is a distribution parameter, which is zero when there is no distribution of relaxation times; in this case Equation 4.13 reduces to the Debye form, Equation 4.7. Since, by application of Equation 1.5,22

(iωτ)^(1-h) = (ωτ)^(1-h) [cos ½π(1-h) + i sin ½π(1-h)]
a plot of ε″, the imaginary part of ε*, as a function of the real part, ε′, is a circular arc, the radius of which from ε(∞) makes an angle of ½πh with the ε′ axis, as shown in Figure 10.6(b). The distribution parameter h increases with increasing temperature and with the number of degrees of freedom of the molecules of the dielectric material. The relaxation time τ can be obtained from the ratio of two chords to an arbitrary frequency point, u and v, as also shown in Figure 10.6(b). When several relaxation mechanisms are present in the material, additional semicircles, linked to the first, appear. Cole–Cole circles with centers below the real axis have also been found to characterize ferroelectric and liquid crystalline substances, as we shall see in Chapters 16 and 20.

5. FREQUENCY-DOMAIN MEASUREMENTS

Let us look at some experimental data. Unlike the earlier Cole–Curtis and Cole–Baker experiments, these experiments are done with internal voltage and current electrodes,
and with internally perfused axons. Squid are available for only a few months of the year, but data fitting is done year-round with the data collected in the summer. It is convenient to do the initial fitting to the mathematical model of Hodgkin and Huxley, with its parallel branches for capacitance and the different ionic conductances. A linearized version of it was developed to fit to admittance data.23

5.1. Linearizing the model of Hodgkin and Huxley

In the HH model, three variables determine the ionic conductances: sodium activation m, sodium inactivation h and potassium activation n. A voltage clamp is applied with a sinusoidally varying signal of angular frequency ω superimposed on a constant potential V0. The amplitude V1 of the sinusoid is small, say 1 mV. The differential equation for n is, from Equations 1.4 and 1.7 of Chapter 9,

dn/dt = (n∞ − n)/τn (5.1)

with similar equations for m and h. For a small perturbation in the voltage, ΔV, we can write a Taylor expansion to first order,
(5.2)
A similar expansion applied to Equation 1.14 of Chapter 9 yields the expression
(5.3)
Transforming to the frequency domain and eliminating m, h and n, W. Knox Chandler, Richard FitzHugh and Cole derived the expression
Y(ω) = iωC + g + gn/(1 + iωτn) + gm/(1 + iωτm) + gh/(1 + iωτh) (5.4)

where g is a constant given by (5.5) and gn, gm, gh and the τ's are functions of the voltage V0.
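The shape of an admittance of this form can be sketched numerically. The parameter values below are illustrative stand-ins, not fitted values from the Chandler-FitzHugh-Cole analysis; the sodium activation branch is assigned a negative conductance, reflecting the sign of the small-signal Na response at depolarized potentials:

```python
import numpy as np

# A linearized admittance of the form of Equation 5.4:
# Y(w) = i*w*C + g + gn/(1+i*w*tn) + gm/(1+i*w*tm) + gh/(1+i*w*th).
# All parameter values are illustrative assumptions.
C = 1e-6                                  # membrane capacitance, F/cm^2
g = 5e-4                                  # frequency-independent conductance, S/cm^2
gn, tn = 3e-4, 5e-3                       # potassium (n) branch, S/cm^2 and s
gm, tm = -2e-4, 0.5e-3                    # sodium activation (m): negative conductance
gh, th = 1e-4, 8e-3                       # sodium inactivation (h) branch

w = np.logspace(0, 5, 800)                # angular frequency, rad/s
Y = (1j * w * C + g
     + gn / (1 + 1j * w * tn)
     + gm / (1 + 1j * w * tm)
     + gh / (1 + 1j * w * th))

# At zero frequency the admittance is the sum of all the conductances.
assert np.allclose(Y[0], g + gn + gm + gh, atol=5e-6)
# At high frequency the i*w*C term dominates: phase approaches +90 degrees.
assert np.degrees(np.angle(Y[-1])) > 85.0
# Each relaxation term alone traces a semicircle of diameter |g_x|.
Yn = gn / (1 + 1j * w * tn)
assert np.allclose((Yn.real - gn / 2)**2 + Yn.imag**2, (gn / 2)**2)
```

With the signs chosen above, the n branch contributes the negative imaginary admittance that appears, on inversion to impedance, as the inductive reactance of Section 2.2.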
When Equation 5.4 is plotted on a plane with the real part horizontal and the imaginary part vertical, the first two terms plot as vertical lines and the conductance terms become semicircles.

5.2. Frequency response of the axonal impedance

To fit complex admittance data from an experiment, the terms iωC + g in Equation 5.4 must be subtracted by "unfolding," obtaining an inverse transform. A useful method for characterizing excitable membranes, as well as other materials, is the response to a small-signal sinusoidal voltage superimposed on a steady voltage applied to the sample.24 The membrane with its ion channels can be characterized by an equivalent circuit. The effects of the sodium and potassium systems are identified by means of toxins and ion substitutions in the internal and external media. Figure 10.7a shows the complex admittance of a squid axon under voltage clamp.25 The command voltage is the sum of a steady voltage and a small perturbation voltage of amplitude 1 mV. The perturbation stimulus is a pseudorandom binary noise sequence retrieved from a read-only memory and low-pass filtered. The plot shows the
Figure 10.7. Complex admittance magnitude |Y| and phase angle of a squid axon under voltage clamp. From Poussart et al., 1977.
magnitude and phase angle of the admittance over a frequency band from 4 to 1024 Hz. Both the Na and K systems are operative. A characteristic antiresonance occurs at depolarized potentials in the region 20-200 Hz. This is not seen in the -67 mV case, in which the phase curve remains positive; that response is similar to the response of a simple RC circuit. The antiresonance, which shifts toward higher frequencies with increasing depolarization, represents the interaction of the potassium ion conduction system with the membrane capacitance. It disappears in internal perfusion experiments in which the K system is suppressed by the replacement of internal KF with CsF. With no antiresonance, the phase angle at depolarized potentials approaches 180° at low frequencies.
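The broadband small-signal method just described can be sketched on a known one-time-constant system: a pseudorandom binary voltage perturbs a gL "membrane," and the admittance is recovered as the ratio of the Fourier transforms of current and voltage (all values are illustrative):

```python
import numpy as np

# Recovering the admittance of the g-L circuit from a pseudorandom
# binary perturbation, as a stand-in for the axon measurement.
g, L = 0.01, 5.0
tau = g * L                              # 0.05 s
dt, N = 1e-3, 4096
rng = np.random.default_rng(0)
v = rng.choice([-1e-3, 1e-3], size=N)    # +/-1 mV pseudorandom binary sequence

# Discrete first-order update for tau di/dt + i = g*v (exact for a
# voltage held constant over each step); the first period lets the
# start-up transient die away, the second is recorded.
a = np.exp(-dt / tau)
i = 0.0
record = np.empty(N)
for p in range(2):
    for k in range(N):
        i = a * i + (1 - a) * g * v[k]
        if p == 1:
            record[k] = i

Y_est = np.fft.rfft(record) / np.fft.rfft(v)   # admittance estimate per bin
f = np.fft.rfftfreq(N, dt)
Y_true = g / (1 + 2j * np.pi * f * tau)

# Over the low-frequency band the estimate recovers g/(1 + i*w*tau).
band = (f > 0.5) & (f < 10.0)
assert np.max(np.abs(Y_est[band] - Y_true[band]) / np.abs(Y_true[band])) < 0.05
```

With real data the same division of spectra is averaged over many records to suppress noise; here the system is noiseless, so a single record suffices.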
Figure 10.8. The sharpness of the pararesonance of squid axon increases when the axon is immersed in a medium of low divalent cation concentration. The seawater has been replaced with a mixture of one part seawater and three parts isotonic NaCl solution. From Tasaki, 1982.
The suppression of the Na system results in an increased admittance magnitude. The only way the removal of a conductance can increase the absolute value of the admittance is if that conductance is negative, i.e., 180° out of phase with the potassium current. The antiresonance can be brought back by adding a shunt conductance, either K or leakage. These features exhibit a striking similarity to a linearized version of the Hodgkin–Huxley model, though with some differences.
5.3. Pararesonance

The membrane impedance measured with a small ac perturbation shows a sharp maximum at one frequency. This is related to Monnier's phenomenon of pararesonance, in which nerve fibers treated with a mild chemical stimulant exhibit a marked periodicity. The threshold for stimulating an action potential drops to a minimum at the pararesonant frequency. Tasaki showed that the sharpness of the pararesonance increases greatly when the external medium is replaced with one of low divalent cation concentration, suggesting that Ca²⁺ ions modulate Na⁺ activation; see Figure 10.8.26

5.4. Impedance of the Hodgkin–Huxley axon membrane

An analysis of the small-signal impedance of squid axon membrane, based on the Hodgkin–Huxley model, clearly shows two separate resonances. The impedance modulus |Z|, plotted by David E. Clapham in three-dimensional perspective as a function of frequency and membrane voltage, is shown in Figure 10.9. The frequency ranges
Figure 10.9. Small-signal impedance of the Hodgkin–Huxley model of squid axon membrane. From DeFelice, 1981, after D. E. Clapham.
from 2 to 400 Hz and the mean membrane potential extends from -95 mV to -25 mV. At the resonance points the inward and outward currents sum to zero, giving an apparently infinite impedance.27

5.5. Generation of harmonics

When alternating voltage perturbations ΔV, imposed upon a constant voltage step across a membrane, become so large that the linear approximation is no longer adequate to describe the current response, higher harmonics are generated. For small ΔV, with driving frequency f, the fundamental (first harmonic) response, likewise at frequency f, is sufficient to describe the response. At larger perturbations, the second harmonic appears at 2f, the third at 3f and the fourth at 4f, representing current outputs proportional to (ΔV)², (ΔV)³ and (ΔV)⁴, respectively. Below 3 kV/cm, the second harmonic is less than 10% of the fundamental response (down 20 dB). At 10 kV/cm the nonlinearity is substantial. On a log-log plot, the slope of the second (third, fourth) harmonic is twice (three, four times) that of the fundamental; see Figure 10.10. The appearance of these higher harmonics is evidence of the nonlinearity of the membrane conduction process.28 It is notable that the nonlinearity is high at resting potential (-60 mV, an average field of -120 kV/cm across 5 nm) but lower at the firing threshold, at which sodium channels open. This indicates that Na-channel opening is correlated with a gain in the linearity of the conduction process.

5.6. Admittance under suppressed ion conduction

Suppression of voltage-sensitive ion conduction has resulted in the measurement of asymmetry currents; see Chapter 9, Section 2.1.
These currents have been interpreted as manifestations of the movement of membrane-bound charges associated with the opening and closing of ion channels; they are known as “gating currents.” Studies of squid axon admittance in the frequency domain made it possible to model these currents with a circuit consisting of an RC branch in parallel with the membrane.29 The origin of the gating current of voltage-sensitive ion channels may be in dielectric loss. The measurements showed a potential-dependent admittance amplitude and phase at low frequencies. At frequencies above 200 Hz, where capacitance dominates the admittance behavior, a small decrease in capacitance is observed at depolarized potentials. These results appear to contradict the measurements in the time domain, and suggest two possible explanations: the asymmetry current (a) is a displacement current that inactivates with time, or (b) arises from nonlinear delayed currents. Desorption models are possible explanations of the observed data. Electrostriction, the dimensional change of materials when exposed to an electric field, has been proposed as a mechanism to account for the capacitance decrease.30 To further explore the significance of the admittance data, we will study the effects of fluctuations, a topic to which we turn in Chapter 11.
Figure 10.10. Response of a squid axon membrane to a sinusoidal 10-Hz voltage perturbation of various amplitudes. The horizontal axis represents the peak-to-peak value of the sinusoid in microvolts. As the amplitude of the sinusoid increases, higher harmonics appear. From Fishman et al., 1981.
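The growth of the harmonics in Figure 10.10 can be sketched with a generic polynomial current-voltage nonlinearity (the coefficients are assumed for illustration, not the membrane's actual characteristic):

```python
import numpy as np

# Harmonic generation by a nonlinear current-voltage relation: a drive
# at f0 puts output at 2*f0 and 3*f0, with log-log slopes of 2 and 3
# relative to the fundamental.  The polynomial is illustrative.
f0, dt, N = 10.0, 1e-4, 100000          # 10 Hz drive, 10 s record
t = np.arange(N) * dt

def response_harmonics(amplitude):
    v = amplitude * np.sin(2 * np.pi * f0 * t)
    i = 1.0 * v + 0.5 * v**2 + 0.2 * v**3      # assumed nonlinear conductance
    spectrum = np.abs(np.fft.rfft(i)) / N
    f = np.fft.rfftfreq(N, dt)
    return [spectrum[np.argmin(np.abs(f - n * f0))] for n in (1, 2, 3)]

a1 = response_harmonics(1e-3)
a2 = response_harmonics(1e-2)           # drive 10x larger

# Fundamental grows 10x, second harmonic 100x, third harmonic 1000x,
# i.e. log-log slopes of 1, 2 and 3.
growth = np.array(a2) / np.array(a1)
assert np.allclose(np.log10(growth), [1.0, 2.0, 3.0], atol=0.05)
```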
NOTES AND REFERENCES
1. Kenneth S. Cole and Howard J. Curtis, J. Gen. Physiol. 22:649-670, 1939.
2. See, for example, Mary L. Boas, Mathematical Methods in the Physical Sciences, John Wiley & Sons, New York, 1966, 56.
3. Boas, 281-311, 601-621.
4. M. E. Starzak, The Physical Chemistry of Membranes, Academic, Orlando, 1984.
5. Kenneth S. Cole, in Theoretical and Mathematical Biology, edited by Talbot H. Waterman and Harold J. Morowitz, Blaisdell, New York, 1965, 136-171.
6. Kenneth S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1972, 77f.
ADMITTANCE TO THE SEMICIRCLE
7. K. S. Cole and R. F. Baker, J. Gen. Physiol. 24:771-788, 1941.
8. A. L. Hodgkin and A. F. Huxley, J. Physiol. (Lond.) 117:500-544, 1952d.
9. Kenneth S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1972, 298.
10. H. M. Fishman, L. E. Moore and D. Poussart, in The Biophysical Approach to Excitable Systems, edited by W. J. Adelman, Jr. and D. E. Goldman, Plenum, New York, 1981, 65-95. With kind permission of Springer Science and Business Media.
11. W. R. Bennett, Electrical Noise, McGraw-Hill, New York, 1960, 198-202.
12. See, for example, Boas, 591-615.
13. K. S. Cole and R. H. Cole, J. Chem. Phys. 9:341-351, 1941.
14. R. E. Taylor and W. K. Chandler, Biophys. Soc. Abstr., TD 1, 1962; Kenneth S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1972, 54.
15. Minoru Fujimoto, The Physics of Structural Phase Transitions, Springer, New York, 1997, 142.
16. Ronald Pethig, Dielectric and Electronic Properties of Biological Materials, John Wiley, Chichester, 1979.
17. H. Fröhlich, Theory of Dielectrics, Clarendon, Oxford, 1949.
18. Pethig, 17.
19. L. M. Blinov, Electro-optical and Magneto-optical Properties of Liquid Crystals, John Wiley, Chichester, 1983, 33.
20. Blinov, 61f. Copyright John Wiley & Sons Limited. Reproduced with permission.
21. K. S. Cole and R. H. Cole, J. Chem. Phys. 9:341-351, 1941.
22. Boas, 56f.
23. W. K. Chandler, R. FitzHugh and K. S. Cole, Biophys. J. 2:105-127, 1962.
24. J. Ross Macdonald, in Superionic Conductors, edited by G. D. Mahan and W. L. Roth, Plenum, 1976, 81-97.
25. D. Poussart, L. E. Moore and H. M. Fishman, Ann. N.Y. Acad. Sci. 303:355-379, 1977. By permission of Wiley-Blackwell Publishing.
26. Ichiji Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, 1982, 187-189. Reprinted with permission from Elsevier.
27. Louis J. DeFelice, Introduction to Membrane Noise, Plenum, New York, 1981, 359. With kind permission of Springer Science and Business Media.
28. H. M. Fishman, L. E. Moore and D. Poussart, in The Biophysical Approach to Excitable Systems, edited by W. J. Adelman, Jr. and D. E. Goldman, Plenum, New York, 1981, 65-95. With kind permission of Springer Science and Business Media.
29. H. M. Fishman, L. E. Moore and D. Poussart, Biophys. J. 19:177-183, 1977.
30. F. J. Blatt, Biophys. J. 18:43-52, 1977.
CHAPTER 11
WHAT’S THAT NOISE?
In 1827 Robert Brown, a botanist, observed pollen grains and dust particles in aqueous suspension playing out a ceaseless dance under his microscope. It was not until the development of the kinetic theory of gases in the latter part of the 19th century that Brownian movement was found to be caused by thermal motions of the molecules in the liquid environment of the particles. The quantitative behavior of Brownian particles in thermal equilibrium with the molecules of a liquid was analyzed by Albert Einstein in 1905. He later applied his results to the movement of ions in solution. The random motions of charged particles can generate electrical noise in an electric circuit. These electrical variations were detected by J. B. Johnson in 1927, and a quantitative theoretical treatment of the effect was provided by H. Nyquist in 1928. Stochastic processes, subject to random fluctuations, are frequently observed in physics, chemistry and biology.1 While ionic solutions, carbon resistors and electrode surfaces all generate electrical noise,2 the membranes of neurons and muscle cells produce intrinsic noise that reflects functional details at the molecular level. The interesting and revealing fluctuations exhibited by an electroencephalogram result from the integrated electrical activity of nerve cells in the brain. The observation of electrical fluctuations in membranes opened up a powerful new field of study, fluctuation analysis. Noise, often considered a mere nuisance, reflects molecular mechanisms and can provide structural information about systems that cannot be obtained otherwise. The experimental study of noise in excitable membranes showed the need for measurements over small areas, a need that was met with the development of patch clamping. With this technique, electrical membrane noise revealed the statistical properties of ion currents.
This led to the experimental discovery of individual ion channels, which were shown to be glycoprotein macromolecules embedded in the lipid bilayer. The study of fluctuations continues to provide further insights into the functioning of these voltage-sensitive ion channels.3
1. STOCHASTIC PROCESSES AND STATISTICAL LAWS
The behavior and properties of bodies consisting of a very large number of individual particles are governed by the laws of statistical physics, which we reviewed in Chapter 5. The large amount of information involved in the motions of the huge number of
particles that constitute a macroscopic body is difficult to manage by mechanics, classical or quantum. However, new regularities, statistical laws, appear among pairs of thermodynamic quantities, such as volume and pressure, induction and electric field, entropy and temperature. The relations of macroscopic physics are relations between such mean values. Since these quantities do not fully describe the systems but are the result of statistical averaging, they are fundamentally probabilistic rather than deterministic. Because of the enormous numbers of particles that compose macroscopic bodies, the relations between the mean values are valid to very high accuracy, so that the relations between physical quantities are usually treated as determinate. Nevertheless, deviations from the mean values do occur; the quantities fluctuate. When the bodies under consideration are small, down to molecular size, the fluctuations become significant, as in Brownian motion.4 Voltage fluctuations are measured by the use of electronic filters, networks designed to pass only a limited range of frequencies from their inputs; see Figure 11.1. By the use of many such filters with different pass bands, a fluctuating variable in the time domain can be converted into a power-density spectrum.
Figure 11.1. Oscillogram of wide-band thermal noise within a frequency range from 200 to 3,000 Hz. From Bennett, 1960.
1.1. Stochastic processes
A stochastic variable differs from a deterministic variable. A stochastic variable X is defined by specifying the set of possible values x it can take, its range or state space, and the probability distribution over this set. The probability distribution for a continuous one-dimensional set is given by P(x), the probability density function, which is non-negative. P(x) is normalized by the condition
∫ P(x) dx = 1 (1.1)

where the integral extends over the range of the stochastic variable. The average or expectation value of any function f(x) defined on the state space of the variable is
WHAT’S THAT NOISE?
223
⟨f(X)⟩ = ∫ f(x) P(x) dx (1.2)
The mean or first moment of the stochastic variable X is given by
⟨X⟩ = ∫ x P(x) dx (1.3)
and the mean square or second moment is
⟨X²⟩ = ∫ x² P(x) dx (1.4)
Higher moments are defined analogously. The variance of a probability distribution is

σ² = ⟨X²⟩ − ⟨X⟩² (1.5)
The variance is the square of the standard deviation σ. A function of a stochastic variable, as well as of an ordinary variable such as time t, is also a stochastic variable. The function represents a stochastic process, written

YX(t) = f(X, t) (1.6)
We can regard this function as an ensemble of sample functions in which the stochastic variable X is replaced by one of its sample values x. Averaging over the probability density, we find
⟨Y(t)⟩ = ∫ Yx(t) P(x) dx (1.7)
An average of particular interest is the autocorrelation function
κ(t1, t2) = ⟨Y(t1) Y(t2)⟩ − ⟨Y(t1)⟩⟨Y(t2)⟩ (1.8)
For equal times, t1 = t2, the autocorrelation function reduces to the time-dependent variance σ²(t) = ⟨Y²(t)⟩ − ⟨Y(t)⟩².
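These definitions can be checked with a small Monte Carlo estimate. The sketch below uses an arbitrarily chosen Gaussian variable (mean 0.5, standard deviation 2.0, illustrative values) to estimate the first two moments from samples and verify that Equation 1.5 recovers the variance.

```python
import random

# Monte Carlo estimates of the first two moments and the variance
# (Eqs. 1.3-1.5) for a Gaussian stochastic variable X with mean 0.5
# and standard deviation 2.0 (illustrative values only).
rng = random.Random(1)
xs = [rng.gauss(0.5, 2.0) for _ in range(200000)]

mean = sum(xs) / len(xs)                    # first moment <X>
second = sum(x * x for x in xs) / len(xs)   # second moment <X^2>
variance = second - mean ** 2               # Eq. 1.5: sigma^2
```

With 200,000 samples the estimates approach ⟨X⟩ = 0.5 and σ² = 4.0.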
1.2. Stationarity and ergodicity
If all the moments of a stochastic process are unaffected by a time shift, the process is called stationary. The mean of a stationary process is independent of time even though the instantaneous values vary. The autocorrelation function of a stationary process depends only on the absolute time difference |t1 − t2|. For many processes there is a constant time difference τc such that κ(t1, t2) is zero or negligible for |t1 − t2| > τc; τc is called the autocorrelation time. Remarkably, many extremely complicated processes lead to observable averages that obey simple laws: instantaneous current fluctuations in a resistive circuit average to Ohm's law, and the rapid irregular bombardment of a piston by gas molecules, when integrated by the inertia of the piston, averages to a motion determined by Boyle's law. The state of a monatomic gas in a cylinder is determined by specifying 3 position coordinates and 3 momentum coordinates for each of the N molecules at some initial time. In principle, the force on the piston may be computed from these 6N coordinates, the microstate of the system. The force may be represented by Yx(t), where x refers to the 6N-dimensional microstate. Statistical mechanics replaces the evolving system by an appropriate ensemble of systems obeying the same equations of motion but starting from different initial microstates x. The structure of this ensemble is specified by a 6N-dimensional phase space, in which microstate x is one point. The replacement of the single system with the ensemble converts x into the stochastic variable X. Integrating over an appropriate probability distribution PX allows us to compute the averages that we interpret as macroscopic quantities. The probability we calculate is thus an ensemble average rather than a time average. This procedure is reasonable if the function Yx(t) passes through essentially all values accessible to it in a sufficiently long time.
Justifying this ergodic assumption is a key problem of statistical mechanics. In practice an additional assumption is necessary: the random phase approximation consists of repeatedly averaging over the rapidly varying variables so as to retain only the slowly varying ones that describe the evolution of the system. The resulting equations, such as the Nernst–Planck equation of electrodiffusion and the rate equations for chemical reactions, are only approximate. The small deviations from them that are observed are the fluctuations, or noise, with which we are here concerned.
1.3. Markov processes
Stochastic processes may be specified by constructing a hierarchy of distribution functions. The probability for YX(t) to take the value y at time t is designated P1(y, t). As usual, P1 is nonnegative and obeys the normalization condition
∫ P1(y, t) dy = 1 (1.9)
The conditional probability density P1|1(y2, t2 | y1, t1) is defined as the probability density of taking on the value y2 at t2 provided that its value at time t1 is y1. In the class of stochastic processes that possess the Markov property, the conditional probability density at time tn is uniquely determined by the value yn-1 at tn-1. Therefore, for a Markov process, the entire hierarchy of probability densities can be constructed from P1(y1, t1) and the transition probability P1|1(y2, t2 | y1, t1). When a Markov process is also stationary, the transition probability depends not on the two times themselves but only on the time interval between them. For example, in a circuit consisting of a resistor, kept in a bath at constant temperature, in parallel with a capacitor, the voltage is a stationary Markov process.
2. NOISE MEASUREMENT AND ANALYSIS TECHNIQUES
In the study of fluctuations, as in impedance studies, Fourier analysis is a valuable tool and can provide information on single-channel data.5 It allows us to shift our attention from the complex random function Y(t) to the series of sinusoidal functions into which Y(t) is decomposed by spectral analysis.
2.1. Application of Fourier analysis to noise problems
Let us expand a sample function Yx(t) of the stochastic process Y(t) in a fixed time interval, 0 < t < T. Assuming that the process has a zero average, ⟨Y(t)⟩ = 0, we can write the sample function as a Fourier series of sine waves,
Yx(t) = Σn An,x sin(nπt/T) (2.1)
where the nth Fourier coefficient is given by
An,x = (2/T) ∫₀ᵀ Yx(t) sin(nπt/T) dt (2.2)
The coefficients obey the Parseval identity (see Equation 3.8 of Chapter 10)
(1/T) ∫₀ᵀ Yx²(t) dt = ½ Σn An,x² (2.3)

The equations for the stochastic variable Y(t) are obtained by averaging over all values of x with the probability density PX(x). The Fourier coefficients An,x then become
stochastic variables An with an average
⟨An⟩ = (2/T) ∫₀ᵀ ⟨Y(t)⟩ sin(nπt/T) dt = 0 (2.4)

The Parseval identity then averages to
(1/T) ∫₀ᵀ ⟨Y²(t)⟩ dt = ½ Σn ⟨An²⟩ (2.5)

2.2. Spectral density and autocorrelation
If the process Y(t) is stationary, with a zero average and a finite correlation time τc, the second moment is independent of time and may be taken out of the integral in Equation 2.5, giving
⟨Y²⟩ = ½ Σn ⟨An²⟩ (2.6)

Under these assumptions the mean square of the fluctuations is equal to half the sum of the mean squares of the Fourier coefficients. Each An refers to a single sine wave with angular frequency ωn = nπ/T. The way in which ⟨Y²⟩ is distributed among the frequencies remains to be determined. This is done by computing the spectral density of the fluctuations, S(ω), defined by
S(ω) Δω = ½ Σ ⟨An²⟩, the sum running over ω < ωn < ω + Δω (2.7)

The limits of the frequency interval from ω to ω + Δω are set in the laboratory by a high-pass filter and a low-pass filter, respectively. The period T must be chosen large enough to include many values of n within the interval Δω, while this interval itself must remain small. The spectral density is given by the cosine transform of the autocorrelation function κ(τ), Equation 1.8 with τ = t2 − t1, according to the Wiener–Khinchine theorem,
S(ω) = (2/π) ∫₀∞ κ(τ) cos ωτ dτ (2.8)
Equation 2.8 is valid when κ(τ) decreases rapidly for time intervals greater than the autocorrelation time τc. The interpretation, in Equation 3.8 of Chapter 10, of |C(f)|² as an energy density along the frequency scale makes it possible to extend Fourier analysis to waves with a nonconvergent integral. This is important in noise analyses, where the wave oscillates with finite amplitude throughout all time but cannot be resolved into discrete sinusoidal components.6 The quantity often measured in fluctuation studies is the spectral density function of the voltage, SV(f).
2.3. White noise
If two resistors of equal resistance R are connected in parallel and maintained at the same absolute temperature T, the noise voltage generated by one resistor must, in thermal equilibrium, equal that generated by the other. This equality must hold for all frequency components.7 The thermal noise of an ideal resistor, "white noise," is, according to the Nyquist relation,

SV(f) = 4RkT (2.9)

The voltage spectral density function depends on the resistance R and temperature T, but is independent of frequency. Since kT has the unit of energy = power × time, the units of SV are V²s or V²/Hz. For an arbitrary circuit, the real part of the impedance Z takes the place of R:

SV(f) = 4kT Re Z(f) (2.10)

The spectral density function of the current noise for an arbitrary circuit of admittance Y is given by

SI(f) = 4kT Re Y(f) (2.11)

In an electric circuit containing a potential barrier, such as a rectifying junction in a transistor, the current is limited to those charge carriers (electrons or ions) with sufficient energy to surmount the barrier. This produces a current distribution that is determined by the thermal fluctuations of the charge carriers, a type of noise called shot noise. Analyses based on shot noise have been used in theories of acetylcholine-induced noise in ACh receptors and photon-induced noise in photoreceptors.8
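As a worked example of the Nyquist relation, Equation 2.9, the sketch below computes the thermal noise of a resistor; the resistance, temperature and bandwidth are illustrative values, not taken from any measurement.

```python
import math

# Thermal (Johnson) noise of a resistor from the Nyquist relation,
# Eq. 2.9: S_V = 4RkT.  All element values are illustrative.
k_B = 1.380649e-23      # Boltzmann constant, J/K
R = 1.0e6               # resistance, ohms
T = 295.0               # absolute temperature, K
bandwidth = 1.0e4       # measurement bandwidth, Hz

S_V = 4 * R * k_B * T                 # V^2/Hz, flat in frequency
v_rms = math.sqrt(S_V * bandwidth)    # rms noise voltage in the band
# A 1-megohm resistor at room temperature gives ~128 nV/sqrt(Hz),
# or about 13 microvolts rms over a 10 kHz band.
```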
3. EFFECTS OF NOISE ON NONLINEAR DYNAMICS
While noise is often responsible only for blurring effects, it can actually modify the deterministic dynamics of a nonlinear system. Acting as a driving force in the equations of motion, noise can shift bifurcation points or induce transitions that have no deterministic counterpart, changing the qualitative motion of the system as a parameter is varied. Noise can arise from the combined action of a large number of variables, such as the thousands of synaptic inputs to a neuron, the random times at which ion channels open and close in a patch of membrane, or molecular motions. To obtain a simple low-dimensional description, this activity may be treated simply as a fluctuating current source.
3.1. An aperiodic fluctuation
Mathematically, noise is a quantity that fluctuates aperiodically in time. To be useful, it should have well-defined properties such as a distribution of values with a mean and other moments, and a two-point correlation function. While the noise variable takes on different values at each observation, its statistical properties remain constant. Since we usually do not have access to the noise variable experimentally, we begin with assumptions about the noise and its coupling to the dynamical state variables. Noise that arises from the process of measurement is called observational noise. If the noise is intrinsic to the dynamical system, it may be additive, simply added to the deterministic part of the dynamics. In multiplicative noise, the stochastic part of the dynamical equation is the product of the noise variable with a state variable.
3.2. The Langevin equation
The modification by noise of the response of a nonlinear dynamical system can be illustrated by mathematical models. A certain insight into the effect of noise on a system can be obtained by coupling it with Gaussian white noise. The noise term, which has a flat spectrum and a Gaussian amplitude distribution with zero mean, is simply added to the deterministic part of the equation.
If the deterministic part of the dynamical equation is nonlinear, the result is called a nonlinear Langevin equation.9 One simple example of a nonlinear Langevin equation is

dx/dt = x − x³ + ξ(t) (3.1)

where ξ(t) represents the noise process; see Figure 11.2.
Figure 11.2. Noise in a nonlinear system in two realizations of the nonlinear Langevin equation. (A) At a low noise intensity, D = 0.5, the system spends a long time fluctuating on either side of the origin before switching to the other side. (C) Increasing the noise intensity to 1.0 increases the frequency of switches. The corresponding normalized probability densities, (B) and (D), show the broadening of the distribution by the increase of the noise intensity. From Longtin, 2003.
This equation, which models the overdamped noise-driven motion of a particle in a bistable potential, has three fixed points in its deterministic part: an unstable one at the origin and stable ones at ±1. For small values of the noise intensity D, the system spends a long time fluctuating around the fixed point on one side of the origin before making the switch to the other side. Computer-generated realizations of Equation 3.1 for two values of D are shown in Figure 11.2. An increase in the noise intensity results in more frequent switches across the origin and broadens the probability density around the stable points. The mathematical device of adding noise to a continuous function, however, is not helpful in understanding the physical dynamics of a molecule such as an ion channel. Because the fluctuations are an integral part of the molecular motion, the smooth macroscopic behavior of a membrane system is simply the average of the fluctuations of its component channels.
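The switching behavior seen in Figure 11.2 can be reproduced by a simple Euler–Maruyama integration of Equation 3.1. The noise intensities, step size and thresholds below are illustrative choices, not taken from the original computation.

```python
import math
import random

def count_switches(D, n_steps=200000, dt=0.01, seed=7):
    """Euler-Maruyama integration of Eq. 3.1, dx/dt = x - x^3 + xi(t),
    with Gaussian white noise of intensity D.  A switch is counted
    only when the trajectory crosses from one well past a threshold
    inside the other well, so jitter at the origin is not counted."""
    rng = random.Random(seed)
    x, well, switches = 1.0, 1, 0
    for _ in range(n_steps):
        x += (x - x ** 3) * dt + math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)
        if well == 1 and x < -0.5:
            well, switches = -1, switches + 1
        elif well == -1 and x > 0.5:
            well, switches = 1, switches + 1
    return switches

low = count_switches(0.2)    # weak noise: infrequent switching
high = count_switches(1.0)   # raising D makes switches more frequent
```

Increasing D raises the switching rate, in line with the broadened probability densities of panels (B) and (D) of Figure 11.2.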
In the case of Equation 3.1, the maxima of the probability density are not displaced by noise, nor are new maxima created. For additive noise in one dimension there are no noise-induced states; these may, however, appear in more complicated systems.
4. NOISE IN EXCITABLE MEMBRANES
During the 1950s, physiologists became aware of irregularities in firing patterns and wondered where the randomness was coming from. As mentioned in Chapter 3, this led to the measurement of membrane noise. Measurements of membrane potential or current showed noise levels two to three orders of magnitude in excess of the thermal noise expected from the membrane resistance. Fluctuation analysis provides critical information that cannot be obtained otherwise. Among other things, it can provide a criterion as to whether a system is linear or nonlinear.
4.1. A nuisance becomes a technique
Noise analysis was first applied to biomembranes by Hans E. Derksen in 1965, and by Derksen and Alettus A. Verveen in 1966, who measured the spontaneous voltage fluctuations of a single frog node of Ranvier.10 Their studies began with an investigation of the response of axons to stochastic stimulation. When an axon is repeatedly stimulated with near-threshold pulses, the response is an increasing function of stimulus frequency, and the latency of the ensuing action potential is variable. These findings led to the discovery of different types of membrane noise, which varied with the composition of the bathing fluid. As more investigators began studying membrane noise, it became apparent that noise analysis would become an important tool for the investigation of membrane processes at the molecular level.
4.2. Fluctuation phenomena in membranes
After the initial work on the node of Ranvier, studies were carried out on crayfish axon by Denis Poussart,11 who used a flowing sucrose solution to create an "artificial node" that minimized the area under study. Harvey Fishman, with L. E. Moore and Denis Poussart, initiated the study of electrical potassium noise in squid axon by measuring from a small isolated patch of membrane.12 The group of E. Wanke, Louis DeFelice and Franco Conti explored the relationship between current noise, voltage noise and membrane impedance, finding to a good approximation13

SV(f) = SI(f) |Z(f)|² (4.1)

Yves Pichon, Denis Poussart and Graham V. Lees measured noise in cockroach giant axon.14
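Equation 4.1 can be illustrated with the simplest membrane impedance, a parallel RC circuit; the element values below are hypothetical. White current noise driven through this impedance appears as a Lorentzian voltage spectrum, anticipating Section 4.4.

```python
import math

# Eq. 4.1 in the simplest case: white current noise S_I filtered by a
# parallel-RC membrane impedance Z = R/(1 + j f/f_c).  All element
# values are illustrative, not fitted to any axon.
R = 1.0e7        # membrane resistance, ohms
C = 1.0e-9       # membrane capacitance, farads
S_I = 1.0e-27    # white current-noise density, A^2/Hz

f_c = 1.0 / (2 * math.pi * R * C)    # corner frequency, Hz

def S_V(f):
    """Voltage-noise spectrum S_V(f) = S_I |Z(f)|^2 (Eq. 4.1)."""
    return S_I * R ** 2 / (1.0 + (f / f_c) ** 2)

# White current noise emerges as a Lorentzian voltage spectrum:
# at the corner frequency S_V has fallen to half its plateau value.
half = S_V(f_c) / S_V(0.0)
```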
Bernhard Katz and Richard Miledi15 and C. R. Anderson and Charles F. Stevens16 applied noise analysis to fluctuations induced by acetylcholine at the neuromuscular junction. Further studies of noise in excitable membranes are described below.
4.3. 1/f noise
The noise spectrum Verveen and Derksen found in frog node had a spectral density
SV(f) = b/f (4.2)

where b is a constant. This type of noise, called 1/f noise or flicker noise, is also observed in semiconductors. Since large areas smooth out local fluctuations, the small area of the node of Ranvier, together with the fact that it is insulated from the rest of the axon by the myelin sheath, makes it a good choice for a noise measurement. Verveen and Derksen found that the 1/f noise disappeared when the node was bathed in isotonic KCl, leaving a white-noise spectrum. This and other experiments showed that the 1/f noise was associated with the "passive" flow of K+ across the membrane. As we will see in Chapter 15, 1/f noise is a ubiquitous phenomenon related to fractals, a recently developed mathematical discipline with applications in many fields. Fractal noise, which occurs in nonequilibrium processes, has been shown to encompass the concept of 1/f noise.
4.4. Lorentzian spectra
Further studies by Elias Siebenga and Verveen17 at more positive voltages demonstrated another component of the fluctuation spectrum,

SV(f) = c/[1 + (f/fc)²] (4.3)
a Lorentzian function, where fc is the corner frequency, at which SV(fc) = c/2. At frequencies well above the corner frequency, SV(f) drops off with the inverse square of the frequency. Since the Lorentzian component is reduced by the application of TEA+, while TTX has little effect on it, we can conclude that it is associated with the transitions of the K channels. Figure 11.3 shows noise spectra of a node, with voltage varying from a holding potential of -70 mV to a depolarized potential of +30 mV.18
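A two-state channel model makes the corner frequency concrete. For a channel that opens at rate α and closes at rate β, the open-closed telegraph noise has a Lorentzian spectrum with corner frequency fc = (α + β)/2π, a standard result of channel noise analysis. The rates and time step below are hypothetical.

```python
import math
import random

# Two-state (closed <-> open) channel with opening rate alpha and
# closing rate beta, in transitions per ms (hypothetical values).
alpha, beta = 0.5, 1.5
dt, n_steps = 0.01, 400000     # time step (ms) and number of steps
rng = random.Random(3)

state, time_open = 0, 0
for _ in range(n_steps):
    if state == 0 and rng.random() < alpha * dt:
        state = 1                      # channel opens
    elif state == 1 and rng.random() < beta * dt:
        state = 0                      # channel closes
    time_open += state

p_open = time_open / n_steps           # -> alpha/(alpha+beta) = 0.25
f_c = (alpha + beta) / (2 * math.pi)   # Lorentzian corner, in kHz
```

The simulated open probability converges to α/(α + β), and the spectrum of the simulated telegraph current would roll off above fc ≈ 0.32 kHz.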
Figure 11.3. Voltage noise spectra of a node of Ranvier in normal Ringer’s solution at room temperature. From DeFelice, 1981, after Siebenga, 1974.
With a patch clamp technique that minimizes the membrane area (see Section 6), Fishman obtained data that were fitted by a sum of white, 1/f and Lorentzian components,
SV(f) = a + b/f + c/[1 + (f/fc)²] (4.4)
As Equation 4.1 shows, the power-density spectrum of voltage noise is a reflection of the current-noise spectrum filtered by the membrane impedance. Power-density spectra of current noise in squid axon are shown in Figure 11.4.19 The figure shows measurements at two potentials, rest and depolarized by 30 mV, for two internal solutions, a normal control and one with 10 mM tetraethylammonium ion, TEA+. The TEA+
drastically reduces the potassium current and eliminates the Lorentzian hump at low frequencies. The corner frequencies are 75 Hz at 30 mV depolarization and 50 Hz at rest.20
Figure 11.4. Membrane current fluctuations in an internally perfused squid axon at rest and depolarized by 30 mV. The solid curves are controls and the dashed curves are spectra after 10 mM TEA+ is added to the perfusate. From Fishman, Moore and Poussart, 1975, 1977.
4.5. Multiple Lorentzians
The power spectra of spontaneous fluctuations in the Na current of the same axon were accurately fitted by the sum of two Lorentzian functions,
S(f) = S1/[1 + (f/ν1)²] + S2/[1 + (f/ν2)²] (4.5)
where ν1 and ν2 are the corner frequencies. Additional Lorentzian terms form a multiple Lorentzian function.
4.6. Nonstationary noise
While the simplest way to study fluctuations is to measure current fluctuations in a steady state, it is sometimes desirable to observe noise in a nonstationary state. For example, sodium channels spontaneously inactivate during a steady depolarization. A measure of the extent to which fluctuations at different times are correlated is the covariance, which specifies over what time periods the noise is correlated. An estimate of the covariance from a group of n records is given by
C(t1, t2) = [1/(n−1)] Σi [Yi(t1) − Ȳ(t1)][Yi(t2) − Ȳ(t2)] (4.6)

where the Yi are current values from the ith record and the Ȳ their means. When t1 = t2, the covariance is equal to the variance; as the time difference increases, the covariance tends to zero. Figure 11.5 shows the covariance in a node of Ranvier depolarized to -5 mV.21 The computation was from 390 current records divided into groups of 4, which were averaged.
Figure 11.5. Covariance in a node of Ranvier depolarized to -5 mV and the time course of the Na conductance. Points are experimental values and the continuous curves were calculated from a kinetic scheme. From Sigworth, 1984.
Spectral densities have also been obtained from nonstationary noise analysis, both from multiple records and from a single record of a time-varying current. Even though sodium currents are transient, the study of their fluctuations with methods requiring stationarity is possible: because inactivation is incomplete, a measurable sodium current persists for hundreds of milliseconds. Sodium channels held depolarized for seconds or minutes become unavailable for opening by a process of slow inactivation, distinct from the fast inactivation described by Hodgkin and Huxley. The study of gating by stationary fluctuations therefore requires depolarization durations that compromise between fast and slow inactivation.22
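The ensemble covariance estimator of Equation 4.6 can be sketched on synthetic records, here a decaying deterministic "mean current" plus independent Gaussian fluctuations; the record count, decay constant and noise level are purely illustrative.

```python
import math
import random

# Synthetic ensemble of current records: a deterministic decaying
# mean plus independent Gaussian noise (illustrative, not real data).
rng = random.Random(5)
n_records, n_times = 400, 50
records = [[math.exp(-t / 20.0) + rng.gauss(0.0, 0.1)
            for t in range(n_times)]
           for _ in range(n_records)]

def covariance(t1, t2):
    """Ensemble covariance estimator of Eq. 4.6."""
    m1 = sum(r[t1] for r in records) / n_records
    m2 = sum(r[t2] for r in records) / n_records
    s = sum((r[t1] - m1) * (r[t2] - m2) for r in records)
    return s / (n_records - 1)

var = covariance(10, 10)   # equal times: the variance (~0.01 here)
far = covariance(10, 40)   # well-separated times: ~0 for this noise
```

At equal times the estimator returns the noise variance; for uncorrelated fluctuations it falls to zero as the time separation grows.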
4.7. Light scattering spectra
Light scattering experiments on excited nerve bundles from the walking legs of crab showed relative increases of 2 × 10⁻⁴ per impulse in the intensity of light scattered at 90°. The light scattered at 45° from squid axon showed a relative decrease of 1.5 × 10⁻⁶ that closely followed the shape of the action potential.23 Analysis of further studies suggested that persistent changes in refractive index may be due to swelling of the axon following stimulation, possibly resulting from changes in osmotic pressure.24 Light scattering spectroscopy experiments in squid axon showed fluctuations similar to those observed in voltage fluctuation experiments.25
5. IS THE SODIUM CHANNEL A LINEAR SYSTEM?
Fluctuation analysis is a powerful tool that allows us to determine whether the axon is a linear or a nonlinear system. As we recall from Chapter 8, Hodgkin and Huxley used linear relations to describe the activation and inactivation variables, but constructed the conductances from nonlinear functions of m, n and h. This ambiguity leaves the question of ion channel linearity to be decided by further experimentation.
5.1. Admittance fits to a circuit model
In 1983 Fishman, Leuchtag and Moore compared noise and impedance spectra from the same axon. For the impedance measurement, the axon was stimulated by a Fourier-synthesized pseudorandom signal. The rationale of the study was to test the assumption of linearity that was prevalent in theoretical discussions, and to provide data on fluctuations, which reflect the microscopic motions of all the charged particles in the system. The test for linearity is whether the sodium-channel fluctuations correspond to the impedance in accord with the Nyquist relation for the voltage noise spectrum, Equation 2.10. Data fits to the steady-state sodium-conduction kinetics are based on an equivalent circuit that is a modification of the linearized Hodgkin–Huxley model; see Figure 11.6.
The model used was the linearized Hodgkin–Huxley circuit, except for two modifications, a small series resistance to model access structures, and a constant-phase capacitance, as described by Cole and Cole in 1941; see Section 4.4 of Chapter 10.
Figure 11.6. Linear circuit model for fitting sodium admittance data. Rs is the access resistance, CM* accounts for dielectric loss, and the other circuit elements conform to the linearized Hodgkin–Huxley model. From Fishman et al., 1983.
Figure 11.7a shows plots of squid-axon admittance magnitude and phase in the steady state, with potassium conduction suppressed by the replacement of internal potassium with cesium. Model fits to the Na admittance components are shown at six depolarization voltages, from 0 to 50 mV. The solid curves are the best fits of the circuit parameters to the 400 complex data points. The command voltage is the sum of a steady voltage and a small perturbation of amplitude 1 mV. The data are steady-state admittances, acquired during an 80 ms interval 5 s after the onset of the depolarizing pulse. Resistance R and reactance X are in kilohms; because capacitive reactance is negative, it is plotted below the R axis. The admittance phase-angle plot at the -60 mV holding potential, labeled 0 mV, shows a phase near zero at low frequencies, rising to about 76° at 5 kHz. The low-frequency admittance flips 180° as the axon is depolarized: for the five positive depolarizations, the phase at low frequency is 180°, decreasing to about 80° at 5 kHz. The 180° phase at low frequency indicates a steady-state negative (inward) conductance. Figure 11.7b shows complex-plane plots of impedance. The resistance is positive at 0 mV depolarization from the -60 mV holding potential, whereas larger depolarizations give a negative resistance in the low-frequency range, consistent with the ion concentration gradient, which makes inward currents easier than outward ones. Curve fitting to the data yielded the eight parameters of the circuit model. Here Rs is the access resistance, CM* the constant-phase capacitance with its phase parameter, and R1, R2, L and C are the other circuit elements, which conform to the linearized Hodgkin–Huxley model. The R1-L branch models the first-order Na inactivation in the frequency domain; the natural frequency of this branch, f1, is related to the HH inactivation time constant τh = L/R1 by f1 = (2πτh)⁻¹.
The R2-C branch models the first-order Na activation; its natural frequency, f2, is related to the activation time constant τm = R2C by f2 = (2πτm)⁻¹. The inactivation frequency parameter f1 ranges from 14 to 83 Hz, and the activation frequency parameter f2 ranges from 102 to 993 Hz. The phase parameter 1 − h = 0.90 ± 0.01 at all voltages; the semicircle is therefore depressed by an angle hπ/2 = 0.10 × 90° = 9°.26
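The conversion between branch time constants and natural frequencies is a one-line computation. The time constants below are hypothetical, chosen only to show that millisecond Hodgkin–Huxley kinetics fall within the reported 14-83 Hz and 102-993 Hz ranges.

```python
import math

def natural_frequency(tau):
    """Natural frequency f = 1/(2*pi*tau) of a first-order branch."""
    return 1.0 / (2.0 * math.pi * tau)

# Hypothetical HH-like time constants, in seconds:
f1 = natural_frequency(5.0e-3)    # tau_h = 5 ms   -> ~32 Hz
f2 = natural_frequency(0.5e-3)    # tau_m = 0.5 ms -> ~318 Hz
```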
WHAT’S THAT NOISE?
Figure 11.7. Model fits to squid-axon data. The points represent data and the lines are fits to the circuit model of Figure 11.6. (a) Admittance magnitude and phase versus frequency. (b) Complex-plane plots of impedance. From Fishman et al., 1983.
CHAPTER 11
5.2. Current noise and admittance compared
Current noise spectra taken on the same squid axon, under the same conditions, at six voltages above the holding potential are shown in Figure 11.8.
Figure 11.8. Power spectra of fluctuations in sodium current of the same axon as in Figure 11.7. The voltage changes indicated are from an estimated holding potential of -60 mV. The solid line shows a fit of the data to a double Lorentzian function. From Fishman et al., 1983.
The membrane admittance is given by
Y(f) = CM*(iω)^(1-h) + 1/(R1 + iωL) + iωC/(1 + iωR2C),   ω = 2πf,   (5.1)
where the relaxation times τ1 = (2πf1)^-1 and τ2 = (2πf2)^-1 and the natural frequencies f1 and f2 are independent of the particular circuit model chosen. The existence of conduction and fluctuation data from the same axon invites a comparison between their characteristic frequencies. Figure 11.9 shows a plot of the fluctuation corner frequencies ν1 and ν2 as functions of the admittance frequencies f1 and f2. If they represented the same system, the points would lie on the 45° line shown. The plot shows that the noise frequencies, reflecting microscopic kinetics, were consistently higher than the admittance frequencies, reflecting ion translocations. Since admittance is a linear measurement, the lack of agreement between noise and admittance frequencies shows that the microscopic process is nonlinear.
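The comparison can be illustrated numerically. The sketch below is ours; all element values are hypothetical and the constant-phase correction is omitted. It evaluates the admittance of a circuit of this type and the equilibrium current noise S_I(f) = 4kT Re Y(f) that the Nyquist relation would predict from it:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def admittance(f, R1, L, R2, C, Cm):
    """Admittance (siemens) of a linearized-HH branch circuit: a series
    R1-L branch (inactivation), a series R2-C branch (activation), and an
    ideal membrane capacitance, all in parallel.  Values are hypothetical;
    the constant-phase correction of the text is omitted for simplicity."""
    w = 2.0 * math.pi * f
    y_RL = 1.0 / complex(R1, w * L)                     # series R1-L branch
    y_RC = complex(0, w * C) / complex(1, w * R2 * C)   # series R2-C branch
    y_Cm = complex(0, w * Cm)                           # ideal capacitance
    return y_RL + y_RC + y_Cm

def nyquist_current_noise(f, T=279.0, **params):
    """Equilibrium (Johnson) current-noise density 4kT Re Y, in A^2/Hz."""
    return 4.0 * K_BOLTZMANN * T * admittance(f, **params).real

params = dict(R1=50e3, L=200.0, R2=2e3, C=0.08e-6, Cm=1e-6)  # hypothetical
for f in (10.0, 100.0, 1000.0):
    print(f"f = {f:6.0f} Hz   S_I = {nyquist_current_noise(f, **params):.3e} A^2/Hz")

# Natural frequencies of the two branches:
f1 = params["R1"] / (2 * math.pi * params["L"])        # inactivation branch
f2 = 1.0 / (2 * math.pi * params["R2"] * params["C"])  # activation branch
print(f"f1 = {f1:.1f} Hz, f2 = {f2:.1f} Hz")
```

Because the measured Na noise exceeded this equilibrium prediction and had corner frequencies above f1 and f2, such a comparison supports the nonequilibrium interpretation given in the text.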
Figure 11.9. Relationship between the lower (f1) and upper (f2) natural frequencies from admittance and the lower (ν1) and upper (ν2) corner frequencies from noise measurements, obtained from the same axon as in the two previous figures. From Fishman et al., 1983.
The conduction noise predicted from the admittance by the Nyquist relation was much smaller and of a different shape. The implication is that the Na current noise reflects a nonlinear, nonequilibrium process. A theoretical model of ion conduction in voltage-sensitive ion channels must explain the fluctuation data.

6. MINIMIZING MEASUREMENT AREA
To study the noise spectrum of squid axon, it is necessary to reduce the membrane area under study, to keep fluctuations from different parts of the membrane from averaging out. While in myelinated axons the exposed region is naturally restricted to the nodal area, in squid axon such a restriction in area must be imposed by a suitable electrode design.
6.1. Patch clamping
To record and analyze noise produced by membrane ion-channel conduction, Harvey Fishman in 1973 devised an external patch electrode, based on previous work by Alfred Strickholm, to isolate a small area of membrane.27 A membrane patch of squid axon, of area 10^-4 to 10^-5 cm^2, was electrically isolated, with seals in the megohm range, by flowing sucrose solution between two concentric glass pipettes. In 1982, the patch clamp technique was significantly improved by Erwin Neher and colleagues, who obtained seals in the gigohm range. This was achieved with membranes of cultured cells, which lack access structures such as the Schwann-cell layer of squid axon. With membrane areas of 10^-7 to 10^-8 cm^2, they were able to resolve currents in the subpicoampere range.28

6.2. Elementary stochastic fluctuations in ion channels
The patch clamp is now widely used to study transitions between conducting states of single ion channels in a large variety of biological membranes. The characterization of stochastic processes and the information obtained from them about the structures and functions of ion channels will be discussed in Chapter 12.

NOTES AND REFERENCES
1. A. H. W. Beck, Statistical Mechanics, Fluctuations and Noise, John Wiley, New York, 1976; N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1981; E. Frehland, Stochastic Transport Processes in Discrete Biological Systems, Springer-Verlag, New York, 1980.
2. W. R. Bennett, Electrical Noise, McGraw-Hill, New York, 1960.
3. L. J. DeFelice, Introduction to Membrane Noise, Plenum, New York, 1981; H. M. Fishman and H. R. Leuchtag, in Electrical Noise in Epithelial Tissues, edited by S. I. Helman and W. Van Driessche, Academic, Orlando, 1990, 3-35.
4. L. D. Landau and E. M. Lifshitz, Statistical Physics, Addison-Wesley, Reading, Mass., 1969, 343-400.
5. Joseph Patlak, in Membranes, Channels and Noise, edited by Robert S. Eisenberg, Martin Frank and Charles F. Stevens, Plenum, New York, 1984, 197-234.
6. Bennett, 207-209.
7. DeFelice, 232-249.
8. DeFelice, 329f, 380, 448f.
9. André Longtin, in Nonlinear Dynamics in Physiology and Medicine, edited by Anne Beuter, Leon Glass, Michael C. Mackey and Michèle S. Titcombe, Springer, New York, 2003, 149-189. With kind permission of Springer Science and Business Media.
10. H. E. Derksen and A. A. Verveen, Science 151:1388-1389, 1966; K. S. Cole, Membranes, Ions and Impulses, University of California Press, Berkeley, 1972, 535.
11. D. Poussart, Proc. Natl. Acad. Sci. USA 64:95-99, 1969.
12. H. M. Fishman, Proc. Natl. Acad. Sci. USA 70:876-879, 1973; H. M. Fishman, L. E. Moore and D. Poussart, J. Membr. Biol. 24:305-328, 1975.
13. D. J. M. Poussart, Biophys. J. 11:211-234, 1971; C. F. Stevens, Biophys. J. 12:1028-1047, 1972; E. Wanke, L. J. DeFelice and F. Conti, Pfluegers Arch. 347:63-74, 1974.
14. Yves Pichon, Denis Poussart and Graham V. Lees, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 211-226.
15. Bernhard Katz and R. Miledi, Nature 226:962-963, 1970.
16. C. R. Anderson and C. F. Stevens, J. Physiol. 235:655-691, 1973.
17. E. Siebenga and A. A. Verveen, in Biomembranes 3: Passive Permeability of Cell Membranes, edited by F. Kreuzer and J. F. G. Slegers, Plenum, New York, 1972, 473-482.
18. E. Siebenga, Thesis, Rijksuniversiteit te Leiden, 1974; DeFelice, 1981. With kind permission of Springer Science and Business Media.
19. H. M. Fishman, Proc. Natl. Acad. Sci. USA 70:876-879, 1973.
20. H. M. Fishman, L. E. Moore and D. Poussart, J. Membr. Biol. 24:305-328, 1975, with kind permission of Springer Science and Business Media; idem, Ann. N. Y. Acad. Sci. 303:399-423, 1977.
21. F. J. Sigworth, in Membranes, Channels and Noise, edited by Robert S. Eisenberg, Martin Frank and Charles F. Stevens, Plenum, New York, 1984, 21-48. With kind permission of Springer Science and Business Media.
22. W. Nonner, in Membranes, Channels and Noise, edited by Robert S. Eisenberg, Martin Frank and Charles F. Stevens, Plenum, New York, 1984, 117-138.
23. L. B. Cohen, R. D. Keynes and B. Hille, Nature (London) 218:438-441, 1968.
24. L. B. Cohen and R. D. Keynes, J. Physiol. (London) 212:259-275, 1971.
25. L. E. Moore, M. Tufts and M. Soroka, Biochim. Biophys. Acta 382:286-294, 1975; A. Watanabe, J. Physiol. 389:223-253, 1987.
26. H. M. Fishman, H. R. Leuchtag and L. E. Moore, Biophys. J. 43:293-307, 1983.
27. A. Strickholm, J. Gen. Physiol. 44:1073, 1961; H. M. Fishman, Proc. Natl. Acad. Sci. USA 70:876-879, 1973; H. M. Fishman, J. Membr. Biol. 24:265-277, 1975.
28. E. Neher, in Techniques of Cellular Physiology, vol. 2, P120, Elsevier/North Holland, New York, 1982, 1-16.
CHAPTER 13
DIVERSITY AND STRUCTURES OF ION CHANNELS
We are now ready to explore the vast variety and diversity of voltage-sensitive ion channels. While this book is primarily devoted to a study in depth of the way in which these channels carry out their functions, this chapter is devoted to a broad review of voltage-sensitive ion channels in living organisms. To deal with such a vast and growing array of data, it has become necessary to develop classifications of ion channels into families. The most important of these are based on the organic evolution of these protein molecules. When studying emergent phenomena in complex systems, the reductionist approach of looking at smaller and smaller components is not enough. We must remain aware of the important information we gathered at larger scales, both in the time domain and in the frequency domain. While we have been looking at voltage-sensitive ion channels as an intellectual challenge, a problem to be solved, these channels are crucial components of our lives. When channels mutate, they sometimes cause diseases, and we will briefly discuss these for each type of channel. Ion channels occur in all biological kingdoms. In plants, the photoreceptor phytochrome is involved in flowering, seed germination and other plant responses by controlling transcription of many light-activated genes dealing with chloroplast development. The 124-kDa phytochrome protein contains a chromophore that responds to light by controlling the influx of Ca2+ into the cell.1 Fungal and bacterial channels are discussed in the last two sections. The rest of the chapter will focus on voltage-sensitive ion channels found in the animal kingdom.

1. THE ROLE OF STRUCTURE
Although it would be of great value to learn the detailed structure of every channel of interest, we should not expect the knowledge of the structure by itself to tell us how the channel works. Would a person who did not know about ferromagnetism or the Earth's magnetic field understand how a compass works by taking it apart?
Clearly, structural information is only part of the picture; we need a theory as well. But structure is an important part of the picture: We cannot understand channels without knowing their structure.
There are many different types of voltage-sensitive ion channels. Although they have different structures, they function in fairly similar ways. The function, then, is likely to be a robust property, one that does not depend critically on the precise structure. A similar conclusion can be drawn from genetic experiments: Although most mutations destroy function, many do not. The function of a voltage-sensitive ion channel—a discrete onset of a unit current followed later by its cessation, in which the probabilities of these events depend strongly on the membrane voltage—is an emergent property of the system as a whole. And that system includes not just the channel itself but also the bilayer in which it is embedded, the aqueous inner and outer solutions with their solutes, both ionic and neutral, and the physical conditions, such as temperature, pressure and electric field. All channel experiments, including those with patch clamping, provide these components. Because of the large size and amphiphilic nature of ion channels, it is difficult to obtain their three-dimensional structure by either x-ray crystallography or nuclear magnetic resonance techniques. However, some structural data on bacterial ion channels and related structures have been obtained; see Section 14 of this chapter.

2. FAMILIES OF ION CHANNELS
Ion channels span a vast duration of evolutionary time, being present in bacteria and all other biological kingdoms. Their evolutionary development and differentiation are an ongoing area of study.

2.1. Molecular biology
The evolutionary theory of Darwin and Wallace was complemented by Gregor Mendel's laws of genetics and the structure of DNA elucidated by Crick and Watson on the basis of crystallographic data obtained by Rosalind Franklin. Mutations were found to be structural alterations in the DNA due to attack by chemicals or radiation.
Subsequent research showed the way in which coded information in DNA is transcribed into RNA, then translated into polypeptide chains, which are processed into protein molecules. The science of molecular biology made it possible to control this process. With artificial mutations, changes in molecular structure can be introduced to help decipher the dependence of function on particular components of the protein molecule.

2.2. Evolution of voltage-sensitive ion channels
We have seen from a number of examples that ion channels play very important roles in the physiology of all kinds of living organisms. As we know, living organisms have not always existed on Earth, but came into being by processes of geochemical and biological evolution. Little is known of the first billion years of Earth's history, but we know from fossil evidence that at the end of this period living organisms, archaebacteria, were present. Ion channels evolved in bacteria, and from them radiated to the other kingdoms. The primal function of ion channels and membrane voltages in animal cells is believed to be the equalization of their internal and external osmotic activities.2 The vast diversity in structure and function of voltage-gated cation channels appears to have originated in the processes of primordial genetic duplication and
selection. The sequencing of bacterial genomes has revealed candidates for ancestral K-channel genes. These show structural homologies with the genes of K channels from protists, jellyfish, worms, squid, insects and vertebrates. The distribution of K-channel genes in present-day organisms has been explained in terms of duplication of the gene or chromosome, chromosomal rearrangement and/or differential gene silencing (repression). Numerous related genes coexist within an organism's genome. Clusters of functionally related genes may be found in the same chromosomal region.

3. MOLECULAR BIOLOGY PROBES CHANNEL STRUCTURE
The tools developed in the revolutionary growth of molecular biology in the late 1970s have given us the possibility of analyzing the molecular structure of ion channels.

3.1. Genetic engineering of ion channels
Several types of enzymes have made it possible to manipulate nucleic acids:
• Reverse transcriptase is an enzyme that catalyzes the formation of a DNA copy of an RNA molecule.
• Restriction enzymes are bacterial enzymes that are used to cut DNA at specific sites.
Egg cells, oocytes, are valuable in ion-channel research. Unfertilized eggs of invertebrates, fish, amphibians, reptiles, birds and mammals are electrically excitable due to ion channels in their membranes. Oocytes from Xenopus laevis, yeast cells and various cell lines are used as expression systems to study the currents in ion channels inserted into their plasma membranes. Mammalian cell lines, such as Chinese hamster ovary (CHO) or human embryonic kidney (HEK) cells, are also used for heterologous expression of channels by transfection. The cells are injected with messenger RNA encoding the channel of interest, processed to remove outer layers and subjected to patch clamp or two-electrode voltage clamp recording.

3.2. Obtaining the primary structure
An important first step in seeking the structure of an ion channel is to obtain the sequence of its amino acids. This can be obtained by applying the genetic code to its DNA sequence, obtained by cloning the mRNA encoding the channel. Variant forms, isoforms, of a channel protein may be present in closely related species or in different tissues of the same species. Tissue-specific variations in channel structure may be the result of alternative RNA splicing of exons from a single gene.

3.3. Hydropathy analysis
The next step in structural analysis is to decide whether a particular region of the polypeptide is located within the region of the lipid bilayer or not. Hydrophobic regions
are likely to be in the membrane; hydrophilic regions are more likely to be in one of the aqueous regions, within or outside the cell. A hydropathy index has been established for each of the amino acids, positive for hydrophobic residues and negative for hydrophilic ones. To do a hydropathy analysis of a particular channel, the hydropathy index of each residue is plotted as a function of the residue number, which determines the location of the residue from the N terminus to the C terminus; see Figure 13.1.3 Since at least about 20 amino acids are required to span the bilayer, regions such as those from about 90 to 110 and from ~160 to 190 in Figure 13.1 are likely candidates to be membrane-spanning segments, and are therefore labeled TM1 and TM2. A problem arises when a run of hydrophobic residues is less than 20 amino acids long, such as the region labeled H5. Is it outside the bilayer despite the energy cost to the channel structure, or does it form a hairpin loop that dips into the membrane and comes out on the same side? The latter model is rather ubiquitously assumed in the literature for a number of different channels. We will discuss the question of the role of H5, also known as the P region, in Section 5 below and in Chapter 20.
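The windowed averaging just described is straightforward to implement. The sketch below is ours; it uses the Kyte-Doolittle hydropathy values, with window, cutoff and run-length choices that are illustrative rather than taken from the text:

```python
# Kyte-Doolittle hydropathy indices (positive = hydrophobic).
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
      'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
      'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9,
      'R': -4.5}

def hydropathy_profile(seq, window=19):
    """Mean hydropathy in a sliding window, indexed by window center."""
    half = window // 2
    return [sum(KD[a] for a in seq[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]

def candidate_tm_segments(seq, window=19, cutoff=1.6, min_run=20):
    """Ranges of window-center positions whose windowed hydropathy stays
    above `cutoff` for at least `min_run` residues -- candidate
    membrane-spanning segments.  Cutoff and run length are illustrative."""
    prof = hydropathy_profile(seq, window)
    half, runs, start = window // 2, [], None
    for i, v in enumerate(prof):
        if v >= cutoff and start is None:
            start = i
        elif v < cutoff and start is not None:
            if i - start >= min_run:
                runs.append((start + half, i - 1 + half))
            start = None
    if start is not None and len(prof) - start >= min_run:
        runs.append((start + half, len(prof) - 1 + half))
    return runs
```

Applied to a cloned channel sequence, runs of roughly 20 residues with persistently high windowed hydropathy mark candidates such as TM1 and TM2 in Figure 13.1.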
Figure 13.1. Hydropathy analysis of the inward rectifier channel Kir6.2 provides signposts to the topology of the channel within the membrane. Amino acid residues are plotted from the N to the C terminus. A continuous length of 20 hydrophobic amino acids is considered sufficient to span the membrane. Two such transmembrane domains, TM1 and TM2, are shaded. It has been proposed that the H5 region, not long enough to cross the bilayer, forms an inward hairpin loop. From Ashcroft, 2000.
Hydropathy analysis is a valuable tool, but the answers it gives are subject to interpretation, which requires additional information, such as that obtained from a structural model of the channel.

3.4. Site-directed mutagenesis
Much of our knowledge of the relationship of channel structure to function comes from analyzing mutant channels. Individual amino acids within a functionally important region are mutated to determine the effects of specific residues. This technique, site-directed mutagenesis, has provided information on mutations, including those involved
in disease processes. Mutations are identified by stating the one-letter abbreviation of the original amino acid, then its position in the polypeptide sequence, and then the letter for the amino acid to which it has been changed. For example, in the mutation L272F, the leucine at position 272 is replaced by a phenylalanine.
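This notation is compact enough to parse mechanically. A small sketch (the helper name and the example peptide are ours, for illustration only):

```python
import re

# Standard point-mutation notation: original residue, 1-based position,
# replacement residue, e.g. "L272F".
_MUT = re.compile(r'^([A-Z])(\d+)([A-Z])$')

def apply_mutation(seq, mutation):
    """Apply a point mutation such as 'L272F' to a one-letter sequence.
    Raises ValueError if the notation is malformed or the stated original
    residue does not match the sequence."""
    m = _MUT.match(mutation)
    if not m:
        raise ValueError(f"bad mutation notation: {mutation!r}")
    orig, pos, new = m.group(1), int(m.group(2)), m.group(3)
    if pos < 1 or pos > len(seq) or seq[pos - 1] != orig:
        raise ValueError(f"{mutation}: position {pos} is not {orig}")
    return seq[:pos - 1] + new + seq[pos:]

# Example: mutate position 3 of a short hypothetical peptide.
print(apply_mutation("MALKV", "L3F"))   # -> "MAFKV"
```

Checking the stated original residue against the sequence catches off-by-one errors between numbering conventions before any downstream analysis.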
4. CLASSIFICATION OF ION CHANNELS
The diversity of living organisms is in part due to the amazing diversity of their molecular components, including ion channels. The development of cloning has made it possible to relate the electrophysiological behavior of channels to the genes of the organism's DNA. In this way a link has been established from gene expression through molecular characteristics to physiology, pharmacology and pathology. This link is supported by a great and rapidly growing volume of experimental results. Every year new types of channels are reported, and hundreds of different channel types have been described. Clearly, any summary of channels in a book will soon become obsolete. In fact, computer data banks have been created to make this information available in a systematic, up-to-date form.

4.1. Nomenclature
In this chapter we follow convention and frequently use the term "voltage-gated" rather than the less pictorial (and perhaps less misleading) "voltage-sensitive." Voltage-gated calcium, sodium and potassium channels are abbreviated VLG Ca, VLG Na, VLG K. The designation for the chloride channels is VLG Cl- or simply VLG Cl.4 The large principal subunit of an ion channel, designated α or α1, exhibits the characterizing properties of the channel: ion conduction sensitive to voltage changes, and toxin-binding sites. Auxiliary or regulatory subunits, designated by the following letters of the Greek alphabet (β, γ, ...), modify these properties.

4.2. Classification criteria
While a goal of this book is to broaden our concepts of the physical relation between structure and function of these channels, the experimental literature is based on rather fixed concepts. Since it would be awkward to rework the language in which experimental results have been reported, we will here accept the terminology and withhold our reservations. In this spirit, we will repeat the currently accepted picture:
• Voltage-gated ion channels induce transmembrane ion flow in response to sensed changes in transmembrane potential.
• They have permanently charged or dipolar regions that are forced to move by variations in the local electric field.
• These mechanical movements of the voltage sensor couple to conformational changes associated with channel gating.
Since we are dealing with voltage-gated ion channels, the initiation of gating clearly has little or no dependence on extracellular or intracellular ligands or protein phosphorylation, in contrast to the case of ligand-gated ion channels. Nevertheless, these factors do modulate VLG responses, for example by altering the voltage dependence of gating, the duration or amplitude of the current, the activation kinetics or channel-protein interactions. Studies of the evolution of channels suggest that some channels, such as the calcium-activated potassium channel KCa, have retained voltage sensitivity while being obligately modulated by ligands, while other ligand-gated channels have lost their voltage sensitivity. While most of our functional knowledge of voltage-gated ion channels comes from excitable cells such as nerve and muscle, it is worth noting that some subtypes of these channels have been detected in non-excitable cells, such as blood and glandular cells. The roles of particular channels are linked to control of the cell's "excitability cycle." VLG Ca channels trigger the release of calcium ions from vesicles and initiate muscle contraction; VLG Na channels depolarize their membrane to initiate the regenerative action potential in propagation, while VLG K channels counteract the depolarizing effects of Na and Ca channels, thus modifying the firing patterns. Together with inwardly rectifying K channels, some classes of voltage-gated K channels contribute to the control of the resting potential, thereby influencing the excitability threshold, cell volume, basal secretion rate and maintenance of the cell's vascular or muscular "tone." Patterns of receptor-coupled modulation of voltage-gated Ca and K channel effectors affect the molecular mechanisms of plasticity and signal integration.

4.3. Toxins and pharmacology
As we have already seen, naturally occurring toxins have helped provide important advances in neurophysiology.
These include the puffer-fish poison tetrodotoxin, which impedes the activation of sodium channels at nanomolar external concentrations, and its analogs. Some chemicals, such as pronase, remove inactivation. Toxins from scorpions and coelenterates, and lipid-soluble toxins, slow or shift inactivation in Na channels.5 Toxins that selectively block certain types of Ca channels, such as the neuropeptide ω-conotoxin GVIA from the cone snail Conus geographus and the peptide toxin ω-agatoxin IVA from the venom of the funnel-web spider Agelenopsis aperta, are useful in determining the channel types involved in specific functions. Radioactively labeled hormones, drugs and other ligands are also used to identify channel types, their locations and their functions.

4.4. Voltage-sensitive ion channels and disease
Brain surgery and drugs have provided important information on brain function. The drug culture has produced results destructive to the individual addicts but serendipitous
to science, as in the study of Parkinson's disease.6 The study of Alzheimer's disease and prion diseases has focused attention on the medical implications of ion-channel research. The study of ion channels allows us to relate the function, and the malfunction, of cells to the molecular level. Certain diseases are caused by malfunctions of voltage-sensitive ion channels. Mutations in the genes that encode these protein molecules lead to defects in channel function, resulting in nerve and muscle diseases called channelopathies. Epilepsy, migraine headaches, cardiac dysrhythmia and some forms of muscular dystrophy are all suspected of resulting from mutations in ion-channel genes. Research in ion channels has opened the possibility of new therapies that address the underlying pathophysiology of these diseases.7 Since ion channels play a vital role in many physiological processes, it is not surprising that a number of diseases in humans and animals have been linked to mutations in genes that code for voltage-sensitive ion channels. The mutant channels frequently provide important clues to the roles that particular types of channels play in the body. These naturally occurring mutations are often simulated in animal experiments by artificial mutations.

5. POTASSIUM CHANNELS: A LARGE FAMILY
Potassium channels are quaternary structures of four polypeptides. The probability of channel opening is regulated at the molecular level by charged amino-acid residues within transmembrane segments that interact with the electric field. Each of the four subunits of a voltage-sensitive ion channel contains a positively charged segment, S4, that is associated with this interaction and is therefore said to be part of the voltage sensor of the channel. A negatively charged glutamate residue of segment S2 has also been shown to be involved in the gating process. Experimental evidence shows that the four S4 segments traverse virtually the entire electric field across the activated membrane. Analysis of gating-current experiments reveals that conformational changes occur at voltages less depolarized than those required to initiate current flow. This suggests that the channel passes through several nonconducting conformations before becoming conducting.8
Experimental evidence shows that the four S4 segments traverse virtually the entire electric field across the activated membrane. Analysis of gating-current experiments reveals that conformational changes occur at voltages less depolarized than those required to initiate current flow. This suggests that the channels passes through several nonconducting conformations before becoming conducting.8 5.1. Shaker and related mutations of Drosophila The Shaker gene of the fruitfly Drosophila melanogaster contributed significantly to our understanding of the molecular biology of voltage-sensitive K+ channels. The mammalian counterparts of this and related Drosophila channels are KV1 (Shaker), KV2 (Shab), KV3 (Shaw) and KV4 (Shal). 5.2. Diversity of potassium channels Potassium channels, which set the resting potential and modulate the action potential of excitable cells, exist in a plentiful variety. Over 100 subunits constituting molecular
components of K channels have been identified, and the number continues to grow. In addition to the principal, α, subunits that provide the ion-conducting pathways, many K channels possess auxiliary protein subunits that modify channel properties, as found also in Ca and Na channels. Identical polypeptides may assemble to form functional homomultimeric channels, or different (but similar) polypeptides may assemble to form heteromultimeric channels, such as the G-protein-activated K (GIRK1 or Kir3.1) channels. Potassium channels of several types are known:9
• voltage-gated K channels, designated Kv, activated by membrane depolarization;
• Ca-activated K channels, whose open probability depends on the internal Ca2+ concentration;
• inward rectifying K channels, named Kir, which favor the influx over the efflux of potassium ions;
• "leak" K channels, which are not controlled by voltage or ligands; and
• Na-activated K channels.
Figure 13.2. Schematic representation of the three groups of K+ channel principal subunits, those with six (6TMD), four (4TMD) and two transmembrane domains (2TMD). Each of these groups is divided into families, which can be subdivided. From Coetzee et al., 1999.
Figure 13.2 is a schematic representation of the three groups of K+ channel principal subunits. They are classified into three groups in terms of their predicted membrane topology—those that have six transmembrane domains (TMDs), those with four transmembrane domains and those with only two transmembrane domains. Each group of principal subunits is divided into discrete families on the basis of sequence similarity. A functional classification places the voltage- and Ca2+ -regulated K+
channels in the 6TMD group, the “leak” K+ channels in the 4TMD group, and the inward rectifier K+ (Kir) channels in the 2TMD group. Also shown in the Figure are some of the auxiliary subunits that have been shown to alter expression levels and/or kinetics of K+ channels. They are grouped together with the principal subunits with which they interact. 5.3. Three groups of K channels Experimental evidence shows that the subunit is a tetrameric assembly with fourfold symmetry. This assembly is similar to the four internally homologous repeats of the Ca and Na channels. The principal subunits are classified by structure into three groups, characterized by two transmembrane (TM) domain proteins, four TM domain proteins and six TM domain proteins. Figure 13.3 shows a potassium channel from the soil nematode Caenorhabditis elegans. The channel shown, from gene n2P16, is a member of the 4TM family with two pore-forming regions. While the amino and carboxy termini are variable within the family, the membrane-spanning domains are highly conserved.10
Figure 13.3. Structure of a potassium channel from the nematode Caenorhabditis elegans. From Wang et al., 1999.
The α subunit of the inward rectifier channel is a tetramer of two-transmembrane regions. Seven subfamilies of the Kir family are known. The α subunit of the "leak" K channel is a dimer of four-transmembrane regions, which has two P domains. Although the current through these channels responds to changes in extracellular K+ concentration according to the Goldman-Hodgkin-Katz equation, at least some members of this family can be modulated by arachidonic acid or H+.

5.4. Voltage-sensitive potassium channels
The six-TMD proteins are components of voltage-gated Kv channels, Ca2+-activated K channels and members of several other channel families. Voltage-sensitive potassium
channels consist of ion-conducting α subunits, which may associate with an accessory β subunit. Delayed rectifier potassium channels from frog node display a relative selectivity sequence Tl+ > K+ > Rb+ > NH4+ > Cs+ > Li+ > Na+. The channel is composed of a tetramer of α subunits, each of which contributes to the formation of the ion-conducting pathway, or pore, formed around the central axis of the protein. Each subunit contains six transmembrane segments, S1-S6, which are highly conserved, and intracellular N and C termini of variable length and composition.
Figure 13.4. Single-channel currents elicited by steps from -90 to +50 mV for the wild-type channel and for channels with G, V and I substituted for A at position 463 of Shaker channels. The current amplitudes increase with increasing bulk of the hydrophobic sidechains; that for A463I is roughly twice that of A463G. From Zei et al., 1999.
The S4 segment has a positively charged amino acid at every third position, and so is thought to be involved in voltage-dependent activation of the channel. The linker between the S5 and S6 segments, called the H5 region or P loop, possesses a highly conserved "signature" sequence of amino acid residues, TXTTXGYG, which is necessary for K+ selectivity. In particular, the glycine-tyrosine-glycine (GYG) motif is almost universally conserved in the P regions of potassium-selective channels. Observations that mutations in this region can alter ion-conduction properties suggest that the S6 helices of voltage-sensitive K+ channels form part of the channel pore. Substitutions at position A463 of Shaker channels alter K+ affinity, the rate of C-type inactivation, the efficacy of internal blocking agents and the interaction of external permeant ions with channel closing. Experiments with bacterial channels (Section 12 of this chapter) reveal that this residue inhabits a region of tight packing. Figure 13.4 shows single-channel currents elicited by steps to +50 mV for the wild-type channel and for channels with glycine, valine and isoleucine substituted for alanine.11 The figure shows ball-and-stick models of the sidechains; note the branched chains of V and I.

Four Kv3 genes, related to the Drosophila Shaw gene, have been identified in mammals. The products of the cloned Kv3.1 and Kv3.2 genes express delayed rectifier currents, while Kv3.3 and Kv3.4 express transient outward currents that inactivate rapidly, called A-type currents. The first three are found in the central nervous system, while Kv3.4 is found in the sympathetic ganglia and skeletal muscles. Figure 13.5 shows that, when expressed in transfected HEK293 cells, Kv3.1, .2 and .3 lack fast inactivation while the Kv3.4 currents are clearly transient.
The appearance of fast inactivation depends not only on the genetics of the ion channel but also on the cell in which it is expressed.12 Fast inactivation in Kv3.4 or Kv1.4 channels can be removed by exposure of the cytoplasmic surface to mild oxidizing conditions such as air exposure. Inactivation can be restored by application of reducing agents. This effect depends on redox of a cysteine residue in the N-terminal (“ball”) domain. A role as part of an oxygen sensor complex has been proposed. Other chemical agents also modulate the inactivation process.13

As we saw, KV channels are composed of four subunits, each with six transmembrane segments. If the subunits are identical, encoded by the same mRNA, the channel is a homo-oligomer. However, not all KV channels are homomeric. A channel composed of different subunits is a hetero-oligomer. Heteromeric KV channels, sometimes called chimeras, may be produced by genetic engineering, but also exist in native cells.

Microwave irradiation alters the kinetics of Ca2+-activated K channels, KCa. At [Ca2+] = 33 μM and Vm = +30 mV, the irradiation shortens the open times and lengthens the closed times, decreasing the open-state probability Po. However, at low calcium concentrations, irradiation increases Po. Switching off the low-power irradiation slowly restores Po to its value before irradiation. The electromagnetic field appears to alter cooperativity and binding of internal Ca2+.14
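Operationally, the open-state probability Po is the fraction of recording time the channel spends in the open state. The following minimal sketch computes Po from idealized dwell-time lists; the numbers are invented for illustration, not data from the microwave experiments described above.

```python
# Estimating single-channel open probability Po from idealized dwell times.
# All dwell-time values below are illustrative, not experimental data.

def open_probability(open_dwells, closed_dwells):
    """Po = total open time / total recording time."""
    total_open = sum(open_dwells)
    total = total_open + sum(closed_dwells)
    return total_open / total

# Control record: long openings, short closures -> high Po
po_control = open_probability([12.0, 8.0, 10.0], [2.0, 3.0])
# "Irradiated" record: shortened openings, lengthened closures -> lower Po
po_irradiated = open_probability([4.0, 3.0, 3.0], [8.0, 12.0])

print(f"Po control    = {po_control:.2f}")
print(f"Po irradiated = {po_irradiated:.2f}")
```

Shortening open dwells while lengthening closed dwells necessarily lowers Po, which is the pattern reported at high internal calcium.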
282
CHAPTER 13
Figure 13.5. K+ currents through Kv3 channels during depolarizations from a holding potential of -80 mV. The depolarizing pulses ranged from -40 to +40 mV in 10-mV increments. From Rudy et al., 1999.
5.5. Auxiliary subunits

Several types of K channels have auxiliary subunits, some integral and others peripheral to the membrane. The best known of these are the Kvβ subunits of the voltage-gated K channels. One isoform of these subunits, Kvβ1, induces inactivation in channels that are otherwise noninactivating; this property is ascribed to a variable inactivating ball domain. Some K channels contain an additional auxiliary subunit, Kvβ2, which accelerates inactivation. Both of these subunits may also shift the voltage of activation. Auxiliary subunits also enhance expression of the channels by acting as chaperones during channel biosynthesis. MinK is a 15-kDa single-transmembrane protein that coassembles with a voltage-sensitive ion channel in cardiac and other cells; see Figure 13.2.

5.6. Inward rectifiers

The potassium channels of the squid axon, whose currents were discussed in Chapter 4, surprisingly are not members of the KV family, but belong to the family of inward rectifiers, Kir. These channels are much smaller than the voltage-gated channels, with only 390-500 amino acids and molecular masses of about 40 kDa. The property of inward rectification that these channels exhibit, by which outward currents are much smaller than inward currents, is shown in Figure 13.6.15
Figure 13.6. Inward rectification in a Xenopus oocyte injected with mRNA encoding the inward rectifier Kir 2.1. The currents (left) were elicited by a series of voltage steps. Steady state currents are shown versus voltage relative to holding potential (right). From Ashcroft, 2000, after Carina Ämmälä.
We recall from Chapter 7 that the rectification property caused concern in the fitting of squid axon data with electrodiffusion. The K channels of squid axon are of the Kir type. The rectification property of Kir channels has been found to arise from the presence of Mg2+ and other ions with multiple positive charges in the internal solution, including charged polyamines such as the spermine4+ ion. These ions bind near the internal surface of the channel (a region that has been referred to as “the inner mouth of the pore”). As expected from electrodiffusion, membranes with Kir channels show a linear current–voltage relation in symmetrical K+ solutions. Intracellular Mg2+ introduces rectification dependent on the Mg2+ concentration.

The cause of inward rectification appears to be not a “block” but an effect involving electrostatic repulsion due to multivalent cations entering the channel from inside the membrane. This effect, illustrated in Figure 6.10 of Chapter 6, is the creation of vacancies at monovalent sites due to the presence of divalent ions. Ions of valency +2 or higher would replace K+, leaving vacant sites. Since neither the divalents nor the vacancies contribute to the potassium current, the outward IK would be reduced. Just as depolarization drives these multivalent cations into the channel, hyperpolarization drives them out of the channel, removing the effect of electrostatic repulsion on the K+ sites.

Kir channels share with Kv channels the stretch of amino acids between two membrane-spanning segments known as the H5 or P region, which has been shown to project into the pore. The interaction with the scorpion toxin Lq2 is similar in Kir, Kv and KCa channels, suggesting a broadly similar pore structure. C. Dart and collaborators have produced a structural model of this region in Kir2.1. The model of residues 138-149, ETQTTIGYGFRC, is shown in Figure 13.7.16
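The voltage-dependent occlusion by internal multivalent cations can be caricatured with a Woodhull-style block acting on an otherwise linear current–voltage relation. All parameter values below (conductance, zero-voltage Kd, electrical distance δ) are invented for illustration; this is a sketch of the qualitative mechanism, not a fit to Kir data.

```python
import math

def ik_with_mg_block(v_mv, mg_mm, g_ns=1.0, e_k_mv=0.0,
                     kd0_mm=10.0, delta=0.5, temp_k=293.15):
    """Ohmic K+ current reduced by voltage-dependent Mg2+ occupancy.
    Woodhull model: Kd(V) = Kd(0) * exp(-delta * z * V / (RT/F)),
    with z = +2 for Mg2+, so depolarization drives the blocker into
    the pore and suppresses the outward limb of the I-V curve."""
    rt_f_mv = 8.314 * temp_k / 96485.0 * 1000.0  # ~25.3 mV near 20 C
    kd = kd0_mm * math.exp(-delta * 2 * v_mv / rt_f_mv)
    fraction_unblocked = 1.0 / (1.0 + mg_mm / kd)
    return g_ns * (v_mv - e_k_mv) * fraction_unblocked  # pA

# Symmetric K+ (E_K = 0): without Mg2+ the I-V relation is linear...
print(ik_with_mg_block(50.0, 0.0))    # outward current, unsuppressed
print(ik_with_mg_block(-50.0, 0.0))   # inward current, same magnitude
# ...with internal Mg2+ the outward limb is selectively suppressed:
print(ik_with_mg_block(50.0, 2.0))
print(ik_with_mg_block(-50.0, 2.0))
```

With Mg2+ present, the computed outward current at +50 mV is a fraction of its unblocked value, while the inward current at -50 mV is nearly unchanged: inward rectification.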
Figure 13.7. A molecular model of the H5 region of the inward rectifier potassium channel Kir2.1, viewed from the outside. From C. Dart et al., 1999.
The residues predicted to have sidechains projecting into the pore are T141, T142, I143, Y145 and F147. Differences between this picture and that of the bacterial channel KcsA suggest that the model proposed by Doyle et al. (see Section 13 of this chapter) may not be applicable to all K channels.
5.7. Potassium channels and disease

When the motions of an etherized mutant fruit fly evoked images of the gyrations of a go-go dancer, the gene for the mutation became known as eag, for ether-à-go-go. A family of genes related to eag, called erg and elk, was later isolated from Drosophila. A human eag-related gene, HERG, was found to be strongly expressed in the heart, where it has been linked to a form of the long QT syndrome.17 Mutations in voltage-sensitive potassium channels have been found to be responsible for familial periodic cerebellar ataxia. KCNQ1 and minK mutations not only prolong the action potential but may also cause deafness. Benign familial neonatal epilepsy has been associated with mutations in KCNQ2 and KCNQ3.18

6. VOLTAGE-SENSITIVE SODIUM CHANNELS: FAST ON THE TRIGGER

The voltage-sensitive sodium channel carries the initial inward current of the action potential. The rush of sodium ions through a hydrophilic pathway in the channel drives
the cell rapidly from its negative resting voltage to neutrality and beyond, to a positive spike. The inactivation of the sodium channel, usually together with the opening of a parallel pathway carrying potassium ions outward, terminates the action potential. The voltage-gated sodium channel in squid axon is highly permeable to hydrogen, sodium and lithium ions, and much less permeable to all other monovalent cations. In frog node, the permeability ratios above 0.2 are in the sequence H+ > Na+ > HONH3+ > Li+ > H2NNH3+ > Tl+.19 The three features characterizing this channel are activation, inactivation and selective ion permeability. The activation lasts only a few milliseconds. Voltage-sensitive sodium channels are present in central and peripheral nervous systems, neuroendocrine tissue, skeletal muscle and heart cells.

6.1. Neurotoxins of VLG Na channels

The sodium channel has become the target of various neurotoxins from different species. These are classified by the sites on the channel at which they bind. Tetrodotoxin and saxitoxin are polar, heterocyclic guanidines that bind at site 1 of the channel, accessible from the outside of the membrane. These toxins block ion flow through the channel.20 Lipid-soluble toxins binding at site 2 include batrachotoxin, veratridine, aconitine and grayanotoxin. They shift the voltage dependence of activation in the direction of hyperpolarization and prevent channel inactivation, resulting in a persistent activation at normal potentials; see Figure 4.7 of Chapter 4. The binding of polypeptide toxins from scorpions and sea anemones at site 3 is voltage-dependent.

6.2. Types of VLG Na channels

Voltage-gated sodium channels consist of a principal subunit of approximately 260 kDa, associated in brain and muscle tissues with accessory subunits.
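Permeability ratios such as those quoted for frog node are conventionally extracted from reversal-potential shifts via the Goldman–Hodgkin–Katz voltage equation. The sketch below handles the simplest bi-ionic case, monovalent test cation outside versus Na+ inside at equal concentrations; the -25 mV shift used in the example is a hypothetical number, not a value from the frog-node data.

```python
import math

def permeability_ratio(e_rev_mv, temp_k=293.15):
    """Bi-ionic GHK relation for monovalent cations at equal
    concentrations: E_rev = (RT/F) ln(P_X / P_Na), so
    P_X / P_Na = exp(E_rev * F / RT)."""
    rt_f_mv = 8.314 * temp_k / 96485.0 * 1000.0  # ~25.3 mV near 20 C
    return math.exp(e_rev_mv / rt_f_mv)

# A reversal-potential shift of -25 mV (hypothetical) maps to
# a permeability ratio of roughly 0.37:
print(round(permeability_ratio(-25.0), 2))
```

A zero shift gives a ratio of exactly 1; negative shifts give the sub-unity ratios characteristic of the poorly permeant cations in the sequence above.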
The mammalian sodium channel isoforms have been divided into three groups: Type 1 channels, designated Nav1.x, share significant sequence similarity; type 2, Nav2.x, show significant differences while remaining approximately 50% or less identical with type 1. Type 2 channels are present at high levels in heart, skeletal muscle and uterus. A type 3 channel, Nav3.1, has been found in rat. All isoforms demonstrate fast inactivation and are modulated by auxiliary β1 and β2 subunits. Their ion conduction is eliminated by nanomolar concentrations of TTX. Sodium channels from adult skeletal muscle are also sensitive to μ-conotoxin.21

6.3. The charged membrane-spanning segments

The sodium channels of eel electroplaque were analyzed, and their primary structure deciphered, by M. Noda and collaborators in the laboratory of S. Numa.22 Their work showed that the S4 segment is a charged array. The region exhibits a pattern of cationic amino acids, arginine (R) and lysine (K) at every third position, separated by nonpolar
residues; many of these are the branched hydrophobics, valine (V), leucine (L) and isoleucine (I). There are also two or three negative charges in the S2 and S3 segments of these channels. The pattern of repeated positive charges, evocative of Armstrong’s early model (Section 2.1 of Chapter 14), was recognized as a likely component of the mechanism of voltage-sensitive gating. This recognition was reinforced by the finding that this pattern is strongly conserved among all voltage-sensitive channels; see Figure 13.8.23 This pattern even shows up in molecules not considered to be voltage-gated ion channels. As Figure 13.8 shows, these are a hyperpolarization-activated channel (Ih or HCN1; see Section 10 of this chapter) and a cyclic nucleotide gated channel (CNG; Section 11).
Figure 13.8. Alignment of amino-acid sequences of the S4 segments of different channels shows a homologous pattern of positively charged residues, R and K (shaded background). The S4 segments are from Shaker K, domains (in parentheses) of Nav1.2, Cav1.1, HCN1 (Ih) and CNG channels. The glutamic acid residues (E, in boxes) are negatively charged and the rest are electrically neutral. From Hille, 2001.
Replacement of one or more basic, positively charged residues of S4 in rat brain Na channels with neutral (glutamine) or acidic (negatively charged) residues reduced the steepness of activation and shifted the activation curve in the negative direction, opening the channel at more negative potentials. Similar results were obtained from K channels. While these results were roughly in agreement with expectations based on a model of channel opening due to the movement of gating charges, unexpected effects appeared when neutral residues were replaced. When the leucine in Na channel domain II, marked with an asterisk (L*) in Figure 13.8, was changed to a phenylalanine, the activation gating was shifted in the positive direction, implying that this leucine stabilizes the open state in the wild-type channel. Similarly in Shaker K channels, changing either of the two starred leucines to valine shifted activation to the right. Later
experiments, however, showed no change in integrated gating current from changes in neutral residues. In more elaborate experiments, the effect of inactivation on activation was eliminated. It was also realized that the limiting steepness of activation can be measured accurately only when the probability Po of channel opening is smaller than 10^-3, and that the gating charge zg is measurable by integrating gating currents. Experiments on Shaker K channels suggested that at least three charges per subunit move fully across the electric field during activation. The first four basic residues from the outside are the most sensitive. Reduction of gating charge also appeared with neutralization of negative charges. When one of the three membrane-embedded acidic residues, E293 in Shaker S2, was replaced with a neutral residue, zg decreased by more than four unit charges.

6.4. Proton access to channel residues

The concept of gating charges as permanently buried residues moving within the membrane received a setback with the introduction of the technique of cysteine substitution in 1995. In the disease paramyotonia congenita (see next section), the outermost arginine residue of the S4 segment, domain IV, of the muscle Na channel Nav1.4 is mutated to a cysteine or histidine. The mutant channels show a failure of rapid inactivation from the open state, causing repetitive firing in muscles. The behavior of the histidine mutant R1448H depends on the pH of the extracellular medium, indicating that the residue is accessible to protons. The free sulfhydryl group of cysteine makes it a readily modifiable residue. In the technique of cysteine substitution, a cysteine is engineered into a desired location. A water-soluble reagent is then added to the solution to diffuse to the site and react with the sulfhydryl group of the cysteine, modifying it.
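The limiting-slope criterion of Section 6.3 can be made concrete with a two-state Boltzmann sketch: the slope of ln Po versus V approaches zg·e/kT only in the limit of very small Po. The gating charge and midpoint below are illustrative values, not fits to Shaker data.

```python
import math

KT_E_MV = 25.3  # kT/e near 20 C, expressed in mV

def po_boltzmann(v_mv, zg, v_half_mv):
    """Two-state gating model: Po = 1 / (1 + exp(-zg (V - V1/2) e/kT))."""
    return 1.0 / (1.0 + math.exp(-zg * (v_mv - v_half_mv) / KT_E_MV))

def slope_zg_estimate(v_mv, zg, v_half_mv, dv=0.01):
    """Gating charge inferred from the local slope of ln Po versus V.
    The estimate equals the true zg only in the limit Po -> 0."""
    p1 = po_boltzmann(v_mv, zg, v_half_mv)
    p2 = po_boltzmann(v_mv + dv, zg, v_half_mv)
    return (math.log(p2) - math.log(p1)) / dv * KT_E_MV

zg_true, v_half = 3.0, -20.0  # three charges per subunit, hypothetical V1/2
# Deep in the hyperpolarized limit (Po ~ 1e-5) the slope recovers zg:
print(slope_zg_estimate(-120.0, zg_true, v_half))
# Near V1/2 (Po ~ 0.5) the same measurement underestimates zg:
print(slope_zg_estimate(-20.0, zg_true, v_half))
```

This is why the text stresses that the limiting steepness is meaningful only when Po is below about 10^-3: closer to the midpoint the apparent charge falls well short of the true zg.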
The reagents used are based on the methanethiosulfonate (MTS) moiety.24 The accessibility of the cysteine in R1448C was tested by the use of two charged MTS reagents. The reaction was found to be voltage dependent, with the relative peak gNa shifting from 0.0 at a membrane potential of -80 mV to 1.0 at +20 mV. An interpretation of these data is that the S4 segment emerges into the extracellular space during a depolarization.

A number of accessibility studies were made on cysteine-substituted K and Na channels with reagents applied internally or externally. Generally, the residues close to the extracellular surface become more accessible to externally applied reagents, and those near the inside less accessible to internally applied reagents, when the channel is conducting (open) than when it is nonconducting (closed). When a histidine residue is substituted for the arginines at positions R2 and R3 in Shaker-IR channels mutated to pass no K+ current, the transient component of the gating current becomes dependent on pH. Over a range of voltages, the channel carries an H+ current down the proton gradient (see Section 4.4 of Chapter 4).
Figure 13.9. Disease mutations in the human gene coding for the α subunit of the adult ion channel of skeletal muscle, SCN4A. • hyperkalemic periodic paralysis, ! paramyotonia congenita, # potassium-aggravated myotonia. From Ashcroft, 2000.
6.5. Mutations in sodium channels

Mutations in human genes encoding voltage-sensitive Na channels cause several genetic disorders. Mutations in the gene SCN4A, coding for the α subunit of the adult voltage-sensitive ion channel of skeletal muscle Nav1.4, have been found in hyperkalemic periodic paralysis (HyperPP), paramyotonia congenita and other hereditary disorders. Patients with these diseases have muscle weakness or hyperexcitability exacerbated by cold and increased plasma potassium concentration. The mutations underlying these diseases affect inactivation, producing a persistent inward Na+ current. They occur in the S4 segment of repeat IV, the inactivation loop between repeats III and IV or at the cytoplasmic end of segments S5 or S6; see Figure 13.9.25 In contrast to the mutations that affect inactivation directly, mutations in the S4 segments slow the rate of fast inactivation. Mutation analysis shows that the S4 segment of repeat IV plays a greater role in coupling activation to fast inactivation than the other S4 segments.

7. CALCIUM CHANNELS: LONG-LASTING CURRENTS

The most ubiquitous types of voltage-gated channels are those selective for calcium ions. After an action potential has traveled some distance to the terminal of an axon, it must signal its arrival, altering the physiology of the cell. That is the role of voltage-gated calcium channels, which provide a rapid means of entry for calcium ions into the cell. By linking the transient changes in membrane potential to the inflow of calcium ions, they initiate a variety of cellular responses, including secretion and muscular contraction. Voltage-gated Ca channels are highly permeable to Sr2+ and Ba2+ as well
as Ca2+. The permeability ratios for L-type calcium channels are in the sequence Ca2+ > Sr2+ > Ba2+ >> Li+ > Na+ > K+ > Cs+.26 Because these channels do not inactivate rapidly, they maintain inward currents for considerable time periods. This is of great importance in the regulation of endocrine organs and glands, in which a prolonged depolarization is necessary to drive secretions. The longer depolarizations also sustain the contraction of smooth and cardiac muscle.

Because of the sensitive dependence of many biological functions on Ca2+ levels, cytoplasmic Ca2+ concentrations must be regulated and modulated. This modulation is carried out by voltage-sensitive calcium channels, through which Ca2+ ions can quickly enter the cell. In neurotransmitter and hormone release from presynaptic terminals and secretory cells, the released substance may modulate the calcium channels that control its own secretion, by autocrine regulation. Alternatively, it may interfere with exocytosis in other neurons by inhibiting Ca2+ influx at their prejunctional endings, in presynaptic inhibition.27

7.1. Function of VLG Ca channels

Calcium channels regulate gene expression and mediate cell death. Several types of voltage-sensitive Ca channels govern the exocytosis of neurotransmitter vesicles in presynaptic neuronal terminals. The fusion of the vesicles with the presynaptic membrane requires triggering by Ca2+.
Figure 13.10. The predicted topological structure of the α1 subunits of calcium channels of types N and P/Q. The synaptic protein interaction sites on intracellular loop II–III are in the regions 718-963 for type N and 722-1036 for type P/Q, as marked. From Catterall, 1999.
The release of neurotransmitters from presynaptic nerve terminals is initiated by the influx of Ca2+ through clusters of N-type and P/Q-type calcium channels. The high concentration of Ca2+ triggers the exocytosis of neurotransmitter vesicles. Since this concentration drops off rapidly with distance, the channels are believed to be
structurally connected to the vesicles by specialized proteins. The synaptic protein interaction occurs at specific sites on the intracellular loop II–III, shown in Figure 13.10.28
7.2. Structure of VLG Ca channels

A variety of voltage-sensitive calcium channels has been described, located in both excitable and nonexcitable cells. A single cell may have more than one type of calcium channel. These may be located in different parts of the cell, thereby dividing it into functional compartments. By influencing local cytoplasmic levels of Ca2+, a ubiquitous second messenger, calcium channels are unique in the diversity of cellular functions they regulate. In the embryo, VLG Ca channels control proliferation, differentiation and cell–cell interactions. In the developing and mature nervous system, they regulate coupling of electrical excitation to gene expression and modulate many intracellular signaling pathways.

Voltage-sensitive calcium channels are divided into high-voltage activated (types L, N, P and Q), intermediate (type R) and low-voltage activated (type T). Calcium channels consist of a subunit, termed α1, which forms the ion pathway (or pore), plus several auxiliary or regulatory subunits, termed α2δ, β and γ. They are therefore hetero-oligomeric proteins. L-type Ca channels from skeletal muscle contain the α1 subunit (175 kDa), a disulfide-linked subunit dimer α2δ (143 and 27 kDa), an intracellular β subunit (50 kDa) and a transmembrane γ subunit (33 kDa).29
Figure 13.11. Subunit composition of voltage-dependent calcium channels from neurons. The ion-conducting α1 domain interacts with the cytoplasmic β subunit. A disulfide bond is shown on the highly glycosylated (indicated by 5 symbols) α2δ subunits. The G-protein βγ complex binds to the I-II loop of α1, modulating channel activity. From Burgess and Noebels, 1999.
The subunit composition of a neuronal voltage-sensitive Ca2+ channel is shown in Figure 13.11. The ion-conducting α1 subunit, consisting of four homologous domains (I-IV), interacts with the cytoplasmic β subunit. Disulfide bonds link the highly glycosylated α2 and δ subunits. The G-protein βγ complex binds to the I-II loop of α1, modulating channel activity. A protein interaction site on the II-III linker of α1, labeled S, couples the channel excitation to transmitter release at axon terminals.30

7.3. Types of VLG Ca channels

Biophysical and pharmacological criteria are used to distinguish various types of VLG Ca channels, labeled L, T, N, P, Q and R:

L-type Ca channels are found in virtually all excitable and many non-excitable cells. They are present in brain and in skeletal, cardiac and most smooth muscle. In skeletal and cardiac muscle they have a voltage-sensing function in the transverse tubules, where the influx of Ca2+ through L-type Ca channels triggers a large Ca2+ release from the sarcoplasmic reticulum to initiate contraction. L-type calcium channels mediate currents sensitive to 1,4-dihydropyridines (e.g. nifedipine), phenylalkylamines (e.g. verapamil) and benzothiazepines (e.g. diltiazem).

N-type Ca channels are involved in neurotransmitter release in neuron terminals. They may be blocked by the cone snail neuropeptide ω-conotoxin GVIA. They, along with Ca channels of types L, P and Q, are characterized by their high-voltage activated (HVA) currents, which have a high threshold for activation. Their inactivation rates vary from 100 ms in sympathetic neurons to 1.5 s in supraoptic neurons. Noradrenaline inhibits the action of N-type channels at presynaptic receptors via a G protein-second messenger system.

P-type Ca channels are particularly prevalent in cerebellar Purkinje cells. They are sensitive to nanomolar concentrations of ω-agatoxin IVA. They inactivate slowly, with inactivation times of about 1 second.
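The practical consequence of these different inactivation time constants can be visualized with a mono-exponential decay. The amplitude and the two time constants below are illustrative round numbers taken only loosely from the ranges quoted above, not fitted values.

```python
import math

def inactivating_current(t_s, i0_pa, tau_s):
    """Mono-exponential inactivation: I(t) = I0 * exp(-t / tau)."""
    return i0_pa * math.exp(-t_s / tau_s)

# Hypothetical time constants in the ranges quoted in the text:
tau_fast = 0.1   # ~100 ms, like an N-type channel in sympathetic neurons
tau_slow = 1.0   # ~1 s, like a slowly inactivating P-type channel

t = 0.3  # evaluate 300 ms after the peak of a 100-pA current
print(round(inactivating_current(t, 100.0, tau_fast), 1))  # ≈ 5.0 pA left
print(round(inactivating_current(t, 100.0, tau_slow), 1))  # ≈ 74.1 pA left
```

After the same 300 ms, the fast-inactivating current has nearly vanished while the slow one remains largely sustained, which is what makes the slowly inactivating types suited to driving prolonged secretion and contraction.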
Q-type Ca channels have also been observed; it has been suggested that these are identical to P-type channels, and the designation P/Q-type is sometimes used.

T-type Ca channels, also termed low-voltage activated (LVA) channels, are relatively sensitive to depolarizations. They are called transient because of their rapid voltage-dependent inactivation, but their tail currents decay (deactivate) 10-100 times more slowly than those of other VLG Ca channels. T-type calcium channels are thought to be responsible for oscillatory neuronal activity in the central nervous system, with possible implications for sleep/wakefulness regulation and motor coordination. They are also involved in pacemaker activity in the heart.

7.4. Calcium-channel diseases

Diseases involving Ca channels have been identified in rats, mice and humans.31 In certain mutations affecting excitation–contraction coupling, an action potential traverses the muscle fiber but does not result in muscle contraction. The presence of antibodies binding to presynaptic Ca channels in an autoimmune disorder called Lambert-Eaton myasthenic syndrome results in muscle weakness.
A disease in which the arginine residues that provide the positive charges to the S4 helices are replaced with uncharged polar residues is hypokalemic periodic paralysis (HypoPP, not to be confused with the Na-channel disease HyperPP). This autosomal dominant genetic disease produces episodic attacks of muscle weakness. Other clinical phenotypes in humans include familial hemiplegic migraine, malignant hyperthermia, myopathy, retinal signaling defects linked to night blindness, episodic ataxia, dyskinesia and epilepsy.32

8. H+-GATED CATION CHANNELS: THE ACID TEST

H+-gated cation channels are members of a family of channels that are sensitive to pH in sensory neurons, neurons of the central nervous system and oligodendrocytes, a type of glial cell. In sensory neurons, they are proposed to mediate the perception of pain that accompanies tissue acidosis. Cloning shows that they are members of a superfamily, NaC/DEG, that includes epithelial Na+ channels, mechanosensitive channels in nematodes called degenerins and a peptide-activated Na+ channel. The first proton-gated cation channel cloned, designated ASIC1 (acid-sensing ion channel 1), has two transmembrane domains with a large extracellular component. It has a conductance of ~14 pS and is permeable to Na+, Li+ and, to a lesser extent, Ca2+. ASIC1 desensitizes within a few seconds under prolonged application of extracellular acid. Figure 13.12 shows the response of ASIC1 to increases in hydrogen-ion concentration. A mammalian homologue of degenerin, MDEG1, displays an inward current that rapidly decreases, then increases, finally decreasing with a sustained component.33
Figure 13.12. Acid sensing in ASIC1, a proton-gated channel expressed in Xenopus oocytes. Left: The Na+ current response to the indicated pH change, showing desensitization. Right: Amplitude of the inward current as a function of external pH. From Waldmann et al., 1999.
Lowering of extracellular pH also affects voltage-sensitive potassium channels, suppressing currents in Kv1.2 and Kv1.4. The amino acids responsible for H+ modulation appeared to lie in the S5-S6 linker region.34
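Proton dose–response curves like the one in Figure 13.12 (right) are commonly summarized by a Hill equation in hydrogen-ion concentration. The sketch below assumes a half-activation point of pH 6.2 and a Hill coefficient of 1.5; both are hypothetical values chosen for illustration, not parameters of the ASIC1 data.

```python
def proton_response(pH, pH50=6.2, n=1.5):
    """Fraction of maximal inward current versus external pH.
    Activation by protons: I/Imax = 1 / (1 + (K / [H+])**n),
    with [H+] = 10**-pH and K = 10**-pH50 (illustrative parameters)."""
    h = 10.0 ** (-pH)
    k = 10.0 ** (-pH50)
    return 1.0 / (1.0 + (k / h) ** n)

print(round(proton_response(7.4), 3))  # near rest: little current, ≈ 0.016
print(round(proton_response(6.2), 3))  # half-maximal at pH50: 0.5
print(round(proton_response(5.0), 3))  # strongly acidic: ≈ 0.984
```

Because pH is logarithmic in [H+], even a modest drop of one pH unit below pH50 takes the sketched channel from half-maximal to nearly full activation.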
9. CHLORIDE CHANNELS: ACCENTUATE THE NEGATIVE

The ubiquitous voltage-gated chloride channels, designated ClC, are permeated by chloride, the most abundant aqueous ion on earth. Although chloride conduction entered the literature in the description by Hodgkin and Huxley under the unflattering designation of “leak,” the functional roles of chloride channels are now being recognized: Chloride channels regulate synaptic transmission and cellular excitability, muscle tone and blood pressure. In addition to the voltage-gated chloride channels, there are two other families of chloride channels with different structures: the cystic fibrosis transmembrane conductance regulator (CFTR) and related channels, and the ligand-gated Cl channels activated by the neurotransmitters glycine and γ-aminobutyric acid (GABA).

9.1. Structure and function of chloride channels

The cloning in 1990 of a voltage-gated chloride channel from an electric fish led to the discovery of a widespread molecular family of ancient lineage. Present in virtually all organisms from bacteria to protists, fungi, animals and plants, they share the features of anion selectivity and rectification.35
Figure 13.13. Bursts from single electroplax chloride channels in a planar bilayer membrane. The recordings were made at a holding potential of -90 mV. The states labeled on the record and inset are inactivated (I), one ‘protochannel’ open (M), two ‘protochannels’ open (U) and closed (D). The conductance of a single ‘protochannel’ is 10 pS. From Conley and Brammar, 1999, after Miller, 1984.
The ClC family differs drastically, in both structure and function, from the other families of ion channels described in this chapter. They are formed from a single
subunit rather than the “barrel-stave” plan of other channels; the voltage dependence of their gating arises from movement of the permeant ions through the electric field, rather than the movement of charge on the protein, as postulated for other channels. The voltage-gated chloride channel cloned from the electric organ of the ray Torpedo marmorata, ClC-0, is strongly selective for anions, with a permeability sequence Cl- > Br- > I-. It displays a bursting process with two conducting states, U and M, a nonconducting state D and an inactivated state I; see Figure 13.13.36

Members of ClC are characterized by well defined tissue specificities in mammals: ClC-1 is found in skeletal muscle, kidney, liver and heart; ClC-2 is ubiquitous, being activated by cell swelling; ClC-3 is found primarily in brain, but also in lung, kidney and adrenal gland; ClC-4 exists in liver, brain, heart, spleen and kidney; ClC-5 is expressed mainly in kidney, with lesser amounts in brain, liver, lung and testis. ClC-Ka and ClC-Kb are largely specific to the kidney, where they function in Cl- transport across epithelial membranes.

9.2. Chloride-channel diseases

Mutant chloride channels are implicated in several diseases of muscle and kidney.37 Mutations in ClC-1 can cause dominant myotonia congenita (Thomsen’s disease) or recessive generalized myotonia (Becker’s disease). Mutations in ClC-5 channels can cause renal tubular disorders such as Dent’s disease (kidney stones).

10. HYPERPOLARIZATION-ACTIVATED CHANNELS: IT’S TIME

The autonomous beating of the heart and the respiratory rhythm are examples of biological systems that exhibit periodic activity. Other examples include the rhythmic firing of neuronal networks and the steady cycle of circadian rhythms, such as our sleep–wakefulness cycle mediated by the reticular system.
Synchronization of the activity of neuronal populations by the endogenous 40-Hz oscillation may bind together the components of the perceptual representations of our external world. The periods of these cycles have a vast range, from milliseconds to days. While slow, homeostatic cycles, such as the primary circadian pacemaker, depend on a rhythmic interplay of transcription and translation, faster cycles depend on the biophysical properties of ion channels in the excitable membranes of pacemaking nerve and muscle cells.

A hyperpolarization-activated cation channel termed Ih is a primary source of rhythmic firing in heart and brain. Ih is a member of the voltage-gated potassium channel superfamily. The genes encoding these channels belong to the mammalian gene family HCN (Hyperpolarization-activated, Cyclic-Nucleotide-sensitive, Cation Non-selective).38 The Ih channels, first described in sinoatrial node cells of the heart, are found in cardiac muscle and Purkinje fibers, and in peripheral and central neurons. Unlike other members of the voltage-gated channel superfamily, Ih channels activate in response to a hyperpolarization rather than a depolarization. As a result, they carry an inward current at hyperpolarized potentials. This depolarizing current drives the membrane potential back toward the threshold of other voltage-sensitive channels,
which thus maintains rhythmic firing. Ih channels display only a weak selectivity for K+ over Na+, with a reversal potential of -35 mV. Cs+ and Rb+ ions block them. The activation of Ih is shifted by second messengers such as cyclic AMP, thus making the cell sensitive to transmitters and hormones. The binding of cAMP shifts the activation curve of Ih to more depolarized potentials, altering the inward current and consequently the depolarization rate and the oscillation frequency. One example of this effect is the speeding of the heartbeat by β-adrenergic agonists such as adrenaline (epinephrine).

11. CYCLIC NUCLEOTIDE GATED CHANNELS

Although ligand-gated channels are beyond the scope of this book, let us briefly examine cyclic nucleotide gated (CNG) channels. These channels, from photoreceptors and olfactory sensory neurons, consist of two subunits, α and β. CNG channels are of particular interest here because the amino acid sequence of the α subunit displays a significant structural homology with those of voltage-gated ion channels.39
Figure 13.14. Proposed topology of the CNG channel. The broken line represents the segment, from lys 2 to val 155, deleted in the truncated mutant channel. Note the assumption of an inverted P (pore) region between S5 and S6. From Bucossi et al., 1996.
12. MITOCHONDRIAL CHANNELS

Ion channels are found on the membranes of the organelles within a cell as well as the plasma membrane of the cell. Mitochondria are the organelles that transduce the chemical energy of oxygen and food molecules into the phosphorylation of the
nucleotide adenosine diphosphate, ADP, into the metabolically active energy carrier adenosine triphosphate, ATP. Mitochondria are enclosed in two membranes, both of which contain ion channels. While the outer membrane does not bar the passage of small molecules, the inner membrane is the site of oxidative phosphorylation, the process of energy coupling. The outer mitochondrial membrane possesses two types of channels, the voltage-dependent anion-selective channel (VDAC) and the peptide-selective channel (PSC). The inner membrane, which folds into the mitochondrial matrix, has at least six identified channel types.
Figure 13.15. The activity and voltage dependence of the voltage-dependent anion-selective channel from Neurospora crassa mitochondrial membrane. (A) The channel was inserted into a bilayer with symmetrical solutions of KCl, CaCl2 and buffer. The current response to a triangular voltage wave to ±60 mV shows the channel closing with higher potentials of both polarities. (B) Probability that the channel is open, calculated from the time spent in the fully open state, as a function of voltage. The current was obtained from a patch excised from a liposome with reconstituted VDAC, in calcium-free solutions. From K. W. Kinally et al., 1996.
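The bell-shaped voltage dependence in panel (B), with the channel open near 0 mV and closing at higher potentials of both polarities, can be sketched with a symmetric Boltzmann function. The half-closing voltage and slope below are illustrative assumptions, not values fitted to the Neurospora data.

```python
import math

def vdac_p_open(v_mv, v_half=30.0, slope=5.0):
    # Toy open probability: falls off symmetrically as |V| exceeds
    # v_half, mimicking closure at both polarities (illustrative).
    return 1.0 / (1.0 + math.exp((abs(v_mv) - v_half) / slope))

probs = {v: vdac_p_open(v) for v in (-60, -30, 0, 30, 60)}
assert probs[0] > 0.9                         # open near 0 mV
assert probs[60] < 0.1 and probs[-60] < 0.1   # closed at both extremes
```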
VDAC, with a molecular weight of about 30 kDa, is believed to have a β-barrel structure; a pore opening of 3 nm has been reported. Because of its presumed similarity to bacterial porin, VDAC is also called mitochondrial porin. VDAC has an anion-selective open state of 650 pS and a predominant half-open state of 300 pS that is slightly cation selective. ADP and ATP are freely permeant only through the fully open state. Figure 13.15 illustrates the activity and voltage dependence of VDAC.40

13. FUNGAL ION CHANNELS—ALAMETHICIN

The fungus Trichoderma viride produces an ion channel, alamethicin. Alamethicin has a weak antibacterial action, and a number of natural and synthetic analogues are used as antibiotics. Alamethicin belongs to a family of peptides, the peptaibols, which have a high content of the amino acid α-aminoisobutyric acid (Aib) and an amino alcohol at the C terminus. The molecule is 20 amino acids long, eight of which are Aib. High-resolution crystallography shows that alamethicin contains a single α helix with a kink at a proline at position 14.
Figure 13.16. A single alamethicin molecule (A) is an α helix with a kink at a proline at position 14. (B) A proposed channel structure, viewed from the N terminal, consists of a pore formed by six of the monomers. From Ashcroft, 2000, after Sansom, 1993.
When alamethicin is incorporated into planar lipid bilayers, single-channel currents of discretely varying amplitudes are observed; see Figures 13.16 and 13.17. This behavior is different from that of voltage-sensitive ion channels from animals, and various models to explain it have been proposed.41
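One family of such models treats the conducting structure as an aggregate of a variable number of monomers, so that the current steps among discrete levels as monomers join or leave. The following toy simulation, with hypothetical conductance levels and transition probabilities, reproduces only the qualitative feature of Figure 13.17: openings of discretely varying amplitude.

```python
import random

# Hypothetical conductances (pS) for 0..6 monomers in the aggregate
LEVELS_PS = [0, 20, 90, 300, 700, 1300, 2000]

def simulate(steps=10_000, p_up=0.5, seed=1):
    # Random walk over aggregation states: at each step a monomer
    # joins (up) or leaves (down), within the allowed range.
    rng = random.Random(seed)
    level, trace = 0, []
    for _ in range(steps):
        if rng.random() < p_up:
            level = min(level + 1, len(LEVELS_PS) - 1)
        else:
            level = max(level - 1, 0)
        trace.append(LEVELS_PS[level])
    return trace

trace = simulate()
assert set(trace) <= set(LEVELS_PS)  # only the discrete levels occur
```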
Figure 13.17. Single-channel currents from a lipid bilayer doped with alamethicin exhibit channel openings of different amplitudes. From Ashcroft, 2000, after Sansom, 1991.
14. THE STRUCTURE OF A BACTERIAL POTASSIUM CHANNEL

In 1998 a group in the laboratory of Roderick MacKinnon crystallized a simple channel from a bacterium, Streptomyces lividans. The bacterial ion channel studied by the MacKinnon group contains a central aqueous pore. The polypeptide chain of KcsA, a non-voltage-gated potassium channel, comprises 158 residues. These are folded into two transmembrane helices, a pore helix and a cytoplasmic tail of 33 residues that was removed before crystallization. Four subunits arranged around an axis of fourfold symmetry form the K+ channel molecule. The two membrane-spanning domains of KcsA are homologous to the S5 and S6 regions of voltage-sensitive K+ channels. The subunits pack together to form the
Figure 13.18. Two opposing subunits of the KcsA channel. From an adaptation by Branden and Tooze, 1999, of a figure by Doyle et al., 1998.
transmembrane ion pore. The C-terminal helix of each subunit faces the interior pore, while the N-terminal helix faces the lipid bilayer outside. The four inner helices of the molecule are tilted and kinked, so that the subunits open like petals of a flower towards the extracellular space; see Figure 13.18.42 Doyle and collaborators found a region 12 Å long, the selectivity filter, through which ions moved without a hydration shell. Main-chain atoms line the walls of the passage forming the selectivity filter, with carbonyl oxygen atoms pointing into the pore to form binding sites for the K+ ions. The conducting pathway of the selectivity filter is so narrow that a potassium ion has to shed some of its waters of hydration to enter the filter. The MacKinnon group reports that 80% of the membrane voltage was across this region. Ion binding sites in the pore were shown to be formed by the backbone carbonyls of the signature-sequence amino acids (see Section 5.4 of this chapter), while the S6-like helices cradle the selectivity filter and line the internal vestibule of the channel.43 The KcsA channel, lacking the domains homologous to the charged S4 and S2 regions in voltage-sensitive ion channels, appears to be limited as a model for gating in these channels. Further studies of crystallized ion channels will be discussed in Chapter 21.
NOTES AND REFERENCES
1. Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts and James D. Watson, Molecular Biology of the Cell, Second Edition, Garland, New York, 1989, 1180.
2. Clay M. Armstrong, in Pumps, Transporters, and Ion Channels, edited by Francisco V. Sepulveda and Francisco Bezanilla, Kluwer Academic/Plenum, New York, 2005, 1-10.
3. Reprinted from Frances M. Ashcroft, Ion Channels and Disease, Academic, San Diego, 58. Copyright 2000, with permission from Elsevier.
4. Edward C. Conley and William J. Brammar, The Ion Channel FactsBook: Voltage-Gated Channels, Academic, San Diego, 1999.
5. Bertil Hille, Ion Channels of Excitable Membranes, Third Edition, Sinauer, 2001, 635-646.
6. C. U. M. Smith, Elements of Molecular Neurobiology, Second Edition, Wiley, 1996, 459f.
7. Brenda Patoine, BrainWork Jan/Feb 2001, 7; L. J. Ptacek, Current Opinion in Neurology 11:217-226, 1998.
8. Diane M. Papazian, William R. Silverman, Meng-chin A. Lin, Seema K. Tiwari-Woodruff and Chih-Yung Tang, in Ion Channels—from Atomic Resolution Physiology to Functional Genomics, Wiley, Chichester, 2002, 178-192.
9. William A. Coetzee, Yimy Amarillo, Joanna Chiu, Alan Chow, David Lau, Tom McCormack, Herman Moreno, Marcela S. Nadal, Ander Ozaita, David Pountney, Michael Saganich, Eleazar Vega-Saenz De Miera and Bernardo Rudy, in Molecular and Functional Diversity of Ion Channels and Receptors, edited by Bernardo Rudy and Peter Seeburg, New York Academy of Sciences, New York, 1999, 233-285. By permission of Wiley-Blackwell Publishing.
10. Zhao-Wen Wang, Maya T. Kunkel, Aguan Wei, Alice Butler and Lawrence Salkoff, in Rudy and Seeburg, 286-303. By permission of Wiley-Blackwell Publishing.
11. Paul C. Zei, Eva M. Ogielska, Toshinori Hoshi and Richard W. Aldrich, in Rudy and Seeburg, 458-464. By permission of Wiley-Blackwell Publishing.
12. Bernardo Rudy, Alan Chow, David Lau, Yimy Amarillo, Joanna Chiu, Ander Ozaita, Michael Saganich, Herman Moreno, Marcela S. Nadal, Ricardo Hernandez-Pineda, Arturo Hernandez-Cruz, Alev Erisir, Christopher Leonard and Eleazar Vega-Saenz De Miera, in Rudy and Seeburg, 304-343. By permission of Wiley-Blackwell Publishing.
13. Conley and Brammar, 374-616.
14. V. N. Kazachenko, E. E. Fesenko, K. V. Kochetkov and N. K. Chemeris, Ferroel. 220:317-328, 1999.
15. Reprinted from Ashcroft, 136. Copyright 2000, with permission from Elsevier.
16. C. Dart, M. L. Leyland, P. J. Spencer, P. R. Stanfield and M. J. Sutcliffe, in Rudy and Seeburg, 414-417. By permission of Wiley-Blackwell Publishing.
17. Michael C. Sanguinetti, in Rudy and Seeburg, 406-413.
18. Ashcroft, 97-123.
19. Hille, 457.
20. Robert L. Barchi, Ann. Rev. Neurosci. 11:455-495, 1988.
21. Alan L. Goldin, in Rudy and Seeburg, 38-50.
22. M. Noda, S. Shimizu, T. Tanabe, T. Takai, T. Kayano, T. Ikeda, H. Takahashi, H. Nakayama, Y. Kanaoka, N. Minamino, K. Kangawa, H. Matsuo, M. A. Raftery, T. Hirose, S. Inayama, H. Hayashida, T. Miyata and S. Numa, Nature 312:121-127, 1984.
23. Hille, 603-612.
24. Hille, 551.
25. Reprinted from Ashcroft, 81. Copyright 2000, with permission from Elsevier.
26. Hille, 459.
27. E. Carbone, V. Magnelli, V. Carabelli, D. Platano and G. Aicardi, in Neurobiology: Ionic Channels, Neurons, and the Brain, edited by Vincent Torre and Franco Conti, Plenum, New York, 1996, 23-40.
28. William A. Catterall, in Rudy and Seeburg, 144-159. By permission of Wiley-Blackwell Publishing.
29. Herman Moreno Davila, in Rudy and Seeburg, 102-117. By permission of Wiley-Blackwell Publishing.
30. Daniel L. Burgess and Jeffrey L. Noebels, in Rudy and Seeburg, 199-212.
31. Ashcroft, 161-183.
32. Burgess and Noebels, 200; Moreno Davila, 111f.
33. Rainer Waldmann, Guy Champigny, Eric Lingueglia, Jan R. de Weille, C. Heurteaux and Michel Lazdunski, in Rudy and Seeburg, 67-76. By permission of Wiley-Blackwell Publishing.
34. Conley and Brammar, 501.
35. Merritt Maduke, Christopher Miller and Joseph A. Mindell, Annu. Rev. Biophys. Biomol. Struct. 29:411-438, 2000.
36. Reprinted from Conley and Brammar, 1999, ref. 4, 154-195, with permission from Elsevier; C. Miller, Proc. Natl. Acad. Sci. USA 81:2772-2775, 1984.
37. Rudy, in Rudy and Seeburg, 3.
38. Bina Santoro and Gareth R. Tibbs, in Rudy and Seeburg, 741-764.
39. L. Y. Jan and Y. N. Jan, Nature 345:672, 1990; L. Heginbotham, T. Abramson and R. MacKinnon, Science 258:1152-1155, 1992; H. R. Guy, S. R. Durell, J. Warmke, R. Drysdale and B. Ganetzki, Science 258:730, 1991; Giovanna Bucossi, Mario Nizzari and Vincent Torre, in Neurobiology: Ionic Channels, Neurons, and the Brain, edited by Vincent Torre and Franco Conti, Plenum, New York, 1996, 1-11. With kind permission of Springer Science and Business Media.
40. Kathleen W. Kinally, Timothy A. Lohret, Maria Luisa Campo and Carmen A. Mannella, J. Bioenerg. Biomembr. 28(2):115-123, 1996. With kind permission of Springer Science and Business Media.
41. Reprinted from Ashcroft, 400. Copyright 2000, with permission from Elsevier; M. S. P. Sansom, Prog. Biophys. Mol. Biol. 55:139-235, 1991; ___, Q. Rev. Biophys. 26(4):563-567, 1993.
42. D. A. Doyle, J. M. Cabral, R. A. Pfuetzner, A. Kuo, J. M. Gulbis, S. L. Cohen, B. T. Chait and R. MacKinnon, Science 280:69-76, 1998.
43. Zei et al., 458.
CHAPTER 14
MICROSCOPIC MODELS OF CHANNEL FUNCTION
The phenomenological models of Chapter 9 are not complete explanations of the voltage-dependent gating, selectivity and permeation processes they are intended to explain. The Hodgkin–Huxley and related models fail to explain microscopic phenomena, such as the depressed Cole–Cole semicircles and the observed fluctuations. In this chapter we examine proposed microscopic models that seek to build a theoretical scaffolding to answer the questions posed in Chapter 1. Noise and admittance studies show that the Na channel is a nonlinear, nonequilibrium system. Neurotoxin studies show that the transition to single-channel sodium conduction is suppressed by a single TTX molecule. We will analyze the conventional view that the channel is a water-filled structural pore before considering alternative models more or less grounded on physical and chemical principles. Since speculation appears to be necessary for a leap to a new paradigm, we will review a number of proposed models, most but not all microscopic.

1. GATED STRUCTURAL PORE MODELS

We recall the five questions posed in Chapter 1:
• How do the ions pass so rapidly through the voltage-sensitive ion channel?
• How does the channel manage to select specific types of ions to carry?
• What transformations does the channel conformation undergo to convert from nonconducting to conducting and back?
• How are the opening and closing transformations coupled to the electric field?
• How does the structure of these channels determine their function?
A number of ways to answer these questions have been proposed. One, the gated-pore hypothesis, appears simple, perhaps even obvious. It provides mechanistic answers to at least the first three of the above questions.

1.1. Structural gated pores

In the gated-pore model, a voltage-sensitive channel is perforated by a structural pore filled with an aqueous solution. The ion flow through a pore is controlled by a
mechanical gate that opens and closes. Selectivity is achieved by a sieve-like narrowing of the pore, the selectivity filter, which allows one species of ion to flow through the channel while blocking others. The electric field actuates the gates by an unspecified mechanism. A sketch of this model, with additional features of a vestibule and binding sites for neurotoxins, is shown in Figure 14.1.1 More recent versions of this model contain additional details, such as the binding sites of specific toxins.
Figure 14.1. Gated pore model, as drawn in 1975. Analyses of sodium channels suggested that the channel contained distinct functional regions: gate, selectivity filter, vestibules and voltage sensor. From Hille, 2001.
Arguments in favor of the gated-pore model fall into two types:

1. Calculations of the expected behavior of ions in pores of molecular dimensions agree with measurements on a model channel, such as the antibiotic molecule gramicidin.

2. The alternatives generally considered, systems based on enzymes and transporters, are much slower than pores, making them incapable of explaining the rapid ion currents displayed by voltage-sensitive ion channels.
Regarding the first argument, calculations on a circular cylindrical pore with axis perpendicular to the membrane plane were carried out by Bertil Hille. The conductance γ of a circular cylindrical pore of radius a and length l, filled with a medium of resistivity ρ, is

γ = πa²/(ρl)     (1.1)
In this equation, the resistance of the selectivity filter and the access resistance from the bulk media to the two sides of the pore are neglected. Hille assumes a pore with radius a = 0.3 nm and length l = 0.5 nm, with aqueous vestibules in front and back. With an assumed resistivity of 80 ohm cm, the uncorrected pore conductance is 714 pS. A correction for the access resistance due to the aqueous boundary layers reduces the single channel conductance to 366 pS. An additional correction due to the limit on flux set by diffusion in the absence of an electric field reduces the ion flux through the channel according to the equation
φ = πa²Dc/(l + πa/2)     (1.2)

where the second term in the denominator accounts for the effective lengthening of the pore due to diffusional access resistance. With a diffusion coefficient D of 1.4 × 10⁻⁵ cm² s⁻¹ and an ion concentration c of 150 mM = 9.0 × 10¹⁹ cm⁻³, the unidirectional flux equals 3.7 × 10⁷ s⁻¹, equivalent to a current of 5.9 pA.2 The process of dehydrating and rehydrating the ion, if necessary, will require time, further reducing the ion flux and perhaps acting as the rate-determining step. By comparison, measured channel conductances range from about 4 to 240 pS in K channels. In gramicidin A channels, a maximum Na+ conductance of about 15 pS has been measured. Given the ad hoc assumptions, the measured conductances are roughly consistent with the predictions of the aqueous pore model. Let us turn to argument 2. Of all possible alternative models, the conventional approach presents only two: the channel could be a pore, or it could be a carrier molecule. Since carriers are too slow, the argument goes, one is left only with the pore model. This argument, based on the elimination of one out of only two selected alternatives, is not convincing. Other alternative models will be proposed in this and later chapters. The pore model has an extensive literature, including discussions of whether ions travel in a single file, how much water they drag with them, and the coupling of fluxes within a pore.3
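Hille's estimate is easy to reproduce. The sketch below evaluates Eq. (1.1), adds the access resistance of the two aqueous boundary layers (ρ/4a per side), and evaluates the diffusion-limited flux of Eq. (1.2); small differences from the quoted 714 pS and 366 pS reflect rounding in the original calculation.

```python
import math

a = 0.3e-7    # pore radius, cm
l = 0.5e-7    # pore length, cm
rho = 80.0    # resistivity of the pore medium, ohm cm

g_pore = math.pi * a**2 / (rho * l)                       # Eq. (1.1)
r_total = rho * l / (math.pi * a**2) + 2 * rho / (4 * a)  # + access
g_corrected = 1.0 / r_total

D = 1.4e-5    # diffusion coefficient, cm^2 s^-1
c = 9.0e19    # 150 mM, in ions per cm^3
flux = math.pi * a**2 * D * c / (l + math.pi * a / 2)     # Eq. (1.2)
current = flux * 1.602e-19                                # amperes

print(round(g_pore * 1e12), "pS uncorrected")             # ~707 pS
print(round(g_corrected * 1e12), "pS with access resistance")  # ~364 pS
print(f"flux {flux:.1e} /s, current {current * 1e12:.1f} pA")  # ~5.9 pA
```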
1.2. Selectivity filter and selectivity sequences

To account for the fact that the channel permits only one kind (or a few kinds) of ion species to flow through it, the pore is postulated to contain a narrow region, called the selectivity filter, through which only ions of the appropriate size can pass. The way in which the channel's high conductance is to be maintained through the narrow selectivity filter (assumed to be 0.3 nm in diameter) is unexplained. In the pore hypothesis, the ions are said to be in contact with the walls of the pore. Thus an ion that is too large would simply not fit into the pore. This picture is in conflict with our knowledge of matter on the atomic scale. One object is never right up against another object. As Feynman4 puts it, “they are slightly separated, and there are electrical forces on a tiny scale.” It is misleading to model an ion as a hard sphere, no less so when it is simulated by computer graphics. To visualize an ion as a hard sphere in contact with the “wall” of a pore contradicts the fact that each ion and each atom of the channel consists of a nucleus surrounded by electrons. To understand the interaction between the ion and the channel, we have to look at the quantum and electrical forces between them. Ions are not hard balls of a certain diameter. While it is true that ion diameters have been measured, the measured diameters depend on the way in which the measurement is carried out. The ion diameter measured in a crystal of sodium chloride is different from that in solution. It would be interesting to know what the Na+ diameter is at different locations along a sodium channel, but these measurements have not been made. While careful studies have been made of the permeabilities of a given channel (in the frog node) to different ions, we cannot say that the only difference between these ions is their diameter.
In the early versions of the aqueous-pore model, selectivity was supposed to depend on the size of the hydrated ion. However, the actual selectivity sequences for many ion channels are inconsistent with the sequence of increasing size of the ion with its hydration shell.5 This is true for both the ionic dependence of the resting potential, reflecting principally the K+ channel, and the peak of the action potential, reflecting the Na+ channel.6 Meves and Chandler7 found the sodium system of active nerve membrane to discriminate between alkali metal ions in the permeability sequence Li > Na > K > Rb > Cs. Because this sequence is the same as the order of the increasing nonhydrated size of the ions, it appears reasonable to conclude that Na+ and other cations pass through the Na+ channel without a hydration shell. So the concept of a selectivity filter contradicts the aqueous pore model in the case of the sodium channel.

1.3. Independence of ion fluxes

Flux experiments with radioactive ions were carried out in 1955 by Alan Hodgkin and Richard Keynes to explore the interaction of ion flows within a pore. The fluxes recorded are net fluxes: a movement of 5 ions to the left is indistinguishable from 10 to the left and 5 to the right. By the use of mathematical results of Hans Ussing,8 they measured the ratio of unidirectional fluxes within the membrane. Difficulties with the Ussing derivations were pointed out in 1984.9 After experimentation with a mechanical
model, Hodgkin and Keynes concluded that the pore was “long.”10 A long pore is defined as one that is one atomic diameter wide but several atomic diameters long, so that the molecules would be lined up in a single file. Since these molecules are unable to pass each other, their mobility is greatly reduced.11

1.4. Gates

In the pore model, the channel is a structural pore that spans the membrane, filled with an aqueous solution. When the channel is open, the ions flow through the water in the pore. The pore can be blocked by a postulated movable barrier called a gate. By a mechanism that depends on a charge or dipole in the gate but is otherwise unspecified, the gate is said to be coupled to the electric field across the membrane. The hypothesis of a moving gate has been supported by the observation of tiny gating currents, which are not due to the translocation of ions; see Chapter 9, Section 2.1. Gates have been pictured as sliding or hinged doors, pores that pinch or twist shut, free or tethered particles that block the aperture, pipes that swing along or across the membrane, subunits that assemble or disassemble, or charges that reposition to repel the permeant ions. In the more than 40 years that this approach has been used, many gating mechanisms have been proposed; none has been shown to account for the data. The gate is a “black box,” an unexplained mechanism. The gated pore model inappropriately models the microscopic system of the conduction pathway as a macroscopic one. This picture can be carried to an absurdity: If these sliding or rotating gates were to function like their macroscopic counterparts, they would have to be lubricated to keep them from freezing up. A film of oil would be needed to let the joints slide freely. But since the oil is made up of lipid molecules, this is impossible, since the gap between the sliding surfaces would have to be of the same order of magnitude as the membrane thickness, which is spanned by only two lipid molecules.
So these macroscopic devices would not fit into a channel molecule. There is room only for atoms and atomic groups. The gated-pore model is at an inappropriate scale; it seeks to explain a molecular process with a macroscopic device. When the first single-channel experiments showed sharp rises and falls in the ionic current, some channel researchers viewed this as confirmation of their belief in a microscopic gate within the channel molecule that blocked and unblocked a pore. However, the fact that single-channel records show that discrete currents start and stop suddenly is open to more than one interpretation. Can sharp rises and falls of currents occur without a mechanical gate? Yes; for example, in systems that form domains. An interesting case of this type is the phenomenon of Barkhausen currents in ferroelectric crystals; see Chapter 16, Section 3.2. The condensed-state physicists who study Barkhausen pulses do not assert the existence of gates that open and close, but explain them in terms of phase domains. So it is quite possible to think of sharp rises and falls of current without relying on gates for their explanation. The word “gating” is used purely as a metaphor, and it is only in this sense that it will be used in our discussion of voltage-sensitive ion channels.
Since any shift in the distribution of charges or dipole moments—or hydrogen bonds—could elicit a gating current, the gating current can be interpreted as due to a nonlinear capacitance. This suggests that the opening and closing of a channel is a conformational transition, akin to a phase transition, that travels across the channel. The advancing, electrically polarized boundary of the transition then is a moving charged surface, and this movement of charge may be interpreted as the gating current.

1.5. A “paradox” of ion channels

The remarkable selectivity of the potassium channel, through which K+ ions pass while the smaller Na+ ions do not, has been viewed as a paradox; these ions are considered to be “featureless spheres,” differing only in diameter. When an ion is dissolved in water, the charge-density function can to a good approximation be considered a spherically symmetric distribution and local electrical neutrality can be assumed. The first of these assumptions is made in the Poisson–Boltzmann equation and both are required in the Debye–Hückel treatment, approaches that have been used in a number of analyses of ion motion through channels. These approaches are inadequate when the ion is in a nonsymmetrical environment, particularly when it is bonded to another atom, conditions likely to exist in a voltage-sensitive ion channel. The structures of Na+ and K+ differ in that they contain different numbers of shells, resulting in qualitatively different radial charge distributions. Sodium ions, with only two shells, K (n = 1) and L (n = 2), have two charge maxima. K+ ions, with three shells, K, L and M (n = 3), have three maxima.12 What about the angular distribution of charges? The idea that the alkali metal ions permeating a channel are necessarily spherically symmetric is not correct. For example, charge distributions on each ion of an ionic crystal such as NaCl have only approximate spherical symmetry.
They have some distortion near the region of “contact” with neighboring atoms, as confirmed by x-ray studies of electron distributions. While it is true that a sodium or potassium ion is spherically symmetrical in its ground state in a vacuum, far from any fields, this is not the situation in an ion channel, where the permeant ion is in close proximity to the atoms that constitute the channel and is subject to local electric fields. The spherical symmetry is destroyed by the induced polarization of the ion's electron core. The Na+ and K+ in an ion channel are distinct from each other and nonspherical.13 To explain the K+ channel selectivity, Francisco Bezanilla and Clay Armstrong14 and Bertil Hille15 proposed a circular selectivity filter consisting of oxygen dipoles. Such a filter was found by the group of Roderick MacKinnon in the bacterial KcsA channel in a rigid ring of carbonyl (C=O) groups, which removes the hydration waters from the potassium ions but leaves sodium ions hydrated and unable to pass through the channel.16

1.6. Bacterial model pores and porins

We have argued above that the continued application of the aqueous-pore model to the functional component of voltage-sensitive ion channels appears to be unjustified and
unproductive.17 Nevertheless, an aqueous-pore model may be valid for certain other types of channels. These are structures used by fungi and bacteria to punch holes in the membranes of other cells; examples are gramicidin A and alamethicin; see Section 13 of Chapter 13. Other structures that have been described as aqueous pores are receptors, porins and anion channels such as the cystic fibrosis transmembrane conductance regulator (CFTR),18 as well as the pores produced in a presynaptic membrane when vesicles fuse with them. These proteins have structures quite different from those of voltage-gated potassium, calcium and sodium channels, and are not genetically homologous to them. Pore models of nicotinic acetylcholine receptors and bacterial channels are discussed in Chapters 12 and 13 respectively.

1.7. Water through the voltage-sensitive ion channel?

Referring to the high current flux through ion channels, Hille wrote, “So far as we know, such high throughput is compatible only with an aqueous pore mechanism.”19 Since we know (see Chapter 6) that superionic conductors have conductances comparable to aqueous solutions, we are not compelled to agree with that statement. Another relevant question is: Does water pass through the channel along with the ions, as aqueous-pore conduction suggests? Experiments that suggest that it does have a common problem: to force water through the channel, a strong osmotic-pressure gradient is introduced across the membrane. To maintain the channel in its normal physiological state, however, the inner and outer solutions must be at the same osmotic pressure. So while these experiments tell us that water passes through an artifactually altered channel, they leave us in the dark as to whether water passes through the channel during its normal activity.
Without the aqueous pore model, the word “pore” could still be used for the stochastic pathway an ion takes through the atomic sites of the ion channel, but it would be a functional rather than a structural pore.

1.8. Molecular dynamics simulations

Computer simulation allows the motions of individual ions and water molecules to be followed explicitly in a model pore. Numerical integration of Newton’s laws of motion is applied to a classical potential energy function for the system. The electric field surrounding a molecule is modeled by continuum electrostatics. The diffusion of an ion in the electrostatic field of the surrounding protein is modeled via a random force related to the diffusion coefficient. Current computer power permits the motion to be analyzed for about 10 ns, about one tenth of the time required by an ion to move through the channel. Molecular dynamics techniques have been applied to gramicidin, alamethicin, bacterial channels and inward rectifier potassium channels.20 Calculations based on an aqueous tapering pore bounded by rigid protein walls were carried out by Michael E. Green and colleagues. Charged amino acids are
represented as rings of charge spaced 90° apart in an axially symmetric pore perpendicular to the membrane plane. K+ ions, represented as spheres, induce charges at the pore boundaries. The positions and orientations of water molecules were determined by Monte Carlo simulations based on electrostatics. Gating is initiated by the tunneling movement of a proton from one basic residue to another in a simulated S4 segment of the channel, followed by a cascade. In a recent paper, Alla Sapronova, Vladimir Bystrov and Green postulate that voltage gating consists of the tunneling of a proton, followed by H+ transfer along a chain of positive amino acid residues, bringing the proton to a critical gating region. The weakening of a short hydrogen bond in this region, caused by the addition of the proton, allows the four channel domains to separate, providing an opening of the channel through which the ions pass. The group also considers the possibility of a ferroelectric system in which the cooperative motion of protons represents one step of gating, which can be treated as a phase transition.21

2. MODELS OF ACTIVATION AND INACTIVATION

Let us now shift our point of view and consider kinetic models, such as those presented in Chapter 9. These models, such as the important model of Hodgkin and Huxley, are not molecular models but, because they deal with the interaction of ions with sites in the channel, they fall within the purview of microscopic models. One problem with conventional channel models is the separation of ion kinetics into separate factors, activation and inactivation. Hodgkin and Huxley set out to describe the nonlinear kinetics of INa in terms of linear laws. They devised functions m(V) and h(V) for the normalized conductance, each obeying a linear kinetic equation. Then they formed a nonlinear function of m and h; after trial and error, they found that the product m³h of activation and inactivation factors gave an acceptable fit.
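The factorization can be made concrete with a minimal numerical sketch: each of m and h relaxes exponentially under its linear kinetic law, while their product m³h is nonlinear and reproduces the transient rise and fall of the sodium conductance. The time constants and steady-state values below are illustrative assumptions, not the voltage-dependent functions fitted by Hodgkin and Huxley.

```python
import math

def gate(t_ms, x_inf, x0, tau_ms):
    # Linear first-order kinetics: x relaxes from x0 toward x_inf
    return x_inf - (x_inf - x0) * math.exp(-t_ms / tau_ms)

def g_na(t_ms, g_max=120.0):
    # Fast activation (m), slow inactivation (h); values illustrative
    m = gate(t_ms, x_inf=0.9, x0=0.05, tau_ms=0.3)
    h = gate(t_ms, x_inf=0.05, x0=0.6, tau_ms=5.0)
    return g_max * m**3 * h   # the Hodgkin-Huxley product m^3 h

# The conductance rises as m activates, then decays as h inactivates
early, transient, late = (g_na(t) for t in (0.0, 1.0, 10.0))
assert transient > early and transient > late
```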
Controversies subsequently arose regarding the possible coupling of m and h. Although Hodgkin, Huxley and Katz described the potassium current as lacking inactivation, IK inactivation was later reported by S. Nakajima and collaborators22 as well as others. Potassium inactivation in frog muscle is nearly complete in 2 s, and the steady-state inactivation curve shows that inactivation occurs at membrane potentials above -40 mV. The description of sodium current is complicated, because the underlying laws, which the Hodgkin–Huxley equations approximate, are nonlinear. If a single set of differential equations applies both to the early and the late currents, as one would expect of a correct theory of a particular channel, the splitting of the conductance into activation and inactivation factors would be arbitrary and unnecessary. Certain combinations of prepulses and test pulses bring out one variable strongly, while other combinations emphasize the other, so these have become the operational definitions of m and h. The statement that m and h become uncoupled for a certain channel mutation is equivalent to saying that for the mutated channel under the given conditions the Hodgkin and Huxley assumption of m and h independence is not too bad. However, since there are conditions for which the factors are coupled, the factorization procedure is not generally valid and so may as well be abandoned.
MICROSCOPIC MODELS OF CHANNEL FUNCTION
A later formulation, under new operational definitions, further divided h into slow and fast inactivation. If the underlying kinetics is nonlinear, it would take more and more terms to get a better and better description based on linear kinetics. What complicates the picture is the fact that INa is a statistical combination of individual sodium channel openings and closings, which are molecular events dependent on local fields, ion occupations and bonds.

2.1. Armstrong model

An early model of the channel’s voltage sensor was proposed by Clay Armstrong. Based on the requirement of providing a favorable pathway for permeant ions, this model pairs oppositely charged surfaces in two membrane-spanning protein segments. A positively charged segment ratchets through a series of intermediate positions to move a sodium ion across the membrane.23

2.2. Barrier-and-well models of the channel

A microscopic approach recognizes that the translation of ions across a membrane must involve the surmounting of potential energy barriers between discrete sites at which ions bind momentarily. The motion of an ion from one site to another is assumed one-dimensional, perpendicular to the membrane. The ion enters the membrane by surmounting an energy barrier and drops into a well that represents a metastable state in the channel.
Figure 14.2. A barrier-and-well model of the hopping of cations across a membrane down an electric field, activated by their thermal energy. From Pethig, 1979.
CHAPTER 14
After crossing multiple barriers it emerges into the solution on the opposite side. The ions can move in either direction. As Hille points out, rate theory “... compresses the complexities of diffusion through a sea of condensed matter into rate constants of hopping over just a few barriers.”24 Ion transport is modeled as a hopping activated by the ion’s thermal energy, as diagrammed in Figure 14.2.25 This concept was elaborated by Henry Eyring and R. B. Parlin, who analyzed membrane permeability in statistical-mechanical terms in 1954.26 Diffusion through membranes is viewed as a one-dimensional random walk process, subject to the theory of absolute reaction rates. A diffusing particle moves in one direction until a barrier is encountered. The probability of a forward or backward jump depends on the local concentration Ci, the distance between barriers, and the forward and backward rate constants ki and k′i. The ΔFi‡ are free energies of activation, and ΔFn is the difference of free energies between the inside and outside of the membrane. Figure 14.3 shows a schematic plot of such a series of barriers.27 In a steady state, the rate constant is determined by the difference between the free energies of activation at the two neighboring barriers. This is expressed by a Boltzmann equation. According to Eyring absolute rate theory, the dependence of the rate constants ki and k′i for a transition on the free energy of activation ΔFi‡ per mole is given by the equation

ki = κ (kBT/h) exp(−ΔFi‡/RT),   (2.1)

where κ (<1) is a reaction probability called the transmission coefficient, kB is the Boltzmann constant, h is Planck’s constant and R is the gas constant. At room temperature the prefactor kBT/h is approximately 6 × 10¹² s⁻¹.
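Equation 2.1 can be checked numerically; the sketch below reproduces the quoted prefactor and evaluates a rate for an assumed, purely illustrative barrier height of 40 kJ/mol.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate(delta_f_act, T=298.0, kappa=1.0):
    """Eyring rate constant k = kappa * (k_B*T/h) * exp(-dF/(R*T)),
    with the molar free energy of activation dF in J/mol."""
    return kappa * (K_B * T / H) * math.exp(-delta_f_act / (R * T))

prefactor = K_B * 298.0 / H        # ~6.2e12 s^-1, as quoted in the text
k_hop = eyring_rate(40e3)          # hopping rate over a 40 kJ/mol barrier (assumed)
```

The exponential makes the rate extraordinarily sensitive to barrier height: each additional 5.7 kJ/mol (about RT ln 10 at room temperature) costs a factor of ten in hopping rate.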
Figure 14.3. A series of free-energy barriers such as those found in the diffusion of ions through a membrane. From Eyring and Urry, 1965.

When a potential difference V is applied to the membrane, the flux will depend on V. This is dealt with in the theory by assuming that the electric field is uniform
across the membrane, so that the local electric potential rises linearly across the membrane, as in Figure 14.2. The linearity assumption means in effect that the charges of the permeant ions do not perturb the electric field, apparently contradicting the laws of electrostatics. The analysis yields a form that is comparable to continuum electrodiffusion. When the energy barriers are chosen to vary linearly across the membrane, the ion current density for the Parlin–Eyring model reduces to that for the Goldman uniform field model (Equation 3.7 of Chapter 7).

2.3. The inactivation gate

If the pore model is to conform to the Hodgkin–Huxley description of the sodium current, it must explain both its activation (m³) and inactivation (h) factors. To this end, an additional gate, the inactivation gate, has been installed in the model pore. This new gate must close, after a delay, upon depolarization. Clues as to the nature of this fast inactivation were found when quaternary ammonium ions such as tetraethyl ammonium (TEA+) injected into the axoplasm were found to simulate the inactivation process. Fast inactivation was irreversibly removed by perfusing the axon internally with protease solutions, which removed a cytoplasmic portion of the channel. To model this behavior, a mechanism based on a familiar device was contrived: the ball-and-chain model of inactivation.28 A movable part of the channel, visualized as a ball tethered by a chain attached to the cytoplasmic surface of the channel, moves into position upon depolarization to block the outward passage of sodium ions through the pore; see Figure 14.4.29
Figure 14.4. The ball-and-chain model of inactivation, proposed by Armstrong and Bezanilla in 1977. Channel activation is accompanied by a shift in gating charges and the creation of a receptor at the inner mouth of the pore for the inactivation gate. The positively charged ball is attracted to the receptor and binds to it, occluding the pore. From Hille, 2001. Reproduced from The Journal of General Physiology, 1977, 70:567-590. Copyright 1977 The Rockefeller University Press.
While a plug in a pore may appear to adequately explain the phenomenon of fast inactivation, the mechanical blocking by the “inactivation ball” may not be necessary. Experiments at Richard Aldrich’s lab showed that the deletion of a domain at the N
terminal of the Shaker K channel produces channels without fast K inactivation.30 An analogous region exists in Na channels in the cytoplasmic III-IV linker. These cytoplasmic chains of inactivating channels have positively charged residues. Thus a simpler explanation would be that the approach of these residues electrostatically repels the outwardly permeating ions. In this case, the chain must lodge securely enough in the cytoplasmic surface of the channel that the charge of the ion does not push it out.

2.4. Beyond the gated pore

The inadequacy of the aqueous pore model is evident from:
• its inability to account for selectivity sequences, at least for the Na channel;
• its inability to explain gating without special “black box” mechanisms at the wrong scale;
• its inability to explain the frequency-dependent admittance and fluctuation properties of channels;
• its inability to account for optical phenomena such as voltage-dependent birefringence; and
• the lack of evidence found at the molecular level for the special structures it postulates.
We understand macroscopic gadgets because they are important in our everyday lives. The model's attribution of different functions of the channel—permeation, gating and selectivity—to different parts of the channel has a certain aesthetic appeal, because the workings of lawn mowers, cars and computers require different parts for different functions. This principle even works well in understanding our own bodies—at least at the organ and cell levels, but not at the molecular level. Is there something better? Perhaps, but to search for it we must abandon the macroscopic models in favor of microscopic models such as those that have been developed in the study of the condensed state. We will examine some of these alternatives later, going beyond the simple answer of the aqueous-pore model into more difficult but ultimately more credible approaches. We should, however, be aware that the years of dominance of the aqueous-pore model have left their mark on the literature; it is firmly ensconced in experimental approaches of the field as well as its vocabulary. Because the aqueous-pore hypothesis has been so thoroughly accepted, its language has been incorporated into structural models. The words “gate,” “open” and “selectivity filter” have become an accepted part of the vocabulary of channel biophysics. Since these words are part of the literature of the field, we have to adapt to them, while being alert to avoid any circular reasoning. Since we will not always be able to avoid such loaded terminology, we must take the pragmatic view that these terms are only labels and do not necessarily represent any physical structures. We have to consciously remind ourselves that “gating” has nothing to do with the movement of a gate; that there is no “pore” that “opens” when the channel opens, but that the channel undergoes a transition from a nonconducting to an ion-conducting conformation.
Since we are proposing to abandon the concept that ions move through a water-filled pore in voltage-sensitive ion channels, let us see what chemistry can suggest about the interaction of metal ions with organic materials.

3. ORGANOMETALLIC CHEMISTRY

Supramolecular chemistry deals with intermolecular interactions between molecules that act as building blocks for molecular assemblies. Many important biological molecules are coordination compounds, which have organic ligands coordinated to metal atoms or ions through intermediate donor atoms such as oxygen, nitrogen, sulfur and phosphorus; these are termed metal-organic. The direct attachment of organic groups to metal and semimetal atoms leads to organometallic compounds, which possess a direct metal–carbon bond, M–C. Because the metal is more electropositive, this will be a polar bond, with a partial positive charge on the metal and a partial negative charge on the carbon. Complex biological systems frequently display self-organization, the spontaneous association of molecules into stable, well-defined aggregates. To form the architecture of a supramolecular array, information is required to make molecular recognition possible. The interacting species must be reciprocally complementary, both geometrically and energetically. Although an individual guest–host interaction may be weak, the cumulative effect of multiple binding sites leads to strong complexation. Molecular solids tend to fill space compactly. In nonpolar solvents such as the hydrocarbons of lipid membranes, the polar organometallic molecules with electronegative functional groups or groups capable of forming hydrogen bonds self-assemble. They will interact to build an internal core, wrapped into a lipophilic external jacket adjacent to the solvent. This process of self-assembly is at the molecular level, as in the formation of micelles and bilayers.31

3.1. Types of intermolecular interactions

Two aspects of supramolecular chemistry are distinguished: A supramolecular array or assembly is a spontaneous association of a large number of molecular components. A supermolecule is a discrete oligomolecular species that results from the intermolecular association of a few components. The types of interactions between the building blocks and the forces that bind them define the properties of the supramolecular array and its architecture. Directional intermolecular forces can induce self-assembly and self-organization. The molecules that form building blocks in a self-assembled, ordered supramolecular structure form an organized network with specific architectural or functional features. The types of noncovalent interactions between the building blocks of a supramolecular system are metal–ion coordination (dative bonds), electrostatic forces, hydrogen bonding, donor–acceptor interactions, van der Waals interactions and secondary bonds.
Hydrogen bonding is a major driving force in the self-assembly of organic molecules. The hydrogen bonds may be assisted by ionic interactions. The process of self-assembly requires the presence of donor and acceptor sites, C–H groups capable of forming C–H···O or C–H···N hydrogen bonds, and fragments of opposite charge.
Figure 14.5. Normal covalent (left) and dative (right) bonds in a boron–nitrogen pair. From Haiduc and Edelman, 1999.
Dative bonds involve the sharing of a lone electron pair from an atom called a donor with an empty orbital of another atom, the acceptor; see Figure 14.5. Dative bonds are also called Lewis acid–base interactions; all coordination chemistry is based on them. Only when the supramolecular organization is through dative interactions does the strength of each bond approach that of covalent bonding, formed by electron pairing. Because dative bonds are weaker, their interatomic distances are longer. Secondary bonds are interactions characterized by interatomic distances longer than single covalent bonds but shorter than van der Waals distances. While secondary bonds are weaker than covalent or dative bonds, they are strong enough to influence the coordination chemistry of atoms, combine pairs of atoms, close intramolecular rings or establish intermolecular associations, which combine molecules into supramolecular arrays. A formal similarity exists between secondary bonding and hydrogen bonding. The secondary bonding is basically a linear interaction, X–A···Y, where A···Y is the secondary bond. Like the hydrogen bond X–H···Y, it is a linear asymmetric system, although exceptions exist. Secondary bonds can be considered a particular case of donor–acceptor interactions. Metals of the upper rows of the periodic table tend to form the stronger dative bonds, discussed above, with donors such as fluorine, oxygen and nitrogen. Coordination chemistry becomes generalized in supramolecular chemistry into a system consisting of a receptor, which serves as a host molecule for a substrate, a guest atom, ion or molecule. Applying these ideas to the subject at hand, we can think of the voltage-sensitive ion channel as the biochemical receptor and the permeant metal ion as the substrate. A possible model for an interaction between ions and the channel is the molecular recognition between crown ethers and alkali metal ions.

3.2. Organometallic receptors

Alkali metal ions can form host–guest complexes with large rings, such as organocyclosiloxane receptors. A potassium complex of a 14-membered ring is shown in Figure 14.6. The ring is basically planar, and the cation guest is coplanar with it. The K+···O distances average 2.93 Å. Similar ion-complexed rings are found in the cation-specific antibiotics valinomycin, enniatin B and nonactin.32
Figure 14.6. A potassium complex of a tetradecamethylcycloheptasiloxane ring. From Haiduc and Edelman, 1999.
A family of sandwich complexes incorporating several metal ions has been studied by x-ray diffraction. In one of these, sodium ions are attached to the top and bottom of the sandwich, producing a crown conformation; see Figure 14.7. The cyclohexasiloxane ring is planar, but the sodium ions are displaced outward from the sandwich. The supramolecular architecture involves both self-assembly and ion recognition of crown ether type.
Figure 14.7. Detail of a sandwich-type guest–host assembly incorporating a sodium ion. From Haiduc and Edelman, 1999.
Iron-containing macrocycles called ferrocene coronands undergo reversible one-electron oxidation and the redox potential is usually sensitive to metal–ion complexation by the crown ether part of the molecule. Complexes of this type have been prepared in reactions with lithium, sodium and potassium thiocyanates. The macrocyclic ligands extract cations in the order Tl+>Rb+>K+>Cs+>Na+. An example of a ferrocene coronand as a potassium-ion carrier that can transport sodium ions across membranes is given in Figure 8.7 of Chapter 9.33 The seemingly minor differences between sodium and potassium ions can induce major changes in receptor molecules. The recognition and complexation of these receptors with alkali metal ions is well illustrated in cobaltocenium moieties combined with polyoxo macrocyclic ethers. Their electrochemical sensitivity is shown in Figure 14.8. While the receptor with two crown ether macrocycles forms a 1:1
complex with potassium, it forms a 1:2 complex with sodium. This switching of the molecule is due to the different recognition properties of the crown to the sodium and potassium ions.34
Figure 14.8. A cobaltocenium receptor with potassium ions (left) forms a sandwich 1:1 complex, and with sodium ions (right) forms an extended 1:2 complex. From Haiduc and Edelman, 1999.
3.3. Supramolecular self-assembly by π-interactions

Supramolecular self-assembly by ionic interactions is found mainly in organometallic compounds of the most electropositive metals, the alkali and alkaline earth metals.35 In these polar organometallics self-aggregation—from dimers to polymers—is common. Supramolecular alkali metal compounds may involve π coordination. Ligands are labeled according to their hapto number n, the number of ligand atoms within bonding distance of the metal atom.36 Cyclopentadienyl complexes, polymers of the five-membered rings MC5H5, where M = Li, Na, K, Rb, Cs, are characterized by extensive supramolecular self-assembly. Their solid-state structures have been elucidated by powder diffraction with high-resolution synchrotron radiation. Both [LiC5H5]n and [NaC5H5]n polymers form multidecker structures, with cations linearly coordinated by two η5-bonded cyclopentadienyl rings; see Figure 14.9.37
Figure 14.9. Cyclopentadienyl rings coordinate sodium atoms. Lithium forms a similar structure. From Haiduc and Edelman, 1999.
It is interesting to note that the potassium analog of these structures, Figure 14.10, differs in that [KC5H5]n forms zigzag structures, in which each potassium ion forms an angle of 138.0° with its neighboring ions. [RbC5H5]n and [CsC5H5]n form similar chain structures. Supramolecular self-assembly by π interactions also occurs in many other organometallic compounds.
Figure 14.10. Cyclopentadienyl complexes form zigzag structures with potassium, rubidium and cesium ions. From Haiduc and Edelman, 1999.
We have seen that metal ions can form a variety of covalently bonded structures within organic molecules. In Chapter 18 we will see that they also take part in hydrogen bonding.

4. PLANAR ORGANIC CONDUCTORS

Organic crystals formed of stacked planar molecules have been found that transport electrons without energy loss along one-dimensional chains. Molecular structures of this type include 7,7,8,8-tetracyano-p-quinodimethane (TCNQ), tetrathiofulvalene (TTF), tetraselenafulvalene (TSF) and tetramethyltetraselenafulvalene (TMTSF). The cyclic structures of these compounds are shown in Figure 14.11.38
Figure 14.11. Planar organic molecules that form crystals with high anisotropic conductivity. From Davydov, 1985.
Arrangements of these molecules to form chains of donor and acceptor atoms (such as Cs) or groups (PF6− or ClO4−) result in electron transfer with anisotropies in conductivity as high as 10³ to 1. Crystals of (TMTSF)2PF6 became superconducting at temperatures below 0.9 K and a pressure of 1.2 × 10⁴ atm. Crystals of (TMTSF)2ClO4 became superconducting below 1.2 K at atmospheric pressure.
The high conductivity may arise from the transfer of excess electrons or positively charged vacancies called holes. The stacks of molecules form soft structures in which intermolecular distances are easily varied. The distances between molecular layers are controlled by weak van der Waals forces. Structural deformations lead to nonlinear displacements of electrons or holes. These, together with the structural deformations they induce, are referred to as quasiparticles. The transfer of these quasiparticles is described theoretically as solitons or electrosolitons. The nonlinear differential equations describing the electrosolitons take into account the spins and interactions of the quasiparticles. During ATP synthesis, electrons in mitochondria and chloroplasts travel in pairs with oppositely directed spins, as in redox reactions. In protein molecules, helices serve as bridges for electron pairs, with an electron pair traveling along a chain of peptide groups. When the electric field exceeds a certain critical value, the pairing of the electrons is broken. Quantum fluctuations occur even at zero temperature, but are amplified with increasing temperature. These fluctuations tend to break the electron pairing. The Coulomb repulsion between the electronic charges may be weakened by screening. Quasi-two-dimensional organic semiconductors such as bis(ethylenedithio)tetrathiafulvalene (BEDT-TTF) have been reported. Organic superconductivity has also been observed in fullerenes, nanotubes and arenes, with a critical temperature in a lattice-expanded fullerene as high as 117 K.39 Solitons in liquid crystals and proteins will be discussed in Chapters 18 and 19.

5. ALTERNATIVE GATING MODELS

While many scientists were proceeding by the inductive method of searching for theory in the data, guided by a mental picture involving an aqueous pore, a voltage-activated gate and a selectivity filter, other scientists were heeding Buckminster Fuller’s advice:40 “In order to change something, don’t struggle to change the existing model. Create a new model, and make the old one obsolete.” Using the deductive method, they sought to explain the phenomena of voltage-sensitive ion channels from physical and chemical principles. As we search for a molecular approach, let us remember that quantum mechanics rules the microscopic realm.

5.1. The theories of Onsager and Holland

Lars Onsager as early as 1967 suggested an ion channel composed of protein helices with polar groups to provide local solvation for the ion.41 In 1973, B. W. Holland, following Onsager's proposal, postulated that a channel for inorganic ion transport consists of a bundle of α-helical protein strands, oriented with the axes of the helices perpendicular to the membrane surfaces. Nonpolar sidechains on the helices stabilize the configuration in the lipid bilayer. The oxygen and nitrogen atoms of the helices form chains of electronegative centers traversing the membrane. Positive ions can pass along these chains by hopping
from one such center to the next. The energy required for the passage of an ion along an α-helix chain will be much lower than that required for passage through the lipid, and may be further reduced by cooperative motion of the protons linking the N and O atoms.42 The potential energy of the proton as a function of position between the N and O atoms will have two strong but unequal minima. While the proton normally sits at the lower minimum, closest to the nitrogen atom (normal polarization), it may also sit at a metastable site near the oxygen atom (anomalous polarization). It may shuttle between the two sites, thereby facilitating the hopping of a cation along the chain. Anomalously polarized hydrogen bonds will be weakened and hence longer, so lengthening the helix if many of the hydrogen bonds are in the metastable state. If the orientation of the helix is such that the N–H vector is directed toward the inner surface of the membrane, the transmembrane electric field might be strong enough to make the anomalous configuration stable, since the lengthening of the distance between the charges will make the dipole moment of the anomalous configuration larger than that of the normal configuration. For this explanation to apply, it is necessary for the change in dipole field energies to be larger than thermal energies. This condition is satisfied for differences in proton displacement of 0.1 nm. The postulated hopping mechanism involves two types of cooperative motion by the protons of the hydrogen bond. When an ion is located on an electronegative site—a nitrogen or oxygen atom—the neighboring H bonds will reduce the electrostatic potential energy by being polarized in directions away from this site. Hopping of the ions to the next site requires a cooperative reorientation of the polarizations of nearby hydrogen bonds. The diffusion coefficient for the hopping process will depend on the activation energy of this cooperative process.
The helix will be stressed by the interaction of its dipole moment with the resting potential. When the field is reduced in a depolarization, part of its internal potential energy will be released into longitudinal vibrational modes of the helix (perpendicular to the membrane plane). The polar modes of the protons in the hydrogen bonds will be particularly affected, and as a consequence the activation energy for ion hopping will be lowered. The coupling of the polar modes with the inorganic ions will locally raise their temperature, also facilitating ion transport. How does this model, developed long before ion channels were identified, compare with our present knowledge? It must be pointed out that Holland, writing in 1973, was not aware that the sodium channel and potassium channel are separate structures, and refers to the “competition” of the two ions for sites in the channel. While the assumption of α-helical strands is remarkably consistent with current models, the assumption that their axes are perpendicular to the membrane plane has not been confirmed; actually, as we saw in Chapter 13, the axes are tilted at an angle to the membrane normal. This does not disturb the postulated sensitivity to the transmembrane field; however, it opens up a new possibility: a sensitivity to tilt and to electric fields in the plane of the membrane. These are essential parts of the transition model described in Chapters 20 and 21. At another point, Holland notes that the flow of ions through the channel transfers more energy into the polar modes. If the rate at which energy is lost from the
interacting system exceeds the generation rate, the conductance of the permeant ion will fall back to zero. Holland uses this argument to explain a subthreshold response; however, it may perhaps be better applied (from today's perspective) to inactivation of the sodium channel.

5.2. Ion exchange models

Donald Chang pointed out that a channel is a conductive pathway for permeant ions and not necessarily a single pore. Citing morphological findings, Chang asserts that ion channels are connected to the protein network of the adjacent cytoplasm. This membrane–cortex model maintains that the physiological membrane consists of the plasmalemma plus a sublemmal protein structure, the cell cortex.
Figure 14.12. Structure of the membrane protein in the membrane–cortex model (A) and in the rigid-pore model (B). From Chang, 1983.
The combined structure possesses ion-exchange properties, and conformational changes in these proteins lead to their electrical properties. Ions pass through the channel by moving through the interstices of the protein structure or by ion exchange and hopping between charged sites. Figure 14.12 compares the structure of the membrane protein in the membrane–cortex model to that of the rigid-pore model.43 Electron-microscopic studies have found that the layer of cytoplasm adjacent to the plasmalemma contains contractile and structural proteins in a dense structure. The intactness of this structure is correlated with the excitability of the axon.44 Biochemical, physiological and optical evidence also supports the role of the cortical protein in excitation. Electrophysiological data obtained
by Chang show that the assumption of the independence of ion flows (Section 1.3 above) is violated when internal ion concentration is varied. Experiments with chemicals known to disrupt cytoplasmic proteins support this model. Colchicine, which disrupts the assembly of microtubules, reversibly and selectively suppresses the early conductance of the channel. Cytochalasin B, which disrupts microfilaments, decreases both the late and the early current. The concept that the internal surface of an axon contains structures extending into the axoplasm that contribute to membrane excitability received support from the finding that the excess noise of material from the cytoplasmic surface of squid axon exhibited 1/f fluctuations similar to those of excitable membranes.45

5.3. Hydrogen dissociation and hydrogen exchange

Ludvik Bass and Walter Moore pointed out that the selective increase of membrane permeability by a factor of 40 from a depolarization of a few millivolts must require an extraordinarily nonlinear link that is stable to thermal fluctuations even in small areas such as the node of a myelinated axon. A possible link is the hydrogen ion concentration, since a pH increase of 0.1-0.2 relative to its normal value of 7.2-7.4 in the axoplasm causes spontaneous repetitive action potentials.46 Citing this sensitivity of ion permeation to alkalosis, they proposed a mechanism based on the effect of an electric field on the hydrogen dissociation constant (Wien dissociation effect) of a weak acid–base system. For a rapid depolarization, Bass and Moore calculate a pH increase sufficient to initiate an action potential.47 The electric conductivity of electrolytic solutions increases with field strength up to several times its ohmic value at high values of the electric field. This effect sets in at about 10⁵ V/cm, of the order of fields in excitable membranes.
In a medium of low dielectric constant, normally neutral ion pairs may dissociate and become mobile, contributing to the electric current. The increase in charge carriers increases the ionic conductivity.48 The process is considered to involve both association/dissociation and diffusion of protons. Excitation via a transient alkalosis of the membrane may involve the effect of pH on protein conformation and/or Ca2+ binding. As an example, Bass and Moore cite a helix–coil transition involving changes in the net charges of the sidechains. Less drastic changes, such as a proton cascade initiated by H+ tunneling across a threshold, have also been considered.49 Charles Schauf and collaborators have studied the effects of substituting heavy water, D2O, for H2O as the solvent bathing voltage-clamped Myxicola giant axons. Effects of such an isotope substitution may be primary, due to the alteration of the solvent, or secondary, due to the replacement of hydrogen with deuterium in the molecular structure of the ion channel. D2O substitution resulted in a decrease in the maximum sodium and potassium conductances and a slowing of the activation and inactivation kinetics of sodium currents.50
5.4. Dipolar gating mechanisms

Dipolar electric-field gating mechanisms were proposed by L. Y. Wei.51 Wei points out that the Goldman equation fails to predict correct potentials in cation exchange membranes, and further, that the cable analog is flawed since the movement of cations across a membrane is not analogous to the impenetrability of the cable insulator. As an alternative model of the axon’s electrical structure, Wei proposed electric dipole layers at the two membrane surfaces, each with the negative terminal in the aqueous phase. Thus the membrane can be (macroscopically) described as a pnp configuration, where p and n stand for excess positive and negative ions, respectively. The negative layers repel anions and so explain the fact that the membranes generally are permeable to cations only. Wei considers the resting membrane potential to be the sum of the barrier potentials plus the “true” membrane potential between the dipole layers. Taking the simple case in which dipoles have only two stable states, parallel and antiparallel to the majority orientation, he assumes that the dipoles may relax by a flip-flop mechanism. He derives a force condition, which is consistent with excitability of axons under a wide variety of ionic environments, such as those already demonstrated by the Tasaki group, discussed in Chapter 4. Wei explains thermal and optical effects by considering the excitation of the dipoles. When the stimulus is removed, the dipoles relax to a lower quantum state, emitting infrared radiation; this has been measured, as we saw in Chapter 4.

5.5. A global transition with two stable states

We have seen in Chapter 9 that Hodgkin and Huxley described sodium and potassium currents in terms of driving forces and voltage-dependent conductances. But the “driving” of the ion movement could be a more subtle effect.
If we think of the ion channel as a thermodynamic system, its overall conformation could be seen as dependent on a number of variables, including temperature, pressure and electric field. We can visualize the channel as undergoing a global transition, similar to a phase transition. In this alternative view, the response of the channel is viewed in statistical rather than mechanical terms. The two-stable-states approach of Ichiji Tasaki, at the membrane level, emphasized measurements of light scattering and birefringence,52 and led Tasaki and Kunihiko Iwasa to the discovery of membrane swelling during an action potential.53 Phase transitions have been observed in lipids, but their application to channels is controversial; they will be discussed at length in the following chapters. 5.6. Aggregation models Gilbert Baumann and Paul Mueller54 proposed an alternative channel approach in the concept of a modular channel that opens and closes by an aggregation process. A specific type of aggregation model developed by Terrell L. Hill and Yi-Der Chen was rejected by them on the basis of its failure to match data by Cole and Moore on induction and superposition. Baumann55 then developed a more general aggregation
model that overcame the objection by Hill and Chen. Baumann defined aggregation as the process by which a number of molecules come together to form aggregates, which can grow to a limiting size. In a nonrestricted aggregation reaction monomers can come together to form dimers; further addition of monomers can yield trimers and tetramers. This aggregation model was capable of explaining the Cole–Moore effect, in which a hyperpolarizing prepulse delays onset of the ion current in potassium channels. 5.7. Condensed state models Building on earlier work by Alexander Mauro and H. G. L. Coster and colleagues, Gerold Adam proposed an ionic semiconductor model for steady state electrical characteristics of the squid axon membrane.56 In Adam's ionic psn-junction model, the capacitance is practically independent of voltage, as observed in squid-axon membrane. The model assumes that the membrane consists of a positively charged n layer at the extracellular surface, a central s layer with negligibly few fixed charges, and a negatively charged p layer at the intracellular surface. The s layer is assumed to obey the Nernst–Planck equation (Section 2.1 of Chapter 7). A cooperative transition changes the ionic permeabilities of the membrane. An implication of the model is that the charged groups of the channel protein control its ionic content and the membrane potential. Wolfgang Schwarz57 found thermal hysteresis in the Hodgkin–Huxley kinetic parameters as indications of phase transitions in nerve and muscle membranes. 5.8. Coherent excitation models Investigations of properties of materials use the concept of collective behavior, as seen in the collective modes of lasers. Dealing with the properties of matter exclusively on the atomic level is impractical because of the astronomically large number of microscopic states, on the order of exp(10^22).
The concept of collective behavior should be of particular interest in biology, because energy passes continuously through any living system, which means that it will be far from thermal equilibrium. The physicist Herbert Fröhlich, known for his contributions to field theory, condensed matter theory and his book Theory of Dielectrics, proposed that the energy of living systems will lift a few modes of motion into different states far from equilibrium. These excited states are stabilized by nonlinear effects to become metastable states.58 Fröhlich invokes the quantum mechanical concepts of boson condensation and off-diagonal long range order to arrive at conclusions that remain controversial. The remarkable dielectric properties of biological membranes led Fröhlich to suggest that electric polarization waves should be strongly and coherently excited in membrane macromolecules. By using estimates of the thickness and elastic modulus of cell membranes, he estimated the frequency of these waves to be in the range of 10^11–10^12 Hz. Fröhlich proposed that ionic displacements may lead to the establishment of ferroelectric or antiferroelectric states; see Chapter 16.
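Fröhlich's order-of-magnitude estimate is easy to reproduce. The sketch below assumes illustrative values for the membrane thickness and the elastic wave speed (they are chosen here for illustration, not taken from Fröhlich's papers) and treats the lowest standing elastic mode across the membrane:

```python
# Order-of-magnitude check of Froehlich's polarization-wave frequency.
# Assumed, illustrative values: membrane thickness d ~ 7 nm and an
# elastic-wave speed v ~ 1.5 km/s, typical of soft condensed matter.
d = 7e-9    # membrane thickness, m (assumed)
v = 1.5e3   # elastic wave speed, m/s (assumed)

# Lowest standing-wave mode across the membrane: half a wavelength
# fits the thickness, so lambda = 2*d and f = v / lambda.
f = v / (2.0 * d)
print(f"f ~ {f:.2e} Hz")  # ~1e11 Hz, inside the 1e11-1e12 Hz range
```

Any reasonable choice of thickness (5–10 nm) and sound speed (1–2 km/s) lands in the same decade, which is the point of the estimate.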
In a model calculation, Fröhlich assumed that a field of polarization waves was coupled nonlinearly to a field of elastic waves. His results, applied to macromolecules, implied that they would have metastable excited states with very large dipole moments. The consequences of this model involve energy storage as well as the recognition and attraction of one system by another and the induction or repression of processes. While Fröhlich mentions sense receptors and the propagation of nerve impulses as possible applications, the theory is short on biological detail. A search for oscillations in the range of 10^11–10^12 Hz yielded generally negative results. 5.9. Liquid crystal models According to a liquid-crystal model of S. E. Bresler and V. M. Bresler, the voltage drops when the membrane is depolarized at some point, while the electric-field vector at adjacent points rotates by 90 degrees. Since the protein molecules are assembled into liquid crystalline domains, their behavior when the electric field vector E rotates will be determined by their electric anisotropy, and so likewise will the structural components of the molecules. When the field rotates by 90 degrees, the domains rotate along with it.59 Larsson and Lundström modeled 1/f noise in nerve membranes by assuming that the free energy of the liquid crystalline bilayer depends on the gradients of the directions of the molecular hydrocarbon chains, so that fluctuations in the directions of the lipid segments modulate the conductivity of an ion channel.60 6. REEXAMINATION OF ELECTRODIFFUSION We have seen a number of alternative models, but all are incomplete. The excitable membrane may well be like a semiconductor, but what is the relation between these disparate systems? The transformation of an ion channel under depolarization from nonconducting to conducting does indeed appear to be like a phase transition, but just what kind of phase transition is it?
Hydrogen ions surely must play a role in the transition, but what is that role? Membrane excitability is a complex process, and we will learn much more about it when we study it at the molecular level. However, the additional details may only prove confusing if we have not properly addressed the problem at the membrane level. So let us reconsider the quantitative model of electrodiffusion. We closed Chapter 8 with a question for the reader to ponder: Should we abandon electrodiffusion or reexamine its assumptions? As we have seen, Hodgkin and Huxley have taken the former path with their empirical model. Now let us explore the alternative path and carefully review the premises of the classical electrodiffusion model. We use the word “classical” here to suggest that there may be aspects of electrodiffusion that were not explored in the biophysical literature until the 1970s.
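For orientation, the best-known closed-form result of classical electrodiffusion is the constant-field (Goldman–Hodgkin–Katz) voltage equation. The sketch below evaluates it with textbook squid-axon concentrations and the Hodgkin–Katz resting permeability ratios; these particular numbers are standard illustrative values, not figures quoted in this chapter:

```python
import math

def ghk_voltage(P, c_out, c_in, T=293.0):
    """Constant-field (GHK) membrane potential for K+, Na+ and Cl-.
    P: relative permeabilities; c_out, c_in: concentrations in mM.
    Note that Cl-, being an anion, enters with in/out reversed."""
    RT_F = 8.314 * T / 96485.0  # thermal voltage RT/F, volts
    num = P['K']*c_out['K'] + P['Na']*c_out['Na'] + P['Cl']*c_in['Cl']
    den = P['K']*c_in['K'] + P['Na']*c_in['Na'] + P['Cl']*c_out['Cl']
    return RT_F * math.log(num / den)

# Illustrative squid-axon concentrations (mM) and the Hodgkin-Katz
# resting permeability ratios P_K : P_Na : P_Cl = 1 : 0.04 : 0.45.
P = {'K': 1.0, 'Na': 0.04, 'Cl': 0.45}
c_out = {'K': 20.0, 'Na': 440.0, 'Cl': 560.0}
c_in = {'K': 400.0, 'Na': 50.0, 'Cl': 52.0}
v = ghk_voltage(P, c_out, c_in)
print(f"V_rest = {1000*v:.1f} mV")  # about -60 mV
```

This constant-field solution embodies several of the assumptions listed in Section 6.1, which is why it serves as a useful reference point when those assumptions are questioned.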
6.1. Classical electrodiffusion – what went wrong?
The classical electrodiffusion model, applied to the single-ion case, makes the following assumptions:
- Nernst–Planck equation
- Equation of continuity
- Gauss's law
- Einstein's relation, D = ukT
- Scalar dielectric permittivity
- Scalar ion mobilities
- Noninteraction of the ion with the membrane
- Electric field normal to the membrane
- Constant dielectric permittivity
- Constant ion mobility
A further assumption is the application of single-ion electrodiffusion to experimental data from membranes with multiple permeant ions. Worse yet, among the ions ignored are divalents such as calcium. This misapplication is bound to lead to false conclusions about the applicability of the theory. As we look at the above list, we note that the Nernst–Planck equation is the basic assumption of the model, while the equation of continuity and Gauss's law are unimpeachable physical laws. Einstein's relation has been challenged occasionally but generally stands on firm ground. The assumptions that the permittivity and mobilities are scalar rather than of the more general tensor character will be discussed in Chapter 17. The assumption that the electric field is necessarily normal to the membrane was challenged by the Breslers. Let us consider the last two assumptions, the constancy of the permittivity and of the ion mobilities. 6.2. Are the “constants” constant? What gives us the right to assume that permittivity and ion mobility would be constant in an excitable membrane? Nature and evolution are certainly not bound by any desire of investigators to keep things simple. The common name for the permittivity, “dielectric constant,” may have contributed to our blindness, but there is no objective reason to assume that these quantities should be constant. Given that they are not necessarily constant, what would they be functions of? What could they depend on? Here the model of Hodgkin and Huxley, with its voltage-dependent mobilities, comes to our aid. Of course, voltage is not a local property but an integral of the normal electric field. Why would these quantities not depend on the electric field? Let us focus first on the dielectric permittivity. Are there materials that possess field-dependent permittivities? Indeed there are; a particular subgroup of these materials is the ferroelectrics.
We have already mentioned these in Chapter 3 and earlier in this chapter, but shall leave them for detailed discussion in Chapter 16.
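As a numerical aside, the Einstein relation D = ukT from the list above can be checked against tabulated data. For an ion of charge e, the mechanical mobility u (velocity per unit force) is the electrical mobility divided by e, so D = μkT/e. The mobility value below is a standard handbook figure for Na+ in water, quoted purely as an illustration:

```python
# Numerical check of the Einstein relation D = u k T for Na+ in water.
# The mechanical mobility u (velocity per unit force) relates to the
# electrical mobility mu (velocity per unit field) by u = mu / e.
k = 1.380649e-23     # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C
T = 298.15           # temperature, K
mu_Na = 5.19e-8      # electrical mobility of Na+ in water, m^2/(V s)

D = mu_Na * k * T / e
print(f"D(Na+) = {D:.2e} m^2/s")  # ~1.33e-9 m^2/s
```

The result agrees with the tabulated diffusion coefficient of Na+ at infinite dilution, about 1.33 × 10^-9 m^2/s, which is why the relation is said to stand on firm ground.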
7. ORDER FROM DISORDER?
The mechanically gated pore picture is, despite its shortcomings, the way channel function is often described in textbooks. We have seen both negative and positive reasons for questioning this model. On the negative side are the data the pore model does not explain: the constant-phase impedance, the fluctuation spectra and many thermal, mechanical and optical effects. On the positive side, the mechanical gates of the model are at the wrong scale to describe molecular processes. Physical analyses have been developed that do not depend on the assumption of a membrane-spanning structural pore. In these models, which we may call a priori models, the ions are not viewed as hydrated when passing through the channel. Instead the channel is postulated, in at least one model, to contain enzymatic subunits that catalyze the reaction that dehydrates the ions and solvates them into the protein channel. The size of the hydrated ions therefore does not determine ionic permeability in these models, and selectivity within the conducting pathway can be explained by the affinity of sites for the bare ions. One such model will be discussed, and compared with experimental data, in Chapter 19. The alternative models discussed require a shift in the way we view the conformational transition of the channel. In the gated-pore model we see order arising from order: channel openings and closings from a structural gate. Some of the alternative models, however, require the emergence of order from disorder. How is this possible? Let us see in the next chapter.
REFERENCES AND NOTES
1. Bertil Hille, Ion Channels of Excitable Membranes, Third Edition, Sinauer, Sunderland, 2001.
2. Hille, 351-354.
3. O. S. Andersen, Ann. Rev. Physiol. 46:531-548, 1984.
4. R. P. Feynman, The Feynman Lectures on Physics II, Addison-Wesley, 1964, 1-9.
5. The physical basis of selectivity is reviewed in J. M. Diamond and E. M. Wright, Ann. Rev. Physiol. 31:581-646, 1969.
6. G. Eisenman, in Symposium on Membrane Transport and Metabolism, edited by A. Kleinzeller and A. Kotyk, Academic, New York, 1961, 163-179; D. Junge, Nerve and Muscle Excitation, Sinauer, Sunderland, 1981, 41.
7. H. Meves and W. K. Chandler, J. Gen. Physiol. 48:31, 1965.
8. H. H. Ussing, Acta Physiol. Scand. 13:409-442, 1949.
9. H. R. Leuchtag, Biophys. J. 45:263a, 1984.
10. K. S. Cole, Membranes, Ions and Impulses, 349-352.
11. Hille, 356f.
12. G. Herzberg, Atomic Spectra and Atomic Structure, Dover, 1937.
13. Charles Kittel, Introduction to Solid State Physics, Third Edition, Wiley, New York, 1966, 89f.
14. F. Bezanilla and C. M. Armstrong, J. Gen. Physiol. 60:588-608, 1972.
15. B. Hille, J. Gen. Physiol. 61:669-686, 1973.
16. D. A. Doyle, J. M. Cabral, R. A. Pfuetzner, A. Kuo, J. M. Gulbis, S. L. Cohen, B. T. Chait and R. MacKinnon, Science 280:69-77, 1998.
17. H. R. Leuchtag, Biophys. J. 62:22-24, 1992.
18. H. Hasegawa, W. Skach, O. Baker, M. C. Calayag, V. Lingappa and A. S. Verkman, Science 258:1477, 1992.
19. Hille, 89.
20. Mark S. P. Sansom, Indira H. Srivastava, Kishani M. Ranatunga and Graham R. Smith, TIBS 25:368-374, 2000.
21. M. E. Green and J. Lewis, Biophys. J. 59:419-426, 1991; M. E. Green and J. Lu, Colloid and Interface Sci. 171:117-126, 1995; J. Lu and M. E. Green, Prog. Colloid Polym. Sci. 103:121-129, 1997; J. Lu, J. Yin and M. E. Green, Ferroelectrics 220:249-271, 1999; Alla Sapronova, Vladimir S. Bystrov and Michael E. Green, Frontiers in Bioscience 8:1356-1370, 2003.
22. S. Nakajima, S. Iwasaki and K. Obata, J. Gen. Physiol. 46:97-115, 1962.
23. C. M. Armstrong, Physiol. Rev. 61:644-683, 1981.
24. Hille, 326.
25. Ronald Pethig, Dielectric and Electronic Properties of Biological Materials, John Wiley, Hoboken, NJ, 1979.
26. R. B. Parlin and H. Eyring, in Ion Transport across Membranes, edited by H. T. Clarke and D. Nachmansohn, Academic, New York, 1954, 103-118.
27. Henry Eyring and Dan W. Urry, in Theoretical and Mathematical Biology, edited by Talbot H. Waterman and Harold J. Morowitz, Blaisdell, New York, 1965, 57-95.
28. F. Bezanilla and C. M. Armstrong, J. Gen. Physiol. 70:549-566, 1977; C. M. Armstrong and F. Bezanilla, J. Gen. Physiol. 70:567-590, 1977.
29. Hille, 629; C. M. Armstrong and F. Bezanilla, J. Gen. Physiol. 70:567-590, 1977.
30. T. Hoshi, W. N. Zagotta and R. W. Aldrich, Science 250:533-538, 1990; W. N. Zagotta, T. Hoshi and R. W. Aldrich, Science 250:568-571, 1990.
31. I. Haiduc and F. T. Edelmann, Supramolecular Organometallic Chemistry, Wiley-VCH, Weinheim, 1999, 1-26.
32. Ernst Grell, Theodor Funck and Frieder Eggers, in Membranes: A Series of Advances, edited by George Eisenman, Marcel Dekker, New York, 1975, 1-126.
33. T. Saji and I. Kinoshita, J. Chem. Soc., Chem. Communic. 1986:716-717.
34. Haiduc and Edelmann, 53.
35. Haiduc and Edelmann, 371.
36. P. Powell, Principles of Organometallic Chemistry, Second Edition, Chapman and Hall, London, 191.
37. R. E. Dinnebier, U. Behrens and F. Olbrich, Organometallics 16:3855, 1997; Haiduc and Edelmann, 429.
38. A. S. Davydov, Solitons in Molecular Systems, D. Reidel Publishing Co., Dordrecht, 1985, 78-91.
39. J. H. Schön, Ch. Kloc, B. Batlogg, Science 293:2432-2434, 2001; 10.1126/science.1064773.
40. R. Buckminster Fuller, Critical Path, St. Martins Press, 1981.
41. L. Onsager, in The Neurosciences, edited by C. G. Quarton et al., Rockefeller University, New York, 1967, 75-79.
42. B. W. Holland, in Cooperative Phenomena, edited by H. Haken and M. Wagner, Springer-Verlag, New York, 1973, 404-412.
43. Donald C. Chang, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr., and H. R. Leuchtag, Plenum, New York, 1983, 227-254. With kind permission of Springer Science and Business Media.
44. J. Metuzals and I. Tasaki, J. Cell Biol. 78:597-621, 1978.
45. Harvey M. Fishman, Biophys. J. 35:249-255, 1981.
46. I. Tasaki, I. Singer and T. Takenaka, J. Gen. Physiol. 48:1095-1123, 1965.
47. L. Bass and W. J. Moore, in Structural Chemistry and Molecular Biology, edited by A. Rich and N. Davidson, W. H. Freeman, San Francisco, 1968, 356-369.
48. See, e.g., pages 43-47 of P. Läuger and B. Neumcke, in Membranes, vol. 2, edited by G. Eisenman, Marcel Dekker, Inc., New York, 1-59.
49. Alla Sapronova, Vladimir S. Bystrov and Michael E. Green, J. Molec. Struct. (Theochem) 630:297-307, 2003.
50. C. L. Schauf, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 347-363.
51. L. Y. Wei, Bull. Math. Biophys. 31:39-58, 1969; Ann. N. Y. Acad. Sci. 227:285-293, 1974.
52. H. Sato, I. Tasaki, E. Carbone and M. Hallett, J. Mechanochem. Cell Motility 2:209-217, 1973; I. Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, New York, 1982.
53. K. Iwasa, I. Tasaki and R. C. Gibbons, Science 210:338-339, 1980; I. Tasaki and K. Iwasa, Japan. J. Physiol. 32:69-81, 1982; I. Tasaki, Physiology and Electrochemistry of Nerve Fibers, Academic, New York, 1982; Ferroelectrics 220:305-316, 1999.
54. G. Baumann and P. Mueller, J. Supramolec. Struct. 2:538-557, 1974.
55. G. Baumann, Math. Biosci. 46:107-115, 1979.
56. A. Mauro, Biophys. J. 2:179-198; H. G. L. Coster, E. P. George and R. Simons, Biophys. J. 9:666-684, 1969; G. Adam, J. Membrane Biol. 3:291-312, 1970.
57. W. Schwarz, Pflügers Arch. 382:27-34, 1979; Pradip Das and W. H. Schwarz, Phys. Rev. E 51:3588-3612, 1995.
58. H. Fröhlich, in Coherent Excitations in Biological Systems, edited by H. Fröhlich and F. Kremer, Springer, Berlin, 1983, 1-5.
59. S. E. Bresler and V. M. Bresler, Dokl. Akad. Nauk SSSR 214:936-939, 1974; S. E. Bresler, Sov. Phys. Usp. 18(1):62-73, 1975.
60. K. Larsson and I. Lundström, in Lyotropic Liquid Crystals and the Structure of Biomembranes, edited by Stig Friberg, American Chemical Society, Washington, DC, 1976, 43-70.
CHAPTER 15
ORDER FROM DISORDER
David Ruelle1 has written, “Scientists know how hard it is to understand simple phenomena like the boiling and freezing of water, and they are not too astonished to find that many questions related to the ... functioning of the brain ... are for the time being beyond our understanding.” Wouldn’t it be an amazing example of serendipity if the two questions were, at root, the same questions? If the conformational transition of a voltage-sensitive ion channel were really a kind of phase transition? Such ideas were already proposed by Ichiji Tasaki for the excitable membrane in the 1960s.2 We have seen that the concept of a mechanical gate opening to admit ions into a preformed pore is not only inadequate but also inappropriate to the molecular scale of a voltage-sensitive ion channel. An effort to provide a viable alternative will require a more sophisticated approach, based on contemporary physical concepts. This chapter is intended to review some of the background needed for such an approach. In Chapter 6 we discussed critical phenomena, such as the disappearance of any physical distinction between the liquid phase and the gas phase of water at its critical point. The densities of liquid and vapor become equal, and the latent heat of vaporization vanishes. In this chapter we will explore the subject of critical phenomena in more detail. In open systems a type of transition takes place that leads to new kinds of structures, based on dissipative regimes. We will apply these ideas to a number of phenomena observed in ion channels, such as power laws, 1/f noise and threshold behavior. 1. COMPLEXITY, CRITICAL PHENOMENA AND POWER LAWS The laws of physics are simple, even though mathematical solutions are often difficult to work out. Why then is nature so complex? 
It has been proposed that the answer to this question is the tendency of large systems with many components to evolve into a poised, critical state, far out of balance, in which minor disturbances lead to avalanches of all sizes. Anomalies of physical quantities caused by an increase in fluctuations at some transitions are called critical phenomena. These anomalies, such as the disappearance of any physical distinction between the liquid phase and the gas phase of water at its critical point, arise in many physical systems, including ferromagnetism,
superconductivity and ferroelectricity. Below a certain critical temperature Tc, the configuration of these systems takes on a certain order, which is lost above Tc. A quantity that is zero above the transition temperature and nonzero below it is called an order parameter. As a system approaches its critical temperature, its relaxation time increases anomalously; this phenomenon is called critical slowing down.3 According to a model developed by Per Bak, changes of systems often occur through catastrophic events rather than following a smooth path. Dynamic interactions among individual elements of the system have been found to establish a self-organized critical state.4 A simple experiment can illustrate self-organized criticality. Suppose you gradually drop sand onto a table. A sandpile grows steadily at first, but as it becomes steeper, little sandslides begin to appear. As the pile steepens, the slides become larger. Eventually there are slides of all sizes, with some spanning the entire pile. Now that the system is far out of balance, its behavior is no longer understandable in terms of the interactions between the grains. The avalanches form a dynamic of their own, which can be understood only from a holistic description of the properties of the entire pile. The sandpile's behavior is no longer evident from a description of the behavior of its individual grains: it has become a complex system. 1.1. The emergence of complexity The search for the origin of complexity has recently emerged as an active science, as we noted in Chapter 1. But can results of such generality be helpful? Doesn't each science work within its own domain? Yes, but emergent properties appear, in which quality arises from quantity. Chemical behavior arises from the laws of thermodynamics and quantum mechanics applied to the structure of atoms; physiology depends on physics and chemistry. Systems with large variability are defined as complex.
Variability exists on a wide range of length scales, with new surprises appearing at each scale. How does the variability of the living world, with its huge number of different species, arise out of simple laws? Complexity cannot explain any detailed fact of nature. Variability precludes the possibility of condensing observations into a small number of equations. A theory of complexity can explain why variability arises and what typical patterns may emerge, but it cannot predict a particular outcome of a system. Traditionally, science has dealt only with causes and effects. Finding the cause of a particular pattern in nature gives us the ability to do something about it. But we give up that ability when statistics is used as an explanation and no detailed explanations beyond it are sought. This limitation may be due to the methods used, such as in statistical mechanics, where we give up the possibility of calculating the detailed motions of the particles. It may be due to the sensitivity of a system to minor perturbations, which amplify to affect the outcome strongly, as in chaos theory. Or it may be due to the quantum nature at the basis of all physics, which has uncertainty built into the laws themselves. We have to decide in each case whether a statistical or a detailed approach is the more appropriate one.
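The sandpile experiment described in Section 1 can be imitated in a few lines of code. The sketch below is a minimal Bak–Tang–Wiesenfeld-style model; the grid size, number of grains and toppling threshold are arbitrary illustrative choices, not values taken from the text:

```python
import random

def sandpile(n=20, grains=5000, seed=1):
    """Drop grains one at a time onto an n x n grid; any site holding
    4 or more grains topples, sending one grain to each neighbor
    (grains falling off the edge are lost). Returns the avalanche
    size (number of topplings) triggered by each dropped grain."""
    random.seed(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(n), random.randrange(n)
        z[i][j] += 1
        size = 0
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            if z[x][y] < 4:
                continue          # already relaxed by an earlier pop
            z[x][y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    z[u][v] += 1
                    if z[u][v] >= 4:
                        stack.append((u, v))
        sizes.append(size)
    return sizes

sizes = sandpile()
big = [s for s in sizes if s > 0]
print(f"avalanches: {len(big)}, largest: {max(big)} topplings")
```

Once the pile reaches its critical state, the recorded sizes span many orders of magnitude, with small avalanches common and system-spanning ones rare, just as the text describes.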
1.2. Power laws and scaling in physical statistics
Many quantitative relations can be described by an equation in which one variable is proportional to a power of another. For example, Johannes Kepler related the orbital period T of a planet (its year) to its mean radius R from the Sun,

T ∝ R^(3/2).    (1.1)

Another example of a power law is the wave velocity c of shallow-water waves as a function of the depth h of the water,

c ∝ h^(1/2).    (1.2)

In the two examples given, the powers are rational fractions, 3/2 and 1/2 respectively. In some phenomena, power laws have been found in which the exponent is not a rational fraction, at least in so far as is presently known. In physiology, relative growth and the effect of stimulus magnitude on the sensations produced obey power laws.5 Measurements of the basal metabolic rate P_met (in kg/day) of animals of various species as a function of their body mass m_b (in kg) yield the allometric scaling law6

P_met ∝ m_b^(0.75).    (1.3)
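Kepler's power law can be checked directly from orbital data: on a log-log plot the planets fall on a line of slope 3/2. A minimal least-squares sketch, using standard semimajor axes and periods:

```python
import math

# Semimajor axis R (AU) and orbital period T (yr) for six planets,
# standard astronomical values (Mercury through Saturn).
R = [0.387, 0.723, 1.000, 1.524, 5.203, 9.537]
T = [0.241, 0.615, 1.000, 1.881, 11.862, 29.457]

# A power law T ~ R^p appears as a straight line of slope p on a
# log-log plot; fit the slope by ordinary least squares.
x = [math.log(r) for r in R]
y = [math.log(t) for t in T]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
print(f"fitted exponent: {slope:.3f}")  # very close to Kepler's 3/2
```

The same log-log fitting recipe applies to any of the power laws in this section, which is why linearity on a log-log plot is taken as the signature of a power law.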
Consider the standard problem of a random walk, in which a "drunken walker" steps out in a random direction from her present position; each of her following steps is again in a random direction. The distance she attains from her starting position after N steps will increase slowly. For the average of an ensemble of such walks when N is large, the distance R is given by

R ∝ N^(1/2).    (1.4)

Again, the power is a rational fraction, 1/2. Now let us restrict the allowable positions to those not previously occupied by our walker. This self-avoiding walk is an analogue for the growth of a polymer, since the newly added molecule cannot grow into a space already occupied by the existing polymer chain. This restriction biases the distribution of walks toward greater distances from the origin. A law that has been found to apply to this case, both theoretically and experimentally, is

R ∝ N^(0.588)    (1.5)

as N becomes large. Here the exponent does not appear to be a simple rational fraction. Laws such as Equations 1.3 to 1.5 are called scaling laws. We saw an example of scaling laws in the equations of classical electrodiffusion, Equation 6.4 of Chapter 8, where the integer powers of -2, -1, 0, 1, 2
and 3 appeared in the scaling parameter for the variables t, z, V, E, N and J, respectively. 1.3. Universality Another power law in statistical physics describes the coexistence of a liquid and a gas phase. These phases, as in boiling water, may coexist below a critical or transition temperature Tc; see Figure 5.2 of Chapter 5. A portion of the phase diagram of temperature versus density ρ at constant pressure is sketched in Figure 15.1. The coexistence curve forms two branches, above and below ρc, the density at Tc. Below Tc the system cannot pass from the gas phase to the liquid phase without passing through a regime of mixed gas and liquid. At a temperature above Tc, on the other hand, the transition from gas to liquid is continuous.7
Figure 15.1 Phase diagram, temperature T versus density ρ, of a fluid at fixed pressure. From Lectures on Phase Transitions and the Renormalization Group, by Nigel Goldenfeld. Reprinted by permission of Westview Press, a member of Perseus Books Group.
What is the shape of this coexistence curve near the critical point? If ρ- and ρ+ are the values of the density at the right and left branches of the coexistence curve, their difference is given by the power law

ρ- − ρ+ ∝ (Tc − T)^β.    (1.6)
The power to which the temperature difference is raised is an example of a critical exponent. The number 0.327 ± 0.006 was measured for sulfur hexafluoride, but surprisingly this critical exponent is, within the uncertainty of the measurement, the same for different substances. For helium-3, for example, it is 0.321 ± 0.006. Such critical exponents occur also in magnetism, heat capacity and transport properties such as electrical conductivity. Remarkably, the same values occur in the critical exponents of widely different systems. For example, the onset of magnetization M in magnetic systems called Ising ferromagnets, described in Section 4.3 of this chapter, is given by

M ∝ (Tc − T)^β.    (1.7)
The exponent, again not easily identified with a simple rational fraction, appears within experimental precision to be the same as that in Equation 1.6. The fact that different systems appear to exhibit the same set of critical exponents is called universality. The critical exponents turn out to be related by scaling laws. The study of these has led physicists to a new type of analysis called the renormalization group.8 1.4. Emergent phenomena Let us extend our earlier discussion of sandpiles to a much larger complex system, the crust of the Earth. Cracks in the crust propagate catastrophically, so that one part of the system can affect many others. Although we read reports of only the largest earthquakes, nature does not put large and small earthquakes into different categories: All earthquakes follow a simple distribution law, the Gutenberg–Richter law. According to this law, when the logarithm of the number of earthquakes of a given magnitude is plotted against their magnitude on the logarithmic Richter scale, the result is a straight line. From the properties of logarithms we know that linearity on a log-log plot indicates an underlying power law, in which some variable N depends on another variable s as

N ∝ s^(−τ).    (1.8)

Taking the logarithm of both sides of Equation 1.8,

log N = −τ log s + const,    (1.9)

we see that a linear relation holds, with −τ as the slope of the line. When τ = 1, N is inversely proportional to s. It follows from the Gutenberg–Richter law that the relationship between the number of earthquakes and the energy they release is a power law. This law also describes the behavior of sandpiles in laboratory and computer model studies. In computational studies, continuing the process of dropping “computer sand” onto a simulated pile results in the formation of many avalanches. These avalanches come in all sizes, with the smallest occurring most frequently and the largest only rarely.
The distribution of avalanche sizes N(s), and the lifetimes of the avalanches, follow power laws similar to Equation 1.8. Remarkably, similar power laws appear in economics, in the frequency of words in works of literature and in the extinctions of biological species. These distributions all follow a smooth pattern that forms a linear relation on a log-log plot. Like the patterns of catastrophes discussed above, these phenomena are emergent; they are not obvious consequences of underlying dynamical rules.
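The log-log fitting procedure of Equation 1.9 can be tried on simulated data. The sketch below generates ensembles of two-dimensional random walks and recovers the exponent 1/2 of Equation 1.4 as the slope of log R versus log N; the ensemble and step counts are arbitrary choices:

```python
import math
import random

random.seed(0)

def rms_distance(n_steps, n_walks=2000):
    """RMS end-to-end distance of 2-D unit-step random walks."""
    total = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_walks)

# Measure R at several walk lengths, then fit the log-log slope.
Ns = [16, 32, 64, 128, 256]
Rs = [rms_distance(n) for n in Ns]
lx = [math.log(n) for n in Ns]
ly = [math.log(r) for r in Rs]
m = len(lx)
mx, my = sum(lx) / m, sum(ly) / m
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
print(f"estimated exponent: {slope:.2f}")  # near 1/2, as in Eq. 1.4
```

The same slope-extraction idea is what turns a straight line on a Gutenberg–Richter plot into an estimate of the exponent τ.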
2. FRACTALS
Phenomena that exhibit the same behavior at all scales exhibit a relationship described by B. B. Mandelbrot as fractal. These include earthquakes, cotton prices and species extinctions. 2.1. Self-similarity When a figure illustrating the behavior of a fractal is magnified, new features appear. While nonfractals may exhibit a characteristic scale, fractals exist over a range of sizes. Fractals exhibit scaling, in which the value of a property of the system depends on the resolution of the measurement. Fractals also exhibit self-similarity, so that a piece of an object looks like the entire object. Mathematical objects can be devised in which the similarity is exact, but more commonly the similarity is statistical rather than exact. Examples of statistical self-similarity are the dendritic patterns of neurons and the branching of arteries, veins and bronchi.9 The concept of self-similarity can help us resolve a riddle in genetics: How can the human genome, with only 10^5 genes, determine the structures of the 10^6 capillaries in the heart and the 10^11 neurons in the brain? We can explain the information disparity if we assume that the DNA, instead of determining structures, determines the rules that generate these structures. The repeated application of these rules then leads to the synthesis of many similar structures, with self-similar pieces at different resolutions.10 The phenomenon of self-similarity has been observed in currents passing through ion channels; see Chapter 19, Section 2.5. 2.2. Scaling and fractal dimension The value of a fractal property depends on the resolution used to make the measurement. Since the relation specifying the self-similarity determines how the smaller features relate to the larger ones, self-similarity determines the scaling relationship. To describe the scaling relationship we need the concept of fractal dimension. The dimension tells us how many new pieces we see when we focus to a higher resolution.
There are several ways to define fractal dimension; when the similarity is approximate but not exact, we use the capacity dimension. This is defined by counting N(r), the smallest number of “balls” of radius r required to cover an object entirely. Since the resolution is proportional to 1/r, we define the capacity dimension d as

d = log N(r) / log(1/r)    (2.1)

in the limit as the radius of the balls shrinks to zero. The fractal dimension can be
ORDER FROM DISORDER
335
determined from a scaling law. Equation 2.1 implies that the number of features measured at scale r is proportional to r^(-d). Fractal behavior can be seen, for example, by measuring the length of the coast of Norway with rulers of different sizes. Norway's coastline is broken by numerous fjords, which themselves are broken by smaller fjords, and so on. The length L of the coast may be measured by counting the number of square boxes of a certain size, r, needed to cover the coastline. The total length is the number of line segments, N(r), multiplied by the length r of each one. So L(r) = r N(r), which is proportional to r^(1-d). The log-log plot of L(r) versus r yields a straight line, which can be described by the power law

L(r) = A r^(1-d),    (2.2)

where A is a constant. The quantity d, the fractal dimension, has the value 1.52 for the Norway coastline.11 Thus the power to which r is raised in Equation 2.2 is -0.52. The coast of Britain is less wiggly, or less space-filling, with d = 1.25.12 Note that these fractal dimensions fall between the non-fractal dimensions of a line, 1, and a plane, 2. The coast of Norway can be said to be scale-free, in the sense that a part of a fjord at one scale looks like another part, or the whole fjord, at a different scale. This property is shared by clouds, mountains and galaxies. As we will see in Chapter 19, it is also shared by the distribution of openings of ion channels.

2.3 Fractals in time: 1/f noise

Just as these examples show self-similarity in space, there are also examples of statistical self-similarity in time. The fluctuations of the voltage across a cell membrane are self-similar when the pattern of large variations over long times is statistically repeated in the pattern of smaller variations over briefer times.13 As we saw in Chapter 11, this is 1/f noise. Observations of 1/f noise have also been made in systems as diverse as light from quasars, the flow of the Nile and highway traffic. Even variations in music have been shown to have a 1/f spectrum.
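The box-counting recipe behind Equations 2.1 and 2.2 is easy to demonstrate on a mathematical fractal with a known dimension. The short sketch below (an illustration of mine, not from the text; all function names are invented) box-counts the middle-thirds Cantor set, whose capacity dimension is exactly ln 2/ln 3 ≈ 0.63:

```python
import math

def cantor_points(level):
    """Integer left endpoints (in units of 3**-level) of the middle-thirds Cantor set."""
    pts = [0]
    for _ in range(level):
        # refine the grid by 3 and keep the first and last thirds of each piece
        pts = [3 * p + q for p in pts for q in (0, 2)]
    return pts

def capacity_dimension(level, kmax):
    """Least-squares slope of ln N(r) versus ln(1/r), as in Equation 2.1."""
    pts = cantor_points(level)
    xs, ys = [], []
    for k in range(1, kmax + 1):
        n_boxes = len({p // 3 ** (level - k) for p in pts})  # N(r) at r = 3**-k
        xs.append(k * math.log(3))                           # ln(1/r)
        ys.append(math.log(n_boxes))                         # ln N(r)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

d = capacity_dimension(level=10, kmax=7)
print(round(d, 3))  # prints 0.631, i.e. ln 2 / ln 3
```

Because boxes of size 3^-k line up exactly with the set's construction, the fitted slope reproduces ln 2/ln 3 to machine precision; for empirical data such as a digitized coastline, the points scatter about the fitted line instead.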
When the intensity of light from a quasar is plotted against time, the trace looks like a mountain landscape, with bumps of all sizes superimposed.14 Spectra with a 1/f^α dependence, with an exponent α between 0 and 2, are sometimes also called 1/f noise. The spectrum is a superposition of periodic signals of all frequencies, as in white noise, the hiss of a radio between stations. However, 1/f noise differs from white noise in that the signal intensity in 1/f noise varies inversely as the frequency, while in the white-noise case, α = 0, the intensity is independent of frequency and there is no correlation between the signal at one moment and that at the next. 1/f noise is a kind of fractal distribution in time. It is significant that systems in equilibrium do not exhibit 1/f or fractal behavior.15 Because of the ubiquity of 1/f noise, we would expect it to have a general, robust explanation. Bak and his collaborators came to the conclusion that it has to be a cooperative phenomenon in which elements of a large system work together. Systems
exhibiting 1/f noise have many degrees of freedom and are open systems, with energy being supplied from outside.

2.4. Fractal transport in superionic conductors

Ion transport in media with fractal geometry, called fractal transport, is considered in percolation theory; see Chapter 18. Fractal behavior in superionic conductors may be due to the ionic motion, to static disorder of media such as polymer or amorphous electrolytes, or to macroscopic disorder, as in ceramics or composite structures. Whether fractal effects appear in the presence of disorder depends on the physical phenomenon of interest and the type and degree of disorder.16 An example of fractal conductivity with restricted dimensionality is the motion of ammonium ions in non-stoichiometric alumina; see Section 8.1 of Chapter 6. The ion motion shows a remarkable quasi-one-dimensional microscopic diffusion, due to the fact that mobile ammonium ions can only diffuse along frontier “lines” between extended defects of blocked sites. Fractal conductivity is observed in dispersed ionic conductors and mixed-ion solid electrolytes. An example of the latter is seen with mixtures of alkali ions in crystals or glasses. The conductivity of Na_xK_{1-x} β-alumina exhibits a minimum around x = 0.3 (70% K), as seen in Figure 15.2.17
Figure 15.2. Log-conductivity isotherms of Na/K aluminas as functions of alkali composition. Solid lines are data from Bruce and Ingram, 1983; the dashed line is from Chandrasekhar and Foster, 1983. From Sapoval et al., 1989.
Dispersed ionic conductors are mixed-phase systems in which the addition of insulating particles produces a significant increase in conductivity. This surprising phenomenon has been interpreted as due to space-charge effects along the interfaces between the two phases.

2.5. Self-organized criticality

The answer to the search for the origin of complexity cannot be found in systems in equilibrium; these do not exhibit large catastrophes, fractals or 1/f noise, except under very specific circumstances, such as the critical behavior at a phase transition. We do not find evolution in balanced systems. The complexity of fractals is not robust, and so cannot explain the ubiquitous occurrence of complex behavior in nature. Nor is chaos in itself sufficient to produce complex outcomes. Models of chaos do indeed produce order at some critical point, when a parameter has been tuned just so; but in general, chaotic systems produce white noise rather than 1/f noise. Self-organized critical systems, by contrast, evolve without interference from an outside agent. Just as sand must fall for a long time before the sandpile becomes steep enough to form avalanches, the Earth must have existed for a long time before developing earthquakes, and organic molecules must have existed for a long time before developing the punctuated equilibria of evolution. The toppling of sand in a sandpile is a canonical example of self-organized criticality; it shows a mechanism by which quantity becomes quality. The sandpile is an open dynamical system: sand grains are added steadily, and the height of the grains represents their potential energy. This is converted to kinetic energy by the toppling of grains, and finally to heat when the grains come to rest. The flow of heat spreads the energy through the system.
If sand grains are scattered on an empty table at randomly selected points and this process is repeated long enough, the sandpile will organize itself into a highly susceptible state, at which the addition of a single grain starts an avalanche. Until the system has arrived at a critical state, the response to small disturbances is small, and the behavior of the sandpile follows a predictable pattern. However, when the system is critical, a single grain, dropped at the right place, can lead to a massive flow of sand. Thus contingency is relevant at the self-organized critical state, and the ability to forecast events has been lost. It would be futile to follow the trajectories of the particles in the hope of predicting the future. Looking ahead, we might surmise that the opening of an ion channel is something like the onset of an avalanche. The ion concentration gradients, temperature and electric field create conditions such that a fluctuation can initiate a massive flow of ions across the membrane through the specialized structure of the molecule. The duration of this ionic avalanche will depend on the local conditions, which have been altered by the avalanche itself. In the language of the mechanical model, we may call the initiation of an avalanche the “opening” and its end the “closing” of the channel.
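The sandpile dynamics described above can be simulated with the Bak–Tang–Wiesenfeld toppling rule: a site holding four grains topples, sending one grain to each neighbor, and grains reaching the edge of the table fall off. The following is a minimal illustrative sketch of mine, not from the text; the grid size, random seed and drop counts are arbitrary choices.

```python
import random

def relax(grid, n, i, j):
    """Topple sites holding 4 or more grains; return the avalanche size."""
    size = 0
    stack = [(i, j)]
    while stack:
        i, j = stack.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4                      # one toppling: a grain to each neighbor
        size += 1
        if grid[i][j] >= 4:
            stack.append((i, j))             # still unstable: topple again later
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:
                grid[a][b] += 1
                if grid[a][b] >= 4:
                    stack.append((a, b))
            # else: the grain falls off the edge of the table and is lost
    return size

random.seed(2)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = []
for _ in range(20000):                       # slow, steady driving
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    sizes.append(relax(grid, n, i, j))

late = sizes[10000:]                         # after the pile has organized itself
print(max(late) > 10, min(late) == 0)        # avalanches of many sizes occur
```

Early in the run most drops cause little or no toppling; once the pile has self-organized, the same single-grain perturbation may trigger anything from no topplings at all to a system-spanning avalanche.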
3. ORDER, DISORDER AND COOPERATIVE BEHAVIOR
Why does matter spontaneously form macroscopic ordered structures, such as crystals, sound waves and action potentials? These examples of long-range order cannot easily be extrapolated from knowledge of the underlying microscopic system. To understand the spontaneous formation of order in a complex system, we have to adopt a phenomenological approach, as no general microscopic theory is available that will predict such macroscopic structures.18 Concepts of order and disorder emerge from a microscopic system only after we have applied a model appropriate for bridging the gap between the microscopic and macroscopic realms. Our present understanding has been influenced by the study of superfluids, which provided the insight that order is the coherence of the fundamental collective state, while disorder is a gas of quasiparticles distributed over excited collective states.

3.1. Temperature and entropy

Let us take a fresh look at the concepts of temperature and entropy from the viewpoint of statistical thermodynamics. The widespread belief that heat is only energy in disordered form and that temperature is only the degree of mean agitation of the molecules, while adequate for a simple model such as a perfect gas, does not provide an adequate understanding of these concepts. A thermodynamic equation that connects entropy S, internal energy U, and absolute temperature T of a body in internal equilibrium is given by
1/T = (∂S/∂U)_H,    (3.1)

where the enthalpy, H = U + PV, is held constant. The general relationship shown by this equation, plotted in Figure 15.3, is free of any reference to the idea of thermal agitation as the cause of disorder in matter.19 The perfect gas, which gives us relationships between temperature, pressure and entropy, is a useful model in thermodynamics. Its limitation, however, is that its internal energy is entirely kinetic and therefore reflects the molecular disorder of the gas, as do the entropy and temperature. The equipartition theorem of kinetic energy, which states that the mean kinetic energy is ½kT per degree of freedom, is valid for the perfect gas and for systems in which the potential energy is a quadratic function of the displacement, but is not valid for a number of other systems for which a temperature can be measured. A perfect gas obeys Charles's law, according to which the specific volume of a gas at constant pressure is proportional to the absolute temperature. This implies zero volume at zero temperature; since a volume cannot be negative, negative temperatures would appear to be impossible. However, the perfect-gas model is too restrictive, and we shall see that there are systems for which negative temperatures exist.
Figure 15.3. Entropy as a function of internal energy. When entropy decreases with increasing energy, temperature is negative. From Finkelstein, 1969.
3.2. The perfect spin gas

Let us consider a magnetic needle placed into an external magnetic field H. The needle, with a magnetic dipole moment μ, will experience a torque that tends to line it up with the field. If it is able to dissipate energy as heat, it will come to thermodynamic equilibrium. Assuming now that there is an array of these needles in the field, and that they do not interact with each other, they will line up with their mean orientations along the field, but with thermal agitations about this mean that depend on the temperature. In this way we see the formation of magnetic order, a form of energy that is in competition with thermal disorder. The orientation of the magnets at equilibrium is determined by two forms of energy in competition: magnetic field energy μH and thermal energy kT.
Figure 15.4. Energy levels of a magnetic dipole in a magnetic field. From Careri, 1984.
Let us now go from a classical magnet to the case of a particle, say an electron. The competition between order and disorder is still present although, because of the low value of the magnetic energy, the two energies become equal at the low temperature of
one kelvin, and disorder prevails at room temperature. A system in which the spins respond to the external field but do not interact with each other is called a perfect spin gas. In the analogy, the role played by pressure in the molecular gas is played by the magnetic field in the spin gas. Because the atom in the spin gas is a quantum system, the magnetic dipole can take on only certain discrete orientations, a phenomenon called space quantization. Because of the limited number of orientations allowed to the spinning particle, its energy levels form a finite set. The ith energy level is given by the equation E_i = iμH, where μ is a constant. We will assume a system with six energy levels separated by steps of μH; see Figure 15.4. The distribution of the spins among the energy levels is given by Boltzmann's law, according to which the number of spins occupying the ith level, n_i, is proportional to exp(-E_i/kT). The number of spins n_i of energy E_i drops as the energy rises; see Figure 15.5.
Figure 15.5. Boltzmann distribution of a perfect spin gas. From Careri, 1984.
The sum of these populations equals the total number of particles, N = Σ_i n_i. Taking the logarithm of n_i, we see that

ln n_i = constant - E_i/kT.    (3.2)

This model can be diagrammed by plotting a histogram of the population logarithms versus energy. If the Boltzmann distribution law holds, the ends of the segments of the population logarithms form a straight line with slope determined by the temperature, as shown in Figure 15.6. A steeper slope, tan α, indicates a higher temperature. Let us consider two limiting cases. At zero temperature, α = 0 and only the lowest level can be occupied, so there is maximum order. At infinite temperature, all levels are equally occupied, α = 90°, so there is maximum disorder. The temperature is a parameter characterizing the statistical distribution at equilibrium. Outside of equilibrium, Boltzmann's law does not apply, and no temperature can be identified.
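Equation 3.2 can be turned around: given the populations of the levels, the temperature is recovered from the slope of ln n_i versus E_i. The sketch below is an illustration of mine (the level spacing is an arbitrary choice, and the function names are invented):

```python
import math

k = 1.380649e-23                # Boltzmann constant, J/K

def populations(levels, temperature):
    """Boltzmann occupation fractions of the levels (Equation 3.2)."""
    w = [math.exp(-E / (k * temperature)) for E in levels]
    z = sum(w)
    return [x / z for x in w]

def slope_temperature(levels, pops):
    """Recover T from the straight line ln n_i = constant - E_i/kT."""
    ys = [math.log(p) for p in pops]
    n = len(levels)
    mx, my = sum(levels) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(levels, ys))
             / sum((x - mx) ** 2 for x in levels))
    return -1.0 / (k * slope)

muH = 1.0e-21                   # J; an arbitrary illustrative level spacing
levels = [i * muH for i in range(6)]        # six levels, as in Figure 15.4
pops = populations(levels, 300.0)
print(round(slope_temperature(levels, pops), 6))  # prints 300.0
```

Because an equilibrium Boltzmann distribution makes the points exactly collinear, the fit returns the input temperature; for a non-equilibrium population the points would not lie on a line, and, as the text notes, no temperature could be identified.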
Figure 15.6. The temperature of a spin gas is proportional to the slope, tan α, of the energy-level distribution versus log-population. From Careri, 1984.
For a system not in equilibrium, Boltzmann's distribution law no longer applies, and the points on the diagram no longer lie along a straight line, so no temperature can be identified. Thus temperature is a quantity that characterizes the spread of the population distribution among energy levels in a system at thermodynamic equilibrium. Temperature reveals disorder but cannot measure it; only entropy can measure disorder.

3.3. Thermodynamic functions of a spin gas

The internal energy of the system is the sum, over the energy levels, of energy times occupation number, E = Σ_i E_i n_i. The differential of this energy, dE, can be divided into work and heat contributions:

dE = Σ_i n_i dE_i + Σ_i E_i dn_i = dW + dQ.    (3.3)

Here we can identify dW as the work done by the external field H when the energy levels are varied:

dW = Σ_i n_i dE_i.    (3.4)
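The decomposition in Equations 3.3 and 3.4 can be checked numerically for the six-level spin gas: the level-shift term is the work dW, and the remaining population-shift term, Σ E_i dn_i, is the heat dQ discussed in the next paragraph. In this sketch of mine (reduced units with k = 1 and an arbitrary level spacing), the field is increased slightly at constant temperature:

```python
import math

k = 1.0                                    # Boltzmann constant, reduced units

def boltzmann(levels, T):
    w = [math.exp(-E / (k * T)) for E in levels]
    z = sum(w)
    return [x / z for x in w]              # populations for N = 1

T, muH = 1.0, 0.5                          # arbitrary illustrative values
levels = [i * muH for i in range(6)]       # E_i = i*muH, as in Figure 15.4
pops = boltzmann(levels, T)

eps = 1e-6                                 # small isothermal field increase
levels2 = [E * (1 + eps) for E in levels]  # the level spacing widens
pops2 = boltzmann(levels2, T)

dE = (sum(p2 * E2 for p2, E2 in zip(pops2, levels2))
      - sum(p * E for p, E in zip(pops, levels)))
dW = sum(p * (E2 - E) for p, E, E2 in zip(pops, levels, levels2))  # Eq. 3.4
dQ = sum(E * (p2 - p) for E, p, p2 in zip(levels, pops, pops2))    # heat term

print(dW > 0, dQ < 0)                      # prints: True True — heat is removed
```

The work term comes out positive and the heat term negative: keeping the Boltzmann slope fixed at constant T while the levels widen forces population toward the lower levels, so heat must leave the system, in agreement with the argument in the text.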
Heat exchange, dQ, is necessary to change the population distribution at fixed energy levels. To shift populations downward in energy requires the removal of heat from the system. Since E is a state function, the energy levels and populations cannot be changed independently of one another. Only part of the internal energy of the system is free to be used in an isothermal process. The differential of free energy F at constant temperature is
dF = dE - T dS.    (3.5)
Since the thermal portion T dS of the internal energy must remain bound to the system, not all of the internal-energy variation can be exchanged with the outside as work. Let us now do work on the spin gas by increasing the magnetic field at constant temperature. The intervals between the levels will widen, which tends to increase the slope of the ln n_i versus E_i curve. Since the slope measures the temperature, that cannot happen in an isothermal process, so there must be a rearrangement of the populations, increasing the occupation numbers at the lower levels at the expense of those at the higher levels. We have seen that the internal energy partitions into a work term and a heat term, and that the heat term depends on the occupation distribution of the levels. So heat must be removed. This example illustrates that not all of the internal-energy variation can be exchanged with the outside in the form of work; the thermal portion T dS remains bound to the system. In some cases, it is possible for the slope of the ln n_i versus E_i line to be negative, which means that the system has a negative temperature, as shown in Figure 15.3. This has been demonstrated in experimental systems: the population inversion responsible for the operation of a laser is an example of a negative temperature. The distribution of sodium ions across an excitable membrane is another example of a population inversion; negative resistance can be interpreted as a result of negative temperature. The distribution of potassium ions is a population inversion in the opposite direction; see Table 4.1 of Chapter 4.

3.4. Spontaneous order in a real spin gas

In a real system of magnetic dipoles, the spins interact. Each dipole is affected by the magnetic fields due to the other dipoles. This will change the energy levels of the system, and consequently also the populations of the various levels. The exact solution of this problem is difficult.
By neglecting all but nearest-neighbor interactions the problem has been solved, but only in one and two dimensions. However, intuitive arguments can explain the emergence of spontaneous ordering in a real system below a critical temperature. The mean field of the magnetic dipoles is called the Weiss field. A magnetic dipole in the real system will line up in the field of its neighbors. This results in a lowering of the energy of the system, which is opposed by the thermal agitation. The internal field acts in the same way as an external field: It lowers the entropy of the system. At a certain temperature, the Curie temperature T_C, the internal magnetic field energy equals the disordered thermal energy, kT_C = μH. Below this temperature, the spins will be aligned; the resulting macroscopic field is referred to as the spontaneous magnetization M, the net magnetic moment per unit volume. Above T_C, the dipoles are randomly oriented, and so their mean value M is zero. The ordered phase below the Curie temperature is called the ferromagnetic phase, and the disordered phase above T_C, the paramagnetic phase; see Figure 15.7.20 If the system is cooled from just above the Curie temperature, a cooperative process occurs that lines up all the dipoles. Heat must be removed from the system to lower the entropy at the Curie temperature. A spontaneous formation of this type of magnetic order is called a ferromagnetic transition.
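The balance between the Weiss field and thermal agitation is captured by the standard mean-field self-consistency condition for two-state spins, m = tanh(T_C m/T). That equation is the textbook mean-field result rather than one derived in this chapter; the iteration below is an illustrative sketch with arbitrary units:

```python
import math

def weiss_magnetization(T, Tc, tol=1e-12):
    """Iterate the mean-field self-consistency condition m = tanh(Tc*m/T)."""
    m = 1.0                                # start from full alignment
    for _ in range(100000):
        m_new = math.tanh(Tc * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

Tc = 1.0                                   # Curie temperature, arbitrary units
below = weiss_magnetization(T=0.5, Tc=Tc)  # ferromagnetic: spontaneous M != 0
above = weiss_magnetization(T=1.5, Tc=Tc)  # paramagnetic: M relaxes to zero
print(round(below, 2), round(above, 2))    # prints 0.96 0.0
```

Below T_C the iteration settles at a nonzero spontaneous magnetization; above T_C the only stable solution is m = 0, the disordered paramagnetic phase.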
Figure 15.7. Phases in a magnetic system. From Careri, 1984.
The existence of the internal field is responsible for the occurrence of spontaneous magnetic ordering in ferromagnetism. A region of a ferromagnetic material in which the spins are ordered in one direction is called a ferromagnetic domain. If heat is removed, the phase boundary will move and the ferromagnetic domain will grow as the paramagnetic domain shrinks. A single crystal may have more than one domain. Ferromagnetism in the transition metals, such as iron, cobalt and nickel, is a complex quantum process, but a process similar to the one described occurs in paramagnetic salts at very low temperatures.

4. FLUCTUATIONS, STABILITY AND MACROSCOPIC TRANSITIONS

Complex systems, which are subject to fluctuations, cannot be adequately treated with a deterministic causal description. Statistical mechanics shows that, in the neighborhood of critical points, thermal fluctuations are amplified to attain a macroscopic level and drive the system to a new phase.21 The change of the system involves coherent behavior, which may result in an increase in spatial order.

4.1. Fluctuations and instabilities

One example of the evolution of order within constraints is the hydrodynamic instability that results from uniformly heating a thin horizontal layer of fluid from below. This Rayleigh–Bénard instability may occur in systems as diverse as water heating in a pan and the weather patterns of the Earth's atmosphere. These are open systems in a nonequilibrium steady state. When the temperature difference is small, a temperature gradient is maintained from bottom to top, and a density gradient from top to bottom. In this linear regime, heat is transferred uniformly by conduction. As the fluid at the bottom heats, its density decreases, reversing the normal configuration, in which the denser fluid lies at the bottom and is stable in the gravitational field. To place these lighter layers at
the top requires a break in the symmetry of the system. The establishment of a new configuration thus depends critically on the presence of fluctuations. Depending on the geometrical constraints, the new configuration may be a pattern of parallel rolls or hexagonal cells.22 Figure 15.8 shows a Bénard system, in which an upward thermal flux above a critical threshold has generated a system of parallel cylindrical vortices.23 The upward arrows indicate a rise of warmed, less dense fluid, and the downward arrows a fall of cooled, denser fluid.
Figure 15.8. A Bénard cell, with parallel cylindrical vortices. The z component of the velocity of a volume of fluid varies sinusoidally with the horizontal coordinate x. From Careri, 1984.
The macroscopic structure is strongly dependent on the external conditions. The vertical component of the local velocity of the fluid in the cylindrical vortices, v_z, is a sinusoidal function of the horizontal coordinate x, with a maximum value that depends on the temperature gradient. The cells of a Rayleigh–Bénard system are usually in motion, as seen in the weather patterns on Earth driven by temperature differences between the equatorial and polar regions. The convection cells are dissipative structures, dependent for their existence on the flow of energy. The transition from uniform conduction to synchronized convection cells is a nonequilibrium phase transition to macroscopic order. Similar coherent structures occur in chemical cycles, such as those of cellular metabolism.24 An analogous instability in which the gravitational field is replaced by an electric field, the electrohydrodynamic instability, represents a nonequilibrium steady state in which the temperature gradient is replaced by an electric potential gradient.

4.2. Convective and electrohydrodynamic instabilities

An electrohydrodynamic instability appears at well-defined voltages between electrodes in weakly conducting liquids. In cells with a dc field between planar electrodes, vortical motion appears as a result of the nonuniform distribution of space charge near one of the electrodes. Figure 15.9 shows the analogous formation of vortex structures in gravitational and electrohydrodynamic instabilities.25
Figure 15.9. Two types of instability vortices. (a) A convective instability due to a temperature gradient opposite to the (gravitational) field. Cold regions are labeled C and warm regions, W. (b) An electrohydrodynamic instability arising from unipolar charge injection into a weakly conducting liquid. From Blinov, 1983.
The electroconvective instability leads to a nonlinear current–voltage characteristic and an inhomogeneous distribution of space charge. Analysis of threshold voltages and other data suggests that the electroconvective instability arises in the electric diffusion layer.26 It will be recalled that an axonal membrane near the rest potential is in such a nonequilibrium steady state. The electrogenic response below threshold is the linear region, analogous to the steady temperature-gradient regime in the Rayleigh–Bénard case. The action potential, induced by a potential gradient above threshold, is a pair of toroidal vortices, a localized domain of an electrohydrodynamic instability constrained by the axon's cylindrical geometry to move along its axis. The selective permeation and circulation of ions replaces the hydrodynamic motion of the fluid as a whole; see Figure 4.5 of Chapter 4. It took the voltage clamp to overcome this instability in the laboratory, and even then it reasserts itself as the “abominable notch” mentioned in Chapter 4. Repetitive firing corresponds to the propagation of multiple convective cells. Electrohydrodynamic instabilities in liquid crystals will be discussed in Section 1.8 of Chapter 18.

4.3. Spin waves and quasiparticles

In the process of spontaneous ordering, the system will be very sensitive to fluctuations; if a fluctuation causes a nonzero field in some direction, all the spins will seek to line up with it. The order generated extends far beyond the immediate neighbors of the spin where the fluctuation originated; we refer to this phenomenon as long-range order. When an array of interacting spins is arranged with all dipoles pointing in the same direction, the energy of the array is at a minimum, so it is convenient to call this the ground state of the system. Let us consider the ways in which disorder can be introduced into this ordered system. To simplify the model down to its bare bones, we
suppose the spins have only two allowed states, up and down; models of this type are called Ising models. In the ground state, all the spins are up (↑↑↑↑...↑), let us say. Then the smallest amount of disorder possible is to have one spin down and the rest up (↑↑↓↑...↑). By the laws of quantum mechanics, this disturbance must be quantized. This quantum of disorder in a highly ordered system can travel through the system like a particle, so we call it a quasiparticle. A general result of quantum mechanics is that the energy at equilibrium can be lowered by spreading out the particles, so reducing the energy gradient. Therefore we can lower the energy of a distribution of these disturbances by spreading them apart. The energy of the quasiparticle, by Planck's law E = hf, must be proportional to the corresponding frequency and inversely proportional to the wavelength. This can be illustrated by a magnetic dipole that precesses around the direction of the magnetic field. By analogy to the photon, the quasiparticle of the magnetic system is called a magnon. The quasiparticle concept allows us to describe quantized deviations from the perfect order that prevails in the ground state. This concept is useful only in so far as the waves do not overlap, so that the wave–particle duality applies. Then the magnons can be visualized as traveling like molecules of a rarefied gas. Just as the velocity of a gas molecule is a vector, the magnon, or any quasiparticle, can be described by a vector, the wavevector k, the magnitude of which is the wavenumber 2π/λ, proportional to the number of waves per unit length. At temperature T, the average magnon frequency is kT/h. A useful way to look at a quasiparticle is as a correlation between thermal fluctuations. Above zero temperature there must be disorder. In the absence of magnons, this disorder is randomly distributed, but a space–time correlation between the thermal fluctuations can be described as a magnon wave, a collective motion of the dipoles.

4.4. The phonon gas

The magnon system we have been examining provides an excellent model for a number of analogous systems. One of these is an atomic crystal. At low temperatures, the crystal will be highly ordered, with every atom at a point of a geometrical lattice. Even at zero temperature, however, the atoms cannot be exactly at rest at their lattice points, because the uncertainty principle of quantum mechanics requires a residual motion associated with the zero-point energy. As the temperature rises, thermal energy introduces a modicum of incoherence into the atomic motions, and the atoms will typically be found at some distance from their ideal lattice positions. Because of the forces between atoms, these motions will be correlated, and, of course, they must be quantized. These vibrational quanta travel through the solid at a speed close to that of sound. Thus the thermal motions can be described as the movement of quasiparticles, which are called phonons. In the wave–particle duality discussed in Chapter 1, just as the photon is the particle of the electromagnetic wave and the magnon is the quasiparticle of the spin wave, the phonon is the quasiparticle of the elastic wave. Phonons are not localized, but are distributed throughout the atoms of a crystal as a collective form of energy.
The thermal energy of a solid can be described as a phonon gas: an incoherent, disordered quantum system. At low temperature, when the internal energy is low, the low-frequency, long-wavelength phonons have the greatest statistical weight. A single phonon at absolute zero represents order; phonons spreading out in all directions, with different wavelengths and phases, as seen at higher temperatures, represent disorder. Since the displacements of an ensemble of atoms in a solid can be either perpendicular to the direction of wave propagation or parallel to it, phonons are of two types, transverse and longitudinal. As we saw, phonons are analogous to photons; however, they also interact with photons in the case of ionic solids, which are held together by ionic bonds. The oscillations of the ions, driven by the phonons and at the same frequency, generate electromagnetic waves in the infrared. Thus the thermal energy of the phonon gas can be radiated into space as a photon gas, or “radiant heat.” Absorption of infrared radiation by matter is this process in reverse.

4.5. The spontaneous ordering of matter

In cooling a liquid, in which the atoms are close together but disorganized, we reach a critical temperature, the melting point, at which an ordered crystal is formed: the liquid freezes. This phase transition is clearly a cooperative phenomenon, in which all the atoms participate. As in the magnetic system, order may nucleate independently at separate locations to form domains, called crystallites; atoms in different crystallites are ordered in different directions. In addition to the distributed thermal disorder described by the phonon gas, there is another form of lattice disorder, which appears even at low temperature. These are states of localized disorder, called point defects.
An atom may be jammed into an interstitial position; a lattice position may be vacant; or an impurity atom may be present in the crystal, either in an interstitial position or substituting at a lattice position. Looking at the two systems, magnets and molecules, we can point out the analogous features: The action of the external magnetic field is analogous to the action of external pressure on the atomic or molecular system. Corresponding to the magnetic levels, we have atomic or molecular levels. The internal Weiss field has its analog in the van der Waals field, the force between adjacent atoms. The Curie temperature is analogous to the critical temperature. Spin waves are analogous to lattice phonons, and spin inversions to point defects. While we have used the appearance of ferromagnetism in arrays of magnetic dipoles as an example of the cooperative formation of an ordered phase, similar behavior occurs in arrays of electric dipoles. The transition to a ferroelectric phase results in a permanent electric moment in the absence of an electric field. This spontaneous electric polarization is akin to ferromagnetism, but arises from a more complex microscopic effect. We will devote much of Chapters 16 and 17 to the exploration of ferroelectricity in crystals and liquid crystals.

5. PHASE TRANSITIONS

As we have seen, a phase transition is a spontaneous cooperative phenomenon, in which
matter becomes more (or less) highly ordered, as in freezing (or melting). It may lead to the formation of domains, with ordering in different directions.

5.1. Order variables and parameters

We saw in Chapter 6 that the density of water on the transition surface separating the liquid and vapor phases is undetermined; the density of the liquid phase is higher than that of the vapor phase. This density difference becomes smaller with increasing temperature and pressure and disappears completely at the critical point, Tc = 647 K and Pc = 218 atm, at which water and steam have the same density. An undetermined mechanical variable, such as the density on the transition surface, is called an order parameter. The order parameter is a thermodynamic variable marking the ordered phase below the critical temperature Tc. The order parameter is obtained as an ensemble average of microscopic variables σ_m, which denote the activity of ions or molecules at sites m. The different structures may be identified as pseudospins in an Ising model. In the disordered phase above Tc, the variables σ_m are in fast random motion, so that their time average ⟨σ_m⟩_t is close to zero. In the ordered phase below Tc, the σ_m are correlated in slow motion. The ordered phase is divided into domains or sublattices, separated by domain walls; see Figure 15.10.27
Figure 15.10. A two-dimensional binary system with two states of pseudospin indicated by filled and open circles. The high-temperature phase (left) consists of intermingling sublattices. Below the critical temperature, ordered phases of two “opposite” domains have formed, separated by domain walls (broken lines). From Fujimoto, 1997.
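The pseudospin picture can be made concrete with a small numerical sketch. The following code (illustrative only, not from the text) runs a Metropolis Monte Carlo simulation of a two-dimensional Ising lattice; the time-averaged magnitude of the magnetization per site plays the role of the order parameter <σm>t, large below the critical temperature and near zero above it. The lattice size, temperatures and sweep counts are arbitrary choices.

```python
import math
import random

def ising_magnetization(L=16, T=2.0, sweeps=800, measure=200, seed=1):
    """Metropolis simulation of an L x L Ising model with periodic
    boundaries; returns the average |magnetization| per pseudospin,
    measured over the final `measure` sweeps."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # start fully ordered
    m_acc, n_acc = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest-neighbor pseudospins
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps - measure:
            m_acc += abs(sum(map(sum, spins))) / (L * L)
            n_acc += 1
    return m_acc / n_acc

# Below the 2D critical temperature (Tc ~ 2.27 in units of J/kB) the
# time-averaged order parameter is near 1; well above Tc it is near 0.
m_ordered = ising_magnetization(T=1.5)
m_disordered = ising_magnetization(T=5.0)
```

Running the two cases side by side shows the qualitative behavior described in the text: fast random motion and a vanishing average above Tc, correlated slow motion and a nonzero average below it.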
An example of an order parameter is found in binary alloys, such as brass. When it is composed of equal parts of copper and zinc, the alloy crystallizes in a body-centered cubic system in which every zinc atom is surrounded by eight atoms of copper, and vice versa. As the temperature rises, thermal fluctuations cause more and more atoms to diffuse and accommodate themselves in the "wrong" crystalline sites. At a critical temperature, Tc = 739 K, the disorder becomes complete and every subsystem contains an equal number of atoms of each type. The order parameter, proportional to the concentration difference in the two subsystems, goes to zero above 739 K.

This example also provides a good introduction to the concept of symmetry breaking. The disordered phase of the alloy at high temperature displays a greater symmetry than the ordered phase: The disordered phase is invariant to more translation operations between lattice planes than the ordered phase. Hence the formation of order as the temperature is lowered breaks that symmetry. We shall discuss broken symmetries in living systems in Chapter 18.

The order parameter of a ferromagnetic material is the magnetization vector.28 The order parameter of a ferroelectric material, as we shall see in Chapter 16, is the polarization vector.

5.2. Mean field theories

Phase transitions have been analyzed by mean-field theories and Landau's theory. In these a physical variable is replaced by its average value, and fluctuations about that value are ignored. Van der Waals's theory of the liquid–gas transition is one of the mean-field theories. In terms of the volume per particle v = V/N in a gas or liquid, the equation of state of an ideal gas is, from Equation 3.1 of Chapter 5,

Pv = kT    (5.1)

Van der Waals proposed an equation of state that applies to both liquid and gas phases. By reducing the volume by the excluded volume b due to the size of the atoms, and taking into consideration the attractive interactions between atoms, he derived the equation
(P + a/v²)(v − b) = kT    (5.2)

By fitting a and b at high temperature, the temperature, volume and pressure at the critical point can be determined, with reasonably good agreement with the measured values. Equation 5.2 can be rescaled to apply to all fluids, giving the law of corresponding states. The theory predicts values of the critical exponents, surprisingly yielding the same values as the mean field theory of a ferromagnet. It also shows that the specific heat at constant pressure diverges in the neighborhood of the critical point.29

Landau theory gives a useful description of second-order transitions, but it breaks down on the approach to the critical point. Landau theory predicts the power laws but does not yield correct values of the exponents. Moreover, it shows that fluctuations are not negligible near a critical point, contradicting one of the premises on which it is based!
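The critical point of Equation 5.2 can be located by requiring that both the slope and the curvature of the critical isotherm vanish, which gives vc = 3b, kTc = 8a/(27b) and Pc = a/(27b²). The sketch below (illustrative, with arbitrary values of a and b and k set to 1) verifies this numerically with finite differences.

```python
def vdw_pressure(v, T, a, b, k=1.0):
    """Van der Waals equation of state (Eq. 5.2) solved for P."""
    return k * T / (v - b) - a / v**2

# Critical point of Eq. 5.2: both dP/dv and d2P/dv2 vanish there,
# giving vc = 3b, kTc = 8a/(27b), Pc = a/(27 b^2).
a, b = 2.0, 0.5
vc, Tc, Pc = 3 * b, 8 * a / (27 * b), a / (27 * b**2)

# Numerical check that the isotherm at Tc has an inflection with zero
# slope at vc (first and second central differences both ~ 0).
h = 1e-4
dP = (vdw_pressure(vc + h, Tc, a, b) - vdw_pressure(vc - h, Tc, a, b)) / (2 * h)
d2P = (vdw_pressure(vc + h, Tc, a, b) - 2 * vdw_pressure(vc, Tc, a, b)
       + vdw_pressure(vc - h, Tc, a, b)) / h**2
```

Because the slope and curvature of P(v) both vanish at the critical point, the distinction between liquid and vapor disappears there, as in the water–steam example of Section 5.1.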
In the approach to a critical point, the fluctuations of a molecular system extend over long distances, making correlations possible at long range. When this effect is taken into consideration in the calculations, values of the critical exponents are obtained that agree with those found experimentally. This type of analysis is called the renormalization group.30

How do phase transitions occur in principle? Consider, as an example, a mass of hydrogen at a pressure such that the macroscopic system exists only in the solid and vapor forms. According to statistical mechanics, the thermodynamic properties of the system should follow from knowledge of the atomic structure with the aid of Schrödinger's equation and the partition function. Therefore these equations should be sufficient to allow the deduction of the sublimation of the hydrogen and the equations of state in both phases. The problem is to determine the sublimation curve from the spectrum of states existing under the given constraint. From Figure 5.4 of Chapter 5, we note the following qualitative features of the spectrum: At high temperatures, the spectrum is well approximated by the states of an ideal gas. Near absolute zero, the spectrum consists of a discrete set of rotational states. At slightly higher but still very low temperatures, the law of Peter Debye, according to which the specific heat varies as T³, becomes dominant. At intermediate temperatures the states of hydrogen are those corresponding to the crystallographic symmetry of solid hydrogen. This set of states has less symmetry than the Hamiltonian of the system, which, since it describes the gaseous as well as the solid phases, must have complete rotational and translational symmetry. Thus the set of states of the solid is only a subset of the set of states of the system. This means that the dimensionality of the system's representation, its degeneracy, depends on energy.
The degeneracy generally decreases as the energy is decreased: While the levels of the gas are highly degenerate, the lowest energy level is nondegenerate.31

5.3. Critical slowing down and vortex unbinding

When a system approaches the critical point, anomalies appear in its dynamical properties. As we approach the critical temperature, it takes longer and longer for the system to come to equilibrium after any external disturbance. In this phenomenon, critical slowing down, the relaxation time goes to infinity as the temperature approaches Tc. The curve of potential energy becomes flatter and flatter as we approach the equilibrium point, so the system approaches equilibrium more and more slowly. This is due to the thermal fluctuations, which can cause the order parameter to move away from its equilibrium value.32 An example of critical slowing down is found in the ferromagnetic transition discussed above. When there are wide domains in which the spins run parallel, it takes longer for a thermal fluctuation to rotate this entire group. More generally, we can say that modes of long wavelength are very slow near the critical temperature.33

Some authors have proposed that phase transitions are associated with the unbinding of vortices. Physical theory predicts that a single vortex in an infinite system has infinite energy, and therefore cannot exist. However, a pair of oppositely directed vortices has a finite energy, depending on the separation distance of the vortices.34 We
have already seen, in Sections 4.1 and 4.2, that vortices in Rayleigh–Bénard systems, electrohydrodynamic instabilities and propagated action potentials occur in pairs.

6. DISSIPATIVE STRUCTURES

The ordering principle that applies to systems in equilibrium cannot explain the formation of the highly ordered structures and coordinated functions of biological organisms.35 Their maintenance and reproduction requires a continuous exchange of matter and energy with their surroundings.

6.1. Thermodynamics of irreversible processes

Ilya Prigogine has formulated an extended version of the second law of thermodynamics, which deals with irreversible processes that establish the arrow of time. This new version of the second law applies to open as well as isolated systems. It states that the inequality governing the variation of entropy during a time interval dt takes the form

dS = deS + diS,  with  diS ≥ 0    (6.1)

where deS is the flow of entropy due to exchanges with the surroundings and diS is that due to irreversible processes such as heat or electrical conduction, diffusion or chemical reactions. Since deS may be negative as well as positive, an ordered state may be maintained or even grow. Thus the biosphere, in a solar energy gradient, evolves into ordered structures. Concentration gradients in a chemical reaction chain produce ordered forms, such as spirals.

Prigogine and his colleagues found that their theory predicts the existence of a class of systems showing two forms of behavior: a linear regime with maximum disorder and a nonlinear regime with coherent behavior. Order is created far from equilibrium and destroyed near thermodynamic equilibrium. The Bénard cell of Figure 15.8 is an example. Here the temperature gradient serves as an external constraint that, sufficiently far from equilibrium, leads to the spontaneous emergence of convection patterns. The generation of coherent light in a laser is another example.

6.2. Evolution of order

The evolution of order in biological systems is a much more complex example of the same principle. The variety of regimes in chemical kinetics, involving autocatalysis, cross-catalysis, activation and inhibition, can produce a multiplicity of ordered states of spatial and functional order. The application of Equation 6.1 can lead to the appearance of large deviations from chemical equilibrium, such as those produced across cell membranes in active transport. When the entropy production of the system is expressed in macroscopic quantities such as flows and forces, a steady state emerges under the condition of
minimum entropy production. The emergent regime will be stable under the condition that the excess entropy increases in time. Remarkably, fluctuations can switch the system into an unstable region where the fluctuations are amplified and drive the system into the new regime. Thus in a Bénard cell the symmetry of the equilibrium state can be broken by a thermal fluctuation to give rise to the ordered pattern of cylindrical rolls. In this way order arises from fluctuations.

There are two distinct types of evolution. One is the thermodynamic concept of disappearance of structure with the growth of uniformity, as expressed in the second law of thermodynamics. The other is the creation of new forms in organic evolution. These drastically different behaviors are not in conflict, however, as they describe different thermodynamic situations, the former near equilibrium and the latter far from equilibrium. Biological structures evolve in open systems in which energy is dissipated. By casting off entropy, these systems maintain coherence and gain order and information. A hierarchy of structures forms, based on the boundary conditions of the system. A change in conditions gives rise to instabilities and transitions from one such structure to another, with the nature of the emergent phase dependent on the vagary of a random fluctuation.36

6.3. Synergetics

Hermann Haken generalized the effects of cooperation found in phase transitions to the realms of chemistry, biology and sociology in the approach called synergetics. Whereas order in physical equilibrium systems is obtained by lowering the temperature, the order of biological systems is maintained by an energy flux. One physical system whose ordered state can only be maintained by an energy flux through it is the laser. Its active medium is pumped by power from a lamp. As the pump power increases, the output of the laser at first rises slowly, until the pumping reaches a threshold. Beyond this threshold, the output light intensity rises rapidly.
This rapid rise is brought about by an ordered state in which the emitted photons become coherent in phase. The laser can be viewed as a model system of the way order is built up from disorder, particularly in biological systems. These systems are dissipative structures. They involve an instability in which the symmetry of the low-energy state is broken by fluctuations.37

6.4. A model of membrane excitability

A model by R. Blumenthal, J. P. Changeux and R. Lefever applies the concept of dissipative structures with multiple steady states to membrane excitability.38 The model, however, misses the essentially electrical nature of excitability by assuming that the permeating species is a nonelectrolyte. The membrane is assumed to be composed of protomers, lipoproteic units that may be considered equivalent to channels with their surrounding regions of bilayer. As a minimum, each protomer is assumed to have two conformations, a relatively stable state S and one at higher free energy, R. The free energy difference between the two states is ΔF0 for an isolated protomer, but cooperativity between the
protomers reduces it by an amount proportional to <r>, the average number of protomers already in R:

ΔF = ΔF0 − λ<r>    (6.2)

where the proportionality parameter λ is a positive constant. In the model, the ligand binds to either side of the protomer and permeates, in a single jump, across the membrane. Adsorption and translocation are assumed to occur only at the state of higher energy, R. This is contrary to what happens in real excitable membranes, since the state of higher electrical energy, the resting state, is impermeant to sodium ions, while permeation arises with depolarization. Thus it is not surprising that Blumenthal and collaborators identify the resting state, with no major ion currents, as a dissipative structure, instead of the active state, in which ions traverse the membrane. Membrane excitation is viewed as an assisted phase transition.39

NOTES AND REFERENCES

1. David Ruelle, Chance and Chaos, Princeton University, 1991, 153.
2. Ichiji Tasaki, J. Gen. Physiol. 46:755-772, 1963; ___, Ferroel. 220:305-316, 1999.
3. Nigel Goldenfeld, Lectures on Phase Transitions and the Renormalization Group, Addison-Wesley, Reading, MA, 1992.
4. Per Bak, How Nature Works: The Science of Organized Criticality, Springer, New York, 1996.
5. Talbot H. Waterman, in Theoretical and Mathematical Biology, edited by Talbot H. Waterman and Harold J. Morowitz, Blaisdell, New York, 1965, 13.
6. Jack A. Tuszynski and Michal Kurzynski, Introduction to Molecular Biophysics, CRC Press, Boca Raton, 2003, 403-407.
7. Goldenfeld, 5-19.
8. Goldenfeld, 229-384.
9. Larry S. Liebovitch, Fractals and Chaos Simplified for the Life Sciences, Oxford University, 1998, 3-15.
10. Liebovitch, 24f.
11. Bak, 20, 6.
12. Liebovitch, 58f.
13. A. M. Churilla, W. A. Gottschalke, L. S. Liebovitch, L. Y. Selector, A. T. Todorov and S. Yeandle, Ann. Biomed. Engr. 24:99-108, 1996.
14. Bak, 21.
15. Bak, 28.
16. B. Sapoval, M. Rosso and J. F. Gouyet, in Amulya L. Laskar and Suresh Chandra, Superionic Solids and Solid Electrolytes: Recent Trends, Academic, Boston, 1989, 473-514.
17. Reprinted from Sapoval et al., 503, Copyright 1989, with permission from Elsevier; J. A. Bruce and M. D. Ingram, Solid State Ionics 9&10:717-723, Copyright 1983, with permission from Elsevier; G. V. Chandrashekhar and L. M. Foster, Solid State Comm. 27:269-273, Copyright 1987, with permission from Elsevier.
18. Giorgio Careri, Order and Disorder in Matter, Benjamin/Cummings, 1984.
19. Robert J. Finkelstein, Thermodynamics and Statistical Physics: A Short Introduction, W. H. Freeman, San Francisco, 1969, 148.
20. Figures 15.4 and 15.5 are from Careri, p. 5; 15.6, p. 8, and 15.7, p. 17.
21. I. Prigogine and G. Nicolis, in From Theoretical Physics to Biology, edited by M. Marois, Karger, Basel, 1973, 89-109.
22. See, e.g., J. A. Scott Kelso, Dynamic Patterns: The Self-Organization of Brain and Behavior, MIT Press, Cambridge, Mass., 1995, 6-8.
23. Careri, 104.
24. M.-W. Ho, The Rainbow and the Worm: The Physics of Organisms, 2nd edition, World Scientific, Singapore, 1998, 37-60.
25. Lev M. Blinov, Electro-Optical and Magneto-Optical Properties of Liquid Crystals, John Wiley, Chichester, 1983, 205.
26. S. A. Pikin, Structural Transformations in Liquid Crystals, Gordon and Breach, 1991, 267-283.
27. Minoru Fujimoto, The Physics of Structural Phase Transitions, Springer, New York, 1997, 28. With kind permission of Springer Science and Business Media.
28. S. K. Ma, Modern Theory of Critical Phenomena, W. A. Benjamin, Inc., 1976, 4.
29. Goldenfeld, 124-127.
30. Goldenfeld, 229-282.
31. Finkelstein, 188.
32. Goldenfeld, 212.
33. Careri, 86.
34. Goldenfeld, 348f.
35. Ilya Prigogine, Gregoire Nicolis and Agnes Babloyantz, Physics Today 25 (11):23-28, Nov. 1972; 25 (12):38-44, Dec. 1972.
36. P. Glansdorff and I. Prigogine, Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley-Interscience, London, 1971.
37. H. Haken, in Cooperative Phenomena, edited by H. Haken and M. Wagner, Springer-Verlag, New York, 1973, 363-372.
38. R. Blumenthal, J. P. Changeux and R. Lefever, J. Membrane Biol. 2:351-374, 1970; Compt. Rend. 270:389-392, 1970.
39. Glansdorff and Prigogine, 272-286.
CHAPTER 16
POLAR PHASES
Analogy is a very powerful method of extending our knowledge. By seeing the similarities between the object under investigation with another, better understood object, we can often obtain insights that advance our thinking. However, the method of analogy has a serious drawback: We don't always know when to stop. We often continue to equate the object with its analog even when the method is no longer fruitful. We must be prepared to drop the analogy when it no longer yields worthwhile results. We have seen that the gated-pore analogy is no longer a useful way to look at ion channels at the research level. Now let us consider another analogy: that the channel acts like a ferroelectric material. To justify this juxtaposition of concepts, we must see if there are indeed significant similarities. At the same time, we must also look for any differences between the structures and behaviors of voltage-sensitive ion channels and ferroelectrics. We have seen from the comparison of noise and admittance spectra in Chapter 11 that the behavior of excitable membranes is nonlinear, implying nonlinearity in the responses of voltage-sensitive ion channels. Other clues to nonlinearity are the generation of second and higher harmonics (Chapter 10, Figure 10.10) and the semicircles with depressed centers on Cole–Cole curves (Chapters 10 and 11). In this chapter we will explore a nonlinear mathematical model. In condensed state physics, the dipole moment per unit volume is called the polarization, and materials that have a spontaneous polarization that can be reoriented by an external field are called ferroelectrics. Ion channels in membranes share many interesting properties with ferroelectric materials, including pyroelectricity, piezoelectricity, field-sensitive birefringence, a surface charge and a transition temperature, taken to be the heat-block temperature. 
These similarities provide reasons to suspect that ion channels are ferroelectric, as proposed by a number of investigators.1

1. ORIENTATIONAL POLAR STATES IN CRYSTALS

In Chapter 6 we saw that materials composed of polar molecules display interesting behavior, such as spontaneous polarization. We will further examine this behavior, particularly in crystals, in this chapter; the characteristics of ferroelectricity in liquid crystals will be discussed in Chapter 17.
1.1. Piezoelectricity

Certain crystals polarize under applied stress due to the separation of positive and negative centers of charge. In the direct piezoelectric effect, a mechanical stress gives rise to a voltage; in the converse effect, an applied voltage results in elastic strain. The relation between polarization and stress is linear for fields up to about 10⁵ V m⁻¹. Piezoelectricity may be measured directly, or indirectly by second harmonic generation (see Chapter 10, Section 4).

1.2. Pyroelectricity

Some 24 centuries ago the Greek philosopher Theophrastus described a stone that could attract straws and bits of wood. These attractions were probably caused by electric charges produced by temperature changes in the mineral tourmaline, a pyroelectric material. Pyroelectricity is defined as the temperature dependence of the spontaneous polarization in certain anisotropic solids. The unit cells of these crystals have a dipole moment. The spontaneous polarization Ps (dipole moment per unit volume) is nonzero in pyroelectrics.
Figure 16.1. Polarization change Ps and pyroelectric current vs temperature for samples of different composition x of crystals of Sr1-xBaxNb2O6. From Lines and Glass, 1977, after Glass, 1969.
If a sample is cut as a thin disk with its parallel surfaces perpendicular to its crystallographic symmetry axis, the spontaneous polarization is equivalent to a layer of bound charge at each surface. Electrons or ions will be attracted to these surfaces. If a pair of electrodes attached to the surfaces are connected to an ammeter, a change in temperature will be accompanied by a current. A temperature increase will decrease the orientational order of the dipoles, lowering the magnitude of Ps and the bound charge. A decrease in temperature allows the dipoles to align more strongly, thereby increasing the magnitude of Ps and the bound charge. In an open circuit, a voltage can be measured. Pyroelectric devices are capable of measuring temperature changes as small as 0.2 K and detecting infrared radiation.2
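The short-circuit current described above can be written as i = A (dPs/dT)(dT/dt), where A is the electrode area. A minimal sketch follows; the pyroelectric coefficient and heating rate used are hypothetical order-of-magnitude values, not numbers from the text.

```python
def pyroelectric_current(p_coeff, area, dT_dt):
    """Closed-circuit pyroelectric current i = A * (dPs/dT) * (dT/dt).

    p_coeff : pyroelectric coefficient dPs/dT in C m^-2 K^-1
    area    : electrode area in m^2
    dT_dt   : heating rate in K s^-1
    """
    return area * p_coeff * dT_dt

# Illustrative values (order of magnitude for a tourmaline-like crystal,
# assumed here, not taken from the text): p ~ 4e-6 C m^-2 K^-1,
# a 1 cm^2 disk, heated at 0.1 K/s.
i = pyroelectric_current(p_coeff=4e-6, area=1e-4, dT_dt=0.1)
```

With these assumed values the current is a few tens of picoamperes, which is why sensitive electrometry makes pyroelectric crystals useful as temperature and infrared detectors.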
The change of spontaneous polarization Ps as a function of temperature can be measured by integrating the pyroelectric charge. Figure 16.1 shows families of curves of Ps and pyroelectric current vs temperature for samples of different composition x of crystals of Sr1-xBaxNb2O6. Well above the transition temperature Tc the polarization vanishes.3

1.3. The strange behavior of Rochelle salt

Sodium potassium tartrate tetrahydrate, NaKC4H4O6·4H2O, commonly known as Rochelle salt, is a member of the pyroelectric class of materials. Like other pyroelectrics, Rochelle salt has a temperature-dependent dipole moment. Joseph Valasek discovered in 1920 that it becomes electrically polarized when heated or placed in an electric field. Unlike other pyroelectrics, however, Rochelle salt not only retains its polarization when the field is removed, but reverses its polarization when a field is applied in the opposite direction. Rochelle salt, in other words, exhibits an electric dipole moment that can be reoriented by an electric field. By definition, it is a ferroelectric crystal.4

The next ferroelectric was not discovered until 1935. Actually, it turned out to be a family of ferroelectrics, consisting of potassium dihydrogen phosphate, ammonium dihydrogen phosphate, and the corresponding arsenates. In 1945, the ferroelectricity of barium titanate, the first member of the oxygen octahedral family of ferroelectrics, was discovered. The unusual behavior of BaTiO3 turned out to be due to long-range dipolar forces of the crystalline lattice. The focus of inquiry turned to a particular mode of motion in the lattice, termed the soft mode. Guanidine aluminum sulfate hexahydrate, C(NH2)3Al(SO4)2·6H2O, abbreviated GASH, and several alums were found to be ferroelectric in the 1950s. Our understanding of ferroelectrics has grown rapidly from the 1960s on, and hundreds of ferroelectric materials, including polymers and liquid crystals, have been discovered.

1.4. Transition temperature and Curie–Weiss law

A ferroelectric phase transition belongs to a class of phase transitions denoted by the appearance of spontaneous electric polarization. When a ferroelectric is heated, the spontaneous polarization disappears—continuously or discontinuously—at the transition temperature or Curie point. In some ferroelectric materials, such as GASH, no ferroelectric phase transition occurs because the crystal melts before this temperature is reached. The phase above the transition temperature is often termed paraelectric. As the transition temperature is approached from above, the dielectric permittivity rises in a divergent manner described by the Curie–Weiss law

ε = K/(T − T0)    (1.1)

where K is the Curie constant and T0 is equal to the Curie temperature TC only for continuous transitions.
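The divergence expressed by Equation 1.1 is easy to evaluate numerically. The sketch below uses illustrative values of K and T0 of the right order for a TGS-like crystal; they are assumptions, not values quoted in the text.

```python
def curie_weiss_permittivity(T, K=3200.0, T0=322.0):
    """Curie-Weiss law (Eq. 1.1): eps = K / (T - T0), valid for T > T0.

    K (kelvin) and T0 (kelvin) are illustrative, assumed values of the
    right order for a TGS-like crystal, not numbers from the text.
    """
    if T <= T0:
        raise ValueError("the Curie-Weiss form applies only above T0")
    return K / (T - T0)

# The permittivity diverges as T approaches T0 from above: halving the
# distance to T0 doubles the permittivity.
eps_20K = curie_weiss_permittivity(342.0)  # 20 K above T0
eps_10K = curie_weiss_permittivity(332.0)  # 10 K above T0
```

The 1/(T − T0) form is what makes a plot of reciprocal permittivity versus temperature a straight line above the transition, a standard way of extracting K and T0 from measurements.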
Below TC the spontaneous polarization can develop in at least two directions. To minimize free energy, crystals usually polarize in each of these directions, and each volume of uniform polarization is called a domain. Domains, separated by domain walls, may form or grow in a constant electric field. A simplified model of a domain structure in barium titanate, illustrating the curtain effect, is shown in Figure 16.2.5
Figure 16.2. Simplified model of domains in a ferroelectric crystal, showing the motion of a double domain wall in the curtain effect. From Bochynski, 1994.
The study of polycrystalline ferroelectric materials or ceramics allows the preparation of a wide range of compositions. A ferroelectric ceramic is an aggregate of ferroelectric single-crystal grains or crystallites with dimensions typically of 0.5 to 50 μm. Macroscopic properties of ceramics differ from those of single crystals because of the effects of grain boundaries and the imperfect alignment of the grains.

1.5. Hysteresis

The main feature that distinguishes ferroelectrics from other pyroelectrics is that spontaneous polarization can be at least partially reversed by an applied electric field. Polarization reversal generates dielectric hysteresis loops in ferroelectrics, as shown in Figure 16.3.6 Starting from zero field, the sample behaves at low fields like an ordinary dielectric, the displacement D increasing linearly with E. When the field E is reversed, however, the curve does not retrace itself. Cycling the field traces a hysteresis loop; the area within the loop is a measure of the electrical work done in the process of reversing the domains. At the coercive field Ec, polarization reversal occurs. The remanent polarization Pr is different from the spontaneous polarization Ps if reverse nucleation of domains occurs before the field reverses. The upper and lower branches of the hysteresis loop represent enantiomorphous structures.

Figure 16.3. Ferroelectric hysteresis loop, showing spontaneous polarization Ps, remanent polarization Pr and coercive field Ec. From "Principles and Applications of Ferroelectrics and Related Materials" by Lines, M.E. and Glass, A.M. (1977). By permission of Oxford University Press.

1.6. Ferroic effects

Piezoelectricity, pyroelectricity and ferroelectricity are examples of a general phenomenon in which a crystal can be switched between two states of orientation by the application of a force. Crystals that have two or more orientation states or domains that can be switched by a driving force are called ferroic. In ferromagnetism the switching force is the magnetic field, which changes the spontaneous magnetization. In ferroelectricity the force is the electric field, which changes the spontaneous polarization. In the same way, a mechanical stress is the switching force in ferroelastic materials, which have two or more orientation states differing in spontaneous strain that can be reoriented by the stress.7 In materials that are both ferroelastic and ferroelectric, with the two effects coupled, the spontaneous polarization may be totally or partially reversed with an applied stress X, giving D–X loops analogous to the D–E loop of Figure 16.3. A coupling between ferroelectric modes and nonpolar modes such as elastic modes can lead to extrinsic ferroelectricity (also called improper ferroelectricity). These ferroelectrics are driven by a coupling to nonpolar instabilities, so that the spontaneous polarization is a secondary rather than the primary order parameter.

2. THERMODYNAMICS OF FERROELECTRICS

In a phenomenological theory, the electrical and mechanical properties of ferroelectrics are described in terms of thermodynamic potentials. Since the characteristics of these materials are nonlinear, they must be derived from a nonlinear equation of state. Thermodynamic theory, discussed in Chapter 6, has been applied to phase transitions in ferroelectric materials by A. F. Devonshire and others. The system is described in terms of a thermodynamic potential, which is a minimum at equilibrium. There may be several minima, the lowest of which corresponds to an absolutely stable system, while at the others the system is only metastable. A change in the constraints, such as the applied electric field, may cause a formerly metastable state to become absolutely stable. In that case, the system undergoes a phase transition. The transition
temperature is determined by equating the potentials for the two states.

2.1. A nonlinear dielectric equation of state

We have seen in Chapter 6 that the equation of state for a linear dielectric is D = εε0E, where ε is the constant dielectric permittivity, E the electric field and D the electric induction. In one dimension, this can be written as

E = χD    (2.1)

where χ is a constant. We can think of the χD term as the first term of a power series that describes a nonlinear relation between D and E,

E = a1D + a2D² + a3D³ + a4D⁴ + a5D⁵ + ...    (2.2)

With odd powers, D and E are always in the same direction, but this is not true of the even powers. Since most materials do not exhibit such asymmetric behavior, we drop the even powers. Thus the first three terms of our expansion are

E = χD + ξD³ + ζD⁵    (2.3)

where χ, ξ and ζ are constant coefficients that may be temperature-dependent. Note that the nonlinear equation of state reduces to the linear one when ξ = ζ = 0. The linear equation is therefore a special case of the nonlinear equation of state. Equation 2.3 is commonly used as the constitutive equation for ferroelectrics.

Equation 2.3 can be derived from the elastic Gibbs energy,

G1 = U − TS − Xixi    (2.4)

where, as in Chapter 5, U is internal energy, T absolute temperature and S entropy. The ith component of stress is Xi and the corresponding strain is xi. If we differentiate Equation 2.4 and substitute the first law of thermodynamics in the form8

dU = T dS + Xi dxi + Ei dDi    (2.5)

we obtain
dG1 = −S dT − xi dXi + Ei dDi    (2.6)
Dropping the i subscripts, we obtain in one dimension

E = (∂G1/∂D)T,X    (2.7)
If we expand G1 in the form

G1 = G10 + (1/2)χD² + (1/4)ξD⁴ + (1/6)ζD⁶    (2.8)

we can apply Equation 2.7 to obtain Equation 2.3.

2.2. Second order transitions

When the parameters ξ and ζ are positive, the function G1 takes the forms sketched in Figure 16.4a. By Equation 2.7, the minima of G1 give the equilibrium values of D. For χ > 0, G1 has a single minimum at D = 0. As χ becomes negative, D bifurcates into two equilibrium positions, and the symmetry of the system is broken to generate a spontaneous polarization. Because of the temperature dependence of χ, Ps is a function of temperature. As χ passes through zero, Ps undergoes a continuous, second order transition, as shown in Figure 16.4b.

In his phenomenological theory, Devonshire assumed that χ depends linearly on temperature,

χ = β(T − Tc)    (2.9)

where β is a positive constant and χ = 1/ε is the reciprocal isothermal permittivity. The Curie–Weiss law, Equation 1.1, with K = 1/β, follows from Equation 2.9. The entropy associated with the long-range ordering of dipoles is, within the Devonshire approximation, given by

S = −(1/2)βD²    (2.10)
Figure 16.4c shows that the slope of 1/ε versus temperature is negative below Tc, with twice the absolute value it has above Tc.9
Figure 16.4. Phenomenological model of a second-order phase transition. (a) Free energy as a function of displacement for positive and negative values of χ. The temperature dependence near the transition of (b) the spontaneous polarization, which vanishes above Tc, and (c) the reciprocal permittivity at constant stress X and temperature T. From "Principles and Applications of Ferroelectrics and Related Materials" by Lines, M.E. and Glass, A.M. (1977). By permission of Oxford University Press.
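The second-order behavior just described can be verified with a few lines of arithmetic. The sketch below applies Equations 2.3 and 2.8 (with the ζD⁶ term dropped, as is permissible for a second-order transition) using illustrative coefficients β, Tc and ξ that are assumptions, not values from the text; it reproduces both the spontaneous polarization Ps = (−χ/ξ)^(1/2) and the factor-of-two change in the slope of the reciprocal permittivity shown in Figure 16.4c.

```python
import math

def equilibrium_D(chi, xi):
    """Equilibrium displacement minimizing G1 = (1/2) chi D^2 + (1/4) xi D^4;
    zero for chi > 0, +/-sqrt(-chi/xi) for chi < 0 (the positive root is
    returned)."""
    return math.sqrt(-chi / xi) if chi < 0 else 0.0

def reciprocal_permittivity(chi, xi):
    """1/eps = dE/dD evaluated at equilibrium, with E = chi D + xi D^3."""
    D = equilibrium_D(chi, xi)
    return chi + 3 * xi * D * D

# Illustrative, assumed parameters: beta = 1, Tc = 100, xi = 1.
beta, Tc, xi = 1.0, 100.0, 1.0
chi_above = beta * (110.0 - Tc)   # 10 K above Tc
chi_below = beta * (90.0 - Tc)    # 10 K below Tc

Ps = equilibrium_D(chi_below, xi)   # nonzero spontaneous polarization
ratio = (reciprocal_permittivity(chi_below, xi)
         / reciprocal_permittivity(chi_above, xi))
# Below Tc, 1/eps = -2 chi = 2 beta (Tc - T): at equal distances from Tc
# the reciprocal permittivity is twice as large below as above (Fig. 16.4c).
```

The factor of two arises because substituting Ps² = −χ/ξ into dE/dD = χ + 3ξD² gives −2χ, so the slope of 1/ε versus T below Tc is −2β against +β above it.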
2.3. Field and pressure effects

At temperatures close to a ferroelectric phase transition, the polarization induced by external fields is large because of the high values of the susceptibility. As a result, the transition temperature depends on the electric field. The electric induction D includes both the spontaneous and the field-induced polarization. The shift in the transition temperature with electric field can be calculated from an equation similar to the Clausius–Clapeyron equation (Equation 4.4 of Chapter 5); in terms of the discontinuities ΔD and ΔS of electric induction and entropy at the transition, we obtain

dTc/dE = −ΔD/ΔS    (2.11)
The effect of an electric field on the polarization of a ferroelectric predicted by thermodynamic theory is shown in Figure 16.5.10 As a result of the shift of the transition temperature with field, if a sample at a temperature slightly above the zero-field transition temperature is depolarized from, say, E2 to zero, it would make a transition from ferroelectric to paraelectric. The temperature dependence of ferroelectric properties can be divided into those properties that depend on volume through thermal expansion and those present even without a volume change. The effect of hydrostatic pressure on a second-order transition is a linear shift of the Curie temperature. The shift rate may be positive or negative and can be obtained from the volume change and entropy change by the Clausius–Clapeyron equation.11
POLAR PHASES
Figure 16.5. As the magnitude of the field E is increased, the spontaneous polarization Ps and Curie temperature Tc shift toward higher temperatures T, for (a) a second-order and (b) a first-order transition. At sufficiently high fields, the discontinuity at Tc in the first-order transition vanishes. From Lines and Glass, 1977, after Devonshire, 1954.
2.4. Chirality and self-bias
The Curie point of a ferroelectric material may be moved and broadened by partially replacing a more symmetrical component with a less symmetrical one. Permanent polarization (called poling in ferroelectric jargon) has been achieved in organic ferroelectrics by doping crystals of triglycine sulfate (TGS) with trialanine sulfate. We recall that alanine is chiral, but glycine is not. Substitution with D-alanine (or L-alanine) results in a poled single crystal. The effect of this doping is shown in the hysteresis loops of Figure 16.6.12
Figure 16.6. Doping with a chiral additive displaces the hysteresis loop along the field axis. (a) Triglycine sulfate doped with L-alanine. (b) Pure triglycine sulfate. From Newnham, 1975.
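The displacement of the loop in Figure 16.6 can be sketched from the Landau equation of state E = aP + bP³: an internal bias field adds to the applied field, so the measured loop is the zero-bias loop shifted along the E axis. The coefficients and the bias field E_bias below are illustrative assumptions, not values fitted to TGS:

```python
# Sketch of how an internal bias field displaces a ferroelectric
# hysteresis loop along the field axis (cf. Figure 16.6).
a, b = -1.0, 1.0      # a = a0*(T - Tc) < 0 below Tc (assumed values)
E_bias = 0.3          # internal bias field from the chiral dopant (assumed)

def applied_field(P, bias=0.0):
    """External field needed to hold polarization P.

    The internal bias adds to the applied field, so the measured loop
    is the zero-bias loop shifted by -bias along the E axis.
    """
    return a * P + b * P**3 - bias

# Coercive fields are the extrema of E(P); for the undoped loop they
# lie symmetrically about E = 0, for the doped loop about -E_bias.
P_inflect = (abs(a) / (3 * b)) ** 0.5
Ec_plus = applied_field(-P_inflect)            # undoped, positive branch
Ec_minus = applied_field(P_inflect)            # undoped, negative branch
center_undoped = 0.5 * (Ec_plus + Ec_minus)
center_doped = 0.5 * (applied_field(-P_inflect, E_bias)
                      + applied_field(P_inflect, E_bias))
```

The computed loop center moves from zero to -E_bias, which is the displacement seen in the doped crystal.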
The internal bias field that alanine produces in TGS modulates the dielectric behavior of the crystal near the ferroelectric Curie temperature. Its effects are equivalent to those of an external electric field applied to undoped TGS. The bias field, obtained from hysteresis-loop measurements, decreases the peak value of the dielectric constant, shifting and spreading it to higher temperatures; see Figure 16.7. A similar bias-field effect in excitable membranes is the Cole–Moore shift, discussed in Chapter 9, Section 2.3.

2.5. Admittance and noise in ferroelectrics
The dielectric constant and dielectric loss are obtained from small-signal measurements of the complex admittance of a crystal,

(2.12)

where the dielectric permittivity ε* = ε1 + iε2 is separated into real and imaginary parts. These measurements are used for the identification of phase transitions and the recording of transition temperatures. The conductivity σ may be electronic or ionic. Admittance and noise in excitable membranes are discussed in Chapters 10 and 11. For consistency with the literatures of different fields, we are here using a different sign convention for ε2 than the one in Equations 4.7 and 4.8 of Chapter 10. If the phase transition is sharp, the dielectric constant ε1 and the dielectric loss ε2 peak at the same temperature, with both following Curie–Weiss behavior.

The rate at which the dielectric polarization responds to a change in the electric field depends on the dielectric relaxation times τ of the various components of the polarization. Dipolar defects in dielectrics lead to relaxation effects describable by the Debye equation, Equation 4.7 of Chapter 10. For a system of noninteracting dipoles in a symmetrical double-well potential with barrier energy E, the relaxation time is given by the Arrhenius equation

(2.13)
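A numerical sketch of the Arrhenius relaxation time of Equation 2.13, τ = τ0 exp(E/kBT); the attempt time τ0 and barrier energy are assumed, illustrative values:

```python
# Arrhenius relaxation time for a symmetric double well with barrier E
# (Equation 2.13): tau = tau0 * exp(E / (kB * T)).
import math

kB = 1.380649e-23              # Boltzmann constant, J/K
tau0 = 1e-13                   # attempt time, s (assumed)
E_barrier = 0.4 * 1.602e-19    # 0.4 eV barrier in joules (assumed)

def relaxation_time(T):
    """Dipolar relaxation time at absolute temperature T (kelvin)."""
    return tau0 * math.exp(E_barrier / (kB * T))

# The relaxation slows steeply on cooling:
tau_300 = relaxation_time(300.0)
tau_250 = relaxation_time(250.0)
```

With these assumed values the relaxation time grows by more than an order of magnitude between 300 K and 250 K, illustrating how strongly a modest barrier slows the dipolar response.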
In order–disorder ferroelectrics, the enhancement of the local field due to the electrostatic interaction of the dipoles results in a dielectric spectrum of Debye form, but with a temperature dependence of the relaxation time given by the Ising model; see Chapter 15. Dielectric properties in the limit of zero field can also be obtained from the measurement of the power spectrum of the polarization fluctuations, or electric noise.13 The dependence of the dielectric constant of triglycine sulfate on the internal bias field Eb induced by various concentrations of alanine impurities is shown in Figure 16.7.14
Figure 16.7. Dependence of the dielectric constant of triglycine sulfate on temperature and on the internal bias field Eb induced by various concentrations of alanine impurities. Curves 1 to 6 correspond to field values Eb = 0, 5.6, 8.0, 19.2, >34 and >80 kV cm-1, respectively. From Lines and Glass, 1977, after Bye et al., 1972.
3. STRUCTURAL PHASE TRANSITIONS IN FERROELECTRICS
Ferroelectric materials undergo transitions between paraelectric and ferroelectric phases. Materials exhibiting a more complex phase, antiferroelectricity, with oppositely polarized neighboring sublattices, also exist. In these materials a sublattice polarization takes the place of the spontaneous polarization of ferroelectrics. The phase transitions in ferroelectrics and antiferroelectrics have been studied with simplified condensed-state models. The unification of the concepts of ferroelectric and antiferroelectric transitions has resulted in the more general concept of the structural phase transition.

3.1. Order–disorder and displacive transitions
A simple model of structural phase transitions considers a quasi-one-dimensional chain of identical anharmonic oscillators linked to one another by dipole–dipole interactions. The Hamiltonian energy operator describes two interlacing sublattices, A and B. Atoms of sublattice A, with the larger mass, maintain a fixed distance l from each other. The nth atom of sublattice B is at a variable distance un from its corresponding atom in sublattice A. Each oscillator corresponds to a B particle of mass m in a symmetric potential with nearest-neighbor interactions. The interatomic interaction, characterized by a coupling parameter, can lead to instabilities and phase transitions.15
The dynamics of systems of this type is often characterized by vibrations with frequencies that decrease sharply as the critical temperature is approached. Locally ordered regions called clusters may arise. The potential function usually studied has two minima at u = ±a separated by a potential barrier of height h0. The equations of motion of the system are conveniently studied under two different relations between the barrier height h0 and the coupling energy. When the coupling energy is small compared with h0, the probability for a particle to jump to its neighboring well is small, and the system is said to undergo an order–disorder transition. Crystals such as NaNO2, NH4Cl and KCN belong to this class. Under certain conditions the period of oscillation increases without bound and a soft mode arises; see Section 3.3 below.

If, on the other hand, the coupling energy is large compared with the barrier height, the collective excitations of many nodes of the lattice become dominant, and we say that the system undergoes a displacive transition. Crystals of the displacive type include SrTiO3, BaTiO3 and KNbO3. In this case a vibrational state moves along the chain, taking on an essentially collective nature. For vibrations of small amplitude, in which nonlinear effects are negligible, the excitations represent ordinary phonons. When the anharmonicity is taken into account, topological solitons—kinks and antikinks—appear. These are described in Chapter 18.

3.2. Spontaneous electrical pulses
The displacement current observed when the spontaneous polarization of a ferroelectric is reversed is often accompanied by transient current pulses, called Barkhausen pulses. These pulses appear in the switching current when an electric field is slowly increased to reverse the polarization. They were detected by H. Mueller as clicks in a loudspeaker connected via an induction coil to a crystal of Rochelle salt.16 In samples of barium titanate, 10^5 to 10^6 Barkhausen pulses have been counted during a complete polarization reversal. The pulses carry a charge of about 2 × 10^-14 C, both in the direction of the switching current and in the opposite direction. Barkhausen pulses occur when wedge-shaped domains form and grow in the forward direction, and when two domains fuse together. The counting rate of Barkhausen pulses varies in proportion to the volume of reversed polarization, and in materials with biased hysteresis loops it shows the same asymmetry. The shape and number of the pulses vary with the material and its defect content. Barkhausen pulses provide a technique for studying polarization reversal in small volumes.17

3.3. Soft lattice modes
Fluctuations of correlated pseudospins dominate the threshold of a continuous phase transition. This makes the fluctuations responsible for critical anomalies of thermodynamic quantities in the transition region. An alternative explanation of the instabilities of displacive crystals is a lattice mode, known as a soft mode, with a characteristic frequency that depends on temperature, diminishing toward the transition temperature. Soft modes may be either propagating (like an action potential) or diffusive (like an electrotonic potential).
Figure 16.8. Temperature dependence of the real part ε1 of the dielectric function of silver sodium nitrite along the [101] axis at the frequencies marked. From Lines and Glass, 1977, after Gesi, 1970.
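The link between a softening mode frequency and a diverging permittivity can be illustrated numerically. In the sketch below the squared mode frequency is taken to vanish linearly at T0, and the mode's contribution to the static permittivity is taken proportional to 1/ω0²; both coefficients are illustrative assumptions, not values for any material:

```python
# Soft-mode sketch: if omega0(T)**2 = A*(T - T0) on approach from
# above, the static permittivity contributed by the mode, proportional
# to 1/omega0**2, follows a Curie-Weiss law and diverges at T0.
A, T0 = 1.0e24, 300.0            # assumed illustrative values

def soft_mode_freq_sq(T):
    """Squared soft-mode frequency, valid for T > T0."""
    return A * (T - T0)

def mode_permittivity(T, strength=1.0e26):
    """Static dielectric contribution of the soft mode (Curie-Weiss)."""
    return strength / soft_mode_freq_sq(T)

eps_near = mode_permittivity(T0 + 1.0)     # 1 K above the transition
eps_far = mode_permittivity(T0 + 100.0)    # 100 K above
```

One hundred times closer to T0, the mode contribution is one hundred times larger: the Curie-Weiss divergence of Figure 16.8 emerges directly from the softening of the mode.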
In every phase transition, the spontaneous appearance of a nonzero order parameter in the low-temperature phase breaks the inherent symmetry of the system. In ferroelectrics, this order parameter is the spontaneous polarization vector, which appears in a particular direction. The breaking of symmetry may be continuous or discrete. In the continuous case the minima of free energy are infinitesimally close to each other. In the discrete case, as in ferroelectrics and antiferroelectrics, the minima are separate. The frequency of the modes that restore the symmetrical phase must decrease continuously as the temperature approaches the Curie point from below. The frequency of this soft mode will be zero at T = T0.18 Ferroelectric systems in which the soft-mode dispersion occurs well below infrared frequencies exhibit a dielectric response approximating a Debye relaxation. This is shown in the temperature dependence of ε1 for silver sodium nitrite, AgNa(NO2)2; see Figure 16.8.19

3.4. Hydrogen-bonded ferroelectrics
A number of molecules and radicals promote ferroelectricity. Rochelle salt is only one of many ferroelectric tartrates. Other noncentrosymmetric groups, such as sulfates, sulfites, nitrates and nitrites, form ferroelectric materials. Hydrogen bonding plays a key role in many ferroelectrics. The transition between the high-temperature paraelectric state and the low-temperature ferroelectric state is an order–disorder phenomenon. Above the transition temperature, protons are
statistically distributed among crystallographic sites; below it, long-range ordering takes place, lowering the symmetry and giving rise to ferroelectricity. The motion of protons is closely related to the ferroelectric properties of these materials. Their Curie constant (see Equation 1.1) is generally 10^3 K or less. Because of the importance of hydrogen bonding in protein structure, the study of hydrogen-bonded ferroelectrics is of particular interest to the study of voltage-sensitive ion channels. We recall in particular the role of hydrogen bonds in linking the loops of helices.
Figure 16.9. Ordering of hydrogen ions in the O–H⋯O bonds in (a) ferroelectric potassium dihydrogen phosphate, KH2PO4, and (b) antiferroelectric ammonium dihydrogen phosphate, NH4H2PO4. Occupied and empty proton sites connect the PO4 tetrahedra. From Newnham, 1975.
An example of a hydrogen-bonded ferroelectric is potassium dihydrogen phosphate, KH2PO4, abbreviated KDP. The structure consists of a complex of interpenetrating lattices of K+ ions and PO4 tetrahedra. KDP is an order–disorder ferroelectric. Pairs of PO4 groups are linked together by hydrogen bonds of the type O–H⋯O. Theoretical models of potassium dihydrogen phosphate follow the idea that the dielectric properties of the crystal are determined by the configuration of the protons in the hydrogen bonds. In addition to the electrostatic energy configurations, long-range dipolar forces and the possibility of tunneling must be considered; see Figure 16.9a.20 The main contribution to the polarization along its axis comes from the deformation of the (KH2)3+–(PO4)3- complex along the axis. An Ising-type model that relates spontaneous polarization to proton disorder describes the long-range dipole interaction as a sum of individual dipoles. A fuller model is based on a Hamiltonian with three components, one for the motion of the protons, one for that of the massive ions in the lattice and a third term to describe their interactions. The coupling is pictured as an interaction of the O–H⋯O bond dipole moments with the electric field due to polar displacements of the lattice. A coupling between the ferroelectric soft
mode and the transverse acoustic phonon has been studied in a random-phase approximation.

A related compound, ammonium dihydrogen phosphate, NH4H2PO4, is an antiferroelectric in which the K+ ions are replaced by NH4+. However, the symmetry and proton ordering are different, and the ammonium salt does not polarize spontaneously; see Figure 16.9b. The replacement of hydrogen by deuterium in KDP compounds spectacularly raises the Curie temperature, by a factor of nearly 2. In crystals containing a mixture of KDP and deuterated KDP, the saturation polarization increases by as much as 20%. The isotope shift of Tc became the stimulus for a tunneling model in which the proton tunnels between the two equilibrium positions of the hydrogen bond. The greater mass of the deuteron reduces the frequency of tunneling, and thereby significantly affects the mechanics of the phase transition.21

4. FERROELECTRIC PHASE TRANSITIONS AND CONDUCTION
In one class of crystalline systems, the structural change that underlies phase transitions is due to continuous displacements of active ions or molecules. These displacive phase transitions are classified as second order. The displacement vector effects a symmetry change of the crystal at the transition temperature. The displacement of an active group may violate local lattice symmetry, inducing strains in the lattice. The strain effect is significant in the critical region, in which a specific coupling between displacements of active groups and pseudospins is responsible for the phase transition.

4.1. Tris-sarcosine calcium chloride
Tris-sarcosine calcium chloride (TSCC) is a crystal with ferroelastic domains that undergoes a continuous uniaxial ferroelectric phase transition. Sarcosine is methyl glycine, H3C-NH2-CH2COOH, and the formula unit of TSCC is (sarcosine)3CaCl2.
The active group for the ferroelectric phase transition is a Ca(sarcosine)6 complex, in which the Ca2+ ion is surrounded by a distorted octahedron of six carbonyl oxygens; see Figure 16.10.22

4.2. Betaine calcium chloride dihydrate
Crystals of betaine calcium chloride dihydrate, which contain the amino acid betaine, (H3C)3N-CH2COOH, exhibit seven modulated phases at atmospheric pressure between a normal crystalline phase above 164 K and the ferroelectric phase at 45 K. The formula for BCCD is (betaine)-CaCl2·2H2O. In the crystal, betaine groups are planar and form Ca-(betaine)2 complexes. The calcium ions are coordinated by two oxygen atoms of two ligand betaines, two chloride ions and two water molecules. The active groups are mainly in librational motion.23
Figure 16.10. Each calcium ion in the active group of tris-sarcosine calcium chloride is surrounded by an octahedron of oxygen atoms from six sarcosines. From Fujimoto, 1999.
BCCD is an example of a ferroelectric crystal in which the sinusoidal modulations of the pseudospins are not commensurate with the lattice periodicity. These incommensurate phases have been an area of intense study since 1976.24

4.3. Dielectric relaxation in structural transitions
At temperatures away from the critical temperature, the dielectric behavior of the crystal can be interpreted in terms of the symmetry properties of the unit cell. This is not the case in the critical region, where the crystal is spatially inhomogeneous due to sinusoidal dielectric fluctuations. The dielectric response can be treated as a linear response, with the nonlinearity treated as a perturbation. If the motion of a massive molecule cannot follow the oscillating electric field at high frequencies, the motion of the polar pseudospin is dominated by the relaxational damping force. The susceptibility χ, defined in Chapter 10, Equation 4.2, can be represented as a Debye relaxation with a characteristic time constant τ. The dielectric permittivity for a wavevector q is then given by the function

(4.1)

where the ionic polarizability is defined in Equation 4.3 of Chapter 10.
Dropping the subscript q for brevity, we can write Equation 4.1 in the form for Debye relaxation,

ε*(ω) = ε(∞) + [ε(0) - ε(∞)] / (1 - iωτ)    (4.2)
where ε(∞) and ε(0) are the high-frequency and static limits of the permittivity. The sign difference between the denominator in this equation and that of Equation 4.7 of Chapter 10, due to a different sign convention, may be removed by replacing i with -i. As we saw in Equation 4.12 of that chapter, the Debye relaxation can be written in a parametric representation of a circle. Raising the iωτ term to a fractional power, 1 - h, yields the Cole–Cole circle, with its center depressed below the real axis; see Figure 10.6.

4.4. Cole–Cole dispersion; critical slowing down
For a ferroelectric phase transition obeying the Curie–Weiss law, Equation 1.1, the relaxation time will depend similarly on temperature, τ ∝ (T - Tc)^-1. In the frequency domain, the real and imaginary parts of the dielectric permittivity, ε1 and ε2, therefore exhibit critical slowing down near the ferroelectric phase transition. Anomalous dielectric relaxation, with the center of the Cole–Cole semicircle below the real axis, has been observed in crystals of the order–disorder type. In the vicinity of the ferroelectric Curie point, the dispersion is represented by the Cole–Cole relation for the complex dielectric constant.25 This relation involves a critical exponent, as we saw in the constant-phase capacitance measurements of squid axon. Figure 16.11 shows four Cole–Cole arcs for temperatures within a degree of the transition temperature of triglycine sulfate. The largest measured dispersion possesses a dielectric constant greater than 10^5. The measurements show that the parameter h of Equation 4.13 of Chapter 10 increases substantially on the approach to the Curie point, increasing the angle the center of the semicircle makes with the ε1 axis.26

4.5. From ferroelectric order to superionic conduction
Cesium dihydrogen phosphate (CDP), CsH2PO4, is a pseudo-one-dimensional ferroelectric with a transition temperature Tc = 155 K. Like other crystals of the potassium dihydrogen phosphate (KDP) family, it is also a proton conductor with a superionic (si) phase transition. Below room temperature, the ionic conductivity is negligible and the frequency dispersion of the complex dielectric permittivity is determined by critical dipolar relaxation, with an additional dispersion at low frequencies due to domain-wall relaxation. The Cole–Cole diagram in the ferroelectric phase shows that, on cooling from 0.1° to 30° below the transition temperature, the dispersion parameter 1 - h decreases from 0.96 to 0.52.
Figure 16.11. Critical slowing down of the relaxation frequency in the approach to the transition temperature of triglycine sulfate. The frequencies on the arcs are in MHz. From Luther and Müser, 1970.
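The depression of the Cole–Cole arc by the fractional exponent 1 - h can be illustrated numerically. The parameters below are assumed, illustrative values, and the loss is computed in the 1 + iωτ convention, with the loss taken as -Im ε:

```python
# Cole-Cole sketch: raising the (i*omega*tau) term of the Debye
# response to the fractional power 1 - h depresses the center of the
# epsilon'' vs epsilon' arc below the real axis.
import cmath  # (complex arithmetic; cmath imported for clarity of intent)

eps_inf, d_eps, tau = 5.0, 100.0, 1e-6   # assumed illustrative values

def cole_cole(omega, h):
    """Complex permittivity; h = 0 recovers pure Debye relaxation."""
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - h))

# Scan frequencies spanning 1/tau; loss = -Im(eps) in this convention.
omegas = [10.0 ** (3.0 + 6.0 * k / 2000.0) for k in range(2001)]
peak_debye = max(-cole_cole(w, 0.0).imag for w in omegas)
peak_depressed = max(-cole_cole(w, 0.3).imag for w in omegas)
# The depressed arc (h > 0) has a lower, broader loss peak than the
# Debye semicircle, whose peak height is d_eps / 2.
```

The Debye loss peak comes out at d_eps/2, while the h = 0.3 arc peaks well below it, which is the depressed-center geometry of the arcs in Figure 16.11.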
Above 350 K the proton conductivity is controlled by processes at the interfaces between the crystal and the electrodes. The structural transition temperature is Tsi = 503 K. The values of conductivity found above Tsi, between 0.001 and 0.01 S cm-1, are typical of superionic conductors.27

Interesting properties arise in mixtures of ferroelectrics and antiferroelectrics due to frustration between the opposing orderings. Cesium dihydrogen phosphate (CDP), CsH2PO4, has one-dimensional ferroelectric order below 159 K and a superionic transition at 504 K. The related compound, ammonium dihydrogen phosphate (ADP), NH4H2PO4, is antiferroelectric; the paraelectric phases of the two crystals have different structures. The dielectric properties of the mixed crystal, cesium ammonium dihydrogen phosphate, Cs1-x(NH4)xH2PO4, fitted with a pseudo-one-dimensional Ising model, exhibit anomalies relative to pure CDP. Although grown from an aqueous solution of 20% molar concentration of ADP, the crystals were found to have only x = 0.05. Figure 16.12 shows the components ε1 and ε2 of the permittivity along the ferroelectric b axis.28

A striking feature of the mixed crystal is the appearance of a thermally activated conductivity in the paraelectric phase along the a and c axes. The conductivity apparently arises from the mobility of protons (or deuterons, in isotopic exchange experiments). Cycling the temperature results in a large (~40 K) anomalous thermal hysteresis in the conductance. Deuteration raises the transition temperature from 156 K to 268.3 K in CDP and from 159.3 K to 276 K in the mixed crystal. Since the molecular radius of NH4+ is much less than that of the Cs+ for which it substitutes, the ammonium ions are free to move within the larger space.
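The climb of the proton conductivity into the superionic range can be sketched with a thermally activated (Arrhenius) form. The prefactor and activation energy below are illustrative assumptions, chosen only so that the conductivity lands in the quoted 0.001–0.01 S cm-1 window above Tsi while remaining negligible near room temperature:

```python
# Thermally activated ionic conduction:
#   sigma(T) = (sigma0 / T) * exp(-Ea / (kB * T))
# sigma0 and Ea are illustrative assumptions, not fitted CDP values.
import math

kB_eV = 8.617333e-5     # Boltzmann constant, eV/K
sigma0 = 3.0e8          # S K / cm, assumed prefactor
Ea = 0.85               # eV, assumed activation energy

def conductivity(T):
    """Ionic conductivity in S/cm at absolute temperature T (kelvin)."""
    return (sigma0 / T) * math.exp(-Ea / (kB_eV * T))

sigma_superionic = conductivity(520.0)   # just above Tsi = 503 K
sigma_room = conductivity(295.0)         # near room temperature
```

With these assumed values the conductivity is of order 10^-3 S/cm just above the superionic transition but many orders of magnitude smaller at room temperature, reproducing the qualitative behavior described in the text.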
Figure 16.12. Temperature dependence of the real and imaginary parts of the dielectric permittivity along the b axis for cesium ammonium dihydrogen phosphate, Cs1-x (NH4)xH2PO4, with x = 0.05, at frequencies varying from 31.6 Hz to 1.00 MHz. From S. Meschia et al., 1999.
Nuclear magnetic resonance measurements reveal two features: hindered rotation of the ammonium ions at 162 K and proton motion between lattice sites at 328 K.

The electrical conductivity of single crystals and powdered pellets of sodium sulfate, Na2SO4, is shown in Figure 16.13. The crystal structure passes through several phases, numbered I to V. The conductivity, due to Na+ movement, jumps by an order of magnitude as the crystal is heated or cooled.29

4.6. Ferroelectric semiconductors
The contribution of the free energy of the electrons to that of the lattice and the interaction of electrons with the soft ferroelectric mode leads to the appearance of
Figure 16.13. Temperature dependence of the log conductivity of (a) a single crystal of Na2SO4 along the a axis, and (b) a powder pellet. The Arabic numerals indicate the sequence of heating and cooling cycles; the Roman numerals indicate the structural phases. From Choi, 1994.
ferroelectric semiconductors. These nonlinear semiconductors exhibit screening of the spontaneous polarization, induction waves and space charge, as well as photoconductivity. In the spectral region of photosensitivity of a ferroelectric semiconductor such as antimony sulfoiodide (SbSI), illumination can increase the concentration of electrons (or holes) in traps, shifting the transition temperature and other physical properties.30

5. PIEZO- AND PYROELECTRICITY IN BIOLOGICAL TISSUES
Ferroic effects have been observed in DNA and proteins since the 1960s, as reviewed by Herbert Athenstaedt.31

5.1. Pyroelectric properties of biological tissues
Pyroelectric crystals possess a permanent electric dipole moment due to the alignment of molecular dipoles. Temperature changes alter the molecular orientations, producing the pyroelectric effect. Mechanical deformations also modify the dipole moments, leading to a piezoelectric effect. Every pyroelectric body is also piezoelectric. As we saw in Section 6.6 of Chapter 4, temperature pulses perturb ion currents in frog node. This effect is similar to the pyroelectric currents of ferroelectric materials.
5.2. Piezoelectricity in biological materials
Pyroelectric and piezoelectric behavior has been observed in tendon, bone and other tissues.32 The piezoelectric properties of polymers and biological tissues were reviewed by R. G. Kepler and R. A. Anderson33 and by Eiichi Fukada.34 Piezoelectricity and related phenomena have been observed in wood, bone, tendon, silk fibroin, actin and myosin, keratin and nucleic acids, as well as in tissues containing collagen, elastin and muscle proteins. Freeman Cope proposed piezoelectricity and pyroelectricity as bases for force and temperature detection in nerve receptors.35 To Morris Shamos and Leroy Lavine, the apparent universality of the piezoelectric effect in tissues suggested a basic role in physiology, providing a new link between biology and physics.36 A. N. Güzelsu and A. Akçasu proposed an explanation of the action potential as a local disturbance of dipole orientation that propagates along the axon by piezoelectric electromechanical coupling within the membrane.37 More recently, Xiaoxia Dong, Mark Ospeck and Kuni Iwasa observed an unusually large piezoelectric coefficient in the membrane motor of the cochlear outer hair cell. Currents measured in response to sinusoidal stretching showed the reciprocal relationship expected from a piezoelectric motor.38

The pineal gland of the brain controls the circadian rhythm by producing melatonin during the hours of darkness but not when light strikes the retinas. Sidney Lang and collaborators observed crystalline deposits within the gland’s secretory tissue by scanning electron microscopy at a scale of 10 μm. These faceted crystals are similar to bone hydroxyapatite but with an unexpectedly high aluminum content of 3.4%. Second harmonic generation observed in human pineal samples showed a noncentrosymmetry that, according to crystallographic considerations, implies that the samples are piezoelectric.
In a serendipitous discovery, the Lang group also observed second harmonic generation in brain samples even from regions that contained no crystals, such as the pituitary, cortex and cerebellum, suggesting nonlinear behavior in normal tissues.39

6. PROPOSED FERROELECTRIC CHANNEL UNIT IN MEMBRANES
The concept of ferroelectric transformations in the membranes of excitable cells has a history extending back more than 40 years.

6.1. Early ferroelectric proposals for membrane excitability
Herbert Athenstaedt in 1961 noted that living organisms, tissues and cells exhibit ferroelectric properties.40 In a study of brain memory, Peter Fong suggested: "Since the nervous input is electric in nature, one naturally thinks of ferroelectricity as a possible mechanism for brain [memory] recording."41
As we noted in Chapter 3, A. R. von Hippel predicted in 1969:42 A variety of polar coupling phenomena can be expected to produce ferro- and antiferroelectric transitions in ... biological systems. True relations may exist between ferroelectricity, the formation of liquid crystals, and the generation of electric impulses in nerves and muscles.
L. Y. Wei wrote in 1969:43 ... electric excitation and conduction, heat production and absorption, infrared emission and the birefringence change in nerve axon are all due to the field-dipole interaction which brings forth quantum transitions of the dipoles at the membrane interface. Thus Fong's theory and ours are of the same nature and may be grouped under one heading: the dipole theory of nerve.
At an international conference in 1971 on “From Theoretical Physics to Biology,” B. T. Matthias pointed out:44 [D]uring the last decade the number of organic ferroelectrics has increased by orders of magnitude ... One of the most interesting groups is formed by the many [ferroelectric] glycine compounds discovered ... Since glycine is involved in so many biological systems, it seems quite likely that many of the biological systems will ultimately also be ferroelectric. ... [T]he Curie points cover precisely the correct temperature range.
In the discussion, L. Rosenfeld commented on Matthias’s presentation: "In particular, I was somewhat frightened by the appearance of this myth of ferroelectricity."45 Rosenfeld cites an earlier comment by Victor F. Weisskopf, encouraging physicists to look at many-particle interactions and nonlinearities, which require new ways of thinking: “But to jump to conclusions that this has anything to do with living phenomena seems to me to be incredibly presumptuous and incredibly naive.”46 This comment from Weisskopf may help explain Rosenfeld’s “fright” at the “myth” of ferroelectricity.

In 1972, S. P. Ionov and G. V. Ionova proposed a qualitative microscopic model of the postsynaptic membrane as a uniaxial ferroelectric and ferroelastic. The ferroelastic property makes the protein sensitive to a substance, acetylcholine, adsorbed on it; the critical temperature of its order–disorder phase transition is assumed to depend on the adsorbed neurotransmitter. Depolarization brings about a phase transition leading to a sharp increase in ionic conductance. They model ion conductance as a relay mechanism in which metal ions move along equivalent sites in a transmembrane lattice.47

Aharon Katchalsky related domain phenomena in single molecules to the field effects of action potentials:48 ... although piezoelectricity does not develop on a microscopic scale, it seems that other domain phenomena appear in single biopolymer molecules. ... Recently [Dr. Eberhard Neumann and I] have found that strong electric fields of the order of magnitude 20 kV/cm—which is close to the field effects of action potentials in the nerve—may imprint conformational changes on biopolymeric systems in molecular dispersion.
Herbert Fröhlich developed a theory of coupled, coherently moving dipole systems in biological membranes; his model predicts the appearance of a ferroelectric or "quasi-ferroelectric" state. In 1973 he stated:49 [I]n the crystalline state ... the relative ground state would be ferroelectric. In the disordered state it is not. But as you have energy passing through it, or as you have these local electric fields, the ferroelectric state which would be close by it, can be excited. Ferroelectricity is here a manner of speaking of shorter ranges that are polarized.
George Eisenman and R. Margalit noted in 1978:50 ... implications for the orientations and positions of protons in the extended H-bond arrays expected in such channels, which suggests that interesting dielectric properties such as a Curie point, ferroelectricity, and piezoelectricity ... may occur in peptide channels under appropriate conditions of ionic loading.
Donald Edmonds suggested in 1982 that a ferroelectric column of water may exist as an electric field sensor in ion channels.51 However, he later rescinded that hypothesis.52 In 1983, H. Richard Leuchtag and Harvey Fishman wrote,53 Some components of the membrane, particularly the proteins, may contribute a field dependence to the membrane dielectric permittivity, possibly making it ferroelectric.
Leuchtag’s model of a ferroelectric–paraelectric phase transition in excitable membrane units appeared in 1987.54 This was followed up in 1988 by the concept of activation in sodium channels as a ferroelectric–superionic transition. In this model, the ferroelectric state of ion channels characterizes the resting state of an excitable membrane; depolarization triggers the appearance of a disordered state, which permits the conduction of permeant ions.55 Alexander Petrov and collaborators stated in 1991,56 Thus, our experiments provide a good reason to expect some ferroelectric phenomena in biomembranes, with spontaneous polarization vector parallel to the membrane plane, not normal to it. If this is the case, ionic transport process and other phenomena like ATP-synthesis, membrane excitability etc. should be interpreted by paying attention to the membrane ferroelectricity.
In 1992, L. A. Beresnev, S. A. Pikin and W. Haase noted the close similarity between biomembranes and ferroelectric liquid crystals.57 They ascribed the behavior of excitable membranes to changes in the tilt angles of lipid molecules in the membranes. In Chapter 20 we discuss a model in which this role is played by changes in tilt angle—but of segments of the protein channels rather than lipid molecules.
6.2. The ferroelectric–superionic transition model
In the ferroelectric–superionic transition model, I proposed that the voltage-sensitive sodium channel contains a fibrous ferroelectric channel unit, which passes through the membrane and extends into the cytoplasm. When the channel is at resting potential, it is ferroelectric. A depolarization shifts the Curie point to below the temperature of the channel, causing a nonpolar phase to nucleate at the outer surface of the channel and propagate inward. V. M. Gurevich showed that, when ferroelectrics undergo phase transitions, their conductivities frequently jump by several orders of magnitude.58 Thus the abrupt openings and closings of channels could be explained as phase transitions between phases of vastly different ionic conductivities. Although I speculated that this unit was a carbohydrate moiety of the glycoprotein channel, this identification is not critical to the model. The mathematical development of the ferroelectric–superionic transition model is independent of its interpretation; let us briefly review it here.59 Equation 2.3 of this chapter is combined with Equations 2.9 and 4.1 of Chapter 7 and Equation 2.2 of Chapter 8 to obtain the differential equation
(6.1)

Here J(t) represents the total current, ionic plus displacement. In the linear case, with = 1/ and = = 0, this equation reduces to the forced Burgers equation, Equation 6.6 of Chapter 8. The Burgers equation yields a shock-wave solution for a current-clamp case. A more general solution describes a phase transition traveling across the membrane in a transverse wave.60 Suppose at time zero a constant current is turned on through the channel, J(t) = J = const. Then a transverse wave solution

(6.2)

where y is the phase function z − ct, may be found by integrating the ordinary differential equation
(6.3)

where W(D) = uq(D + D³ + D⁵). In the phase-plane method, a new variable w = dD/dy is defined to transform Equation 6.3 into the first-order equation61
POLAR PHASES
379
(6.4)
The solution to Equation 6.4 is
(6.5)
A domain boundary may then be considered to be traveling across the channel, switching the electric displacement from one value to another. If the displacement varies from an initial value D1, with the boundary condition w(D1) = 0, to a maximum D2, so that w(D2) = 0, a layer enhanced in free ions forms at the domain boundary. When D decreases again, a layer depleted of free ions forms. Under these conditions, the solution for the transverse wave may be written
(6.6)

Figure 16.14 illustrates the way in which a moving boundary may be visualized as transforming a channel unit initially in a ferroelectric phase to a nonpolar phase with ion-conducting properties.
Figure 16.14. In the ferroelectric electrodiffusion model of a voltage-sensitive sodium channel, a ferroelectric channel unit at resting potential (a) is in a ferroelectric phase. In the depolarized membrane (b), a phase transition to a nonpolar phase nucleates at the outer channel surface and travels inward. The moving domain boundary carries a layer enhanced in free ions. From Leuchtag, 1988.
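The phase-plane construction just described can be made concrete with a toy computation. The sketch below is a generic illustration only: it substitutes a hypothetical double-well nonlinearity f(D) = D³ − D for the model's actual quintic W(D), whose coefficients are not reproduced here, because that choice has a closed-form kink solution against which the numerics can be checked.

```python
import numpy as np

# Generic illustration of the phase-plane method used in the text: a
# traveling-wave ODE d2D/dy2 = f(D) is reduced, via w = dD/dy, to the
# first-order equation w dw/dD = f(D).  Here f(D) = D**3 - D is a
# HYPOTHETICAL double-well, chosen because its domain-wall (kink)
# solution is known exactly: D(y) = tanh((y - y0) / sqrt(2)).

def w(D):
    # Integrate w dw = (D**3 - D) dD from the boundary D1 = -1, where
    # w(D1) = 0, giving w**2/2 = D**4/4 - D**2/2 + 1/4.
    return np.sqrt(np.maximum(2.0 * (D**4 / 4 - D**2 / 2 + 0.25), 0.0))

# March D(y) forward with dD/dy = w(D), starting just off the fixed point.
dy = 1e-3
D, y = -1.0 + 1e-6, 0.0
ys, Ds = [y], [D]
while D < 1.0 - 1e-6 and y < 50.0:
    D += w(D) * dy
    y += dy
    ys.append(y)
    Ds.append(D)

ys, Ds = np.array(ys), np.array(Ds)
mid = np.argmin(np.abs(Ds))                    # wall center, where D = 0
analytic = np.tanh((ys - ys[mid]) / np.sqrt(2))
err = np.max(np.abs(Ds - analytic))
print(f"wall spans y = 0 to {ys[-1]:.1f}; max deviation from tanh kink: {err:.1e}")
```

The profile switches the displacement between the two boundary values across a narrow wall, the numerical analogue of the moving domain boundary pictured in Figure 16.14.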
This moving transition is interpreted by the assumption that the resting channel is in a ferroelectric state, and that a threshold depolarization initiates a nonpolar phase, which grows at the expense of the ferroelectric phase. The transition is characterized as an order–disorder, discontinuous transition. The transition front is negatively charged, but it is screened by permeant cations. With ionic conduction suppressed, the inwardly moving negative bound charge produces the displacement current experimentally observed as the gating current.62

6.3. Field-induced birefringence in axonal membranes

The refractive index and dielectric coefficient of some ferroelectric materials can be very high, and most ferroelectrics exhibit nonlinear optical effects such as large values of birefringence, which can be altered by mechanical strain and applied electric field.63 Birefringence in ferroelectrics is a result of domain-wall movements induced by electric fields. Since we have seen in Chapter 4, Section 7.1 that excitable membranes exhibit rapid changes in birefringence during and after the passage of an action potential, this represents a similarity in behavior between the two systems.64

6.4. Membrane capacitance versus temperature

In Section 6.7 of Chapter 4, we mentioned that the suggestion of ferroelectricity in excitable membranes was supported by the demonstration,65 on the basis of data by Palti and Adelman,66 that the capacitance of squid axon membrane obeys the Curie–Weiss law. This gives the capacitance per unit area below the transition temperature T0 as a function of temperature T,

C = C0 + k/(T0 − T)   (6.7)

where T0 is an extrapolated Curie temperature and k is a constant. Figure 16.15 shows the membrane capacitance of a squid axon as a function of temperature, with data points fitted to the Curie–Weiss law, Equation 6.7. The parameters for the extrapolated Curie point are C0 = 1.182 µF/cm², k = 2.20 K·µF/cm² and T0 = 49.80 °C.

6.5. Surface charge

As we saw in Section 1.2, pyroelectrics, and so also ferroelectrics, have a surface charge. The spontaneous polarization of ferroelectrics similarly leads to a surface charge, which is usually neutralized by ions in the bounding media. The surface charge of a sodium channel, as determined by Knox Chandler, Alan Hodgkin and Hans Meves67 in 1965, is one electronic charge per (2.7 nm)², or 2.2 µC cm⁻², which lies in the range between the spontaneous polarizations of the ferroelectric crystals diglycine nitrate (1.5 µC cm⁻²) and triglycine sulfate (2.8 µC cm⁻²).68 The volume induction of ferroelectric semiconductors is partially screened by a layer of surface energy levels, the Schottky barrier. This screening can affect the domain structure of a crystal.69
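The figure of 2.2 µC cm⁻² quoted above is simply the elementary charge spread over a (2.7 nm)² patch, as a quick check shows:

```python
# Check the quoted channel surface-charge density: one electronic charge
# per (2.7 nm)^2, expressed in microcoulombs per square centimeter.
e = 1.602e-19              # elementary charge, C
area_cm2 = (2.7e-7) ** 2   # (2.7 nm)^2 in cm^2, since 1 nm = 1e-7 cm
sigma = e / area_cm2       # surface charge density, C/cm^2
print(f"{sigma * 1e6:.2f} uC/cm^2")   # 2.20 uC/cm^2, matching the text
```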
Figure 16.15. Membrane capacitance of a squid axon as a function of temperature. The data points are from Palti and Adelman (1969), and the line is a fit to the Curie–Weiss law, Equation 6.7. From Leuchtag, 1995.
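A Curie–Weiss extrapolation of this kind is easy to reproduce. The sketch below assumes the additive form C(T) = C0 + k/(T0 − T), inferred from the units of the quoted parameters; the data are synthetic, generated from those parameters, not the original Palti–Adelman measurements.

```python
import numpy as np

# Curie-Weiss extrapolation in the style of Figure 16.15, using the quoted
# parameters C0 = 1.182 uF/cm^2, k = 2.20 K*uF/cm^2, T0 = 49.80 C.
# ASSUMPTION: the additive form C(T) = C0 + k/(T0 - T), consistent with
# the parameters' units; the data below are synthetic.
C0, k, T0 = 1.182, 2.20, 49.80

T = np.linspace(5.0, 25.0, 9)     # temperatures well below T0, in deg C
C = C0 + k / (T0 - T)             # synthetic capacitance, uF/cm^2

# For a fixed T0 the law is linear in x = 1/(T0 - T), so an ordinary
# least-squares line through (x, C) recovers C0 (intercept) and k (slope).
x = 1.0 / (T0 - T)
A = np.vstack([np.ones_like(x), x]).T
(c0_fit, k_fit), *_ = np.linalg.lstsq(A, C, rcond=None)
print(f"C0 = {c0_fit:.3f} uF/cm^2, k = {k_fit:.2f} K*uF/cm^2")
```

With noiseless data the fit recovers the input parameters exactly; with real capacitance measurements the same linearization yields the extrapolated Curie point.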
6.6. Field effect and the function of the resting potential

The hypothesis of a ferroelectric channel is consistent with observations if we assume that the closed, excitable channel is a ferroelectric phase, and that the transition to an open channel removes the ferroelectric order.70 The thermodynamic theory of ferroelectrics shows that the Curie point rises with increasing electric field.71 This effect is sufficient to explain the stochastic opening of the Na channels when the membrane is depolarized.72 The voltage threshold for excitability obeys, for short pulses, the relation I·t = Q, where I is the magnitude of the stimulating current pulse and t its duration; see Chapter 4, Section 6.3. While the constant charge Q depends only slightly on temperature, the time constant of excitation, defined as the ratio of Q to the rheobasic current I0, decreases markedly as the temperature is increased.73 Thus the excitation
time will be a minimum at the heat block temperature or upper Curie point, suggesting that depolarizational excitation is due to a downward shift in the Curie point as a result of the decrease in the electric field. The decrease in the transition temperature when the external field is lowered can account for the transition induced by a depolarization of an excitable channel. In the absence of an electric field, the channel in the open state is a selective ion conductor. In the presence of the strong electric field associated with the resting potential (about 10⁵ V/cm, inward), it is an insulator: it is in the closed state. When this field is decreased by a threshold amount, the individual channel stochastically undergoes a discrete transition to the open state. A physical interpretation of these facts is that the closed state of the Na+ channel is ferroelectric and that the decrease of the field across the channel lowers the transition temperature (Curie point), leading to a phase transition to a nonpolar state. The ferroelectric state is maintained by induced forces from the applied electric field. Possible stabilizing forces induced by an electric field are electrostriction, found in all materials; the field–dipole force acting on the membrane-spanning helices; and the piezoelectric force, which depends on the direction of the field.74 The channel at rest is not at equilibrium, but is maintained in an unstable state; it is ready to be released by a depolarization, which allows it to relax into its equilibrium state. Thus in the ferroelectric–superionic transition model the function of the resting potential is to create a strong electric field that will displace the Curie point above the normal temperature of the channel, so that it will be in a ferroelectric state. A sufficiently large depolarization will then lower the field and with it the Curie point, which drops below the channel temperature, leading to a phase transition to a less ordered, nonpolar state.
It is this state that is assumed to be the superionically conducting, metalloprotein state. In this state, the permeant ions can travel across the membrane by hopping from site to site along the channel. Thus it is possible to switch from the ferroelectric state to an ion-conducting state by a depolarization. For this it is necessary that the physiological temperature of the organism lie between the transition temperature at zero field and the transition temperature at the high (resting) field. In this way a reduction in the resting field of sufficient magnitude, a threshold depolarization, must lead to a phase transition from a ferroelectric to a nonpolar state in the ion channel.75

6.7. Phase pinning and the action of tetrodotoxin

In the ferroelectric–superionic transition hypothesis, the transition that dramatically raises the sodium conductance is a condensed-state phase transition, from a ferroelectric to an ion-conducting nonpolar state. This qualitatively explains excitability by a physical process. But many questions remain, such as how to account for the blocking of channel conduction by tetrodotoxin and related neurotoxins. The concept that it "plugs up the pore" is not an explanation on the molecular level, as we saw in Chapter 14. On the other hand, the ferroelectric–superionic transition hypothesis suggests a physical explanation of TTX action.
The opening of a channel requires a phase transition, from ferroelectric to nonpolar. The ferroelectric phase is described quantum mechanically as a collective pseudospin mode. In the critical region such a pseudospin condensate propagates as a wave through the medium. Defects in a crystal can have a pronounced effect on the hysteresis loop and dielectric constant, as we saw in Figures 16.6 and 16.7. In general, defects tend to increase the coercive field, calling for a higher voltage for a polarization reversal to occur. These imperfections can prevent a phase transition from occurring. If the depth of the imperfection potential is significant compared to the energy of the pseudospin condensate, the condensate will be immobilized or pinned in the vicinity of the stationary imperfection.76 If TTX is such an imperfection in the channel, the binding of a single TTX molecule would freeze the channel in a ferroelectric state and so prevent the channel from opening. We recall that TTX and its congeners contain a guanidinium group, H2N+=C(NH2)2, chemically related to the ferroelectric GASH, C(NH2)3Al(SO4)2·6H2O. By binding to the channel, it pins the ferroelectric phase, preventing propagation of the transition front across the channel. Thus the channel cannot enter the ion-conducting nonpolar phase.

7. THE CHANNEL IS NOT CRYSTALLINE

While we have seen many similarities between the responses of voltage-sensitive ion channels and ferroelectric materials, ion channels are not crystalline: crystals are rigid, whereas ion channels have conformational flexibility. The phenomenological models we have considered in this chapter do not provide us with microscopic details of excitable membranes, since they are based on crystalline materials. However, ferroelectricity has also been observed in polymers and liquid crystals.
Since biological membranes are known to be liquid crystals, we will explore in the next chapter an extrinsic form of ferroelectricity that occurs in certain types of liquid crystals.

NOTES AND REFERENCES

1. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999, and references therein.
2. Sidney B. Lang, Physics Today 58(8):31-36, August 2005.
3. M. E. Lines and A. M. Glass, Principles and Applications of Ferroelectrics and Related Materials, Clarendon Press, Oxford, 1977, 144. Reprinted with permission from A. M. Glass, J. Appl. Phys. 40:4699. Copyright 1969, American Institute of Physics.
4. Lines and Glass, 1.
5. Copyright 1994 from Zenon Bochynski, Ferroel. 157:13-18, 1994. Reproduced by permission of Taylor & Francis Group, LLC, http://www.taylorandfrancis.com
6. Lines and Glass, 103.
7. Robert E. Newnham, Structure–Property Relations, Springer-Verlag, New York, 1975, 94-101. The difference between this equation and Equation 3.2 of Chapter 5 is due to a different sign convention.
8. Lines and Glass, 71-73.
9. Lines and Glass, 169-173.
10. Reproduced from A. F. Devonshire, Advances in Physics 3:85, 1954, by permission of Taylor & Francis Ltd., http://www.informaworld.com
11. Lines and Glass, 160-168.
12. Newnham, 93. With kind permission of Springer Science and Business Media.
13. Lines and Glass, 133-139.
14. Lines and Glass, 116; K. L. Bye, P. W. Whipps and E. T. Keve, Ferroel. 4:253, 1972.
15. A. S. Davydov, Solitons in Molecular Systems, D. Reidel Publishing, Dordrecht, 1985, 219-226.
16. Helen D. Megaw, Ferroelectricity in Crystals, Methuen, London, 1957, 32f.
17. Ennio Fatuzzo and Walter J. Merz, Ferroelectricity, North-Holland, Amsterdam, 1967, 233f; Lines and Glass, 111f.
18. R. Blinc and B. Zeks, Soft Modes in Ferroelectrics and Antiferroelectrics, North-Holland, Amsterdam, 1974, 1-18.
19. Lines and Glass, 140; K. Gesi, J. Phys. Soc. Japan 28:1365, 1970.
20. Newnham, 89-91. With kind permission of Springer Science and Business Media.
21. Lines and Glass, 293-321.
22. Minoru Fujimoto, The Physics of Structural Phase Transitions, Springer, New York, 1997, 55-59. With kind permission of Springer Science and Business Media.
23. Fujimoto, 191-201.
24. Fujimoto, 142-150; M. Quilinchini and J. Hlinka, Ferroel. 183:215-224, 1996.
25. Y. Makita, I. Seo and M. Sumita, J. Phys. Soc. Japan 28 (suppl.):268-270, 1970.
26. G. Luther and H. E. Müser, Z. angew. Phys. 29:237, 1970. With kind permission of Springer Science and Business Media.
27. A. I. Baranov, V. P. Khiznichenko, V. A. Sandler and L. A. Shuvalov, Ferroel. 81:183-186, 1988.
28. Copyright 1999 from S. Meschia, S. Lanceros-Méndez, A. Zidanšek, V. H. Schmidt and R. Larsen, Ferroel. 226:159-167, 1999. Reproduced by permission of Taylor & Francis Group, LLC, http://www.taylorandfrancis.com
29. Copyright 1994 from Byoung-Koo Choi, Ferroel. 155:159-164, 1994. Reproduced by permission of Taylor & Francis Group, LLC, http://www.taylorandfrancis.com
30. V. M. Fridkin, Ferroelectric Semiconductors, Consultants Bureau, New York, 1980, vii-ix, 279-307, 9-11.
31. H. Athenstaedt, Naturwiss. 48:465-472, 1961.
32. Herbert Athenstaedt, Ann. N. Y. Acad. Sci. 238:68-94, 1974.
33. R. G. Kepler and R. A. Anderson, CRC Crit. Rev. Solid State Mat. Sci. 9:399-447, 1980.
34. Eiichi Fukada, Quart. Rev. Biophys. 16:59-87, 1983.
35. Freeman W. Cope, Bull. Math. Biol. 35:31-41, 1973.
36. Morris H. Shamos and Leroy S. Lavine, Nature 213:267-269, 1967.
37. A. N. Güzelsu and A. Akçasu, Ann. N. Y. Acad. Sci. 238:339-351.
38. Xiao-xia Dong, Mark Ospeck and Kuni H. Iwasa, Biophys. J. 82:1254-1259, 2002.
39. Sidney B. Lang, 10th Int. Symp. on Electrets, 175-182, 1999; S. B. Lang, A. A. Marino, G. Berkovic, M. Fowler, and K. D. Abreo, Bioelectrochem. Bioenerg. 41:191-195, 1996.
40. H. Athenstaedt, Naturwiss. 48:465-472, 1961.
41. P. Fong, Bull. Ga. Acad. Sci. 30:13-23, 1972.
42. A. R. von Hippel, J. Phys. Soc. Japan 28 (suppl.):1-6, 1970.
43. L. Y. Wei, Bulletin of Mathematical Biophysics 31:39-58, 1969.
44. B. T. Matthias, in From Theoretical Physics to Biology, edited by M. Marois, Karger, Basel, 1973, 1221.
45. L. Rosenfeld, in From Theoretical Physics to Biology, 34.
46. V. F. Weisskopf, in From Theoretical Physics to Biology, 20.
47. S. P. Ionov and G. V. Ionova, Dokl. Biophys. 282:22-24, 1972.
48. A. Katchalsky, in From Theoretical Physics to Biology, 14-15.
49. H. Fröhlich, in From Theoretical Physics to Biology, 14-15; H. Fröhlich, Riv. Nuovo Cimento 7:399-418, 1977.
50. G. Eisenman and R. Margalit, in Proceedings of the Johnson Foundation's 50th Anniversary Conference, edited by J. S. Leigh, P. L. Dutton and A. Scarpa, Academic, New York, 1978, 1-11.
51. D. T. Edmonds, in Biophysics of Water, edited by F. Franks, John Wiley, New York, 1982, 173-175.
52. D. T. Edmonds, personal communication.
53. H. R. Leuchtag and H. M. Fishman, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr., and H. R. Leuchtag, Plenum, New York, 1983, 430.
54. H. R. Leuchtag, J. Theor. Biol. 127:321-340, 1987; 127:341-359, 1987.
55. H. R. Leuchtag, Ferroelectrics 86:105-113, 1988.
56. A. G. Petrov, A. T. Todorov, B. Bonev, L. M. Blinov, S. V. Yablonski, D. B. Fubachyus, and N. Tvetkova, Ferroel. 114:415-427, 1991.
57. L. A. Beresnev, S. A. Pikin and W. Haase, Condensed Matter News 1(8):13-18, 1992.
58. V. M. Gurevich, Electric Conductivity of Ferroelectrics, Israel Program for Scientific Translations, Jerusalem, 1971.
59. Copyright 1988 from H. R. Leuchtag, Ferroel. 86:105-113, 1988 (Correction: For W in Equation 11 read w). Reproduced by permission of Taylor & Francis Group, LLC, http://www.taylorandfrancis.com
60. H. R. Leuchtag, J. Theor. Biol. 127:341-359, 1987; H. R. Leuchtag, Ferroel. 86:105-113, 1988.
61. M. S. Shur, Sov. Phys.-Solid State 10:2087, 2827, 2925, 1969; M. S. Shur, Bull. Acad. Sci. USSR–Phys. Ser. 33:187, 1969; E. V. Chenskii, Sov. Phys.-Solid State 11:534, 1969.
62. H. R. Leuchtag, Proc. 1990 IEEE 7th Intern. Symp. on Applications of Ferroelectrics, IEEE, Piscataway, NJ, 1991, 279-283; H. R. Leuchtag, Proc. 1991 IEEE Northeast Bioeng. Conf., IEEE, Piscataway, NJ, 1991, 169-174.
63. Lines and Glass, 479-481; J. C. Burfoot and G. W. Taylor, Polar Dielectrics and their Applications, Univ. of California, Berkeley and Los Angeles, 1970.
64. H. R. Leuchtag, Proc. 1990 IEEE 7th Intern. Symp. on Applications of Ferroelectrics, IEEE, Piscataway, NJ, 1991, 279-283.
65. Reprinted from H. R. Leuchtag, Biophys. Chem. 53:197-205, copyright 1995, with permission from Elsevier.
66. Y. Palti and W. J. Adelman, Jr., J. Membr. Biol. 1:431-458, 1969.
67. W. K. Chandler, A. L. Hodgkin and H. Meves, J. Physiol. 180:821-836, 1965.
68. Lines and Glass, 629.
69. Fridkin, 41-98.
70. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999.
71. T. Mitsui, I. Tatsuzaki, and E. Nakamura, An Introduction to the Physics of Ferroelectrics, Gordon and Breach, New York, 1976; Lines and Glass, 169 f.
72. H. R. Leuchtag, Proc. 1990 IEEE 7th Intern. Symp. on Applications of Ferroelectrics, IEEE, Piscataway, NJ, 1991, 279-283.
73. R. Guttman, in Biophysics and Physiology of Excitable Membranes, edited by W. J. Adelman, Jr., Van Nostrand Reinhold, New York, 1971, 320-336.
74. H. R. Leuchtag, Biophys. J. 66:217, 1994.
75. Leuchtag and Bystrov, 1999.
76. Minoru Fujimoto, The Physics of Structural Phase Transitions, Springer, New York, 1997, 91-108.
CHAPTER 17
DELICATE PHASES AND THEIR TRANSITIONS
Kenneth S. Cole wrote about living membranes in 1972,1 "Although the structure is not certain, it seems likely to be that of liquid crystals." In Chapters 3, 5 and 14 we described some of the history of the field of liquid crystals, their place in the science of condensed matter and models of excitable membranes based on their properties. Because of their exquisite sensitivity to external stimuli such as temperature changes, stresses and fields, Pierre-Gilles de Gennes has called liquid crystals the "delicate phases of matter."2 This chapter is devoted to a deeper examination of these mesophases, their symmetries, transitions and responses to thermal, mechanical and electrical stimuli. We will focus particularly on phases that exhibit ferroelectric behavior and those that contain metal ions. In Chapter 18 we discuss solitons in liquid crystals, and in Chapter 20 we will look at the liquid crystalline nature of ion channels.

1. MESOPHASES: PHASES BETWEEN LIQUID AND CRYSTAL

As mentioned in Chapter 5, liquid crystals exist in three forms: nematics, cholesterics and smectics. They are also categorized by the way they exhibit their polymorphism. Liquid crystals are commonly used in low-power displays, such as those on laptop computers and cell phones; this book is being written on a computer with a flat panel LCD monitor.

1.1. Nematics and smectics

Mesophases, phases that lie between the isotropic liquid and solid crystalline states, are classified by the form of the density function ρ(r) and the local molecular orientation function n(r). In the case of elongated molecules, the unit vector n(r), the director, indicates the direction of the long axes of the molecules at point r. The phase in which both ρ and n are constant is the nematic phase, abbreviated N.
Mesophases with a density function ρ(r) that is periodic along an axis, designated the z axis, and constant along xy planes, exhibit a layered structure, with the long axes of the molecules parallel to each other in the layers; these are smectic phases. Smectics in which the director is normal to the layer plane are called Smectic A, abbreviated
SmA or SA; those in which the director is tilted are Smectic C (SmC or SC); see Figure 17.1.

1.2. Calamitic and discotic liquid crystals

As we saw in Chapter 5, mesophases are characterized by whether a transition is most easily induced in them by varying the temperature or by changing the concentration; thus they are divided into the categories of thermotropic and lyotropic.3 Figure 17.1 shows phase sequences typical of thermotropic liquid crystals, both calamitic, formed of rod-like molecules, and discotic, formed of disk-shaped molecules.4 Discotic liquid crystals form columnar phases, Co.
Figure 17.1. Phase sequences in thermotropic liquid crystals with increasing temperature. The top sequence shows typical calamitic liquid crystals, and the bottom shows phases of discotic liquid crystals, including columnar phases. The director is n. From Kitzerow and Bahr, 2001.
An example of a rod-like molecule is p-azoxyanisole, about 2.0 nm long and 0.5 nm wide.5
1.3. Helical structures: Cholesterics and blue phases

As discussed in Chapter 2, chiral objects do not contain any planes of symmetry; they occur in one of two enantiomorphic forms, right- and left-handed. The chirality of molecules in a liquid crystal can result in the formation of helical structures. Adjacent chiral molecules have a tendency to form a small angle between them, as shown in Figure 17.2.6
DELICATE PHASES AND THEIR TRANSITIONS
389
Figure 17.2. Adjacent chiral molecules tend to form a small angle between them. From Pieranski, 2001.
Instead of a uniform alignment of the director field, the director in chiral nematic structures is perpendicular to a helix axis, with its azimuthal angle changing continuously. Because it was first observed in derivatives of cholesterol, the chiral nematic phase is also called the cholesteric phase. Media with constant density, ρ(r) = const, and a macroscopically modulated structure n(r) form the category of cholesteric liquid crystals; see Figure 17.3.7 When the chiral nematic phase is miscible with a nonchiral nematic phase, the pitch 2π/q0 can be increased by dilution.
Figure 17.3. Helicoidal molecular arrangement in a cholesteric liquid crystal. From "The Physics of Liquid Crystals" by de Gennes, P. G. and Prost, J. (1974). By permission of Oxford University Press.
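The helicoidal arrangement of Figure 17.3 can be written down directly. A minimal sketch of an ideal cholesteric director field, with the helix along z (the pitch value here is an arbitrary choice in the visible range):

```python
import numpy as np

# Director field of an ideal cholesteric: n lies in the xy plane and its
# azimuthal angle advances linearly along the helix (z) axis with
# wavevector q0, so the structure repeats with pitch p = 2*pi/q0.
# Because n and -n are equivalent, the optical periodicity is p/2.
def director(z, q0, phi0=0.0):
    phi = q0 * z + phi0
    return np.array([np.cos(phi), np.sin(phi), 0.0])

p = 400e-9                  # pitch chosen in the visible range, in meters
q0 = 2 * np.pi / p

n_at_0 = director(0.0, q0)
n_half = director(p / 2, q0)    # half a pitch: director reversed, same axis
print(n_at_0, n_half)           # [1, 0, 0] and [-1, 0, 0], up to rounding
```

The half-pitch reversal is why the selective (Bragg) reflection discussed below occurs at p/2 rather than p.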
In cholesteric liquid crystals, molecular structure and external fields have a profound effect on cooperative behavior and phase structure. When the pitch of the helicoidal structure is of the order of the wavelength of visible light, a striking
appearance of Bragg reflection occurs. Circularly polarized light selectively reflects from the liquid crystal, showing a brilliant color effect that was probably responsible for the first observations of liquid crystals. This selective reflection has been used in many applications, such as thermometers and credit cards. Blue phases are twisted structures that cannot be described by a single twist axis.8 They were probably the first liquid crystal phases that Otto Lehmann observed at the beginning of the twentieth century. Blue phases owe their name to the color seen in early investigations, but they can reflect other colors as well, including violet, yellow and red. Blue phases exist in a narrow temperature interval between isotropic and cholesteric phases. Two cylinders with double twist match if the surface director tilt is π/4, but the region where all three cylinders meet forms a network of defect lines called disclination lines. Figure 17.4 shows the geometry of a blue phase. The pitch of the helix is p. The inset shows the director lines around the core of the disclination.9
Figure 17.4. Blue phase, showing three regions with double twist. Two cylinders with double twist match if the surface director tilt is π/4, but the region where all three cylinders meet is singular, forming a network of disclination lines. The inset shows the director lines around the core of the disclination. From Lavrentovich and Kleman, 2001.
There can be as many as three blue phases in the absence of an electric field. With increasing temperature, they are BPI, BPII and BPIII. BPI and BPII have cubic symmetry, with lattice parameters of the order of visible light wavelengths, while BPIII has the symmetry of the isotropic phase. It is remarkable that a liquid can exhibit cubic symmetry, thus being periodic in space.
BPIII, the "most enigmatic of the blue phases," is also known as the grey or fog phase. It has a filamentary structure, with higher values of shear elasticity and viscosity than the BPII or isotropic phases.10

1.4. Columnar liquid crystals

In addition to nematic and smectic liquid crystals, there are phases with positional order in two dimensions. These strings of one-dimensional liquids are called columnar liquid crystals. They are formed by disk-like molecules positioned in a plane perpendicular to the column axis; see Figure 17.1. The position of a molecule along one axis is not defined with respect to its neighbors in adjacent columns. Chirality may appear in columnar phases either within the column or in the lattice of columns. Loss of mirror symmetry may occur as a consequence of a helical orientation of the molecular director or of the position of the molecules, or by a helical distortion of the column lattice. A net dipole moment may appear in tilted columnar, as well as smectic, phases. In molecules consisting of a rigid core with attached flexible chains of different thicknesses, a kink between core and chain is favored. The kink fixes the orientation of chiral polar groups at the core–chain junction, so that they lose rotational freedom. The dipolar components perpendicular to the column axis do not compensate one another but add to create a net dipole moment. In this way, two-dimensional structures with pyroelectricity and ferroelectricity form.
Figure 17.5. Tilted columnar phases are switched by a reversal of the electric field. When the field E and spontaneous polarization Ps are directed upward, the plane of light polarization is rotated and light passes through the crossed polars. With field and spontaneous polarization downward, no light passes through. From Bock, 2001.
A few chiral tilted columnar systems exhibit electro-optical properties. Tilt-induced dipoles can produce switching in these. An electroclinic effect (see Section 5.7) of a few degrees at tens of V/µm, with no hysteresis, has been observed in some cases. In other cases, bistable hysteretic switching, representing ferroelectric behavior, was noted. The reorientation of dipoles by an electric field is observed optically between crossed polarizers; see Figure 17.5.11
The switching times observed with different materials vary from tens of µs to 1 s, indicating different mechanisms. Columnar structures are found in densely packed DNA and in the α-helix-forming poly-γ-benzyl-L-glutamate (PBLG), with the columns arranged into a hexagonal pattern; see Figure 17.6.12
Figure 17.6. DNA and poly-γ-benzyl-L-glutamate molecules form columnar hexagonal phases. From Bock, 2001, after Livolant, 1991.
2. STATES AND PHASE TRANSITIONS OF LIQUID CRYSTALS

The problem of stability and variability of structure occupies a special place among the physical properties of liquid crystals, because structural transformations in such anisotropic substances lead to a radical change in their properties. The most distinct changes occur in the optical characteristics of liquid crystals. It is now known that the liquid crystalline structure is characteristic of a number of diverse systems, including polymer solutions and biological membranes.13 Let us begin our discussion with a consideration of liquid crystal materials that are macroscopically uniform in three dimensions, that is, bulk liquid crystals. The fundamental property of a liquid crystal is the presence of an orientational degree of freedom, characterized by a macroscopic spatial ordering of the molecules. This degree of freedom gives rise to unique properties in these soft systems, due to the high sensitivity of the spatial distribution of orientational order to changes in temperature, electric and magnetic fields, elastic stresses, viscous flow and concentration in mixtures. In addition to the orientational degree of freedom, liquid crystalline media also exhibit degrees of freedom related to ionic and neutral impurities, elastic deformations, partial positional ordering of molecules and flow patterns. In complex phenomena of
irreversible transport of electric charge, mass and heat, these degrees of freedom may interact. Nondissipative changes may be described by thermodynamic theory based on the symmetry properties of the systems studied. Instabilities in these modulated structures of dissipative media can be studied by a continuum approach. Nonlinear modeling becomes necessary for the study of the behavior of a dissipative system above the instability threshold. These nonlinear models provide a description of phenomena such as orientational turbulence in an inhomogeneous liquid crystalline medium.

2.1. Correlation functions in liquid crystals

While we have described liquid crystals along traditional lines, a more precise form of description is based on correlation functions. The traditional forms of classification are inadequate to describe the rich variety of liquid crystalline structures and their transformations. A more complete analysis becomes possible with a study of the mutual correlations between the positions of the atoms constituting the molecules. In certain highly symmetrical cases, the methods developed for the study of solid crystals are adequate. In general, however, an analysis of multiparticle correlations of atomic distributions is necessary to describe the structure and symmetry of liquid crystals. The simplest of these, the pair correlation function ρ12(r12), is sufficient in many cases. Here r12 is the displacement vector from atom 1 to atom 2, and ρ12 dV2 is the probability of finding atom 2 within a volume dV2 when the position of atom 1 is fixed. However, this function cannot be used to describe structures that do not have a center of symmetry. For the description of chiral structures, more complex correlation functions are needed. An example is the four-particle correlation function between the positions of four atoms.
The presence or absence of a center of symmetry in the molecular system can be characterized by a molecular pair correlation function ρ12M(r12, l1, l2), which depends on the distance between the centers of mass of the molecules and on the orientations l1 and l2 of the long axes of molecules 1 and 2. In a theoretical approach that has proven fruitful, the structure of liquid crystals is described in terms of mathematical functions that exhibit the same symmetries as those postulated for the liquid crystal structure of interest. Such representations should possess all the degrees of freedom of the material, and hence be capable of describing its phase transformations. This approach, called the group-theoretical approach after the branch of mathematics it applies, may eventually be useful for the description of complex biological molecules such as ion channels. Ion channels embedded in a lipid membrane separating two unlike aqueous phases are far more complex than the uniform assemblies of rod-shaped or disk-shaped molecules studied by liquid-crystallographers. Nevertheless, the study of the simpler liquid crystals that have been analyzed can produce valuable insights and results. The study of these assemblies is likely to enhance our understanding of voltage-sensitive ion channels, as these materials exhibit strong dependence on electric and magnetic fields and on mechanical stresses, as well as interesting kinetic and optical responses.
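As a concrete, simplified illustration of a pair correlation function, the sketch below estimates an orientation-averaged radial correlation g(r) from particle positions, a reduced version of the ρ12 discussed above. For an uncorrelated, isotropic-liquid-like configuration g(r) is flat near 1; ordered mesophases would instead show structure, such as peaks at layer or column spacings. Box size, particle number and binning are arbitrary choices for the illustration.

```python
import numpy as np

# Estimate a radial pair correlation g(r) by histogramming all pairwise
# separations in a periodic 2D box and normalizing each radial shell by
# its area and by the mean density.  For uncorrelated positions g(r) ~ 1.
rng = np.random.default_rng(0)
L, N = 10.0, 1000
pts = rng.uniform(0.0, L, size=(N, 2))       # uncorrelated 2D "molecules"

# Pairwise separation vectors with the minimum-image convention.
d = pts[:, None, :] - pts[None, :, :]
d -= L * np.round(d / L)
r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, k=1)]

edges = np.linspace(0.1, 3.0, 30)            # 29 radial bins, r < L/2
counts, _ = np.histogram(r, bins=edges)
shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
density = N / L**2
g = counts / (0.5 * N * density * shell_area)   # N/2 unordered reference pairs
print(g.round(2))                            # ~1.0 in every bin
```

The same estimator applied to a smectic or columnar configuration would reveal the layer or lattice periodicity directly in the positions of the peaks of g(r).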
CHAPTER 17
2.2. Symmetry, molecular orientation and order parameter

An isotropic liquid is the most symmetrical of the phases a liquid-crystalline material can exhibit, since all rotations leave it unchanged, even for a medium composed of chiral molecules; it is the form the material attains at high temperatures, and in it the molecular orientations are uncorrelated. Lowering of the symmetry of the isotropic liquid by phase transitions can lead to nematic, cholesteric and/or smectic phases. As the temperature is lowered past a transition point, the liquid crystal may enter a nematic phase, a smectic phase of type A with or without a center of symmetry, or a cholesteric phase. In these forms the molecular orientations are correlated; in a uniaxial liquid crystal, the two orientations of the director n are equivalent.

The presence or absence of molecular order can be expressed in terms of an order parameter Q, which is zero in the symmetric, isotropic phase but nonzero in the ordered phases. The concept of an order parameter, discussed in Chapters 15 and 16, is useful in the discussion of the spontaneous symmetry breaking that occurs at a phase transition. For the nematic, smectic or cholesteric cases, an order parameter can be formed as a multicomponent (tensor) quantity built from local averages of quadratic combinations of the projections of the unit vector l along the molecular axis. The order parameter, Qik(r), determines the fraction of the molecular axes oriented along n at a given point r. In the isotropic phase, where the molecular orientations are random, and hence uncorrelated, the order parameter vanishes, Q = 0.

2.3. Free energy of the inhomogeneous orientational structure

Although the orientational order parameter Q is a function of temperature, it is nearly independent of temperature far from the phase transition points, and so is not subject to strong thermal fluctuations.
The quantity Q can therefore be viewed as a constant for an anisotropic medium at a given temperature. The director n, on the other hand, is subject to appreciable thermal fluctuations between transition points, and is also very sensitive to external fields. This sensitivity is the basis for practically all the instabilities of orientational structure arising from external actions and temperature changes.

Orientational deformations in a nematic liquid crystal can be described in terms of an expression for the total free energy F. This expression is a function of n(r), written so as to satisfy the symmetry properties of the nematic liquid crystal, the fact that n is a unit vector and the equivalence of the directions n and -n. Since n² = 1, ∇(n²) = 0. The contribution to F that includes the volume energy but not the surface energy is F0, given by

F0 = ½ ∫ [K1 (∇·n)² + K2 (n·(∇×n))² + K3 (n×(∇×n))²] dV    (2.1)

The Frank coefficients K1, K2 and K3 characterize the orientational elasticity of the material. The K1 term represents splay, K2 twist and K3 bend. For p-azoxyanisole at
120°C, the measured elastic constants are K1 = 0.7 × 10⁻¹¹ N, K2 = 0.43 × 10⁻¹¹ N, and K3 = 1.7 × 10⁻¹¹ N.14 The quantity n·(∇×n) in the twist term is called the Lifshitz invariant. The basic deformation modes of nematics and smectics are illustrated in Figure 17.7.15 In smectics, twist deformations require screw dislocations separated by grain boundaries, and bend deformations require edge dislocations.
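The quoted Frank constants can be used to put numbers on the free-energy expression above. The sketch below is an illustrative aside; the twist wave number is an assumed value, not from the text. It evaluates the free-energy density of a uniform twist n(z) = (cos qz, sin qz, 0), for which the splay and bend terms vanish and only the K2 term survives.

```python
import numpy as np

# Frank elastic constants for p-azoxyanisole at 120 °C (values quoted in the text)
K1, K2, K3 = 0.7e-11, 0.43e-11, 1.7e-11   # newtons

def frank_density(splay, twist, bend):
    """Frank free-energy density (J/m^3) from the three deformation measures:
    splay = div n, twist = n . (curl n), bend = |n x (curl n)|."""
    return 0.5 * (K1 * splay**2 + K2 * twist**2 + K3 * bend**2)

# For the uniform twist n(z) = (cos qz, sin qz, 0):
#   div n = 0, n . (curl n) = -q, n x (curl n) = 0,
# so the density reduces to (1/2) K2 q^2.
q = 2 * np.pi / 0.5e-6        # one full director turn per 0.5 micrometre (assumed)
f_twist = frank_density(0.0, -q, 0.0)
```

The result, a few hundred J/m³ for a sub-micrometre twist, illustrates how soft these orientational deformations are compared with ordinary elastic solids.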
Figure 17.7. The basic deformation modes of nematics and smectics. Splay (a), twist (b) and bend (c) deformations in a nematic director field, and splay (d), twist (e) and bend (f) deformations in a smectic director field. From Kitzerow, 2001.
2.4. Modulated orientational structure

A nematic fluid containing chiral molecules cannot go into a spatially uniform phase, because it is unstable with respect to spatial modulations of the director. A macroscopic inhomogeneity will form in cholesteric liquid crystals over distances large compared to molecular dimensions. In this case the expression for the free energy density F0 must contain an invariant of the form

K2 q0 n·(∇×n)    (2.2)

Here K2 is the temperature-dependent Frank coefficient for twist elasticity and q has the
dimensions of a wave number of the inhomogeneous structure. This cholesteric structure is helicoidal. Figure 17.3 shows a cholesteric mesophase with right-hand rotation, counterclockwise upward. A director with components
n = (cos q0z, sin q0z, 0)    (2.3)
minimizes expression 2.2. The molecular centers have no order, but the molecules in a given plane perpendicular to the z axis are parallel to each other. The structure is periodic along the z axis. Because of the assumed equivalence of n and -n, the spatial period of the director's rotation, L = π/|q0|, is equal to one-half the pitch. The sign of q0 distinguishes between right- and left-handed helices.

2.5. Free energy of a smectic liquid crystal of type A

As the temperature is lowered, a liquid crystal in the isotropic phase or the nematic phase may spontaneously form a wave structure. This structure, a kind of one-dimensional crystal, may be represented by the density wave

ρ(z) = ρ0 + |ψ| cos(kz + φ)    (2.4)

The amplitude of the wave is |ψ|, the phase is φ, and the layer thickness l is 2π/k. This structure, in which the director n is parallel to the wave vector k, is a smectic liquid crystal of type A. In many cases, the free energy density can be expressed as the sum of three terms, representing the nematic and smectic A phases and their interaction:

F = FN + FA + FNA    (2.5)

The first term represents the free energy density of the uniaxial nematic phase N with order parameter Qij, the second term that of the SmA phase A with order parameter ψ, and the third term the interaction NA. The additional orientational order entailed in the appearance of the density wave in an N → A transition from nematic to smectic A is due to the increased attractive forces between molecules, which decreases the energy of the system.

From these equations and some additional assumptions, a phase diagram of the liquid crystal can be generated, as shown in Figure 17.8.16 The constants appearing in the expressions for the free energies and order parameters depend strongly on the lengths of the molecules. This has been demonstrated experimentally by the use of homologous series of compounds with varying lengths. Phases are shown as a function
of temperature and molecular length l for a homologous series of substances. Curve TIN separates the isotropic from the nematic phase, and curve TNA separates the nematic from the smectic A. Curves TIN, TNA and TIA intersect at a triple point. The phase diagram shows that the smectic A phase can be entered from either the isotropic or the nematic phase.
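The helicoidal director of Equation 2.3 and its half-pitch periodicity can be checked directly. The following sketch is an illustrative aside; the pitch value is assumed, not from the text.

```python
import numpy as np

def director(z, q0):
    """Cholesteric director field of Equation 2.3: a helix about the z axis."""
    return np.array([np.cos(q0 * z), np.sin(q0 * z), 0.0])

q0 = 2 * np.pi / 1.0e-6          # assumed pitch of 1 micrometre
L = np.pi / abs(q0)              # spatial period of the director's rotation

n0 = director(0.0, q0)
nL = director(L, q0)
# After advancing by L = pi/|q0| the director has rotated by 180 degrees, so
# n(z + L) = -n(z); since n and -n are physically equivalent, the structure
# repeats with period L, one-half the pitch.
```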
Figure 17.8. Phase diagram for phase transformations between isotropic (I), nematic (N) and smectic A (A) phases. From Pikin, 1991.
The relations on which the phase diagram shown in Figure 17.8 is based also apply to transitions from the isotropic phase into the cholesteric phase and into the smectic phase of type A without a center of symmetry, called the A* phase.

2.6. Stability of the smectic phase

How stable is the structure of a type-A smectic with respect to stresses and external fields? Under what conditions can thermal fluctuations smear out the periodic function ρ(z)? In the absence of external influences, smectics A with central symmetry and nematics are stable relative to the formation of a modulated structure, since they have no Lifshitz invariants. Smectics A*, composed of chiral molecules without central symmetry, as well as cholesterics, admit the existence of a Lifshitz invariant n·(∇×n), as used in Equations 2.1 and 2.2. If the director n tilts away from the z axis, this is of the type
nx ∂ny/∂z - ny ∂nx/∂z    (2.6)

Note that this expression reduces to q0 for the cholesteric case of Equation 2.3. Such inclinations of the director lower the symmetry of the smectic layer. When the layer thickness d is small but not microscopic, elastic interactions prevent the smearing of the periodicity of the smectic-A density function by thermal fluctuations. Weak external perturbations may result in a periodic flexing of the smectic layers, as shown in Figure 17.9. The director remains perpendicular to the plane of
the smectic layer. When the tensile stress exceeds a threshold, dislocations appear in the smectic layer structure.
Figure 17.9. A periodic modulation appears under homogeneous stretching along the z axis. From Pikin, 1991.
2.7. Phase transitions between smectic forms

The structure and properties of liquid crystals depend on their symmetries. The most symmetrical smectic form is that of type A, which is invariant to all rotations about the z axis and has a horizontal mirror plane. However, the asymmetrical smectic A* phase has no mirror plane. Let us consider the transition when the starting phase is smectic A*. The molecular clusters in this phase are nonpolar but assume one of two enantiomorphic forms, right- or left-handed. They are invariant to the infinite set of all rotations about the axis perpendicular to the layer plane and to 180° rotations about any axis lying in the plane. Since these axes of symmetry intersect at a point, this symmetry is said to be a point symmetry; it may be symbolized by a cylindrical screw. This point symmetry also characterizes the two-particle correlation function ρ12M and the point symmetry of each layer. A phase transition from this highly symmetrical phase results in a phase of lower symmetry. There are a number of such phases. One of particular interest here is the phase C, which results from the tilting of the director; see Figures 17.1 and 17.10.
2.8. Inversions in chiral liquid crystals

In some materials, properties that depend on their chirality invert their directionality with temperature. These include inversions in the helical twist direction in cholesteric and smectic C* systems, and reversals in the direction of spontaneous polarization in ferroelectric phases.17 These are examples of spontaneous symmetry breaking.

3. ORDER PARAMETERS AND EQUILIBRIUM CONDITIONS

Liquid crystals are complicated systems with many degrees of freedom. The order parameters corresponding to these degrees of freedom interact to determine physical properties, such as the form of the phase diagrams and the nature of phase transitions. These properties are determined by the conditions for thermodynamic stability of the liquid crystalline modifications.

3.1. Biaxial smectics

In the transitions A → C and A* → C*, a uniaxial smectic phase makes a transition by tilting, with a corresponding change of optical, electric and magnetic properties. Denoting as z the direction normal to the smectic layers, we define the polar angle θ as the tilt of the molecular director from the z axis. The director n and the z axis form a plane that makes an azimuthal angle φ with the x axis, as shown in Figure 17.10.18
Figure 17.10. Orientations in the smectic C phase. The director n of the molecule and the z axis form a plane at angle φ with the x axis. The unit normal of this plane is n', in the xy plane. From Pikin, 1991.
The order parameter has two components, reflecting the tilt θ and the azimuthal angle φ. It can be written in complex form
ξ = θe^{iφ}    (3.1)
In the expression for free energy, ξ appears only in the combination ξξ* = θ². Thus fluctuations in φ do not affect the energy. Changes in θ, on the other hand, are connected to compressions and dilatations of the smectic layer and require a considerable elastic energy. From symmetry considerations we can write an order parameter. Its components ξ1 and ξ2 are the quadratic combinations nznx and nzny of the components of n:

ξ1 = nznx,  ξ2 = nzny    (3.2)

The structure has a second axis of symmetry, a twofold axis, and so is biaxial. The unit vector in the xy plane perpendicular to the plane of n and z is designated n'. The symmetry of the smectic C phase permits the director n' to rotate regularly from one layer to the next, leading to a screw axis in the modulated C phase. This symmetry conforms to a Lifshitz invariant,
ξ1 ∂ξ2/∂z - ξ2 ∂ξ1/∂z    (3.3)

From Figure 17.10, the components of the unit vector n can be seen to be

n = (sin θ cos φ, sin θ sin φ, cos θ)    (3.4)
The modulated phase has the structure shown in Figure 17.11. The director rotates about the z axis, with the angle φ varying while θ remains constant.

In writing equation 3.1 for the order parameter, we implicitly assumed that the degree of smectic order is independent of temperature. This assumption is valid when the transitions N-A and A-C are widely separated on the temperature scale. If the phase transitions are close together, the interaction between the transition parameters must be taken into account. It may be necessary to go beyond the thermodynamic framework and bring microscopic theory to bear on the problem.

3.2. The role of fluctuations

Phase transformations identified as second-order transitions in phenomenological theory are actually first-order transitions, because an additional degree of freedom comes into play. In addition to the thermodynamic averages of the variable, it is necessary to consider the thermal fluctuations in the orientation of the director, n(r).
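The relations among Equations 3.1, 3.2 and 3.4 can be verified numerically. The sketch below is an illustrative aside; the tilt and azimuth values are arbitrary. It builds the director from θ and φ, forms the complex combination ξ1 + iξ2, and confirms that its modulus depends on θ alone, so that azimuthal fluctuations leave ξξ* unchanged.

```python
import numpy as np

def director(theta, phi):
    """Unit director of Equation 3.4, tilted by polar angle theta from z."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def order_parameter(theta, phi):
    """Smectic-C order parameter of Equation 3.2, combined as xi1 + i*xi2."""
    nx, ny, nz = director(theta, phi)
    return complex(nz * nx, nz * ny)

theta, phi = 0.1, 0.7            # small tilt, arbitrary azimuth (radians)
xi = order_parameter(theta, phi)
# xi = sin(theta)cos(theta) e^{i phi}; for small tilt this is approximately
# theta e^{i phi}, recovering the complex form of Equation 3.1. Its modulus
# is independent of phi, so azimuthal fluctuations cost no energy.
```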
Figure 17.11. Modulated structure of a chiral smectic phase. From Pikin, 1991.
This theory leads to the prediction of a small temperature hysteresis for the nematic–smectic phase transition.19

3.3. Effect of impurities

The structure and properties of liquid crystals are very sensitive to the presence of impurities in them. In some cases this may lead to a renormalization of the critical indices of power laws.20 We recall the discussion of power laws in Chapter 15; we will apply this to the constant-phase capacitance of axonal membranes in Chapter 18. Note that permeant ions can be considered “impurities” in an ion channel.

4. FIELD-INDUCED PHASE TRANSFORMATIONS

Electric and magnetic fields induce structural transformations in liquid crystals because of the anisotropy in their dielectric and diamagnetic properties. Field-induced instabilities can affect the orientational state of nematics and smectics and the periodic orientational structure of cholesterics. These changes in macroscopic structure arise at a threshold value of the field and change the symmetry of the liquid crystal discontinuously. They are equivalent to a second-order phase transition. The magnitude of the distortion of the order parameter for suprathreshold fields varies continuously from zero to values of the order of unity in strong fields.
4.1. Dielectric permittivity of liquid crystals

In an anisotropic medium, characteristics such as the dielectric permittivity are direction-dependent. The components of the electric induction vector D are related to the components of the electric field E by the equations

Di = Σk εik Ek    (4.1)
The nine components εik of the permittivity matrix are functions of the director n. The diagonal components, such as εxx, are equal to each other and can be labeled ε. The off-diagonal components such as εxy obey expressions of the form

εxy = εa nx ny    (4.2)

where ε⊥ and ε∥ are dielectric constants measured in directions perpendicular and parallel to the local director. The difference εa = ε∥ - ε⊥ is the dielectric anisotropy. The presence of an electric field contributes the term

-(1/8π) Σik εik Ei Ek    (4.3)

to the expression for the free energy density. From Equations 4.1-4.3, apart from a term independent of the director, this becomes

FE = -(εa/8π)(n·E)²    (4.4)

The local electric field E is the externally imposed field altered by the presence of the dielectric. Analogous expressions can be written for the magnetization induced in a nematic liquid crystal by a magnetic field.21 When electric fields are applied to liquid crystals, molecules tend to align, either parallel to the field (for εa > 0) or normal to it (for εa < 0). In blue phases, an increasing field lengthens the lattice parameter and causes birefringence. When the field is further increased, phase changes appear to other blue phases and ultimately the nematic phase. The orientation of blue-phase crystallites is also affected by electric fields.22

4.2. Unwinding the helix

The dielectric instability of a cholesteric structure consists of an unwinding of the helix. This occurs for cholesterics with positive dielectric anisotropy εa under the action of an
electric field E perpendicular to the axis of the cholesteric helix, z. For fields smaller than the critical field Ec, the spatial period of the structure h increases with increasing field E, becoming much greater than the unperturbed pitch h0. The unwinding of the helix is illustrated in Figure 17.12.23
Figure 17.12. Unwinding of the helix of a cholesteric structure. (a) Zero field; (b) The pitch h increases with increasing electric field; (c) At the critical field Ec the helix is unwound and the director is collinear with the field. From Pikin, 1991.
At the transition point E = Ec and higher fields, the directors of the molecules are parallel and the structure becomes nematic. Analyses confirmed by experiment show that the threshold field Ec depends on the pitch of the undistorted helicoid h0 and the dielectric anisotropy εa. In liquid crystals that do not exhibit spontaneous polarization, the effects are quadratic in the external field. The threshold nature of these transitions is a consequence of the finite thickness of the liquid-crystal layer and the rigidity of the boundary conditions, which give rise to finite gradients of the orientational disturbances.

4.3. The Fredericks transition

When a magnetic or electric field above a certain threshold is applied to a nematic liquid crystal, a change in the orientation of the molecules is induced. This effect, discovered and studied by V. K. Fredericks and V. Zolina, is a consequence of the anisotropy of the diamagnetic and dielectric susceptibilities of the ensemble of nematic molecules. The effect depends on the elastic properties of the medium and on the boundary conditions. The Fredericks transition is of second order.24
A uniform nematic structure is perturbed by a magnetic field above its critical value H = HF, which depends on the direction of the field relative to the directors; see Figure 17.13.25 The instability of the orientational structure due to an external electric field is analogous to that due to a magnetic field, if conductivity is neglected. The transition sets in at the Fredericks threshold field, E > EF.26
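The magnetic Fredericks threshold has the standard textbook form HF = (π/d)√(K/χa) for a layer of thickness d, where K is the Frank constant appropriate to the deformation geometry and χa the diamagnetic anisotropy. The formula and the numerical values in the sketch below are standard illustrative assumptions, not figures from the text.

```python
import numpy as np

# Magnetic Fredericks threshold H_F = (pi/d) * sqrt(K / chi_a), Gaussian units.
# Illustrative values of the order typical for nematics (assumed, not from the text):
K = 0.7e-6        # dyn (= 0.7e-11 N), splay constant of PAA quoted earlier
chi_a = 1.2e-7    # cgs diamagnetic anisotropy per unit volume
d = 20e-4         # layer thickness: 20 micrometres, expressed in cm

H_F = (np.pi / d) * np.sqrt(K / chi_a)   # threshold field in oersted
```

Note the inverse dependence on the layer thickness d: thinner layers require stronger fields, which is why the threshold character of the effect reflects the boundary conditions as much as the bulk elasticity.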
Figure 17.13. The Fredericks transition induced by a magnetic field in a nematic liquid crystal layer. The starting orientations are shown in the top row and the distorted orientations below. (a) Bend, (b) splay and (c) twist deformations. From Pikin, 1991.
5. POLARIZED STATES IN LIQUID CRYSTALS

In general, liquid crystal systems do not exhibit an intrinsic ordering of the constant dipole moments of their molecules. The electric polarization P of the medium does not exhibit a singularity as temperature varies. The contribution of polarization to the free energy F can be written as

FP = ½ Σik (χ⁻¹)ik Pi Pk - P·E    (5.1)

where the summations are over the three space coordinates, χ⁻¹ is the inverse dielectric susceptibility tensor and E is the electric field in the medium. The dielectric susceptibility tensor χ = (ε/ε0) - 1 is positive at all temperatures. A polarization in the absence of an electric field can be induced only by a phase transformation involving the orientations of the molecules. In this transformation, the dielectric susceptibility χ of the medium is changed by Δχ. Such a change can also
be induced by orientational distortions of the liquid crystal due to an external field. For a polarized state to exist in the medium, its polarization vector must be linked to the orientational degrees of freedom by a relation that depends on the symmetry properties of the liquid crystal. This appearance of polarization is somewhat analogous to piezoelectricity in solids. The analogy is incomplete, however, and the term flexoelectric is used for this effect.

5.1. Flexoelectric effects in nematics and type-A smectics

In a nematic liquid crystal with a center of symmetry, a flexoelectric effect can occur, as shown by Robert B. Meyer.27 This is a linear effect due to the formation of a modulated orientational structure n(r) induced by an electric field E. It is due to a splay deformation caused by the anisotropic shape of molecules with constant dipole moments. The polarization per unit area Ps is given by

Ps = f (1/R1 + 1/R2)    (5.2)
where f is an area flexoelectric coefficient; see Figure 17.14.28
Figure 17.14. Flexoelectric polarization Ps induced by curvature. R1 and R2 are the principal radii of curvature of the membrane. The flexocoefficient f for the case shown is positive. From Petrov, 1999.
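Equation 5.2 can be evaluated directly. In the sketch below the flexocoefficient and the radii of curvature are assumed values of the order reported for lipid membranes, not figures from the text.

```python
# Flexoelectric polarization per unit area induced by curvature (Equation 5.2):
#   Ps = f * (1/R1 + 1/R2)
# Assumed illustrative values, of the order reported for lipid membranes:
f = 1e-18          # area flexocoefficient, coulombs
R1 = 50e-9         # principal radius of curvature, 50 nm
R2 = 50e-9         # equal radii: a spherically curved patch

Ps = f * (1.0 / R1 + 1.0 / R2)   # dipole moment per unit area, C/m
```

For this symmetric 50 nm curvature the total curvature is 4 × 10⁷ m⁻¹, giving Ps of order 10⁻¹¹ C/m; reversing the sign of the curvature reverses the polarization, which is the essence of the linear flexoelectric coupling.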
The flexoelectric effect can be explained as a linear coupling between the electric polarization and splay and bend deformations of liquid crystals. It is observed in both homogeneous and inhomogeneous fields, but the temperature dependence is different in the two cases. While the flexoelectric effect in homogeneous fields depends on a dipolar interaction, the interaction is quadrupolar in inhomogeneous fields. In the latter case, an induced optical birefringence appears. 5.2. Flexoelectric deformations The theory of the flexoelectric effect has been used to explain the response of nematic layers to uniform electric fields. The Fredericks effect is excluded in these experiments
by a suitable choice of dielectric anisotropy. Boundary effects at the solid surfaces bounding the nematic layer determine the molecular orientation in the unperturbed state. The flexoelectric deformation is an orientational distortion in the xz plane, which depends on the z-coordinate.29 The polar angle θ has a different dependence on z in the flexoelectric and Fredericks deformations. For the flexoelectric effect, the functional dependence is odd, with θ vanishing at some intermediate value of z, while for the Fredericks effect, the dependence is an even function and θ attains a maximum value at some intermediate position z. These intermediate values lie at the center of the layer only when the boundary conditions are symmetrical. When the orientation of the electric field E is changed, the flexoelectric deformation takes on a threshold character. The spatial distribution θ(z) of the polar angle is a function of the boundary conditions and the direction and magnitude of E. This effect has a polar character, in that the responses when the field is reversed differ in magnitude.

5.3. The flexoelectric effect in cholesterics

Although nematics are nonpolar, splay or bend deformations lead to polarization for some molecular shapes. In addition to the helix unwinding shown in Figure 17.12, flexoelectric deformations appear that reorient the optic axis, and the medium becomes biaxial. When an electric field is applied normal to the pitch axis of such a cholesteric liquid crystal, the helix distorts, as shown in Figure 17.15.30
Figure 17.15. The flexoelectric effect in a cholesteric liquid crystal. The electric field perpendicular to the plane of the paper rotates the directors. From Chilaya, 2001.
5.4. Polarization and piezoelectric effects in chiral smectics

The phase transition from smectic A* to smectic C* results in a lowering of the symmetry from a continuous rotational symmetry about the z axis perpendicular to the planes to a twofold rotation about an axis of symmetry in the xy plane. This transition can be accompanied by the appearance of a spontaneous polarization P, with components Px and Py. The polarization vector is parallel to the twofold symmetry axis.

We can obtain an order parameter of two components for the phase transition A* → C* from symmetry considerations. Its components ξ1 and ξ2 are the quadratic combinations nznx and nzny of the components of n:

ξ1 = nznx,  ξ2 = nzny    (5.3)

The two-component order parameter of the smectic C* phase is illustrated in Figure 17.16.31 From the figure, we can read off the components of the director, as in Equation 3.4. The polarization P is parallel to the second-order axis n'. Its components Px and Py transform, respectively, as ξ2 and -ξ1 in chiral smectics. This piezoelectric coupling is given by the relations

Px = μξ2 = ½μ sin 2θ sin φ,  Py = -μξ1 = -½μ sin 2θ cos φ    (5.4)
where μ is a piezoelectric coupling coefficient and a double-angle trigonometric identity was used.
Figure 17.16. The two-component order parameter of the smectic C* phase. From Pikin, 1991.
5.5. The electroclinic effect

A proportionality similar to Equation 5.4 also exists when the polarization vector P is replaced by the electric field vector E. The relation then describes an inclination of the molecules, such as those in the chiral untilted smectic phase A*, induced by an electric field. This effect, called the electroclinic effect, was first observed by Stephen Garoff and Meyer.32 Thus the application of an electric field can lower the symmetry, inducing the transformation from A* to C*. The electroclinic effect gives rise to a helicoidal twisting of the molecular axes and consequently of the dipole moments. The flexoelectric effect contributes to the spontaneous polarization P of the C* phase. Conversely, the flexoelectric effect can also be induced by a distortion of the orientational structure of the liquid crystal. As the temperature is raised to Tc, the polar angle θ vanishes and the A* phase is restored.

5.6. The electrochiral effect

In an experimental configuration in which the director is homeotropic on one of the substrates and homogeneous on the other, the hybrid aligned nematic (HAN) configuration, the anchoring energy and the sum of the flexoelectric moduli can be evaluated. In this configuration, the flexoelectric effect produces twist deformation in an electric field normal to the initial director plane. In the electrochiral effect, charged chiral ions induce twist in the HAN structure. The twist moves through the cell from the homeotropic to the homogeneous boundary, in accordance with the polarity of the applied field. The electrochiral effect is linear, and depends on the sign of the electric field. The effect is observed only for small fields, because larger fields induce a Fredericks transition to the homeotropic orientation, suppressing the linear effect.33
6. THE FERROELECTRIC STATE OF A CHIRAL SMECTIC

Ferroelectric liquid crystals are ferroelectrics with fluid properties.34 However, the existence of ferroelectric liquid crystals raises a problem. The interaction of constant electric dipole moments that are rigidly coupled to the molecules appears to be weaker than the van der Waals attractive forces between molecules of high molecular weight. Thus one can expect that liquids will solidify before becoming ferroelectric, and this is generally the case. One might speculate that a function of the lipid phase that separates voltage-sensitive ion channels is to reduce the van der Waals force between them.

The question of whether ferroelectric (and antiferroelectric) ordering can exist in smectic phases was raised by W. L. McMillan. His molecular model suggested that the tilt of the smectic C phase is due to the alignment of oppositely directed dipoles above and below the molecular center. R. B. Meyer concluded in 1974 that any tilted smectic phase such as a smectic C phase should be ferroelectric.35 Spontaneous polarization in a liquid crystal will appear when the following requirements are met:36
- The layered structure of a smectic
- Tilted orientation of the long axes of the molecules
- Chirality of the mesogen molecules
- Transverse dipole moment
Because of the improper nature of ferroelectricity in liquid crystals and the important role of chirality, it has been given the special designation of helielectricity.37 Figure 17.17 shows the chemical structure of several ferroelectric liquid crystals.38 SmC* molecules have two characteristic motions: The molecules can rotate around the layer normal in a motion called the spin or Goldstone mode, or the tilt angle between the molecular axis and the director can change, a motion called the soft mode; see Figure 17.16. A transition in which the tilt angle decreases to zero is a SmC → SmA transition. The chirality of these SmC* molecules produces a remarkable effect when the dipole moment of a sidechain has a component in the plane of the layer (or the membrane, if the liquid crystal contains only a single or double layer): A 180° spin rotation causes a reversal of the dipole moment, generating ferroelectricity.

6.1. Behavior of a liquid ferroelectric in an external field

Ferroelectric behavior in liquid crystals was first discovered in p-decyloxybenzylidene-p'-amino-2-methylbutylcinnamate (DOBAMBC). It is characterized by two linked ring structures, amyl alcohol as a chiral source and terminal aliphatic chains.39 The transverse molecular dipole moment of DOBAMBC is about 1 debye and the measured polarization is P ≈ 1.5 × 10⁻⁹ C/cm². Ferroelectricity appears in DOBAMBC as a result of the lowering of symmetry in the transition from SmA* to SmC*. The temperature dependence of the spontaneous polarization Ps and tilt angle θ of DOBAMBC are shown in Figure 17.18.40 The SmA*–SmC* transition occurs at 95 °C.41
Figure 17.17. Structures of some ferroelectric liquid crystals. (A) Schematic of general structure. (B) Assignment of absolute configuration labels R and S. (C–G) Chemical structures of five ferroelectric molecules or homologous series. From Goodby, 1986.
Figure 17.18. The temperature dependence of the spontaneous polarization and tilt angle of DOBAMBC. From Yoshino and Sakurai, 1991.
6.2. Polarization and orientational perturbation

For a chiral smectic liquid crystal in an external field E we add the term -P·E to the expression for the free energy density F1. The components of the polarization P are

(6.1)

P' is the projection of the polarization vector on the line of intersection of the xy plane and a plane perpendicular to the second-order axis; see Figure 17.16. Variation of the free energy with respect to P and P' gives the required equations for θ(r) and φ(r). The external field distorts the helicoidal structure of the C phase; in weak fields the distortions θ(r) - θ0 and φ(r) - φ0(z) are small. In strong fields, the distortions untwist the helix of dipole moments and increase the tilt angle θ. In weak fields, with E ≪ Ec, the field at which complete untwisting of the spiral occurs, the perturbations of director orientation can be separated into an azimuthal perturbation and a polar one with unperturbed pitch h0 = 2π/q0.
Figure 17.19. Two models of ferroelectric liquid crystal molecules in bent configurations. (Left) Chiral molecule with a transverse steric dipole Si and a substitution group with long axis oi. (Right) Asymmetric molecule with two polarizable centers. From Pikin and Osipov, 1991.
For a ferroelectric liquid crystal, the field E is assumed to be oriented in the xy plane; Ez = 0.42 This may be surprising to researchers seeking to apply this discussion to the ion-channel problem, since the relevant field has been almost invariably considered to be the transmembrane field, in the z direction. Because of fluctuations in the spontaneous polarization, ferroelectric liquid crystals exhibit current noise.43

Ferroelectric liquid crystal molecules are generally bent or banana-shaped, as in the two models shown in Figure 17.19. They can be obtained from nonchiral smectogens by the substitution of groups such as CH3, Cl or CN for a hydrogen atom in the alkyl chain; these groups act as chiral centers. Molecule i on the left of Figure 17.19 is chiral, provided that vector oi is not in the plane defined by vectors ai and mi. The bend angle (in radians) of molecule i is γ, and its steric dipole Si has the magnitude γD, where D is the molecular diameter. The chiral interaction between molecules is determined by the induction between the chiral center and the polarizable core of the neighboring molecule. The molecule j on the right of Figure 17.19 possesses two effective centers with asymmetrically directed polarizabilities αj1 and αj2. Details of the chiral intermolecular interaction between molecules i and j are given in the article by S. A. Pikin and M. A. Osipov.44 The interactions of banana-shaped molecules are likely to be significant to voltage-sensitive ion channels, most of whose membrane-spanning α-helical segments are bent at a single proline residue; see Chapters 13 and 21.
DELICATE PHASES AND THEIR TRANSITIONS
6.3. Surface-stabilized ferroelectric liquid crystals Ferroelectric liquid crystals show many similarities to solid ferroelectrics but, having no crystal lattice, cannot have any domains. The stable configuration of the smectic C* phase is characterized by the helicoidal structure, with its polarization externally canceled. The helielectric phase is stable in an extended system of three dimensions, with only line defects possible. The energy degeneracy is lifted in ultrathin samples in which the director orientation is fixed at the planar electrodes. The orientational anisotropy of the interfaces is transmitted to the volume of the liquid crystal by surface interactions, making possible new structures with walls and ferroelectric domains. This structure is known as the surface-stabilized ferroelectric liquid crystal. The rich physics of these monostable and bistable cells has led to a large variety of practical applications. Because of the surface constraints on the molecular axis, the angles of the helix have to adjust to them. It is expected that the “weak” azimuthal angle φ will adjust, while the “hard” variable, the polar angle θ, will be less affected. The molecule will lie on a cone determined by the two boundary conditions. The application of an external electric field can cause the molecule to rotate on the cone and realign. Activation energy, dependent on the surfaces, has to be provided to move the molecule from one state to another.45 NOTES AND REFERENCES 1. K. S. Cole, Membranes, Ions and Impulses, University of California, Berkeley, 1972, 540. 2. P. J. Collings, Liquid Crystals: Nature's Delicate Phase of Matter, Princeton University, Princeton, 1990. 3. Glenn H. Brown and Jerome J. Wolken, Liquid Crystals and Biological Structures, Academic, 1979, 12. 4. Heinz-Siegfried Kitzerow and Christian Bahr, in Chirality in Liquid Crystals, edited by Heinz-Siegfried Kitzerow and Christian Bahr, Springer, New York, 2001, 13.
With kind permission of Springer Science and Business Media. 5. P. G. deGennes and J. Prost, The Physics of Liquid Crystals, 2nd Edition, Clarendon, Oxford, 1993, 3. 6. P. Pieranski, in Kitzerow and Bahr, 28-66. With kind permission of Springer Science and Business Media. 7. deGennes and Prost, 14. 8. Kitzerow and Bahr, in Kitzerow and Bahr, 18. 9. O. D. Lavrentovich and M. Kleman, in Kitzerow and Bahr, 115-158. With kind permission of Springer Science and Business Media. 10. Peter P. Crooker, in Kitzerow and Bahr, 186-222. 11. Harald Bock, in Kitzerow and Bahr, 355-374. With kind permission of Springer Science and Business Media. 12. F. Livolant, J. Mol. Biol. 218:165-181, Supramolecular organization of double-stranded DNA molecules in the columnar hexagonal liquid crystalline phase, Copyright 1991, with permission from Elsevier. 13. S.A. Pikin, Structural Transformations in Liquid Crystals, Gordon and Breach, New York, 1991. 14. deGennes and Prost, 103. 15. Heinz-Friedrich Kitzerow, in Kitzerow and Bahr, 300. With kind permission of Springer Science and Business Media. 16. Pikin, 12f. 17. J. W. Goodby, P. Styring, A. J. Slaney, J. D. Vuuk, J. S. Patel, C. Loubser and P. L. Wessels, Ferroel. 147:291-304, 1993. 18. Pikin, 43. 19. Pikin, 60-62. 20. Pikin, 98-100. 21. Pikin, 7.
22. Peter P. Crooker, in Kitzerow and Bahr, 186-222. 23. Pikin, 124. 24. Pieranski, 32f; deGennes and Prost, 123-133, 189f; L.M. Blinov, Electro-Optical and Magneto-Optical Properties of Liquid Crystals, John Wiley, 1983, 107-134. 25. Pikin, 111-116. 26. Pikin, 119-123. 27. R. B. Meyer, Phys. Rev. Lett. 22: 918-921, 1969. 28. Reprinted from Alexander G. Petrov, The Lyotropic State of Matter: Molecular Physics and Living Matter Physics, 296, Copyright 1984, with permission from Elsevier. 29. Pikin, 163. 30. Guram Chilaya, in Kitzerow and Bahr, 159-185. With kind permission of Springer Science and Business Media. 31. Pikin, 159. 32. S. Garoff and R. B. Meyer, Phys. Rev. Lett. 38: 848-851, 1977. 33. L. M. Blinov and V. G. Chigrinov, Electrooptic Effects in Liquid Crystal Materials, Springer, New York, 1994, 195. 34. N. A. Clark and S. T. Lagerwall, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 1-97. 35. R. B. Meyer, L. Liebert, L. Strzelecki and P. Keller, J. Phys. (Paris) Lett. 36:69-71, 1975. 36. Petrov, 335. 37. H. R. Brand, P. E. Cladis and P. L. Finn, Phys. Rev. A 31:361-365, 1985. 38. J. W. Goodby, Science 231:350-355, 1986. 39. J. W. Goodby, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 99-247. 40. K. Yoshino and T. Sakurai, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 317-363. 41. N. A. Clark and S. T. Lagerwall, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 4. 42. Pikin, 201. 43. I. Muševič, R. Blinc and B. Žekš, The Physics of Ferroelectric and Antiferroelectric Liquid Crystals, World Scientific, Singapore, 2000, 293-301. 44. S. A. Pikin and M. A. Osipov, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 249-316. 45. N. A. Clark and S. T. Lagerwall, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc and N. A. Clark, Gordon and Breach, Philadelphia, 1991, 24-53.
CHAPTER 18
PROPAGATION AND PERCOLATION IN A CHANNEL
Ions must permeate voltage-sensitive ion channels to cross the membrane under the influence of an electric field. How do the ions interact with the field and the channel? How does a depolarization initiate ion conduction? What makes the current stop spontaneously? In this chapter we briefly review two mathematical approaches relevant to transport through disordered matter: solitons in liquid crystals and percolation theory. We will relate these to ferroelectric liquid crystal behavior in voltage-sensitive ion channels. We will then proceed to the interactions of metal ions with large organic molecules and liquid crystals, and enquire into the interaction of permeant ions with the protein structure of the channel. 1. SOLITONS IN LIQUID CRYSTALS In Chapters 1 and 3 we observed that an action potential can be considered to be a solitary wave, a soliton. Since we have seen that a biomembrane such as the axolemma is an example, albeit an atypical one, of a liquid crystal, let us here explore the idea of solitons in liquid crystals. Because we have argued that membrane excitability shares many features with ferroelectricity, we will be particularly interested in the effects of electric fields and solitons on ferroelectric liquid crystals. We will seek to apply soliton theory at two levels: the entry of the conducting phase into the ferroelectric resting phase within a voltage-sensitive ion channel, and the associated action potential. Thus the emphasis is on voltage-sensitive ion channels not as isolated objects but as intrinsic components of a propagated impulse along a water–macromolecule–water system. Molecular solitons across ion channels are coupled to an action potential traveling along an axon. In voltage-clamped axons, the external circuit provides an electronic feedback that suppresses instability. Solitons are localized waves that travel without significantly changing their shape.
As originally defined in the mathematical literature, solitons preserve their identity after colliding with one another. However, the concept of solitons has been generalized to include nonlinear waves that do not possess this pairwise collision property. A localized wave, such as the transmembrane movement of ions across a channel or an action potential traveling along an axon, can be considered a soliton.
Solitons have been studied in many natural and laboratory systems including molecular systems, pulses propagating in solids, structural phase transitions, polymers and liquid crystals.1 In addition to its aspect of a nonlinear wave, a soliton may also be viewed as a nonlinear excitation.2 1.1. Water waves to nerve impulses Solitons in liquid crystals, called walls, were discussed as solutions of certain partial differential equations by W. Helfrich, Pierre Gilles de Gennes and F. Brochard, and observed in the laboratory by L. Leger. The word soliton first appeared in the liquid crystal literature when Patricia Cladis and S. Torza observed solitary waves in the flow of nematics between coaxial cylinders. Small solitary vortices propagating in the subcritical region of an electroconvective instability were observed by R. Ribotta.3 Walls in lyotropic nematics were investigated by Figueiredo Neto and associates.4 R. Pindak and coworkers observed walls in free-standing smectic C films in an electric field. In 1980, N. A. Clark and S. T. Lagerwall studied walls induced by an electric field in ferroelectric smectics.5 M. Yamashita and H. Kimura6 used a kink solution to explain the behavior transition from a smectic C* to a smectic A. Director waves were linked to waves in biomembrane lipids in 1968 by J. L. Fergason and G. H. Brown7 and in 1982 by N. M. Chao and S. H. White.8 The Hodgkin–Huxley equations that model nerve impulse propagation are closely related to equations describing fronts propagating into unstable states in nematic liquid crystals.9 Cladis and W. van Saarloos point out that smectic C* may provide additional insight into the problems of biological systems since it is chiral, with a spatial symmetry more closely related to these systems.10 One way to define solitons is as solutions of particular nonlinear wave equations. 
We became familiar with equations of this type in Section 6 of Chapter 8, which dealt with the time-dependent electrodiffusion equation, and in Chapter 10, where we examined the partial differential equation of Hodgkin and Huxley. Let us begin with some simpler equations that have the virtue of being more easily analyzed. This discussion is mainly based on an article by Lui Lam.11 1.2. Korteweg–deVries equation As we know from our discussion in Chapter 10, the simple wave equation

∂φ/∂t + c ∂φ/∂x = 0    (1.1)

has the solution φ = φ(x, t) = f(x − ct), where c is a constant and f is an arbitrary function. The wave travels with velocity c, maintaining an undistorted shape. Since all components of the wave travel with the same speed, we say it is free of dispersion. By contrast, the equation
∂φ/∂t + ∂³φ/∂x³ = 0    (1.2)
has dispersion, as we can see by substituting the solution

φ = exp[i(kx − ωt)]    (1.3)

into the equation, which yields ω = −k³. The phase velocity ω/k = −k² then depends on the wavenumber, so that a wave consisting of components with different wavenumbers k will spread out, i. e., be dispersive. Another wave equation, dispersionless but nonlinear, is
∂φ/∂t + φ ∂φ/∂x = 0    (1.4)
The solution is φ = f(x − ct); substitution shows that the velocity c is equal to the wave amplitude φ. This results in a distortion of the waveform and a decrease with time of the pulse width. The effects of spreading and narrowing in Equations 1.2 and 1.4 exactly cancel in the Korteweg–deVries (KdV) equation,
∂φ/∂t + φ ∂φ/∂x + ∂³φ/∂x³ = 0    (1.5)
The reader may find it interesting to compare the KdV equation to the Burgers equation discussed in Chapter 8, in which the highest spatial derivative is of second order rather than third. By rescaling t and x and setting φ = 6gψ, we write Equation 1.5 in the form

∂ψ/∂t + 6gψ ∂ψ/∂x + ∂³ψ/∂x³ = 0    (1.6)

where g is a constant. If g = 0 the KdV equation becomes linear. A solitary wave solution to the KdV equation is

ψ = (V/2g) sech²[(√V/2) ξ]    (1.7)
where ξ = x − x0 − Vt. The hyperbolic secant function, defined as sech x = 2/(eˣ + e⁻ˣ), drops to zero for large positive and negative x and rises to a maximum at x = 0. Thus the solution 1.7, for positive g, is a traveling positive bump; for negative g it is a density decrease; see Figure 18.1. The velocity V is related to the amplitude and the width of the wave, so that taller and slimmer waves travel faster (this is not the case for all solitons). The Korteweg–deVries equation has been used to describe waves in shallow water, ion-acoustic and magnetohydrodynamic waves in plasma and phonon packets in nonlinear crystals.12
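As a consistency check, the sech² profile of Equation 1.7 can be differentiated numerically and substituted back into the KdV equation. The short sketch below assumes the standard form ∂ψ/∂t + 6gψ ∂ψ/∂x + ∂³ψ/∂x³ = 0 for Equation 1.6; the function names and parameter values are illustrative.

```python
import math

def u(x, t, V=1.0, g=0.5, x0=0.0):
    """sech^2 soliton profile: u = (V/2g) sech^2[(sqrt(V)/2)(x - x0 - V t)]."""
    s = 0.5 * math.sqrt(V) * (x - x0 - V * t)
    sech = 2.0 / (math.exp(s) + math.exp(-s))
    return (V / (2.0 * g)) * sech * sech

def kdv_residual(x, t, h=1e-2, V=1.0, g=0.5):
    """Finite-difference estimate of u_t + 6g u u_x + u_xxx at (x, t)."""
    u_t = (u(x, t + h, V, g) - u(x, t - h, V, g)) / (2 * h)
    u_x = (u(x + h, t, V, g) - u(x - h, t, V, g)) / (2 * h)
    u_xxx = (u(x + 2 * h, t, V, g) - 2 * u(x + h, t, V, g)
             + 2 * u(x - h, t, V, g) - u(x - 2 * h, t, V, g)) / (2 * h ** 3)
    return u_t + 6 * g * u(x, t, V, g) * u_x + u_xxx

# The residual stays at the level of the discretization error, i.e. near zero.
print(max(abs(kdv_residual(x, 0.3)) for x in (-2.0, -0.5, 0.0, 0.8, 2.5)))
```

The steepening term 6gψψx and the dispersive term ψxxx cancel only for the particular amplitude–width combination of Equation 1.7, which is why the residual collapses to the discretization error.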
Figure 18.1. Soliton solutions of the Korteweg–deVries equation. The amplitude ψ(ξ) is shown for positive and negative values of the nonlinearity coefficient g. From Davydov, 1985.
1.3. Nonlinear Schrödinger equation A version of Erwin Schrödinger’s famous wave equation that is, as we saw in Chapter 6, central to quantum mechanics is the nonlinear Schrödinger equation,
i ∂ψ/∂t + ∂²ψ/∂x² + β|ψ|²ψ = 0    (1.8)

where ψ(x, t) is a complex function and β is a nonlinearity parameter. It has a soliton solution,

ψ = ψ0 sech[ψ0 √(β/2)(x − at)] exp[i(a/2)(x − bt)]    (1.9)
where a is the velocity of the envelope wave and b the velocity of the carrier. Stationary waves, called breather solitons or breathing modes, exist in this solution. Ultrashort light pulses in optical glass fibers are solitons describable by the nonlinear Schrödinger equation. It is also useful in describing laser beam self-focusing
in dielectrics, waves in deep water, propagation of signals in optical fibers, vortices in fluid flow and the one-dimensional Heisenberg ferromagnet.
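To make this concrete, here is a minimal numerical sketch (assuming the form iψt + ψxx + β|ψ|²ψ = 0 with β = 1) verifying that a stationary sech-shaped envelope, the bright soliton, satisfies the nonlinear Schrödinger equation; the amplitude is locked to the inverse width, the signature of nonlinearity balancing dispersion.

```python
import cmath
import math

ETA = 0.7  # inverse width of the envelope; the amplitude locks to sqrt(2)*ETA

def psi(x, t):
    """Stationary bright soliton of i psi_t + psi_xx + |psi|^2 psi = 0."""
    return math.sqrt(2.0) * ETA / math.cosh(ETA * x) * cmath.exp(1j * ETA ** 2 * t)

def nls_residual(x, t, h=1e-3):
    """Finite-difference estimate of i psi_t + psi_xx + |psi|^2 psi at (x, t)."""
    p_t = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
    p_xx = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h ** 2
    p = psi(x, t)
    return 1j * p_t + p_xx + abs(p) ** 2 * p

# The residual stays at the level of the discretization error, i.e. near zero.
print(max(abs(nls_residual(x, 0.2)) for x in (-1.5, 0.0, 0.4, 2.0)))
```

Because the solution is complex, only its envelope |ψ| is stationary; the phase rotates in time, which is what makes breather-like behavior possible.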
1.4. The sine-Gordon equation The sine-Gordon equation, which is studied by particle physicists among others, is
∂²φ/∂t² − ∂²φ/∂x² + sin φ = 0    (1.10)
Figure 18.2 shows a mechanical model of a discrete version of the sine-Gordon equation, consisting of pins in a rubber band.13 A pin’s angle with the vertical is represented by the variable u(x, t) = φ(x, t).
Figure 18.2. Pins inserted into an elastic band make a discrete mechanical model of the sine-Gordon equation. A kink is shown traveling along the x axis with velocity v. From “Nonlinear Science: Emergence and Dynamics of Coherent Structures 2/e” by Scott, Alwyn (2003). By permission of Oxford University Press.
Equation 1.10 has three types of soliton solutions: the kink, the antikink and the breather. The former two are given by

φ = 4 arctan{exp[±(x − x0 − ct)/√(1 − c²)]}    (1.11)
where the + refers to the kink and the − to the antikink, and the wave velocity c (<1) and x0 are constants. The breather solution

φ = 4 arctan{[√(1 − a²)/a] sin[a(t − t0)]/cosh[√(1 − a²) x]}    (1.12)

with constants t0 and a, may be considered a bound state of a kink–antikink pair. A useful way to look at a soliton equation is to convert it to an ordinary differential equation. Some insight into the sine-Gordon equation 1.10 may be obtained by applying the substitution τ = x − ct. Then

(1 − c²) d²φ/dτ² = −dV/dφ    (1.13)

where V = 1 + cos φ. We can think of the soliton as being represented by the motion of a particle of mass m = 1 − c² moving in a periodic gravitational potential V. For c < 1, φ is the displacement of the particle and τ the time. The kink solution, Equation 1.11 with the + sign, corresponds to a movement of the particle from a crest at φ = 0 to the adjacent crest at φ = 2π, and the antikink (− sign), from φ = 2π to 0. When c > 1, the particle may move across several crests; this corresponds to a multisoliton. While the soliton equations we have considered here are integrable (i. e., have exact solutions), most soliton equations are not. They can be approached by perturbation methods (as small deviations from one of the integrable solitons), by numerical methods or by simulation with mechanical models or electric circuits. 1.5. Three-dimensional solitons Kinks and antikinks are examples of topological solitons, which arise in media with states of equal energy. In isotropic ferromagnetic (or ferroelectric) media, the state of lowest energy may correspond to complete magnetization (polarization), but the magnetization (polarization) vector may be in an arbitrary direction. The self-localization of an electron or ion in ionic crystals arises from its Coulomb interaction with the vector field of the polarization it causes in the medium. This self-trapped state, equivalent to a particle in a potential well with discrete energy levels, is called a polaron. For polarons with small dispersion, the dependence of the polaron’s inverse longitudinal and transverse sizes on its velocity may be calculated; see Figure 18.3.14
Figure 18.3. The inverse longitudinal and transverse sizes of a mobile polaron, relative to the size of a polaron at rest, as functions of its dimensionless velocity. The curves reach their maxima at a dimensionless velocity of about 0.14. From Davydov, 1985.
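Returning briefly to the solitons of Section 1.4, the kink of Equation 1.11 is easy to verify numerically. The sketch below assumes dimensionless units in which the sine-Gordon equation reads φtt − φxx + sin φ = 0; it also confirms that the kink interpolates between the adjacent potential crests φ = 0 and φ = 2π.

```python
import math

C, X0 = 0.6, 0.0                    # kink velocity (c < 1) and offset
WIDTH = math.sqrt(1.0 - C * C)      # Lorentz-like contraction of the kink width

def phi(x, t):
    """Kink solution, Equation 1.11 with the + sign."""
    return 4.0 * math.atan(math.exp((x - X0 - C * t) / WIDTH))

def sg_residual(x, t, h=1e-3):
    """Finite-difference estimate of phi_tt - phi_xx + sin(phi) at (x, t)."""
    phi_tt = (phi(x, t + h) - 2.0 * phi(x, t) + phi(x, t - h)) / h ** 2
    phi_xx = (phi(x + h, t) - 2.0 * phi(x, t) + phi(x - h, t)) / h ** 2
    return phi_tt - phi_xx + math.sin(phi(x, t))

print(max(abs(sg_residual(x, 0.5)) for x in (-2.0, 0.0, 0.3, 1.5)))  # ~ 0
print(phi(-30.0, 0.0), phi(30.0, 0.0))  # approximately 0 and 2*pi
```

The factor √(1 − c²) shows why faster kinks are narrower, the same contraction that makes the particle analogy of Equation 1.13 work with an effective mass 1 − c².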
1.6. Localized instabilities in nematic liquid crystals Convection in a fluid under external constraints leads to a rich variety of nonlinear phenomena above the instability threshold. The rest state bifurcates to an ordered state with a stationary velocity distribution. In cases where the first ordered state is time-dependent, the velocity field is homogeneous in space and the wavevector q has a well defined amplitude. At higher stresses, the flow becomes more complex, generating isolated periodic structures or intermittent bursts of turbulence. These localized modulations of the velocity field c(x, y, z, t) are solitons like those initially discovered by Scott Russell (Chapter 3), or the breaking of deep-water waves at a beach. The emergence of these solitons from localized instabilities involves a focusing of energy. A similar focusing action can be seen in the self-focusing of a light beam in a nonlinear optical medium. At voltages below those that induce chaotic motion in the fluid, the alignment of the directors is unstable to a periodic stationary bending associated with convective flow. This type of motion is referred to as normal rolls. Over some range of frequencies within the conduction regime, the structure at threshold is neither stationary in time nor homogeneous in space, but consists of localized domains of traveling waves. These wave packets are often randomly distributed in space, with an average separation of about 2.0 nm.15 Such a focusing action also may be involved in the formation of an action potential at an axon hillock. We recall in this connection that the action potential propagating along an axon has a constant amplitude in time. The interpretation of these localized instabilities, localized amplitudes of unstable states, is by an envelope (or amplitude) equation. This equation has the form of a time-dependent Landau–Ginzburg equation; see Chapter 15. This equation has a
structure similar to that of the nonlinear Schrödinger equation, which as mentioned above has soliton solutions. One class of waves, twist waves, propagates through a nematic liquid crystal when a twist deformation is created in one part of the layer and then removed. Fergason and Brown suggested that such transverse waves could act as signal carriers in living organisms, although the damping coefficient of these waves is expected to be very large.16 1.7. Electric-field-induced solitons Solitons may evolve by different mechanisms. They can be induced by magnetic fields or, in the case of interest here, by electric fields. Solitons (or “walls”) induced by electric fields were observed by Pindak and collaborators in freely suspended smectic C films.17 For bulk smectic C* liquid crystals under an electric field, solitons are important in the switching mechanism. 1.8. Solitons in smectic liquid crystals Phase transitions in charge density waves, chiral smectic C phases in an electric field and ferroelectrics can all be described as solitons with a complex order parameter. The smectic C* phase has a helical structure that may be unwound by an electric field in the layer plane. This is a first-order phase transition of the instability type.18 2. SELF-ORGANIZED WAVES The four models in this section run the gamut from a generalization about living systems inspired by an experimental observation of a spontaneous wave pattern, to an analysis of self-organized waves in nonlinear media, to a self-organized chemical model of ion conduction in excitable membranes, and finally to a polarization model of the action potential as a kink soliton. 2.1. The broken symmetries of life Patricia Cladis considers living systems to be the most important and most complex representatives of nonequilibrium liquid crystal systems.
The three broken symmetries she calls fundamental to life are broken continuous spatial symmetry, as in liquid crystalline structures; broken mirror symmetry, as in molecular chirality; and broken time reversal symmetry, as in nonequilibrium processes. Experimenting with a minimal model displaying these “broken symmetries of life,” Cladis has shown that such a system forms patterns of definite frequency, and so “knows time.” Pattern formation, epitomized by the phenomenon of snowflakes, is the spontaneous appearance of spatial structures in uniform materials. This occurs as a result of dissipative processes under conditions far from equilibrium. The structures are characterized by a band of wavelengths and their related wavevectors q, with
wavenumbers q = |q| = 2π/λ. The observed phase boundary has a pattern with a length scale set by q. Broken continuous spatial symmetry was modeled by a phase transition from an isotropic liquid (I) phase to a cholesteric (N*) phase. The N* phase breaks the orientational order of the I phase, introducing a directional pattern. A phase boundary intervenes between the two phases. The broken mirror symmetry is obtained with a mixture of nematic and chiral liquid crystals. The competition between the twisting tendency and the viscosity produces a moving phase transition, characterized as an orientational diffusion. The mirror symmetry of the I phase is broken in the transition to the N* phase. The mixture of the cholesteric C15 and the nematic 8CB was held between two parallel glass plates so prepared as to keep the director n uniform and in the plane of the plates. The cholesteric–isotropic transition temperature TChI decreases linearly with increasing C15 concentration. The broken time reversal symmetry requires a continuous energy input, achieved by two features of the experiment: The first is a temperature gradient, from the isotropic phase in the vicinity of the hot contact to the cholesteric phase in the vicinity of the cold contact. The second is a steady velocity v imposed on the mixture, forcing the material to develop a moving phase transition front in its frame of reference. The velocity v was adjusted to keep the transition temperature stationary in the lab frame. The transition front was visualized by optical image processing with a polarizing microscope with crossed polarizer P and analyzer A. The development of patterns in the transition line is shown in Figure 18.4.19
Figure 18.4. Broken symmetry in a temperature gradient G. A moving wave pattern forms spontaneously in the transition line between the isotropic and cholesteric phases. From Cladis, 2001.
From a planar interface at equilibrium, v = 0, there evolved “gracefully” a modulated interface with a definite wavelength. The temperature gradient maps the temperature onto the z axis,

T(z) = T0 + Gz    (2.1)

where T is the kelvin temperature, G is the temperature gradient and T0 the temperature at z = 0. At speeds below a critical speed vc, irregularities in the surface are smoothed out by the phase transition, stabilizing the planar interface. At higher speeds, a perturbational bump of the cholesteric phase into the warmer liquid, due to thermal fluctuations, becomes stabilized, presumably by encountering supercooled liquid. The wave pattern shown on the right side of Figure 18.4 nucleates. The wavelength scale results from nonequilibrium forces in the system. In addition to the spontaneous spatial pattern, a temporal pattern also develops, with frequency ω. As soon as a pattern develops, it travels parallel to the interface with a speed vx, which increases with increasing distance from equilibrium. A nonlinear, nondispersive breathing mode appears at higher speeds parallel to the interface, followed by defect shedding and phase winding. Turbulence develops at sufficiently large temperature gradients and velocities. Cladis infers from this simple model that, because living systems necessarily know time, they also have access to turbulence. 2.2. Autowaves Self-organization in active nonlinear media forms a type of self-sustained signals that release stored energy. These waves, called autowaves, include the excitation of heart muscle and the propagation of nerve impulses. Unlike waves in conservative media, autowaves remain constant in shape and amplitude during propagation. Autowaves in re-excitable active media can recover their initial state after excitation. An example is the spreading of grass fires in a prairie over the span of several years. After a fire, the grass grows back, eventually dries and burns again. An impulse traveling along an axon or muscle fiber is similarly re-excitable.
So likewise is the movement of ions across an ion channel, which is an integral component of those impulses. Waves in re-excitable media require two variables for their description. The system of equations 2.2 describes the behavior of a distributed medium.
∂U/∂t = f(U, V) + D ∂²U/∂x²
∂V/∂t = φ(U, V)    (2.2)
Here, U is the fast and V the slow variable. For the grass-fire example, U is the temperature and V the growth rate of the grass; for the nerve impulse, U is the membrane potential and V the current of potassium ions that resets the potential.20
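The two-variable scheme of Equations 2.2 can be illustrated with a FitzHugh–Nagumo caricature of the nerve impulse (a standard stand-in, not Krinsky's own equations; all parameter values below are illustrative). A suprathreshold stimulus at one end launches a solitary pulse that travels at constant speed, while the slow variable V restores the medium behind it.

```python
import numpy as np

# FitzHugh-Nagumo model: U is the fast ("membrane potential") variable,
# V the slow recovery variable. Parameters and grid are illustrative.
A, B, EPS, D = 0.7, 0.8, 0.08, 1.0
DX, DT, N = 0.5, 0.04, 300

U = np.full(N, -1.1994)              # uniform resting state
V = np.full(N, (-1.1994 + A) / B)
U[:10] = 1.5                         # suprathreshold stimulus at the left end

def step(U, V):
    lap = (np.roll(U, 1) - 2 * U + np.roll(U, -1)) / DX**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude no-flux boundaries
    U = U + DT * (U - U**3 / 3 - V + D * lap)
    V = V + DT * EPS * (U + A - B * V)
    return U, V

positions = []
for n in range(1, 1201):
    U, V = step(U, V)
    if n % 400 == 0:
        positions.append(int(np.argmax(U)))    # location of the pulse peak
print(positions)  # the peak moves steadily rightward: a traveling pulse
```

After the pulse passes, U and V relax back to the resting point, so the medium is re-excitable in exactly the grass-fire sense described above.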
In two or three dimensions the second space derivatives are replaced by Laplacian operators. When a wave encounters the refractory region of a wave in front of it, it must break like an ocean wave on a beach. In two dimensions the breaking waves form a spiral akin to an Archimedean spiral. Patterns of this type have been observed in chemically reacting systems as well as in social amoebae and cardiac and retinal tissue. Figure 18.5 shows pulses and waves generated by a set of equations of the van der Pol type, Equations 2.2. The phase plane (a) shows the curves f(U, V) = 0 and φ(U, V) = 0, with the system trajectory O, A, B, C, O. The way in which cooperative processes lead to the switching phenomena seen in excitable membranes and voltage-sensitive ion channels will be explored in Chapter 20.
Figure 18.5. Pulses and waves generated in an active medium. (a) The phase plane of Equation System 2.2 at a point; (b) shows the time variation, and (c) the space variation, of a one-dimensional active medium with excitable kinetics. A two-dimensional pulse is shown in (d), with resting, active and refractory regions shown as white, black and lined areas, respectively. The arrows indicate the direction of propagation. From Krinsky, 1984.
2.3. Catastrophe theory model based on a ferroelectric channel Kotaro Shirane, Takayuki Tokimoto, Kozou Shinagawa and Yoshiko Yamaguchi proposed a diffuse ferroelectric bilayer model for membrane excitation in 1993.21
Figure 18.6. Membrane response to a suprathreshold stimulus modeled by the diffuse ferroelectric bilayer model. Under the step depolarization, the state of the membrane jumps from the metastable resting state R through a cusp catastrophe to the plane P. From Tokimoto and Shirane, 1993.
Applying a self-organized chemical model to the ferroelectric transmembrane units proposed by Leuchtag (Chapter 16, Section 6.2), they were able to calculate time-dependent membrane potentials. The model consists of Equation 2.3 of Chapter 16 with one term neglected, and Equation 1.4 of Chapter 6, together with a state equation (2.3), where a and b are control parameters for dipole–dipole and dipole–ion interactions respectively, ψ = −FΔφ/RT and Δφ is the membrane potential. In the equilibrium space of Equation 2.3, a cusp catastrophe occurs with a jump in ψ. This bifurcation represents the all-or-none law of the action potential. The solutions to the model equations yield subthreshold responses, single spikes and trains of spikes. The trajectories of the model were derived from an analysis similar to that applied by Zeeman to the Hodgkin–Huxley equations; see Section 2.7 of Chapter 9. The projection of the equilibrium state equation E on the control plane C of the electrical response of a voltage-clamped membrane with Na channels is displayed in Figure 18.6.22 Entrainment of the membrane oscillation by a periodic Na+ current may result in a stable limit cycle.23 The equations of Tokimoto and collaborators yield solutions corresponding to repetitive firing (A1 and A2), subthreshold electrotonic responses (B1 and B2) and single action potentials (C1 and C2), as shown in Figure 18.7.24
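The cusp catastrophe invoked here can be illustrated with the generic normal form x³ + ax + b = 0 (a stand-in for the state equation 2.3; the actual coefficients of Tokimoto and collaborators are not reproduced). Sweeping the control parameter b back and forth at fixed a < 0 makes the tracked equilibrium jump discontinuously, and the jumps on the upward and downward sweeps occur at different b: the hysteresis behind all-or-none behavior.

```python
import numpy as np

# Cusp-catastrophe normal form x^3 + a*x + b = 0 (illustrative only).
def follow_equilibrium(a, b_values, x0):
    """Track a root of x^3 + a*x + b = 0 continuously, starting near x0."""
    xs, x = [], x0
    for b in b_values:
        roots = np.roots([1.0, 0.0, a, b])
        real = roots[np.abs(roots.imag) < 1e-6].real
        x = real[np.argmin(np.abs(real - x))]   # stay on the nearest branch
        xs.append(x)
    return np.array(xs)

a = -3.0                                         # inside the cusp region (a < 0)
b_up = np.linspace(-3.0, 3.0, 601)
x_up = follow_equilibrium(a, b_up, x0=1.7)       # upward sweep of b
x_down = follow_equilibrium(a, b_up[::-1], x0=-1.7)  # downward sweep
jump_up = np.max(np.abs(np.diff(x_up)))
jump_down = np.max(np.abs(np.diff(x_down)))
print(jump_up > 1.0, jump_down > 1.0)  # both sweeps show an O(1) jump
```

Inside the fold (here |b| below about 2) three equilibria coexist, two stable and one unstable; the jump occurs when the branch being tracked is annihilated at the fold edge.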
Figure 18.7. Computer simulation of nerve impulses in response to a prolonged depolarization. The plots on the left show repetitive impulses (A1), a subthreshold response (B1) and a single action potential (C1). The corresponding trajectories (A2, B2 and C2) are shown on the right. From Tokimoto et al., 1999.
2.4. The action potential as a polarization soliton The propagation of an action potential has been modeled by a polarization kink soliton in the interphase boundary between open and closed states of gates under different values of polarization.25 To reproduce a boundary between regions of the axonal membrane with open and closed ion channels, Alex Gordon and collaborators assumed a free energy with a term dependent on the polarization gradient. This long-range interaction yields a propagated polarization change that may contribute to an action potential. The one-dimensional model, based on the Ginzburg–Landau equation for polarization dynamics, leads to the equation of motion
(2.4) where Γ is the kinetic coefficient that determines the time scale of the process, P is the polarization, x is the coordinate along the axon, E is the external electric field and Er the field corresponding to the resting potential. The parameter A is temperature-dependent, A = A1(T − T0), and A1, B, C and D are positive coefficients.
Figure 18.8. Mechanism proposed for the opening and closing of voltagesensitive ion channels. a) When the polarization has the value corresponding to the resting state, P1, the gates are closed. b) As a kink soliton changes the polarization to its value for the excited state, P2, the gates open at the moving interface. From Gordon et al., 1999.
The solution to Equation 2.4 is (2.5) where v and Δ are the velocity and width of the traveling interface. For appropriate values of field and temperature, Equation 2.5 describes the growth of the stable state with open gates into the region of the metastable state with closed gates. The propagation of this kink soliton is shown in Figure 18.8. The model yields a functional
dependence of the propagation velocity v on the electric field and temperature. The paper by Gordon et al. also includes a discussion of the effects of magnetic and electromagnetic fields on action potentials. Pradip Das and W. H. Schwarz modeled a lipid membrane as a SmC* liquid crystal phase.26 Their model, based on a free energy with elastic, fluctuation and polarization terms, couples the transmembrane electric field with tilt changes in the bilayer molecules. Voltage–time computations yield propagated solitons that move polarized domains by ferroelectric switching, with shapes similar to those of action potentials. 3. BILAYER AND CHANNELS FORM A HOST–GUEST PHASE Membranes have an ordered structure in the dimension normal to the membrane plane and are disordered in the other two dimensions; by definition this makes them liquid crystals. The bilayer and its embedded ion channels form a host phase and a guest phase, respectively. Liquid crystals that form layers, such as membranes, are smectics. The properties known from the physics of liquid crystals have, to some extent, been explored in membrane biophysics. Meyer hinted at the possibility of flexoelectric properties in biomembranes.27 Smectic modifications are usually formed by amphiphilic molecules, which have polar and nonpolar parts. Lipid bilayers consist of such molecules.28 The polar parts apparently interact with one another via long-range electrostatic forces; other interactions are via short-range attractive forces. Molecules whose polar parts are situated at the center of the molecular contour form, as a rule, smectic liquid crystals with a single density wave. Helielectricity was postulated to exist in cholesterol-containing membranes with tilted chains. 
It was later found in the Lβ′ and Pβ′ phases of a mixture of dipalmitoyl phosphatidyl choline and cholesterol.29 Smectic liquid crystals with two density waves may consist of molecules with asymmetric positioning of polar and nonpolar parts; this model explains the measured properties of the asymmetrical lipid bilayer. Distortion of one plane (leaflet) destabilizes the other; stabilization of one stabilizes the other.30 The membrane and channels together form a mixed mesophase, perhaps a kind of two-dimensional ceramic; however, the simple categories so far developed in liquid crystal science appear to be inadequate to describe the complexities of biological membranes and channels.

3.1. Protein distribution by molecular shape

The distribution of integral membrane proteins is affected by their shape. Roughly conical embedded proteins can induce splay in the membrane. On the assumption of free lateral diffusion, the proteins can become redistributed between flat and curved segments of the membrane, as shown in Figure 18.9.31
430
CHAPTER 18
Figure 18.9. Lateral redistribution of integral membrane proteins between flat and curved segments of membrane. Long-range elastic forces pull favorably oriented proteins into the curved segment and unfavorably oriented proteins toward the flat region. From Petrov, 1999, after Petrov and Bivas, 1984.
3.2. Flexoelectric responses in hair cells

Mechanoreceptor cells exhibit electrically induced localized changes in the curvature of the plasma membrane. Electromotility of the outer hair cell of the mammalian cochlea, which converts electrical to mechanical energy, has been analyzed by a flexoelectric model. A study of the lateral wall of this cell is based on the tight association between the plasma membrane and the cytoskeleton, along with the membrane’s small resistance to bending. Since biological membranes are liquid crystals with protein and lipid molecules possessing large electric dipole moments parallel to their axes, they satisfy the requirements for flexoelectricity. Membrane bending as a result of the application of an electric field is called the converse flexoelectric effect. An external electric field due to a threshold depolarization rotates the molecular dipoles, which increases the membrane curvature. The model calculations are compatible with the measured forces.32

4. PERCOLATION THEORY

In Chapter 15, Section 2.5, we surmised that the opening and closing of an ion channel is something like the onset and termination of an avalanche, a massive flow of ions
across the membrane through the specialized structure of the molecule. Such self-organized criticality can be initiated by a fluctuation, depending on the ion concentration gradients, temperature and electric field. Let us pursue this surmise in more detail.

Transport phenomena in disordered systems have been studied in a number of fields in recent decades. The statistical properties of these systems must take into consideration two aspects of their morphology: topology, the interconnectedness of their microscopic elements, and geometry, the size and shape of these elements. The diffusion of materials through disordered media has been studied by percolation theory, which is an extension of the critical phenomena we discussed in Chapter 15. This theory was first developed by Flory33 and Stockmayer34 to describe the way small branching molecules react to form macromolecules. It has been applied to a number of precipitation and agglutination phenomena in biology, including protonic conductivity in anhydrous systems such as seeds.35

Consider a particular section of a city, say 10 by 10 blocks in extent. Because of a freeway, a river, a shopping mall, a park and a street repair project, many of these streets are blocked. The goal is to determine whether a continuous path through this section exists from a point A on the eastern boundary to a point B on the western boundary. With too many blockages, no through route from A to B will exist; if most streets are open, many routes will exist. Between these extremes there must be a critical value for the fraction of open streets.

4.1. Cutting bonds

In a related example, consider a communications network represented as a square grid connecting two posts, represented by electrodes.36 A “stochastic saboteur” seeks to break the connection between the posts by randomly cutting links with a pair of scissors; see Figure 18.10.37 A few cut links will not break the connection but a large number will.
The question is, then, what fraction of the links must the saboteur cut to isolate the posts? There is a sharp transition, the percolation threshold, at which the long-range connectivity of the grid disappears (and reappears when a “stochastic splicer” mends them). The square lattice could be interpreted as being composed of unit conductors. The current is a maximum when all conductors are intact and decreases as they are cut. In the figure, about 21% of the bonds have been cut, so that the fraction of uncut bonds, p, is 0.79, as indicated by the arrow on the graph of I(p), the current. For p < pc there is no connecting path of conducting bonds and I = 0. For the square array, the critical bond concentration pc turns out to be ½.
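The sharp loss of connectivity at pc = ½ is easy to reproduce numerically. The following Python sketch (not from the text; the lattice size and trial counts are arbitrary choices) opens each bond of a square lattice with probability p and uses a union–find structure to test whether a conducting path joins the two electrodes:

```python
import random

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def spans(L, p, rng):
    """Open each bond of an LxL square lattice with probability p;
    return True if a cluster connects the left and right electrodes."""
    n = L * L
    LEFT, RIGHT = n, n + 1              # virtual electrode nodes
    parent = list(range(n + 2))
    for y in range(L):
        union(parent, y * L, LEFT)           # electrode touches left column
        union(parent, y * L + L - 1, RIGHT)  # electrode touches right column
    for y in range(L):
        for x in range(L):
            s = y * L + x
            if x + 1 < L and rng.random() < p:
                union(parent, s, s + 1)      # horizontal bond
            if y + 1 < L and rng.random() < p:
                union(parent, s, s + L)      # vertical bond
    return find(parent, LEFT) == find(parent, RIGHT)

def spanning_prob(L, p, trials, seed=0):
    """Monte Carlo estimate of the probability of a spanning path."""
    rng = random.Random(seed)
    return sum(spans(L, p, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for p in (0.40, 0.50, 0.60):
        print(p, spanning_prob(L=40, p=p, trials=200))
```

For a 40 × 40 lattice the spanning probability climbs from near 0 at p = 0.40 to near 1 at p = 0.60, bracketing the critical bond concentration of ½.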
Figure 18.10. A randomly cut network as an example of percolation. The current decreases as more and more links are cut. When the fraction of uncut bonds p drops to the critical bond concentration pc, the current goes to zero. From Zallen, 1983.
The square array can also be viewed as a structural entity with elastic bonds. Its greatest mechanical strength is at p = 1, and its strength declines until at pc the screen disintegrates into separate pieces. This viewpoint is useful in analyzing the sol–gel transition and the liquid–glass transition in amorphous materials. A geometric lattice is not necessary for the application of percolation techniques. A close-packed mixture of glass and metal balls in a beaker is an example of a topologically disordered structure. The structure becomes conducting when the concentration of metal balls exceeds the percolation threshold.
Percolation theory has been used for many applications, including the flow of liquid in a porous medium, the spread (or containment) of disease in a population, the metal–insulator transition in composite materials and the onset of ferromagnetism in dilute arrays of magnets. It is popular in condensed-state physics for its ability to deal with unruly geometries, and because the percolation threshold is a prototype of a phase transition. The localization–delocalization threshold may be modeled by the percolation threshold. Ions that are localized in a closed channel become delocalized in its open configuration. Scaling, discussed in Section 1.2 of Chapter 15, occurs in percolation. For these reasons, percolation appears to be a worthwhile model of the way ions permeate through ion channels.

4.2. Site percolation and bond percolation

The term percolation brings to mind the image of a fluid (hot water) passing through an interconnected network of channels (the spaces between the coffee grounds in a percolator), some of which are blocked. The channel network may be idealized as a two-dimensional honeycomb lattice, with unblocked channels as connected bonds. On a map of the network the contiguous connected bonds form clusters. A cluster that spans the region forms a percolation path or unbounded cluster.

In the context of regular lattices such as the square lattice of Figure 18.10, a lattice or graph is composed of sites and bonds. The sites form the vertices of the graph and the bonds, the links between them. The two basic types of percolation processes are site percolation and bond percolation. To the geometric lattice a statistical two-state property is randomly assigned, converting it into a stochastic geometry. In bond percolation, as in Figure 18.10, a bond is either connected (unblocked) or disconnected (blocked). In site percolation, a site is either connected or disconnected. Adjacent connected sites comprise a cluster.
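Clusters of adjacent connected sites can be identified by a simple flood fill. The sketch below (illustrative code, not from the text) fills the sites of a square lattice with probability p, labels the clusters, and tests for a cluster spanning from the top row to the bottom row:

```python
import random
from collections import deque

def site_clusters(L, p, rng):
    """Fill each site of an LxL square lattice with probability p and
    label clusters of nearest-neighbor filled sites by flood fill."""
    filled = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    label = [[-1] * L for _ in range(L)]   # -1 marks empty/unvisited sites
    clusters = 0
    for y0 in range(L):
        for x0 in range(L):
            if filled[y0][x0] and label[y0][x0] < 0:
                q = deque([(y0, x0)])
                label[y0][x0] = clusters
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < L and 0 <= nx < L
                                and filled[ny][nx] and label[ny][nx] < 0):
                            label[ny][nx] = clusters
                            q.append((ny, nx))
                clusters += 1
    return label

def has_spanning_cluster(L, p, seed=0):
    """True if some cluster touches both the top and bottom rows."""
    label = site_clusters(L, p, random.Random(seed))
    top = {c for c in label[0] if c >= 0}
    bottom = {c for c in label[L - 1] if c >= 0}
    return bool(top & bottom)
```

On the square lattice the site threshold is known to be about 0.593, so for moderate lattice sizes spanning clusters are rare at p = 0.5 and nearly certain at p = 0.7.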
Site-percolation processes frequently depend on concentration or density; the connected sites are normally referred to as filled sites, and the blocked sites as empty sites. One example of site percolation is a ferromagnetic crystal diluted by the random substitution of nonmagnetic atoms for the magnetic ones. The dilute magnet is assumed to be at low temperature, with only nearest-neighbor interactions. Ferromagnetism, the existence of a macroscopically extended cluster of coupled spins, can then occur only when the concentration p of magnetic atoms exceeds the percolation threshold pc for site percolation on the crystal lattice.

When the fraction p of filled sites is small in site percolation, the clusters are of small size. Larger clusters appear as p increases. At a critical fraction, “infinite” clusters appear; for convenience, periodic boundary conditions are assumed. In a finite system, these clusters are represented as spanning clusters that connect opposite sides of the sample. The percolation threshold pc for site percolation on a square lattice is 0.59, compared to the threshold of 0.50 given above for bond percolation on this lattice. As p approaches pc the average cluster size diverges, and close to pc this divergence is represented by a universal critical exponent, such as those we have
discussed in Chapter 15. These critical exponents are related by the analytical scaling properties and fractal dimensionality mentioned there. The statistical distributions of cluster sizes and shapes characterizing the system are important to percolation phenomena. The probability of finding a bonded path between two points A and B falls off exponentially with the distance r between them,

PAB ∝ exp(−r/ξ),

where ξ is the correlation length. Percolation theory tells us that ξ is finite below pc but diverges at pc.

The conductivity σ of a network of resistors like that of Figure 18.10 near the percolation threshold pc is expected to have a singular behavior with a constant exponent t,

σ ∝ (p − pc)^t.    (4.1)

Under this assumption, the mean square of cluster sizes has been fitted as a sum of three exponential time decays plus a constant.38

4.3. Two conductors

Equation 4.1 can be generalized to a mixture of two different conductors, a and b, both with finite conductivity. To examine the effect of percolation we assume that σa ≪ σb. We will focus on three regions, above, below and near the percolation threshold.39 Above the percolation threshold many paths are formed entirely of b, making the a regions irrelevant. The scaling of the conductivity is determined by σb,

σ ≈ σb (p − pc)^t,    (4.2)

where t is the conductivity index. Below the percolation threshold there are no paths entirely through b, and the current must go through parts of a, which determines the conductivity. As the percolation threshold is approached from below, more and more of the a paths are shorted out by clusters of b, resulting in an apparent divergence of the conductivity,

σ ≈ σa (pc − p)^−s.    (4.3)

Close to the threshold both these expressions break down as the conductivity interpolates smoothly between them, with significant contributions from both the good and the poor conductors. The correlation length remains finite at pc, and the power laws 4.2 and 4.3 break down for small values of Δ = |p − pc|.
The conductivity varies little over the small interval pc ± Δ, so that the two estimates can be equated:

σb Δ^t ≈ σa Δ^−s.    (4.4)

Solving for Δ, we find in the crossover region

Δ ≈ (σa/σb)^(1/(s+t)).    (4.5)
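The algebra of this crossover estimate can be verified in a few lines. In the sketch below the conductivities are illustrative values only, and the exponents are typical three-dimensional values (s ≈ 0.7, t ≈ 1.7):

```python
# Verify that the crossover width Delta = (sig_a/sig_b)**(1/(s+t))
# makes the conductivity estimates of Eqs. 4.2 and 4.3 agree at pc +/- Delta.
sig_a, sig_b = 1e-6, 1.0   # poor and good conductivities (illustrative)
s, t = 0.7, 1.7            # typical three-dimensional exponents

delta = (sig_a / sig_b) ** (1.0 / (s + t))

above = sig_b * delta ** t      # Eq. 4.2 evaluated at p - pc = Delta
below = sig_a * delta ** (-s)   # Eq. 4.3 evaluated at pc - p = Delta
assert abs(above - below) <= 1e-9 * above

# Both reduce to the threshold value sig_a**(t/(s+t)) * sig_b**(s/(s+t)),
# a finite conductivity intermediate between the two components.
mixed = sig_a ** (t / (s + t)) * sig_b ** (s / (s + t))
assert abs(above - mixed) <= 1e-9 * mixed
print(delta, above)
```

Since σa < σb, the ratio is less than one and Δ is small, confirming that the crossover region is narrow when the conductivity contrast is large.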
Substituting into 4.4, we obtain

σ(pc) ≈ σa^(t/(s+t)) σb^(s/(s+t)).    (4.6)

Typical values of the exponents s and t are 1.1–1.3 in two-dimensional systems. In three dimensions, t is 1.6–1.7, while s is 0.7. The existence of crossover regions between phases could help explain why electrophysiological measurements do not exhibit the infinities found in exact solutions of classical electrodiffusion (Chapter 7). These methods can be extended to nonlinear conductors, in which the I–V characteristic evolves into a power law.

4.4. Directed percolation

Voltage-sensitive ion channels are asymmetric macromolecules; the membrane midplane is not a plane of reflection symmetry for them. Thus they are vectorial structures, as pointed out for membrane proteins in general by Tian Tsong and Carol Gross.40 The driving force on an ion is likewise directional, as the chemical potential gradient is a vector. As a result, the interaction between the macromolecule and the driving force is a vector-to-vector interaction. Because of this, an examination of directed percolation is warranted. Adding the property of directionality to percolation leads to interesting new properties, such as anisotropic scaling and critical properties that depend on direction.41
Figure 18.11. A configuration of directed bonds on a square lattice. The diagram may be read either as percolation in two-dimensional space or as an evolution of one spatial dimension in time. From Kinzel, 1983.
To introduce direction to the percolation problem, we draw arrowheads on the bonds. These arrows follow a general direction; see Figure 18.11. The square lattice shown has a spatial and a temporal axis, but we will ignore the label “time” for
now. In the configuration shown one can go from A to C or D but there is no path from A to B, since moves counter to the arrows are not allowed. Directed percolation can be illustrated by a disease spread by a strong steady wind in a regular array of trees. When the probability of infecting a neighboring plant is less than the critical concentration, p < pcD, the spread of infection will be arrested. Otherwise there is a finite probability that the disease will spread through the orchard. It is clear from Figure 18.11 that all allowed paths lie in a cone within 45° of the main direction, so that clusters below the percolation threshold will be confined to a teardrop-shaped region extending from the starting point; see Figure 18.12. The characteristic length parallel to the main axis, ξ∥, will, because of the directedness of the bonds, be on average greater than the perpendicular characteristic length, ξ⊥. The scaling is anisotropic. The critical concentration in directed percolation, pcD, will be much greater than the undirected threshold pc.
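The raised threshold is easy to see in simulation. The sketch below (not from the text) grows directed-bond-percolation clusters from a single seed on the square lattice, with time running along the main direction; the value pcD ≈ 0.645 for this lattice, taken from the percolation literature, indeed exceeds the undirected threshold of ½:

```python
import random

def directed_survival(p, tmax, trials, seed=0):
    """Grow directed-bond-percolation clusters from a single seed on the
    square lattice, time running along the main (diagonal) direction.
    Each active site has two forward bonds, each open with probability p.
    Returns the fraction of clusters still active after tmax steps."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        active = {0}                      # transverse coordinates of active sites
        for _ in range(tmax):
            nxt = set()
            for x in active:
                if rng.random() < p:      # bond to one forward neighbor
                    nxt.add(x)
                if rng.random() < p:      # bond to the other forward neighbor
                    nxt.add(x + 1)
            active = nxt
            if not active:                # cluster has died out
                break
        if active:
            alive += 1
    return alive / trials

if __name__ == "__main__":
    print(directed_survival(0.45, 80, 200))  # well below pcD: clusters die out
    print(directed_survival(0.80, 80, 200))  # well above pcD: finite survival
```

Note that p = 0.45 is close to the undirected threshold of ½ yet well below pcD, so essentially all directed clusters die out; at p = 0.8 a substantial fraction survive indefinitely.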
Figure 18.12. The average cluster in directed percolation is anisotropic. From Kinzel, 1983.
The main percolation direction can be considered to be time, as Figure 18.11 shows, since time advances in one direction only and no steps into the past are allowed. In a multidimensional problem, the remaining directions may represent a lattice in space, to model a stochastic process that occurs in discrete time steps. Percolation is a nonequilibrium process and is accompanied by fluctuations in space and time. Models of directed percolation may be used to study autocatalytic chemical reactions, the spread of epidemics and other reaction–diffusion processes. This establishes a connection to the study of critical phenomena in statistical mechanics, and may be helpful in the study of opening and closing of ion channels.

4.5. Percolation in ion channels

An example relevant to our subject of ion channels is the hopping conductivity of ions in a percolation array with a strong field or concentration gradient, so that hopping up the electrochemical potential can be neglected. The open–closed behavior of channels is consistent with the critical conductivity of percolation models. In the closed channel,
the ions are localized to their sites; channel opening delocalizes the ions, allowing them to hop from site to site. Application of percolation theory to ion channels will have to include the case of more than one ion species permeating the channel simultaneously. This problem is considered in polychromatic percolation. In the sodium channel, we may for example consider the simultaneous permeation of Na+ and a second permeant, such as K+ or Ca2+.

5. MOVEMENT OF IONS THROUGH LIQUID CRYSTALS

We now examine the way ions move through liquid crystals. Biological membranes are quasi-two-dimensional lyotropic systems. The lyomesogens of which they are composed, lipid and protein molecules, have two important properties: They contain two or more parts with contrasting properties, hydrophilic and hydrophobic; and they possess molecular flexibility due to a large number of possible conformations of similar energy.42

The easiest way to detect a lyotropic state is by showing its anisotropic optical properties. The birefringence of lyotropic liquid crystals was studied by Otto Lehmann in 1895, who found in their changeability and agility a striking similarity to living cells, which he summarized in his monograph, Liquid Crystals and the Theories of Life.43 In 1982, Mae-Wan Ho and Michael Lawrence viewed living Drosophila embryos under a polarizing microscope. As the 1-mm-long larva crawls, its jaw muscles, body wall, tracheal tracts and Malpighian tubules shimmer in moving colors. The birefringence colors are due to phase lags of the polarized white light at particular wavelengths, caused by the changing alignments of asymmetrical molecules. The nonlinear optical behavior observed is akin to that seen in cholesteric liquid crystals.44 Birefringence measurements in helical smectic phases can yield data on the temperature dependence of tilt angles.45

Lyotropic mesogens, large organic compounds of anisotropic shape, frequently contain alkali cations, Li+, Na+, K+, Rb+, and halogen anions, F-, Cl-, Br-, which play important roles. Here we are interested in the movement of cations through voltage-sensitive ion channels; however, we will also keep in mind the possible structural roles these ions may play in the channels.

5.1. Chiral smectic C elastomers

Liquid crystal elastomers display the combination of the anisotropy of liquid crystals and the rubber elasticity of elastomers, as discussed in Chapter 6, Section 7.2. The complex interplay of the polymer network with the anisotropic order of the liquid crystalline state reveals new aspects of material behavior in these “soft crystals.” We distinguish between main-chain elastomers, in which the mesogenic units are segments of the macromolecular chain, and sidechain elastomers, in which the liquid crystalline units are the side chains of the monomer units. The coupling of the network chains with the liquid crystalline anisotropy produces new physical phenomena. When liquid crystal elastomers are chiral, additional physical properties appear, including ferroelectricity, pyroelectricity, circular dichroism and nonlinear optics coupled to the polymer network. We have already seen the occurrence of ferroelectricity
in the chiral smectic C phase. External fields produce interesting electromechanical and optomechanical effects. Investigations have been made of cholesteric elastomers and chiral smectic C elastomers. In cholesteric elastomers, a dynamic shear perpendicular to the helicoidal axis produces a piezoelectric voltage.46 As we saw in Chapter 17, unwinding of the helicoidal structure causes macroscopic ferroelectricity in the smectic C* phase. In smectic C* elastomers, a mechanical field orients the phase structure toward uniform alignment. Theoretical studies by Helmut Brand predicted piezoelectricity, rotatoelectric effects and nonlinear effects such as frequency doubling.47 Chiral smectic elastomers have been formed into free-standing ferroelectric thin films. Figure 18.13 shows the electro-optical behavior of these aligned elastomers, recorded by M. Brehmer, R. Zentel, G. Wagenblast and K. Siemensmeyer.48 The unsymmetrical switching behavior is explained as a memory effect of the network. Free-standing smectic films are well defined systems that can be probed by a variety of experimental techniques to provide data on phase transitions, surface-induced order and temperature effects.49
Figure 18.13. Unsymmetrical switching in a chiral smectic C elastomer, observed by Brehmer et al., 1994. The optical response to an applied voltage at 2 Hz shows the effect of reorientation of mesogenic groups. From Stein and Finkelmann, 2001.
5.2. Metallomesogens Metallomesogens are metal-containing liquid crystals. Their history goes back to the middle of the 19th Century when a number of soaps, alkali metal salts of fatty acids, were found to exhibit double refraction in aqueous solution. In 1910, Vorländer described thermotropic properties of alkali metal carboxylates exhibiting lamellar phases. Another important group of metallomesogens are the organometallic Schiff base derivatives, containing N=C bonds. Covalent liquid crystal coordination complexes were first reported in 1977.
Metallomesogen research was stimulated by consideration of the possibilities inherent in combining the one- or two-dimensional order of organic mesogens with the unique properties of metal atoms. These materials include compounds with interesting magnetic, electrical, optical and electro-optical properties.50 A class of polymers of particular relevance to ion channels is that of the metallomesogenic polymers. In these, the metallomesogenic core may be incorporated in either the main chain or the sidechain of a polymeric structure. Both lyotropic and thermotropic metallomesogenic polymers have been studied.51 Examples of sidechain lyotropic systems involve metallophthalocyanine derivatives of poly(γ-benzyl-L-glutamate), a helical polymer that forms lyotropic mesophases in concentrated solutions. The metallophthalocyanine rings are attached to the sidechains as dye components. Metal ions were also used as crosslinking agents, with the coordination sites either in the polymeric chain or the side groups. In some of the ion-conducting materials that have been synthesized, ionic conduction can be switched by ultraviolet irradiation. The effects of UV on excitable membranes are reviewed in Chapter 4, Section 7.3.

5.3. Ionomers

Polymers in which strong ionic forces between chains play a dominant role in controlling the properties of the material are called ionomers. These materials, which have been studied since 1965, are defined as polymers in which the bulk properties are governed by ionic interactions in discrete regions of the material, called the ionic aggregates. The properties of ionomers are reviewed in a recent book.52

5.4. Protons, H bonds and cooperative phenomena

Hydrogen ions are unique because of their small mass, which allows them to tunnel through potential barriers. This property is responsible for hydrogen bonding.
It is also responsible for the unique properties of subsystems of interacting protons, such as those of water and ice, transitions in hydrogen-bonded ferroelectrics, and transitions in organic compounds. In these cases the interactions within the proton subsystem prevail over interactions between protons and heavier atoms, determining the macroscopic properties of the system as a whole.53

NOTES AND REFERENCES

1. Lui Lam and Jacques Prost, Editors, Solitons in Liquid Crystals, Springer, New York, 1992.
2. A. R. Bishop, J. A. Krumhansl and S. E. Trullinger, Physica D 1:1-44, 1980.
3. R. Ribotta, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 265-292.
4. Figueiredo Neto, Ph. Martinot-Lagarde and G. Durand, J. Phys. Lett. (Paris) 45:L793, 1984.
5. N. A. Clark and S. T. Lagerwall, Appl. Phys. Lett. 36:899, 1980.
6. M. Yamashita and H. Kimura, J. Phys. Soc. Jpn. 51:2419, 1982.
7. J. L. Fergason and G. H. Brown, J. Am. Oil Chem. Soc. 45:120-127, 1968.
8. N. M. Chao and S. H. White, Mol. Cryst. Liq. Cryst. 88:127, 1982.
9. A. C. Scott, Neurophysics, Wiley, New York, 1977; Xin-yi Wang (also Wang Xin-yi), Phys. Lett. A 112:402, 1985; ___, Phys. Rev. A 32:3126, 1985.
10. P. E. Cladis and W. van Saarloos, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer-Verlag, New York, 1992, 110-150.
11. L. Lam, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 9-50.
12. A. S. Davydov, Solitons in Molecular Systems, D. Reidel, Dordrecht, 1985, 134-163. With kind permission of Springer Science and Business Media.
13. Alwyn Scott, Nonlinear Science: Emergence and Dynamics of Coherent Structures, Second Edition, Oxford University, 2003, 74.
14. Davydov, 242-259. With kind permission of Springer Science and Business Media.
15. R. Ribotta, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 265-292.
16. J. L. Fergason and G. H. Brown, J. Am. Oil Chem. Soc. 45:120-127, 1968; Lev M. Blinov, Electro-Optical and Magneto-Optical Properties of Liquid Crystals, John Wiley, Chichester, 1983, 82f.
17. R. Pindak, C. Y. Young, R. B. Meyer and N. A. Clark, Phys. Rev. Lett. 45:1193, 1980.
18. M. Yamashita, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 293-325.
19. P. E. Cladis, in Kitzerow and Bahr, 481-493. With kind permission of Springer Science and Business Media.
20. V. I. Krinsky, in Self-Organization: Autowaves and Structures Far from Equilibrium, edited by V. I. Krinsky, Springer-Verlag, Berlin, 1984, 9-19, with kind permission of Springer Science and Business Media; I. M. Starobinets and V. G. Yakhno, op. cit., 98-102.
21. K. Shirane, T. Tokimoto, K. Shinagawa and Y. Yamaguchi, Ferroel. 141:297-305, 1993; T. Tokimoto and K. Shirane, Ferroel. 146:73-80, 1993.
22. Copyright 1993 from T. Tokimoto and K. Shirane, Ferroel. 146:73-80, 1993. Reproduced by permission of Taylor & Francis Group, LLC., http://www.taylorandfrancis.com
23. K. Shirane, T.
Tokimoto and H. Kushibe, Physica D 90:306-312, 1996.
24. Copyright 1999 from T. Tokimoto, K. Shirane, and H. Kushibe, Ferroel. 220:273-289, 1999. Reproduced by permission of Taylor & Francis Group, LLC., http://www.taylorandfrancis.com
25. Copyright 1999 from A. Gordon, B. E. Vugmeister, H. Rabitz, S. Dorfman, J. Felsteiner and P. Wyder, Ferroel. 220:291-304, 1999. Reproduced by permission of Taylor & Francis Group, LLC., http://www.taylorandfrancis.com
26. Pradip Das and W. H. Schwarz, Phys. Rev. E 51:3588-3612, 1995.
27. R. B. Meyer, Phys. Rev. Lett. 22:918-921, 1969.
28. D. Guillon and A. Skoulios, Mol. Cryst. Liq. Cryst. 38:31, 1977.
29. L. A. Beresnev, L. M. Blinov and E. I. Kovshev, Dokl. Biophys. 265:111-114, 1982. Translated from Dokl. Akad. Nauk SSSR 265:210-213, 1982; A. G. Petrov, A. T. Todorov, B. Bonev, L. M. Blinov, S. V. Yablonsky, D. B. Fulachyus and N. Tvetkova, Ferroel. 114:415-427, 1991.
30. P. E. Cladis, Phys. Rev. Lett. 35:48-51, 1975; P. E. Cladis, R. K. Bogardus, W. B. Daniels and G. N. Taylor, Phys. Rev. Lett. 39:720-723, 1977.
31. Petrov, 328. Reprinted from A. G. Petrov and I. Bivas, Progress in Surface Science 16(4):389-512, Copyright 1984, with permission from Elsevier.
32. R. M. Raphael, A. S. Popel and W. E. Brownell, Biophys. J. 78:2844-2862, 2000.
33. P. J. Flory, J. Am. Chem. Soc. 63:3083-3090, 1941.
34. W. H. Stockmayer, J. Chem. Phys. 11:45-55, 1943.
35. Muhammad Sahimi, Applications of Percolation Theory, Taylor & Francis, 1994, 243-253; Giorgio Careri, in Correlations and Connectivity: Geometric Aspects of Physics, Chemistry and Biology, edited by H. Eugene Stanley and Nicole Ostrowsky, Kluwer Academic, Dordrecht, 1990, 262-265.
36. Richard Zallen, in Percolation Structures and Processes, edited by G. Deutscher, R. Zallen and Joan Adler, Adam Hilger, Bristol and The Israel Physical Society, Jerusalem, 1983, 3-16.
37. Zallen, 5.
38. C. D. Mitescu and J. Roussenq, in Percolation Structures and Processes, edited by G.
Deutscher, R. Zallen and Joan Adler, Adam Hilger, Bristol and The Israel Physical Society, Jerusalem, 1983, 81-100.
39. Joseph P. Straley, in Percolation Structures and Processes, edited by G. Deutscher, R. Zallen and Joan Adler, Adam Hilger, Bristol and The Israel Physical Society, Jerusalem, 1983, 353-365.
40. Tian Y. Tsong and Carol J. Gross, in Bioelectrodynamics and Biocommunication, edited by Mae-Wan Ho, Fritz-Albert Popp and Ulrich Warnke, World Scientific, Singapore, 1994, 131-158.
41. Wolfgang Kinzel, in Percolation Structures and Processes, edited by G. Deutscher, R. Zallen and Joan Adler, Adam Hilger, Bristol and The Israel Physical Society, Jerusalem, 1983, 425-445.
42. A. G. Petrov, 85, 21.
43. O. Lehmann, Flüssige Kristalle und die Theorien des Lebens, Barth, Leipzig, 1906, 1908.
44. Mae-Wan Ho, The Rainbow and the Worm: The Physics of Organisms, Second Edition, World Scientific, Singapore, 1998.
45. I. Muševič, R. Blinc and B. Žekš, The Physics of Ferroelectric and Antiferroelectric Liquid Crystals, World Scientific, Singapore, 2000, 215-222.
46. P. Stein and H. Finkelmann, in Chirality in Liquid Crystals, edited by H. Kitzerow and C. Bahr, Springer, 2001, 433-446. With kind permission of Springer Science and Business Media.
47. H. Brand, Makromol. Chem., Rapid Commun. 10:441, 1989.
48. M. Brehmer, R. Zentel, G. Wagenblast and K. Siemensmeyer, Macromol. Chem. Phys. 195:1891-1904, 1994. Copyright Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.
49. Muševič et al., 2000, 189-199.
50. J. L. Serrano, in Metallomesogens: Synthesis, Properties, and Applications, edited by José Luis Serrano, VCH, Weinheim, 1996, 1-21; M. Blanca Ros, op. cit., 419-480.
51. L. Oriol, in Serrano, op. cit., 193-231.
52. Adi Eisenberg and Joon-Seop Kim, Introduction to Ionomers, John Wiley & Sons, New York, 1998.
53. G. Aiello, M. S. Micciancio-Giammarinaro, M. B. Palma-Vittorelli and M. U. Palma, in Cooperative Phenomena, edited by H. Haken and M. Wagner, Springer, New York, 1973, 395-403.
CHAPTER 19
SCREWS AND HELICES
In this chapter we begin to deal with the question, How does the change of a voltage across the membrane become the trigger for a configurational change of the channel that drastically alters its conduction properties? We begin with a critical analysis of a conventional model, the screw-helical gating model. Continuing the discussion of critical phenomena of Chapter 15, we apply the concept of spontaneous order in open systems to the membrane and its ion channels. We review the model of a dissipative structure in excitable membranes, which was proposed long before the isolation of voltage-sensitive ion channels. We then examine the structure, electrical properties and dynamics of the α-helix. This interesting structure, which constitutes the major portion of an ion channel, is capable of carrying solitons, and of conducting electrons and ions.

1. THE SCREW-HELICAL GATING HYPOTHESIS

We recall that the S4 transmembrane segments of sodium, potassium and calcium channels contain four to eight positively charged residues, separated from one another by two uncharged residues. In the screw-helical gating hypothesis, these voltage sensors are postulated to move by advancing and rotating like a screw, advancing one unit of helical pitch for every 360° of rotation. In conventional terms the question is posed, By what mechanism does the activation gate couple with the voltage? After DNA sequencing had shown that the 1:3 spacing of positive charges on the S4 subunit was a conserved feature of voltage-sensitive ion channels, it became clear that this transmembrane segment must somehow interact with the electric field. In 1986 H. Robert Guy and P. Seetharamulu1 and William Catterall2 proposed that, under a depolarization, the electric charges on S4 will experience an outward force that will drive the segment outward in a helical motion.
Negative residues in subunits S2 and S3 partially neutralize the positive charges in S4, and could help stabilize the S4 subunit at discrete positions. In a 60° rotation the S4 would advance 0.45 nm outward, and each positive charge would move up to pair with a different negative charge on the neighboring segments. Three such steps would transfer three unit positive charges across the membrane, projecting an arginine residue 1.35 nm into the external aqueous region. In this way the regularity of the spacing of
CHAPTER 19
the positive residues would provide stable positions of the spiral ribbon of charges within the channel. Experiments confirmed the outward movement of the S4 into the external region. The screw-helical model has been reviewed by Richard D. Keynes and Fredrik Elinder; see Figure 19.1.3
Figure 19.1. Diagram of the hypothetical screw-helical outward movement of an S4 segment in a Shaker K+ channel. The positive charges of the R and K residues are on flexible sidechains about 0.6 nm long. The negatively charged E and D residues shown on the left are on the S2 and S3 segments. From Keynes and Elinder, 1999.
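The arithmetic of the screw-helical step can be checked in a few lines. This is only a sketch of the geometry quoted above (pitch 0.54 nm, 3.6 residues per turn, a charged residue every third position); it makes no claim about the mechanism itself:

```python
# Geometry of one screw-helical gating step, using only the alpha-helix
# parameters quoted in the text: pitch 0.54 nm, 3.6 residues per turn,
# one positive charge every third residue on S4.

PITCH_NM = 0.54        # rise per full 360-degree turn
RES_PER_TURN = 3.6     # residues per turn of an alpha helix

rise_per_residue = PITCH_NM / RES_PER_TURN        # ~0.15 nm
step_rise = 3 * rise_per_residue                  # one step spans 3 residues
step_rotation = (3 * 360.0 / RES_PER_TURN) % 360  # 300 deg, i.e. a net -60 deg

# Three such steps: three unit charges transferred, and the outermost
# arginine projected 3 * 0.45 nm = 1.35 nm into the external region.
total_rise = 3 * step_rise
```

The net 60° rotation per step quoted in the text appears because three residues correspond to 300° of helical rotation, which is 60° short of a full turn.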
The screw-helical model is an attempt to reconcile experimentally derived facts about ion channels with the pore–gate–filter picture of the channel. The charge movements postulated in the model can account for the measured gating charge. Studies of the accessibility of engineered single cysteine residues to internal and external thiol-specific reagents support a rigid-body motion of S4 across the membrane. Histidines substituted at specific sites transport protons across the membrane at voltages that allow the voltage sensor to shuttle between resting and activated states. Voltage-clamp fluorometry measurements demonstrate that S4 movement generates the gating current. Fluorescence resonance energy transfer results suggest that S4 twists in a 180° rotation. The motion of S4 thus appears to involve rotary as well as outward components.4

Despite these successes in explaining experimental data, many of the predictions of the screw-helical model have not been analyzed quantitatively. We have already discussed, in Chapter 14, the difficulties inherent in trying to explain a molecular process by a macroscopic mechanism. Let us assume for the moment that the analogy between an α-helical segment and an ordinary screw is valid. Could you push a bolt through a nut by an axial force alone? No, you would need a wrench or screwdriver, or at least a twist with your fingers, to provide a torque. So the idea that a molecular segment could be driven through its neighboring segments by an axial force on its charges due to an electric field seems unlikely even from a macroscopic point of view. It is all the more unlikely at the
SCREWS AND HELICES
molecular scale because the vastly increased surface-to-volume ratio magnifies the frictional force. Furthermore, if the segment is tilted with respect to the membrane normal, and bent at the proline residue, as shown in the studies of Guy and Durell,5 the force on the charges would not be along the helical axis but at an angle to it.

Can we accept the assumption of the model that the segment is analogous to a solid screw? A rigid body is, according to Feynman, “an object in which the forces between the atoms are so strong, and of such character, that the little forces that are needed to move it do not bend it. Its shape stays essentially the same as it moves about.”6 But the α-helix, a polypeptide chain with its loops held together by weak hydrogen bonds, is not a rigid but a soft body. A force or torque will not be transmitted through it unchanged. Frictional interaction at the molecular level will hinder rigid motion, and the postulated molecular screw would be likely to jam and buckle like a strand of cooked spaghetti. So the assumption that the S4 subunit can advance in a rigid screwlike motion appears to be untenable.

Let us also consider the end connections of the segment; the S4 subunit is not an isolated short segment of helix, but an integral part of a long polypeptide chain. It is connected to neighboring loops at the outer end and at the cytoplasmic end. If the S4 segment were to advance outward and rotate rigidly, it would push and twist the outer loop while pulling and twisting the inner loop.

There is also a problem with the outward force on the positive charges postulated by the model as a result of a depolarization. We know that a depolarization need not change the internal voltage from negative to positive to activate the channel; a small decrease in negativity is enough. There is no outward force in that case, only a decrease in the inward force. Can a decrease in the inward force create an outward movement?
It cannot; only an outward force can push the segments out. Although the model claims to explain the way an electric field changes the conductance of the channel, we must look for an alternative. It may well be that the stochastic opening of an ion channel is inherently a quantum effect, and so unpredictable in principle. In that case, theory at best can only predict the probability of a channel opening rather than the event itself. A clue to an alternative is the concept of cooperativity, invoked by Keynes and Elinder.

2. ORDER AND ION CHANNELS

We now refer to Chapter 15, which deals with the physics of order and disorder, and enquire into the connections between critical phenomena and voltage-sensitive ion channels.

2.1. Threshold responses in biological membranes

While in condensed state physics spatial order is of primary importance, that role in biology is played by functional order, the ensemble of correlations between biochemical and biophysical events. The correlations in functional order are formed not only in space but also between the times at which different events occur. Since these events
occur at the molecular level, the time correlations between biochemical and biophysical events must be of a statistical nature. Because of the many significant events occurring in a cell, these correlations possess an intrinsic complexity that makes description of them difficult. An example of this complexity is the chain of biochemical events that constitute cellular metabolism. By partitioning space into discrete compartments, cells and organelles, nature has eliminated certain conflicting interactions and correlations. The biological membrane is the major structure in this sequestration of function, providing the substrate for the selective correlation between primary events to produce significant subsequent events.7 The membrane acts as a fluid barrier that is able to provide selective vectorial fluxes through the surface of a cell or organelle. While thermodynamic equilibrium would require equal concentrations of all molecular species on the two sides of the membrane, active transport maintains large concentration differences at the cost of metabolic energy. By reducing the spatial dimensions from three to two, membranes facilitate kinetic processes along their surfaces, as in G proteins. Membranes containing ligand-gated ion channels—such as the cation channels of the transient receptor potential superfamily,8 which help regulate cells of almost every type and have been linked to human diseases—respond to threshold concentrations of certain molecules or ions. Our focus here is on the membrane as an electrical insulator with embedded voltage-sensitive ion channels that selectively conduct ions of certain species within a certain range of temperatures, in a dynamic response to electrical, mechanical, chemical or thermal stimuli. Biological membranes function at a molecular scale, statistically inducing a facilitated correlation between events.
For excitable membranes, the functional correlations are the ionic currents traversing the membrane, which depend on the transitions in the voltage-sensitive ion channels. These currents, on the millisecond time scale, are the statistical outcome of the cooperative changes in the molecular conformation of the ion channels. It is significant that the protein molecules that constitute the ion channels make systematic use of the hydrogen bond, a weak bond that links the loops of the helical segments of the molecule. Because the molecule is held together by an energy that is not much greater than thermal, its conformation can change in response to small electrical variations. The fact that order can arise from fluctuations in dissipative systems suggests that the opening or closing of an ion channel can be driven by a thermal fluctuation.

2.2. Mean field theories of excitable membranes

Goldman’s constant field approximation, which as we saw in Chapter 7 replaces the electric field E(x) across the membrane with its average, Vm/L, is an example of a mean field theory, discussed in Chapter 15. This term also applies to the Hodgkin–Huxley formulation, which builds upon the Goldman–Hodgkin–Katz equation introduced in Chapter 8. A variant of the mean-field theory approach is the equal spatial division of the voltage in barrier-and-well models, discussed in Chapter 14.
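As a concrete instance of the constant-field approximation, the Goldman–Hodgkin–Katz voltage equation can be evaluated directly. The concentrations and permeability ratios below are illustrative textbook values for squid axon, not numbers taken from this chapter:

```python
import math

# Goldman-Hodgkin-Katz voltage equation under the constant-field
# (mean-field) approximation E(x) = Vm / L.  Concentrations (mM) and
# permeability ratios are illustrative squid-axon textbook values.

R, T, F = 8.314, 279.0, 96485.0   # J/(mol K), K (~6 deg C), C/mol

def ghk_voltage(PK, PNa, PCl, Ko, Ki, Nao, Nai, Clo, Cli):
    """Resting potential (volts) from the GHK equation."""
    num = PK * Ko + PNa * Nao + PCl * Cli
    den = PK * Ki + PNa * Nai + PCl * Clo
    return (R * T / F) * math.log(num / den)

vm = ghk_voltage(PK=1.0, PNa=0.04, PCl=0.45,
                 Ko=20.0, Ki=400.0, Nao=440.0, Nai=50.0,
                 Clo=560.0, Cli=52.0)     # about -57 mV

# The "average" field of the approximation, for an assumed 5 nm membrane:
field = vm / 5e-9                         # magnitude of order 10^7 V/m
```

The last line shows why the resting membrane is described later in the chapter as sitting under a very high electric field: tens of millivolts across a few nanometres is of the order of ten million volts per metre.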
2.3. Constant phase capacitance obeys a power law

We have seen that Hodgkin and Huxley assumed power relationships with integer powers for the fast (m³h) and slow (n⁴) conductances of an excitable membrane. Because the powers chosen are not unique, as pointed out in Chapter 9, we cannot consider these relationships to be established power laws. A better example of a critical exponent in the field of axonal membrane excitability has been known for some time. Decades before the seat of molecular excitability was identified in protein molecules, such an exponent was found in the analysis of membrane impedance measurements. In the experiments of Kenneth Cole, a constant phase angle capacitance was discovered and modeled by a power law. We discussed this and other aspects of the modeling of axonal impedance in Chapters 10 and 11. The constant phase angle capacitance has been fitted by a capacitance that is proportional to a fractional power, 0.90, of the frequency. On a Cole–Cole plot, the impedance appears as a semicircle, depressed by an angle of 9°. We have already seen in Chapters 16 and 17 that this type of dielectric behavior is also observed in ferroelectric crystals and liquid crystals.

2.4. The open channel is an open system

A voltage-sensitive ion channel in the open state is an open system. Although this statement sounds like a tautology, it is not, since the word open is used in more than one sense; see Section 5.3 of Chapter 5. When an ion channel is open, it is a conduit for ions, which enter at one surface and pass through to the opposite surface. In their motion through the channel, the ions interact with it, dissipating energy and raising its energy level far above the equilibrium level. Since the ions move predominantly in one direction, the channel has lost some of its symmetry and has gained a directional order. An exception to the statement that the open channel is an open system is for a channel in a gating-current experiment.
Here care is taken to eliminate ion flows in order to measure the minuscule current associated with the voltage-dependent structural transition that precedes and enables ion flow. As we have seen, a subthreshold stimulus produces a localized response, whereas a threshold stimulus produces a propagated action potential. Order on a macroscopic scale arises in open systems whose organization is coupled to energy dissipation, as discussed in Chapter 15 and emphasized by Blumenthal, Changeux and Lefever and by Prigogine and Nicolis.9 This order transition, subject to nonequilibrium constraints, is due to the amplification of thermal fluctuations.

2.5. Self-similarity in currents through ion channels

When a patch-clamp recording of the single-channel current across an excitable membrane is played back at low resolution, the times during which the channel was in its highly conducting (open) state and its poorly conducting (closed) state can be seen. Figure 19.2 shows a current recorded by K. Gillis, L. Falke and S. Misler10 from an ATP-sensitive potassium channel on the membrane of a pancreatic beta cell.11
Figure 19.2. Ion current recorded by Kevin D. Gillis, Lee C. Falke and Stanley Misler from an ATP-sensitive potassium channel at low (10 Hz) and high (1 kHz) resolution, showing statistical self-similarity in time. From Liebovitch and Tóth, 1990.
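A hedged numerical sketch (not the authors' analysis) of why the record in Figure 19.2 repeats its pattern across resolutions: dwell times drawn from a power-law distribution have no characteristic time scale, so their statistics look the same decade after decade.

```python
import random

# Sketch (illustrative, not the authors' analysis): dwell times drawn from
# a power-law distribution have no characteristic scale, so the survival
# fraction falls by the same factor per decade of time -- the statistical
# self-similarity seen when a record is replayed at 10 Hz and 1 kHz.

random.seed(1)

T_MIN = 1e-4   # shortest resolvable closed time (s); illustrative value

def dwell():
    """Sample t with P(t > T) = T_MIN / T for T >= T_MIN (exponent 1)."""
    return T_MIN / random.random()

times = [dwell() for _ in range(100_000)]

def survival(T):
    return sum(t > T for t in times) / len(times)

# Scale invariance: each decade of time cuts the survival fraction ~10x.
r1 = survival(2e-4) / survival(2e-3)
r2 = survival(2e-3) / survival(2e-2)
```

An exponential (single-rate Markov) dwell-time distribution would fail this comparison: beyond its one characteristic time, its survival fraction collapses far faster than a fixed factor per decade.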
If one of these open or closed times is now played back at higher resolution, it is seen to consist of many briefer open and closed times. Because the pattern of open and closed times at the higher resolution approximately repeats the pattern at low resolution, we can say that the current in the channel is self-similar in time; it exhibits fractal kinetics. The property of self-similarity can also be examined by another technique. The number of times an ion channel is closed in a given range (bin) of closed times can be plotted in a closed-time histogram. While these experimental results hint at an alternative view of the opening and closing transitions in ion channels, we have not related these findings to the microscopic structure of the channels. We will return to this problem in Chapter 20.

3. FERROELECTRIC BEHAVIOR IN MODEL SYSTEMS

Before we proceed to the ferroelectric–superionic model of voltage-sensitive ion channels, let us consider three model systems in which ferroelectricity has been observed or postulated: Langmuir-Blodgett films, bacteriorhodopsin and microtubules.

3.1. Ferroelectricity in Langmuir-Blodgett films

Recent studies of ferroelectric films shed some light on the question of the minimum size of a ferroelectric phase. In Langmuir-Blodgett films, a film of 10-30 layers of a random copolymer of vinylidene fluoride and trifluoroethylene exhibited conductance switching and thermal hysteresis in its capacitance, clear indications of a first-order ferroelectric phase transition. A pyroelectric current was measured perpendicular to the
surface of films as thin as 1.0 nm, demonstrating a clear hysteresis loop, but without a change of sign in the remanent polarization. Thus while the existence of a minimum critical size for the bulk transition can be interpreted as due to suppression by surface depolarization energies, these films must be considered to be two-dimensional ferroelectrics.12 These observations could be adapted to lipid films containing a random distribution of voltage-sensitive ion channels with ferroelectric properties. The channels may be modified by the presence of neighboring structures, such as the submembrane cortex mentioned in Section 3.3 of this chapter. One implication is clear: Since ferroelectricity has been observed in two-dimensional systems considerably thinner than biological membranes, size limitations cannot be considered to preclude ferroelectricity in ion channels.

3.2. Observations in bacteriorhodopsin

The application of time domain dielectric spectroscopy and differential scanning calorimetry on oriented films of the purple membrane of the bacterium Halobacterium salinarium showed that these membranes exhibited ferroelectric liquid crystal behavior. The low-frequency spectroscopy data exhibited a strong dependence of dielectric permittivity on temperature and applied electric field. The calorimetry results showed an endothermic process at about 18°C. Figure 19.3 shows the dependence of the dielectric strength on the applied electric field at 25°C.13
Figure 19.3. Dependence of the dielectric permittivity of an oriented bacteriorhodopsin membrane on the applied dc electric field. The external bias field was changed from () 0 to +35 kV/cm, (b) +35 to -35 kV/cm and (×) -35 to +35 kV/cm. From Ermolina et al., 2001.
These results suggested a transition controlled by temperature and electric field. The dielectric measurements showed a strong dependence of dielectric permittivity on the amplitude and direction of the electric field. The results were interpreted as a soft mode relaxation process identified as a Smectic C*–Smectic A phase transition, which, as we have seen in Chapter 15, is a ferroelectric transition. In contrast to our interpretation of voltage-sensitive ion channel results as located in the protein, Ermolina and colleagues explain this effect as an interaction between lipid and protein molecules.

3.3. Ferroelectricity in microtubules

Microtubules play many important roles in the body and their physicochemical properties are of great interest. These thick cytoskeletal filaments are polymers of two very similar proteins, α-tubulin and β-tubulin. Each consists of two β-sheets flanked on each side by α-helices. The building block of a microtubule is a heterodimer of α-tubulin and β-tubulin whose two C termini are highly negative. A microtubule is a hollow helical structure formed by the polymerization of these heterodimers; see Figure 19.4.14
Figure 19.4. A microtubule is a helical structure consisting of a set of vertical columns of tubulin dimers wrapped around a hollow interior. A typical microtubule has 13 of these columns, called protofilaments. From Tuszynski and Kurzynski, 2003.
Microtubules act as struts in the cytoplasm, determining cell shape and the positioning of organelles. They are involved in signal transduction and axonal transport. In geometric arrangements they form the inner structures of cilia and flagella. In cell division, they form the mitotic spindles that segregate the chromosomes. Microtubules provide communication between a cell’s nucleus and its exterior.15 Microtubules perform their functions by controlled assembly and disassembly and by binding with microtubule-associated proteins. Microtubule assembly depends on the supply of energy-carrying molecules of guanosine triphosphate. Tubulin heterodimers attach to other dimers to form oligomers that elongate in protofilaments. The protofilaments connect by weak lateral bonds to form a sheet that wraps into the cylindrical microtubule. Due to their polymerization and depolymerization, microtubules are in a continual state of flux, a dynamic instability. The coupling of elastic and electric degrees of freedom accounts for the piezoelectricity observed in microtubules. A nonlinear bioelectret may exhibit ferroelectric hysteresis, introducing memory and irreversibility to a system. Conformational states of tubulin dimers in microtubules are believed to be coupled to dipolar charge distributions. Studies by J. 
Andrew Brown and Jack Tuszynski suggest that the functioning of microtubules involves ferroelectric ordering.16 Microtubules are present in the neuroplasmic lattice found in the submembranous region of axonal membranes.17 This filamentous structure, also called the submembrane cortex, may play a role in excitability, as Tasaki and Metuzals have proposed.18 Ion conduction noise measured in axolemmal material suggests a conducting matrix in series with membrane ion channels.19 Gen Matsumoto and collaborators demonstrated that tyrosinated tubulin is necessary to the maintenance of membrane excitability in squid axons.20 While the central role in excitability is that of the ion channels, that of microtubules in the cortical layer of the axon outside the channels may be a valuable clue as to the relation between structure and function. Roger Penrose has proposed a central role for microtubules in brain function.21

4. SIZING UP THE CHANNEL MOLECULE

The phenomenological explanation of channel function that combines the concepts of ferroelectricity and superionic conduction, the ferroelectric–superionic transition model, has already been discussed in Chapter 16, Section 6.2. In ferroelectrics, the transition temperature depends on the electric field, rising as the field is increased, as discussed in Chapter 16, Section 2.3. Thus it is possible to switch from the ferroelectric state to a nonpolar state by lowering the applied field, i.e., by a depolarization. For this it is necessary that the physiological temperature of the organism lie between the transition temperature at zero field and the transition temperature at the high (resting) field.
In this way a reduction in the resting field of sufficient magnitude, a threshold depolarization, must lead to a phase transition from a ferroelectric to a nonpolar state in the ion channel.22 However, applying the concept of ferroelectricity to ion channels has been thought to lead to certain difficulties, in particular involving their size.
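The switching condition just stated can be made concrete with a toy calculation. The linear field dependence of the transition temperature and every number below are illustrative assumptions, not values from the text:

```python
# Toy version of the ferroelectric-superionic switching condition.  Assumed
# (illustrative) linear rise of the transition temperature with field:
#   Tc(E) = TC0 + SLOPE * |E|
# The channel is in its polar (resting) phase while T_body < Tc(E).

TC0 = 300.0       # K, assumed zero-field transition temperature
SLOPE = 1.0e-6    # K per (V/m), assumed dTc/dE
T_BODY = 310.0    # K, physiological temperature
E_REST = 1.4e7    # V/m, resting field (~70 mV across an assumed 5 nm membrane)

def tc(E):
    return TC0 + SLOPE * abs(E)

polar_at_rest = T_BODY < tc(E_REST)     # True: ferroelectric at rest

# Field at which Tc(E) falls to body temperature; any depolarization that
# takes |E| below this value triggers the transition.  This is exactly the
# requirement that T_body lie between Tc(0) and Tc(E_rest).
E_threshold = (T_BODY - TC0) / SLOPE
```

With these made-up numbers, a depolarization reducing the field from 1.4 × 10⁷ V/m to below 1.0 × 10⁷ V/m (roughly a 20 mV depolarization across 5 nm) would cross the threshold.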
4.1. The size problem in crystalline ferroelectrics

Because ferroelectricity was first studied in crystalline solids23 and only later found to exist in polymers,24 liquid crystals25 and Langmuir-Blodgett films,26 analyses were generally made under assumptions corresponding to the crystalline state. One of the questions that arose was, What is the size of the smallest crystal that can support a ferroelectric phase? B. T. Matthias put it this way, referring to the space requirement for memory:27

... once it shrinks below a certain limit, you no longer will get ferroelectricity. However, the one thing we know for certain, is that a crystallite size must have at least about 100 Å [10 nm] cube edge in order to be ferroelectric.
For one crystalline ferroelectric, powdered lead titanate, this minimum dimension turned out to be 12.6 nm, considerably larger than an Na+ channel, of dimensions about 4-6 nm. We must therefore ask, Is an ion channel large enough to become a ferroelectric phase? The experimental conditions in the two cases, a crystalline powder and a glycoprotein embedded in a membrane, are of course not comparable. The powder particles are randomly oriented crystallites, while the membrane proteins are embedded in a lipid bilayer, aligned in a definite orientation and placed under a very high (resting) electric field. According to the ferroelectric–superionic transition hypothesis, the channel is not ferroelectric at zero field, the condition under which the powder particles were measured, but it is ferroelectric at the field of the resting potential. The same material may exhibit different critical sizes under different experimental conditions.28 Because the transition temperature is a function of electric field, the critical size must also depend on field. These considerations thus do not rule out ferroelectric ion channels but emphasize the proximity of the resting channel to instability, and thus help clarify the macroscopic mechanism by which a depolarization, or even a fluctuation, can induce a phase transition.29

4.2. Size is a parameter

Let us examine the problem of the sizes of membrane proteins, excitable and nonexcitable, in some more detail. Francis Crick has pointed out that intricate three-dimensional biological structures such as globular proteins are “always bigger than one might naively expect.”30 Could it be that molecular size is a parameter critical to the function of the molecule?
For particles and cylinders, Landau theory predicts that ferroelectricity is suppressed at small size and that a size-driven phase transition exists.31 Collective phenomena, such as ferroelectric phase transitions, require long-range forces, which may help account for the surprisingly large size of some biological molecules, including voltage-sensitive ion channels.
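A hedged sketch of such a size-driven transition follows. The phenomenological form of Tc(d), the value of D0, and the field-induced shift are illustrative assumptions, not results from the text; only the bulk Curie point of lead titanate and the 12.6 nm figure quoted above are literature values:

```python
# Hedged sketch of a size-driven transition.  Assume (illustrative
# Landau-type form, not from the text) that surface/depolarization effects
# lower the transition temperature of a particle of diameter d:
#   Tc(d) = TC_BULK * (1 - D0 / d)
# so the critical diameter at temperature T is d_c = D0 / (1 - T / TC_BULK).

TC_BULK = 763.0   # K, bulk Curie point of lead titanate (literature value)
D0 = 7.6e-9       # m, chosen so that d_c(300 K) ~ 12.6 nm, as quoted above

def critical_size(T, tc_bulk=TC_BULK):
    return D0 / (1.0 - T / tc_bulk)

d_zero_field = critical_size(300.0)   # ~12.5 nm: too big for a 4-6 nm channel

# Raising the effective transition temperature (as a field does, Ch. 16)
# shrinks the critical size in this picture; the +200 K shift is arbitrary.
d_with_field = critical_size(300.0, tc_bulk=TC_BULK + 200.0)
```

The qualitative point survives the crude assumptions: because the critical size depends on the transition temperature, and the transition temperature depends on field, a channel under the high resting field need not obey the zero-field size limit measured in powders.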
5. THE DIPOLAR ALPHA HELIX

As we descend to the molecular level, we note that a ubiquitous feature of voltage-sensitive ion channels is the presence of α-helices that extend through the membrane. It is reasonable to surmise that they play an important role in excitability, gating and permeation. A moving configurational transition in an ion channel may be considered to travel along the helical structure as a soliton.32 Let us therefore examine the structure and dynamics of the α-helix in greater detail.

A polypeptide chain is composed of amino acids linked by peptide bonds. This produces a flexible structure of repeated peptide groups of four atoms, HNCO, linked by an α-carbon to which an R group and an H are attached. The atoms of each peptide group lie in a single plane. A dipeptide segment is shown in Figure 19.5.33
Figure 19.5. Two peptide groups (shaded) as a segment of a polypeptide. From Davydov, 1985.
5.1. Structure of the α-helix

An α-helix is held together by three interlacing chains of hydrogen bonds between peptide groups. This arrangement is illustrated in Figure 19.6.
Figure 19.6. The three chains of hydrogen bonds (I, II, III) connecting each peptide group to its fourth-nearest neighbor group. From Davydov, 1985.
Here the peptide groups are represented by ellipses and the three H-bond chains connecting them are numbered I, II and III. The equilibrium positions of the peptide groups of an α-helix with its axis on the z axis may be represented mathematically by a set of radius vectors Rnα. The components of the vectors are Xnα, Ynα and Znα, where n is an index counting unit cells of peptide groups and α = I, II, III labels the three chains.
Xnα = R cos[(2π/p)(3n + α)],   Ynα = R sin[(2π/p)(3n + α)],   Znα = (a/p)(3n + α)      (5.1)
where R = 0.28 nm is the radius from the axis to the centers of the peptide groups; a = 0.54 nm is the pitch of the helix; p = 3.6 is the number of peptide groups per helix turn.

5.2. Helix–coil transition

An α-helix is held together by hydrogen bonds between the carbonyl oxygen of one residue and the amide nitrogen of the fourth residue from it. When these H bonds are broken in solution, the helix expands into a random coil; this is the helix–coil transition. The helix–random coil transition is temperature-dependent and is accompanied by an endothermic or exothermic change.34 However, the transitions of helices in the voltage-sensitive transition of an ion channel appear to be more subtle than these drastic transitions.

5.3. Dipole moment of the α-helix

The importance of the α-helix in the voltage sensitivity of ion channels is due to its large electric dipole moment, caused by the summation of the dipole moments of the N–H and C=O bonds. The α-helix was described by Akiyoshi Wada as an electric macro-dipole.35 The dipole moment of the helix is localized in the H–N–C=O group, which has a moment of about 4.4 D almost parallel to the helical axis; see Figure 19.7.

5.4. α-Helix solitons in protein

The equilibrium positions of the peptide groups specified by Equation 5.1 are subject to perturbations by external forces. Since the hydrogen bonds are weak, the forces required to distort the helix are relatively small. For example, the energy released by the hydrolysis of an ATP molecule, about 0.43 eV, is sufficient to excite vibrational excitations of the peptide groups. The energy is transported along the helix by collective excitations. At low energies, these are solitons.36
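The helix geometry can be tabulated directly from the constants given with Equation 5.1. The sequential indexing used here (group 3n + α, advancing 2π/p in angle and a/p along z per group) is the standard α-helix parametrization and is assumed, not quoted from the text:

```python
import math

# Peptide-group positions from the constants given with Eq. (5.1):
# R = 0.28 nm, pitch a = 0.54 nm, p = 3.6 groups per turn.  The sequential
# index m = 3n + alpha (advance 2*pi/p in angle, a/p along z, per group)
# is the standard helix parametrization, assumed here.

R_NM, A_NM, P = 0.28, 0.54, 3.6

def peptide_position(n, alpha):
    """(x, y, z) in nm of unit cell n on chain alpha = 1, 2, 3."""
    m = 3 * n + alpha
    phi = 2.0 * math.pi * m / P
    return (R_NM * math.cos(phi), R_NM * math.sin(phi), A_NM * m / P)

# Consecutive groups rise a/p = 0.15 nm; neighbours within one hydrogen-bond
# chain (m -> m + 3) rise 0.45 nm, close to the 0.54 nm pitch, which is why
# the three chains of Figure 19.6 run nearly parallel to the helix axis.
dz_group = peptide_position(0, 2)[2] - peptide_position(0, 1)[2]
dz_chain = peptide_position(1, 1)[2] - peptide_position(0, 1)[2]
```

Note that the 0.45 nm rise between hydrogen-bonded neighbours is the same length that recurs in the screw-helical discussion of Section 1, since the charged S4 residues are likewise spaced three apart.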
Figure 19.7. The polarity of the α-helix is localized in the H–N–C=O groups. From Wada, 1962.
Energy is required to excite a soliton; it can be transferred without loss as collective excitations in helices of protein molecules. The quasiperiodic structure of an α-helical protein molecule forms the basis of a molecular soliton, the collectivization of vibrational excitations of individual peptide groups. The basic vibrational excitation of the peptide group corresponds to vibrations of the C=O bond. These have an energy of 0.21 eV and a dipole moment of about 0.3 D directed along the chain of hydrogen bonds. By dipole resonance interactions these form collective excitations describable by the nonlinear Schrödinger equation (1.8 of Chapter 18).

Proteins and their secondary structures such as α-helices are frequently represented by ball-and-stick diagrams. These are misleading, however, for two reasons:

1. The atoms move randomly due to thermal effects, so that the diagram only indicates their mean positions.
2. The diagram neglects interactions with the neighboring protein structure.
Nevertheless, these diagrams can be useful in constructing mathematical models of intermolecular motions. The dynamics of the lattice is studied by considering it as a set of masses connected by nonlinear springs.
Alexander S. Davydov suggested a mechanism by which an α-helix can store and transport energy. This polaron-like mechanism is based on stretching oscillations of the C=O bonds along a chain with the structure

··· H–N–C=O ··· H–N–C=O ··· H–N–C=O ··· H–N–C=O ···
that extends along the helix. As Figure 19.6 shows, there are three such chains in an α-helix. The C=O stretching vibrations, also known as Amide-I vibrations, stretch adjacent hydrogen bonds. These local distortions act as potential wells, trapping vibrational energy and preventing its dispersion. The chains interact by dipole–dipole coupling between neighboring C=O oscillators. This coupling is indicated by the letter J on the diagram of Figure 19.8.37 The effective mass of the collective quantum excitations is inversely proportional to the dipole–dipole coupling constant J.
Figure 19.8. A section of helix, with the hydrogen bonds indicated. The letter J labels the dipole–dipole coupling between neighboring C=O oscillators. From “Nonlinear Science: Emergence and Dynamics of Coherent Structures 2/e” by Scott, Alwyn (2003). By permission of Oxford University Press.
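The self-trapping described above can be sketched by integrating the discrete nonlinear Schrödinger equation to which Davydov's model reduces. Everything here is dimensionless and illustrative, a toy of the mechanism rather than his full Hamiltonian:

```python
import math

# Toy integration of the discrete nonlinear Schroedinger equation to which
# Davydov's model reduces (dimensionless sketch, not his full Hamiltonian):
#   i dpsi_n/dt = -J (psi_{n+1} + psi_{n-1}) - G |psi_n|^2 psi_n
# J stands in for the dipole-dipole coupling, G for the exciton-phonon
# (self-trapping) nonlinearity.  Periodic ring of N sites, RK4 stepping.

N, J, G, DT, STEPS = 32, 1.0, 4.0, 0.001, 2000

def deriv(psi):
    return [1j * (J * (psi[(n + 1) % N] + psi[(n - 1) % N])
                  + G * abs(psi[n]) ** 2 * psi[n]) for n in range(N)]

# Start from a normalized sech-shaped amide-I excitation.
psi = [1.0 / math.cosh(0.5 * (n - N // 2)) + 0j for n in range(N)]
s = math.sqrt(sum(abs(c) ** 2 for c in psi))
psi = [c / s for c in psi]

for _ in range(STEPS):
    k1 = deriv(psi)
    k2 = deriv([p + 0.5 * DT * k for p, k in zip(psi, k1)])
    k3 = deriv([p + 0.5 * DT * k for p, k in zip(psi, k2)])
    k4 = deriv([p + DT * k for p, k in zip(psi, k3)])
    psi = [p + (DT / 6.0) * (a + 2 * b + 2 * c + d)
           for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]

norm = sum(abs(c) ** 2 for c in psi)    # conserved total probability
peak = max(abs(c) ** 2 for c in psi)    # localization: well above 1/N
```

With the nonlinearity switched off (G = 0) the pulse disperses; with it on, the local distortion acts as its own potential well, which is the feedback loop between localized vibrational energy and lattice deformation described in the text.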
The motion describing the dispersion of the C=O vibrational energy as a propagated longitudinal sound wave along the helix can be summarized by the feedback process

localized vibrational energy ⇌ longitudinal sound
Within the limitations of thermal motions and external interactions mentioned above, the governing equations predict solitons traveling along the helix, storing and transporting energy. Two types of collective excitations, excitons and solitons, may appear in the soft chain. The excitons, with a group velocity exceeding the speed of longitudinal
sound waves, smear out with time. The dependence of exciton and soliton energies on the relative intrapeptide distance ' is shown in Figure 19.9.38
Figure 19.9. The dependence of the total energy of the exciton and the local deformation, 1, and soliton, 2, on the relative intrapeptide distance '. From Davydov, 1985.
The solitons, moving with a velocity less than that of longitudinal sound, are accompanied by a local deformation of the peptide chain.39 Figure 19.10 shows the dependence of the soliton and exciton energies on velocity. The calculated exciton energy below V0 is metastable because it is larger than the soliton energy. Since the solitons move with a velocity less than that of longitudinal sound, they do not emit phonons, so that their kinetic energy is not transformed into heat energy. The soliton is therefore assumed to be stable. The three parallel chains of hydrogen-bonded peptide groups can lead to the formation of three types of solitons. The symmetrical soliton corresponds to a synchronous transfer of local excitations along the three chains. In symmetrical solitons the excitation region shows an increase in helix diameter and a decrease between neighboring peptide groups. The other two types of solitons are nonsymmetric in that local excitations are transferred with phase shifts between the three chains. In one of these types, local excitations are transferred along only two of the chains; this type of soliton is the most stable, with the lowest internal energy. Solitons in proteins cannot be excited by light, but can be excited by local effects such as the hydrolysis of an ATP molecule at the edge of the molecule. 5.5. Temperature effects in Davydov solitons The theoretical equations of Davydov were confirmed in numerical studies by J. M. Hyman, D. W. McLaughlin and Alwyn C. Scott.40 However, controversies remain, particularly regarding the stability of Davydov solitons at physiological temperatures. The Davydov soliton can be delocalized by both quantum fluctuations and thermal interactions with the surrounding heat bath. Because of the lack of an exact theory,
Figure 19.10. Dependence of soliton energy (1) and exciton energy (2) on their velocities. Dashed line (3) represents metastable excitons. Soliton speeds are less than the longitudinal sound velocity V0. From Davydov, 1985.
predictions are based on different approximations, which “should be viewed as but a pale reflection of the real alpha-helix.”41 Real helices differ from these models by having variable sidechains. For the purpose of explaining excitability in voltage-sensitive ion channels, however, a stable soliton is not necessary. Since individual channels are known to open stochastically and then close after a brief lifetime of only a few milliseconds, a metastable soliton is quite adequate for their description. M. Sataric, Z. Ivic and R. Zakula found a threshold behavior for localized excitons in a linear α-helix chain. The robust localized waves that couple amide-I vibrations to longitudinal sound were analyzed by a quantum-mechanical finite-temperature approach. Using a variational treatment, Sataric et al. calculated the temperature dependence of exciton–phonon coupling for the formation of a Davydov soliton, with results as shown in Figure 19.11.42 In the figure, χ is the exciton–phonon coupling constant, which relates the localized vibrational energy to the distortion of the helix; y a variational parameter that depends on temperature; the chain elasticity constant; M the effective mass of the soliton; J the dipole–dipole interaction energy between adjacent peptide groups, and T the temperature. The results shown in the figure indicate that a Davydov soliton can exist at biologically relevant temperatures when χ is in the range of 77 to 110 piconewtons. One may speculate as to the possible relation between these limits and the heat block and cold block found in axons, but further research is needed.
SCREWS AND HELICES
Figure 19.11. Threshold temperature dependence for the formation of a Davydov soliton in a quasi-one-dimensional helix. From Sataric et al., 1990.
6. ALPHA HELICES IN VOLTAGE-SENSITIVE ION CHANNELS

Let us apply our discussion of the α-helix to the membrane-spanning segments of voltage-sensitive ion channels.

6.1. The α-helical framework of ion channels

Robert H. Spencer and Douglas C. Rees review the α-helical structure of a number of channels whose molecular organization has been determined by x-ray crystallography, including the K+ channel KcsA, the mechanosensitive channel MscL and members of the aquaporin family. Their discussion is based on the assumption that the helices “create the sealed barrier that separates the hydrocarbon region of the bilayer from the permeation pathway for solutes.” Ion conductance depends on the geometry and energetic profile of the permeation pathway. Non-membrane-spanning structures supported by the helical framework are associated with ion selectivity, while
understanding of the conformational sensitivity to external influences such as voltage remains a challenge.43 In Figure 19.12, Spencer and Rees define a tilt angle measured from the membrane normal, which lies along the z direction. This angle is the same as the polar angle of Figure 17.10.
Figure 19.12. Definition of tilt angle from the membrane normal in the z direction. From Spencer and Rees, 2002.
The central region of the membrane, within 1.0 nm of the midplane, is occupied by residues that are apolar and uncharged, with the charged residues mainly in the regions near the surface. Figure 19.13 is a histogram of the tilt angle for 76 membrane-spanning helices of the 15 proteins studied. When the direction of the polypeptide chain across the membrane is ignored, the tilt angles average 23° ± 10°. It is remarkable that the value of 23° ± 10° found by Spencer and Rees (Figure 19.13) to be the average tilt angle of a group of ion channels matches the value of 22.5° said by Katsumi Yoshino and Takao Sakurai to be the appropriate tilt angle in surface-stabilized liquid crystals.44 This is significant in that it suggests that the ion channels are surface-stabilized; see Section 6.4 of Chapter 17. Surface stabilization of the ion channel could well be a major function of the polypeptide loops between the membrane-spanning subunits.

6.2. Channel gating as a transition in an α-helix

Klaus Benndorf in 1989 proposed a gating model for Na channels, in which an α-helix undergoes a phase transition similar to the helix–coil transition. Strong electric fields can induce helix formation from a random peptide coil. In the model, space for a “watery” pore is provided by stretching the helical S4 segment to a strand with a net dipole moment close to zero. The S4 segment of domain IV of the Electrophorus electricus Na channel contains 20 residues, seven of them positively charged, as shown in Figure 19.14.45
Figure 19.13. Histogram of the tilt angle of helix axes of ion channels. The N-terminals of helices with tilt angles of 0 to 90° are inside and those with angles 90° to 180° are outside. From Spencer and Rees, 2002.
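The direction-folding convention behind this histogram, in which a helix tilted between 90° and 180° is treated as equivalent to its supplement, can be sketched in a few lines. The angle list below is hypothetical, for illustration only; it is not the data for the 76 helices surveyed by Spencer and Rees:

```python
import statistics

def fold_tilt(angle_deg):
    """Fold a helix tilt angle into [0, 90] degrees, ignoring the direction
    of the polypeptide chain: a helix tilted 150 deg from the membrane
    normal is geometrically equivalent to one tilted 30 deg."""
    return angle_deg if angle_deg <= 90.0 else 180.0 - angle_deg

def tilt_statistics(angles_deg):
    """Mean and sample standard deviation of direction-folded tilt angles."""
    folded = [fold_tilt(a) for a in angles_deg]
    return statistics.mean(folded), statistics.stdev(folded)

# Illustrative, hypothetical angles (degrees from the membrane normal)
angles = [12.0, 25.0, 160.0, 31.0, 155.0, 20.0]
mean, sd = tilt_statistics(angles)
print(f"mean tilt {mean:.1f} deg, sd {sd:.1f} deg")
```

With the real survey data this procedure yields the 23° ± 10° average quoted above.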
The transition to a channel helix in a depolarized membrane is expected in the Benndorf model to move half of the 20 amino acid sidechains to the bulk solution. A helical strand of water molecules forms a hydrophilic pathway for ions, and the dipole moments of the peptide bonds cancel in the axial direction.
6.3. Water in the channel—again? In both the above models, water is assumed to be necessary for ion translocation through the channel. We have seen in Chapter 6, Section 7 that rapid ion transport is possible in condensed state media without water as a solvent. In the following chapter, we will develop a model of ion transfer that does not require an aqueous pathway.
Figure 19.14. An α-helix with seven positively charged residues, as in the S4 segment of domain IV of an Na channel. The hemispheres around the charges represent polarized regions of screening by counterions in water. From Benndorf, 1989.
NOTES AND REFERENCES

1. H. R. Guy and P. Seetharamulu, Proc. Natl. Acad. Sci. USA 83:508-512, 1986.
2. W. A. Catterall, Trends Neurosci. 9:7-10, 1986.
3. Richard D. Keynes and Fredrik Elinder, Proc. R. Soc. Lond. B 266:843-852, 1993. In kindly granting his permission to reproduce the figure on page 849, Professor Richard Keynes suggested noting that “it is a greatly simplified diagram of the basic principle of voltage-gating, but that in practice there are important movements of the electric field also playing a part.”
4. Chris S. Gandhi and Ehud Y. Isacoff, Handbook of Cell Signaling, Vol. 1, Elsevier Science, 2003, 209-214.
5. S. R. Durell and H. R. Guy, Biophys. J. 62:238-247, 1992.
6. R. P. Feynman, R. B. Leighton and M. Sands, The Feynman Lectures on Physics, vol. I, Addison-Wesley, Reading, Mass., 1964, 18-3.
7. Giorgio Careri, Order and Disorder in Matter, Benjamin/Cummings, 1984, 115-137.
8. B. Nilius, J. Owsianik, T. Voets and J. A. Peters, Physiol. Rev. 87:165-217, 2007.
9. R. Blumenthal, J. P. Changeux and R. Lefever, J. Membrane Biol. 2:351-374, 1970; I. Prigogine and G. Nicolis, in From Theoretical Physics to Biology, M. Marois, editor, S. Karger, Basel, 1973, 89-109.
10. S. Misler, L. Falke, K. Gillis and M. L. McDaniel, Proc. Natl. Acad. Sci. USA 83:7119-7123, 1986.
11. Larry S. Liebovitch and Tibor I. Tóth, Ann. N. Y. Acad. Sci. 591:375-391, 1990.
12. A. Bune, S. Ducharme, V. Fridkin, L. Blinov, S. Palto, N. Petukhova and S. Yudin, Appl. Phys. Lett. 67:3975-3977, 1995; J. Choi, P. A. Dowben, S. Pebley, A. V. Bune, S. Ducharme, V. M. Fridkin, S. P. Palto and N. Petukhova, Phys. Rev. Lett. 80:1328, 1998; A. V. Bune, V. M. Fridkin, S. Ducharme, L. M. Blinov, S. P. Palto,
A. V. Sorokin, S. G. Yudin and A. Zlatkin, Nature 391:874-877, 1998; H. Qu, W. Yao, J. Zhang, S. Ducharme, P. A. Dowben, A. V. Sorokin and V. M. Fridkin, Appl. Phys. Lett. 82:4322, 2003; K. A. Verkhovskaya, A. S. Ievlev, A. M. Lotonov, N. D. Gavrilova and V. M. Fridkin, Physica B 368:105, 2005.
13. Reprinted with permission from I. Ermolina, A. Strinkovski, A. Lewis and Y. Feldman, J. Phys. Chem. B 105:2673-2676. Copyright 2001 American Chemical Society. See also Y. Feldman, I. Ermolina and Y. Hayashi, IEEE Trans. Dielectrics and Electrical Insulation 10:728-753, 2003.
14. Jack A. Tuszynski and Michal Kurzynski, Introduction to Molecular Biophysics, CRC, Boca Raton, 2003, 180-187, 388.
15. J. A. Tuszynski, J. A. Brown and P. Hawrylak, Phil. Trans. R. Soc. Lond. A 356:1897-1926, 1998.
16. J. A. Brown and J. A. Tuszynski, Ferroel. 220:157-204, 1999; Tuszynski and Kurzynski, 187, 388.
17. Alan J. Hodge and William J. Adelman, Jr., in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 75-111.
18. J. Metuzals, D. F. Clapin and I. Tasaki, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 53-73.
19. H. M. Fishman, Biophys. J. 35:249-255, 1981.
20. Gen Matsumoto, Hiromu Mirofushi, Sachiko Endo, Takaaki Kobayashi and Hikoichi Sakai, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 471-483.
21. Roger Penrose, Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University, Oxford, 1994.
22. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999.
23. M. E. Lines and A. M. Glass, Principles and Applications of Ferroelectrics and Related Materials, Clarendon, Oxford, 1977.
24. A. J. Lovinger, Science 220:1115-1121, 1983.
25. J. W. Goodby, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc, N. A. Clark, S. T. Lagerwall, M. A. Osipov, S. A. Pikin, T. Sakurai, K. Yoshino and B. Zeks, Gordon and Breach, 1991, 99-247.
26. See reference 12.
27. B. T. Matthias, in From Theoretical Physics to Biology, edited by M. Marois, Karger, Basel, 1973, 12-21.
28. C. L. Wang and S. R. P. Smith, J. Phys. Condensed Matter 7:7163-7171, 1995.
29. V. S. Bystrov and H. R. Leuchtag, Ferroel. 155:19-24, 1994.
30. F. Fröhlich, in Cooperative Phenomena, edited by H. Haken and M. Wagner, Springer-Verlag, New York, 1973, vii-xii.
31. C. L. Wang and S. R. P. Smith, op. cit.
32. A. C. Scott, in Nonlinear Excitations in Biomolecules, edited by M. Peyrard, Springer, Berlin, and Les Editions de Physique, Les Ulis, 1995, 249-268.
33. A. S. Davydov, Solitons in Molecular Systems, D. Reidel, Dordrecht, 1985, 2. With kind permission of Springer Science and Business Media.
34. I. Singer and I. Tasaki, in Biological Membranes: Physical Fact and Function, volume 1, edited by Dennis Chapman, Academic, London, 1968, 347-410.
35. A. Wada, in Polyamino Acids, Polypeptides, and Proteins, edited by Mark A. Stahmann, 131-146. Reprinted by permission of The University of Wisconsin Press, Madison, copyright 1962; A. Wada, Adv. Biophys. 9:1-63, 1976.
36. Davydov, 1985, 1-23.
37. Alwyn Scott, Nonlinear Science: Emergence and Dynamics of Coherent Structures, Second Edition, Oxford University, 2003, 202.
38. Davydov, 1985, 292. With kind permission of Springer Science and Business Media.
39. A. S. Davydov, in Bioelectrodynamics and Biocommunication, edited by Mae-Wan Ho, Fritz-Albert Popp and Ulrich Warnke, World Scientific, Singapore, 1994, 411-430.
40. J. M. Hyman, D. W. McLaughlin and A. C. Scott, Physica D 3:23-44, 1981.
41. Davydov’s Soliton Revisited: Self-Trapping of Vibrational Energy in Protein, edited by Peter L. Christiansen and Alwyn C. Scott, Plenum, New York, 1990, 245-250.
42. M. Sataric, Z. Ivic and R. Zakula, in Davydov’s Soliton Revisited: Self-Trapping of Vibrational Energy in Protein, edited by Peter L. Christiansen and Alwyn C. Scott, Plenum, New York, 1990, 295-308. With kind permission of Springer Science and Business Media.
43. Reprinted, with permission, from Robert H. Spencer and Douglas C. Rees, Annu. Rev. Biophys. Biomol. Struct. 31:207-233, 2002.
44. K. Yoshino and T. Sakurai, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, J. W. Goodby, R. Blinc, N. A. Clark, S. T. Lagerwall, M. A. Osipov, S. A. Pikin, T. Sakurai, K. Yoshino and B. Zeks, editors, Gordon and Breach, 1991, 317-363.
45. K. Benndorf, Eur. Biophys. J. 17:257, 1989. With kind permission of Springer Science and Business Media.
CHAPTER 20
VOLTAGE-INDUCED GATING OF ION CHANNELS
In Chapter 16 we introduced the proposal that membrane excitability is based on ferroelectric properties in membrane molecules. Smectic liquid crystals, we saw in Chapter 17, are capable of exhibiting ferroelectricity when chiral molecules are tilted with respect to the layer normal. Here we explore this proposal, extending it to the molecular level. We saw in Chapter 19 that, while the size of the domain is indeed a limiting factor, voltage-sensitive ion channels are large enough to be capable of sustaining a ferroelectric domain, and that electric fields and temperature variations can switch this ferroelectric phase on and off. We will explore ways in which this phase transition can alter the ion conductance of the channel. 1. ION CHANNEL: A FERROELECTRIC LIQUID CRYSTAL? We have already emphasized the role of mesophases in living systems. Lyotropic mesogens frequently contain anions and cations, which may be mobile, hopping from site to site. Observed electrical, mechanical and optical properties of voltage-sensitive ion channels possess many similarities to those of ferroelectric materials. In particular, the transitions of ion channels are similar to phase transitions with the closed, nonconducting, configuration viewed as ferroelectric and the open, ion-conducting configuration as nonpolar. To bring the analogy into conformity with the fact that the channel is a membrane component, the ferroelectric behavior must be that of a ferroelectric liquid crystal, with the properties of chirality and tilt. 1.1. Electroelastic model of channel gating Structure–function relationships in biomembranes depend on the interaction of the molecular architecture with its local environment. This relationship is essential for the triggering of membrane processes by local phase transitions, as was recognized for the lipid moiety in 1984 by Erich Sackmann. Conformational changes may be induced chemically by the absorption of molecules or ions from the bulk phases. 
Sackmann’s model attributes the electrical triggering of ion channels to a conformational change involving the tilt of a lipid–protein aggregate. His electroelastic model assumes the aggregate to be in a tilted configuration when the electric field is
low and untilted when it is high. Its relaxation from a parallel to a tilted configuration opens a pore, as shown in Figure 20.1.1
Figure 20.1. The electroelastic model of ion channel gating. Top: In the low field of the depolarized membrane, lipid–protein aggregates relax and tilt. Bottom: At high field, elastic torque closes the pore. From E. Sackmann, 1984.
The channel closes when the field balances the elastic torque causing the tilt. It is interesting that this early model recognizes the importance of tilt changes induced by a depolarization. Although the model depends on the structural pore concept, which is criticized in Chapter 14, it suggests the possibility that a field-dependent change in tilt angle may alter ion-conduction properties. The recognition of elastic restoring forces in the channel is another important feature of the model. Since voltage-sensitive ion channels are large molecular networks, they may be expected to possess rubber-like elasticity. The superionic conducting properties of such materials, elastomers, were mentioned in Chapter 6, and in Chapter 18, Section 4.1, we saw that chiral smectic elastomers exhibit unsymmetrical switching behavior.

1.2. Cole–Cole curves in a ferroelectric liquid crystal

Let us continue our discussion of the voltage-sensitive ion channel as a ferroelectric liquid crystal. Since the impedance of excitable membranes displays Cole–Cole
characteristics (see Chapter 11, Section 5.1), we will examine relaxation data in ferroelectric liquid crystals. The distribution of relaxation times of a molecular system may be studied by plotting the components of the dielectric permittivity ε*(ω) in a Cole–Cole plot. If ε*(ω) is characterized by a single relaxation time, a circular arc is obtained with the center on the real axis; when the relaxation times are distributed, the center lies below the real axis. Cole–Cole plots have been obtained for a number of ferroelectrics.2 For example, the ferroelectric liquid crystal system FFP, 3-octyloxy-6-[2-fluoro-4-(2-fluorooctyloxy)phenyl]pyridine, was studied by Wrobel and collaborators in the laboratory of W. Haase.3 Its phase diagram shows three transitions on heating.
FFP has a first-order phase transition at 45.9°C from a ferroelectric chiral smectic C phase to a paraelectric chiral nematic phase. In this transition, the relaxation frequency of the soft mode below Tc appeared to be temperature independent. The high-frequency (10 MHz–10 GHz) relaxation process connected with reorientation around the long molecular axis was practically undisturbed at the chiral nematic–smectic C transition. The complex dielectric permittivity data at low frequency, 10 Hz–10 MHz, showed that the critical frequencies exhibit discontinuities, indicating a first-order, discontinuous, transition. The Goldstone-mode dielectric increment, tilt angle and spontaneous polarization all exhibit jumps at the transition from the chiral smectic C to the paraelectric N phase. The ratio of the slopes of the graph of log frequency vs. temperature is not −2, as predicted by the mean-field model for second-order transitions. Wrobel et al. observed a hysteresis in the bias-voltage dependence of the Goldstone-mode dielectric strength.

1.3. A voltage-sensitive transition in a liquid crystal

Wrobel and colleagues found the Goldstone mode in FFP to be voltage-dependent at low fields, but suppressed at fields greater than about 1.2 × 10⁶ V/m, where it is replaced by a new mode, called the domain mode. (Note that this field is of the same order of magnitude as the field across an excitable membrane at its resting potential.) The dielectric data for FFP at these higher bias voltages were fitted by the sum of two Cole–Cole functions yielding two relaxation times,

ε*(ω) = ε⊥∞ + ΔεS/[1 + (iωτS)^(1−hS)] + ΔεD/[1 + (iωτD)^(1−hD)],   (2.1)

where ε⊥∞ is the high-frequency limit of the dielectric permittivity perpendicular to the
director, connected with molecular processes; ΔεS and ΔεD are the soft-mode and domain-mode dielectric increments; τS and τD are the respective dielectric relaxation times; and hS and hD are distribution parameters for the soft and domain modes. This function describes a biased reorientation of the molecules in the chiral smectic C phase. A transition was recorded in the region of dc bias from 12.0 to 12.5 V, in which the Goldstone mode is practically suppressed and the domain mode appears; see Figure 20.2.
Figure 20.2. Dielectric permittivity of FFP as functions of frequency and as Cole–Cole curves. Left: Goldstone mode dielectric spectrum for the chiral smectic C phase at a bias voltage of 12.0 V across the 10 μm thick sample. Right: Domain mode and soft mode dielectric spectra at 12.5 V. From Wrobel et al., 1995.
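A two-term Cole–Cole function of the form used in such fits is easy to evaluate with complex arithmetic. The sketch below is generic; the parameter values are placeholders chosen only to illustrate the shape of the spectra, not fitted values from Wrobel et al.:

```python
import math

def cole_cole_term(omega, d_eps, tau, h):
    """One Cole-Cole relaxation term, d_eps / (1 + (i*omega*tau)**(1 - h)).
    h = 0 recovers a pure Debye relaxation (semicircular arc)."""
    return d_eps / (1.0 + (1j * omega * tau) ** (1.0 - h))

def permittivity(omega, eps_inf, soft, domain):
    """High-frequency limit plus soft-mode and domain-mode Cole-Cole terms.
    soft and domain are (d_eps, tau, h) tuples."""
    return (eps_inf
            + cole_cole_term(omega, *soft)
            + cole_cole_term(omega, *domain))

# Placeholder parameters, for illustration only
eps_inf = 3.0
soft = (5.0, 1e-5, 0.1)      # increment, relaxation time (s), distribution
domain = (20.0, 1e-3, 0.2)

# Plotting eps' against -eps'' traces out the two arcs of a Cole-Cole plot
for f_hz in (1e1, 1e3, 1e5):
    eps = permittivity(2 * math.pi * f_hz, eps_inf, soft, domain)
    print(f"{f_hz:8.0e} Hz  eps' = {eps.real:6.2f}  eps'' = {-eps.imag:6.2f}")
```

At low frequency the real part approaches the sum of the increments plus the high-frequency limit; at high frequency only the high-frequency limit survives.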
2. ELECTRIC CONDUCTION ALONG THE ALPHA HELIX

Since the membrane-spanning segments of voltage-sensitive ion channels are α-helices, let us consider the question: Can ions travel along an α-helix? To look at simple cases first, we will begin with the transfer of electrons by solitons, then move on to protons and larger ions.

2.1. Electron transfer by solitons

Molecular structures called electron transfer chains transfer electronic charge across membranes in photosynthesis and cell respiration. However, the movement of electrons over distances of 3-7 nm, with donor and acceptor separated by many groups of atoms, cannot be realized by a simple tunneling mechanism. In the donor–acceptor model, deformations of the protein molecules are assumed to facilitate the transfer process. The strong coupling between an electron and the local molecular displacement can be described by a soliton wave function.
In the soft chain of an α-helix, the electron motion becomes stabilized. Each of the peptide groups forming the three quasiperiodic chains possesses a constant dipole moment of about 3.5 D. Figure 19.6 shows the arrangement of dipolar peptides connected by hydrogen bonds in the chains I, II and III. The overlapping of electron wave functions in neighboring potential wells of the same chain provides a conduction band of electron energies, allowing for three separate conduction pathways. Thus the α-helix serves as a bridge for electron transfer across a membrane.

2.2. Proton conduction in hydrogen-bonded networks

Proton conductivity in solid alcohols and carbohydrates is three orders of magnitude higher along one-dimensional chains of hydrogen bonds than in perpendicular directions. Ice also has a high proton conductivity due to the transfer of protons along hydrogen bonds in one-dimensional chains. In chains of water molecules linked by hydrogen bonds, ionic defects facilitate the transfer of protons. Chains of hydroxonium ions, H3O+, and hydroxyl ions, OH-, formed by the dissociation of water, allow protons to be transferred from one potential well to another by rotations of water molecules; see Figure 20.3.4
Figure 20.3. Proton transfer by rotation of water molecules. From Davydov, 1985.
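The hop-and-turn relay just described can be caricatured in a few lines: a chain of water-like sites passes a proton down the line, after which every site must rotate back to its receptive orientation before another proton can pass. Everything here is schematic, with sites as booleans rather than molecules:

```python
def relay_proton(chain):
    """Pass an excess proton down a hydrogen-bonded chain. Each element is
    True while that site is in its receptive orientation. Returns the number
    of site-to-site hops, or None if a mis-oriented site blocks the relay."""
    if not all(chain):
        return None
    for i in range(len(chain)):
        chain[i] = False      # each site is left mis-oriented after transfer
    return len(chain) - 1

def rotate_back(chain):
    """Collective rotation of the dipolar groups regenerates the wire."""
    for i in range(len(chain)):
        chain[i] = True

wire = [True] * 6
print("first transfer, hops:", relay_proton(wire))
print("second transfer:", relay_proton(wire))   # blocked until rotation
rotate_back(wire)
print("after rotation, hops:", relay_proton(wire))
```

The toy model reproduces the essential two-step kinetics: conduction alternates between fast hopping and slower reorientation of the chain.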
Since α-helices also contain chains of connected hydrogen bonds, they may conduct protons by a similar mechanism. Polarizable hydrogen bonds interact with phonons and with transverse electromagnetic modes called polaritons in their environments. Proton transfer processes in hydrogen-bonded structures have been investigated in bacteriorhodopsin, the F0 complex of ATP synthases,5 aspartic proteinases6 and serine proteases.7

2.3. Dynamics of the alpha helix

The dynamical theory of Davydov’s soliton is complex and beyond the scope of this book. Nevertheless, it is clear that the limits of stability of a soliton depend on the spatial dimensions of the amide groups, their effective mass and the temperature. Standard sets of parameters for α-helices are frequently used, but they are not necessarily applicable to the helical segments of voltage-sensitive ion channels. In particular, the case of electrically charged sidechains appears not to have been investigated.
Figure 20.4. Logarithm of polaron effective mass as a function of adiabaticity B(T) and coupling strength S(T). From Brown, Lindenberg and Wang, 1990.
David W. Brown, Katja Lindenberg and Xidi Wang have discussed the relationship between a soliton and a polaron. A polaron is a quasiparticle excitation that maintains persistent correlations with the deformation or polarization quanta of a solid. The polaron may well qualify as a quantum soliton. A Davydov soliton may be a form of a polaron in the adiabatic limit, a process so rapid that heat transfer from the environment is negligible.8 Brown and colleagues describe the phase transition characteristics of the soliton in terms of two temperature-dependent control parameters, B(T) and S(T), where B(T) is a measure of adiabaticity and S(T) is a measure of coupling strength. The theory of Brown and Zoran Ivic gives the polaron effective mass in terms of these dimensionless parameters. Figure 20.4 schematically shows the catastrophe-theory folding of the mass as a manifestation of a self-trapping transition. The dashed lines indicate the boundaries of the mass catastrophe. Could a relationship exist between these transitions and the catastrophe models in Chapters 9 and 16?

3. ION EXCHANGE MODEL OF CONDUCTION

We now know that the representation of the open–close transition of voltage-sensitive ion channels as analogous to the movements of macroscopic objects such as sliding gates, rigid screws or paddles is inadequate; see Chapters 14, 19 and 21. It is unrealistic at the
molecular scale and does not explain many observed responses of the channel. Instead, we must focus on the lengths and angles of the interatomic bonds of the channel molecule.

3.1. Expansion of H bonds and ion replacement

One such clue is the size change of the channel in the dimension perpendicular to the membrane plane. Indications of this outward extension have been observed, optically and mechanically, in squid giant axon by Iwasa and Tasaki as a transient membrane swelling, synchronous with the action potential, of about 1.0 nm; see Chapter 4.9 The open–close transition of voltage-sensitive ion channels has been linked experimentally to dimensional changes in the channel. Outward movements of the S4 segments upon depolarization were demonstrated by fluorescence measurements and by cysteine-modifying reagents applied to arginines mutated to cysteines.10 Because of the links and interactions between the segments, it seems likely that other segments join in this motion, so that the entire channel molecule changes shape as a cooperative unit.11 These results are consistent with the rotation of the long axes of channel segments toward the membrane normal, or the widening of the H bonds, or both. An increase in the overall length of the channel suggests a proportionate increase in the lengths of the interloop distances of the membrane-spanning helical segments. If these measurements demonstrate a change in channel length, the membrane-spanning segments must share this length change, and it must be distributed among the longitudinal bonds of the segments. Since we have seen that the segments are most likely helical, we can conclude that the hydrogen bonds connecting the loops of the helix must change their lengths, since the H-bonds are the weakest bonds of the helix. To simplify the discussion, we will make the assumption that all the H-bond lengths change by the same fraction, Δℓ/ℓ, where ℓ is the length of the H-bond.
This fraction must equal the fractional length change of the segment and of the channel itself, ΔL/L.
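Under this uniform-stretch assumption the arithmetic is simple: the fractional H-bond extension equals the fractional channel extension. The numbers below are illustrative assumptions, a nominal 5 nm membrane-spanning length combined with the ~1.0 nm swelling reported by Iwasa and Tasaki, and a typical 0.29 nm N⋯O hydrogen-bond distance, not measurements from this chapter:

```python
def h_bond_stretch(channel_length_nm, channel_extension_nm, h_bond_length_nm):
    """Uniform-stretch assumption: every H-bond changes length by the same
    fraction as the channel as a whole (delta_l / l = delta_L / L)."""
    fraction = channel_extension_nm / channel_length_nm
    return fraction, fraction * h_bond_length_nm

# Illustrative values: 5 nm span, 1.0 nm swelling, 0.29 nm N...O bond
fraction, delta_l = h_bond_stretch(5.0, 1.0, 0.29)
print(f"fractional stretch {fraction:.0%}, per-bond extension {delta_l:.3f} nm")
```

With these assumed numbers the per-bond extension comes out near 0.06 nm, the same order of magnitude as the 0.05 nm ion displacement discussed in Section 3.2.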
Thus, the H-bonds must expand by the same relative fraction as the channel, about 20%. This length change may result in a change in the chemical affinity of the bond; a wider bond may become occupied by a larger atom than the hydrogen atom; see Section 3.3 of this chapter.

3.2. Can sodium ions travel across an alpha helix?

Studies by J. F. Nagle and co-workers12 and by A. K. Dunker and D. A. Marvin13 suggest that chains of hydrogen bonds may play a part in ion transport through biological membranes. P. Yager14 and H. David Chandler and collaborators15 proposed transport of ions other than protons by traveling dislocations in α-helices. P. Th. Van Duijnen and B. T. Thole carried out molecular orbital calculations for a model α-helix under the self-consistent field approximation. Their results suggest that sodium ions cannot travel across an α-helix with standard bond angles and
distances, due to Na–C repulsion. However, this repulsion is a very steep function of interatomic distance, which would be increased by the electrostatically driven expansion proposed in Section 3.1 of this chapter. A trial calculation showed that moving the Na+ by only 0.05 nm reduced the repulsion to zero, and that positive ions such as Na+ could pass through a “channel” of low electronic charge density if not carrying a solvent molecule. The internal electric field of an α-helix is an important factor in ion transport.16
3.3. Relay mechanism

A relay mechanism has been proposed for the interaction of permeant ions with the channel protein molecule.17 It has been suggested that membrane-spanning segments in the open sodium channel are coordinated by mobile Na+ ions in transition across the channel. Accordingly, the Na channel is a metalloprotein requiring a metal ion (Na+, Li+, K+, ...) to exist in its open state. In this view, the selectivity of the channel depends not only on the size and shape of the ion, but on the strength of its binding to the nucleophilic site; see the discussion of organometallic receptors in Chapter 14. The movement of protons through water depends on the fact that water molecules form a continuous hydrogen-bonded network. A proton associated with one oxygen atom can break its covalent bond and reassociate with a neighboring oxygen atom to which it had been only hydrogen bonded. Within the context of the pore model, the relay mechanism based on this type of exchange is unique, as it is taken only to apply to the water in the pore. However, the relay mechanism can arise in any H-bonded network, such as the α-helix. That a relay mechanism for the interaction of ions with a protein molecule is a reality was demonstrated in the experiments of Georg Zundel and his collaborators, who observed compounds with lengthened H-bonds in which the H ion is replaced by Li, Na, K and other ions. An α-helix is held together by hydrogen bonds between the carbonyl oxygen of one residue and the nitrogen of the fourth residue from it. These bonds connect neighboring loops of the helix. Zundel and collaborators have observed that the hydrogen ions that bind electronegative atoms in a hydrogen bond can be replaced by metal ions such as Li+ and Na+, leading to a longer bond.18 In the hydrogen bond connecting the loops of an α-helix, the H is usually closer to the nitrogen atom than to the oxygen, but it has an alternate position closer to the O.
The interatomic proton potential can be illustrated as a double well, deeper on the N side, with discrete energy levels. Zundel, B. Brzezinsky and J. Olejnik have shown that when the H bond becomes long enough, it can accommodate a lithium, sodium or potassium ion.19 The upper two diagrams of Figure 20.5 show intramolecular O⁻Li⁺⋯O ⇌ O⋯Li⁺O⁻ bonds. Bonds such as these, in which the donor and acceptor are of the same type, are called homoconjugated. The O⁻Li⁺⋯N ⇌ O⁻⋯Li⁺N bonds (labeled 15) are heteroconjugated.
Figure 20.5. Homoconjugated (top) and heteroconjugated bonds (bottom). From Zundel et al., 1993.
Figure 20.6 shows homoconjugated O⁻Na⁺⋯O ⇌ O⋯Na⁺O⁻ bonds. Experiments show that Li, Na and K bonds are longer than H bonds.20
Figure 20.6. Homoconjugated O⁻Na⁺⋯O ⇌ O⋯Na⁺O⁻ bonds. From Zundel et al., 1993.
For hydrogen-bonded systems, large polarizabilities due to collective motion of Li+, Na+ and Be2+ ions were observed in hydrogen-bonded chains. Cation polarizabilities due to collective cation motion were also observed in systems with two Li+ or Na+ bonds. Studies of NK⁺⋯N ⇌ N⋯K⁺N bonds showed that the K+ polarizability of the K+ bonds is much smaller than that of the Na+ bonds; this is due to the larger mass of the K+ cations.21 Proton polarizabilities decrease significantly if the hydrogen bonds are polarized in an electric field. Both homoconjugated and heteroconjugated hydrogen bonds show large polarizabilities. The polarizability of the H bond in the H5O2+ group varies as a function of field strength and of the energy difference between the minima of the position-dependent potential, a measure of the asymmetry induced in the potential by the external electric field. The O–O distance is 0.26 nm.22
Figure 20.7. Discrete energy levels of a proton in a heteroconjugated hydrogen bond. The proton potential curve changes shape with proton transfer. From Zundel, 1983.
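The asymmetric double-well picture can be illustrated numerically. The quartic-plus-linear potential below is a generic stand-in for the proton potential of a heteroconjugated bond, not Zundel's computed surface; the linear term tilts the potential so that the minimum on the N side is the deeper one, as in Figure 20.7:

```python
def proton_potential(x, a=1.0, b=1.0, c=0.15):
    """Generic asymmetric double well V(x) = a*(x**2 - b**2)**2 + c*x.
    x < 0 is taken as the N side of the bond; the linear term c*x makes
    the N-side minimum deeper (an assumed, schematic parameterization)."""
    return a * (x * x - b * b) ** 2 + c * x

def find_minima(step=1e-3, span=2.0):
    """Locate local minima of the double well by a simple grid search."""
    n = int(2 * span / step)
    xs = [-span + i * step for i in range(n + 1)]
    vs = [proton_potential(x) for x in xs]
    return [(xs[i], vs[i]) for i in range(1, n)
            if vs[i] < vs[i - 1] and vs[i] < vs[i + 1]]

wells = find_minima()
for x, v in wells:
    side = "N side" if x < 0 else "O side"
    print(f"minimum at x = {x:+.3f} ({side}), V = {v:+.4f}")
```

The grid search finds exactly two minima, with the N-side well lower in energy; the actual discrete proton levels would come from solving the Schrödinger equation in this potential.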
Potential curves are plots of potential energy as a function of the separation of atoms. The proton can have only discrete energy values, as shown in Figure 20.7.23 M. Eckert and Zundel carried out an ab initio self-consistent-field calculation of the proton potential in Br–H⋯N ⇌ Br⁻⋯H⁺N bonds. In the unequal double minimum, the proton potential well at the N atom is slightly deeper than that at the Br. Figure 20.8 shows the calculated proton polarizability as a function of the electric field, with temperature as a parameter.24 Note that the field scale is comparable to the fields across excitable membranes. A chain of hydrogen bonds formed by sidechains of amino acids in a protein, with its potential exhibiting multiple minima, exhibits a proton polarizability much larger than that of a single hydrogen bond. The polarizability increases linearly with the number of minima in the chain. Thus, charge will flow readily from one end of the chain to the other under the influence of even a weak electric field. After a proton is removed from one end, the chain must be regenerated by collective rotation of dipolar groups. Such a chain is called a proton wire. The characteristics of a proton wire are highly dependent on the presence of cations such as Li+, Na+ and K+.25

3.4. Metal ions can replace protons in hydrogen bonds of ion channels

Applying these observations to voltage-sensitive ion channels, we postulate that channel opening involves the stretching of bistable H-bonds and the replacement of H+ by other ions available in the aqueous environment. The selectivity will depend on the details of the channel’s molecular structure. Because these ions are loosely bound to their sites, their threshold for hopping from site to site must be low, making them permeant ions.
Figure 20.8. Proton polarizability of a heteroconjugated bond at four temperatures as a function of electric field. From Zundel, 2000.
Thus, as the H bonds in an ion channel stretch during a depolarization, the hydrogen ions may become mobile by collective proton motion along a one-dimensional chain,26 and be replaced by the permeant metal ions at the same sites.27 The membrane-spanning segments of a voltage-gated ion channel in a resting membrane are pictured as bent α-helices with a kink at the proline residue. On depolarization to threshold, the forces stabilizing the α-helices are overcome by changes in repulsive forces between the positively charged residues. As a result, the segments lengthen, straighten and tilt toward the membrane normal, establishing a stochastic pathway for ion translocation. In this picture, the permeant ions can be said to constitute a mobile substructure of the channel–ion complex. Because of their loose binding, these ions may exhibit dynamic characteristics that do not depend strongly on the fixed structure of the channel, behaving somewhat like a gas. The behavior of this ion gas would thus be comparable to that of ion gases in other structures, such as superionic conductors, well-known materials with highly mobile ions that we have already encountered in Chapter 8. The ion movements may be explainable by percolation theory, outlined in Chapter 18, Section 4. The resting channel can be compared to a compressed accordion. Just as the accordion’s pleats are close together, the hydrogen bonds of the α-helices are short. When a depolarization allows the channel to expand, the hydrogen bonds lengthen; like the accordion’s pleats as the instrument is pulled open, they open out. The electrostatic repulsion of the positively charged residues in the S4 segments provides a strong force tending to expand segments of the channel. When the bonds reach a certain threshold length, they become long enough to house a sodium ion instead of a hydrogen ion.
Because of the high concentration of the sodium ions, they are able to compete successfully for sites occupied by hydrogen ions, while the hydrogen ions hop away. The Na+ enter from the outer membrane surface and hop inward, displacing the mobile H+ ions. The sodium ions enter quickly, like a shock wave. Soon the region of filled sites spans the channel, even though some vacant sites may remain. The sodium ions play two roles in the channel: One is to determine its new structure, which is a metalloprotein in the conducting phase. The other is to continue hopping across the channel, from filled sites to vacant sites, in a cooperative motion. In this way, the channel acts like a superionic conductor. The driving force behind the ion motion is the electrochemical potential difference of sodium ions across the membrane. The motion can stop for a number of reasons:

• The difference in electrochemical potential, Δμ, can become too small. This can happen when the external Na+ concentration becomes lowered, when the internal Na+ concentration is raised, or when the voltage becomes more positive, rising to the Na+ reversal potential.
• The current can also stop when the temperature drops so low that the activation barrier is not overcome, or when it rises so high that the pathways are destroyed by random motion.
• It can stop when multivalent ions enter the sites and become immobilized, blocking the sites.
• It can stop when toxin molecules, specially adapted to this function for defense or predation, pin the ferroelectric phase, keeping the hydrogen bonds from expanding over a significant region of the pathway.
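The first of these stopping conditions can be made quantitative with the Nernst relation for the Na+ reversal potential. The sketch below is illustrative only; the squid-axon concentrations and the temperature are typical textbook values, not taken from this chapter.

```python
import math

def nernst_potential(c_out_mM, c_in_mM, T=293.15, z=1):
    """Nernst equilibrium (reversal) potential, in volts, for an ion of valence z."""
    R = 8.314462618   # gas constant, J/(mol K)
    F = 96485.33212   # Faraday constant, C/mol
    return (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Illustrative squid-axon Na+ concentrations (mM): about 440 outside, 50 inside.
E_Na = nernst_potential(440.0, 50.0)
print(f"Na+ reversal potential: {E_Na * 1000:.1f} mV")  # about +55 mV
```

As the membrane voltage rises toward this value, the electrochemical driving force on Na+ vanishes and the inward current stops, as the first bullet states.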
As the H bonds in a channel segment stretch during a depolarization, the hydrogen ions may become mobile by collective proton motion along a pathway and be replaced by the permeant metal ions at the same sites. That implies that, if unhydrated sodium ions are available and the H bonds have been stretched sufficiently, Na+ can now enter the channel and replace the hydrogen ions in the H bonds at the backbone of the membrane-spanning segments. Because of the thermal motions of the ions and the atoms constituting the channel, these interactions are constantly fluctuating. In this critical region, the onset of an ion avalanche is a probabilistic, not a deterministic, event.

4. GATELESS GATING

We have seen that the molecular excitability of voltage-sensitive ion channels remains an open problem. However, a proposed mechanism of voltage-sensitive gating, the gateless gating model, describes gating as well as ion permeation in terms of condensed-state mechanisms.28

4.1. How does a depolarization change an ion conductance?

In the closed state, the permeant segments are α helices that are impermeable to metal ions. Depolarization relaxes the forces stabilizing the helix structure of the pore-domain segments, allowing the mutual repulsion of the positive residues to expand and partially unwind them in a rapid cooperative transition that travels as a wave across the channel. This allows the segments to tilt toward the membrane normal. H+ ions in the hydrogen bonds between the loops of the helices are replaced in a relay mechanism by permeant cations. In the open state, the segments are stabilized by the permeant cations.

4.2. Enzymatic dehydration of ions

When an ion enters the channel, does it keep its waters of hydration, or lose some or all of them? Does it bond with the channel? Does it undergo an ion exchange, and if so, with what ion?
The above observations suggest that the S4 segments make a transition from an α helix to a modified helix in which the Na+ or other cation can bind weakly to the backbone carbonyl oxygens and nitrogens, replacing the H+ connecting the loops of the helix; this transition would convert the open channel into a metalloprotein.29 If this is the case, a sodium ion can shed its hydration shell and attach to the outermost oxygen site of the helix by the ion-exchange reaction:

Na+·(H2O)n + N–H···O=C  ⇌  N–Na···O=C + nH2O + H+
Figure 20.9. An ion channel modeled as an enzyme. Permeant ions are stripped of their hydration waters as they enter the channel on the left. After translocation of the ions by percolation, their hydration shells are restored. From Andersen and Koeppe, Molecular Determinants of Channel Function, Physiol. Rev. 72: S89–S158, 1992; used with permission.
where n is the number of water molecules in the Na+ hydration shell. This equation shows explicitly that the sodium ion loses its waters of hydration as it becomes chelated in the channel, and that a proton must be conducted away from the site. The Na+ ions, which are hydrated in solution, must become unhydrated to be solvated into the channel and replace H+ in hydrogen bonds. Because the stripping of the hydration shell from the ion must be an efficient process to maintain the rapid translocation of ions through the channel, it may well be catalyzed by an enzymatic unit at either end of the channel, presumably one or more of the external hydrophilic loops. This concept was proposed in a model by Olaf Andersen, in which ions from an outer aqueous vestibule undergo dehydration and association, followed by translocation through the membrane and then a dissociation and hydration in an inner aqueous vestibule.30 Andersen and R. E. Koeppe II point out that “channels are enzymes,” with features of substrate specificity, catalytic power and regulation; see Figure 20.9.31 The possibility that the dehydration reactions are enzymatically catalyzed is also suggested by experiments showing solvent dependence of ion kinetics.32 Jan and Jan33 find a consensus that potassium channels have multiple ion-binding sites that can discriminate between K+ and Na+, probably in the absence of most of the hydration shell. The outer enzymatic unit would have a strong effect on Na+ ion conduction because of the voltage drop across it. For that reason, the enzymatic unit may be a linker region, such as the P region, which has been identified as part of the ion-conduction pathway, traditionally called the pore.

4.3. Hopping conduction

In the gateless gating model, sodium ion conduction in Na channels proceeds by a percolation involving thermally activated hopping of Na+ from site to site, driven by the electrochemical potential gradient.
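The site-to-site hopping picture can be caricatured with a toy Monte Carlo sketch. This is not the directed-percolation formalism of Chapter 18, just a minimal one-dimensional illustration; the chain length, hop probabilities and step count are all hypothetical.

```python
import random

def hopping_current(n_sites=20, steps=20000, p_forward=0.6, p_backward=0.2, seed=1):
    """Toy 1D site-hopping sketch: ions hop between discrete sites, only into
    vacant ones, with a forward bias standing in for the electrochemical
    potential gradient. Returns the number of ions carried across the chain."""
    random.seed(seed)
    sites = [False] * n_sites          # True = site occupied by an ion
    transported = 0
    for _ in range(steps):
        if not sites[0]:               # inject an ion at the outer (left) end
            sites[0] = True
        moved = set()
        for i in range(n_sites - 1, -1, -1):   # sweep from the inner end
            if not sites[i] or i in moved:
                continue
            r = random.random()
            if r < p_forward:                  # hop inward, down the gradient
                if i == n_sites - 1:
                    sites[i] = False           # ion exits into the cell interior
                    transported += 1
                elif not sites[i + 1]:
                    sites[i], sites[i + 1] = False, True
            elif r < p_forward + p_backward:   # occasional backward hop
                if i > 0 and not sites[i - 1]:
                    sites[i], sites[i - 1] = False, True
                    moved.add(i - 1)
    return transported

print(hopping_current())   # net inward transport when p_forward > p_backward
```

The exclusion rule (hops only into vacant sites) is the essential ingredient: filled and vacant sites coexist along the chain, and transport proceeds by cooperative motion of the whole ion gas, as described in the text.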
The Na+ are stripped of their hydration shells as they enter the narrow part of the channel, and are rehydrated as they leave.

5. INACTIVATION AND RESTORATION OF EXCITABILITY

Currents through excitable membranes are described with reasonable accuracy by the Hodgkin–Huxley model and its successors. In the Hodgkin–Huxley formalism, the
macroscopic sodium conductance is described by the function m³h, where activation m and inactivation h are normalized linear kinetic functions. The part of the cycle involved with the spontaneous decline in ionic current is described within the Hodgkin–Huxley model as inactivation.

5.1. Inactivation as a surface interaction

As experiments show, the macroscopic Na current normally stops spontaneously while the depolarizing pulse is still on. The evolutionary advantages of inactivation are presumably that it improves communication and conserves pumping energy. The microscopic ion movements can be averaged over the ensemble of channels present in a cell or patch to yield a macroscopic current. While the average current is regular and predictable, the onset and termination of a spontaneous pulse are stochastic events. As emphasized in Chapter 15, theory is powerless to predict these events; it is limited to predicting their emergent probabilities. This process of inactivation is removable by internal proteolysis or specific channel mutations, and so is a separable feature of the channel, but only when the structure is changed. Figure 20.10 shows the effect of a mutation in which the intracellular loop between domains III and IV is cut, with the addition of four to eight residues at each end of the cut.34 Comparison of the unitary currents on the left side of Figure 20.10 with those of Figure 12.5 shows that channel openings, rather than being restricted to the early part of the depolarization step, are distributed throughout the step. The effect on the macroscopic currents through mutated channels at various values of depolarization, shown on the right side of the figure, displays a substantial loss of inactivation in comparison to currents through the wild-type channels. Such a removal of inactivation is seen when the hydrophobic sequence ile–phe–met (IFM) is replaced with glutamine residues (QQQ).
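The m³h time course, including the spontaneous decline of the current while the depolarizing step is still on, can be reproduced with a minimal Euler integration of the classic Hodgkin–Huxley rate functions (modern voltage convention, rates in ms⁻¹). The voltages and step duration below are illustrative.

```python
import math

# Classic Hodgkin-Huxley rate functions (ms^-1), membrane voltage V in mV.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

def sodium_gating(V_step=0.0, V_rest=-65.0, dt=0.01, t_end=10.0):
    """Euler-integrate m and h during a voltage step from V_rest to V_step.
    Returns (times, g) with g = m^3 h, the conductance relative to its maximum."""
    m = alpha_m(V_rest) / (alpha_m(V_rest) + beta_m(V_rest))   # resting m
    h = alpha_h(V_rest) / (alpha_h(V_rest) + beta_h(V_rest))   # resting h
    times, g, t = [], [], 0.0
    while t <= t_end:
        times.append(t)
        g.append(m ** 3 * h)
        m += dt * (alpha_m(V_step) * (1.0 - m) - beta_m(V_step) * m)
        h += dt * (alpha_h(V_step) * (1.0 - h) - beta_h(V_step) * h)
        t += dt
    return times, g

times, g = sodium_gating()
peak = max(g)
# the conductance rises (activation), then declines during the step (inactivation)
print(f"peak m^3*h = {peak:.3f} at t = {times[g.index(peak)]:.2f} ms; final = {g[-1]:.4f}")
```

The fast rise of m and the slower fall of h produce exactly the transient conductance described in the text: the current stops spontaneously even though the depolarization is maintained.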
The phenylalanine residue alone, with its resonance-stabilized aromatic ring, is critical: its replacement by Q reduces inactivation by a factor of 5000.35

What causes inactivation? We have already seen, in Section 2.3 of Chapter 14, one model explanation, based on a macroscopic ball-and-chain mechanism. We are interested in seeking a molecular explanation within the gateless gating model of an ion channel as a ferroelectric liquid crystal. Inactivation appears to be a surface phenomenon, since fast or N-type inactivation is diminished by intracellular mutations, and slow or C-type inactivation is diminished by extracellular mutations.36 Liquid crystals of all types are strongly affected by surface interactions. The orientation of a liquid crystal molecule is sensitive to the nature of the substrate that bounds it. This surface interaction is referred to as anchoring. It is analyzed in terms of a surface free-energy potential. Transitions in nematic or smectic mesophases may be stabilized by anchoring. For example, in a spontaneous Fréedericksz transition, the interaction of the tilt angle with the surface produces a distortion of the orientation of a nematic liquid crystal when the thickness of the phase becomes greater than a critical value. In a ferroelectric smectic C* mesophase, a macroscopic polarization appears under the influence of the boundary surfaces. When the thickness of the smectic phase
Figure 20.10. Effect of a mutation in which the intracellular loop between repeats III and IV is cut, with the addition of four to eight residues at each end of the cut. (Left) Recordings of the activity of the mutated Na channel in response to a depolarization to -20 mV from a holding potential of -100 mV. Note the double openings in line 8. (Right) Macroscopic Na+ currents from the mutated channels (top) compared to those from wild-type channels. Note the change in current scale. From Hammond, 1996, after Pappone, 1980.
is less than the helix pitch, the helix can be unwound. As we saw in Section 6.3 of Chapter 17, this director configuration is referred to as a surface-stabilized liquid crystal.37 Thus we may speculate that the anchoring of the channel, both extracellularly and intracellularly to cytoplasmic filamentous networks,38 may form the basis for the phenomenon of inactivation.

5.2. Restoration of excitability

At the microscopic scale, excitability is restored to a voltage-sensitive channel in an excitable membrane by the endergonic reversal of the steps that initiate the ion avalanche. A repolarizing voltage step restores the resting potential, which compresses the segments. The H bonds shrink, contracting the segment, which then tilts away from the membrane normal. In an Na channel, the sodium ions in the H bonds of a membrane-spanning segment percolate out and are replaced by hydrogen ions. In contrast to the passive relaxation responsible in this model for the onset of an ion avalanche, the re-formation of the excitable configuration is the endergonic part of the cycle. Ordered helices are rebuilt, and the compression of the channel causes the
helices to tilt away from the normal. This entropy-lowering process absorbs energy from the electric field, and is slow compared to the spontaneous activation process.

NOTES AND REFERENCES

1. Reprinted from E. Sackmann, in Biological Membranes, vol. 5, edited by Dennis Chapman, Academic, London, Copyright 1984, 105-143, with permission from Elsevier.
2. T. Mitsui, I. Tatsuzaki and E. Nakamura, An Introduction to the Physics of Ferroelectrics, Gordon and Breach, New York, 1976, 321.
3. S. Wrobel, M. Marzec, M. Godlewska, B. Gestblom, S. Hiller and W. Haase, SPIE 2372:169-175, 1995.
4. A. S. Davydov, Solitons in Molecular Systems, D. Reidel, Dordrecht, 1985, 71-78. With kind permission of Springer Science and Business Media.
5. G. Zundel, Adv. Chem. Phys. 111:1-217, 2000.
6. G. Iliadis, G. Zundel and B. Brzezinski, FEBS Letters 352:315-317, 1994.
7. Nikolaus Wellner and Georg Zundel, J. Molec. Struct. 317:249-259, 1994.
8. David W. Brown, Katja Lindenberg and Xidi Wang, in Davydov’s Soliton Revisited: Self-Trapping of Vibrational Energy in Protein, edited by Peter L. Christiansen and Alwyn C. Scott, Plenum, New York, 1990, 63-82. With kind permission of Springer Science and Business Media.
9. K. Iwasa and I. Tasaki, Biochem. Biophys. Res. Comm. 95:1328-1331, 1980; K. Iwasa, I. Tasaki and R. C. Gibbons, Science 210:338-339, 1980.
10. N. Yang, A. L. George, Jr. and R. Horn, Neuron 16:113-122, 1996; L. M. Mannuzzu, M. M. Maronne and E. Y. Isacoff, Science 271:213-216, 1996; O. S. Baker, H. P. Larsson, L. M. Mannuzzu and E. Y. Isacoff, Neuron 20:1283-1294, 1998; A. Cha, P. C. Ruben, A. L. George, Jr., E. Fujimoto and F. Bezanilla, Neuron 22:73-87, 1999.
11. O. Helluin, M. Beyermann, H. R. Leuchtag and H. Duclohier, IEEE Trans. Diel. El. Insul. 8:637-643, 2001.
12. J. F. Nagle, M. Mille and H. J. Morowitz, J. Chem. Phys. 72:3959-3971, 1980.
13. A. K. Dunker and D. A. Marvin, J. Theor. Biol. 72:9, 1978.
14. P. Yager, J. Theor. Biol. 66:1, 1977.
15. H. D. Chandler, C. J. Woolf and H. R. Hepburn, Biochem. J. 168:559-565, 1978.
16. P. Th. Van Duijnen and B. T. Thole, Chem. Phys. Let. 83:129-133, 1981.
17. S. P. Ionov and G. V. Ionova, Dokl. Biophys. 202:22-24, 1972; translated from Dokl. Acad. Nauk. 202:960-962, 1972.
18. G. Zundel, Trends in Physical Chemistry 3:129-156, 1992; G. Zundel, Ferroel. 220(3-4):221-242, 1999.
19. Reprinted from G. Zundel, B. Brzezinski and J. Olejnik, J. Mol. Struct. 300:573-592, 1993, with permission from Elsevier.
20. Reprinted from Zundel et al., 1993, with permission from Elsevier.
21. B. Brzezinski, A. Jarczewski and G. Zundel, J. Molec. Liquids 67:15-21, 1995.
22. R. Janoschek, E. G. Weidemann, H. Pfeiffer and G. Zundel, J. Amer. Chem. Soc. 94:2378-2396, 1972.
23. G. Zundel, in Biophysics, edited by W. Hoppe, W. Lohmann, H. Markl and H. Ziegler, Springer, Berlin, 1983, 243-254. With kind permission of Springer Science and Business Media.
24. M. Eckert and G. Zundel, J. Phys. Chem. 91:5170-5177, 1987; G. Zundel, Adv. Chem. Phys. 111:1-217, 2000.
25. Georg Zundel, in Transport through Membranes, Carriers and Pumps, edited by Alberte Pullman, Joshua Jortner and Bernard Pullman, Kluwer Academic, Dordrecht, 1988, 409-420.
26. G. Zundel and B. Brzezinski, in Proton Transfer in Hydrogen-Bonded Systems, edited by T. Bountis, Plenum, New York, 1992, 153-166; G. Zundel, J. Mol. Struct. 322:33-42, 1994; G. P. Tsironis, in Nonlinear Excitation in Biomolecules, edited by M. Peyrard, Springer, Berlin and Les Editions de Physique, Les Ulis, 1995, 361-367.
27. H. R. Leuchtag, Biophys. J. 70:A321, 1996.
28. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999.
29. H. R. Leuchtag, Biophys. J. 66(2):A356, 1994.
30. O. S. Andersen, Ann. Rev. Physiol. 46:531-548, 1984.
31. O. S. Andersen and R. E. Koeppe II, Physiol. Rev. 72:S89-S158, 1992.
32. C. L. Schauf, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman Jr. and H. R. Leuchtag, Plenum, New York, 1983, 347-363.
33. L. Y. Jan and Y. N. Jan, Cell 56:13-25, 1989.
34. P. A. Pappone, J. Physiol. 306:377-410, 1980. By permission of Blackwell Publishing.
35. Reprinted from Constance Hammond, Cellular and Molecular Neurobiology, Academic, San Diego, 1996, 136f, with permission from Elsevier.
36. Bertil Hille, Ion Channels of Excitable Membranes, Third Edition, Sinauer, Sunderland, 2001, 631-634.
37. A. A. Sonin, The Surface Physics of Liquid Crystals, Gordon and Breach, Amsterdam, 51-58.
38. J. Metuzals, D. F. Clapin and I. Tasaki, in Structure and Function in Excitable Cells, edited by D. C. Chang, I. Tasaki, W. J. Adelman, Jr. and H. R. Leuchtag, Plenum, New York, 1983, 53-73; Alan J. Hodge and William J. Adelman, Jr., op. cit., 75-111; Nobutaka Hirokawa, op. cit., 113-141.
CHAPTER 21
BRANCHING OUT
The knowledge we gain from the study of physical and chemical systems gives us new insights into the structure–function relationships of voltage-sensitive ion channels. The similarity of these channels to ferroelectric liquid crystals has alerted us to the importance of molecular chirality and tilt. Membrane-spanning segments of voltage-sensitive ion channels consist of α helices, dipolar columns with a multiplicity of amino-acid residues. In particular, the S4 segments, known to contain ordered arrays of positively charged arginine and lysine residues, have been experimentally linked to the gating phenomenon. Released by a depolarization of the excitable membrane and driven by the mutual repulsion of their positive charges, the S4 segments expand, shifting the channel molecule into an extended configuration that unwinds helices traversing the channel. Negatively charged residues on other segments, S2 and S3, also play a role in the control of gating. The dependence of ion conductances on pH and the observation of H–D isotope effects suggest that protons play a role in the conduction of permeant metal ions. The work of Georg Zundel and collaborators, cited in Chapter 20, shows that permeant metal ions can replace protons from hydrogen bonds when these bonds are elongated. Such a relay mechanism could be the basis of ion permeation in an ion channel. A chain of widened hydrogen bonds within the backbone of an α helix, along which permeant cations travel readily, may be considered a cation wire, by analogy to the proton wire discussed in Section 3.3 of Chapter 20. Directed percolation is a statistical model of ion hopping in a particular direction; given data on site binding, it is capable of explaining gating without relying on mechanical gates.
Gateless gating, a conformational transition induced by a change in electrical, mechanical or thermal conditions, alters the bond lengths and angles of the channel’s molecular structure, allowing permeant ions to enter and exchange with protons. Channel opening, a symmetry-breaking conformational transition, may be initiated by a fluctuation that leads to an avalanche of ions traversing the membrane. Stresses at the inner and outer boundaries in reaction to the unwinding of helices can stop an avalanche before the end of a depolarization step, respectively leading to fast and slow inactivation. In this chapter we will examine the special roles played by the amino acids with branched sidechains, valine, leucine and isoleucine, in the membrane-spanning helices, and the electrostatic interactions between the charged residues of S4 segments.
1. FERROELECTRIC LIQUID CRYSTALS WITH AMINO ACIDS
On the basis of the ferroelectric properties and liquid crystalline nature of ion channels in membranes, we are motivated to look at what is known about ferroelectric liquid crystals, particularly those that contain amino acids. For helielectricity to appear in liquid crystals, they must satisfy three conditions:

• Chirality, with at least one asymmetrical carbon atom;
• A dipole moment perpendicular to the molecule’s long axis;
• A smectic phase with nonzero tilt angle.
Smectic liquid crystals composed of rod-shaped molecules with their axes tilted with respect to the layer normal are called smectic C. When the molecules have the property of handedness (chirality) they exhibit ferroelectricity and are designated smectic C*. The first ferroelectric liquid crystal to be synthesized, DOBAMBC, uses amyl alcohol as a chiral source; see Chapter 17, Section 6.1. The dielectric permittivity of DOBAMBC is about 10, and it has a relaxation frequency of several hundred Hz. The dielectric response was interpreted as a Goldstone mode, a fluctuation of the azimuthal angle φ as the helix winds or unwinds; see Figure 17.18. When the Goldstone mode is suppressed by boundary effects in thin samples or by applying a dc bias field, a prominent peak appears in the dielectric response. This peak is explained as a soft mode, a linear electroclinic coupling contributed by fluctuations in the tilt angle θ.

1.1. Amino acids with branched sidechains

In studying ferroelectric liquid crystals with sidechains of amino acids, Katsumi Yoshino and Takao Sakurai1 observed that when the amino acids contained a chiral center and the hydrophobic sidechain was branched, the resulting ferroelectric liquid crystals had extremely large values of spontaneous polarization, as high as 3 × 10-7 C/cm2. Their dielectric permittivities were also extremely large, exceeding 8000 in some cases. The amino acids with this unusual property are valine, leucine and isoleucine; see Chapter 12, Section 5.1. The temperature dependence of the spontaneous polarization of one of the compounds with branched alkyl groups, 2(S),3(S)-4-octyloxybiphenyl 3-methyl-2-chloropentanoate (3M2CPOOB), in the chiral smectic and low-temperature phases is shown in Figure 21.1. When the compounds were mixed with ferroelectric liquid crystals of opposite helical sense, very short response times (10–20 μs) were found in surface-stabilized ferroelectric liquid crystal cells. The dielectric permittivity of 3M2CPOOB depends anomalously on thickness, dropping from over 7000 to about 1000 as the cell
Figure 21.1. Temperature dependence of the spontaneous polarization (nC/cm2) of 3M2CPOOB in the chiral smectic C* and low-temperature phases. From Yoshino and Sakurai, 1991.
thickness is reduced from 250 μm to 25 μm. When the Goldstone mode was suppressed with a dc bias field, a sharp peak in 3M2CPOOB, with the sidechain of isoleucine as one of two chiral centers, was observed at the transition temperature; see Figure 21.2, which shows the temperature at
Figure 21.2. Dielectric permittivity versus temperature for a sample of 3M2CPOOB that is unbiased, and biased at different voltages. From Yoshino and Sakurai, 1991.
which the dielectric permittivity peaks rises with increasing field. The field effect is interpreted as a contribution of the soft mode, in which the tilt angle θ varies.2

1.2. Relaxation of linear electroclinic coupling

The measurement of optical and dielectric properties of smectic C* liquid crystals provides information on the soft mode viscosity and relaxation frequency. Figure 21.3 shows the formulas, phase sequences and transition temperatures of three SmC* (SC*) compounds.
Figure 21.3. The formulas and phase sequences of three chiral smectic compounds with branched amino acid sidechains. I = isotropic phase. From Dupont et al., 1991.
Optical studies of the compound COS-C10 Isoleucine and other smectics confirm the theoretical prediction that the electroclinic component of polarization has a Lorentzian frequency dependence; Lorentzian spectra in excitable membranes are discussed in Chapter 11, Section 4.4. The relaxation frequency is proportional to (T − TC*)-1. Transmitted light intensity is a measure of the tilt angle θ; see Figure 21.4.3

1.3. Electrical switching near the SmA*–SmC* phase transition

In the electroclinic effect, a change in the tilt angle θ and the azimuthal angle φ may be induced by an applied electric field or a temperature change. These changes can be observed by their optical effects or by the currents induced. Figure 21.5 shows the complex switching behavior near the SmA*–SmC* phase transition, determined by numerical simulation.4 Calculated current curves for the switching due to the positive parts of the square wave voltage in Figure 21.5 are shown in Figure 21.6. Only the electroclinic and polarization reversal contributions are included. The electroclinic part dominates the initial current decay in the SmA* phase, while the polarization reversal dominates the current bump that appears on the transition to the SmC* phase. The height of the
Figure 21.4. Tilt angle modulation in COS-C10, in a log–log plot of transmitted light intensity versus frequency. The -1 slope at high frequencies characterizes the Lorentzian frequency dependence. From Dupont et al., 1991.
current bump is almost independent of the temperature, and its delay increases with the temperature interval below the phase transition.

1.4. Two-dimensional smectic C* films

Free-standing films of liquid crystalline phases have been prepared with thickness varying from hundreds to two molecular layers. The reduction of a lyotropic system from three to two dimensions leads to a drastic disruption of long-range correlations by thermal fluctuations of long wavelength. Studies of the behavior of orientational patterns in the presence of external fields allow the determination of viscoelastic parameters associated with the different phases. These orientational patterns are characteristic of the underlying structural order of the phases. In a two-dimensional nematic, orientational fluctuations cause the long-range direction correlations to decay as 1/r^η, where η ≈ 0.3. The quasi-long range order with this algebraic decay of correlations still exhibits a phase transition to a higher temperature phase with exponentially decaying correlations and short range order. In smectic C films, the molecules are tilted at an angle θ with respect to the layer normal, and the local direction of the molecular tilt is given by an angle φ; see Figure 17.10. Since changes in φ involve much less energy than changes in θ, the critical behavior of two-dimensional films is dominated by the unbinding of orientational singularities in φ. As in nematics, long-wavelength fluctuations of φ disrupt the order in smectics to result in a phase with only quasi-long range orientational order.
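The distinction between quasi-long-range (algebraic) order and short-range (exponential) order can be sketched numerically. The exponent η and correlation length ξ below are illustrative values, not fitted to any particular film.

```python
import math

def algebraic(r, eta=0.3):
    """Quasi-long-range order: correlations decay as a power law r**(-eta)."""
    return r ** (-eta)

def exponential(r, xi=5.0):
    """Short-range order: correlations decay exponentially, exp(-r / xi)."""
    return math.exp(-r / xi)

# The power law is scale-free: the correlation ratio between r and 10r is the
# same at every scale, while the exponential ratio collapses with distance.
for r in (10.0, 100.0, 1000.0):
    print(r, algebraic(10 * r) / algebraic(r), exponential(10 * r) / exponential(r))
```

The constant decade-to-decade ratio of the power law (10^−η ≈ 0.5 for η = 0.3) is the signature of the quasi-long-range order described above; past the transition, the exponential form wipes out correlations beyond a few ξ.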
Figure 21.5. Complex switching behavior near the SmA*–SmC* phase transition, showing the response of tilt angle θ and azimuthal angle φ to an applied square wave voltage. In the top figure, for a temperature below the transition temperature, the distance from the origin represents θ; in the bottom figure, at and above the transition temperature, the vertical scale is magnified 40 times. From Clark and Lagerwall, 1991.
Electric fields are used to couple to the tilt director of ferroelectric smectic C* films. Orientational defects typically found in these films are point singularities and soft walls (or kinks), in both of which the molecular tilt director undergoes a 2π rotation. Soft tilt director walls are obtained by first applying a field to obtain a uniform orientation, and then reversing the direction of the applied field. The width of the wall, defined as the distance between points at which φ = −π/2 and φ = π/2, varies as the inverse square root of the magnitude of the electric field. The polarization space charge of the film is neutralized by the movement of free ions.5 The similarity of the creation of a soft wall by a reversal of the applied electric field to the effect of a prehyperpolarization followed by a depolarization on an excitable membrane is evident.

2. FORCES BETWEEN CHARGED RESIDUES WIDEN H BONDS

While the screw-helical model, reviewed in Chapter 19, focuses on the interaction between the charges and the external field, a model I proposed in 1994 analyzes the
Figure 21.6. Current versus time for the switching due to the positive parts of the square wave voltage in Figure 21.5. The electroclinic response dominates the initial current decay in the SmA* phase, while the polarization reversal dominates the current bump at and below the transition to the SmC* phase. From Clark and Lagerwall, 1991, after Andersson et al., 1988.
internal repulsive forces between the positive charges in the S4 segments. These outward forces destabilize the S4 segments, which in the resting phase were stabilized in a ferroelectric order due to the resting potential. The expansion of the S4 segments drives the channel into its open configuration.

2.1. Electrostatics and the stability of S4 segments

The positive electronic charges, assumed q = e, in the S4 segment repel each other according to the inverse square law of electrostatics. This force can be calculated from the potential energy Uij between charges i and j, Equation 1.11 of Chapter 6:

Uij = qiqj / (4πε0εrij)    (2.1)
In an idealized model of an S4 helix, with the kink produced by a proline residue ignored, six positive charges are arranged at points located on a helix of radius R = 0.8 nm. As a first approximation, we model the S4 segment as an isolated helix in a protein medium of ε = 4. The axial separation of the charges is A = 0.45 nm, and their angular separation is Δφ = −60° = −π/3; see Figure 21.7.
Figure 21.7. Model of a single S4 segment with six positive charges equally spaced at every third residue along an α helix. External fields, dipole–dipole forces and induced surface charges at the membrane boundaries are ignored in this simple model. (A) Although the helix is right-handed, the positive charges are located along a left-handed helix. (B) End view of the helix from outside, showing the planar component of the distance between adjacent charges. From Leuchtag, 1994.
From the figure we see that the distance between charges i and j is rij = {(j − i)²A² + 4R² sin²[(j − i)Δφ/2]}½. The model calculation shows that the energy to place the outermost charge i = 1 in its location on the isolated helix is about 105 kJ/mol, while the energy required to break the hydrogen bond that holds that residue in place is only about 20 kJ/mol. Similar calculations for the other charges show that the entire S4 helix must expand from its normal dimensions. This calculation is of course far too simple. Since the helix is embedded in a membrane between two aqueous media, we must consider the effect of the induced
charges at the water surfaces (possibly modeled by mirror charges), which will greatly reduce the energy, decreasing the instability. Repulsions from the positively charged S4 segments of the other repeats, on the other hand, will increase the instability, while negatively charged residues on S2 and S3 segments will reduce it. The effect of electrostrictive and piezoelectric forces, which tend to compress the helix, and the mechanical interactions between segments, were also ignored in the model calculation.6 The presence of the branched nonpolar sidechains val, leu and ile suggests a strong dependence of the dielectric permittivity on the electric field, as noted in Section 1.1 of this chapter. Suppose that ε increases from 4 in the unpolarized (active) channel to 100 in the ferroelectric (resting) channel. According to Equation 2.1, this increase in ε implies a reduction in electrostatic potential energy U by a factor of 100/4 = 25. Thus, the energy to insert the outermost charge into the α helix is reduced from 105 kJ/mol to 4.2 kJ/mol, far below the energy sufficient to break a hydrogen bond. The other repulsions would be similarly reduced. However, a depolarization of the membrane would restore ε to its unpolarized value of 4, allowing the H bonds to widen and receive metallic cations. Because of interactions between the S4 segments and the other components of the channel, the resting channel is in a metastable state, ready to relax whenever the external field is reduced to threshold. Upon threshold depolarization, the S4 segments may be expected to extend outward and tilt toward the normal. These conformational changes may be transmitted from a segment to its neighboring segments by the elastic interaction forces between the segments.

2.2. Changes in bond length and ion percolation

Since depolarization allows the S4 segments to extend outward, how does that relate to the question of switching ion conductance?
The observed outward movement of ion channels during excitation, due to the electrostatic repulsions of the charged amino acid residues, increases the lengths of the H bonds connecting the loops of the helices. If the segment extension is distributed more or less uniformly throughout the length of the helix, the hydrogen bonds forming the S4 helices are strained and must lengthen by the same proportion as the segment as a whole. This requires the helices to partially unwind and reform in a different configuration. The lengthening of the H bonds provides sites that the permeant ions may occupy, while other sites remain vacant. Percolation from occupied to vacant sites provides an ion-conduction pathway. These observations give us a hypothesis we can check: The voltage-sensitive ion channel is a ferroelectric liquid crystal component. In a membrane at resting potential, the channel is in a ferroelectric phase. At a threshold depolarization, the tilt of the S4 segments rotates toward the normal and the segments elongate. These changes break the ferroelectric order of the channel, eliminating the spontaneous polarization and decreasing the dielectric permittivity. The stretching of the hydrogen bonds creates sites with low activation energy, which can be selectively occupied by the permeant ions. The ions can be activated by thermal energy from occupied into vacant sites and thus be driven across the membrane by the electrochemical potential gradient.
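The scaling argument of the model calculation — electrostatic repulsion falling as 1/ε while the helix geometry stays fixed — can be sketched numerically. In the following Python sketch the helix geometry (axial rise A, radius R, angular spacing PHI) consists of hypothetical placeholder values, not figures taken from the text; only the 1/ε scaling, the factor 100/4 = 25, follows from the argument itself.

```python
from math import sin, pi, sqrt, radians

EPS0 = 8.854e-12          # vacuum permittivity (F/m)
Q = 1.602e-19             # proton charge (C)
NA = 6.022e23             # Avogadro's number (1/mol)

# Hypothetical helix geometry (placeholders, not values from the text):
A = 0.45e-9               # axial rise between successive charges, m (3 residues x ~0.15 nm)
R = 0.25e-9               # radius of the circle on which the charges sit, m
PHI = radians(300.0)      # angular spacing between successive charges (3 residues x 100 deg)

def r_ij(i, j):
    """Distance between charges i and j on the helix."""
    return sqrt((j - i)**2 * A**2 + 4.0 * R**2 * sin((j - i) * PHI / 2.0)**2)

def insertion_energy(i, n, eps):
    """Electrostatic energy (kJ/mol) of charge i interacting with the other n - 1 charges."""
    u = sum(Q**2 / (4.0 * pi * EPS0 * eps * r_ij(i, j))
            for j in range(1, n + 1) if j != i)
    return u * NA / 1000.0

u4 = insertion_energy(1, 6, 4.0)      # unpolarized channel, eps ~ 4
u100 = insertion_energy(1, 6, 100.0)  # ferroelectric resting channel, eps ~ 100
print(f"reduction factor: {u4 / u100:.1f}")  # → reduction factor: 25.0
```

Whatever geometry is assumed, the ratio of the two insertion energies is fixed by the permittivities alone, which is the point of the argument in the text.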
492
CHAPTER 21
2.3. Replacement of charged residues with neutrals

Real channels are much more complex in structure and behavior than the simple model sketched in Figure 21.7. The Shaker B potassium channel has two acidic (negative) residues in S2, one acidic residue in S3, and seven basic (positive) residues in S4. Francisco Bezanilla, S. A. Seoh, Daniel Sigg and Diane Papazian measured the gating charge per channel in mutants, ShB-IR, in which the inactivation had been removed. The charged residues were neutralized one at a time. The results are summarized in Figure 21.8.7 Neutralization of the outermost charge in S4, R362, produced a mutant that showed evidence of more than one open state. Neutralization of the sixth residue, R377, resulted in loss of function of the channel. Of the two acidic residues in S2, neutralization of the outer glutamic acid, E283, had no effect on the gating charge, while neutralization of the inner, E293, drastically decreased the charge per channel. Neutralization of the aspartic acid in S3, D316, decreased the charge per channel by about two proton charges. Drastic charge reductions were recorded when any of the arginines R365, R368 or R371 was neutralized. Neutralization of the lysine K374 resulted in a loss of the channel’s function unless a negative residue, E293 or D316, was simultaneously neutralized. The maximum open probability of the neutralization mutants did not differ appreciably from that of the ShB-IR channel. Since the Shaker B channel contains four identical subunits, one might simply predict that neutralization of one positive charge per subunit would produce a decrease of four protonic charges, 4e, per channel. The results in Figure 21.8 show that the charge reduction can be greater than 4e. Remarkably, the neutralization of a negative charge, which might be expected to increase the net charge, can also result in a gating charge reduction.
The collective movements of the channel segments and the induced surface charges evidently are quite complex. An explanation suggested by Bezanilla and collaborators is that, as the charged segments move through the channel, some of the charges extend into the aqueous compartment inside or outside the membrane, where they are neutralized by free charges in solution.

3. MICROSCOPIC CHANNEL FUNCTION

Voltage-sensitive ion channels are guests in a host–guest system in which the lipid bilayer separating aqueous phases is the host. They are much larger and more complex than the molecules that have been studied in ferroelectric liquid crystal laboratories. The membrane is bounded lyotropically by aqueous media with asymmetrical ion distributions. Despite these complications the channels have significant similarities to helielectric molecules: the transmembrane helices are columnar segments with a core of aromatic rings, a bend at the proline residue and chiral endgroups.

3.1. Tilted segments in voltage-sensitive channels

The similarities of voltage-sensitive ion channels to ferroelectric liquid crystals, which require chirality and tilt, suggest the question: Are the membrane-spanning segments
Figure 21.8. Charges that contribute to voltage sensing in the Shaker B channel. The schematic representation of the transmembrane segments of the subunit at the bottom of the figure shows the acidic residues in S2 and S3, and the basic residues in S4. The bar chart at the top shows the charge per channel in units of proton charge e0 of the wild type (ShB-IR) and the indicated neutralization mutants. The open and hatched bars represent two independent methods of estimating the charge. From Bezanilla et al., 1997.
of the channel tilted? A topology diagram of a homologous repeat of an Na+ or Ca2+ channel or a subunit of a K+ channel, first proposed by H. Robert Guy and P. Seetharamulu,8 shows tilt in the outer part of the S4 segment outside the kink and in the S6 segment; see Figure 21.9.9
Figure 21.9. Topology diagram of a homologous repeat of an Na+ or Ca2+ channel or a subunit of a K+ channel. The cylinders represent helices. The outer part of the S4 segment outside the kink (the inner part being labeled S45) and the S6 segment are shown tilted. From Guy and Durell, 1996.
A conformational transition of the channel would be expected to impose a change in the tilt angles of the segments, which would induce either a soft mode change in polar angle or an azimuthal rotation about the membrane normal, a Goldstone mode; see Chapter 16, Section 3.3 and Chapter 17, Sections 4 to 6.

3.2. Segment tilt and channel activation

The propagation of electric signals along and through nerve membranes has long been suspected of being related to ferroelectric liquid crystals. In a discussion of the biological significance of chirality, Patricia Cladis and W. van Saarloos10 note that parts of the brain and cell membranes are chiral, so that functions of the living process depend on the collective dynamic properties of mesophases. Xin-yi Wang11 has proposed a model of front propagation in nematics, pointing out that the smectic C* form may provide additional insight into this problem, since its chirality matches that of biological systems. Wang’s equation describing the motion of the wall is similar to the Hodgkin–Huxley equation. Although the authors refer to the lipid molecules of membranes, these models can be extended to proteins. Segment tilt has a direct interpretation for ferroelectric liquid crystals: only tilted smectics are ferroelectric. Structural studies by Stewart R. Durell and Guy provide three-dimensional pictures of channels that show tilt in all segments. Figure 21.10 shows two views of a model of a Shaker K+ channel.12
Figure 21.10. A model of a Shaker K+ channel, viewed (A) from outside and (B) from the side, with only two of the four subunits shown. The ribbons represent helices. The code letters for the ion-selective residues of the P segments are indicated on the axis of the pore. The oxygen atoms (gray) of these segments are postulated to bind permeant K+ ions. From Guy and Durell, 1996.
3.3. Chirality and bend

A ferroelectric liquid crystal molecule must also be chiral. True to form, the membrane-spanning segments possess multiple chiral centers. Furthermore, three of the four S4 segments contain highly conserved proline residues, which are known to produce a kink in the helix. These prolines, which bend the helical columns, have been found to be essential to excitability. We have seen in Chapter 19 that the behavior of the Na channel is consistent with a transition in which the closed channel is ferroelectric, while the open channel is not polar. By analogy to a SmC* to SmA* transition, we might expect to see the tilt angle of the segments going to zero upon depolarization. It is interesting to compare Figure 21.10 with Figure 20.9 of the previous chapter. Here the sequence DGYGVT is comparable to the translocation region of
Andersen and Koeppe. Since the K+ ions will generally flow from the cytoplasm outward, the dehydration–solvation region would be on the inner side of the selectivity filter and the desolvation–hydration region would be on the outside. S. K. Tiwari-Woodruff and colleagues, using combined mutations in a strategy to explore electrostatic interactions among charged residues in Shaker K+ channels, have identified positions that are in close proximity in the functional protein.13 A model consistent with their data places segments S2 and S3 on parallel helices with their sidechains fully extended and an S4 helix crossing them at a tilt angle of about 60°. This evidence of a tilted S4 segment supports the hypothesis of a functional role for the S4 tilt angle in gating, as in SmC* ferroelectric liquid crystals. They propose a mechanism for a tilt angle reduction in which the sidechains no longer fully extend, which may involve the unwinding of the S4 helices to form an open configuration.

4. CRITICAL ROLES OF PROLINE AND BRANCHED SIDECHAINS

Because of the finding (Section 1.1 of this chapter) by Yoshino and Sakurai that the spontaneous polarization of ferroelectric liquid crystals is greatly enhanced by branched, nonpolar amino acid sidechains (probably by hindered rotation), we can predict that the replacement of nonpolar branched residues with unbranched ones in S4 segments will affect channel behavior. The loss of branched sidechains would be expected to lower the spontaneous polarization of the channel and make it more difficult to reconstitute the excitable ferroelectric configuration. It should also require a greater hyperpolarization (or a lower temperature or a longer time) to restore the excitable, ordered form of the channel. A minimal change, the replacement of a single residue, may therefore be expected to weaken the ferroelectric phase, the excitable resting state of the channel, and slow its reestablishment.

4.1. The role of proline

As discussed in Section 2.2 of Chapter 12, the demonstration by Mueller and Rudin in 1968 of excitability induced in artificial membranes by alamethicin and basic polypeptides set off an interest in model systems of planar lipid bilayers with embedded peptides.14 Natural peptides, including those with antimicrobial properties, turned out to form ion channels or pores.15 The goal of the peptide strategy is a molecular dissection of ion channels with the aim of reconstituting the structures involved in gating and permeation of ions. This strategy was used by Hervé Duclohier and colleagues to address the mechanisms of gating and selectivity in voltage-sensitive sodium channels. A group of synthetic peptides was prepared to test their functional properties after reconstitution into planar lipid bilayers. The peptides, selected from the four domains of the electric eel sodium channel, were chosen to mimic the voltage sensors (S4), together with their contiguous segments (S45 or L45), as well as the P regions associated with selectivity. The presumed locations of these regions in the classical topology of the sodium channel subunit within the membrane are shown in Figure 21.11.16
Figure 21.11. Positions of peptides S4, S45 and P in the classical transmembrane topology of the subunit of the voltage-sensitive sodium channel investigated by dissection and incorporation of peptide fragments in lipid bilayers. From Duclohier et al., 1997.
The S4 segment and its contiguous L45 linker are joined by helix-breaking proline residues in homologous repeats I, II and III, while repeat IV has no pro residue. Functional and conformational correlations are found to be tuned to the presence and position of a single pro, suggesting an important role for the kink in gating mechanisms.17 The kinks are comparable to those of ferroelectric liquid crystal molecules in a bent configuration; see Chapter 17, Figure 17.19.

4.2. The role of branched nonpolar amino acids

Since the branched sidechains val, leu and ile have significant effects in ferroelectric liquid crystals, one test of whether these branched residues play a critical role in ion channels is to replace them with unbranched sidechains. Because the branched sidechains favor the ferroelectric (resting) phase, their partial removal should favor activation and indirectly affect inactivation. The residues val, leu and ile are present in large numbers in S4 segments, the voltage sensors of the voltage-sensitive ion channels. Studies by V. J. Auld and
collaborators found strong electrophysiological effects of mutations replacing hydrophobic branched residues with unbranched ones.18 Data from a 1991 study by G. A. Lopez, Yuh Nung Jan and Lily Jan showed large depolarizing shifts of the activation as well as the inactivation curve, especially with L361A and L375A, replacements of a leu with an ala at the positions immediately before the first arg and after the fifth basic residue. The branched residues whose replacement by unbranched ones produced the strongest depolarizing shifts (L361, L375 and L382) were separated by multiples of seven, and hence located on one side of an α helix, presumably in the closed-channel configuration. Some replacements of branched with unbranched residues produced activation shifts in the hyperpolarizing direction, but those residues were on the opposite side of the helix.19

4.3. Substitution leads to loss of voltage sensitivity

In 2001 a test of this model was carried out with the peptide strategy on S4L45 fragments of the eel Na channel by Olivier Helluin and collaborators in the Rouen laboratory of Hervé Duclohier. The replacement of selected residues containing branched, nonpolar sidechains from S4L45 peptides with unbranched sidechains was found to alter the voltage-dependent properties of bilayers containing these channels.20 The peptide strategy was applied to S4L45 repeat III of the electric eel Na+ channel, since it had been found to be the most voltage-sensitive in a planar lipid bilayer assay.21 Circular dichroism spectroscopy showed a conformational transition (from helix to extended forms) occurring with increasing solvent dielectric constant that was broader with repeat III.
Electrical activity was assayed in planar lipid bilayers doped with voltage-sensor analogs in macroscopic and single-channel configurations.22 Substitutions were made at positions 9 and 15 of eel repeat III of S4L45, equivalent to residues 1100 and 1106 of the sodium channel sequence.23 The unbranched amino acids alanine or α-methylalanine (aib, U) were substituted for the branched isoleucine at position 9 and leucine at position 15; see Fig. 21.12.
Figure 21.12. Amino acid sequences of peptides, showing unbranched-for-branched substitutions at positions 9 and 15 (bold I and L) of the S4L45 segment of eel repeat III. U = α-methylalanine. From Helluin et al., 2001.
Figure 21.13. Electrical activity induced at room temperature in analogs of the voltage sensor S4L45 of repeat III in planar lipid bilayers. Macroscopic voltage sensitivity for (A) S4L45(III)U9,15 and for (B) the wild type, S4L45(III)A9,15. Single-channel recordings are displayed for (C) the doubly substituted peptide at two applied voltages and (D) the wild type peptide at 80 mV (bottom) and 55 mV (top trace). Openings are upward deflections. Vertical scale bar, 10 pA for (C), 2 pA for (D); horizontal, 50 and 100 ms for the lower and upper traces in (C) and 500 ms in (D). From Helluin et al., 2001.
The L9A substitution (of ala for leu next to the third arg, R10) leads to a loss of the high voltage sensitivity of the native repeat III, as well as an increased tendency for dimerization. The double ala substitution, S4L45(III)A9,15, which replaced both the ile at position 9 and the leu at position 15 with alanines, led to a complete loss of voltage dependence, yielding either a "leaky" ohmic conductance, a moderate voltage-dependent current at high thresholds for concentrations higher than with the native peptide, or a quasi-ohmic conductance after an abrupt transition; see Figure 21.13.24 The voltage sensitivity Ve, the voltage increment producing an e-fold change in macroscopic conductance, was greater than 20 mV for S4L45(III)A9,15, compared to 6.0 mV for the wild type. As with the four native repeats of the electric eel,25 the three modified voltage sensors were subjected to a systematic investigation of the secondary structure (mainly, the helical content) as a function of solvent polarity and thus of the dielectric constant of the medium. Ellipticity in all cases declined sharply, especially between 40 and 20% propanol-2. Measurements of helical contents for the analogues as a function of solvent polarity suggest that steric hindrance of an α-helix motion modulates the stability of the channel with respect to an electric perturbation, either increasing or decreasing the helix stability depending on the position of the residue on the helix.26
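The voltage sensitivity Ve can be read as the parameter of an exponential conductance–voltage relation, g(V) ∝ exp(V/Ve). A short Python sketch makes the comparison concrete; the exponential form is the standard limiting-slope reading of an e-fold sensitivity, not a fit reported here, and 20 mV is only the stated lower bound for the mutant.

```python
import math

def conductance(v_mV, Ve_mV, g0=1.0):
    """Macroscopic conductance that changes e-fold for every Ve millivolts."""
    return g0 * math.exp(v_mV / Ve_mV)

# Ve = 6.0 mV (wild type) versus Ve = 20 mV (lower bound for the A9,15 peptide)
for label, Ve in (("wild type (Ve = 6 mV)", 6.0), ("A9,15 (Ve = 20 mV)", 20.0)):
    gain = conductance(30.0, Ve) / conductance(0.0, Ve)
    print(f"{label}: a 30 mV step scales the conductance {gain:.1f}-fold")
```

With these numbers, a 30 mV depolarization scales the wild-type conductance by e⁵ ≈ 148, but the doubly substituted peptide by at most e^1.5 ≈ 4.5, which is what "loss of high voltage sensitivity" means quantitatively.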
4.4. Whole channel experiments

Experimental replacement of branched with unbranched residues was done in whole sodium channels by Saïd Bendahhou and colleagues, who found that the substitution of two uncharged hydrophobic residues in domain III of the S4 segment produced changes in inactivation properties. Branched sidechains were replaced with unbranched sidechains in the helical S4 segment of domain III of the human skeletal muscle voltage-gated sodium channel hSkM1. Sodium currents were measured at room temperature in human embryonic kidney (HEK293) cells for the wild type and two mutations of neutral amino acids adjacent to arginine residues, L1131A and L1137A. Segment S4 of domain III of hSkM1 has five positive charges, K1 and R2–R5, separated from one another by two neutral amino acids. To study the role of neutral residues in the movements of the voltage sensors during gating, alanines were substituted for the leucines adjacent to the R3 and R5 residues, L1137A and L1131A. For mutation L1131A, the rate of fast inactivation was slower than in the wild type, steady-state fast inactivation was shifted towards hyperpolarizing potentials, the mutant channels deactivated more slowly and they recovered from the fast inactivated state more rapidly. Alteration of the gating charge was shown by changes in the slope of the inactivation curve. In contrast, the L1137A currents exhibited inactivation kinetics similar to those of the wild type. These data show that an uncharged, nonpolar residue with a branched sidechain in the DIIIS4 segment plays a critical role in the sodium channel gating process. They also show that activation is coupled to inactivation, not only through domain IV but also through domain III. The substitution of a branched sidechain (leu) with an unbranched one (ala) in DIIIS4 greatly affects the gating properties of sodium channels, as predicted by the ferroelectric liquid crystal model.
These electrophysiological studies on whole channels show a fundamental role for branched sidechains of specific residues that are most likely to be involved in gating.27 By comparison with the findings of Yoshino and Sakurai, these experiments tend to confirm the proposal that membranes containing voltage-sensitive ion channels act as ferroelectric liquid crystals, and that the S4 and other segments change the tilt angle of the channel in the conformational transition that initiates the gating process in voltage-sensitive ion channels. The specific influence on kinetics and ion currents of a replacement of a residue with a branched sidechain by one with an unbranched sidechain was predicted by the ferroelectric liquid crystal model, and can be explained by this model. The explanation is that the branched sidechains help establish and maintain the large polarization of the resting phase of the channel, which is responsible for its molecular excitability. The conformational transition to the ion-conducting activated state is accompanied by a rotation, lengthening and partial unwinding of helices, including the S4 segments. This results in a loss of polarization and a widening of the interloop hydrogen bonds, which become sodium bonds by ion exchange in the presence of enzymatically dehydrated sodium ions. Because of strong interatomic interactions, the hypothesis that the permeant ions are occupying positions normally occupied by hydrogen ions in backbone
hydrogen bonds is sufficient in principle to explain the ionic selectivity of the channel. This is in conformity with the finding that bacterial KcsA potassium channels have a region 1.2 nm long, the selectivity filter, through which ions move without a hydration shell, as discussed in Chapter 13.28
5. NEW DATA, NEW MODELS

We have seen two approaches to understanding the structure–function relationship in voltage-sensitive ion channels. Briefly stated, one is induction from experiment and the other, deduction from physical principles. Neither by itself can suffice; both are needed to solve the problem of molecular excitability. Let us briefly sample a few recent experimental results.

5.1. Amino acids dissociate from the α helix

Studies on the Shaker K+ channel have shed some light on the unwinding of the α helix. Experimental results by S. K. Aggarwal and Roderick MacKinnon indicate that voltage activation involves the displacement of positively charged amino acid residues of S4 from the intracellular to the extracellular side of the membrane. Four of the positive charges were found to be important in determining the total number of gating charges per channel involved in activation.29 Dorine M. Starace and Francisco Bezanilla demonstrated by histidine-scanning mutagenesis that these four charges move across the entire electric field upon channel opening.30 Assuming that the S3 segment remains fixed, some flexibility is required of the S3–S4 linker segment. When the length of this linker was systematically shortened by Osvaldo Álvarez, Eduardo Rosenmann, Bezanilla, Carlos González and Ramón Latorre, periodic perturbations appeared in the activation time and voltage dependence of the channel activation kinetics.31 When the 31 amino acids defined as the S3–S4 linker were deleted and the segments connected directly to each other, the mutant channels, expressed in oocytes, surprisingly showed voltage-dependent currents. If the S3 and S4 segments had been rigid structures, it would be difficult to visualize conformational changes leading to the opening and closing of such a channel.
While the open probability of the channels with linker removed is comparable to that of the wild type, the activation curve is displaced 45 mV to the right, with the activation time constant nearly 50 times longer and a limiting slope charge only half that of the wild type. In further experiments, mutant channels with the linker residues partially restored were studied. They were characterized by the number, N, of the amino acids restored, counting from the S4 segment. The activation time constant as a function of N displays a periodicity in which the pattern of N = 0 to 3 is repeated in 4 to 6, so that the maxima and minima are shifted by three amino acids; a plot of half activation voltage gave similar results. To explain these results, Álvarez et al. created a mechanical model of the molecular structure of the S4 segment and its linker to S3. The S3 segment was placed 0.3 nm further from the S4 in the open than in the closed channel; see
Figure 21.14. For N = 7, the motion could be accounted for by changing the dihedral angles of the linker backbone. For smaller values of N, amino acid residues have to dissociate and unwind from the α helix. In one effect of the unwinding process, the arginine R362 is dissociated from the S4 segment, resulting in a reduced gating valence.
Figure 21.14. Mechanical models of a series of S4 voltage sensors with partially or totally removed S3–S4 linker segments. The transition from closed to open channel is represented as a 0.3 nm displacement of S4 with respect to S3. The S4 segment appears to unwind and dissociate as the linker is shortened from 7 to 0 amino acids. From O. Álvarez et al., 2005.
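The periodicity in the linker-restoration data is α-helical geometry at work: each added residue advances the S4 attachment point by 360°/3.6 = 100° of azimuth, so equivalent positions recur about every 3.6 residues. A minimal Python sketch of this geometry, with illustrative function names and an arbitrary sinusoid amplitude (not fitted to the data):

```python
import math

DEG_PER_RESIDUE = 360.0 / 3.6  # 100 degrees of azimuthal advance per alpha-helical residue

def attachment_angle(N):
    """Azimuthal position (degrees) of the S4 attachment after restoring N linker residues."""
    return (N * DEG_PER_RESIDUE) % 360.0

def free_energy_fit(N, amplitude=1.0, phase=0.0, offset=0.0):
    """Sinusoid with an angular period of 3.6 residues per revolution (illustrative form)."""
    return offset + amplitude * math.cos(math.radians(N * DEG_PER_RESIDUE) + phase)

# One full turn every 3.6 residues; N = 0..7 sweeps the helix face twice:
print([round(attachment_angle(N)) for N in range(8)])
# → [0, 100, 200, 300, 40, 140, 240, 340]
```

The wrap-around at N = 4 (40° instead of 400°) is why the pattern for N = 0 to 3 recurs, shifted, in N = 4 to 6.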
Computations of the free energy of activation showed that a sinusoidal function with an angular period of 3.6 amino acid residues per revolution fitted the experimental data. These results suggest that the S4 helix is actually four residues longer at the N terminus than previously assumed.

5.2. A twisted pathway in a resting channel

Francesco Tombola, Medha M. Pathak, Pau Gorostiza and Ehud Y. Isacoff studied the conformational switch of the voltage-sensing domain of an engineered mutant Shaker K+ channel. Ions permeating from the extracellular medium into the resting domain, the omega current, follow a curved trajectory along the tilted S4 helix. This twisted pathway contrasts with that of the ions permeating the activated channel, the alpha current, which follows a straight path perpendicular to the membrane.32

5.3. A prokaryotic voltage-sensitive sodium channel

In Chapter 13, Section 14, we saw that the solution in 1998 of the crystal structure of the bacterial channel KcsA opened new avenues of understanding of ion channels. Since that time, the structures of several other prokaryotic channels have been deciphered. A sodium channel, NaChBac, was discovered by Dejian Ren, Betsy Navarro, Haoxing Xu, Lixia Yue, Qing Shi and David E. Clapham in the bacterium Bacillus halodurans. Unlike NaV and CaV channels, it has only a single 6TM domain. Its primary structure shows a pore region surrounded by 2TM segments. Although its sensitivity to blockers is similar to that of L-type CaV channels, its selectivity is for sodium ions. It carries large voltage-activated currents, 1000 to over 10,000 pA, compared to 50 pA for NaV channels. NaChBac is insensitive to TTX.33

5.4. Interactions with bilayer charges

The positively charged S4 helices of voltage-sensitive K+ channels, KvAP, from the archaebacterium Aeropyrum pernix, were found by Youxing Jiang, Vanessa Ruta, Jiayun Chen, Alice Lee and MacKinnon to be positioned at the protein–lipid interface of the channel molecule,34 a result that appears to be consistent with electrostatic energy considerations. The S4 subunit together with part of S3 forms an α-helical hairpin structure called a paddle. A subsequent search by Daniel Schmidt, Qiu-Xing Jiang and MacKinnon for interactions of the S4 helix with the phospholipid heads of the host bilayer showed that the negative charges of their phosphate groups affect channel gating. Embedding the KvAP channel in bilayers of different lipid composition showed that the negatively charged phosphate groups of phospholipids are important requisites of channel activity.35
6. TOWARD A PHYSICAL THEORY OF VOLTAGE-SENSITIVE CHANNELS

In conventional models, the way in which the selectivity filter, the gates, the voltage sensors and other parts operate together remains to be explained.36 The gateless gating model, on the other hand, provides an explanation of the phenomena in which no such artificial divisions are introduced.37 This model has shown its ability to fit data to the ferroelectric Curie–Weiss law, and it is demonstrating its predictive ability in showing key roles for the nonpolar branched residues val, leu and ile. In this chapter we have seen some similarities between a protein-spangled bilayer and a ferroelectric liquid crystal. They suggest links between membrane electrophysiology and the electrodynamics of a two-component smectic layer. Although membrane excitability remains an open problem, we are beginning to have a sense of the direction in which to look. Clearly, there is a vast difference in size and complexity between ferroelectric liquid crystals such as 3M2CPOOB (see Section 1.1 of this chapter) and voltage-sensitive ion channels such as the Na channel. Nevertheless, there are also significant similarities: a core of aromatic rings; chiral centers, including nonpolar amino acids with branched sidechains; and outer hydrocarbon chains. The α-helical subunits are tilted columns similar to those found in columnar liquid crystals such as blue phases. The outward movement of the outer part of the S4 segments is well established, but while some investigators have interpreted this movement as an indication of a rigid screwlike outward movement of the entire S4 segment or a paddlelike movement of the S3–S4 pair, the gateless gating model proposes that the S4 segments expand relative to a fixed internal point (probably near the bulky residue next to the proline kink).
The force driving this outward expansion of the S4 is the mutual repulsion of the positive charges that are functional features of the voltage-sensing S4 segments. The expansion is a relaxation triggered by the lowering, upon depolarization, of the dielectric permittivity due to the influence of the branched nonpolar sidechains val, leu and ile. The hypothesis of expanding S4 segments has the advantage of providing a physical mechanism for the conduction and gating of the ionic current, since the expansion of the S4 segments may result in a twisting and stretching of the pore domain. This stretching implies a widening of the hydrogen bonds, which makes possible their occupation by Na+ in place of H+.38 The gateless gating hypothesis is also consistent with the known properties of ferroelectric liquid crystals with amino acid sidechains. These experiments show that replacement of certain of these residues containing nonpolar branched sidechains with residues containing unbranched sidechains strongly affects activation, inactivation and their coupling in peptides and channels.39 The results are consistent with the predictions of the ferroelectric liquid crystal model.40

6.1. The hierarchy of excitability

Let us review the levels of organization from the excitable membrane to the submolecular components of the ion channel. The excitable membrane is a lyotropic liquid crystal separating the asymmetrical internal and external aqueous media. It is a guest–host system with at least two different sets of ion channels embedded as guests in a lipid bilayer host, one fast and one slow. The membrane supports both a subthreshold electrogenic response and, above threshold, a propagated double current vortex as an electrosoliton action potential.
The principal subunit of the fast (Na or Ca) channel is a single polypeptide with tertiary structure; auxiliary subunits are also present. The slow (K) channel is a tetramer of polypeptides with a quaternary structure. Carbohydrate chains attached at glycosylation sites extend from the voltage-sensitive ion channel into the external and internal aqueous media. Anchoring proteins that attach the ion channel to the cytoskeleton probably play a significant role in fast activation. We have modeled the principal subunit of an ion channel as a polymer ferroelectric liquid crystal of a type much more complex than those hitherto studied in condensed state physics laboratories. In its structure of interlinked columns, it is comparable to a blue phase (see Chapter 17, Section 1.3), but it also exhibits behavior similar to a ferroelectric mesophase undergoing a SmA–SmC* transformation. Channel polypeptides are heteropolymers with aromatic ring residues (phe, tyr and trp) and branched nonpolar residues (val, leu and ile), features that are present in SmC* phases. In addition, most of the transmembrane helices have a helix-breaking pro residue, which also promotes helielectricity. The ion channel has a molecular director with an orientation defined by two angles, polar (θ) and azimuthal (φ). The membrane-spanning helices (S1–S6) form columns with linear membrane-spanning stretches that have their individual tilt orientations. The polypeptide columns bend through π radians, forming intracellular and extracellular loops.

6.2. Block polymers

The different roles of membrane-spanning (s) and loop (l) residues give the channel a structure characteristic of block polymers.41 The residues of a KV molecule may be represented in the form l–l–...–l–[s–s–...–s–l–l–...–l]6, and those of an NaV molecule, {l–l–...–l–[s–s–...–s–l–l–...–l]6}4–l–l–...–l. However, the s’s and l’s are not identical residues as they would be in a conventional block polymer.
Within each set of six membrane-spanning helices, there is one, S4, with a repeat pattern of positively charged amino acid residues (K and R). The repeat pattern, [–K/R–X–X]n, where n is typically about 6, can be viewed as a subblock of the S4 block. Other helices (such as S2) have negatively charged residues (D and E). At given values of temperature and transmembrane field, electrostatic interactions between the charged residues control the pitch of the helices, and therefore also their lengths. Because of the packing and bonding interactions of the transmembrane helices, their motions are coupled, as in elastomers. The orientations and lengths of the S4 and other charged segments determine the overall configuration of the channel molecule. The ion channel is capable of existing in at least two relatively stable conformations: an open (or relaxed) configuration with pore-domain segments of large pitch, in which the interloop H bonds of the helices are transiently occupied by permeant cations, and a closed (or compressed) configuration with narrow pitch, in which the H bonds are occupied by H atoms. The model assumes that the directors of the pore-domain segments are roughly parallel to the membrane normal (as in SmA) in the open, ion-conducting configuration, and tilted (as in SmC*) in the closed configuration.
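The [–K/R–X–X]n repeat described above can be located in a one-letter sequence with a simple regular-expression scan. A sketch; the test string is a hypothetical S4-like stretch constructed for illustration, not a sequence quoted in the text:

```python
import re

def longest_kr_repeat(seq):
    """Longest run of the motif: K or R followed by two arbitrary residues."""
    runs = re.findall(r"(?:[KR][A-Z]{2})+", seq)
    return max(runs, key=len) if runs else ""

# Hypothetical S4-like stretch with a basic residue at every third position:
s4_like = "AILRVIRLVRVFRIFKLSRHSE"
run = longest_kr_repeat(s4_like)  # "RVIRLVRVFRIFKLSRHS"
n = len(run) // 3                 # 6 repeats, the typical value cited above
```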
CHAPTER 21
6.3. Coupling the S4 segments to the electric field
Recent experimental research indicates that the voltage-sensitive ion channel has an approximate fourfold symmetry (see Figure 21.10A) and is composed of

• a compact pore domain, roughly square in cross section, which acts as the ion conductor, and
• four voltage sensor domains that contain the S4 segment, are attached internally and externally to the pore domain, and move in response to changes in the external electric field.
The voltage sensor domains have been reported to consist of the S3 and S4 segments, with S1, S2, S5 and S6 comprising the pore domain. More recently the sensor domains have been said to consist of S1–S4, with S5–S6 as the pore domain.42 The movable sensor domains have been referred to as paddles, a term that suggests a rigid pole supported at one end and free at the other, flattened, end. This term appears to be misleading, as the sensor domain must be flexible and attached to loops at both ends. Since the sensor domains have been found to be attached to two neighboring surfaces at opposite ends of the pore domain, the motion of the sensor domains is capable of exerting net torques and forces on the pore domain.

Consideration of the effect of a ferroelectric–nonpolar phase transition on the dielectric permittivity (Sections 1.1 and 2.1 of this chapter) suggests the following possible mechanism of channel opening in response to a depolarizing voltage step: The channel at resting potential is in a metastable, ferroelectric phase. In this phase the dielectric permittivity ε is very high, say ~100, so that by Equation 21.2.1 the mutual repulsions of the positively charged arg and lys residues of the S4 segments in the sensor domains are low. The sensor domains therefore probably adhere closely to the pore domain in this phase, with their positions mainly determined by elastic forces and torques. The pore domain then may be twisted and relatively short. As the external electric field is reduced to its threshold value, a transition to a nonpolar phase will cause the dielectric permittivity ε to drop sharply to a value typical of ordinary proteins, say ~4. In consequence, the forces driving the positive sidechains of the S4s apart will increase drastically, by a factor of the order of 25.
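Since the Coulomb force between two fixed charges scales as 1/ε, the permittivity drop multiplies the repulsion by the ratio of the two permittivities. A numerical sketch of the factor-of-25 estimate; the permittivities are the illustrative values from the text, and the ~1 nm charge separation is an assumed round figure, not a measured channel parameter:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m
Q = 1.602e-19     # elementary charge, C

def coulomb_force(r, eps_r):
    """Repulsive force between two elementary charges a distance r apart
    in a medium of relative permittivity eps_r (SI units)."""
    return Q * Q / (4.0 * math.pi * EPS0 * eps_r * r * r)

r = 1.0e-9                             # assumed ~1 nm between S4 charges
f_resting = coulomb_force(r, 100.0)    # ferroelectric phase, eps ~ 100
f_depolarized = coulomb_force(r, 4.0)  # nonpolar phase, eps ~ 4
ratio = f_depolarized / f_resting      # 100/4 = 25, the factor in the text
```

Note that the ratio is independent of the assumed separation, so only the permittivity estimates matter for the factor of 25.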
As a consequence of the mutual electrostatic repulsions of the positive charges (which dominate over the small number of negative charges), the sensor domains will become lengthened and bow outward, away from the pore domain. These forces will be transmitted to the pore domain at the attachment regions, probably causing the pore domain to untwist and lengthen. The fourfold symmetry of the channel structure is broken by the pattern of the proline residues, which are found in three of the four S4 segments (see Section 4.1 of this chapter), and by the unequal numbers of positively charged residues in the S4 segments. The asymmetrical steric and charge pattern must result in an asymmetrical force distribution on the segments when the channel is in its phase of low dielectric permittivity. As a result, the fourfold conformational symmetry may be lost in the open channel.
The conformational alteration in which the sensor domain is lengthened will expand the H bonds of its transmembrane helices, creating sites for permeant ions. As the equilibrium between H+ and (in a sodium channel) Na+ shifts to favor the Na+, protons diffuse out of the membrane and permeant ions, having shed their hydration shells, diffuse in to occupy the sites vacated by the H+. The diffusion of the permeant ions forms defects in the channel structure, consisting of interloop H bonds with H replaced by Na. At a sufficiently high density, the occupancy of the permeant ions in H-bond sites reaches a critical value, and these defects form an ion-conducting pathway across the membrane. The permeant ions percolate through the translocation region and become rehydrated in the other aqueous medium. Boundary effects, including repulsion by positive charges in the cytoplasmic region of the channel, can impede the cation avalanche, stopping the ion current. Divalent ions play an important role in ion percolation in Na+ or K+ channels, as discussed in Section 6.3 of Chapter 6. Neurotoxins such as external TTX in the sodium channel can pin the ferroelectric phase, reversibly preventing a transition to the nonpolar, open configuration.

6.4. A new picture is emerging

The study of voltage-sensitive ion channels stands at the intersection of physics, chemistry and biology. These macromolecules are evolutionary adaptations of the laws of physics to biological communication within and between cells in living organisms. A clearer concept of the relation between structure and function in these ion channels may help in the understanding of ion-channel diseases. However, further studies are needed to complete the models proposed here and to verify or disprove their experimental implications.

NOTES AND REFERENCES

1. K. Yoshino and T. Sakurai, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc, N. A. Clark, S. T. Lagerwall, M. A. Osipov, S. A. Pikin, T. Sakurai, K. Yoshino and B. Zeks, Gordon and Breach, 1991, 317-363.
2. Hideo Takezoe and Yoichi Takanishi, in Chirality in Liquid Crystals, edited by Heinz-Siegfried Kitzerow and Christian Bahr, Springer, New York, 2001, 251-295.
3. L. Dupont, M. Glogarová, J. P. Marcerou, H. T. Nguyen, C. Destrade and L. Lejcek, J. Phys. II France 1:831-834, 1991.
4. N. A. Clark and S. T. Lagerwall, in Ferroelectric Liquid Crystals: Principles, Properties and Applications, edited by J. W. Goodby, R. Blinc, N. A. Clark, S. T. Lagerwall, M. A. Osipov, S. A. Pikin, T. Sakurai, K. Yoshino and B. Zeks, Gordon and Breach, 1991, 1-97; G. Andersson, I. Dahl, W. Kuczynski, S. T. Lagerwall, K. Skarp and B. Stebler, Ferroel. 84:285, 1988.
5. R. Pindak, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 235-252.
6. H. R. Leuchtag, Biophys. J. 66:217-224, 1994.
7. F. Bezanilla, S. A. Seoh, D. Sigg and D. M. Papazian, in From Ion Channels to Cell-to-Cell Conversations, edited by Ramón Latorre and Juan Carlos Sáez, Plenum, New York, 1997, 3-19. With kind permission of Springer Science and Business Media.
8. H. R. Guy and P. Seetharamulu, Proc. Natl. Acad. Sci. USA 83:508-512, 1986.
9. H. R. Guy and S. R. Durell, in Ion Channels, Volume 4, edited by Toshio Narahashi, Plenum, New York, 1996, 1-40. With kind permission of Springer Science and Business Media.
10. P. E. Cladis and W. van Saarloos, in Solitons in Liquid Crystals, edited by Lui Lam and Jacques Prost, Springer, New York, 1992, 110-150.
11. Xin-yi Wang (also Wang Xin-yi), Phys. Lett. A 112:402, 1985; Xin-yi Wang, Phys. Rev. A 32:3126-3129, 1985.
12. H. R. Guy and S. R. Durell, 1996, op. cit. With kind permission of Springer Science and Business Media; S. R. Durell and H. R. Guy, Biophys. J. 62:238-247, 1992.
13. S. K. Tiwari-Woodruff, C. T. Schulteis, A. F. Mock and D. M. Papazian, Biophys. J. 72:1489-1500, 1997.
14. P. Mueller and D. O. Rudin, Nature 217:713-719, 1968.
15. M. S. P. Sansom, Progr. Biophys. Molec. Biol. 55:139-235, 1991.
16. Reprinted by permission of Data Trace Publishing Company. H. Duclohier, O. Helluin, P. Cosette, A. R. Schoofs, S. Bendahhou and H. W. Wróblewski, Chemtracts–Biochemistry and Molecular Biology 10(3):189-206, Copyright 1997.
17. O. Helluin, P. Cosette, P. C. Biggin, M. S. P. Sansom and H. Duclohier, Ferroel. 220:329-341, 1999.
18. V. J. Auld, A. L. Goldin, D. S. Krafte, W. A. Catterall, H. A. Lester, N. Davidson and R. J. Dunn, Proc. Natl. Acad. Sci. USA 87:323-327, 1990.
19. G. A. Lopez, Y. N. Jan and L. Y. Jan, Neuron 7:327-336, 1991.
20. O. Helluin, M. Beyermann, H. R. Leuchtag and H. Duclohier, IEEE Trans. Dielectrics and Electrical Insulation 8:637-643, 2001.
21. O. Helluin, S. Bendahhou and H. Duclohier, Eur. Biophys. J. 27:595-604, 1998.
22. Helluin et al., 2001.
23. M. Noda, S. Shimizu, T. Tanabe, T. Takai, T. Kayano, T. Ikeda, H. Takahashi, H. Nakayama, Y. Kanaoka, N. Minamino, K. Kangawa, H. Matsuo, M. A. Raftery, T. Hirose, S. Inayama, H. Hayashida, T. Miyata and S. Numa, Nature 312:121-127, 1984.
24. Helluin et al., 2001.
25. Helluin et al., 1999.
26. Helluin et al., 2001.
27. S. Bendahhou, L. J. Ptácek, H. R. Leuchtag and H. Duclohier, Biophys. J. 80:229a, 2001; H. R. Leuchtag, S. Bendahhou and H. Duclohier, Biophys. J. 80:234a, 2001.
28. D. A. Doyle, J. M. Cabral, R. A. Pfuetzner, A. Kuo, J. M. Gulbis, S. L. Cohen, B. T. Chait and R. MacKinnon, Science 280:69-76, 1998.
29. S. K. Aggarwal and R. MacKinnon, Neuron 16:1169-1177, 1996; S. A. Seoh, D. Sigg, D. M. Papazian and F. Bezanilla, Neuron 16:1159-1167, 1996.
30. D. M. Starace and F. Bezanilla, J. Gen. Physiol. 117:469-490, 2001.
31. Osvaldo Álvarez, Eduardo Rosenmann, Francisco Bezanilla, Carlos González and Ramón Latorre, in Pumps, Transporters, and Ion Channels, edited by Francisco V. Sepúlveda and Francisco Bezanilla, Kluwer Academic/Plenum, New York, 2005, 93-101. With kind permission of Springer Science and Business Media.
32. Francesco Tombola, Medha M. Pathak, Pau Gorostiza and Ehud Y. Isacoff, Ann. Rev. Cell Devel. Biol. 22:23-52, 2006.
33. Dejian Ren, Betsy Navarro, Haoxing Xu, Lixia Yue, Qing Shi and David E. Clapham, Science 294:2372-2375, 2001.
34. Youxing Jiang, Alice Lee, Jiayun Chen, Vanessa Ruta, Martine Cadene, Brian T. Chait and Roderick MacKinnon, Nature 423:33-41, 2003; Youxing Jiang, Vanessa Ruta, Jiayun Chen, Alice Lee and Roderick MacKinnon, Nature 423:42-48, 2003.
35. Daniel Schmidt, Qiu-Xing Jiang and Roderick MacKinnon, Nature 444:775-779, 2006.
36. B. A. Yi and L. Y. Jan, Neuron 27:423-425, 2000.
37. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999.
38. G. Zundel, Ferroel. 220:221-242, 1999.
39. Lopez et al., 1991; Helluin et al., 2001; Bendahhou et al., 2001.
40. H. R. Leuchtag and V. S. Bystrov, Ferroel. 220:157-204, 1999; H. R. Leuchtag, Ferroel. 236:23-33, 2000.
41. Karl R. Amundson, in Electrical and Optical Polymer Systems, edited by Donald L. Wise, Gary E. Wnek, Debra J. Trantolo, Thomas M. Cooper and Joseph D. Gresser, Marcel Dekker, New York, 1998, 1079-1139.
42. Vladimir Yarov-Yarovoy, David Baker and William A. Catterall, Proc. Natl. Acad. Sci. USA 103:7292-7297, 2006.
INDEX
alkaloid 77, 78 alkalosis 321 all-or-none law 66, 426 allosteric transitions 267-269 in hemoglobin 268 in ion channels 268 α subunit, see principal subunit α-methylalanine 498 alpha helix 256, 257, 458, 468, 469, 471, 490, 505 dipole moment of 454 helical wheel diagram of 258 hydrogen bond in 256, 257 in ion channels 268, 459 ion transport in 318 kink, at proline residue 297 structure of 453, 455 tilt angle of 496 vibrations of 458 amide group 255, 268, 469 amino acid 17, 22, 34, 41-44, 59-61, 78, 94, 253-256, 258, 273-275, 277, 282, 283, 286, 292, 295, 297, 299, 305, 308, 369, 453 alpha carbon of 43, 253-255 sidechains 254, 461, 486, 496, 504 sidechains aromatic 254 sidechains branched 254, 483, 484, 486, 496, 497, 500, 504 sidechains charged 254, 277, 281, 305, 469, 491, 501, 505 sidechains hydrophilic 260 sidechains hydrophobic 254, 258, 260, 263, 274, 280, 484
a priori model 326 acceptor 12, 313, 314, 317, 468, 472 access resistance 200, 236, 303 accessibility 287, 444 accommodation 80 acetylcholine receptor muscarinic 36 nicotinic 36, 38, 59, 244, 264, 265, 307 aconitine 77, 78, 247, 285 action current 49, 50, 65, 85 action potential, propagated 178, 182, 351, 447 activation 78, 97, 129, 176, 178, 181, 186, 188-190, 213, 216, 235, 236, 244, 247, 249, 276, 281, 282, 285-288, 291, 295, 308, 310, 311, 321, 351, 377, 443, 476, 478, 481, 494, 497, 498, 500, 501, 503-505 energy 121, 133, 319, 413, 491 active site 261-263, 268-269 active transport 351, 446 admittance, complex 205, 207, 214, 364 agatoxin 276, 291 aggregation model 322, 323 agonist 77, 250, 295 aib, see aminobutyric acid Airy equation 154 Airy functions 154 alamethicin 297, 298, 307, 496 alkali metals 12, 13, 75, 265, 304, 306, 314-316, 438 alkaline earths 12, 13, 265, 316
sidechains unbranched 498, 497, 500, 504 aminobutyric acid 297, 498 amphiphilic 7, 98, 272, 429 ampulla of Lorenzini 36 anchoring 408, 479, 480, 505 anion selectivity 293 antiferroelectric 323, 365, 367-369, 372, 376, 408 antiresonance 215 approximation, constant field 56, 60, 146, 147, 150, 162, 164, 167, 446 aquaporin 459 arginine 254, 285, 287, 292, 443, 492, 500, 502 Armstrong model 309 Arrhenius equation 137, 364 asymmetry currents 217 see also gating currents ataxia 284, 292 ATP-sensitive potassium channel 447, 448 autocorrelation 226 function 223, 224, 226 time 224, 227 autocrine regulation 289 autowaves 424 auxiliary subunit 279, 282, 505 avalanche 329, 330, 333, 337, 430, 477, 480, 483, 506 axolemma 4, 7, 25, 53, 85, 415, 451 axon 4-8, 31, 33-38, 5-58, 63-73, 75-77, 79-81, 84-86, 120, 123-125, 141, 142, 145, 149, 150, 159-162, 166, 173, 176, 180-186, 188, 195, 200-206, 214-218, 230-23, 235-240, 282, 283, 285, 288, 291, 311, 320, 321, 323, 371, 375, 376, 380, 381, 415, 421, 424, 428, 451, 471
axonal transport 33, 44, 451 axosomes 53 bacterial ion channel 267, 272, 298 bacteriorhodopsin 448, 449, 469 ball-and-chain model 311 “ball” domain 281, 282 Barkhausen pulses 305, 366 barrier-and-well model 309, 446 batrachotoxin 77, 78, 247, 285 bent configuration 412, 497 steric dipole 412 beta alumina 133, 336 beta barrel 259, 263, 297 beta sheet 256, 259, 260, 262, 263, 450 betaine calcium chloride dihydrate (BCCD) 369, 370 bilayer 7, 8, 31, 39, 43, 59, 60, 98, 12, 139, 174, 187, 221, 245, 246, 267, 269, 272-274, 293, 296-299, 313, 318, 324, 352, 425, 426, 429, 452, 459, 492, 496-499, 503-505 binding site, toxin 275 bioelectricity 47, 51 biological tissues, 375 piezoelectric 374 pyroelectric 374 biophysics 19, 45, 53, 57, 66, 98, 179, 246, 312, 429 biosynthesis, channel 282 birefringence 84, 85, 243, 312, 32, 355, 376, 380, 402, 437 induced 380 black lipid membrane 245 block polymers 505 block, channel 76, 174 blue phase 388, 390, 391, 402, 504, 505 Boltzmann’s law 340
bond covalent 13, 44, 92, 136, 267, 268, 314, 472 dative 313, 314 hydrogen 13, 17, 136, 256, 257, 259, 260, 265, 267, 269, 306, 308, 313, 314, 319, 368, 369, 445, 446, 453-456, 469, 471, 472, 474-477, 483, 490, 491, 500, 501, 504 ionic 13, 347 peptide 44, 253, 256, 453, 461 pi 255 secondary 313, 314 sigma 92 boson 92, 93 condensation 323 brain memory, ferroelectric model 375 breathing mode 418, 424, 506 Brownian motion 222 Burgers equation 168, 169, 182, 378, 417 forced 378 homogeneous 170 C=O vibration, stretching 456 cable equation 53, 123-125 cable theory 118, 123, 178, 200 Caenorhabditis elegans 17, 279 calamitic liquid crystals 388 calcium channels, 7, 8, 12, 74, 256, 264, 288-291, 443 L-type 289-291 N-type 289-291 P/Q-type 289 R-type 291 structure of 290 T-type 291 voltage-gated (VLG Ca) 276, 288, 290, 291 capacitance 54, 84, 121-125, 133, 175, 181, 182, 187, 196, 201, 213, 217, 235, 236, 306, 323, 323, 371, 380, 401, 447, 448
constant phase angle 206, 211, 447 membrane 84, 125, 174, 195, 199, 200, 206, 209, 215, 380, 381 capacity dimension 334 carbohydrate 8, 14, 41-45, 98, 131, 136, 378, 469, 505 carrier 190-192, 210, 227, 268, 296, 303, 315, 321, 418, 422 also see transporter back transport in 191 cascade 308, 321 catastrophe theory 183-185, 470 ferroelectric 425 model of action potential 184 cell lines, mammalian 273 cell cardiac 26, 38 culture 30 glandular 38, 276 pyramidal 31, 35 cellular excitability 3, 15, 45, 293 channel response, statistical 322 channelopathy 29, 277 chaos theory 330 chaperones 260, 282 characteristic functions 102, 103 characteristic length 125, 436 charge density 117, 151, 306, 422, 472 charged residues, replacement with neutrals 492 chemical potential 105, 106, 126, 127, 435 chemistry, organometallic 313 chemoreceptors 26, 32, 36, 39 chemosensitivity, bacterial 21 chimera, see heteromeric channel Chinese hamster ovary 273 chiral smectic 401, 407, 408, 411, 412, 422, 437, 428, 466-468, 484-486 chirality 40-42, 363, 388, 391, 399, 399, 409, 422, 465, 483, 484, 492, 494, 495
chiriquitoxin 76 chloride channel 12, 275, 293, 294 ligand-gated 295 voltage-gated 293, 294 cholesteric phase 389, 390, 394, 397, 423, 424 cholesterol 389, 429 chord conductance 174, 176 chromophore 271 circuit 50, 58, 68, 72, 79, 122, 124, 125, 133, 174, 175, 179, 183, 187, 196, 197, 200, 202, 205, 214, 215, 217, 221, 224, 225, 227, 235-238, 356, 415, 420 circuit model of membrane admittance fit of 235 fluctuation spectra of 230 circular dichroism 437 spectroscopy 498 Clausius–Clapeyron equation 105, 362 closed system 10, 24, 99, 101, 108-110 cockroach giant axon 230 coherence 338, 352 coherent behavior 343, 351 colchicine 321 Cole–Cole analysis 206-207, 211-212, 371, 447, 466-468 Cole–Moore shift (effect) 180, 323, 364 collective behavior 10, 93, 323 collision coupling 39 columnar liquid crystals 391, 504 complex system 18, 271, 330, 333, 338, 343 complexity 1, 18, 25, 28, 34, 35, 39, 45, 75, 94, 95, 129, 329, 330, 337, 446, 50 compounds heterocyclic 247 polycyclic 247 computer simulation 186, 187, 307, 427 Monte Carlo 308 condensed matter 89, 92, 95, 113, 115, 310, 323, 387
conductance 8, 54, 58, 61, 65, 74, 75, 78, 79, 83, 84, 120, 122, 123, 125, 134, 174-177, 179, 180, 184, 187-189, 192, 195, 200-202, 205, 213-215, 234236, 243, 246-251, 269, 292, 293, 303, 304, 307, 3087, 320-322, 372, 376, 382, 445, 447, 448, 459, 465, 477, 478, 483, 483, 491, 499 functions, probability interpretation of 180 kinetics 250 pore 303 single-channel 249, 261, 303 time variation of 189 voltage dependence of 187 conduction band 119, 469 conduction channels, in Nasicon 134 conduction speed 34, 49, 80 conductivity 2, 120, 122, 129-137, 162, 210, 317, 318, 321, 324, 330, 332, 336, 337, 364, 371, 372, 374, 404, 431, 434, 436, 469 fractal 336 nonlinear 120 conotoxin 276, 285, 291 constant phase capacitance 195, 235, 236, 371, 401, 447 cooperative motion (behavior) 308, 319, 476, 506 cooperative phenomenon 335, 347 coordination number 76 correlation time 226, 253 correspondence principle 94, 111 Coulomb’s law 115-117 coupling, of activation and inactivation 288 covariance 234 critical exponent 332, 333, 349, 350, 371, 433, 447 critical phenomena 19, 103, 113, 206, 329, 431, 436, 443, 445
critical point 103, 105, 106, 329, 332, 343, 348-350 critical slowing down 330, 350, 371, 372 critical state 329, 337 self-organized 36, 330, 337 crystal ferroic 132 ionic 132, 133, 208, 306, 420 cultured cells 240, 245 Curie constant 357, 368 Curie point 357, 363, 367, 371, 376-378, 380-382 see also transition temperature Curie–Weiss law 357, 361, 371, 380, 381, 504 current density 120, 126, 127, 143, 144, 146, 155-157, 159, 160, 165, 167, 178, 181, 311 current separation 159, 174 current A-type 281 alpha 503 omega 503 current–voltage curve (relationship, characteristic) 60, 69, 150, 159, 161, 162 equal concentrations, potassium ion 161 curtain effect 368 cyclic nucleotide gated channel 36, 286, 295 cysteine 254, 255, 281, 287, 444, 471 substitution 287 cystic fibrosis transmembrane conductance regulator (CFTR) 293, 307 cytochalasine B 321 cytoskeleton 31, 430, 505 database, protein 260 deafness 284 debye (dipole moment unit) 208, 409
Debye equation 209, 211, 364 Debye length 152 Debye’s law 350 Debye–Hückel treatment 306 defect interstitial 132 line 96, 131, 390 point 96, 131, 347 vacancy 132 deformation bend 395, 405, 406 splay 405 twist 395, 404, 408, 422 degeneracy 112, 113, 350, 413 degenerins 292 degree of freedom, orientational 392 dehydrating ion 303 dehydration of ions, enzymatic 477 dehydration–solvation region 496 “delicate phases” of matter 98, 387 delocalization 433 dendrite 31, 33-35, 37, 39 density, of channels 76 deoxyribonucleic acid 14-17, 22, 43-45, 51, 59, 129, 253, 255, 260, 264, 272, 273, 275, 334, 374, 392, 443 depolarization 5-7, 12, 37, 53, 54, 65, 66, 71, 73, 76, 83, 176, 184, 215, 234, 236, 244, 247, 249, 250, 278, 282, 283, 287, 289, 291, 294, 295, 311, 319, 321, 324, 353, 376-378, 380, 382, 415, 426, 427, 430, 443, 445, 449, 451, 452, 466, 471, 475-477, 479, 480, 483, 488, 491, 495, 504, 506 desensitization 22, 292 desolvation–hydration region 496 desorption 217 deterministic 11, 222, 228, 229, 343, 477 dielectric anisotropy 402, 403, 406
constant 54, 116, 129, 321, 325, 364, 365, 371, 383, 402, 484-486, 491, 498, 499 dispersion 209 loss 210, 217, 236, 364 permittivity 61, 129, 170, 209, 210, 212, 325, 360, 364, 370, 371, 373, 377, 402, 449, 450, 467, 468, 491, 506 relaxation 206, 207, 209, 364, 370, 371, 468 susceptibility tensor 404 dielectrophoresis 129 differential equation dispersive 50 nonlinear 50, 156, 182, 183, 318 diffusion 50, 79, 135, 136, 139, 140, 142-144, 157, 170, 182, 303, 307, 310, 319, 321, 336, 345, 351, 423, 429, 431, 436, 506 dimer 255, 260, 279, 290, 316, 323, 450, 451, 499 dipolar electric field gating 322 dipole electric 13, 322, 347, 357, 374, 408, 430, 454 magnetic 339, 340, 342, 346, 347 moment 13, 117, 129, 130, 180, 208, 257, 306, 319, 324, 355-357, 368, 374, 391, 404, 405, 408, 409, 411, 430, 454, 455, 460, 461, 469, 484 dipole–dipole interaction 365, 426, 456, 458 dipole–ion interaction 426 director tilt 390 see also tilt director wave 416 disclination lines 390 discotic liquid crystals 388 disease Alzheimer’s 31, 277 Becker’s 294 Dent’s 294
kidney 294 Parkinson’s 277 prion 277 Thompsen’s 294 dispersive wave 169 displacement current 78, 168, 180, 217, 366, 380 positively charged residues 286 vector 111, 117, 151, 369, 393 dissipative effects 50 dissipative regime 329 disulfide bridge 136, 255 divalent ion 66, 73, 74, 80, 132, 162, 192, 268, 283, 506 DNA, complementary 59 DOBAMBC 409, 411, 484 domain ferromagnetic 343 mode 467, 468 wall 132, 348, 358, 371, 380 donor 12, 313, 314, 317 donor–acceptor interaction 313, 314 double layer, electric 129, 131 drift 38, 119, 129, 139-141, 143, 157, 268 drift–diffusion, see electrodiffusion 139 Drosophila melanogaster 17, 277 duplication, gene 273 effectors 14, 21, 25, 26, 35, 38-40, 45, 66, 266, 276, 507 Einstein relation 143, 325 elastomer 136, 137, 437, 438, 466, 505 electret 128, 129 electric organs 38, 47, 244, 245, 294 electric polarization waves 323 electric potential 31, 52, 118, 119, 142, 154, 266, 311, 344 electrochemical potential energy 5 electrochiral effect 408 electroclinic coupling 484, 486
electroclinic effect 391, 408, 486 electrodiffusion (theory) assumptions of 61 boundary conditions 169 classical 149, 151, 159, 162, 163, 169, 170, 174, 324, 325, 331, 435 electrodominant solution 158 exact solution 155 initial conditions 169 ions of different charges 164 ions of the same charge 163 multi-ion 142, 163 osmodominant solution 158 steady state 151 time-dependent 163, 166, 170, 182, 416 electrodynamics 9-11, 187, 504 electroelastic model 465, 466 electrogenic response 345, 505 electrokinetic 129, 130 electrolyte 13, 53, 69, 75, 81, 136, 191, 336 solid 133, 336 electron microscopy 244, 375 electron transfer chain 468 electronegativity 13, 313, 318, 319, 472 electroneutrality 129, 131 electrophysiology 47, 58, 63, 65, 68, 173, 504 electroplax 26, 28, 266, 293 electrorotation 130 electrostriction 217, 382 electrotonic response 68, 426 ellipticity 499 emergent property 18, 45, 140, 272, 330 energy levels, distribution of 341 ensemble canonical 109 Gibbs 108-110 grand canonical 109 enthalpy 338
entropy production 351, 352 entropy signal transduction 24 enzymatic unit of channel 478 enzyme as channel model 478 enzyme, restriction 273 epilepsy 277, 284, 292 equation of continuity 167, 168, 325 equation of state 154, 349, 359, 360 caloric 101 nonlinear dielectric 360 thermal 99 equations, van der Pol type 425 equilibrium 4, 7, 14, 54, 56, 83, 97, 99, 101, 103-106, 108-110, 132, 135, 142, 144, 156, 164, 166, 174, 176, 181, 184, 196, 221, 227, 268-270, 323, 335, 337-341, 346, 350-352, 359, 361, 363, 382, 399, 422, 424, 426, 446, 447, 454, 506 Dirichlet 169 electrical 144, 156, 164 Neumann 169 stable 195, 196 unstable 195 ergodic assumption 224 eukaryote 14, 16, 30 evolution 2, 7, 16, 21, 23, 25, 28, 36, 45, 76, 97, 224, 260, 271, 272, 276, 325, 337, 343, 351, 352, 435 excitability cycle 276 restoration of 478, 480 excitability-inducing materials 245 excitable membranes, noise in 221, 230, 231, 364 excitation collective 97, 366, 454-456 threshold 80 excitation–contraction coupling 291 exciton 132, 269, 456-458 expectation value 222
expression system 273 extrinsic (improper) ferroelectricity 359 farad 121 fermion 92, 93 ferroelastic 132, 133, 359, 369, 376 ferroelectric channel unit 375, 378, 379 displacive 369 hydrogen-bonded 367, 368, 439 order–disorder 364, 368, 448 ferroelectric–superionic transition model 61, 140, 330, 347, 355, 357-359, 367, 368, 375-377, 380, 383, 391, 409, 415, 437, 438, 448-452, 465, 484 ferroic effect 358, 374 ferromagnetic 61, 103, 342, 343, 349, 350, 420, 433 FFP 467, 468 field, electric 4, 5, 9, 10, 21, 26, 36, 52, 61, 64, 73, 83, 96, 103, 115-123, 129, 120, 132, 140, 142, 146, 151, 155, 159, 160, 167, 169, 179, 180, 186, 207-209, 217, 222, 243, 268, 269, 272, 275, 277, 287, 294, 301-303, 305-307, 309-311, 318, 319, 321, 322, 324, 325, 337, 344, 347, 357-360, 362, 364, 366, 368, 370, 376, 377, 380-382, 390, 391, 402-406, 408, 413, 415, 416, 422, 428-431, 443-446, 449-452, 460, 465, 472, 474, 475, 481, 486, 488, 491, 501, 506 flagella 321-23, 31, 451 flexibility, conformational 383 flexoelectric effect 405, 406, 408 converse 430 flip-flop mechanism 322 fluctuation analysis 221, 230, 235
fluctuations 42, 58, 113, 186, 203, 217, 221, 222, 224-227, 229-231, 233-235, 238, 240, 243, 249, 253, 301, 318, 321, 324, 329, 335, 343-346, 349, 350, 352, 364, 366, 370, 394, 397, 400, 424, 436, 446, 447, 457, 484, 487 flux 54, 57, 72, 110, 127, 142-144, 149, 170, 303, 304, 307, 310, 344, 352, 446, 451 diffusion 142, 143 experiments, with radioactive ions 304 migration 143 net 143 focusing of energy 421 forcing function 169 four-helix bundle 260-262 Fourier analysis 96, 97, 203, 204, 225, 227 Fourier coefficient 198, 203, 225, 226 Fourier series 198, 203, 225 Fourier transform 198, 199, 203-205 fractals 231, 334, 335, 337 dimension 335 fractional power relations 243 Frank coefficient 394, 395 Fredericks transition 403, 404, 408, 479 free energy 109, 310, 324, 341, 352, 358, 362, 367, 373, 394-396, 400-402, 404, 411, 427, 503 free-standing films 416, 438, 487 frequency domain 202, 203, 212, 213, 217, 236, 271, 371 frequency plot, admittance and noise 239 fullerene 318 functional order 14, 351, 445 fungal ion channel 297
G protein 36, 39, 40, 265, 278, 290, 291, 446, 507 GASH, see guanidinium aluminum sulfate hexahydrate gating 58, 73, 78, 85, 179, 180, 192, 206, 217, 234, 250, 265, 276, 277, 286, 287, 294, 299, 301, 305, 306, 308, 312, 318, 322, 443, 453, 460, 465, 466, 477-479, 492, 496, 497, 500-504, 506 charges 180, 286, 287, 311, 444, 492, 500, 501 current 58, 78, 179, 180, 217, 277, 287, 305, 306, 380, 444, 447 measurement of 180 Gauss's law 117, 146, 147, 151, 167, 168, 325 gene eag 284 elk 284 erg 284 HERG 284 genetic engineering 59, 273, 281 genetics 16, 59, 272, 281, 334 Gibbs (energy) function 102-104 elastic 360 glia 31 globin fold 262 glutamate residue 277 glutamine 254, 286 glycine 34, 41, 44, 103, 253, 255, 256, 258, 281, 293, 363, 369, 376 glycoprotein 5, 7, 8, 42, 44, 221, 244, 253, 266, 378, 452 glycosylation 44 Goldman–Hodgkin–Katz equation 75, 165, 166, 170, 177, 279, 446 Goldstone (spin) mode 409, 467, 468, 484, 485, 494, 506 grain boundaries, in elastomers 137 gramicidin 302, 303, 307 ground potential 4, 118, 126
guanidinium aluminum sulfate hexahydrate 357, 383 guanidinium group 76, 383 guest phase 429 guest–host system 313-315, 505 H+-gated cation channels 292 H5 region 274, 281, 284 hair cell 26, 36, 85, 375, 430 Hamiltonian operator 90, 111, 365 hapto number 316 harmonics, generation of 217 heat generation 84 HEK, see human embryonic kidney helical structure 51, 388, 422, 450, 453, 459 helicoidal arrangement, in cholesterics 389 helielectricity 409, 429, 484, 505 helielectric molecules 492 helix unwinding 406 helix-coil transition 321, 454, 460 heterologous expression 273 heteromeric channel 281 histidine 254, 287, 444 histidine-scanning mutagenesis 501 Hodgkin–Huxley model, linearized 57 hole 192, 245 homology, structural 273, 295 hopping 309, 310, 318-320, 382, 436, 465, 475, 476, 478, 483 hopping, ion 319 host phase 136, 429 host–guest complex 314 human embryonic kidney 273, 281, 500 human genome 334 hydrogen bond heteroconjugated 474 homoconjugated 474 proton potential of 475 widening of 471, 504 hydropathy 258 analysis 273, 274 index 274
hydrophilic 7, 43, 98, 100, 191, 258, 260, 266, 274, 284, 437, 461, 478 hydrophobic 7, 13, 39, 43, 60, 98, 100, 253, 254, 258, 260, 263, 266, 273, 274, 280, 281, 437, 479, 484, 498 hydrophobicity 60 hyperpolarization-activated cation channel 294 hyperpolarization-activated channel 286, 294 hyperthermia 292 hypokalemic periodic paralysis 292 hypothesis 42, 49, 53, 57, 60, 140, 191, 200, 301, 304, 305, 312, 377, 381, 382, 443, 452, 491, 496, 500, 504 hysteresis 81, 358, 363, 364, 366, 383, 391, 401, 449, 451, 467 dielectric 358 thermal (temperature) 80, 323, 372, 448 impedance 26, 54, 125, 129, 187, 195, 199-201, 203, 205, 214, 217, 225, 227, 235-237, 326, 447, 466 membrane 54, 55, 69, 199, 201, 216, 230, 232, 447 inactivation 6, 37, 78, 85, 176, 182, 186-190, 213, 234-236, 243, 247, 249, 276, 281, 282, 285, 287, 288, 291, 308, 311, 320, 321, 478-480, 492, 497, 498, 500, 504 as a surface interaction 479 C-type 281, 479 fast 188, 190, 234, 281, 285, 288, 309, 311, 500 gate 311 N-type 479 slow 234, 483 independence principle 74, 175 inductance 121, 122, 196, 201, 202
induction, electric 207, 360, 362, 402 information flow 44, 45 information processing 12, 21, 24, 27, 31, 32, 34 infrared radiation 26, 37, 322, 347, 356 injury current (current of injury) 49, 51, 52, 183 instability 68, 344, 352, 393, 404, 415, 416, 421, 422, 452, 491 convective 345 dynamic 451 electrohydrodynamic 344, 345 localized 345, 421 Rayleigh-Bénard 343 instability, notch 345 interdisciplinary science 19 internal perfusion 58, 64, 68-70, 80, 215 International System (of units) 115 inversion 342, 347, 399 inward rectification 69, 282, 283 inward rectifier 278, 279, 282, 283 ion avalanche 477, 480, 506 ion channel 1-3, 5, 7-10, 12, 14, 16, 25, 27, 29, 31, 34, 36-39, 44, 45, 47, 51, 58-61, 74-76, 84, 89, 92-95, 98, 113, 115, 116, 118, 122, 130, 131, 134, 135, 137, 140, 163, 173, 179, 187, 190, 192, 195, 214, 217, 221, 228, 229, 235, 239, 240, 243-246, 248-250, 253, 254, 263, 265, 267-269, 271-273, 275-277, 281, 282, 286, 288, 292-299, 301, 303-307, 313, 314, 318-322, 324, 329, 334, 335, 337, 355, 368, 377, 382, 383, 387, 393, 401, 408, 412, 415, 424, 425, 427-430, 433, 435-437, 439, 443-454, 458-461, 465, 466, 469-471, 475, 477-479, 483, 484, 491, 492, 496, 497, 500, 501, 503-507
bacterial 267, 272, 298 diversity of 271 quantum mechanics and 94 structure of 59, 273, 321 voltage-gated 275, 276, 295, 475 voltage-sensitive 1-3, 7-9, 13, 44, 45, 47, 59, 75, 94, 113, 115, 116, 131, 140, 163, 179, 192, 195, 217, 221, 239, 243, 244, 253, 263, 265, 267-269, 271, 272, 276, 277, 282, 288, 297, 299, 301, 303, 305-307, 313, 314, 318, 329, 355, 368, 383, 393, 408, 412, 415, 425, 428, 435, 437, 443, 445-450, 452, 453, 458, 459, 465, 466, 468-471, 475, 477, 483, 491, 492, 497-501, 504-507 ion exchange 138, 320, 322, 470, 477, 500 ion gas 476 ion kinetics, separation of 189, 308 ion pump 7, 265 ion(s), calcium 7, 34, 54, 76, 116, 159, 170, 174, 177, 245, 264, 269, 276, 288, 369, 370 lithium 12, 56, 285 number density of 141 potassium 4, 6, 7, 57, 71, 144, 145, 161, 167, 173, 176, 190, 200, 215, 278, 285, 299, 306, 315, 316, 342, 424, 472 quaternary ammonium 76, 77, 311 sodium 4-7, 37, 54, 56-58, 69, 72, 73, 75, 79, 81, 92, 116, 133, 135, 166, 173, 182, 190, 191, 200, 284, 306, 309, 311, 315, 316, 342, 353, 471, 476-478, 480, 500, 503 ionomer 439 Ising ferromagnet 332 Ising model 346, 348, 364, 368, 372
isoform 282 isoleucine 254, 281, 286, 483-486, 498 isotope substitution D2O for H2O 321 isotope tracer studies 57 isotropic liquid 98, 108, 387, 394, 423 jelly roll 260, 263 KCa channel 260, 263 KcsA channel 276, 281 kinetic functions linear 176, 478 nonlinear 176 kinetics, slowing of 321 kink 297, 366, 391, 416, 420, 422, 427, 428, 475, 488, 490, 493-495, 497, 504 kink solution 420 Kir, see inward rectifier Korteweg–de Vries equation 50, 416-418 Lambert–Eaton myasthenic syndrome 291 Landau’s theory 349, 452 Langevin equation 228, 229 Langmuir-Blodgett film 245, 448, 452 leakage current 174 leucine 22, 23, 254, 275, 286, 483, 484, 498, 500 life, broken symmetries of 422 Lifshitz invariant 395, 397, 400 ligand-gated ion channel 2, 36, 244, 276 light scattering 84, 85, 235, 322 linear system 197, 198, 235 linker region 292, 478 lipid 1, 7, 8, 14, 39, 42-45, 55, 77, 89, 98, 121, 139, 245, 246, 265-267, 305, 313, 319, 322, 324, 377, 393, 416, 430, 437, 439, 450, 465, 494, 503
bilayer 7, 8, 39, 60, 98, 122, 139, 221, 245, 267, 273, 297-299, 318, 429, 452, 492, 496-499, 505 phase, function of 408 liposome 245, 247, 296 liquid crystal biaxial 406 calamitic 388 cholesteric 389, 395, 406, 437 columnar 391, 504 discotic 388 ferroelectric 61, 377, 408-410, 412, 413, 415, 449, 465-467, 479, 483, 484, 491, 492, 494-497, 500, 504, 505 lyotropic 100, 437, 505 model 324, 500, 504 nematic 394, 402-405, 416, 421, 422, 479 smectic 391, 396, 411, 422, 429, 465, 484 surface-stabilized 460, 480 thermotropic 388 uniaxial 394 localization 420, 433 long QT syndrome 284 loop region 259, 260, 262, 263, 266 lyotropic 54, 98, 100, 388, 416, 437, 439, 465, 487, 505 lysine 254, 285, 483, 492 macroscopic 1, 25, 60, 78, 89, 99, 108, 112, 121, 139, 140, 173, 186, 192, 208, 209, 222, 214, 229, 243, 248, 249, 251, 312, 322, 336, 338, 342-344, 350, 351, 358, 389, 392, 395, 401, 433, 438, 439, 444, 447, 452, 470, 478-480, 498, 499 magnetoreception 36 magnon 346 Markov process 224, 225 Maxwell relations 102, 107 Maxwell–Boltzmann distribution 110
mean-field theories 186, 349, 446 measurement area, minimizing 239 mechanical 9, 21, 31, 36, 37, 42, 52, 55, 68, 79, 85, 86, 92, 97, 103, 108, 109, 117, 119, 132, 143, 243, 276, 302, 304, 305, 310, 311, 322, 323, 326, 329, 337, 348, 356, 359, 374, 380, 387, 393, 419, 420, 430, 432, 438, 446, 458, 465, 483, 491, 501, 502 mechanism, gating 305, 322, 497 mechanoreception 86 mechanoreceptors 26, 36, 86, 243, 430 mechanosensitive 292, 459 membrane data, comparison with 149 membrane excitability models of 173, 352, 425 models of Hodgkin and Huxley 173 membrane potential difference 64, 124, 160, 249 membrane swelling 85, 86, 243, 269, 322, 471 membrane–cortex model 320 mesogen, lyotropic 98, 437 mesophase 95, 98, 387, 388, 396, 429, 439, 465, 479, 494, 505 metal-insulator transition 433 metallomesogen 438, 439 metalloprotein 263-265, 382, 472, 476, 477 metastable state 104, 309, 319, 323, 359, 428, 491 Mg2+, rectification effect of 283 micelle 98, 99, 245, 313 microfilament 31, 321 micropipette 246 microstate 224 microtubule 31, 33, 321, 448, 450, 451 assembly 321, 451 dynamic instability of 451 ferroelectricity in 450
in submembranous region 451 proposed role in brain function 451 microwave irradiation, effect on channel function 281 migraine 277, 292 minK 282, 284 mirror charges 491 mitochondrial channel 295 mobility electrical 119 mechanical 119, 143 model gated pore 301, 302, 305, 326 two-state 35, 248, 250 modular channel 322 modulated structure 389, 393, 397, 401 molecular biology 22, 272, 273, 277 molecular dissection 496 molecular dynamics 307 molecular excitability 1, 2, 7, 19, 45, 47, 192, 447, 477, 500, 501 moment first (mean) 223 second 223, 226 momentum operator 111 mutation 15, 16, 29, 43, 59, 190, 272, 274, 275, 277, 281, 284, 288, 291, 294, 308, 479, 480, 496, 498, 500 myelin 4, 34, 55 sheath 4, 50, 65, 231, 267 myopathy 292 myotonia 288, 294 Myxicola giant axon 321 NaChBac channel 503 nematic 56, 98, 108, 387, 389, 391, 394-397, 401-406, 408, 416, 421-423, 467, 479, 487, 494 Nernst–Planck equation 53, 126, 127, 142, 144, 146, 147, 151, 164, 167, 224, 323, 325
nerve impulse 1-5, 25, 26, 49, 50, 53, 72, 183-185, 324, 416, 424, 427 nervous system 2, 15, 25, 27, 29, 31, 34, 38, 45, 51, 66, 285 central 2, 24, 25, 27, 29, 31, 24, 244, 281, 291, 292 neural nets 35 neurofilament 31, 85 neuromuscular junction, noise analysis of 231 neuron 3, 4, 17, 25, 26, 30-39, 53, 66, 70-72, 166, 167, 221, 228, 289-292, 294, 295, 334 neurophysiology 15, 28, 276 neurotransmitter 2, 25, 32, 34, 35, 38, 44, 289, 291, 293, 376 night blindness 292 node (of Ranvier) 34, 50, 65-67, 75, 85, 185, 188, 230-232, 234 voltage clamp of 232 noise 1/f (flicker) 231 fractal 231 nonconducting conformations 277 nonlinear dynamics, effects of noise on 228 nonlinear Schrödinger equation 418, 422, 455 nonlinear system 190, 197, 228, 229 nonstationary noise 234 normal modes 195, 197 normal rolls 421 nuclear magnetic resonance 272, 373 nucleic acid 13, 14, 17, 34, 43, 45, 136, 273, 375 nucleophilic site 472 numerical methods 146, 420 off-diagonal long range order 323 Ohm’s law 119, 120, 123, 224, 249 oocyte 273, 283, 292, 501 open system 10, 14, 24, 39, 103, 110, 329, 336, 343, 362, 443, 447
optical 18, 27-29, 55, 79, 84, 86, 97, 210, 243, 312, 320, 322, 326, 380, 391-393, 399, 405, 418, 419, 421, 423, 437-439, 465, 471, 486 order parameter 133, 134, 330, 348-350, 359, 367, 394, 396, 399-401, 407 complex 422 order emergence from disorder 326 evolution of 343, 351 long-range 95, 96, 323, 338, 345, 361, 487 orientational 356, 392, 394, 396, 423, 487 spontaneous 342, 345, 347, 443 short-range 95, 96, 487 organic conductor, planar 317 orientational order 356, 392, 396, 423, 487 overshoot 65, 73, 173, 183, 200 P loop 291 P region 274, 283, 478, 496 see also H5 region, P loop pacemaker cells 70, 71 paddle 470, 503 pain receptors 26 pair correlation function 393 parabolic relationship, E–N 157 paramyotonia congenita 287, 288 pararesonance 215, 216 Parseval identity 225, 226 partial differential equation 50, 169, 174, 182, 416 Hodgkin and Huxley 178, 179, 182, 187, 189, 416 partition coefficient 141 partition function 109, 111, 112, 145, 165, 350 patch clamp(ing) 59, 139, 221, 232, 240, 245-247, 249, 272, 273, 447
patch 6, 73, 139, 174, 200, 228, 230, 240, 246, 251, 296, 479 gigaseal 246 inside-out 246 measurement from 230 on-cell 246 outside-out 246 whole-cell 246 pathway, conductive 320 pendulum equation 196 peptaibol 297 peptide 78, 247, 255, 256, 276, 292, 296, 318, 377, 453-455, 457, 458, 460, 469, 496, 497, 499 peptide bond, see bond, peptide peptide selective channel 296 peptide strategy 496, 498 perception 27-29, 292 percolation 137, 192, 336, 415, 430-437, 476, 478, 491, 506 directed 435, 436, 483 directed threshold 431-434, 436 perikaryon 31, 35, 44 periodic activity 294 permeability 54, 57, 69, 74, 75, 79, 142, 143, 165, 166, 173, 186, 192, 200, 247, 285, 289, 294, 304, 310, 321 ionic 326 selective 39 permeation 78, 115, 192, 301, 312, 321, 345, 353, 437, 453, 459, 427, 483, 496 permittivity (dielectric) 61, 129, 170, 209, 210, 212, 325, 357, 364, 370, 371, 373, 377, 402, 449, 450, 467, 468, 491, 506 complex 209, 210, 371, 407 of free space 115 relative 116, 151, 169, 208 pharmacology 70, 275, 276 phase diagram 105, 106, 332, 396, 397, 399, 467 phase pinning 382, 477, 507
(phase) transition (transformation) 10, 18, 55, 60, 61, 81, 89, 93, 96, 98, 101-104, 108, 112, 132-134, 136, 265, 306, 308, 322-324, 329, 332, 337, 344, 347, 349, 350, 352, 357, 359, 364-367, 369, 371, 376-379, 382, 383, 392, 394, 398-401, 407, 416, 422-424, 433, 448, 450-452, 460, 465, 470, 486-488 assisted 353 field-induced 401 first order 103, 133, 422, 467 second order 106, 133, 362, 401 phenomenological approach, limitations of 192 phenylalanine 254, 275, 286, 479 phonon 97, 346, 347, 366, 369, 418, 457, 458, 469 phosphate group, of phospholipids 503 photon 11, 24, 26, 37, 85, 92, 94, 227, 346, 347, 352 photoreceptors 26, 30, 227, 271, 295 phytochrome 267, 271 piezoelectricity 52, 355, 356, 358, 375-377, 405, 407 pitch, helix 480 Planck’s constant 10, 11, 90, 94, 310 plot 106, 133, 155, 195, 199, 200, 206, 207, 212, 214, 216, 217, 236-238, 256, 257, 310, 333, 335, 427, 447, 467, 485, 487, 501 Bode 206, 207 Cole–Cole 206, 207, 447, 467 pnp configuration 322 Poisson–Boltzmann equation 306 polariton 469 polarization 10, 52, 84, 103, 117, 121, 122, 125, 130, 132, 207-209, 306, 319, 324, 349, 355-359, 361-369, 374, 377, 380,
383, 391, 399, 403-409, 411, 413, 420, 422, 427, 428, 449, 467, 470, 479, 483-486, 488, 489, 491, 496, 500 atomic 208 electric 10, 52, 103, 117, 129, 207, 323, 347, 357, 404, 405 electronic 208 ionic 208 persistent electrical 129 polarized states 404, 405 polaron 132, 420, 421, 456, 470 as a quantum soliton 470 polychromatic 437 polymorphism 96, 387 poly-γ-benzyl-L-glutamate 392 population inversion 342 pore aqueous 78, 137, 180, 244, 245, 298, 303, 304, 306, 307, 312, 318 aqueous cylindrical 192, 303 aqueous functional 192, 307 aqueous inadequacy of model 312 aqueous long 305 aqueous mechanically gated 326 aqueous preformed 329 aqueous structural 301, 305, 307, 312, 326 aqueous walls of 304 structural versus functional 192 porin 265, 297, 306, 307 postsynaptic membrane 32, 34, 244, 376 potassium channel 5, 7, 8, 12, 59, 76-78, 170, 186, 248, 268, 275, 277-280, 282, 284, 292, 298, 306, 319, 323, 447, 448, 478, 492, 501 calcium-activated 264, 276, 278 delayed rectifier 280 inward rectifier 278, 284 leak 278 voltage-gated 264, 278, 294, 298, 307
potassium tartrate tetrahydrate 61, 103, 357, 366, 367 potential action 3-7, 33, 34, 36-38, 45, 50-58, 65-69, 72-76, 78, 79, 81, 84-86, 125, 140, 166, 173, 174, 178, 179, 182-186, 189, 195, 200, 216, 230, 235, 245, 276, 277, 284, 285, 288, 291, 304, 321, 322, 338, 345, 351, 366, 375, 376, 380, 415, 421, 422, 426, 427, 429, 447, 471, 505 bistable 229 depolarizing 181 hyperpolarizing 181, 500 reversal 74, 75, 159, 174, 181, 201, 249, 251, 295, 476 power law 206, 329, 331-333, 335, 349, 401, 434, 435, 447 prehyperpolarization 488 presynaptic inhibition 289 Prigogine’s version of the second law of thermodynamics 351 principal subunit 36, 278-281, 285, 288, 295, 496, 497 probability density function 135, 222 conditional 222 probability 5, 15-17, 24, 45, 86, 91-93, 110, 178, 180, 222-225, 229, 243, 250-252, 277, 278, 281, 320, 366, 393, 434, 436, 492, 501 channel closing 251, 252 channel opening 243, 249, 251, 252, 277, 287, 296, 445 procaine 76, 77 prokaryote 14, 30 proline 253, 297, 412, 445, 475, 490, 492, 495-497, 504 protein–lipid interface 503 proteins biosynthesis of 45, 255, 267 crystallization of 266
membrane 8, 14, 38, 39, 44, 76, 187, 255, 265-267, 282, 320, 429, 430, 435, 452 membrane-spanning 1, 265, 266, 309 receptor–transducer 22 transitions in 96, 267 proteolysis 479 protomer 352, 353 proton 12, 13, 75, 93, 111, 131, 133, 143, 146, 265, 268, 287, 292, 308, 319, 321, 367-369, 371-373, 377, 439, 444, 468, 469, 471, 472, 474-477, 483, 492, 493, 506 access 287 cascade 321 transfer 469, 474 protonic conductivity 431 pseudospin 348, 366, 369, 370, 383 psn-junction model 323 pyroelectricity 355-358, 374, 375, 391, 437 Q10 79 quantum mechanics 9, 11, 32, 42, 89-92 quantum tunneling 93 quasicrystals 97 quasiparticle 94, 318, 338, 345, 346, 470 Ramachandran plot 256, 257 ramp clamp 69, 70, 81, 84 random phase approximation 224, 369 random walk 22, 310, 331 self-avoiding 331 rate constant 252, 310 reactance 200, 202, 236 capacitive 206, 236 inductive 201 reaction–diffusion process 436 reaction–diffusion system 182 receptor, organometallic 314, 472
receptor-coupled modulation 276 recognition 26, 44, 286, 313-316, 324, 466 reconstitution of channels 245 rectification ratio 149, 162 reductionism 19 refractory period 6, 50, 66, 72 absolute 50, 66 relative 50 regeneration 51, 52 regulatory subunit 275, 290 relaxation methods 83 relaxation time 83, 109, 212, 238, 30, 350, 364, 371, 467, 468 relay mechanism 376, 472, 477, 483 renormalization group 332, 333, 350 repetitive action potentials (firing) 66, 70-71, 78, 80, 186-187, 287, 321, 426-427 replacement of hydrogen ions by metal ions 475 residue number 274 response 2, 3, 10, 21, 25-27, 34, 38, 39, 50, 58, 66, 68-70, 72, 77, 81, 83-86, 122, 130, 133, 166, 174, 182, 186, 189, 200, 207, 209, 214, 215, 217, 218, 228, 230, 248-250, 271, 275, 276, 288, 292, 294, 296, 320, 322, 337, 345, 355, 367, 370, 375, 383, 387, 393, 405, 406, 426, 427, 430, 438, 445-447, 471, 480, 484, 488, 489, 505 ionotropic 34 metabotropic 34 resting potential 6, 12, 53, 57, 58, 65, 69, 74, 78, 79, 82, 86, 122, 144, 173, 184, 187, 217, 234, 243, 269, 276, 277, 304, 319, 379, 381, 382, 428, 452, 467, 480, 489, 491, 504 reverse transcriptase 273 reversible process 102, 126, 128 rheobasic current 381 ribonucleic acid 13, 14, 17, 38, 43-45, 129, 254, 255, 264, 272, 273
robust property 272 Rochelle salt, see potassium tartrate tetrahydrate rolls, parallel 344 rotation 22, 23, 38, 112, 130, 133, 208, 255, 260, 267, 268, 350, 373, 391, 394, 396, 398, 407, 409, 443, 444, 469, 471, 488, 494, 496, 500 domain 324 electric-field vector 324 rotatoelectric 438 S3–S4 linker segment 501, 502 S4 segments 59, 277, 281, 285-288, 308, 444, 445, 460, 462, 471, 476, 477, 483, 489-491, 493-497, 500-502, 504, 505 saltatory conduction 34, 50 sample functions 223, 225 saxitoxin 76, 77, 285 scaling law 168, 331-335, 433-436 Schrödinger equation 89, 90, 94, 111, 112, 350, 418, 422, 455 screw-helical (gating) 443, 444, 488 second messenger 37, 39, 40, 290, 291, 295 segment tilt 494 segments, membrane-spanning 59, 131, 274, 283, 285, 459, 468, 471, 472, 475, 477, 480, 483, 492, 495 selectivity 73, 85, 137, 206, 265, 280, 281, 293, 295, 301, 302, 304, 306, 312, 326, 459, 472, 475, 496, 501, 503 selectivity filter 299, 302-304, 306, 312, 318, 496, 501, 504 self-organization 313, 424 self-organized chemical model 422, 426 self-organized criticality 36, 330, 337 self-organized waves 422 self-similarity 334, 335, 448 in channel current 447, 448
semicircle, Cole–Cole 206, 211, 212, 301, 355, 371, 447 semiconductor model, ionic 323 semiconductor, ferroelectric 373, 374, 380 Shaker gene 277 Shaker mutation 59, 277 Shaw gene 281 shot noise 227 siemens 120 signature sequence 281, 299 sine-Gordon equation 419, 420 single file 303, 305 single-channel pulse 59, 249 singularities 156, 158, 159, 170, 404, 487, 488 site-directed mutagenesis 274 size 15, 25, 28, 40, 66, 102, 108, 120, 139, 176, 186, 200, 222, 272, 304, 323, 335, 349, 421, 431, 433, 448, 449, 451, 452, 465, 504 as a parameter 452 in ferroelectrics 452 of channel 471 of ion 134, 394, 326, 472 phase transition driven by 452 slow wave, unstable 178 smectic 56, 98, 108, 387, 391, 394, 395, 397-401, 407-409, 411, 422, 429, 438, 465, 479, 484, 486, 487, 494, 504 A 387, 394, 396-398, 405, 407, 408, 416 C 61, 388, 399, 400, 407, 408, 411, 416, 422, 437, 438, 450, 467, 468, 484-488, 494 sodium channel 5-8, 12, 59, 72, 75-77, 85, 108, 170, 178, 180, 181, 194, 217, 234, 235, 247, 268-270, 276, 285, 288, 302, 304, 307, 309, 319, 320, 377-380, 437, 472, 496-498, 500, 503, 506 molecular weight of 12 voltage-gated (VLG Na) 285, 500
voltage-sensitive 5, 38, 76, 247, 284, 285, 378, 379, 496, 497, 503 soft mode 357, 366, 367, 409, 450, 467, 468, 484, 486, 494, 506 soliton action potential as 422, 427 as a nonlinear excitation 416 electric-field-induced 422 in alpha helix 459 molecular 415, 455 polarization 427 rigorous 182 topological 366, 420, 457 space clamp 58, 67, 68, 457 space quantization 340 spectral analysis 225 spectral density function 227 1/f 232 current 227 Lorentzian 232 multiple Lorentzian 232 of frog node 231 voltage 227 spherical symmetry 306 spike, see action potential spin 13, 42, 91, 130, 345-347, 409 gas 339-342, 345 spine, dendritic 97 splay 394, 395, 404-406, 429 spontaneous electrical pulses (Barkhausen pulses) 305, 366 spontaneous polarization 132, 355-359, 361-363, 365-368, 374, 377, 380, 391, 399, 403, 407-409, 411, 467, 484, 485, 491, 496 squid axon 54, 58, 65-67, 69, 70, 73, 76, 79-81, 84, 86, 125, 145, 149, 150, 159-161, 173, 176, 180, 183, 185-187, 195, 200, 201, 206, 214-218, 230, 232, 233, 235-240, 282, 283, 285, 321, 323, 371, 380, 381, 451
stability 11, 80, 104, 186, 343, 392, 397, 457, 469, 489, 499 stationary process 224 statistical laws 16, 221, 222 stochastic 5, 221-226, 228, 230, 240, 243, 307, 381, 382, 431, 433, 436, 445, 458, 476, 479 stochastic stimulation of axons 230 Streptomyces lividans 298 structure coherent 344 dissipative 344, 351-353, 443 primary 59, 256, 260, 273, 285, 503 quaternary 256, 277, 505 secondary 256, 259, 260, 455, 499 tertiary 256, 260, 505 superconductivity 318, 330 superionic conductor (conduction) 48, 115, 132-137, 192, 307, 336, 371, 372, 451, 466, 476 elastomer 136 in hydrogen-bonded crystals 133 ion channel 137 polymer 136 sodium ion 133 supramolecular aggregates 45 supramolecular array 313, 314 surface charge 4, 121, 129, 255, 380, 490, 492 surface stabilization 460 susceptibility, dielectric 208, 404 switches conformational 503 ionic 132 switching 8, 132, 182, 229, 267, 281, 316, 358, 359, 366, 379, 391, 392, 422, 438, 448, 466, 486, 488, 489, 491 symmetry broken (breaking, break in) 40, 96, 349, 394, 399, 423, 483 fourfold 279, 298 mirror 262, 407
spatial 416, 422, 423 time reversal 422, 423 synapse 7, 26, 27, 31-34, 38, 44, 97 cholinergic 32, 244 excitatory 26, 33 inhibitory 26, 33 synaptic transmission 34, 293 synergetics 352 system, far from equilibrium 14, 110, 323, 351, 352 tail currents 69, 291 temperature critical 103, 318, 330, 342, 347-350, 366, 370, 376 heat-block 79, 243, 355, 382 negative 338, 342 transition 96, 98, 133, 330, 332, 355, 357, 362, 364, 366, 367, 369, 371, 372, 374, 380, 382, 423, 451, 452, 485, 486, 488 temperature jump, effect on conductance 83 terminal (bouton) 31 tetraethyl ammonium 70, 77, 232, 311 tetrameric assembly 279 tetrodotoxin 58, 59, 70, 76, 77, 85, 247, 276, 285, 382 thermal 34, 79, 80, 97, 99, 101, 102, 109, 111, 129, 131, 221, 222, 227, 230, 309, 310, 319, 321-323, 326, 338, 339, 342-344, 346, 347, 349, 350, 352, 362, 372, 387, 394, 397, 400, 424, 446-448, 455-457, 477, 483, 487, 491 equilibrium 4, 99, 221, 227, 323 thermally stimulated discharge 129 thermodynamics 10, 14, 99, 101, 103, 108, 109, 111, 126, 127, 164, 330, 338, 351, 352, 359, 360 first law of 19, 99, 360 irreversible 103
of ferroelectrics 359 second law of 14, 101, 351, 352 third law of 101 thermotropic 98, 388, 438, 439 threshold behavior 68, 329, 458 tilt 319, 377, 390, 399, 408, 409, 411, 460, 461, 465-467, 476, 477, 479, 481, 483, 484, 486-489, 491-496, 500, 505, 506 angle, molecular 409 time domain 69, 202, 203, 205, 217, 222, 271, 449 topology 183, 260, 266, 274, 278, 295, 431, 493, 494, 496, 497 toroids 72, 73, 183 Torpedo marmorata 294 toxicity, metal 264 tracer experiment 247 transcription 17, 264, 271, 294 transfection 273 transition conformational 23, 96, 250, 267, 268, 306, 326, 329, 483, 494, 498, 500 cooperative 323, 477 global 322 phase 10, 18, 55, 60, 61, 81, 84, 89, 93, 96, 98, 101-104, 106, 108, 111, 112, 132-134, 136, 265, 306, 308, 322-324, 329, 332, 337, 344, 347, 349, 350, 352, 353, 357, 359, 362, 364-367, 369, 371, 376-379, 382, 383, 392, 394, 398-401, 407, 416, 422-424, 433, 448, 450-452, 460, 465, 467, 470, 486-488 probability 225, 252 sol–gel 30, 432 structural 113, 370, 372, 447 temperature 96, 98, 133, 330, 332, 355, 357, 362, 364, 366, 367, 369, 371, 372, 374, 380, 382, 423, 451, 452, 485, 486, 488
translation 17, 95, 189, 267, 294, 309, 349 invariance, of crystals 95 transmembrane domains 274, 278, 292 transport phenomena 431 transporter 191, 265, 303 traveling wave 129, 130, 174, 421 trialanine sulfate 363 triglycine sulfate 103, 363-365, 371, 372, 380 tris-sarcosine calcium chloride 369, 370 tubulin 450, 451 tunnel 369, 439 twist waves 422 two-state model neurons 35 ultraviolet 11, 84, 85 radiation 85, 439 uncertainty principle 90, 346 unitary currents 246, 479 universality 332, 333, 375 unwinding, of helices 402, 403, 406, 483, 496, 500, 501 valine 254, 281, 286, 483, 484 Van der Waals’s theory 186, 349 variability 15, 330, 392 variance 95, 330, 392 veratridine 77, 78, 247, 285 vesicle exocytosis 289 vesicle fusion 289, 307 vestibule 299, 302, 303, 478 VLG Na channel 275, 276, 285 see also sodium channel, voltage-gated Nav1.x 285 Nav2.x 285 Nav3.1 285 VLG, see ion channel, voltage-gated voltage clamp 58, 68, 69, 71, 85, 122, 150, 173, 174, 177, 181, 184, 213, 214, 248, 273, 345 voltage-dependent anion selective channel 296
voltage-sensitive ion channel 1-3, 7-9, 16, 44, 45, 47, 59, 75, 89, 94, 113, 115, 116, 140, 163, 179, 192, 195, 217, 221, 239, 243, 244, 253, 263, 265, 267-269, 271, 272, 276, 277, 282, 297, 299, 301, 303, 305-307, 313, 314, 318, 329, 355, 383, 393, 408, 412, 415, 425, 435, 443, 445-450, 452, 458, 459, 465, 466, 468-471, 475, 477, 483, 491, 492, 497, 500, 501, 505-507 vortex pair, action potential as 183 vortex unbinding 350 vortices 183, 344, 345, 350, 351, 416, 419 walls 93, 132, 140, 192, 299, 304, 307, 348, 358, 413, 416, 422, 488
water 10, 13, 21, 26, 30, 39, 43, 50, 52-56, 63, 76, 98, 103, 105, 116, 117, 119, 141, 169, 192, 197, 208, 210, 245, 256, 260, 265, 266, 287, 301, 303, 305-308, 313, 321, 329-332, 343, 348, 377, 415, 416, 418, 419, 421, 423, 439, 461, 462, 469, 472, 477, 491 ion translocation in 461 wave equation 89, 197, 416-418 white noise 227, 228, 231, 335, 337 Wien dissociation effect 321 Wiener–Khinchine theorem 226 zeta potential 131