biological and medical physics, biomedical engineering
For further volumes: http://www.springer.com/series/3740
biological and medical physics, biomedical engineering The fields of biological and medical physics and biomedical engineering are broad, multidisciplinary and dynamic. They lie at the crossroads of frontier research in physics, biology, chemistry, and medicine. The Biological and Medical Physics, Biomedical Engineering Series is intended to be comprehensive, covering a broad range of topics important to the study of the physical, chemical and biological sciences. Its goal is to provide scientists and engineers with textbooks, monographs, and reference works to address the growing need for information. Books in the series emphasize established and emergent areas of science including molecular, membrane, and mathematical biophysics; photosynthetic energy harvesting and conversion; information processing; physical principles of genetics; sensory communications; automata networks, neural networks, and cellular automata. Equally important will be coverage of applied aspects of biological and medical physics and biomedical engineering such as molecular electronic components and devices, biosensors, medicine, imaging, physical principles of renewable energy production, advanced prostheses, and environmental control and engineering.
Editor-in-Chief: Elias Greenbaum, Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA
Editorial Board:
Masuo Aizawa, Department of Bioengineering, Tokyo Institute of Technology, Yokohama, Japan Olaf S. Andersen, Department of Physiology, Biophysics & Molecular Medicine, Cornell University, New York, USA Robert H. Austin, Department of Physics, Princeton University, Princeton, New Jersey, USA James Barber, Department of Biochemistry, Imperial College of Science, Technology and Medicine, London, England Howard C. Berg, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts, USA Victor Bloomf ield, Department of Biochemistry, University of Minnesota, St. Paul, Minnesota, USA Robert Callender, Department of Biochemistry, Albert Einstein College of Medicine, Bronx, New York, USA Britton Chance, Department of Biochemistry/ Biophysics, University of Pennsylvania, Philadelphia, Pennsylvania, USA Steven Chu, Lawrence Berkeley National Laboratory, Berkeley, California, USA Louis J. DeFelice, Department of Pharmacology, Vanderbilt University, Nashville, Tennessee, USA Johann Deisenhofer, Howard Hughes Medical Institute, The University of Texas, Dallas, Texas, USA George Feher, Department of Physics, University of California, San Diego, La Jolla, California, USA Hans Frauenfelder, Los Alamos National Laboratory, Los Alamos, New Mexico, USA Ivar Giaever, Rensselaer Polytechnic Institute, Troy, New York, USA Sol M. Gruner, Cornell University, Ithaca, New York, USA
Judith Herzfeld, Department of Chemistry, Brandeis University, Waltham, Massachusetts, USA Mark S. Humayun, Doheny Eye Institute, Los Angeles, California, USA Pierre Joliot, Institute de Biologie Physico-Chimique, Fondation Edmond de Rothschild, Paris, France Lajos Keszthelyi, Institute of Biophysics, Hungarian Academy of Sciences, Szeged, Hungary Robert S. Knox, Department of Physics and Astronomy, University of Rochester, Rochester, New York, USA Aaron Lewis, Department of Applied Physics, Hebrew University, Jerusalem, Israel Stuart M. Lindsay, Department of Physics and Astronomy, Arizona State University, Tempe, Arizona, USA David Mauzerall, Rockefeller University, New York, New York, USA Eugenie V. Mielczarek, Department of Physics and Astronomy, George Mason University, Fairfax, Virginia, USA Markolf Niemz, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany V. Adrian Parsegian, Physical Science Laboratory, National Institutes of Health, Bethesda, Maryland, USA Linda S. Powers, University of Arizona, Tucson, Arizona, USA Earl W. Prohofsky, Department of Physics, Purdue University, West Lafayette, Indiana, USA Andrew Rubin, Department of Biophysics, Moscow State University, Moscow, Russia Michael Seibert, National Renewable Energy Laboratory, Golden, Colorado, USA David Thomas, Department of Biochemistry, University of Minnesota Medical School, Minneapolis, Minnesota, USA
P.O.J. Scherer Sighart F. Fischer
Theoretical Molecular Biophysics With 178 Figures
123
Professor Dr. Philipp Scherer Professor Dr. Sighart F. Fischer Technical University M¨unchen, Physics Department T38 ¨ 85748 Munchen, Germany E-mail:
[email protected], sighart.fi
[email protected]
Biological and Medical Physics, Biomedical Engineering ISSN 1618-7210 ISBN 978-3-540-85609-2 e-ISBN 978-3-540-85610-8 DOI 10.1007/978-3-540-85610-8 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2010930050 © Springer-Verlag Berlin Heidelberg 2010 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specif ically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microf ilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specif ic statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Cover design: eStudio Calamar Steinen Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Biophysics deals with biological systems, such as proteins, which fulfill a variety of functions in establishing living systems. While the biologist uses mostly a phenomenological description, the physicist tries to find the general concepts to classify the materials and dynamics which underly specific processes. The phenomena span a wide range, from elementary processes, which can be induced by light excitation of a molecule, to communication of living systems. Thus, different methods are appropriate to describe these phenomena. From the point of view of the physicist, this may be Continuum Mechanics to deal with membranes, Hydrodynamics to deal with transport through vessels, Bioinformatics to describe evolution, Electrostatics to deal with aspects of binding, Statistical Mechanics to account for temperature and to learn about the role of the entropy, and last but not least Quantum Mechanics to understand the electronic structure of the molecular systems involved. As can be seen from the title, Molecular Biophysics, this book will focus on systems for which sufficient information on the molecular level is available. Compared to crystallized standard materials studied in solid-state physics, the biological systems are characterized by very big unit cells containing proteins with thousands of atoms. In addition, there is always a certain amount of disorder, so that the systems can be classified as complex. Surprisingly, the functions like a photocycle or the folding of a protein are highly reproducible, indicating a paradox situation in relation to the concept of maximum entropy production. It may seem that a proper selection in view of the large diversity of phenomena is difficult, but exactly this is also the challenge taken up within this book. We try to provide basic concepts, applicable to biological systems or soft matter in general. These include entropic forces, phase separation, cooperativity and transport in complex systems, like molecular motors. We also provide a detailed description for the understanding of elementary processes like electron, proton, and energy transfer and show how nature is making use of them for instance in photosynthesis. Prerequisites for the reader are a basic understanding in the fields of Mechanics, Electrostatics, Quantum Mechanics, and Statistics. This means the book is for graduate students, who want to
VI
Preface
specialize in the field of Biophysics. As we try to derive all equations in detail, the book may also be useful to physicists or chemists who are interested in applications of Statistical Mechanics or Quantum Chemistry to biological systems. This book is the outcome of a course presented by the authors as a basic element of the newly established graduation branch “Biophysics” in the Physics Department of the Technische Universität Muenchen. The authors thank Dr. Florian Dufey and Dr. Robert Raupp-Kossmann for their contributions during the early stages of the evolving manuscript.
Garching, May 2010
Sighart Fischer Philipp Scherer
Contents
Part I Statistical Mechanics of Biopolymers 1
Random Walk Models for the Conformation . . . . . . . . . . . . . . . 3 1.1 The Freely Jointed Chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1.1 Entropic Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.1.2 Force–Extension Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.2 Two-Component Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.2.1 Force–Extension Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.2.2 Two-Component Model with Interactions . . . . . . . . . . . . . 11 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2
Flory–Huggins Theory for Biopolymer Solutions . . . . . . . . . . . 2.1 Monomeric Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Polymeric Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Phase Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.1 Stability Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.2 Critical Coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3.3 Phase Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
19 19 22 26 26 28 30 33
Part II Protein Electrostatics and Solvation 3
Implicit Continuum Solvent Models . . . . . . . . . . . . . . . . . . . . . . . 3.1 Potential of Mean Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Dielectric Continuum Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Born Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Charges in a Protein . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Generalized Born Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
37 37 38 39 40 43
VIII
Contents
4
Debye–Hückel Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Electrostatic Shielding by Mobile Charges . . . . . . . . . . . . . . . . . . 4.2 1–1 Electrolytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Charged Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Charged Cylinder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.5 Charged Membrane (Goüy–Chapman Double Layer) . . . . . . . . . 4.6 Stern Modification of the Double Layer . . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
45 45 46 47 49 52 57 58
5
Protonation Equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Protonation Equilibria in Solution . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Protonation Equilibria in Proteins . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.1 Apparent pKa Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.2 Protonation Enthalpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.3 Protonation Enthalpy Relative to the Uncharged State . 5.2.4 Statistical Mechanics of Protonation . . . . . . . . . . . . . . . . . 5.3 Abnormal Titration Curves of Coupled Residues . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
61 61 64 65 66 68 69 70 71
Part III Reaction Kinetics 6
Formal Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Elementary Chemical Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Reaction Variable and Reaction Rate . . . . . . . . . . . . . . . . . . . . . . 6.3 Reaction Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 Zero-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.2 First-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.3 Second-Order Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 Dynamical Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Competing Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.6 Consecutive Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.7 Enzymatic Catalysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.8 Reactions in Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.8.1 Diffusion-Controlled Limit . . . . . . . . . . . . . . . . . . . . . . . . . . 6.8.2 Reaction-Controlled Limit . . . . . . . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
75 75 75 76 77 77 77 78 79 79 80 82 83 84 84
7
Kinetic Theory: Fokker–Planck Equation . . . . . . . . . . . . . . . . . . 7.1 Stochastic Differential Equation for Brownian Motion . . . . . . . . 7.2 Probability Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3 Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.1 Sharp Initial Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3.2 Absorbing Boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 Fokker–Planck Equation for Brownian Motion . . . . . . . . . . . . . . .
87 87 89 90 91 92 93
Contents
7.5 Stationary Solution to the Focker–Planck Equation . . . . . . . . . . 7.6 Diffusion in an External Potential . . . . . . . . . . . . . . . . . . . . . . . . . 7.7 Large Friction Limit: Smoluchowski Equation . . . . . . . . . . . . . . . 7.8 Master Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IX
94 96 98 98 98
8
Kramers’ Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 8.1 Kramers’ Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 8.2 Kramers’ Calculation of the Reaction Rate . . . . . . . . . . . . . . . . . 102
9
Dispersive Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 9.1 Dichotomous Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 9.1.1 Fast Solvent Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . 110 9.1.2 Slow Solvent Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . 111 9.1.3 Numerical Example (Fig. 9.3) . . . . . . . . . . . . . . . . . . . . . . . 112 9.2 Continuous Time Random Walk Processes . . . . . . . . . . . . . . . . . . 112 9.2.1 Formulation of the Model . . . . . . . . . . . . . . . . . . . . . . . . . . 112 9.2.2 Exponential Waiting Time Distribution . . . . . . . . . . . . . . 114 9.2.3 Coupled Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 9.3 Power Time Law Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Part IV Transport Processes 10 Nonequilibrium Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 125 10.1 Continuity Equation for the Mass Density . . . . . . . . . . . . . . . . . . 125 10.2 Energy Conservation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 10.3 Entropy Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 10.4 Phenomenological Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 10.5 Stationary States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 11 Simple Transport Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 11.1 Heat Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 11.2 Diffusion in an External Electric Field . . . . . . . . . . . . . . . . . . . . . 134 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 12 Ion Transport Through a Membrane . . . . . . . . . . . . . . . . . . . . . . . 139 12.1 Diffusive Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 12.2 Goldman–Hodgkin–Katz Model . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 12.3 Hodgkin–Huxley Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
X
Contents
13 Reaction–Diffusion Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 13.1 Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 13.2 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 13.3 Fitzhugh–Nagumo Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Part V Reaction Rate Theory 14 Equilibrium Reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 14.1 Arrhenius Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 14.2 Statistical Interpretation of the Equilibrium Constant . . . . . . . . 157 15 Calculation of Reaction Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 15.1 Collision Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 15.2 Transition State Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162 15.3 Comparison Between Collision Theory and Transition State Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 15.4 Thermodynamical Formulation of TST . . . . . . . . . . . . . . . . . . . . . 165 15.5 Kinetic Isotope Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 15.6 General Rate Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 15.6.1 The Flux Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 16 Marcus Theory of Electron Transfer . . . . . . . . . . . . . . . . . . . . . . . 173 16.1 Phenomenological Description of ET . . . . . . . . . . . . . . . . . . . . . . . 173 16.2 Simple Explanation of Marcus Theory . . . . . . . . . . . . . . . . . . . . . . 175 16.3 Free Energy Contribution of the Nonequilibrium Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 16.4 Activation Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 16.5 Simple Model Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184 16.5.1 Charge Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 16.5.2 Charge Shift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 16.6 The Energy Gap as the Reaction Coordinate . . . . . . . . . . . . . . . . 187 16.7 Inner-Shell Reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190 Part VI Elementary Photophysics 17 Molecular States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 17.1 Born–Oppenheimer Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195 17.2 Nonadiabatic Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Contents
XI
18 Optical Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201 18.1 Dipole Transitions in the Condon Approximation . . . . . . . . . . . . 201 18.2 Time Correlation Function Formalism . . . . . . . . . . . . . . . . . . . . . . 202 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203 19 The Displaced Harmonic Oscillator Model . . . . . . . . . . . . . . . . . 205 19.1 The Time Correlation Function in the Displaced Harmonic Oscillator Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 19.2 High-Frequency Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 19.3 The Short-Time Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 20 Spectral Diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 20.1 Dephasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 20.2 Gaussian Fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 20.2.1 Long Correlation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 20.2.2 Short Correlation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 20.3 Markovian Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 21 Crossing of Two Electronic States . . . . . . . . . . . . . . . . . . . . . . . . . 219 21.1 Adiabatic and Diabatic States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 21.2 Semiclassical Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 21.3 Application to Diabatic ET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 21.4 Crossing in More Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 22 Dynamics of an Excited State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 22.1 Green’s Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230 22.2 Ladder Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 22.3 A More General Ladder Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 22.4 Application to the Displaced Oscillator Model . . . . . . . . . . . . . . . 240 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Part VII Elementary Photoinduced Processes 23 Photophysics of Chlorophylls and Carotenoids . . . . . . . . . . . . . 247 23.1 MO Model for the Electronic States . . . . . . . . . . . . . . . . . . . . . . . . 248 23.2 The Free Electron Model for Polyenes . . . . . . . . . . . . . . . . . . . . . . 249 23.3 The LCAO Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 23.4 Hückel Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 23.5 Simplified CI Model for Polyenes . . . . . . . . . . . . . . . . . . . . . . . . . . 253 23.6 Cyclic Polyene as a Model for Porphyrins . . . . . . . . . . . . . . . . . . . 253 23.7 The Four Orbital Model for Porphyrins . . . . . . . . . . . . . . . . . . . . 254 23.8 Energy Transfer Processes . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . 256 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
XII
Contents
24 Incoherent Energy Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 24.1 Excited States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259 24.2 Interaction Matrix Element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 24.3 Multipole Expansion of the Excitonic Interaction . . . . . . . . . . . . 262 24.4 Energy Transfer Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 24.5 Spectral Overlap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 24.6 Energy Transfer in the Triplet State . . . . . . . . . . . . . . . . . . . . . . . 268 25 Coherent Excitations in Photosynthetic Systems . . . . . . . . . . . 269 25.1 Coherent Excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 25.1.1 Strongly Coupled Dimers . . . . . . . . . . . . . . . . . . . . . . . . . . . 270 25.1.2 Excitonic Structure of the Reaction Center . . . . . . . . . . . 273 25.1.3 Circular Molecular Aggregates . . . . . . . . . . . . . . . . . . . . . . 274 25.1.4 Dimerized Systems of LH2 . . . . . . . . . . . . . . . . . . . . . . . . . . 280 25.2 Influence of Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285 25.2.1 Symmetry-Breaking Local Perturbation . . . . . . . . . . . . . . 285 25.2.2 Periodic Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 25.2.3 Diagonal Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 25.2.4 Off-Diagonal Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 26 Ultrafast Electron Transfer Processes in the Photosynthetic Reaction Center . . . . . . . . . . . . . . . . . . . . 297 27 Proton Transfer in Biomolecules . . . . . . . . . . . . . . . . . . . . . . . . . . . 301 27.1 The Proton Pump Bacteriorhodopsin . . . . . . . . . . . . . . . . . . . . . . 301 27.2 Born–Oppenheimer Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 27.3 Nonadiabatic Proton Transfer (Small Coupling) . . . . . . . . . . . . . 305 27.4 Strongly Bound Protons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 27.5 Adiabatic Proton Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 Part VIII Molecular Motor Models 28 Continuous Ratchet Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 28.1 Transport Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 28.2 Chemical Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313 28.3 The Two-State Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 28.3.1 The Chemical Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 28.3.2 The Fast Reaction Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 28.3.3 The Fast Diffusion Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 317 28.4 Operation Close to Thermal Equilibrium . . . . . . . . . . . . . . . . . . . 319 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320 29 Discrete Ratchet Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 29.1 Linear Model with Two Internal States . . . . . . . . . . . . . . . . . . . . . 324
Contents
XIII
Part IX Appendix 30 The Grand Canonical Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 30.1 Grand Canonical Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 30.2 Connection to Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 31 Time Correlation Function of the Displaced Harmonic Oscillator Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333 31.1 Evaluation of the Time Correlation Function . . . . . . . . . . . . . . . . 333 31.2 Boson Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334 31.2.1 Derivation of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 334 31.2.2 Derivation of Theorem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 31.2.3 Derivation of Theorem 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 31.2.4 Derivation of Theorem 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 32 The Saddle Point Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Part I
Statistical Mechanics of Biopolymers
1 Random Walk Models for the Conformation
In this chapter, we study simple statistical models for the entropic forces which are due to the large number of conformations characteristic for biopolymers like DNA or proteins (Fig. 1.1). R
H + N H H
Ψ
Cα H
H H N
Φ C
ω
O
O H
C
Cα R
Fig. 1.1. Conformation of a protein. The relative orientation of two successive protein residues can be described by three angles (Ψ, Φ, ω). For a real protein, the ranges of these angles are restricted by steric interactions, which are neglected in simple models
1.1 The Freely Jointed Chain We consider a three-dimensional chain consisting of M units. The configuration can be described by a point in a 3(M + 1)-dimensional space (Fig. 1.2): (1.1) (r0 , r2 , . . . , rM ). The M bond vectors bi = ri − ri−1
(1.2)
have a fixed length |bi | = b and are randomly oriented. This can be described by a distribution function: P (bi ) =
1 δ(|bi | − b). 4πb2
(1.3)
4
1 Random Walk Models for the Conformation b
Fig. 1.2. Freely jointed chain with bond length b
Since the different units are independent, the joint probability distribution factorizes M P (b1 , . . . , bM ) = P (bi ). (1.4) i=1
There is no excluded volume interaction between any two monomers. Obviously, the end-to-end distance R=
N
bi
(1.5)
bi P (bi ) = 0.
(1.6)
i=1
has an average value of R = 0 since R=
bi = M
The second moment is ⎛ R2 = ⎝ =
bi
i
b2i +
i
j
⎞ bj ⎠ =
bi bj
(1.7)
i,j
bi bj = M b2 .
i=j
1.1.1 Entropic Elasticity The distribution of the end-to-end vector is
bi d3 b1 , . . . , d3 bM . P (R) = P (b1 , . . . , bM )δ R −
(1.8)
This integral can be evaluated by replacing the δ-function by the Fourier integral: 1 δ(R) = e−ikR d3 k (1.9) (2π)3 which gives P (R) =
3
d ke
−ikR
M i=1
1 ikbi 3 δ(|bi | − b)e d bi . 4πb2
(1.10)
1.1 The Freely Jointed Chain
The inner integral can be evaluated in polar coordinates: 1 δ(|bi | − b)eikbi d3 bi 4πb2 ∞ π 2π 1 dφ b2i dbi δ(|b | − b) sin θ dθeikbi cos θ . = i 4πb2 0 0 0 The integral over θ gives
π
2 sin kbi kbi
sin θ dθeikbi cos θ = 0
5
(1.11)
(1.12)
and hence
∞ 1 1 2 sin kbi ikbi 3 δ(|bi | − b)e d bi = 2π dbi δ(bi − b)b2i 2 4πb2 4πb kbi 0 sin kb = , (1.13) kb
and finally we have 1 P (R) = (2π)3 The function
3
−ikR
d ke
sin kb kb
sin kb kb
M (1.14)
.
M (1.15)
has a very sharp maximum at kb = 0. For large M , it can be approximated quite accurately by a Gaussian:
sin kb kb
M
≈ e−M(1/6)k
2 2
b
(1.16)
which gives 1 P (R) ≈ (2π)3
3
d ke
−ikR −(M/6)k2 b2
e
=
3 2πb2 M
3/2
e−3R
2
/(2b2 M)
.
(1.17) We consider R as a macroscopic variable. The free energy is (no internal degrees of freedom, E = 0) F = −T S = −kB T ln P (R) =
3R2 kB T + const. 2b2 M
(1.18)
The quadratic dependence on L is very similar to a Hookean spring. For a potential energy
6
1 Random Walk Models for the Conformation
ks 2 x , 2 the probability distribution of the coordinate is
2 ks e−ks x /2kB T P (x) = 2πkB T V =
(1.19)
(1.20)
which gives a free energy of F = −kB T ln P = const +
ks x2 . 2
(1.21)
By comparison, the apparent spring constant is ks =
3kB T . M b2
(1.22)
1.1.2 Force–Extension Relation We consider now a chain with one fixed end and an external force acting in x-direction at the other end [1] (Fig. 1.3). The projection of the ith segment onto the x-axis has a length of (Fig. 1.4) bi = −b cos θ ∈ [−b, b].
(1.23)
We discretize the continuous range of bj by dividing the interval [−b, b] into n bins of width Δb = 2b/n corresponding to the discrete values li , i = 1, . . . , n.
b
κ x
Fig. 1.3. Freely jointed chain with external force
b
φ
θ x
Fig. 1.4. Projection of the bond vector
1.1 The Freely Jointed Chain
7
The chain members are divided into n groups according to their bond projections bj . The number of units in each group is denoted by Mi so that n
Mi = M
(1.24)
li Mi = L.
(1.25)
i=1
and the end-to-end length is n i=1
The probability distribution is P (θ, φ)dθdφ =
sin(θ)dθdφ . 4π
(1.26)
Since we are only interested in the probability of the li , we integrate over φ P (θ)dθ =
sin(θ)dθ 2
(1.27)
and transform variables to have P (l)dl = P (−b cos θ)d(−b cos θ) =
1 1 d(−b cos θ) = dl. 2b 2b
(1.28)
The canonical partition function is
Z(L, M, T ) = {Mi }
Mi li =L
z Mi M ! Mi i . zi = M! M i! j Mj ! i i
(1.29)
{Mi }
The zi = z are the independent partition functions of the single units which we assume as independent of i. The degeneracy factor M !/ Mi ! counts the number of microstates for a certain configuration {Mi }. The summation is only over configurations with a fixed end-to-end length. This makes the evaluation rather complicated. Instead, we introduce a new partition function by considering an ensemble with fixed force and fluctuating length Z(L, M, T )eκL/kBT . (1.30) Δ(κ, M, T ) = L
Approximating the logarithm of the sum by the logarithm of the maximum term, we see that (1.31) − kB T ln Δ = −kB T ln Z − κL gives the Gibbs free enthalpy1 1
−κL corresponds to +pV .
8
1 Random Walk Models for the Conformation 1
L/Mb
0.5
0
–0.5
–1 –20
–10
0 κb/kBT
10
0
Fig. 1.5. Force–extension relation
G(κ, M, T ) = F − κL.
(1.32)
In this new ensemble, the summation over L simplifies the partition function: Δ=
eκ
Mi li /kB T
{Mi }
=
zeκli /kB T
M
M!
z Mi Mi !
=
{Mi }
M!
(zeκli /kB T )Mi Mi !
= ξ(κ, T )M .
(1.33)
Now returning to a continuous distribution of li = −b cos θ, we have to evaluate b sinh t ξ= P (l)dl, zeκl/kB T = z (1.34) t −b with (Fig. 1.5) t=
κb . kB T
(1.35)
From dG = −SdT − Ldκ, we find
(1.36)
∂G ∂ z κb = M k T ln sinh B ∂κ T ∂κ κb/kB T kB T 1 b κb κb = M kB T − + coth = M bL , κ kB T kB T kB T
L= −
with the Langevin function L(x) = coth(x) −
1 . x
(1.37)
1.2 Two-Component Model
9
1.2 Two-Component Model A one-dimensional random walk model can be applied to a polymer chain which is composed of two types of units (named α and β), which may interconvert. This model is for instance applicable to the dsDNA → SDNA transition or the α-Helix → random coil transition of proteins which show up as a plateau in the force–extension curve [1, 2]. We follow the treatment given in [1] which is based on an explicit evaluation of the partition function. Alternatively, the two-component model can be mapped onto a one-dimensional Ising model, which can be solved by the transfer matrix method [3, 4]. We assume that of the overall M units, Mα are in the α-configuration and M − Mα are in the β-configuration. The lengths of the two conformers are lα and lβ , respectively (Fig. 1.6). 1.2.1 Force–Extension Relation The total length is given by L = Mα lα + (M − Mα )lβ = Mα (lα − lβ ) + M lβ . The number of configurations with length L is given by L − M lβ M!
. Ω(L) = Ω Mα = = L−Mlβ lα − l β ! Mlα −L ! lα −lβ
(1.38)
(1.39)
lα −lβ
From the partition function (L−Mlβ )/(lα −lβ ) (Mlα −L)/(lα −lβ ) zβ Ω,
Z = zαMα zβM−Mα Ω = zα
(1.40)
application of Stirling’s approximation gives for the free energy F = −kB T ln Z L − M lβ M lα − L ln zα − kB T ln zβ − kB T (M ln M − M ) = −kB T lα − lβ lα − l β L − M lβ L − M lβ L − M lβ = −kB T − ln + lα − l β lα − l β lα − l β M lα − L M lα − L M lα − L ln + . (1.41) − lα − l β lα − l β lα − lβ L
lα
lβ
Fig. 1.6. Two-component model
10
1 Random Walk Models for the Conformation
κ(1a–1b )/kBT
4 zb /za = 5 zb /za = 1
2 0
1/2
1/6
1/1.2 zb /za = 0.2
–2 –4 0
0.2
0.4
0.6
0.8
1
δ
Fig. 1.7. Force–extension relation for the two-component model
The derivative of the free energy gives the force–extension relation ∂F kB T M lβ − L kB T zβ κ= = ln + ln . ∂L lα − lβ L − M lα lα − lβ z α
(1.42)
This can be written as a function of the fraction of segments in the αconfiguration Mα L − M lβ δ= = (1.43) M M (lα − lβ ) in the somewhat simpler form (Fig. 1.7) κ
lα − lβ δ zβ = ln + ln . kB T 1−δ zα
The mean extension for zero force is obtained by solving κ(L) = 0: z α lα + z β lβ L0 = M , zα + zβ δ0 =
¯ 0 − M lβ L zα = . M (lα − lβ ) zα + zβ
(1.44)
(1.45)
(1.46)
Taylor series expansion around L0 gives the linearized force–extension relation: ∂F kB T (zα + zβ )2 ¯ 0) + · · · = (L − L 2 ∂L M (lα − lβ ) zα zβ kB T (zα + zβ )2 ≈ (δ − δ0 ) lα − lβ zα zβ kB T 1 = (δ − δ0 ). lα − lβ δ0 (1 − δ0 )
κ=
(1.47)
1.2 Two-Component Model
11
1.2.2 Two-Component Model with Interactions We now consider additional interaction between neighboring units. We introduce the interaction energies wαα , wαβ , wββ for the different pairs of neighbors and the numbers Nαα , Nα,β , Nββ of such interaction terms. The total interaction energy is then W = Nαα wαα + Nαβ wαβ + Nββ wββ .
(1.48)
The numbers of pair interactions are not independent from the numbers of units Mα , Mβ . Consider insertion of an additional α-segment into a chain. Figure 1.8 counts the possible changes in interaction terms. In any case by insertion of an α-segments, the expression 2Nαα + Nαβ increases by 2. Similarly, insertion of an extra β-segment increases 2Nββ + Nαβ by 2 (Fig. 1.9). This shows that there are linear relationships of the form 2Nαα + Nαβ = 2Mα + c1 , 2Nββ + Nαβ = 2Mβ + c2 .
(1.49)
The two constants depend on the boundary conditions as can be seen from an inspection of the shortest possible chains with two segments. They are zero for periodic boundaries and will therefore be neglected in the following since the numbers Mα and Mβ are much larger (Fig. 1.10). We substitute 1 Nαα = Mα − Nαβ , (1.50) 2 α
Mα Mβ Nαα Nαβ Nββ 2Nαα + Nαβ
∗∗∗αα∗∗∗ +1
+1
+2
∗∗∗αβ∗∗∗ +1
+1
+2
∗∗∗βα∗∗∗ +1
+1
+2
∗∗∗ββ∗∗∗ +1
+2
−1
+2
Fig. 1.8. Insertion of an α-segment. 2Nαα + Nαβ increases by 2 β
Mα Mβ Nαα Nαβ Nββ 2Nββ + Nαβ
∗∗∗αα∗∗∗
+1
∗∗∗αβ∗∗∗
+1
+1
+2
∗∗∗βα∗∗∗
+1
+1
+2
∗∗∗ββ∗∗∗
+1
+1
+2
−1
+2
+2
Fig. 1.9. Insertion of a β-segment. 2Nββ + Nαβ increases by 2
12
1 Random Walk Models for the Conformation Nα Nβ Nαα Nαβ Nββ
2Nαα + Nαβ–2Nα
2Nββ + Nαβ–2Nβ
α α
2
0
1
0
0
−2
0
β β
0
2
0
0
1
0
−2
α β
1
1
0
1
0
−1
−1
α α
2
0
2
0
0
0
0
β β
0
2
0
0
2
0
0
α β
1
1
0
2
0
0
0
Fig. 1.10. Determination of the constants. c1 and c2 are zero for periodic boundary conditions and in the range −2, . . . , 0 for open boundaries
1 Nββ = Mβ − Nαβ , 2 w = wαα + wββ − 2wαβ to have the interaction energy 1 1 W = wαα Mα − Nαβ + wββ Mβ − Nαβ + wαβ Nαβ 2 2 w = wαα Mα + wββ (M − Mα ) − Nαβ . 2
(1.51) (1.52)
(1.53)
The canonical partition function is M g(Mα , Nαβ )e−W (Nαβ )/kB T Z(Mα , T ) = zαMα zβ β Nαβ
Mα
(M−Mα ) zβ e−wββ /kB T = zα e−wαα /kB T × g(Mα , Nαβ )eNαβ w/2kB T .
(1.54)
Nαβ
The degeneracy factor g will be evaluated in the following. The chain can be divided into blocks containing only α-segments (αblocks) or only β-segments (β-blocks). The number of boundaries between α-blocks and β-blocks obviously is given by Nαβ . Let Nαβ be an odd number. Then there are (Nαβ +1)/2 blocks of each type. (We assume that Nαβ , Mα , Mβ are large numbers and neglect small differences of order 1 for even Nαβ .) In each α-block, there is at least one α-segment. The remaining Mα −(Nαβ +1)/2 α-segments have to be distributed over the (Nαβ + 1)/2 α-blocks (Fig. 1.11). Therefore, we need the number of possible ways to arrange Mα − (Nαβ + 1)/2 segments and (Nαβ − 1)/2 walls which is given by the number of ways
1.2 Two-Component Model αα
ββ
α
β
αα
β
α
ββ
α
ββ
αα
β
α
β
αα
ββ
ββ
αα
β
α
ββ
α
β
αα
β
αα
ββ
α
β
α
ββ
αα
13
Fig. 1.11. Degeneracy factor g. The possible eight configurations are shown for M = 4, Mα = 3, Mβ = 3, and Nαβ = 3 ααα
ααα α
αα
ααα
αα
α
ααα
ααα
ααα
α αα
α
ααα
α
α
αα
ααα
α
αα
αα α
αα
ααα
αα
ααα
ααα
α α
α
Fig. 1.12. Distribution of three segments over three blocks. This is equivalent to the arrangement of three segments and two border lines
to distribute the (Nαβ − 1)/2 walls over the total of Mα − 1 objects which is given by (Fig. 1.12) (Mα − 1)! Nαβ −1 N +1 ( 2 )!(Mα − αβ2 )!
≈
Mα ! . Nαβ N ( 2 )!(Mα − 2αβ )!
(1.55)
The same consideration for the β-segments gives another factor of (M − Mα )! N ( 2αβ )!(M
− Mα −
Nαβ )! 2
(1.56)
.
Finally, there is an additional factor of 2 because the first block can be of either type. Hence for large numbers, we find g(Mα , Nαβ ) = 2
(Mα )! Nαβ N ( 2 )!(Mα − 2αβ )!
(M − Mα )! N ( 2αβ )!(M
− Mα −
Nαβ )! 2
.
(1.57)
We look for the maximum summand of Z as a function of Nαβ . The corre∗ sponding number will be denoted as Nαβ and is determined from the condition
14
1 Random Walk Models for the Conformation
w ∂ ∂ + ln g(Mα , Nαβ )ewNαβ /2kB T = ln g(Mα , Nαβ ). ∂Nαβ 2kB T ∂Nαβ (1.58) Stirling’s approximation gives ∗ ∗ ∗ Nαβ Nαβ Nαβ 1 1 w + ln Mα − + ln M − Mα − − ln 0= 2kB T 2 2 2 2 2 (1.59) or N∗ N∗ (Mα − 2αβ )(M − Mα − 2αβ ) w 0= . (1.60) + ln N∗ kB T ( αβ )2 0=
2
Taking the exponential gives 2 ∗ ∗ Nαβ Nαβ Nαβ M − Mα − = e−w/kB T Mα − . 2 2 2
(1.61)
Introducing the relative quantities δ=
Mα , M
γ=
∗ Nαβ , 2M
(1.62)
we have to solve the quadratic equation (δ − γ)(1 − δ − γ) = γ 2 e−w/kB T . The solutions are γ=
−1 ±
(1 − 2δ)2 + 4e−w/kB T δ(1 − δ) . 2(e−w/kB T − 1)
(1.63)
(1.64)
Series expansion in w/kB T gives γ=
kB T 1 w + + + ··· 2w 4 24kB T 1 w 1 kB T 2 2 + δ(1 − δ) − + (δ − δ ) − + · · · . (1.65) ···± − 2w 4 24 kB T
The − alternative diverges for w → 0 whereas the + alternative 1 w 2 2 w 2 2 3 γ = δ(1 − δ) + (δ − δ ) −δ (1 − δ) + 2δ(δ − 1) · · · (1.66) kB T 2 kB T approaches the value γ0 = δ − δ 2
(1.67)
which is the only solution of the one case (δ − γ0 )(1 − δ − γ0 ) = γ02 → δ(1 − δ) − γ = 0.
(1.68)
1.2 Two-Component Model ∗ For Nαβ , we obtain approximately w ∗ + ··· . = 2M δ(1 − δ) + (δ − δ 2 )2 Nαβ kB T
15
(1.69)
Let us now apply the maximum term method, which approximates the logarithm of a sum by the logarithm of the maximum summand: F = −kB T ln Z(Mα , T ) ≈ −kB T Mα ln zα − kB T (M − Mα ) ln zβ + Mα wαα + (M − Mα )wββ ∗ wNαβ ∗ −kB T ln g(Mα , Nαβ )− . (1.70) 2 The force–length relation is now obtained from
L−Mlβ ∂ lα −lβ ∂F ∂F 1 ∂F = = κ= ∂L ∂Mα ∂L lα − lβ ∂Mα 1 ∂ −kB T ln zα + kB T ln zβ + wαα − wββ − kB T = ln g lα − l β ∂Mα ∗ ∗ wN ∂N ∂ 1 αβ αβ −kB T ln g − . (1.71) + ∗ lα − lβ ∂Mα ∂Nαβ 2 ∗ . Now using Stirling’s The last part vanishes due to the definition of Nαβ formula, we find N∗
M − Mα − 2αβ ∂ 1−δ−γ Mα δ + ln ln g = ln + ln = ln ∗ N ∂Mα M − Mα 1−δ δ−γ Mα − αβ 2
(1.72) and substituting γ, we have finally κ
(lα − lβ ) zβ e−wββ /kB T (1 − δ) = ln − ln −w /k T αα B kB T δ zα e +(2δ − 1)
w + δ(3δ − 2δ 2 − 1) kB T
w kB T
2 .
(1.73)
Linearization now gives zα e−wαα /kB T , zα e−wαα /kB T + zβ e−wββ /kB T kB T 1 κ= (δ − δ0 ) lα − lβ δ0 (1 − δ0 ) 2 w w (2δ0 − 1) + (3δ02 − δ0 − 2δ03 ) + kB T kB T 2 w w (6δ0 − 6δ02 − 1) (δ − δ0 ). + 2 + kB T kB T
δ0 =
(1.74)
16
1 Random Walk Models for the Conformation 10 5 0 –5 –10 0
0.2
0.4
0.6
0.8
1
δ
Fig. 1.13. Force–length relation for the interacting two-component model. Dashed curves: exact results for w/kB T = 0, ±2, ±5. Solid curves: series expansion (1.73) which gives a good approximation for |w/kB T | < 2
For negative w, a small force may lead to much larger changes in length than with no interaction. This explains, for example, how in proteins huge channels may open although the acting forces are quite small. In the case of Myoglobin, this is how the penetration of oxygen in the protein becomes possible (Fig. 1.13).
Problems 1.1. Gaussian Polymer Model The simplest description of a polymer is the Gaussian polymer model which considers a polymer to be a series of particles joined by Hookean springs n n−1
(a) The vector connecting monomers n−1 and n obeys a Gaussian distribution with average zero and variance (rn − rn−1 )2 = b2 . Determine the distribution function P (rn − rn−1 ) explicitly. (b) Assume that the distance vectors rn − rn−1 are independent and calculate the distribution of end-to-end vectors P (rN − r0 ). (c) Consider now a polymer under the action of a constant force κ in xdirection. The potential energy of a conformation is given by N f V = (rn − rn−1 )2 − κ(xN − x0 ) 2 n=1
1.2 Two-Component Model
17
and the probability of this conformation is P ∼ e−V /kB T . Determine the effective spring constant. f (d) Find the most probable configuration by searching for the minimum of the energy ∂V ∂V ∂V = = = 0. ∂xn ∂yn ∂zn Calculate the length of the polymer for the most probable configuration (according to the maximum term method, the average value coincides with the most probable value in the thermodynamic limit) 1.2. Three-Dimensional Polymer Model Consider a model of a polymer in three-dimensional space consisting of N links of length b. The connection of the links i and i + 1 is characterized by the two angles φi and θi . The vector ri can be obtained from the vector r1 through application of a series of rotation matrices2 θ
ri = R1 R2 · · · Ri−1 r1 with
⎛ ⎞ 0 r1 = ⎝ 0 ⎠ b
φ ri+1 ri
⎞⎛ ⎞ cos θi 0 − sin θi cos φi sin φi 0 0 ⎠. Ri = Rz (φi )Ry (θi ) = ⎝ − sin φi cos φi 0 ⎠ ⎝ 0 1 0 0 1 sin θi 0 cos θi
2 N for the Calculate the mean square of the end-to-end distance i=1 ri ⎛
following cases: (a) The averaging · · · includes averaging over all angles φi and θi . (b) The averaging · · · includes averaging over all angles φi while the angles θi are held fixed at a common value θ. (c) The averaging · · · includes averaging over all angles φi while the angles θi are held fixed at either θ2i+1 = θa or θ2i = θb depending on whether the number of the link is odd or even. 2
The rotation matrices act in the laboratory fixed system (xyz). Transformation into the coordinate system of the segment (x y z ) changes the order of the matrices. For instance, r2 = R(y, θ1 )R(z, φ1 )r1 = R(z, φ1 )R(y , θ1 )R−1 (z, φ1 )R(z, φ1 )r1 = R(z, φ1 )R(y , θ1 )r1 (this is sometimes discussed in terms of active and passive rotations).
18
1 Random Walk Models for the Conformation
(d) How large must N be, so that it is a good approximation to keep only terms which are proportional to N ? (e) What happens in the second case if θ is chosen as very small (wormlike chain)? Hint. Show first that after averaging over the φi , the only terms of the matrix which have to be taken into account are the elements (Ri )33 . The appearing summations can be expressed as geometric series. 1.3. Two-Component Model We consider the two-component model of a polymer chain which consists of M -segments of two different types α, β (internal degrees of freedom are neglected). The number of configurations with length L is given by the Binomial distribution: Ω(L, M, T ) =
M! , Mα !(M − Mα )!
L = Mα lα + (M − Mα )lβ .
(a) Make use of the asymptotic expansion3 of the logarithm of the Γ -function √ 1 1 ln(Γ (z + 1)) = (ln z − 1)z + ln( 2π) + ln z + + O(z −3 ), 2 12z N ! = Γ (N + 1) to calculate the leading terms of the force–extension relation which is obtained from ∂ κ= (−kB T ln Ω(L, M, T )). ∂L Discuss the error of Stirling’s approximation for M = 1,000 and lβ /lα = 2. (b) Now switch to an ensemble with constant force κ. The corresponding partition function is Z(κ, M, T ) = eκL/kB T Ω(L, M, T ). L
Calculate the first two moments of the length L=−
∂ ∂ (−kB T ln Z) = Z −1 kB T Z, ∂κ ∂κ
∂2 Z, ∂κ2 2 and discuss the relative uncertainty σ = ( L2 − L )/L. Determine the maximum of σ. L2 = Z −1(kB T )2
3
Several asymptotic expansions can be found in [5].
2 Flory–Huggins Theory for Biopolymer Solutions
In the early 1940s, Flory [6] and Huggins [7], both working independently, developed a theory based upon a simple lattice model that could be used to understand the nonideal nature of polymer solutions. We consider a lattice model where the lattice sites are chosen to be the size of a solvent molecule and where all lattice sites are occupied by one molecule [1].
2.1 Monomeric Solution As the simplest example, consider the mixing of a low-molecular-weight solvent (component α) with a low-molecular-weight solute (component β). The solute molecule is assumed to be the same size as a solvent molecule and therefore every lattice site is occupied by one solvent molecule or by one solute molecule at a given time (Fig. 2.1). The increase in entropy ΔSm due to mixing of solvent and solute is given by N! ΔSm = kB ln Ω = kB ln , (2.1) Nα !Nβ ! where N = Nα + Nβ is the total number of lattice sites. Using Stirling’s approximation leads to ΔSm = kB (N ln N − N − Nα ln Nα + Nα − Nβ ln Nβ + Nβ ) = kB (N ln N − Nα ln Nα − Nβ ln Nβ ) Nα Nβ − kB Nβ ln . = −kB Nα ln N N
(2.2)
Inserting the volume fractions φα =
Nα , N α + Nβ
φβ =
Nβ , N α + Nβ
the mixing entropy can be written in the well-known form
(2.3)
20
2 Flory–Huggins Theory for Biopolymer Solutions
solvent solute
Fig. 2.1. Two-dimensional Flory–Huggins lattice
ΔSm = −N kB (φα ln φα + φβ ln φβ ).
(2.4)
Neglecting boundary effects (or using periodic b.c.), the number of nearest neighbor pairs is (c is the coordination number) c Nnn = N . 2
(2.5)
These are divided into Nα c N φ2α c φα = , 2 2 = N φα φβ c.
Nαα = Nαβ
Nββ =
N φ2β c Nβ c φβ = , 2 2 (2.6)
The average interaction energy is w=
1 1 N cφ2α wαα + N cφ2β wββ + N cφα φβ wαβ 2 2
(2.7)
which after the substitution wαβ =
1 (wαα + wββ − w) 2
(2.8)
becomes 1 w = − N cφα φβ w 2 1 1 = + N cφα (φα wαα + φβ wαα ) + N cφβ (φβ wββ + φα wββ ) 2 2
(2.9)
and since φα + φβ = 1 1 1 w = − N cφα φβ w + N c(φα wαα + φβ wββ ). 2 2
(2.10)
Now the partition function is N! Nα !Nβ ! N! = (zα e−cwαα /2kB T )Nα (zβ e−cwββ /2kB T )Nβ eNα Nβ cw/2N kB T . Nα !Nβ ! (2.11)
Z = zαNα zβ β e−Nα cwαα /2kB T e−Nβ cwββ /2kB T eNα Nβ cw/2N kB T N
ΔFm /NkBT
2.1 Monomeric Solution 0
χ =2
–0.2
χ =2
–0.4
χ =1
–0.6
χ=0 0
0.2
0.4
0.6
0.8
21
1
φα
Fig. 2.2. Free energy change ΔF/N kB T of a binary mixture with interaction
The free energy is F = −kB T ln Z = −Nα kB T ln zα − Nβ kB T ln zβ c c Nα Nβ cw = +Nα wαα + Nβ wββ − + N kB T (φα ln φα + φβ ln φβ ). 2 2 2N (2.12) For the pure solvent, the free energy is c F (Nα = N, Nβ = 0) = −Nα kB T ln zα + Nα wαα 2
(2.13)
and for the pure solute c F (Nα = 0, Nβ = N ) = −Nβ kB T ln zβ + Nβ wββ . 2
(2.14)
Hence the change in free energy is ΔFm = −
Nα Nβ cw + N kB T (φα ln φα + φβ ln φβ ) 2N
(2.15)
with the energy change (van Laar heat of mixing) ΔEm = −
cw Nα Nβ cw = −N φα φβ = N kB T χφα φβ . 2N 2
(2.16)
The last equation defines the Flory interaction parameter (Fig. 2.2): χ=−
cw . 2kB T
(2.17)
For χ > 2, the free energy has two minima and two stable phases exist. This is seen from solving
22
2 Flory–Huggins Theory for Biopolymer Solutions
∂ΔF ∂ = N kB T (χφα (1 − φα ) + φα ln φα + (1 − φα ) ln(1 − φα )) ∂φα ∂φα φα . (2.18) = N kB T χ(1 − 2φα ) + ln 1 − φα
0=
This equation has as one solution φα = 1/2. This solution becomes instable for χ > 2 as can be seen from the sign change of the second derivative 1 − 2χφα + 2χφ2α ∂ 2 ΔF = N kB T (4 − 2χ). = N k T (2.19) B ∂φ2α φ2α − φα
2.2 Polymeric Solution Now, consider Nβ polymer molecules which consist of M units and hence occupy a total of M Nβ lattice sites. The volume fractions are (Fig. 2.3) φα =
Nα , Nα + M Nβ
φβ =
M Nβ Nα + M Nβ
(2.20)
and the number of lattice sites is N = N α + M Nβ .
(2.21)
The entropy change is given by ΔS = ΔSm + ΔSd = kB ln Ω(Nα , Nβ ).
(2.22)
It consists of the mixing entropy and a contribution due to the different conformations of the polymers (disorder entropy). The latter can be eliminated by subtracting the entropy for Nα = 0: ΔSm = ΔS − ΔSd = kB
ln Ω(Nα , Nβ ) . ln Ω(0, Nβ )
(2.23)
In the following, we will calculate Ω(Nα , Nβ ) in an approximate way. We use a mean-field method where one polymer after the other is distributed over the
solvent solute
Fig. 2.3. Lattice model for a polymer
2.2 Polymeric Solution
a
r= 2
r =3
23
r =5
r =4
3 4 1
2 1
1 2
3 2 1
b
r =5 3 4 2 1
Fig. 2.4. Available positions. The second segment can be placed onto four positions, the third segment onto three positions. For the following segments, it is assumed that they can be also placed onto three positions (a). Configurations like (b) are neglected
lattice, taking into account only the available volume but not the configuration of all the other polymers. Under that conditions, Ω factorizes Nβ 1 Ω= νi , Nβ ! i=1
(2.24)
where νi counts the number of possibilities to put the ith polymer onto the lattice. It will be calculated by adding one segment after the other and counting the number of possible ways: νi+1 =
M
ni+1 s .
(2.25)
s=1
The first segment of the (i + 1)th polymer molecule can be placed onto = N − iM ni+1 1
(2.26)
lattice sites (Fig. 2.4). The second segment has to be placed on a neighboring position. Depending on the coordination number of the lattice, there are c possible neighboring sites. But only a fraction of iM f =1− (2.27) N of these is unoccupied. Hence for the second segment, we have iM = c 1 − ni+1 . (2.28) 2 N
24
2 Flory–Huggins Theory for Biopolymer Solutions
For the third segment only c − 1 neighboring positions are available iM i+1 . n3 = (c − 1) 1 − N
(2.29)
For the following segments r = 4 . . . M , we assume that the number of possible sites is the same as for the third segment. This introduces some error since for some configurations the number is reduced due to the excluded volume. This error, however, is small compared to the crudeness of the whole model. Multiplying all the factors, we have M−1 iM M−2 1− νi+1 = (N − iM )c(c − 1) N M−1 c−1 ≈ (N − iM )M (2.30) N and the entropy is ⎡
⎤ M−1 Nβ 1 c − 1 ⎦ (N − iM )M ΔS = kB ln ⎣ Nβ ! N i=1 c−1 = −kNBβ ln Nβ + kB Nβ + kB Nβ (M − 1) ln N +kB M
Nβ
ln(N − iM ).
(2.31)
i=1
The sum will be approximated by an integral Nβ
Nβ
ln(N − iM ) ≈
ln(N − M x)dx =
0
i=1
=−
N N (ln(N − M x) − 1) β x− 0 M
Nα N (ln Nα − 1) + (ln N − 1). M M
(2.32)
Finally, we get ΔS = −kB Nβ ln Nβ + kB Nβ + kB Nβ (M − 1) ln
c−1 N
+kB (N ln N − N + Nα − Nα ln Nα ).
(2.33)
The disorder entropy is obtained by substituting Nα = 0 and N = M Nβ : ΔSd = ΔS(Nα = 0) = −kB Nβ ln Nβ + kB Nβ + kB Nβ (M − 1) ln +kB (M Nβ ln M Nβ − M Nβ ).
c−1 M Nβ
(2.34)
2.2 Polymeric Solution
25
0.6
ΔSm /kBT
0.5 0.4 0.3 0.2 0.1 0
0
0.2
0.4
0.6
0.8
1
φβ
Fig. 2.5. Mixing entropy. ΔSm /N kB is shown as a function of φβ for M = 1, 2, 10, 100
The mixing entropy is given by the difference (Fig. 2.5): ΔSm = ΔS − ΔSd = kB (N ln N − N + Nα − Nα ln Nα − M Nβ ln M Nβ + M Nβ ) +kB Nβ (M − 1)(ln M Nβ − ln N ) = kB (N ln N − Nα ln Nα − M Nβ ln M Nβ + M Nβ ln M Nβ −M Nβ ln N − Nβ ln M Nβ + Nβ ln N ) Nα M Nβ = −kB Nα ln + Nβ ln = −kB Nα ln φα − kB Nβ ln φβ N N φβ = −N kB φα ln φα + (2.35) ln φβ . M In comparison with the expression for a solution of molecules without internal flexibility, we obtain an extra contribution to the entropy of − Nβ kB ln
M Nβ Nβ + Nβ kB ln = −Nβ kB ln M. N N
(2.36)
Next, we calculate the change of energy due to mixing, ΔEm. Here wαα is the interaction energy between nearest-neighbor solvent molecules, wββ that between nearest-neighbor polymer units (not chemically bonded), and wαβ that between one solvent molecule and one polymer unit. The probability that any site is occupied by a solvent molecule is φα and by a polymer unit is φβ. We introduce an effective coordination number c which takes into account that a solvent molecule has c neighbors, whereas a polymer segment interacts only with c − 2 other molecules. Then
$$N_{\alpha\alpha} = c\,\phi_\alpha\,\frac{N_\alpha}{2}, \qquad N_{\beta\beta} = M c\,\phi_\beta\,\frac{N_\beta}{2}, \qquad (2.37)$$
$$N_{\alpha\beta} = M c\,\phi_\alpha N_\beta. \qquad (2.38)$$
In the pure polymer, φβ = 1 and Nββ = McNβ/2, whereas in the pure solvent Nαα = cNα/2. The energy change therefore is
$$\begin{aligned}
\Delta E_m &= w_{\alpha\alpha}\,c\phi_\alpha\frac{N_\alpha}{2} + w_{\beta\beta}\,Mc\phi_\beta\frac{N_\beta}{2} + w_{\alpha\beta}\,Mc\phi_\alpha N_\beta - w_{\alpha\alpha}\,c\frac{N_\alpha}{2} - w_{\beta\beta}\,Mc\frac{N_\beta}{2} \\
&= -c w_{\alpha\alpha}\phi_\beta\frac{N_\alpha}{2} - c M w_{\beta\beta}\phi_\alpha\frac{N_\beta}{2} + w_{\alpha\beta}\,Mc\phi_\alpha N_\beta \\
&= \left(w_{\alpha\beta} - \frac{w_{\alpha\alpha}+w_{\beta\beta}}{2}\right) cN\phi_\alpha\phi_\beta
= -\frac{w}{2}\,cN\phi_\alpha\phi_\beta = N k_B T\,\chi\,\phi_\alpha\phi_\beta, \qquad (2.39)
\end{aligned}$$
where w = wαα + wββ − 2wαβ, with the Flory interaction parameter
$$\chi = -\frac{w c}{2 k_B T}. \qquad (2.40)$$
For the change in free energy, we find (Figs. 2.6–2.8)
$$\frac{\Delta F_m}{N k_B T} = \frac{\Delta E_m}{N k_B T} - \frac{\Delta S_m}{N k_B} = \phi_\alpha\ln\phi_\alpha + \frac{\phi_\beta}{M}\ln\phi_\beta + \chi\,\phi_\alpha\phi_\beta. \qquad (2.41)$$
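As a quick numerical illustration of (2.41), the short Python sketch below evaluates the mixing free energy on a grid of volume fractions; the chain length and χ values are arbitrary examples chosen to mirror Figs. 2.6–2.8, and the function names are illustrative only.

```python
import numpy as np

def delta_f_mix(phi_b, M, chi):
    """Flory-Huggins mixing free energy per site, Delta F_m / (N k_B T), Eq. (2.41)."""
    phi_a = 1.0 - phi_b
    return phi_a * np.log(phi_a) + (phi_b / M) * np.log(phi_b) + chi * phi_a * phi_b

# evaluate for a long polymer and several interaction strengths
phi = np.linspace(1e-6, 1 - 1e-6, 1001)
for chi in (0.1, 0.5, 1.0, 2.0):
    f = delta_f_mix(phi, M=1000, chi=chi)
    print(f"chi = {chi}: minimum of Delta f at phi_beta = {phi[np.argmin(f)]:.3f}")
```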
2.3 Phase Transitions

2.3.1 Stability Criterion

In equilibrium, the free energy (if volume is constant) has a minimum value. Hence, a homogeneous system becomes unstable and separates into two phases
Fig. 2.6. Free energy as a function of solute concentration. ΔFm /N kB T is shown for M = 1,000 and χ = 0.1, 0.5, 1.0, 2.0
Fig. 2.7. Free energy as a function of solute concentration. ΔFm /N kB T is shown for χ = 1.0 and M = 1, 2, 10, 1,000
Fig. 2.8. Free energy as a function of solute concentration. ΔFm /N kB T is shown for χ = 2.0 and M = 1, 2, 10, 1,000
if the free energy of the two-phase system is lower, that is, if the following condition can be fulfilled:
$$\Delta F_m(\phi_\beta, N) > \Delta F_m(\phi_\beta', N') + \Delta F_m(\phi_\beta'', N - N'). \qquad (2.42)$$
But since ΔFm = N Δfm(φβ) is proportional to N, this condition becomes
$$N\,\Delta f_m(\phi_\beta) > N'\,\Delta f_m(\phi_\beta') + (N - N')\,\Delta f_m(\phi_\beta''). \qquad (2.43)$$
Since the total numbers Nα and Nβ are conserved, we have N φβ = N′ φ′β + (N − N′) φ″β, or
$$N' = N\,\frac{\phi_\beta - \phi_\beta''}{\phi_\beta' - \phi_\beta''}, \qquad (2.44)$$
$$N - N' = N\,\frac{\phi_\beta' - \phi_\beta}{\phi_\beta' - \phi_\beta''}. \qquad (2.45)$$
Fig. 2.9. Stability criterion for the free energy
But since N′ as well as (N − N′) should be positive numbers, there are two possible cases:
$$\phi_\beta - \phi_\beta'' > 0, \quad \phi_\beta' - \phi_\beta > 0, \quad \phi_\beta' - \phi_\beta'' > 0, \qquad (2.46)$$
$$\phi_\beta - \phi_\beta'' < 0, \quad \phi_\beta' - \phi_\beta < 0, \quad \phi_\beta' - \phi_\beta'' < 0, \qquad (2.47)$$
which means that either φ″β < φβ < φ′β or φ″β > φβ > φ′β. By renaming, we can always choose the order
$$\phi_\beta' < \phi_\beta < \phi_\beta''. \qquad (2.48)$$
The stability criterion becomes (Fig. 2.9)
$$\Delta f_m(\phi_\beta) > \frac{\phi_\beta'' - \phi_\beta}{\phi_\beta'' - \phi_\beta'}\,\Delta f_m(\phi_\beta') + \frac{\phi_\beta - \phi_\beta'}{\phi_\beta'' - \phi_\beta'}\,\Delta f_m(\phi_\beta''), \qquad (2.49)$$
which can be written with the abbreviations
$$h' = \phi_\beta - \phi_\beta', \qquad h'' = \phi_\beta'' - \phi_\beta \qquad (2.50)$$
as
$$\frac{\Delta f(\phi_\beta - h') - \Delta f(\phi_\beta)}{h'} + \frac{\Delta f(\phi_\beta + h'') - \Delta f(\phi_\beta)}{h''} < 0, \qquad (2.51)$$
but that means that the curvature has to be negative, or locally
$$\frac{\partial^2 \Delta f(\phi_\beta)}{\partial\phi_\beta^2} < 0. \qquad (2.52)$$
2.3.2 Critical Coupling

Instabilities appear above a certain value of χ = χc(M). To find this critical value, we have to look for the metastable case with
$$0 = \frac{\partial^2}{\partial\phi_b^2}\,\Delta f(\phi_b) = \frac{1}{1-\phi_b} + \frac{1}{M\phi_b} - 2\chi. \qquad (2.53)$$
In principle, the critical χ value can be found by solving this quadratic equation for φb and looking for real roots in the range 0 ≤ φb ≤ 1. Here, however, a simpler strategy can be applied. At the boundaries of the interval [0, 1] of possible φb-values, the second derivative is positive,
$$\frac{\partial^2}{\partial\phi_b^2}\,\Delta f(\phi_b) \to \frac{1}{M\phi_b} > 0 \quad\text{for } \phi_b \to 0, \qquad (2.54)$$
$$\frac{\partial^2}{\partial\phi_b^2}\,\Delta f(\phi_b) \to \frac{1}{1-\phi_b} > 0 \quad\text{for } \phi_b \to 1. \qquad (2.55)$$
Hence, we look for a minimum of the second derivative, that is, we solve
$$0 = \frac{\partial^3}{\partial\phi_b^3}\,\Delta f(\phi_b) = \frac{1}{(1-\phi_b)^2} - \frac{1}{M\phi_b^2}. \qquad (2.56)$$
This gives immediately
$$\phi_{bc} = \frac{1}{1+\sqrt{M}}. \qquad (2.57)$$
Above the critical point, the minimum of the second derivative is negative. Hence, we are looking for a solution of
$$\frac{\partial^2}{\partial\phi_b^2}\,\Delta f(\phi_b) = \frac{\partial^3}{\partial\phi_b^3}\,\Delta f(\phi_b) = 0. \qquad (2.58)$$
Inserting φbc into the second derivative gives
$$0 = 1 + \frac{2}{\sqrt{M}} + \frac{1}{M} - 2\chi, \qquad (2.59)$$
which determines the critical value of χ:
$$\chi_c = \frac{(1+\sqrt{M})^2}{2M} = \frac{1}{2} + \frac{1}{\sqrt{M}} + \frac{1}{2M}. \qquad (2.60)$$
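The critical point (2.57), (2.60) is easily evaluated numerically; the following minimal Python sketch (chain lengths chosen arbitrarily) shows how χc approaches 1/2 for long chains.

```python
import numpy as np

def critical_point(M):
    """Critical composition and Flory parameter, Eqs. (2.57) and (2.60)."""
    phi_c = 1.0 / (1.0 + np.sqrt(M))
    chi_c = 0.5 + 1.0 / np.sqrt(M) + 1.0 / (2.0 * M)
    return phi_c, chi_c

for M in (1, 10, 100, 1000):
    phi_c, chi_c = critical_point(M)
    print(f"M = {M:5d}: phi_bc = {phi_c:.4f}, chi_c = {chi_c:.4f}")
```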
From dF = −S dT + μα dNα + μβ dNβ, we obtain the change of the chemical potential of the solvent as
$$\begin{aligned}
\Delta\mu_\alpha = \mu_\alpha - \mu_\alpha^0 &= \left(\frac{\partial\Delta F}{\partial N_\alpha}\right)_{N_\beta,T}
= \left(\frac{\partial N}{\partial N_\alpha}\frac{\partial}{\partial N} + \frac{\partial\phi_\beta}{\partial N_\alpha}\frac{\partial}{\partial\phi_\beta}\right) N\,\Delta f(\phi_\beta) \\
&= \Delta f(\phi_\beta) - \phi_\beta\,\Delta f'(\phi_\beta)
= k_B T\left[\ln(1-\phi_\beta) + \left(1 - \frac{1}{M}\right)\phi_\beta + \chi\phi_\beta^2\right]. \qquad (2.61)
\end{aligned}$$
Now the derivatives of Δμα are
$$\frac{\partial}{\partial\phi_\beta}\Delta\mu_\alpha = -\phi_\beta\,\Delta f''(\phi_\beta), \qquad (2.62)$$
$$\frac{\partial^2}{\partial\phi_\beta^2}\Delta\mu_\alpha = -\Delta f''(\phi_\beta) - \phi_\beta\,\Delta f'''(\phi_\beta). \qquad (2.63)$$
Hence, the critical point can also be found by solving
$$\frac{\partial}{\partial\phi_\beta}\Delta\mu_\alpha = \frac{\partial^2}{\partial\phi_\beta^2}\Delta\mu_\alpha = 0. \qquad (2.64)$$
Employing the ideal gas approximation
$$\mu = k_B T\ln(p) + \mathrm{const}, \qquad (2.65)$$
this gives for the vapor pressure (Figs. 2.10 and 2.11)
$$\frac{p_\alpha}{p_\alpha^0} = \mathrm{e}^{(\mu_\alpha - \mu_\alpha^0)/k_B T} = (1-\phi_\beta)\,\mathrm{e}^{\chi\phi_\beta^2 + (1-1/M)\phi_\beta} \qquad (2.66)$$
and, since the exponential is a monotonic function, another condition for the critical point is
$$\frac{\partial}{\partial\phi_\beta}\frac{p_\alpha}{p_\alpha^0} = \frac{\partial^2}{\partial\phi_\beta^2}\frac{p_\alpha}{p_\alpha^0} = 0. \qquad (2.67)$$

2.3.3 Phase Diagram

In the simple Flory–Huggins theory, the interaction parameter is proportional to 1/T. Hence, we can write it as
Fig. 2.10. Vapor pressure of a binary mixture with interaction (M = 1)
Fig. 2.11. Vapor pressure of a polymer solution with interaction for M = 1,000, χ = 0.5, 0.532, 0.55
$$\chi = \chi_0\,\frac{T_0}{T} \qquad (2.68)$$
and discuss the free energy change as a function of φβ and T:
$$\Delta F = N k_B T\left[(1-\phi_\beta)\ln(1-\phi_\beta) + \frac{\phi_\beta}{M}\ln\phi_\beta\right] + N k_B T_0\chi_0\,\phi_\beta(1-\phi_\beta). \qquad (2.69)$$
The turning points follow from
$$0 = \frac{\partial^2\Delta F}{\partial\phi_\beta^2} = N k_B\left[\frac{T}{1-\phi_\beta} + \frac{T}{M\phi_\beta} - 2T_0\chi_0\right] \qquad (2.70)$$
as
$$\phi_{\beta,\mathrm{TP}} = \frac{1}{2} + \frac{T(1-M)}{4T_0\chi_0 M} \pm \frac{\sqrt{T^2(M-1)^2 + 4T_0\chi_0 M\bigl(T_0\chi_0 M - T(M+1)\bigr)}}{4T_0\chi_0 M}. \qquad (2.71)$$
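Equation (2.71) can be evaluated numerically to trace the spinodal of Fig. 2.12. The sketch below is illustrative only; it uses the reduced temperature t = T/(T0χ0), which is a convenience introduced here and not a quantity defined in the text.

```python
import numpy as np

def spinodal_phi(t, M):
    """Turning points of Delta F, Eq. (2.71), with reduced temperature t = T/(T0*chi0)."""
    disc = t**2 * (M - 1)**2 + 4 * M * (M - t * (M + 1))
    if disc < 0:
        return None                      # no real turning points: one-phase region
    root = np.sqrt(disc) / (4 * M)
    mid = 0.5 + t * (1 - M) / (4 * M)
    return mid - root, mid + root        # the two spinodal branches

for M in (1, 10, 100):
    for t in (0.5, 1.0, 1.5):
        print(f"M = {M:3d}, T/(T0*chi0) = {t}:", spinodal_phi(t, M))
```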
This defines the spinodal curve, which separates the unstable from the metastable region (Fig. 2.12). What is the minimum free energy of a two-phase system? The free energy takes the form
$$\Delta F = \Delta F^1 + \Delta F^2 = N^1 k_B T\,\Delta f(\phi_\beta^1) + N^2 k_B T\,\Delta f(\phi_\beta^2) \qquad (2.72)$$
with
$$N^j = N_\alpha^j + M N_\beta^j, \qquad \phi_\beta^j = \frac{M N_\beta^j}{N^j}. \qquad (2.73)$$
Fig. 2.12. Spinodal
The minimum free energy can be found from the condition that exchange of solvent or solute molecules between the two phases does not change the free energy, that is, the chemical potentials in the two phases are the same:
$$0 = \mathrm{d}\Delta F = \left(\frac{\partial\Delta F^1}{\partial N_\alpha^1} - \frac{\partial\Delta F^2}{\partial N_\alpha^2}\right)\mathrm{d}N_\alpha + \left(\frac{\partial\Delta F^1}{\partial N_\beta^1} - \frac{\partial\Delta F^2}{\partial N_\beta^2}\right)\mathrm{d}N_\beta, \qquad (2.74)$$
$$0 = \mu_\alpha^1 - \mu_\alpha^2 = \mu_\beta^1 - \mu_\beta^2, \qquad (2.75)$$
or
$$0 = \Delta f^1 - \phi_\beta^1\frac{\partial}{\partial\phi_\beta^1}\Delta f^1 - \left(\Delta f^2 - \phi_\beta^2\frac{\partial}{\partial\phi_\beta^2}\Delta f^2\right), \qquad (2.76)$$
$$0 = \Delta f^1 + (1-\phi_\beta^1)\frac{\partial}{\partial\phi_\beta^1}\Delta f^1 - \left(\Delta f^2 + (1-\phi_\beta^2)\frac{\partial}{\partial\phi_\beta^2}\Delta f^2\right). \qquad (2.77)$$
From the difference of these two equations, we find
$$\frac{\partial\Delta f^1}{\partial\phi_\beta^1} = \frac{\partial\Delta f^2}{\partial\phi_\beta^2} \qquad (2.78)$$
and hence the slope of the free energy has to be the same for both phases. Inserting into the first equation then gives
$$\Delta f^1 - \Delta f^2 = (\phi_\beta^1 - \phi_\beta^2)\,\Delta f', \qquad (2.79)$$
which shows that the concentrations of the two phases can be found by the well-known “common tangent” construction (Fig. 2.13). These so-called binodal points mark the border of the stable one-phase region. Between spinodal and binodal, the system is metastable: it is stable with respect to small fluctuations, since the curvature of the free energy is positive, but unstable against larger-scale fluctuations (Fig. 2.14).
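The common tangent construction can also be carried out numerically by solving conditions (2.78) and (2.79) for the two coexisting compositions. The following sketch uses SciPy's root finder; the parameters and starting guesses are ad hoc choices and would need adjustment for other values of M and χ.

```python
import numpy as np
from scipy.optimize import fsolve

M, chi = 100, 1.0   # example parameters well above chi_c(M)

def f(phi):          # Delta f per site in units of k_B T, Eq. (2.41)
    return (1 - phi) * np.log(1 - phi) + phi / M * np.log(phi) + chi * phi * (1 - phi)

def fprime(phi):
    return -np.log(1 - phi) - 1 + (np.log(phi) + 1) / M + chi * (1 - 2 * phi)

def tangent_conditions(x):
    p1, p2 = x
    return [fprime(p1) - fprime(p2),                              # equal slopes, Eq. (2.78)
            f(p1) - p1 * fprime(p1) - (f(p2) - p2 * fprime(p2))]  # equal intercepts, Eq. (2.79)

# the initial guess matters: the two values must bracket the dilute and the dense branch
phi1, phi2 = fsolve(tangent_conditions, x0=[1e-3, 0.5])
print(f"binodal compositions: phi' = {phi1:.4f}, phi'' = {phi2:.4f}")
```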
Fig. 2.13. Common tangent construction
Fig. 2.14. Phase diagram. Binodal and spinodal separate the stable one-phase region, the metastable regions, and the stable two-phase region
Problems

2.1. Osmotic Pressure of a Polymer Solution
Calculate the osmotic pressure Π = P − P′ for the Flory–Huggins model of a polymer solution (a polymer solution at pressure P is separated from the pure solvent at pressure P′ by a semipermeable membrane, so that μα(P, T) = μα0(P′, T)). The difference of the chemical potential of the solvent,
$$\mu_\alpha(P,T) - \mu_\alpha^0(P,T) = \left(\frac{\partial\Delta F_m}{\partial N_\alpha}\right)_{N_\beta,T},$$
can be obtained from the free energy change
$$\Delta F_m = N k_B T\left(\phi_\alpha\ln\phi_\alpha + \frac{\phi_\beta}{M}\ln\phi_\beta + \chi\phi_\alpha\phi_\beta\right)$$
and the osmotic pressure is given by
$$\mu_\alpha^0(P',T) - \mu_\alpha^0(P,T) = -\Pi\,\frac{\partial\mu_\alpha^0}{\partial P}.$$
Taylor series expansion gives the virial expansion of the osmotic pressure as a series of powers of φβ. Truncate after the quadratic term and discuss qualitatively the dependence on the interaction parameter χ (and hence also on temperature).

2.2. Polymer Mixture
Consider a mixture of two types of polymers with chain lengths M1 and M2 and calculate the mixing free energy change ΔFm. Determine the critical values φc and χc. Discuss the phase diagram for the symmetric case M1 = M2.
Part II
Protein Electrostatics and Solvation
3 Implicit Continuum Solvent Models
Since an explicit treatment of all solvent atoms and ions is not possible in many cases, the effect of the solvent on the protein has to be approximated by implicit models which replace the large number of dynamic solvent modes by a continuous medium [8–10].
3.1 Potential of Mean Force

In solution, a protein occupies a conformation with a probability given by the Boltzmann factor
$$P(X,Y) = \frac{\mathrm{e}^{-U(X,Y)/k_B T}}{\int \mathrm{d}X\,\mathrm{d}Y\,\mathrm{e}^{-U(X,Y)/k_B T}}, \qquad (3.1)$$
where X stands for the coordinates of the protein (including the protonation state) and Y stands for the coordinates of the solvent. The potential energy can be formally split into three terms:
$$U(X,Y) = U_{\mathrm{prot}}(X) + U_{\mathrm{solv}}(Y) + U_{\mathrm{prot,solv}}(X,Y). \qquad (3.2)$$
The mean value of a physical quantity Q(X) which depends only on the protein coordinates is
$$\langle Q\rangle = \int \mathrm{d}X\,\mathrm{d}Y\,Q(X)\,P(X,Y) = \int \mathrm{d}X\,Q(X)\,P(X), \qquad (3.3)$$
where we define a reduced probability distribution for the protein,
$$P(X) = \int \mathrm{d}Y\,P(X,Y), \qquad (3.4)$$
which is represented by introducing the potential of mean force W(X) as
$$P(X) = \frac{\mathrm{e}^{-W(X)/k_B T}}{\int \mathrm{d}X\,\mathrm{e}^{-W(X)/k_B T}} \qquad (3.5)$$
with
$$\mathrm{e}^{-W(X)/k_B T} = \mathrm{e}^{-U_{\mathrm{prot}}(X)/k_B T}\int \mathrm{d}Y\,\mathrm{e}^{-(U_{\mathrm{solv}}(Y)+U_{\mathrm{prot,solv}}(X,Y))/k_B T}
= \mathrm{e}^{-(U_{\mathrm{prot}}(X)+\Delta W(X))/k_B T}. \qquad (3.6)$$
ΔW accounts implicitly but exactly for the solvent's effect on the protein.
3.2 Dielectric Continuum Model

In the following, we discuss implicit solvent models which treat the solvent as a dielectric continuum. In response to the partial charges qi of the protein, polarization of the medium produces an electrostatic reaction potential φR (Fig. 3.1). If the medium behaves linearly (no dielectric saturation), the reaction potential is proportional to the charges:
$$\phi_i^R = \sum_j f_{ij}\,q_j. \qquad (3.7)$$
Let us now switch on the charges adiabatically by introducing a factor
$$q_i \to \lambda q_i, \qquad 0 < \lambda < 1. \qquad (3.8)$$
The change of the free energy is
$$\mathrm{d}F = \sum_i \phi_i q_i\,\mathrm{d}\lambda = \sum_{i\ne j}\frac{q_i q_j\,\lambda}{4\pi\varepsilon r_{ij}}\,\mathrm{d}\lambda + \sum_{ij} f_{ij}\,q_j q_i\,\lambda\,\mathrm{d}\lambda \qquad (3.9)$$
and thermodynamic integration gives the change of free energy due to Coulombic interactions:
$$\Delta F_{\mathrm{elec}} = \int_0^1 \lambda\,\mathrm{d}\lambda\left(\sum_{i\ne j}\frac{q_i q_j}{4\pi\varepsilon r_{ij}} + \sum_{ij} f_{ij}\,q_j q_i\right)
= \frac{1}{2}\sum_{i\ne j}\frac{q_i q_j}{4\pi\varepsilon r_{ij}} + \frac{1}{2}\sum_{ij} f_{ij}\,q_j q_i. \qquad (3.10)$$
The first part is a property of the protein and hence included in Uprot. The second part is the mean force potential:
$$\Delta W_{\mathrm{elec}} = \frac{1}{2}\sum_{ij} f_{ij}\,q_i q_j. \qquad (3.11)$$
Fig. 3.1. Charged, polar, and polarizable groups in proteins: mobile ions (H+, Na+, K+, Ca2+, Mg2+, OH−, Cl−), positively charged residues (histidine, lysine, arginine), dipolar residues (serine, threonine, asparagine, glutamine, methionine, cysteine), negatively charged residues (aspartate, glutamate), and polarizable residues (phenylalanine, tyrosine, tryptophan). Biological macromolecules contain chemical compounds with certain electrostatic properties. These are often modeled using localized electric multipoles (partial charges, dipoles, etc.) and polarizabilities
3.3 Born Model

The Born model [11] is a simple continuum model to calculate the solvation free energy of an ion. Consider a point charge q in the center of a cavity with radius a (the so-called Born radius of the ion). The dielectric constant is ε1 inside the sphere and ε2 outside (Fig. 3.2).
Fig. 3.2. Born model for ion solvation
The electrostatic potential is given outside the sphere by
$$\Phi = \frac{q}{4\pi\varepsilon_2 r} \qquad (3.12)$$
and inside by
$$\Phi = \Phi_0 + \frac{q}{4\pi\varepsilon_1 r}. \qquad (3.13)$$
From the continuity of the potential at the cavity surface, we find the reaction potential
$$\Phi_0 = \frac{q}{4\pi a}\left(\frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1}\right). \qquad (3.14)$$
Since there is only one charge, we have
$$f_{1,1} = \frac{1}{4\pi a}\left(\frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1}\right) \qquad (3.15)$$
and the solvation energy is given by the famous Born formula:
$$\Delta W_{\mathrm{elec}} = \frac{q^2}{8\pi a}\left(\frac{1}{\varepsilon_2} - \frac{1}{\varepsilon_1}\right). \qquad (3.16)$$
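As a numerical illustration of the Born formula (3.16), the sketch below evaluates the solvation energy of a monovalent ion in SI units; the Born radius of 2 Å is an arbitrary example value.

```python
import numpy as np

E0 = 8.8541878128e-12       # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
KJ_PER_MOL = 6.02214076e23 / 1000.0

def born_energy(q, a, eps1_rel, eps2_rel):
    """Born solvation energy, Eq. (3.16), in joules (epsilons are relative permittivities)."""
    return q**2 / (8 * np.pi * a) * (1 / (E0 * eps2_rel) - 1 / (E0 * eps1_rel))

# monovalent ion, Born radius 2 Angstrom, transferred from vacuum (eps = 1) to water (eps = 80)
dW = born_energy(E_CHARGE, 2e-10, eps1_rel=1.0, eps2_rel=80.0)
print(f"Born solvation energy: {dW * KJ_PER_MOL:.1f} kJ/mol")   # roughly -340 kJ/mol
```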
3.4 Charges in a Protein

The continuum model can be applied to a more general charge distribution in a protein. An important example is an ion pair within a protein (εr = 2) surrounded by water (εr = 80). We study an idealized model where the protein is represented by a sphere (Fig. 3.3). We will first treat a single charge within the sphere; a system of charges can then be treated by superposition of the individual contributions. Using polar coordinates (r, θ, ϕ), the potential of a system with axial symmetry (no dependence on ϕ) can be written with the help of Legendre polynomials as
$$\phi = \sum_{n=0}^{\infty}\bigl(A_n r^n + B_n r^{-(n+1)}\bigr)P_n(\cos\theta). \qquad (3.17)$$
Fig. 3.3. Ion pair in a protein
The general solution can be written as the sum of a special solution and a harmonic function. The special solution is given by the multipole expansion:
$$\frac{q}{4\pi\varepsilon_1|\mathbf{r}-\mathbf{s}_+|} = \frac{q}{4\pi\varepsilon_1 r}\sum_{n=0}^{\infty}\left(\frac{s}{r}\right)^n P_n(\cos\theta). \qquad (3.18)$$
Since the potential has to be finite at large distances, outside the sphere it has the form
$$\phi_2 = \sum_{n=0}^{\infty}B_n r^{-(n+1)}P_n(\cos\theta) \qquad (3.19)$$
and inside the potential is given by
$$\phi_1 = \sum_{n=0}^{\infty}\left(A_n r^n + \frac{q s^n}{4\pi\varepsilon_1}r^{-(n+1)}\right)P_n(\cos\theta). \qquad (3.20)$$
At the boundary, we have two conditions:
$$\phi_1(R) = \phi_2(R) \;\rightarrow\; B_n R^{-(n+1)} = A_n R^n + \frac{q s^n}{4\pi\varepsilon_1}R^{-(n+1)}, \qquad (3.21)$$
$$\varepsilon_1\frac{\partial}{\partial r}\phi_1(R) - \varepsilon_2\frac{\partial}{\partial r}\phi_2(R) = 0 \qquad (3.22)$$
$$\rightarrow\; -\frac{\varepsilon_2}{\varepsilon_1}(n+1)B_n R^{-(n+2)} = n A_n R^{n-1} - (n+1)\frac{q s^n}{4\pi\varepsilon_1}R^{-(n+2)}, \qquad (3.23)$$
from which the coefficients can be easily determined:
$$A_n = \frac{q s^n}{4\pi\varepsilon_1}\,R^{-1-2n}\,\frac{(\varepsilon_1-\varepsilon_2)(n+1)}{n\varepsilon_1+(n+1)\varepsilon_2}, \qquad
B_n = \frac{q s^n}{4\pi\varepsilon_1}\,\frac{(2n+1)\varepsilon_1}{n\varepsilon_1+(n+1)\varepsilon_2}. \qquad (3.24)$$
The potential inside the sphere is
$$\phi_1 = \frac{q}{4\pi\varepsilon_1|\mathbf{r}-\mathbf{s}_+|} + \phi^R \qquad (3.25)$$
with the reaction potential
$$\phi^R = \sum_{n=0}^{\infty}\frac{q s^n}{4\pi\varepsilon_1}\,R^{-1-2n}\,\frac{(\varepsilon_1-\varepsilon_2)(n+1)}{n\varepsilon_1+(n+1)\varepsilon_2}\,r^n P_n(\cos\theta) \qquad (3.26)$$
and the electrostatic energy is given (without the infinite self-energy) by
$$\frac{1}{2}\,q\,\phi^R(s,\cos\theta=1) = \frac{1}{2}\sum_{n=0}^{\infty}\frac{q^2 s^{2n}}{4\pi\varepsilon_1}\,R^{-1-2n}\,\frac{(\varepsilon_1-\varepsilon_2)(n+1)}{n\varepsilon_1+(n+1)\varepsilon_2}, \qquad (3.27)$$
which for ε2 ≫ ε1 is approximately
$$\frac{1}{2}\frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right)\sum_n\left(\frac{s}{R}\right)^{2n}
= -\frac{1}{2}\frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1-s^2/R^2}. \qquad (3.28)$$
Consider now two charges ±q at the symmetric positions ±s. The reaction potentials of the two charges add up to
$$\phi^R = \phi_+^R + \phi_-^R \qquad (3.29)$$
and the electrostatic free energy is given by
$$\frac{-q^2}{4\pi\varepsilon_1(2s)} + \frac{1}{2}q\phi_+^R(s) + \frac{1}{2}q\phi_-^R(s) + \frac{1}{2}(-q)\phi_+^R(-s) + \frac{1}{2}(-q)\phi_-^R(-s). \qquad (3.30)$$
By comparison, we find
$$\frac{q^2}{2}f_{++} = \frac{1}{2}q\phi_+^R(s) = -\frac{1}{2}\frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1-s^2/R^2}, \qquad (3.31)$$
$$\frac{(-q)^2}{2}f_{--} = \frac{1}{2}(-q)\phi_-^R(-s) = -\frac{1}{2}\frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1-s^2/R^2}, \qquad (3.32)$$
$$\begin{aligned}
\frac{(-q)q}{2}f_{-+} &= \frac{1}{2}(-q)\phi_+^R(-s) = \frac{-q}{2}\sum_{n=0}^{\infty}\frac{q s^n}{4\pi\varepsilon_1}\,R^{-1-2n}\,\frac{(\varepsilon_1-\varepsilon_2)(n+1)}{n\varepsilon_1+(n+1)\varepsilon_2}\,(-s)^n \\
&\approx -\frac{q^2}{2}\frac{1}{4\pi R}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right)\sum_{n=0}^{\infty}(-1)^n\left(\frac{s}{R}\right)^{2n}
= -\frac{1}{2}\frac{(-q^2)}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1+s^2/R^2}, \qquad (3.33)
\end{aligned}$$
$$\frac{q(-q)}{2}f_{+-} = \frac{1}{2}q\phi_-^R(s) = \frac{(-q)q}{2}f_{-+}, \qquad (3.34)$$
and finally the solvation energy is
$$W_{\mathrm{elec}} = \sum_{n=1}^{\infty}\frac{q^2 s^n}{4\pi\varepsilon_1}\,R^{-1-2n}\,\frac{(\varepsilon_1-\varepsilon_2)(n+1)}{n\varepsilon_1+(n+1)\varepsilon_2}\,\bigl(s^n - (-s)^n\bigr), \qquad (3.35)$$
$$W_{\mathrm{elec}} \approx -\frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1-s^2/R^2} + \frac{q^2}{4\pi R}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1+s^2/R^2}
= -\frac{(2qs)^2}{8\pi R^3}\left(\frac{1}{\varepsilon_1}-\frac{1}{\varepsilon_2}\right)\frac{1}{1-s^4/R^4}. \qquad (3.36)$$
If the extension of the system of charges in the protein is small compared to the radius, s ≪ R, the multipole expansion of the reaction potential converges rapidly. Since the total charge of the ion pair is zero, the monopole contribution (n = 0),
$$W_{\mathrm{elec}}^{(1)} = \frac{Q^2}{8\pi R}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right) \qquad (3.37)$$
(which is nothing but the Born energy (3.16)), vanishes and the leading term is the dipole contribution¹ (n = 1),
$$W_{\mathrm{elec}}^{(2)} = \frac{p^2}{4\pi\varepsilon_1 R^3}\,\frac{\varepsilon_1-\varepsilon_2}{\varepsilon_1+2\varepsilon_2}. \qquad (3.38)$$
3.5 Generalized Born Models

The generalized Born model approximates the protein as a sphere with a dielectric constant different from that of the solvent [12, 13]. Still and coworkers [13] proposed an approximate expression for the solvation free energy of an arbitrary distribution of N charges,
$$W_{\mathrm{elec}} = \frac{1}{8\pi}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right)\sum_{i,j=1}^{N}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij}, a_i, a_j)} \qquad (3.39)$$
with the smooth function
$$f_{\mathrm{GB}} = \sqrt{r_{ij}^2 + a_i a_j\exp\!\left(-\frac{r_{ij}^2}{4 a_i a_j}\right)}, \qquad (3.40)$$
where the effective Born radius ai accounts for the effect of neighboring solute atoms. Several methods have been developed to calculate appropriate values of the ai [13–16].
This has been associated with several names (Bell, Onsager, and Kirkwood).
Expression (3.39) interpolates between the Born energy of a total charge Nq at short distances,
$$W_{\mathrm{elec}} \to \frac{N^2 q^2}{8\pi a}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right) \quad\text{for } r_{ij}\to 0, \qquad (3.41)$$
and the sum of the individual Born energies plus the change in Coulombic energy,
$$W_{\mathrm{elec}} \to \frac{1}{8\pi}\left(\frac{1}{\varepsilon_2}-\frac{1}{\varepsilon_1}\right)\left(\sum_i\frac{q_i^2}{a_i} + \sum_{i\ne j}\frac{q_i q_j}{r_{ij}}\right) \quad\text{for } r_{ij}\gg\sqrt{a_i a_j}, \qquad (3.42)$$
for a set of well-separated charges. It gives a reasonable description in many cases without the computational needs of a full electrostatics calculation.
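A minimal numerical sketch of the pairwise sum (3.39)–(3.40) is given below. It assumes the effective Born radii are already known (computing them, e.g., by the methods of [13–16], is outside the scope of this sketch), and the prefactor follows the 1/8π convention used in (3.39); all numbers are arbitrary example values.

```python
import numpy as np

def f_gb(r, ai, aj):
    """Smoothing function of Still et al., Eq. (3.40)."""
    return np.sqrt(r**2 + ai * aj * np.exp(-r**2 / (4.0 * ai * aj)))

def gb_energy(coords, charges, born_radii, eps_in=1.0, eps_out=80.0):
    """Generalized Born solvation energy, Eq. (3.39); the i = j terms give the self (Born) energies."""
    w = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(coords[i] - coords[j])
            w += charges[i] * charges[j] / f_gb(r, born_radii[i], born_radii[j])
    return (1.0 / (8.0 * np.pi)) * (1.0 / eps_out - 1.0 / eps_in) * w

# toy example: two opposite unit charges three length units apart, Born radii of 2
coords = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(gb_energy(coords, charges=[1.0, -1.0], born_radii=[2.0, 2.0]))
```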
4 Debye–Hückel Theory
Mobile charges like Na+ or Cl− ions are very important for the functioning of biomolecules. In a stationary state, their concentration depends on the local electrostatic field produced by the charge distribution of the biomolecule, which in turn depends, for instance, on the protonation state and on the conformation. In this chapter, we present continuum models to describe the interaction between a biomolecule and the surrounding mobile charges [17–20].
4.1 Electrostatic Shielding by Mobile Charges

We consider a fully dissociated (strong) electrolyte containing Ni mobile ions of sort i = 1 . . . with charges Zi e per unit volume. The charge density of the mobile charges is given by the average numbers of ions per volume:
$$\varrho_{\mathrm{mob}}(\mathbf{r}) = \sum_i Z_i e\,\overline{N}_i(\mathbf{r}). \qquad (4.1)$$
The electrostatic potential is given by the Poisson equation:
$$\varepsilon\Delta\phi(\mathbf{r}) = -\varrho(\mathbf{r}) = -\varrho_{\mathrm{mob}}(\mathbf{r}) - \varrho_{\mathrm{fix}}(\mathbf{r}). \qquad (4.2)$$
Debye and Hückel [21] used Boltzmann's theorem to determine the mobile charge density. Without the presence of fixed charges, the system is neutral,
$$0 = \varrho_{\mathrm{mob}}^0 = \sum_i Z_i e N_i^0, \qquad (4.3)$$
and the constant value of the potential can be chosen as zero:
$$\phi^0 = 0. \qquad (4.4)$$
The fixed charges produce a change of the potential. The electrostatic energy of an ion of sort i is
$$W_i = Z_i e\,\phi(\mathbf{r}) \qquad (4.5)$$
and the density of such ions is given by a Boltzmann distribution,
$$\frac{\overline{N}_i(\mathbf{r})}{N_i^0} = \frac{\mathrm{e}^{-Z_i e\phi(\mathbf{r})/k_B T}}{\mathrm{e}^{-Z_i e\phi^0/k_B T}}, \qquad (4.6)$$
or
$$\overline{N}_i(\mathbf{r}) = N_i^0\,\mathrm{e}^{-Z_i e\phi(\mathbf{r})/k_B T}. \qquad (4.7)$$
The total mobile charge density is
$$\varrho_{\mathrm{mob}}(\mathbf{r}) = \sum_i Z_i e N_i^0\,\mathrm{e}^{-Z_i e\phi(\mathbf{r})/k_B T} \qquad (4.8)$$
and we obtain the Poisson–Boltzmann equation:
$$\varepsilon\Delta\phi(\mathbf{r}) = -\sum_i Z_i e N_i^0\,\mathrm{e}^{-Z_i e\phi(\mathbf{r})/k_B T} - \varrho_{\mathrm{fix}}(\mathbf{r}). \qquad (4.9)$$
If the solution is very dilute, we can expect that the ion–ion interaction is much smaller than the thermal energy,
$$Z_i e\phi \ll k_B T, \qquad (4.10)$$
and linearize the Poisson–Boltzmann equation:
$$\varepsilon\Delta\phi(\mathbf{r}) = -\varrho_{\mathrm{fix}}(\mathbf{r}) - \sum_i Z_i e N_i^0\left(1 - \frac{Z_i e}{k_B T}\phi(\mathbf{r}) + \cdots\right). \qquad (4.11)$$
The first summand vanishes due to electroneutrality and we finally find
$$\Delta\phi(\mathbf{r}) - \kappa^2\phi(\mathbf{r}) = -\frac{1}{\varepsilon}\varrho_{\mathrm{fix}}(\mathbf{r}) \qquad (4.12)$$
with the inverse Debye length
$$\lambda_{\mathrm{Debye}}^{-1} = \kappa = \sqrt{\frac{e^2}{\varepsilon k_B T}\sum_i N_i^0 Z_i^2}. \qquad (4.13)$$
4.2 1–1 Electrolytes

If there are only two types of ions with charges Z1,2 = ±1 (as also in semiconductor physics), the Poisson–Boltzmann equation can be written as
$$\frac{e}{k_B T}\Delta\phi(\mathbf{r}) + \frac{e^2 N^0}{\varepsilon k_B T}\left(\mathrm{e}^{-e\phi(\mathbf{r})/k_B T} - \mathrm{e}^{e\phi(\mathbf{r})/k_B T}\right) = -\frac{e}{\varepsilon k_B T}\varrho_{\mathrm{fix}}(\mathbf{r}), \qquad (4.14)$$
which, after the substitution
$$\tilde{\phi}(\mathbf{r}) = \frac{e}{k_B T}\phi(\mathbf{r}), \qquad (4.15)$$
takes the form
$$\Delta\tilde{\phi}(\mathbf{r}) - \kappa^2\sinh\bigl(\tilde{\phi}(\mathbf{r})\bigr) = -\frac{e}{\varepsilon k_B T}\varrho_{\mathrm{fix}}(\mathbf{r}), \qquad (4.16)$$
and with the scaled radius vector $\mathbf{r}' = \kappa\mathbf{r}$ and $\tilde{\phi}(\mathbf{r}) = f(\mathbf{r}') = f(\kappa\mathbf{r})$,
$$\Delta' f(\mathbf{r}') - \sinh\bigl(f(\mathbf{r}')\bigr) = -\frac{e}{\kappa^2\varepsilon k_B T}\varrho_{\mathrm{fix}}(\kappa^{-1}\mathbf{r}'). \qquad (4.17)$$
4.3 Charged Sphere

We consider a spherical protein (radius R) with a charged sphere (radius a) in its center (Fig. 4.1). For a spherical problem, we only have to consider the radial part of the Laplacian, and the linearized Poisson–Boltzmann equation outside the protein becomes
$$\frac{1}{r}\frac{\mathrm{d}^2}{\mathrm{d}r^2}\bigl(r\phi(r)\bigr) - \kappa^2\phi(r) = 0, \qquad (4.18)$$
which has the solution
$$\phi_2(r) = \frac{c_1\mathrm{e}^{-\kappa r}}{r} + \frac{c_2\mathrm{e}^{\kappa r}}{r}. \qquad (4.19)$$
Since the potential should vanish at large distances, we have c2 = 0. Inside the protein (a < r < R), solution of the Poisson equation gives
$$\phi_1(r) = c_3 + \frac{Q}{4\pi\varepsilon_1 r}. \qquad (4.20)$$
At the boundary, we have the conditions
$$\phi_1(R) = \phi_2(R) \;\rightarrow\; c_3 = \frac{c_1\mathrm{e}^{-\kappa R}}{R} - \frac{Q}{4\pi\varepsilon_1 R}, \qquad (4.21)$$
$$\varepsilon_1\frac{\partial}{\partial r}\phi_1(R) = \varepsilon_2\frac{\partial}{\partial r}\phi_2(R) \;\rightarrow\; -\frac{Q}{4\pi R^2} = c_1\varepsilon_2\left(-\kappa\frac{\mathrm{e}^{-\kappa R}}{R} - \frac{\mathrm{e}^{-\kappa R}}{R^2}\right), \qquad (4.22)$$
Fig. 4.1. Simple model of a charged protein
Fig. 4.2. Charged sphere in an electrolyte. The potential φ(r) is shown for a = 0.1, R = 1.0
which determine the constants:
$$c_1 = \frac{Q\,\mathrm{e}^{\kappa R}}{4\pi\varepsilon_2(1+\kappa R)} \qquad (4.23)$$
and
$$c_3 = -\frac{Q}{4\pi\varepsilon_1 R} + \frac{Q}{4\pi\varepsilon_2 R(1+\kappa R)}. \qquad (4.24)$$
Together, we find the potential inside the sphere,
$$\phi_1(r) = \frac{Q}{4\pi\varepsilon_1}\left(\frac{1}{r} - \frac{1}{R}\right) + \frac{Q}{4\pi\varepsilon_2 R(1+\kappa R)}, \qquad (4.25)$$
and outside (Fig. 4.2),
$$\phi_2(r) = \frac{Q}{4\pi\varepsilon_2(1+\kappa R)}\,\frac{\mathrm{e}^{-\kappa(r-R)}}{r}. \qquad (4.26)$$
The ion charge density is given by
$$\varrho_{\mathrm{mob}}(r) = -\varepsilon_2\Delta\phi_2 = -\varepsilon_2\kappa^2\phi_2; \qquad (4.27)$$
hence, the magnitude of the ion charge at distances between r and r + dr is
$$\frac{\kappa^2 Q}{1+\kappa R}\,r\,\mathrm{e}^{-\kappa(r-R)}\,\mathrm{d}r. \qquad (4.28)$$
This function has a maximum at rmax = 1/κ and decays exponentially at larger distances (Fig. 4.3).
Fig. 4.3. Charge density around the charged sphere for a = 0.1, R = 1.0
Let the charge be concentrated on the surface of the inner sphere. Then we have Q 1 1 Q φ1 (a) = − + . (4.29) 4πε1 a R 4πε2 R(1 + κR) Without the medium (ε2 = ε1 , κ = 0), the potential would be φ01 (a) =
Q . 4πε1 a
(4.30)
Hence, the solvation energy is 1 1 Q2 W = QφR = Q(φ1 (a) − φ01 (a)) = 2 2 8πR
1 1 − , ε2 (1 + κR) ε1
which for κ = 0 is given by the well-known Born formula: Q2 1 1 W =− − . 8πR ε1 ε2
(4.31)
(4.32)
For a → R, ε1 = ε2 this becomes the solvation energy of an ion in solution: ΔGsol = W = −
Q2 κ . 8πε (1 + κR)
(4.33)
4.4 Charged Cylinder Next, we discuss a cylinder of radius a and length l a carrying the net charge N e uniformly distributed on its surface: σ=
Ne . 2πal
(4.34)
50
4 Debye–Hückel Theory r a z l
Fig. 4.4. Charged cylinder model
Outside the cylinder, this charge distribution is equivalent to a linear distribution of charges along the axis of the cylinder with a one-dimensional density (Fig. 4.4): Ne e = 2πaσ = . (4.35) l b For the general case of a charged cylinder in an ionic solution, we have to restrict the discussion to the linearized Poisson–Boltzmann equation which becomes outside the cylinder d 1 d r φ(r) = κ2 φ(r). (4.36) r dr dr Substitution r → x = κr gives the equation d2 1 d φ(x) − φ(x) = 0. φ(x) + dx2 x dx
(4.37)
Solutions of this equation are the modified Bessel functions of order zero denoted as I0 (x) and K0 (x). For large values of x, lim I0 (x) = ∞,
x→∞
lim K0 (x) = 0,
x→∞
(4.38)
and hence the potential in the outer region has the form φ(r) = C1 K0 (κr).
(4.39)
Inside the cylinder surface, the electric field is given by Gauss’ theorem: 2πrlε1 E(r) = N e, E(r) = −
Ne dφ(r) = , dr 2πε1 rl
(4.40) (4.41)
and hence the potential inside is φ(r) = C2 −
Ne ln r. 2πε1 l
(4.42)
4.4 Charged Cylinder
51
The boundary conditions are φ(a) = C1 K0 (κa) = C2 −
Ne ln a, 2πε1 l
Ne dφ(a) dφ(a) =− = ε2 = ε2 C1 (−κK1 (κa)), dr 2πal dr from which we find Ne C1 = 2πalε2 κK1 (κa) ε1
(4.43) (4.44)
(4.45)
and C2 =
Ne N e K0 (κa) + ln a. 2πalε2 κ K1 (κa) 2πε1 l
(4.46)
The potential is then outside N e K0 (κr) 2πalε2 κ K1 (κa)
(4.47)
N e K0 (κa) Ne Ne + ln a − ln r. 2πalε2 κ K1 (κa) 2πε1 l 2πε1 l
(4.48)
φ(r) = and inside φ(r) =
For small κa → 0, we use the asymptotic behavior of the Bessel functions K0 (x) → ln K1 (x) →
2 − γ + ··· , x
γ = 0.577 · · · ,
1 + ··· x
(4.49)
to have approximately Ne Ne , κa = 2πalε2 κ 2πlε2 Ne Ne 2 −γ + ln a. C2 ≈ ln 2πlε2 κa 2πε1 l C1 ≈
(4.50)
The potential outside is φ(r) = − and inside is
Ne κ γ + ln + ln r 2πlε2 2
Ne 2 Ne Ne ln −γ + ln a − ln r 2πlε2 κa 2πε1 l 2πε1 l γ 1 Ne 1 κ 1 1 − − = ln + − ln r . ln a − 2πl ε2 ε2 2 ε1 ε2 ε1
(4.51)
φ(r) =
(4.52)
Outside the cylinder, the potential consists of the potential of the charged line (∝ ln r) and an additional contribution from the screening ions (Fig. 4.5).
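The Bessel-function solution (4.47) and its small-κa approximation (4.51) are easily compared numerically, as in Fig. 4.5. The sketch below uses SciPy's modified Bessel functions; the prefactor Ne/(2πalε2) is set to unity and a = 1, mirroring the figure.

```python
import numpy as np
from scipy.special import k0, k1

def phi_outside(r, a=1.0, kappa=0.05, prefactor=1.0):
    """Potential outside a charged cylinder, Eq. (4.47), up to the prefactor Ne/(2*pi*a*l*eps2)."""
    return prefactor * k0(kappa * r) / (kappa * k1(kappa * a))

def phi_approx(r, a=1.0, kappa=0.05, prefactor=1.0):
    """Small kappa*a approximation in the spirit of Eq. (4.51): -ln(kappa/2) - gamma - ln(r), for a = 1."""
    gamma = 0.5772156649
    return prefactor * a * (-np.log(kappa / 2.0) - gamma - np.log(r))

for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f}: exact = {phi_outside(r):.3f}, approx = {phi_approx(r):.3f}")
```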
Fig. 4.5. Potential of a charged cylinder with unit radius a = 1 for κ = 0.05. Solid curve: K0 (κr)/(κK1 (κ)). Dashed curve: approximation by − ln(κ/2) − ln r − γ
Fig. 4.6. Goüy–Chapman double layer
4.5 Charged Membrane (Goüy–Chapman Double Layer)

We approximate the surface charge of a membrane by a thin layer carrying a homogeneous charge distribution (Fig. 4.6). Goüy [22, 23] and Chapman [24] derived the potential in a manner similar to Debye–Hückel theory. For a 1–1 electrolyte (e.g., NaCl), the one-dimensional Poisson–Boltzmann equation has the form (with transformed variables as above)
$$\frac{\mathrm{d}^2 f(x)}{\mathrm{d}x^2} - \sinh\bigl(f(x)\bigr) = g(x), \qquad (4.53)$$
where the source term $g(x) = -\bigl(e/(\kappa^2\varepsilon k_B T)\bigr)\,\varrho(x/\kappa)$ has the character of a δ-function centered at x = 0. Consider an area A of the membrane and integrate along the x-axis:
$$\int\mathrm{d}A\int_{-0}^{+0}\varrho(x)\,\mathrm{d}x = \sigma_0 A, \qquad (4.54)$$
$$\int\mathrm{d}A\int_{-0}^{+0}\mathrm{d}x\,g(x) = -\frac{e}{\kappa^2\varepsilon k_B T}\int\mathrm{d}A\int_{-0}^{+0}\kappa\,\mathrm{d}x'\,\varrho(x') = -\frac{e}{\kappa\varepsilon k_B T}\,\sigma_0 A. \qquad (4.55)$$
Hence, we identify
$$\varrho(x) = \sigma_0\,\delta(x), \qquad g(x) = -\frac{e\sigma_0}{\kappa\varepsilon k_B T}\,\delta(x). \qquad (4.56)$$
The Poisson–Boltzmann equation can be solved analytically in this simple case. But first, we study the linearized homogeneous equation,
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}f(x) - f(x) = 0, \qquad (4.57)$$
with the solution
$$f(x) = f_0\,\mathrm{e}^{\pm x} \qquad (4.58)$$
or, going back to the potential,
$$\phi(x) = \frac{k_B T}{e}f_0\,\mathrm{e}^{\pm\kappa x} = \phi_0\,\mathrm{e}^{\pm\kappa x}. \qquad (4.59)$$
The membrane potential is related to the surface charge density. Let us assume that on the left side (x < 0) the medium has a dielectric constant ε1 and on the right side ε. Since in one dimension the field in a dielectric medium does not decay, we introduce a shielding constant κ1 on the left side and take the limit κ1 → 0 to remove contributions not related to the membrane charge. The potential then is given by
$$\phi(x) = \begin{cases}\phi_0\,\mathrm{e}^{-\kappa x}, & x > 0,\\ \phi_0\,\mathrm{e}^{\kappa_1 x}, & x < 0.\end{cases} \qquad (4.60)$$
The membrane potential φ0 is determined from the boundary condition
$$\varepsilon\frac{\mathrm{d}\phi}{\mathrm{d}x}(+0) - \varepsilon_1\frac{\mathrm{d}\phi}{\mathrm{d}x}(-0) = -\sigma_0, \qquad (4.61)$$
which gives
$$-\varepsilon\kappa\phi_0 - \varepsilon_1\kappa_1\phi_0 = -\sigma_0. \qquad (4.62)$$
In the limit κ1 → 0, we find
$$\phi_0 = \frac{\sigma_0}{\varepsilon\kappa}. \qquad (4.63)$$
For x < 0 the potential is constant, and for x > 0 the charge density is (Figs. 4.7 and 4.8)
$$\varrho(x) = -\varepsilon\frac{\mathrm{d}^2\phi(x)}{\mathrm{d}x^2} = -\sigma_0\kappa\,\mathrm{e}^{-\kappa x}, \qquad (4.64)$$
a
φ(x)
φo
x
b
ρ(x)
x
Fig. 4.7. Electrolytic double layer. (a) The potential is continuous and decreases exponentially in the electrolyte. (b) The charge density has a sharp peak on the membrane which is compensated by the exponentially decreasing charge in the electrolyte 1
co-ions Φ
0
total
–1
counter-ions
–2
–3
0
1
2
r
3
4
5
Fig. 4.8. Charge density of counter- and co-ions. For an exponentially decaying potential Φ(r) (full curve), the density of counter- and co-ions (dashed curves) and the total charge density (dash-dotted curve) are shown
which adds up to a total net charge per unit area of ∞ (x)dx = −σ0 .
(4.65)
0
Hence, the system is neutral and behaves like a capacity of σ0 A A = εκA = ε . φ0 LDebye
(4.66)
The solution of the nonlinear homogeneous equation can be found multiplying the equation with df (x)/dx: df df d2 f = sinh(f ) 2 dx dx dx
(4.67)
4.5 Charged Membrane (Goüy–Chapman Double Layer)
55
and rewriting this as 1 d 2 dx
df dx
2
d cosh(f ), dx
(4.68)
= 2 [cosh(f ) + C] .
(4.69)
=
which can be integrated
df dx
2
The constant C is determined by the asymptotic behavior1 lim f (x) = lim
x→∞
x→∞
df =0 dx
(4.70)
and obviously has the value C = −1. Making use of the relation cosh(f ) − 1 = 2 sinh we find
d f (x) = dx
2 f , 2
f (x) ±2 sinh . 2
(4.71)
(4.72)
Separation of variables then gives
with the solution
df = ±dx 2 sinh(f /2)
(4.73)
x C + . f (x) = 2 ln ± tanh 2 2
(4.74)
For x > 0, only the plus sign gives a physically meaningful expression. The constant C is generally complex-valued. It can be related to the potential at the membrane surface: 1 + ef (0)/2 f (0)/2 ) = ln . (4.75) C = 2 arctanh(e 1 − ef (0)/2 For f (0) > 0, the argument of the logarithm becomes negative. Hence, we replace in (4.74) C by C + iπ to have (Fig. 4.9) x C iπ x C f (x) = 2 ln tanh + + = −2 ln tanh + , (4.76) 2 2 2 2 2
1
We consider only solutions for the region x > 0 in the following.
56
4 Debye–Hückel Theory
f(x)
a
b 1
5
0.8
4
0.6
3
0.4
2
0.2
1
0
0
1
2
3 x
4
5
6
a
0
0
1
2
3 x
4
5
6
b 15 1
f’’(x)
10 0.5 5
0
0
1
2
3
4
5
0
0
1
x
2
3
4
5
x
Fig. 4.9. One-dimensional Poisson–Boltzmann equation. The solutions of the full (full curves) and the linearized equation (broken curves) are compared for (a) B = −1 and (b) B = −5
where C is now given by C = 2 arctanh(e−f (0)/2 ) = ln
1 + e−f (0)/2 1 − e−f (0)/2
.
(4.77)
The integration constant C is again connected to the surface charge density by dφ σo (0) = − (4.78) dx ε and from d kB T d kB T φ(x) = f (κx) = κf (κx), (4.79) dx e dx e we find kB T σ0 =− κf (0). (4.80) ε e Now the derivative is in the case f (0) > 0 given by x C 1 &x f (x) = tanh + − 2 2 tanh 2 +
C 2
'
(4.81)
4.6 Stern Modification of the Double Layer
and especially
f (0) = tanh
C 2
−
1 tanh(C/2)
57
(4.82)
and we have to solve the equation t−
eσ0 1 =− = B, t kB T κε
which yields2 B + t= 2 C = 2arctanh
(4.83)
√ B2 + 4 , 2 √ B B2 + 4 + . 2 2
(4.84) (4.85)
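The closed-form Goüy–Chapman profile (4.74)–(4.85) can be compared directly with the linearized solution, as in Fig. 4.9. The following sketch is illustrative; B is the reduced surface charge of (4.83) and x is measured in units of the Debye length.

```python
import numpy as np

def gouy_chapman(x, B):
    """Nonlinear solution of the 1D Poisson-Boltzmann equation for x > 0, Eqs. (4.76)-(4.85).

    B = -e*sigma0/(kB*T*kappa*eps) is the reduced surface charge (negative for sigma0 > 0).
    """
    t = B / 2.0 + np.sqrt(B**2 + 4.0) / 2.0      # Eq. (4.84); 0 < t < 1 for B < 0
    C = 2.0 * np.arctanh(t)                      # Eq. (4.85)
    return -2.0 * np.log(np.tanh(x / 2.0 + C / 2.0))   # Eq. (4.76)

def linearized(x, B):
    """Debye-Hueckel (linearized) profile with the same surface charge."""
    return -B * np.exp(-x)

x = np.linspace(0.0, 5.0, 6)
for B in (-1.0, -5.0):
    print("B =", B)
    print("  nonlinear :", np.round(gouy_chapman(x, B), 3))
    print("  linearized:", np.round(linearized(x, B), 3))
```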
4.6 Stern Modification of the Double Layer Real ions have a finite radius R. Therefore, they cannot approach the membrane closer than R and the ion density has a maximum possible value, which is reached when the membrane is occupied by an ion layer. To account for this, Stern [25] extended the Goüy–Chapman model by an additional ion layer between the membrane and the diffusive ion layer (Fig. 4.10). φ(x) φM φ0
x
d + + + + + + + + +
− −
−
−
−
−
+
−
−
+
+
− −
Stern layer
−
−
Goüy−Chapman layer
charged plate (membrane)
Fig. 4.10. Stern modification of the double layer
2
The second root leads to imaginary values.
58
4 Debye–Hückel Theory
Within the Stern layer of thickness d, there are no charges and the potential drops linearly from the membrane potential φM to a value φ0 : φ(x) = φM −
φM − φ0 x, d
0 < x < d.
(4.86)
In the diffusive Goüy–Chapman layer, the potential decays exponentially φ(x) = φ0 e−κ(x−d) .
(4.87)
Assuming the same dielectric constant for both layers, we have to fulfill the boundary condition φM − φ0 dφ (d) = − = −κφ0 (4.88) dx d and hence φM φ0 = . (4.89) 1 + κd The total ion charge in the diffusive layer now is ∞ ∞ (x)dx = −Aεκ2 φ0 e−κ(x−a) dx = −Aεκφ0 qdiff = d
= −A
d
εκ φM 1 + κd
(4.90)
and the capacity is C=
−qdiff Aε = , φM d + κ−1
(4.91)
which are just the capacities of the two layers in series 1 1 Ad AλDebye 1 + . = + = C ε ε CStern Cdiff
(4.92)
Problems 4.1. Membrane Potential Consider a dielectric membrane in an electrolyte with an applied voltage V . V
I κ εw
II
III κ εw
εm
0
L
x
4.6 Stern Modification of the Double Layer
59
Solve the linearized Poisson–Boltzmann equation d2 φ(x) = κ2 (φ(x) − φ(0) ) dx2 with boundary conditions φ(0) (I) = φ(−∞) = 0, φ(0) (III) = φ(∞) = V, and determine the voltage difference ΔV = φ(L) − φ(0). Calculate the charge density mob (x) and the integrated charge on both sides of the membrane. What is the capacity of the membrane? 4.2. Ionic Activity The chemical potential of an ion with charge Ze is given in terms of the activity a by μ = μ0 + kB T ln a. Assume that the deviation from the ideal behavior μideal = μ0 + kB T ln c is due to electrostatic interactions only. Then for an ion with radius R, Debye–Hückel theory gives μ − μideal = kB T ln a − kB T ln c = Δ(μα − μ0α )Gsolv = −
κ Z 2 e2 . 8πε (1 + κR)
For a 1–1 electrolyte, calculate the mean activity coefficient
c c a+ a− c γ± = γ+ γ− = c+ c− and discuss the limit of extremely dilute solutions (Debye–Hückel limiting law).
5 Protonation Equilibria
Some of the amino acid residues building a protein can be in different protonation states and therefore differently charged states (Fig. 5.1). O
O
C
C O
C
−
NH2 NH
OH C
NH+3
Aspartic Acid Glutamic Acid
Arginine
NH
NH2
NH+3
Lysine
N
NH+
Histidine
Fig. 5.1. Functional groups which can be in different protonation states
In this chapter, we discuss the dependence of the free energy of a protein on the electrostatic interactions of its charged residues. We investigate the chemical equilibrium between a large number of different protein conformations and the dependence on the pH value [26].
5.1 Protonation Equilibria in Solution We consider a dilute aqueous solution containing N titrable molecules (Fig. 5.2) (i.e., which can be in two different protonation states 0 = deprotonated, 1 = protonated). For one molecule, we have for fixed protonation state G0 = F0 + pV0 = −kB T ln z0 + pV0 ,
(5.1)
G1 = F1 + pV1 = −kB T ln z1 + pV1 ,
(5.2)
62
5 Protonation Equilibria H 2O
H 2O
H 2O
H2O
H 2O H 2O
H 2O
H 3O
+
H 2O
H 2O
H 2O
H 2O
H 2O
H 2O
H 2O
R
O C OH
OH
H 2O
−
H 2O
H 2O H2O
H2O H 2O
H 2O
H 2O
H2O
Fig. 5.2. Titration in solution
and the free enthalpy difference between protonated and deprotonated form of the molecule is z1 G1 − G0 = −kB T ln + pΔV. (5.3) z0 In the following, the volume change will be neglected.1 If we now put such N molecules into the solution, the number of protonated molecules can fluctuate by exchanging protons with the solvent. Removal of one proton from the solvent costs a free enthalpy of ΔG = −μH+ ,
(5.4)
where μH+ is the chemical potential of the proton in solution.2 Hence, we come to the grand canonical partition function (5.5) Ξ= ZM eμM/kB T , where the partition function for fixed number of protonated molecules M is given by N! ZM = z M z N −M (5.6) M !(N − M )! 1 0 if the N molecules cannot be distinguished. Hence we find Ξ= 1
2
N! (z1 eμ/kB T )M z0N −M = (z0 + z1 eμ/kB T )N . M !(N − M )!
(5.7)
At atmospheric pressure, the mechanic work term pΔV is very small. We prefer to discuss the free enthalpy G in the following since experimentally usually temperature and pressure are constant. We omit the index H+ in the following.
5.1 Protonation Equilibria in Solution
63
protonation degree
1 0.8 0.6 0.4 0.2 0
–4
–2
0 μ − ΔGp
2
4
Fig. 5.3. Henderson–Hasselbalch titration curve. The protonation degree (5.9) is shown for kB T = 1, 2, 5
The average number of protonated molecules can be found from M=
∂ z1 eμ/kB T ln Ξ = N ∂(μ/kB T ) z0 + z1 eμ/kB T
(5.8)
and the protonation degree, that is, the fraction of protonated molecules is (Fig. 5.3) 1 M 1 = = . (5.9) N 1 + (z0 /z1 )e−μ/kB T 1 + e(G1 −G0 −μ)/kB T In physical chemistry [27, 28], the following quantities are usually introduced: •
The activity of a species ai is defined by μi = μ0i + kB T ln ai
(5.10)
in analogy to μi = μ0i + kB T ln pi /p0i for the ideal gas. For very dilute solutions, it can be approximated by μi = kB T ln Ni − kB T ln zi = μ0i + kB T ln •
•
ci , c0
where (0) indicates the standard state (usually p0 = 1 atm, c0 = 1 mol/l). The concentration of protons is measured by the pH value3 μH+ c(H+ ) pH = − log10 aH+ = − ≈ − log10 . (5.12) kB T ln(10) c0 The standard reaction enthalpy of an acid–base equilibrium AH A− + H+ ,
3
(5.11)
The standard enthalpy of formation for a proton is zero per definition.
(5.13)
64
5 Protonation Equilibria
where 0=
νi μi = ΔG0r + kB T
Ka = e−ΔGr /kB T = 0
(5.14)
νi ln ai ,
a(A− )a(H+ ) , a(AH)
(5.15)
is measured by the pKa value: 1 ΔG0r = − log10 pKa = − log10 (Ka ) = ln(10) kB T ⎛ ⎞ − c(A ) c(H+ ) ≈ − log10 ⎝ c0c(AH)c0 ⎠ ,
a(A− )a(H+ ) a(AH)
(5.16)
c0
which is usually simply written as4 pKa = − log10
c(A− )c(H+ ) c(AH)
,
(5.17)
which together with (5.12) gives the Henderson–Hasselbalch equation: c(A− ) pH − pKa = log10 . (5.18) c(AH) The standard reaction enthalpy of the acid–base equilibrium is (with the approximation (5.11)) ΔG0r = μ0HA + μ0 − + μ0H+ M = −(kB T ln L − kB T ln z1 ) + (kB T ln L − kB T ln z0 ) z0 = −(G1 − G0 ). = −kB T ln z1
(5.19)
In the language of physical chemistry, the protonation degree (5.9) is given by c(AH) 1 = . (5.20) c(AH) + c(A− ) 1 + 10(pH−pKa )
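As a small illustration of (5.20), the sketch below evaluates the protonation degree over a range of pH values; the pKa of 4 is an arbitrary example value, roughly representative of a carboxylic acid.

```python
import numpy as np

def protonation_degree(pH, pKa):
    """Henderson-Hasselbalch protonation degree, Eq. (5.20)."""
    return 1.0 / (1.0 + 10.0**(pH - pKa))

for pH in np.arange(2.0, 8.1, 1.0):
    print(f"pH = {pH:3.1f}: protonated fraction = {protonation_degree(pH, pKa=4.0):.3f}")
```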
5.2 Protonation Equilibria in Proteins In the protein, there are additional steric and electrostatic interactions with other groups of the protein, which contribute to the energies of the titrable site (Fig. 5.4). 4
But you have to be aware that all concentrations have to be taken in units of the standard concentration c0 . The argument of a logarithm should be dimensionless.
5.2 Protonation Equilibria in Proteins
65
H2O H2O
H2O COOH δ+
OH− H2O δ− Cl−
δ−
H2O δ+
δ−
NH2
VCoul δ− H2O
H2O
δ+
δ+ +
δ−
H2O
δ+
δ+ H2O
H2O
H2O
δ−
δ+
COO−
δ−
δ−
H O+ 3
δ+
δ+
H2O Na+
δ−
H2O
H2O
H2O
H2O
Fig. 5.4. Titration in a protein. Electrostatic interactions with fixed charges (charged residues and background charges) and mobile charges (ions) have to be taken into account RMH
ΔGsol
ΔG1 PMH
–
+
RM + H ΔG0
ΔGprot
PM−+ H+
Fig. 5.5. Thermodynamic cycle. The protonation enthalpy of a titrable group in the protein (PMH) differs from that of a model compound in solution (RMH) [29]
5.2.1 Apparent pKa Values The pKa of a titrable group depends on the interaction with background charges, with all the other residues and with the solvent which contains dipolar molecules and free moving ions. As a consequence, the pKa value is different from that of a model compound containing the titrable group in solution (Fig. 5.5). The difference of protonation enthalpies ΔΔG = ΔGprot − ΔGsolv = ΔG0 − ΔG1
(5.21)
can be divided into three parts: ΔΔG = ΔΔE + pΔΔV − T ΔΔS.
(5.22)
66
5 Protonation Equilibria
In the following, the volume change will be neglected. The pKa value of a group in the protein 1 1 ΔGprot = (ΔGsolv + ΔΔG) kB T ln(10) kB T ln(10) ΔΔG = pKamodel + kB T ln(10)
pKaprot =
(5.23)
is called the apparent pKa of the group in the protein. It depends on the mutual interactions of the titrable groups and hence on the pH value. Therefore, titration of groups in a protein cannot be described by a simple Henderson–Hasselbalch equation. 5.2.2 Protonation Enthalpy The amino acids forming a protein can be enumerated by their appearance in the primary structure and are therefore distinguishable. The protonation state of a protein with N titrable sites will be described by the protonation vector: 1 if group i is protonated, s = (s1 , s2 , . . . , sS ) with si = (5.24) 0 if group i is deprotonated. The number of protonation states Npstates = 2N can be very big for real proteins.5 Proteins are very flexible and have a large number of configurations. These will be denoted by the symbol γ which summarizes all the orientational angles between neighboring residues of the protein. The apparent pKa values will in general depend on this configuration vector, since for instance distances between the residues depend on the configuration. The enthalpy change by protonating the ith residue is denoted as G(s1 , . . . , si−1 , 1i , si+1 , . . . , sN , γ) − G(s1 , . . . , si−1 , 0i , si+1 , . . . , γ) 1,s 0,s (Ei,j j − Ei,j j ) (5.25) = ΔGi,intr + j=i
with the Coulombic interaction between two residues i, j in the protonation states si , sj (Fig. 5.6): s ,s Ei,ji j (γ). (5.26)
5
We do not take into account that some residues can be in more than two protonation states.
5.2 Protonation Equilibria in Proteins s i−1
si = 0
s
i−1
si = 1
s
s
67
i+1
i+1
Fig. 5.6. Protonation of one residue. The protonation state of the ith residue changes together with its charge. Electrostatic interactions are divided into an intrinsic part and the interactions with all other residues
The so-called intrinsic protonation energy ΔGi,intr is the protonation energy of residue i if there are no Coulombic interactions with other residues, that is, if all other residues are in their neutral state. It can be estimated from the model energy (5.3) and (5.19), taking into account all remaining interactions with background charges6 and the different solvation energies, which can be calculated using Born models (3.16) and (3.39) or by solving the Poisson–Boltzmann equation (4.9) and (4.12): ΔGi,intr ≈ ΔGi,solv + ΔEi,bg + ΔEi,Born .
(5.27)
Let us now calculate the protonation enthalpy of a protein with S of its N titrable residues protonated. The contributions from the intrinsic enthalpy changes can be written as si ΔGi,intr . (5.28) i
The Coulombic interactions are divided into pairs of protonated residues 1,1 si sj Ei,j , (5.29) i<j
pairs of unprotonated residues 0,0 0,0 0,0 0,0 (1 − si )(1 − sj )Ei,j = Ei,j + si sj Ei,j − si Ei,j , i<j
i<j
i<j
(5.30)
i=j
and interactions between one protonated and one unprotonated residue
1,0 0,1 1,0 0,1 si (1 − sj )Ei,j =− + (1 − si )sj Ei,j si sj Ei,j + Ei,j i<j
i<j
+
1,0 si Ei,j .
(5.31)
i=j
6
That is, the charge distribution of the protein backbone and the nontitrable residues.
68
5 Protonation Equilibria A–+ H+
BH+
B + H+
1
0
1
0
1
1
0
0
0
−1
+1
0
−1
−1
1
1
AH s s
0
q = s−s
0
qc = 1−2s0
Fig. 5.7. Protonation states. Protonation state si , protonation of the neutral state s0i , charge qi , and charge of the non-neutral state qic are correlated
Summing up these contributions and subtracting the Coulombic interactions of the fully deprotonated protein, we find the enthalpy change ⎛ ⎞
1,0 0,0 ΔG(s) = si ⎝ΔGi,intr + si sj Wi,j (5.32) Ei,j − Ei,j ⎠ + i
i<j
j=i
with the interaction parameter Wij = Eij (1, 1) − Eij (1, 0) − Eij (0, 1) + Eij (0, 0).
(5.33)
In fact for each pair i, j, only one of the summands is nonzero. 5.2.3 Protonation Enthalpy Relative to the Uncharged State In the literature, the enthalpy is often taken relative to a reference state (s0i ) where all titrable residues are in their uncharged state. As is illustrated in Fig. 5.7, the formal dimensionless charge of a residue is given by qi = si − s0i . The enthalpy change relative to the uncharged reference state is ΔG(s) − ΔG(s0 ) =
N
(si − s0i )ΔGi,intr +
i=1
N N 1,0 0,0 (si − s0i ) (Ei,j − Ei,j ) i=1
j=i,j=1
& ' 1 Wi,j (si − s0i )(sj − s0j ) + s0i sj + si s0j − 2s0i s0j . + 2 i
(5.34)
j=i
Note that Wij = Wji and therefore the last sum can be simplified & ' 1 Wi,j (si − s0i )(sj − s0j ) + s0i sj + si s0j − 2s0i s0j 2 i j=i
1 = Wi,j (qi qj + 2(si − s0i )s0j ). 2 i j=i
(5.35)
5.2 Protonation Equilibria in Proteins
69
Consider now the expression ΔW =
N
N
qi
i=1
1,0 0,0 (Ei,j − Ei,j + s0j Wi,j ).
(5.36)
j=i,j=1
For each residue j = i, there are only two alternatives7 1,0 0,0 s0j = 0 → Ei,j − Ei,j = 0,
(5.37)
1,0 0,0 1,1 0,1 − Ei,j + Wi,j = Ei,j − Ei,j = 0, s0j = 1 → Ei,j
(5.38)
and hence we have (5.39)
ΔW = 0, ΔG(s) − ΔG(s0 ) =
N
qi ΔGi,intr +
si =s0i
1 Wi,j qi qj . 2
(5.40)
i=j
5.2.4 Statistical Mechanics of Protonation The partition function for a specific total charge Q= qi = (si − s0i )
(5.41)
is given by
Z(Q) = {s,
= {s,
e−ΔG/kB T
qi =Q} γ
e−[
(si −s0i )ΔGi,intr + 12
Wi,j qi qj ]/kB T
.
(5.42)
qi =Q} γ
We come to the grand canonical partition function by introducing the factor eμQ/kB T = eμ
(si −s0i )/kB T
(5.43)
and summing over all possible charge states of the protein Ξ= Z(Q)eμQ/kB T Q
=
e−[
(si −s0i )(ΔGi,intr (γ)−μ)+ 12
Wi,j (γ)qi qj ]/kB T
.
(5.44)
γ,s
7
The Coulombic interaction vanishes if one of the residues is in the neutral state.
70
5 Protonation Equilibria
With the approximation (5.27) and (5.3), the partition function (i) qi z1 Z(Q) ≈ Zconf (i) z0 {s, qi =Q}
(5.45)
becomes the product of one factor which relates to the internal degrees of freedom which are usually assumed to be configuration-independent8 and a second factor which depends only on the configurational degrees of freedom: 0 1 Zconf (s) = e−[ (si −si )(ΔEi,bg +ΔEi,Born )+ 2 Wi,j qi qj ]/kB T . (5.46) γ
5.3 Abnormal Titration Curves of Coupled Residues

Let us consider a simple example of a model protein with only two titrable sites of the same type. The free enthalpies of the four possible states are (Fig. 5.8)
$$\begin{aligned}
\Delta G(\mathrm{AH},\mathrm{AH}) &= \Delta G_{1,\mathrm{intr}} + \Delta G_{2,\mathrm{intr}} + E_{1,2}^{1,1} - E_{1,2}^{0,0},\\
\Delta G(\mathrm{A}^-,\mathrm{AH}) - \Delta G(\mathrm{AH},\mathrm{AH}) &= -\Delta G_{1,\mathrm{intr}},\\
\Delta G(\mathrm{AH},\mathrm{A}^-) - \Delta G(\mathrm{AH},\mathrm{AH}) &= -\Delta G_{2,\mathrm{intr}},\\
\Delta G(\mathrm{A}^-,\mathrm{A}^-) - \Delta G(\mathrm{AH},\mathrm{AH}) &= -\Delta G_{1,\mathrm{intr}} - \Delta G_{2,\mathrm{intr}} + W_{12}
= -\Delta G_{2,\mathrm{intr}} - \Delta G_{1,\mathrm{intr}} + E_{1,2}^{0,0}. \qquad (5.47)
\end{aligned}$$
protonation degree
1 0.8 0.6 0.4 0.2 0
–10
0 μ
10
Fig. 5.8. Abnormal titration curves. Two interacting residues with neutral protonated states (AH), ΔG1,intr = ΔG2,intr = 1.0, W = −5, 0, 5, 10 8
Protonation-dependent degrees of freedom can be important in certain cases [30].
5.3 Abnormal Titration Curves of Coupled Residues
71
The grand partition function is
$$\Xi = 1 + \mathrm{e}^{-(-\Delta G_{1,\mathrm{intr}}+\mu)/k_B T} + \mathrm{e}^{-(-\Delta G_{2,\mathrm{intr}}+\mu)/k_B T} + \mathrm{e}^{-(-\Delta G_{1,\mathrm{intr}}-\Delta G_{2,\mathrm{intr}}+2\mu+W)/k_B T}. \qquad (5.48)$$
The average protonation values are
$$\langle s_1\rangle = \frac{1 + \mathrm{e}^{-(-\Delta G_{2,\mathrm{intr}}+\mu)/k_B T}}{\Xi}, \qquad
\langle s_2\rangle = \frac{1 + \mathrm{e}^{-(-\Delta G_{1,\mathrm{intr}}+\mu)/k_B T}}{\Xi}. \qquad (5.49)$$
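The abnormal titration behaviour of Fig. 5.8 follows directly from (5.48)–(5.49). The sketch below evaluates the two site occupations; the parameter values mirror the figure caption and kBT is set to 1.

```python
import numpy as np

def coupled_titration(mu, dG1=1.0, dG2=1.0, W=5.0, kT=1.0):
    """Average protonation <s1>, <s2> of two interacting sites, Eqs. (5.48)-(5.49)."""
    w10 = np.exp(-(-dG1 + mu) / kT)                  # site 1 deprotonated, site 2 protonated
    w01 = np.exp(-(-dG2 + mu) / kT)                  # site 2 deprotonated, site 1 protonated
    w00 = np.exp(-(-dG1 - dG2 + 2 * mu + W) / kT)    # both deprotonated
    xi = 1.0 + w10 + w01 + w00                       # grand partition function, Eq. (5.48)
    s1 = (1.0 + w01) / xi                            # Eq. (5.49)
    s2 = (1.0 + w10) / xi
    return s1, s2

for mu in (-10, -5, 0, 5, 10):
    s1, s2 = coupled_titration(mu)
    print(f"mu = {mu:4d}: <s1> = {s1:.3f}, <s2> = {s2:.3f}")
```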
Problems 5.1. Abnormal Titration Curves Consider a simple example of a model protein with only two titrable sites of the same type. Determine the relative free enthalpies of the four possible states. (B,B) B
B
B
BH+
BH+
B
BH+
BH+
+2H+
(B,BH+) +H+
(BH+,B) +H+
(BH+,BH+)
W
From the grand canonical partition function (the number of protons is not fixed), calculate the protonation degree for both sites and discuss them as a function of the interaction energy W .
Part III
Reaction Kinetics
6 Formal Kinetics
In this chapter, we discuss the phenomenological description of elementary chemical reactions and photophysical processes with the help of rate equations.
6.1 Elementary Chemical Reactions The basic steps of chemical reactions can be divided into several classes of elementary reactions. They can be photoinduced or thermally activated, may involve the transfer of an electron or proton, and are accompanied by structural changes, like breaking and forming bonds or at least a reorganization of bond lengths and angles (Figs. 6.1 and 6.2). All elementary reactions are reversible. There is a dynamical equilibrium between forward and back reaction, which are independent, for instance H2 + J2 2HJ.
(6.1)
6.2 Reaction Variable and Reaction Rate We consider a general stoichiometric equation for the reaction of several species1 νi Ai = 0 (6.2) i
and define a reaction variable x based on the concentration of the species Ai by ci = ci,0 + νi x (6.3) 1
The stoichiometric coefficients νi are positive for products and negative for educts. This is the conventional definition. Products and educts can be exchanged at least in principle, since the back reaction is always possible.
76
6 Formal Kinetics A + hν
A*
A* + B
absorption
D+ A–
D*A
A + B*
energy transfer
D+A– B
D+A B–
charge transfer
charge separation
bonds are conserved
reorganization of bond lengths and angles
Fig. 6.1. Elementary reactions without bond reformation
A−H + B A− + B−H+ proton transfer bimolecular H− Cl + Br−Br
H
H− Br + Br− Cl
Br Br activated complex
CH2 CH2
Cl
CH –3 CH
CH2
CH2
bonds are broken or formed unimolecular
isomerization
Fig. 6.2. Elementary reactions with bond reformation
as
ci − ci,0 νi and the reaction rate as its time derivative x=
r=
dx 1 dci = . dt νi dt
(6.4)
(6.5)
6.3 Reaction Order The progress of a chemical reaction can be frequently described by a simple rate expression such as r = kcn1 1 cn2 2 · · · = k cni i (6.6) i∈educts
with the rate constant k. For such a system, the exponent2 of the ith term is called the order of the reaction with respect to this substance and the sum of all the exponents is called the overall reaction order. 2
For more complicated reactions, the exponents need not be integers. For simple reactions, they are given by the stoichiometric coefficients ni = |νi | of the educts (the products for the back reaction).
6.3 Reaction Order
77
6.3.1 Zero-Order Reactions Zero-order reactions proceed at the same rate regardless of concentration. The rate expression for a reaction of this type is dc = k0 dt
(6.7)
c = c0 + k0 t.
(6.8)
which can be integrated Zero-order reactions appear when the determining factor is an outside source of energy (light) or when the reaction occurs on the surface of a catalyst. 6.3.2 First-Order Reactions First-order reactions describe the decay of an excited state, for instance a radioactive decay A∗ → A. (6.9) The rate expression is dcA∗ dcA =− = −kcA∗ , dt dt
(6.10)
which gives an exponential decay cA∗ = cA∗ (0)e−kt
(6.11)
with a constant half-period τ1/2 =
ln(2) . k
(6.12)
6.3.3 Second-Order Reactions A second-order reaction between two different substances A+B → · · ·
(6.13)
obeys the equations dcA dcB = = −k2 cA cB , (6.14) dt dt which can be written down using the reaction variable x and the initial concentrations a, b as cA = a − x, cB = b − x, (6.15) dx = k2 (a − x)(b − x). dt This can be integrated to give
(6.16)
78
6 Formal Kinetics
b(a − x) 1 ln = k2 t. a − b a(b − x)
(6.17)
If two molecules of the same type react with each other, we have instead −
dcA = −k2 c2A dt
(6.18)
which gives an algebraic decay cA (t) =
1 k2 t +
1 a
,
(6.19)
where the half-period now depends on the initial concentration τ1/2 =
1 . k2 a
(6.20)
An example is exciton–exciton annihilation in the light harvesting complex of photosynthesis: A∗ + A∗ → A+A. (6.21)
6.4 Dynamical Equilibrium We consider a first-order reaction together with the back reaction k1 → B. A ← k−1
(6.22)
The reaction variable of the back reaction will be denoted by y. The concentrations are cA (t) = a − x + y, cB (t) = b + x − y,
(6.23) (6.24)
and the reaction rates are dx = k1 cA = k1 (a − x + y), dt dy = k−1 cB = k−1 (b + x − y). dt
(6.25) (6.26)
Introducing an overall reaction variable z =x−y
(6.27)
6.6 Consecutive Reactions
and the equilibrium value s=
k1 a − k−1 b , k1 + k−1
79
(6.28)
we have
dz = k1 a − k−1 b − (k1 − k−1 )z = (k1 + k−1 )(s − z) dt which for z(0) = 0 has the solution z = s(1 − e−(k1 +k−1 )t ).
(6.29)
(6.30)
The reaction approaches the equilibrium with a rate constant k1 + k−1 . In equilibrium, z = s and dz/dt = 0. The equilibrium concentrations are cA = a − s = (a + b)
k−1 , k1 + k−1
(6.31)
cB = b + s = (a + b)
k1 , k1 + k−1
(6.32)
and the equilibrium constant is K=
k−1 cA = . cB k1
(6.33)
6.5 Competing Reactions If one species decays via separate independent channels (fluorescence, electron transfer, radiationless transitions, etc.), the rates are additive: dcA = −(k1 + k2 + · · · )cA . dt
(6.34)
6.6 Consecutive Reactions We consider a chain consisting of two first-order reactions3 k1 k2 A → B → C.
(6.35)
The reaction variables are denoted by x, y and the initial concentrations by a, b, c. The concentrations are cA = a − x, 3
With negligible back reactions
(6.36)
80
6 Formal Kinetics
cB = b + x − y,
(6.37)
cC = c + y,
(6.38)
dx dcA =− = −k1 cA = −k1 (a − x), dt dt
(6.39)
and their time derivatives are
dcB dx dy = − = k1 cA − k2 cB = k1 a − k2 b + (k2 − k1 )x − k2 y, dt dt dt dy dcC = = k2 cB = k2 (b + x − y). dt dt
(6.40) (6.41)
The first equation gives an exponential decay: cA = ae−k1 t .
(6.42)
dcB + k2 cB = k1 ae−k1 t dt
(6.43)
Integration of
gives the concentration of the intermediate state: k1 a −k1 t k1 a e + b− e−k2 t . cB = k2 − k1 k2 − k1
(6.44)
If at time zero only the species A is present, the concentration of B has a maximum at 1 k1 tmax = ln (6.45) k1 − k2 k2 with the value cB,max = a
k1 k1 − k2
exp
k2 k2 ln k1 − k2 k1
− exp
k1 k2 ln k1 − k2 k1
. (6.46)
6.7 Enzymatic Catalysis Enzymatic catalysis is very important for biochemical reactions. It can be described schematically by formation of an enzyme–substrate complex followed by decomposition into enzyme and product: k1 E + S ES E + P. k−1
(6.47)
6.7 Enzymatic Catalysis
81
We consider the limiting case of negligible k−2 k2 and large concentration of substrate cS cE . Then we have to solve the equations dcS dt dcE dt dcES dt dcP dt
≈ −k1 cE c0S + k−1 cES , ≈ −k1 cE c0S + (k−1 + k2 )cES , ≈ k1 cE c0S − (k−1 + k2 )cES , = k2 cES .
(6.48)
First, we solve the equations for dcE /dt and dcES /dt: d cES −k−1 − k2 k1 c0S cES = . k−1 + k2 −k1 c0S cE dt cE
(6.49)
The matrix has one eigenvalue λ = 0 corresponding to a stationary solution: k1 c0 S 0 −k−1 − k2 k1 c0S k−1 +k2 = . (6.50) 0 k−1 + k2 −k1 c0S 1 The stationary concentration of the ES complex is cstat ES =
k1 cS cE cS cE = k−1 + k2 KM
(6.51)
k−1 + k2 . k1
(6.52)
with the Michaelis constant KM =
The second eigenvalue relates to the time constant for reaching the stationary state: 1 −k−1 − k2 k1 c0S 1 0 = −(k c + k + k ) . (6.53) 1 −1 2 S k−1 + k2 −k1 c0S −1 −1 For the initial conditions cES (0) = cP (0) = 0,
(6.54)
we find c0E (1 − e−(k1 +k−1 +k2 )t ), 1 + Kc0M S c0E c0S −(k1 +k−1 +k2 )t e 1 + . cE (t) = c0 KM 1 + KSM
cES (t) =
(6.55)
82
6 Formal Kinetics rmax=k2cE,tot
reaction rate r
1
cSk2cE,tot/KM 0.5
0
0
0.5 cS
1
Fig. 6.3. Michaelis–Menten kinetics
The stationary state is stable, since any deviation will decrease exponentially. The overall rate of the enzyme-catalyzed reaction is given by the rate of product formation: dcP dcS r= =− = k2 cES (6.56) dt dt and with the total concentration of enzyme
we have cES =
cE,tot = cE + cES ,
(6.57)
cE cS (cE,tot − cES )cS = KM KM
(6.58)
cE,tot cS . cS + KM
(6.59)
and hence cES =
The overall reaction rate is given by the Michaelis–Menten equation (Fig. 6.3),
$$r = \frac{k_2\,c_{E,\mathrm{tot}}\,c_S}{K_M + c_S}, \qquad (6.60)$$
$$\frac{r}{r_{\max}} = \frac{c_S}{c_S + K_M}, \qquad r_{\max} = k_2\,c_{E,\mathrm{tot}}. \qquad (6.61)$$
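A minimal numerical illustration of (6.60): the sketch below evaluates the rate for a few substrate concentrations; all rate constants and concentrations are arbitrary example values in reduced units.

```python
def michaelis_menten_rate(c_s, k2=1.0, c_e_tot=1.0, K_M=0.3):
    """Michaelis-Menten rate, Eq. (6.60); r_max = k2 * c_e_tot, Eq. (6.61)."""
    return k2 * c_e_tot * c_s / (K_M + c_s)

r_max = 1.0 * 1.0
for c_s in (0.1, 0.3, 1.0, 10.0):
    r = michaelis_menten_rate(c_s)
    print(f"c_S = {c_s:5.1f}: r = {r:.3f}  (r/r_max = {r / r_max:.3f})")
```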
6.8 Reactions in Solutions In solutions, the reacting molecules approach each other by diffusive motion, forming a reactive complex within a solvent cage which has a lifetime of typically 100 ps (Fig. 6.4).
6.8 Reactions in Solutions
83
B
A
solvent cage B A
Fig. 6.4. Formation of a reactive complex
reaction rate
a
b
c
0.1
0.01
0
5
0
5 time
0
5
Fig. 6.5. Transition from the diffusion-controlled limit to the reaction-controlled limit. r = k2 c{AB} is calculated numerically for cA (0) = cB (0) = 1, k2 = 1: (a) k1 = k−1 = 0.1, (b) k1 = k−1 = 1.0, and (c) k1 = k−1 = 10
Formally, this can be described by an equilibrium between the free reactants A and B and a reactive complex {AB} k1 k2 A+B {A B} → C. k−1
(6.62)
The concentrations obey the equations (Fig. 6.5) dcA dcB = = −k1 cA cB + k−1 c{AB} , dt dt dcC = k2 c{AB} , dt dc{AB} = k1 cA cB − (k−1 + k2 )c{AB} . dt Let us consider two limiting cases.
(6.63)
6.8.1 Diffusion-Controlled Limit If the reaction rate k2 is large compared to k±1 , we find for the stationary solution approximately
84
6 Formal Kinetics
k2 c{AB} ≈ k1 cA cB
(6.64)
and hence for the overall reaction rate dcC = k2 c{AB} ≈ k1 cA cB . dt
(6.65)
The reaction rate is determined by the formation of the reactive complex. 6.8.2 Reaction-Controlled Limit If, on the other hand k2 k±1 , an equilibrium between reactants and reactive complex will be established c{AB} k1 =K= . cA cB k−1
A + B { A B},
(6.66)
Now, the overall reaction rate dcC = k2 c{AB} = k2 KcA cB dt
(6.67)
is determined by the reaction rate k2 and the constant of the diffusion equilibrium.
Problems 6.1. pH Dependence of Enzyme Activity Consider an enzymatic reaction where the substrate can be in two protonation states S− + H+ HS and the enzyme reacts only with the deprotonated form E + S− ES− → E + P. Calculate the reaction rate as a function of the proton concentration cH+ 6.2. Polymerization at the End of a Polymer Consider the multiple equilibrium between the monomer M and the i-mer iM c2M = Kc2M , c3M = KcM c2M , .. . ciM = KcM c(i−1)M ,
6.8 Reactions in Solutions
85
where the equilibrium constant K is assumed to be independent of the degree of polymerization. Calculate the concentration of the i-mer ciM and the mean degree of polymerization iciM . i = i i ciM 6.3. Primary Salt Effect Consider the reaction of two ionic species A and B with charges ZA,B e which are in equilibrium with an activated complex X (with charge (ZA + ZB )e) which decays into the products C and D: k1 A+B X → C + D. The equilibrium constant is K=
aX , aA aB
where the activities are given by the Debye–Hückel approximation μi = μ0i + kB T ln ci −
Zi2 e2 κ , 8π
Calculate the reaction rate r=
dcC . dt
κ2 =
e2 Ni Zi2 . kB T
7 Kinetic Theory: Fokker–Planck Equation
In this chapter, we consider a model system (protein) interacting with a surrounding medium which is only taken implicitly into account. We are interested in the dynamics on a time scale slower than the medium fluctuations. The interaction with the medium is described approximately as the sum of an average force and a stochastic force [31].
7.1 Stochastic Differential Equation for Brownian Motion The simplest example describes one-dimensional Brownian motion of a big particle in a sea of small particles. The average interaction leads to damping of the motion which is described in terms of a velocity-dependent damping term: dv(t) = −γv(t). (7.1) dt This equation alone leads to exponential relaxation v = v(0)e−γt which is not compatible with thermodynamics, since the average kinetic energy should be (m/2)v 2 = kB T /2 in equilibrium. Therefore, we add a randomly fluctuating force which represents the collisions with many solvent molecules during a finite time interval τ . The result is the Langevin equation dv(t) = −γv(t) + F (t) dt
(7.2)
with the formal solution −γt
v(t) = v0 e
+
t
eγ(t −t) F (t )dt .
(7.3)
0
The average of the stochastic force has to be zero because the equation of motion for the average velocity should be
88
7 Kinetic Theory: Fokker–Planck Equation
dv(t) = −γv(t). dt
(7.4)
We assume that many collisions occur during τ and therefore forces at different times are not correlated: F (t)F (t ) = Cδ(t − t ). The velocity correlation function is −γ(t+t )
v(t)v(t ) = e
v02
t
t γ(t1 +t2 )
dt1
+
(7.5)
dt2 e
F (t1 )F (t2 ) . (7.6)
0
0
Without losing generality, we assume \(t > t'\) and substitute \(t_2 = t_1 + s\) to find
\[
\begin{aligned}
\langle v(t)v(t')\rangle
&= v_0^2 e^{-\gamma(t+t')}
+ e^{-\gamma(t+t')} \int_0^{t} dt_1 \int_{-t_1}^{t'-t_1} ds\, e^{\gamma(2t_1+s)}\, \langle F(t_1)F(t_1+s)\rangle \\
&= v_0^2 e^{-\gamma(t+t')}
+ e^{-\gamma(t+t')} \int_0^{t'} dt_1\, e^{2\gamma t_1}\, C \\
&= v_0^2 e^{-\gamma(t+t')} + e^{-\gamma(t+t')}\, \frac{e^{2\gamma t'}-1}{2\gamma}\, C.
\end{aligned}
\tag{7.7}
\]
The exponential terms vanish very quickly and we find
\[
\langle v(t)v(t')\rangle \rightarrow e^{-\gamma|t'-t|}\, \frac{C}{2\gamma}.
\tag{7.8}
\]
Now, C can be determined from the average kinetic energy as
\[
\frac{m\langle v^2\rangle}{2} = \frac{k_B T}{2} = \frac{m}{2}\,\frac{C}{2\gamma}
\quad\rightarrow\quad
C = \frac{2\gamma k_B T}{m}.
\tag{7.9}
\]
The mean square displacement of a particle starting at \(x_0\) with velocity \(v_0\) is
\[
\begin{aligned}
\langle (x(t)-x(0))^2 \rangle
&= \left\langle \left( \int_0^t dt_1\, v(t_1) \right)^2 \right\rangle
 = \int_0^t\!\!\int_0^t \langle v(t_1)v(t_2)\rangle\, dt_1 dt_2 \\
&= \int_0^t\!\!\int_0^t \left[ \left( v_0^2 - \frac{k_B T}{m} \right) e^{-\gamma(t_1+t_2)}
 + \frac{k_B T}{m}\, e^{-\gamma|t_1-t_2|} \right] dt_1 dt_2
\end{aligned}
\tag{7.10}
\]
and since
\[
\int_0^t\!\!\int_0^t e^{-\gamma(t_1+t_2)}\, dt_1 dt_2 = \left( \frac{1-e^{-\gamma t}}{\gamma} \right)^2
\tag{7.11}
\]
and
\[
\int_0^t\!\!\int_0^t e^{-\gamma|t_1-t_2|}\, dt_1 dt_2
= 2 \int_0^t dt_1 \int_0^{t_1} e^{-\gamma(t_1-t_2)}\, dt_2
= \frac{2}{\gamma}\, t - \frac{2}{\gamma^2}\left( 1-e^{-\gamma t} \right),
\tag{7.12}
\]
we obtain
\[
\langle (x(t)-x(0))^2 \rangle
= \left( v_0^2 - \frac{k_B T}{m} \right) \frac{(1-e^{-\gamma t})^2}{\gamma^2}
+ \frac{2 k_B T}{m\gamma}\, t - \frac{2 k_B T}{m\gamma^2}\left( 1-e^{-\gamma t} \right).
\tag{7.13}
\]
If we had started with an initial velocity distribution for the stationary state,
\[
\langle v_0^2 \rangle = k_B T/m,
\tag{7.14}
\]
then the first term in (7.13) would vanish. For very large times, the leading term is¹
\[
\langle (x(t)-x(0))^2 \rangle = 2Dt
\tag{7.15}
\]
with the diffusion coefficient
\[
D = \frac{k_B T}{m\gamma}.
\tag{7.16}
\]
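The Einstein result can be verified with a short stochastic simulation. The following sketch (an added illustration, in arbitrary reduced units) propagates the Langevin equation (7.2) with simple Euler–Maruyama steps and compares the mean square displacement with 2Dt.

```python
# Rough numerical check: simulate the Langevin equation (7.2) and verify that
# the mean square displacement approaches 2*D*t with D = kB*T/(m*gamma), (7.16).
import numpy as np

rng = np.random.default_rng(0)
m, gamma, kBT = 1.0, 1.0, 1.0
D = kBT / (m * gamma)                          # Einstein result (7.16)
dt, nsteps, ntraj = 1e-3, 20000, 2000

v = rng.normal(0.0, np.sqrt(kBT / m), ntraj)   # thermal initial velocities (7.14)
x = np.zeros(ntraj)
for _ in range(nsteps):
    noise = rng.normal(0.0, 1.0, ntraj)
    # dv = -gamma*v*dt + sqrt(2*gamma*kBT/m)*dW,  C = 2*gamma*kBT/m from (7.9)
    v += -gamma * v * dt + np.sqrt(2 * gamma * kBT / m * dt) * noise
    x += v * dt

t = nsteps * dt
print("<dx^2> =", np.mean(x**2), "   2*D*t =", 2 * D * t)
```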
7.2 Probability Distribution

Now, we discuss the probability distribution W(v). The time evolution can be described as
\[
W(v, t+\tau) = \int P(v, t+\tau \mid v', t)\, W(v', t)\, dv'.
\tag{7.17}
\]
To derive an expression for the differential \(\partial W(v,t)/\partial t\), we need the transition probability \(P(v, t+\tau \mid v', t)\) for small \(\tau\). Introducing \(\Delta = v - v'\), we expand the integrand in a Taylor series
\[
P(v, t+\tau \mid v', t)\, W(v', t) = P(v, t+\tau \mid v-\Delta, t)\, W(v-\Delta, t)
= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \Delta^n \left( \frac{\partial}{\partial v} \right)^n
P(v+\Delta, t+\tau \mid v, t)\, W(v, t).
\tag{7.18}
\]
Inserting this into the integral gives
\[
W(v, t+\tau) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left( \frac{\partial}{\partial v} \right)^n
\left[ \int \Delta^n\, P(v+\Delta, t+\tau \mid v, t)\, d\Delta \right] W(v, t)
\tag{7.19}
\]

¹ This is the well-known Einstein result for the diffusion constant D.
and assuming that the moments exist which are defined by
\[
M_n(v', t, \tau) = \langle (v(t+\tau) - v(t))^n \rangle \big|_{v(t)=v'}
= \int (v - v')^n\, P(v, t+\tau \mid v', t)\, dv,
\tag{7.20}
\]
we find
\[
W(v, t+\tau) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left( \frac{\partial}{\partial v} \right)^n
M_n(v, t, \tau)\, W(v, t).
\tag{7.21}
\]
Expanding the moments into a Taylor series
\[
\frac{1}{n!} M_n(v, t, \tau) = \frac{1}{n!} M_n(v, t, 0) + D^{(n)}(v, t)\, \tau + \cdots,
\tag{7.22}
\]
we have finally²
\[
W(v, t+\tau) - W(v, t) = \sum_{n=1}^{\infty} \left( -\frac{\partial}{\partial v} \right)^n
D^{(n)}(v, t)\, W(v, t)\, \tau + \cdots,
\tag{7.23}
\]
which gives the equation of motion for the probability distribution³
\[
\frac{\partial W(v, t)}{\partial t} = \sum_{n=1}^{\infty} \left( -\frac{\partial}{\partial v} \right)^n
D^{(n)}(v, t)\, W(v, t).
\tag{7.24}
\]
If this expansion stops after the second term,⁴ the general form of the one-dimensional Fokker–Planck equation results
\[
\frac{\partial W(v, t)}{\partial t} =
\left( -\frac{\partial}{\partial v} D^{(1)}(v, t) + \frac{\partial^2}{\partial v^2} D^{(2)}(v, t) \right) W(v, t).
\tag{7.25}
\]
7.3 Diffusion

Consider a particle performing a random walk in one dimension due to collisions. We use the stochastic differential equation⁵
\[
\frac{dx}{dt} = v_0 + f(t),
\tag{7.26}
\]
where the velocity has a drift component \(v_0\) and a fluctuating part \(f(t)\) with
\[
\langle f(t)\rangle = 0, \qquad \langle f(t)f(t')\rangle = q\,\delta(t-t').
\tag{7.27}
\]

² The zero-order moment does not depend on τ.
³ This is known as the Kramers–Moyal expansion.
⁴ It can be shown that this is the case for all Markov processes.
⁵ This is a so-called Wiener process.
The formal solution is simply
\[
x(t) - x(0) = v_0 t + \int_0^t f(t')\, dt'.
\tag{7.28}
\]
The first moment
\[
M_1(x_0, t, \tau) = \langle x(t+\tau) - x(t)\rangle \big|_{x(t)=x_0}
= v_0 \tau + \int_0^{\tau} \langle f(t')\rangle\, dt'
\tag{7.29}
\]
gives
\[
D^{(1)} = v_0.
\tag{7.30}
\]
The second moment is
\[
M_2(x_0, t, \tau) = v_0^2 \tau^2 + 2 v_0 \tau \int_0^{\tau} \langle f(t')\rangle\, dt'
+ \int_0^{\tau}\!\!\int_0^{\tau} \langle f(t_1) f(t_2)\rangle\, dt_1 dt_2.
\tag{7.31}
\]
The second term vanishes and the only linear term in \(\tau\) comes from the double integral
\[
\int_0^{\tau}\!\!\int_0^{\tau} \langle f(t_1) f(t_2)\rangle\, dt_1 dt_2
= \int_0^{\tau} dt_1 \int_{-t_1}^{\tau-t_1} q\,\delta(t')\, dt' = q\tau.
\tag{7.32}
\]
Hence
\[
D^{(2)} = \frac{q}{2}
\tag{7.33}
\]
and the corresponding Fokker–Planck equation is the diffusion equation
\[
\frac{\partial W(x, t)}{\partial t} = -v_0 \frac{\partial W(x, t)}{\partial x} + D \frac{\partial^2 W(x, t)}{\partial x^2}
\tag{7.34}
\]
with the diffusion constant \(D = D^{(2)}\).

7.3.1 Sharp Initial Distribution

We can easily find the solution for a sharp initial distribution \(W(x,0) = \delta(x-x_0)\) by taking the Fourier transform:
\[
\widetilde{W}(k, t) = \int_{-\infty}^{\infty} dx\, W(x, t)\, e^{-ikx}.
\tag{7.35}
\]
We obtain the algebraic equation
\[
\frac{\partial \widetilde{W}(k, t)}{\partial t} = (-Dk^2 + i v_0 k)\, \widetilde{W}(k, t)
\tag{7.36}
\]
Fig. 7.1. Solution (7.38) of the diffusion equation for sharp initial conditions
which is solved by
\[
\widetilde{W}(k, t) = \widetilde{W}_0 \exp\left\{ (-Dk^2 + i v_0 k)\, t + i k x_0 \right\}.
\tag{7.37}
\]
Inverse Fourier transformation then gives⁶
\[
W(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{(x - x_0 - v_0 t)^2}{4Dt} \right),
\tag{7.38}
\]
which is a Gaussian distribution centered at \(x_c = x_0 + v_0 t\) with a variance of \(\langle (x - x_c)^2 \rangle = 2Dt\) (Fig. 7.1).

7.3.2 Absorbing Boundary

Consider a particle from species A which can undergo a chemical reaction with a particle from species B at position \(x_A = 0\),
\[
A + B \rightarrow AB.
\tag{7.39}
\]
If the reaction rate is very fast, then the concentration of A vanishes at x = 0, which gives an additional boundary condition

⁶ With the proper normalization factor.
Fig. 7.2. Solution from the mirror principle. (a) The probability density distribution (7.42) at t = 0.25, 0.5, 1, 2, 5 and (b) the decay of the total concentration (7.43) are shown for 4D = 1
\[
W(x = 0, t) = 0.
\tag{7.40}
\]
Starting again with a localized particle at time zero with
\[
W(x, 0) = \delta(x - x_0), \qquad v_0 = 0,
\tag{7.41}
\]
the probability distribution
\[
W(x, t) = \frac{1}{\sqrt{4\pi D t}} \left( e^{-\frac{(x-x_0)^2}{4Dt}} - e^{-\frac{(x+x_0)^2}{4Dt}} \right)
\tag{7.42}
\]
is a solution which fulfills the boundary conditions. This solution is similar to the mirror principle known from electrostatics. The total concentration of species A in solution is then given by (Fig. 7.2)
\[
C_A(t) = \int_0^{\infty} dx\, W(x, t) = \mathrm{erf}\left( \frac{x_0}{\sqrt{4Dt}} \right).
\tag{7.43}
\]
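A quick numerical check of the mirror construction is given below (an added illustration with 4D = 1 as in Fig. 7.2); it evaluates (7.42) on a grid, verifies the boundary condition (7.40), and compares the integrated concentration with the error function (7.43).

```python
# Sketch: evaluate the mirror-image solution (7.42), check W(0,t) = 0, and
# compare the numerically integrated survival probability with (7.43).
import numpy as np
from scipy.special import erf
from scipy.integrate import trapezoid

D, x0, t = 0.25, 1.0, 2.0                  # arbitrary example values, 4D = 1
x = np.linspace(0.0, 20.0, 4001)

W = (np.exp(-(x - x0)**2 / (4*D*t)) - np.exp(-(x + x0)**2 / (4*D*t))) / np.sqrt(4*np.pi*D*t)
print("W(0,t)   =", W[0])                          # absorbing boundary (7.40)
print("integral =", trapezoid(W, x))               # numerical C_A(t)
print("erf      =", erf(x0 / np.sqrt(4*D*t)))      # analytic result (7.43)
```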
7.4 Fokker–Planck Equation for Brownian Motion

For Brownian motion, we have from the formal solution
\[
v(\tau) = v_0 (1 - \gamma\tau + \cdots) + \int_0^{\tau} \left( 1 + \gamma (t_1 - \tau) + \cdots \right) F(t_1)\, dt_1.
\tag{7.44}
\]
The first moment⁷
\[
M_1(v_0, t, \tau) = \langle v(\tau) - v(0)\rangle = -\gamma\tau\, v_0 + \cdots
\tag{7.45}
\]

⁷ Here and in the following, we use \(\langle F(t)\rangle = 0\).
gives
\[
D^{(1)}(v, t) = -\gamma v.
\tag{7.46}
\]
The second moment follows from
\[
\langle (v(\tau) - v_0)^2\rangle = (v_0 \gamma\tau)^2
+ \int_0^{\tau}\!\!\int_0^{\tau} \left( 1 + \gamma(t_1 + t_2 - 2\tau) + \cdots \right) \langle F(t_1)F(t_2)\rangle\, dt_1 dt_2.
\tag{7.47}
\]
The double integral gives
\[
\begin{aligned}
\int_0^{\tau} dt_1 \int_{-t_1}^{\tau-t_1} dt'\, \left( 1 + \gamma(2t_1 + t' - 2\tau) + \cdots \right) \frac{2\gamma k_B T}{m}\, \delta(t')
&= \frac{2\gamma k_B T}{m} \int_0^{\tau} \left( 1 + \gamma(2t_1 - 2\tau) + \cdots \right) dt_1 \\
&= \frac{2\gamma k_B T}{m}\, \tau + \cdots
\end{aligned}
\tag{7.48}
\]
and we have
\[
D^{(2)} = \frac{\gamma k_B T}{m}.
\tag{7.49}
\]
The higher moments have no contributions linear in \(\tau\) and the resulting Fokker–Planck equation is
\[
\frac{\partial W(v, t)}{\partial t} = \gamma \frac{\partial}{\partial v} \left( v\, W(v, t) \right) + \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2} W(v, t).
\tag{7.50}
\]
7.5 Stationary Solution to the Fokker–Planck Equation

The Fokker–Planck equation can be written in the form of a continuity equation:
\[
\frac{\partial W(v, t)}{\partial t} = -\frac{\partial}{\partial v} S(v, t)
\tag{7.51}
\]
with the probability current
\[
S(v, t) = -\frac{\gamma k_B T}{m} \left( \frac{m v}{k_B T}\, W(v, t) + \frac{\partial}{\partial v} W(v, t) \right).
\tag{7.52}
\]
The probability current has to vanish for a stationary solution (with open boundaries \(-\infty < v < \infty\)),
\[
\frac{\partial}{\partial v} W(v, t) = -\frac{m v}{k_B T}\, W(v, t),
\tag{7.53}
\]
which has the Maxwell distribution as its solution
\[
W_{stat}(v) = \sqrt{\frac{m}{2\pi k_B T}}\; e^{-m v^2 / 2 k_B T}.
\tag{7.54}
\]
Therefore, we conclude that the Fokker–Planck equation describes systems that reach thermal equilibrium, starting from a nonequilibrium distribution. In the following, we want to look at the relaxation process itself. We start with
\[
\frac{\partial W(v, t)}{\partial t} = \gamma W(v, t) + \gamma v \frac{\partial W(v, t)}{\partial v} + D \frac{\partial^2 W(v, t)}{\partial v^2}
\tag{7.55}
\]
and introduce the new variables
\[
\rho = v e^{\gamma t}, \qquad y(\rho, t) = W(\rho e^{-\gamma t}, t),
\tag{7.56}
\]
which transform the differentials according to
\[
\frac{\partial W}{\partial v} = \frac{\partial y}{\partial \rho} \frac{\partial \rho}{\partial v} = e^{\gamma t} \frac{\partial y}{\partial \rho}, \qquad
\frac{\partial^2 W}{\partial v^2} = e^{2\gamma t} \frac{\partial^2 y}{\partial \rho^2}, \qquad
\frac{\partial W}{\partial t} = \frac{\partial y}{\partial t} + \frac{\partial y}{\partial \rho} \frac{\partial \rho}{\partial t}
= \frac{\partial y}{\partial t} + \gamma\rho \frac{\partial y}{\partial \rho}.
\tag{7.57}
\]
This leads to the new differential equation:
\[
\frac{\partial y}{\partial t} = \gamma y + D e^{2\gamma t} \frac{\partial^2 y}{\partial \rho^2}.
\tag{7.58}
\]
To solve this equation, we introduce new variables again
\[
y = \chi e^{\gamma t},
\tag{7.59}
\]
which results in
\[
\frac{\partial \chi}{\partial t} = D e^{2\gamma t} \frac{\partial^2 \chi}{\partial \rho^2}.
\tag{7.60}
\]
Now, we introduce a new time scale
\[
\theta = \frac{1}{2\gamma} \left( e^{2\gamma t} - 1 \right), \qquad d\theta = e^{2\gamma t}\, dt,
\tag{7.61, 7.62}
\]
satisfying the initial condition \(\theta(t=0) = 0\). Finally, we have to solve a diffusion equation
\[
\frac{\partial \chi}{\partial \theta} = D \frac{\partial^2 \chi}{\partial \rho^2},
\tag{7.63}
\]
which gives (7.38)
\[
\chi(\rho, \theta, \rho_0) = \frac{1}{\sqrt{4\pi D\theta}} \exp\left( -\frac{(\rho - \rho_0)^2}{4 D\theta} \right).
\tag{7.64}
\]
After back substitution of all variables, we find
\[
W(v, t) = \sqrt{\frac{m}{2\pi k_B T (1 - e^{-2\gamma t})}}
\exp\left( -\frac{m (v - v_0 e^{-\gamma t})^2}{2 k_B T (1 - e^{-2\gamma t})} \right).
\tag{7.65}
\]
This solution shows that the system behaves initially like
\[
W(v, t) \approx \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{(v - v_0)^2}{4Dt} \right)
\tag{7.66}
\]
and relaxes to the Maxwell distribution with a time constant Δt = 1/2γ.
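The relaxation towards the Maxwell distribution can be illustrated with a simple ensemble simulation (an added sketch in arbitrary reduced units): the sample mean and variance of the propagated velocities should follow the Gaussian solution (7.65).

```python
# Sketch: propagate an ensemble of velocities with the Langevin equation and
# compare mean and variance at time t with the solution (7.65).
import numpy as np

rng = np.random.default_rng(1)
m, gamma, kBT, v0 = 1.0, 1.0, 1.0, 3.0
dt, t_end, ntraj = 1e-3, 1.0, 50000

v = np.full(ntraj, v0)          # sharp initial distribution W(v,0) = delta(v - v0)
for _ in range(int(t_end / dt)):
    v += -gamma * v * dt + np.sqrt(2 * gamma * kBT / m * dt) * rng.normal(size=ntraj)

mean_theory = v0 * np.exp(-gamma * t_end)                    # from (7.65)
var_theory = kBT / m * (1 - np.exp(-2 * gamma * t_end))
print(v.mean(), mean_theory)
print(v.var(), var_theory)
```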
7.6 Diffusion in an External Potential

We consider motion of a particle under the influence of an external (mean) force \(K(x) = -\frac{d}{dx} U(x)\). The stochastic differential equations for position and velocity are
\[
\dot{x} = v,
\tag{7.67}
\]
\[
\dot{v} = -\gamma v + \frac{1}{m} K(x) + F(t).
\tag{7.68}
\]
We will calculate the moments for the Kramers–Moyal expansion. For small \(\tau\), we have
\[
\begin{aligned}
M_x &= \langle x(\tau) - x(0)\rangle = \int_0^{\tau} \langle v(t)\rangle\, dt = v_0 \tau + \cdots, \\
M_v &= \langle v(\tau) - v(0)\rangle = \int_0^{\tau} \left\langle -\gamma v(t) + \frac{1}{m} K(x(t)) + F(t) \right\rangle dt
= \left( -\gamma v_0 + \frac{1}{m} K(x_0) \right) \tau + \cdots, \\
M_{xx} &= \langle (x(\tau) - x(0))^2\rangle = \int_0^{\tau}\!\!\int_0^{\tau} \langle v(t_1)v(t_2)\rangle\, dt_1 dt_2 = v_0^2 \tau^2 + \cdots, \\
M_{vv} &= \langle (v(\tau) - v(0))^2\rangle
= \left( -\gamma v_0 + \frac{1}{m} K(x_0) \right)^2 \tau^2
+ \int_0^{\tau}\!\!\int_0^{\tau} \langle F(t_1)F(t_2)\rangle\, dt_1 dt_2
= \frac{2\gamma k_B T}{m}\, \tau + \cdots.
\end{aligned}
\tag{7.69}
\]
The drift and diffusion coefficients are
\[
D^{(x)} = v, \qquad D^{(v)} = -\gamma v + \frac{1}{m} K(x), \qquad
D^{(xx)} = 0, \qquad D^{(vv)} = \frac{\gamma k_B T}{m},
\tag{7.70}
\]
which leads to the Klein–Kramers equation
\[
\begin{aligned}
\frac{\partial W(x, v, t)}{\partial t}
&= \left( -\frac{\partial}{\partial x} D^{(x)} - \frac{\partial}{\partial v} D^{(v)}
+ \frac{\partial^2}{\partial v^2} D^{(vv)} \right) W(x, v, t) \\
&= \left( -\frac{\partial}{\partial x} v + \frac{\partial}{\partial v} \left( \gamma v - \frac{K(x)}{m} \right)
+ \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2} \right) W(x, v, t).
\end{aligned}
\tag{7.71}
\]
This equation can be divided into a reversible and an irreversible part:
\[
\frac{\partial W}{\partial t} = (L_{rev} + L_{irrev}) W,
\tag{7.72}
\]
\[
L_{rev} = -v \frac{\partial}{\partial x} + \frac{1}{m} \frac{\partial U}{\partial x} \frac{\partial}{\partial v}, \qquad
L_{irrev} = \frac{\partial}{\partial v}\, \gamma v + \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2}.
\tag{7.73}
\]
The reversible part corresponds to the Liouville operator for a particle moving in the potential without friction
\[
L = \frac{\partial H}{\partial x} \frac{\partial}{\partial p} - \frac{\partial H}{\partial p} \frac{\partial}{\partial x}, \qquad
H = \frac{p^2}{2m} + U(x).
\tag{7.74}
\]
Obviously
\[
L H = 0
\tag{7.75}
\]
and
\[
L_{irrev} \exp\left( -\frac{H}{k_B T} \right)
= \exp\left( -\frac{H}{k_B T} \right)
\left[ \gamma - \gamma v\, \frac{m v}{k_B T} + \frac{\gamma k_B T}{m} \left( \frac{m^2 v^2}{k_B^2 T^2} - \frac{m}{k_B T} \right) \right] = 0.
\tag{7.76}
\]
Therefore, the Klein–Kramers equation has the stationary solution
\[
W_{stat}(x, v) = Z^{-1}\, e^{-(m v^2/2 + U(x))/k_B T}
\tag{7.77}
\]
with
\[
Z = \int dv\, dx\; e^{-(m v^2/2 + U(x))/k_B T}.
\tag{7.78}
\]
The Klein–Kramers equation can be written in the form of a continuity equation:
\[
\frac{\partial}{\partial t} W = -\frac{\partial}{\partial x} S_x - \frac{\partial}{\partial v} S_v
\tag{7.79}
\]
with the probability current
\[
S_x = v W,
\tag{7.80}
\]
\[
S_v = -\left( \gamma v + \frac{1}{m} \frac{\partial U}{\partial x} \right) W - \frac{\gamma k_B T}{m} \frac{\partial W}{\partial v}.
\tag{7.81}
\]
7.7 Large Friction Limit: Smoluchowski Equation

For large friction constant \(\gamma\), we may neglect the second derivative with respect to time and obtain the stochastic differential equation
\[
\dot{x} = \frac{1}{m\gamma} K(x) + \frac{1}{\gamma} F(t)
\tag{7.82}
\]
and the corresponding Fokker–Planck equation is the Smoluchowski equation:
\[
\frac{\partial W(x, t)}{\partial t} = \left( -\frac{1}{m\gamma} \frac{\partial}{\partial x} K(x) + \frac{k_B T}{m\gamma} \frac{\partial^2}{\partial x^2} \right) W(x, t),
\tag{7.83}
\]
which can be written with the mean force potential U(x) as
\[
\frac{\partial W(x, t)}{\partial t} = \frac{1}{m\gamma} \frac{\partial}{\partial x} \left( k_B T \frac{\partial}{\partial x} + \frac{\partial U}{\partial x} \right) W(x, t).
\tag{7.84}
\]
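As an illustration of the Smoluchowski equation, the following sketch (an added example with arbitrary parameters) integrates (7.84) for a harmonic potential with an explicit finite-difference scheme; the distribution relaxes towards the Boltzmann distribution \(\propto \exp(-U/k_B T)\).

```python
# Sketch: explicit finite-difference integration of the Smoluchowski equation
# (7.84) for U(x) = m*omega^2*x^2/2; the result is compared with exp(-U/kBT).
import numpy as np

m, gamma, kBT, omega = 1.0, 10.0, 1.0, 1.0
D = kBT / (m * gamma)
x = np.linspace(-5, 5, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                      # stability limit of the explicit scheme

U = 0.5 * m * omega**2 * x**2
W = np.exp(-(x - 2.0)**2 / 0.1)           # start far from equilibrium
W /= W.sum() * dx

for _ in range(20000):
    # probability current S = -(1/(m*gamma)) * (kBT dW/dx + dU/dx W)
    S = -(kBT * np.gradient(W, dx) + np.gradient(U, dx) * W) / (m * gamma)
    W -= dt * np.gradient(S, dx)          # dW/dt = -dS/dx

W /= W.sum() * dx
W_eq = np.exp(-U / kBT)
W_eq /= W_eq.sum() * dx
print("max deviation from Boltzmann:", np.abs(W - W_eq).max())
```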
7.8 Master Equation

The master equation is a very general linear equation for the probability density. If the variable x takes on only integer values, it has the form
\[
\frac{\partial W_n}{\partial t} = \sum_m \left( w_{m\rightarrow n} W_m - w_{n\rightarrow m} W_n \right),
\tag{7.85}
\]
where \(W_n\) is the probability to find the integer value n and \(w_{m\rightarrow n}\) is the transition probability. For continuous x, the summation has to be replaced by an integration
\[
\frac{\partial W(x, t)}{\partial t} = \int \left( w_{x'\rightarrow x} W(x', t) - w_{x\rightarrow x'} W(x, t) \right) dx'.
\tag{7.86}
\]
The Fokker–Planck equation is a special form of the master equation with
\[
w_{x'\rightarrow x} = \left( -\frac{\partial}{\partial x} D^{(1)}(x) + \frac{\partial^2}{\partial x^2} D^{(2)}(x) \right) \delta(x - x').
\tag{7.87}
\]
So far, we have discussed only Markov processes where the change of probability at time t depends only on the probability at time t. If memory effects are included, the generalized master equation results.
Problems

7.1. Smoluchowski Equation
Consider a one-dimensional random walk. At times \(t_n = n\Delta t\), a particle at position \(x_j = j\Delta x\) jumps either to the left side \(j-1\) with probability \(w_j^-\) or to the right side \(j+1\) with probability \(w_j^+ = 1 - w_j^-\). The probability to find a particle at site j at the time n+1 is then given by
\[
P_{n+1, j} = w_{j-1}^+ P_{n, j-1} + w_{j+1}^- P_{n, j+1}.
\]
Show that in the limit of small \(\Delta x, \Delta t\), the probability distribution P(t, x) obeys a Smoluchowski equation.
7.2. Eigenvalue Solution to the Smoluchowski Equation
Consider the one-dimensional Smoluchowski equation
\[
\frac{\partial W(x, t)}{\partial t} = \frac{1}{m\gamma} \frac{\partial}{\partial x} \left( k_B T \frac{\partial}{\partial x} + \frac{\partial U}{\partial x} \right) W(x, t) = -\frac{\partial}{\partial x} S(x)
\]
for a harmonic potential
\[
U(x) = \frac{m\omega^2}{2} x^2.
\]
Show that the probability current can be written as
\[
S(x, t) = -\frac{k_B T}{m\gamma}\; e^{-U(x)/k_B T} \frac{\partial}{\partial x} \left( e^{U(x)/k_B T}\, W(x, t) \right)
\]
and that the Fokker–Planck operator can be written as
\[
L_{FP} = \frac{1}{m\gamma} \frac{\partial}{\partial x} \left( k_B T \frac{\partial}{\partial x} + \frac{\partial U}{\partial x} \right)
= \frac{k_B T}{m\gamma} \frac{\partial}{\partial x}\; e^{-U(x)/k_B T} \frac{\partial}{\partial x}\; e^{U(x)/k_B T}
\]
and can be transformed into a hermitian operator by
\[
L = e^{U(x)/2k_B T}\, L_{FP}\, e^{-U(x)/2k_B T}.
\]
Solve the eigenvalue problem
\[
L \psi_n(x) = \lambda_n \psi_n(x)
\]
and use the function \(\psi_0(x)\) to construct a special solution
\[
W(x, t) = e^{\lambda_0 t}\, e^{-U(x)/2k_B T}\, \psi_0(x).
\]
7.3. Diffusion Through a Membrane
A membrane with M pore proteins separates two half-spaces A and B. An ion X may diffuse through the M pore proteins in the membrane from A to B or vice versa. The rate constants for the formation of the ion–pore complex are \(k_A\) and \(k_B\), respectively, while \(k_m\) is the rate constant for the decay of the ion–pore complex, independent of the side to which the ion escapes. Let \(P_N(t)\) denote the probability that there are N ion–pore complexes at time t. The master equation for the probability is
\[
\begin{aligned}
\frac{dP_N(t)}{dt} = &- \left[ (k_A + k_B)(M - N) + 2 k_m N \right] P_N(t) \\
&+ (k_A + k_B)(M - N + 1)\, P_{N-1}(t) + 2 k_m (N + 1)\, P_{N+1}(t).
\end{aligned}
\tag{7.88}
\]
Calculate mean and variance of the number of complexes,
\[
\overline{N} = \sum_N N P_N, \qquad \sigma^2 = \overline{N^2} - \overline{N}^2,
\]
and the mean diffusion current
\[
J_{AB} = \frac{dN_A}{dt} - \frac{dN_B}{dt},
\]
where \(N_{A,B}\) is the number of ions in the upper or lower half-space.
8 Kramers’ Theory
Kramers [32] used the concept of Brownian motion to describe the motion of particles over a barrier as a model for chemical reactions in solution. The probability distribution of a particle moving in an external potential is described by the Klein–Kramers equation (7.71):
\[
\frac{\partial W(x, v, t)}{\partial t}
= \left( -\frac{\partial}{\partial x} v + \frac{\partial}{\partial v} \left( \gamma v - \frac{K(x)}{m} \right)
+ \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2} \right) W(x, v, t)
= -\frac{\partial}{\partial x} S_x - \frac{\partial}{\partial v} S_v
\tag{8.1}
\]
and the rate of the chemical reaction is related to the probability current \(S_x\) across the barrier.
8.1 Kramers' Model

A particle in a stable minimum A has to reach the transition state B by diffusive motion and then converts to the product C. The minimum well and the peak of the barrier are both approximated by parabolic functions (Fig. 8.1)
\[
U_A = \frac{m}{2} \omega_a^2 (x - x_0)^2,
\tag{8.2}
\]
\[
U^* = E_a - \frac{m}{2} \omega^{*2} x^2.
\tag{8.3}
\]
Without the chemical reaction, the stationary solution is
\[
W_{stat} = Z^{-1} \exp\left\{ -H/k_B T \right\}.
\tag{8.4}
\]
We assume that the perturbation due to the reaction is small and make the following ansatz (Fig. 8.2)
\[
W(x, v) = W_{stat}\; y(x, v),
\tag{8.5}
\]
Fig. 8.1. Kramers' model

Fig. 8.2. Ansatz function. The stationary solution of a harmonic well is multiplied with the function y(x, v) which switches from 1 to 0 at the saddle point
where the partition function is approximated by the harmonic oscillator expression
\[
Z = \frac{2\pi k_B T}{m\omega_a}.
\tag{8.6}
\]
The probability distribution should fulfill two boundary conditions:

1. In the minimum of the A-well, the particles should be in thermal equilibrium and therefore
\[
W(x, v) = Z^{-1} \exp\left( -\frac{H}{k_B T} \right) \;\rightarrow\; y = 1 \quad\text{if}\quad x \approx x_0.
\tag{8.7}
\]
2. On the right side of the barrier (x > 0), all particles A are converted into C and there will be no particles of the A species here,
\[
W(x, v) = 0 \;\rightarrow\; y = 0 \quad\text{if}\quad x > 0.
\tag{8.8}
\]
8.2 Kramers' Calculation of the Reaction Rate

Let us now insert the ansatz function into the Klein–Kramers equation. For the reversible part, we have
\[
L_{rev} \exp\left( -\frac{H}{k_B T} \right) y(x, v) = \exp\left( -\frac{H}{k_B T} \right) L_{rev}\, y(x, v)
\tag{8.9}
\]
and for the irreversible part
\[
\left( \frac{\partial}{\partial v}\, \gamma v + \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2} \right)
\exp\left( -\frac{H}{k_B T} \right) y(x, v)
= \exp\left( -\frac{H}{k_B T} \right)
\left( -\gamma v \frac{\partial}{\partial v} + \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial v^2} \right) y.
\tag{8.10}
\]
Note the subtle difference: the operator \((\partial/\partial v)\, v\) is replaced by \(-v\, (\partial/\partial v)\). Together, we have the following equation for y
\[
0 = -v \frac{\partial y}{\partial x} + \frac{1}{m} \frac{\partial U}{\partial x} \frac{\partial y}{\partial v}
- \gamma v \frac{\partial y}{\partial v} + \frac{\gamma k_B T}{m} \frac{\partial^2 y}{\partial v^2},
\tag{8.11}
\]
which becomes in the vicinity of the top
\[
0 = -v \frac{\partial y}{\partial x} - \omega^{*2} x \frac{\partial y}{\partial v}
- \gamma v \frac{\partial y}{\partial v} + \frac{\gamma k_B T}{m} \frac{\partial^2 y}{\partial v^2}.
\tag{8.12}
\]
Now, we make a transformation of variables \((x, v) \rightarrow (\eta, \xi) = (x, v - \lambda x)\). With
\[
x = \eta, \qquad v = \xi + \lambda\eta,
\tag{8.13}
\]
\[
\frac{\partial}{\partial x} = \frac{\partial}{\partial \eta} - \lambda \frac{\partial}{\partial \xi}, \qquad
\frac{\partial}{\partial v} = \frac{\partial}{\partial \xi},
\tag{8.14}
\]
(8.12) is transformed to
\[
0 = (\xi + \lambda\eta) \frac{\partial}{\partial \eta} y
+ \left( (\lambda - \gamma)\xi + (\lambda^2 - \lambda\gamma - \omega^{*2})\eta \right) \frac{\partial}{\partial \xi} y
+ \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial \xi^2} y.
\tag{8.15}
\]
Now choose
\[
\lambda = \frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}}
\tag{8.16}
\]
to have the simplified equation
\[
0 = (\xi + \lambda\eta) \frac{\partial}{\partial \eta} y + (\lambda - \gamma)\xi \frac{\partial}{\partial \xi} y
+ \frac{\gamma k_B T}{m} \frac{\partial^2}{\partial \xi^2} y,
\tag{8.17}
\]
which obviously has solutions which depend only on \(\xi\) and obey
\[
\xi \frac{\partial}{\partial \xi} \Phi(\xi) = -\frac{\gamma k_B T}{m(\lambda - \gamma)} \frac{\partial^2}{\partial \xi^2} \Phi(\xi).
\tag{8.18}
\]
The general solution of this equation is¹
\[
\Phi(\xi) = C_1 + C_2\, \mathrm{erf}\left( \xi \sqrt{\frac{m(\lambda - \gamma)}{2\gamma k_B T}} \right).
\tag{8.19}
\]

¹ Note that \(\lambda - \gamma\) is always positive. This would not be true for the second root.
Now, we impose the boundary condition \(\Phi \rightarrow 0\) for \(x \rightarrow \infty\), which means \(\xi \rightarrow -\infty\). From this, we find
\[
C_1 = -C_2\, \mathrm{erf}(-\infty) = C_2.
\tag{8.20}
\]
Together, we have for the probability density
\[
W(x, v) = C_2\, \frac{m\omega_a}{2\pi k_B T}
\exp\left( -\frac{\frac{m}{2} v^2 + U(x)}{k_B T} \right)
\left[ 1 + \mathrm{erf}\left( (v - \lambda x) \sqrt{\frac{m(\lambda - \gamma)}{2\gamma k_B T}} \right) \right]
\tag{8.21}
\]
and the flux over the barrier is approximately
\[
\begin{aligned}
S &= \int dv\, v\, W(0, v)
= C_2\, \frac{m\omega_a}{2\pi k_B T}\, e^{-U(0)/k_B T}
\int dv\, v\, e^{-m v^2/2 k_B T}
\left[ 1 + \mathrm{erf}\left( v \sqrt{\frac{m(\lambda - \gamma)}{2\gamma k_B T}} \right) \right] \\
&= C_2\, \frac{m\omega_a}{2\pi k_B T}\, e^{-U(0)/k_B T}\, \frac{2 k_B T}{m} \sqrt{1 - \frac{\gamma}{\lambda}}.
\end{aligned}
\tag{8.22}
\]
In the A-well, we have approximately
\[
W(x, v) \approx 2 C_2\, \frac{m\omega_a}{2\pi k_B T}
\exp\left( -\frac{\frac{m\omega_a^2 (x - x_a)^2}{2} + \frac{m v^2}{2}}{k_B T} \right).
\tag{8.23}
\]
The total concentration is approximately
\[
[A] = \int dx \int dv\, W_A(x, v) = 2 C_2.
\tag{8.24}
\]
Hence, we find
\[
S = [A]\, \frac{\omega_a}{2\pi}\, e^{-U(0)/k_B T} \sqrt{1 - \frac{\gamma}{\lambda}}.
\tag{8.25}
\]
The square root can be written as
\[
\sqrt{1 - \frac{\gamma}{\lambda}}
= \sqrt{\frac{-\frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}}}{\frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}}}}
= \sqrt{\frac{\left( -\frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}} \right)^2}{-\frac{\gamma^2}{4} + \frac{\gamma^2}{4} + \omega^{*2}}}
= \frac{1}{\omega^*} \left( -\frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}} \right)
\tag{8.26}
\]
and finally, we arrive at Kramers' famous result
\[
k = \frac{S}{[A]} = \frac{\omega_a}{2\pi\omega^*}\, e^{-U(0)/k_B T}
\left( -\frac{\gamma}{2} + \sqrt{\frac{\gamma^2}{4} + \omega^{*2}} \right).
\tag{8.27}
\]
The bracket is Kramers' correction to the escape rate. In the limit of high friction, series expansion in \(\gamma^{-1}\) gives
\[
k = \gamma^{-1}\, \frac{\omega_a \omega^*}{2\pi}\, e^{-U(0)/k_B T}.
\tag{8.28}
\]
In the limit of low friction, the result is
\[
k = \frac{\omega_a}{2\pi}\, e^{-U(0)/k_B T}.
\tag{8.29}
\]
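Kramers' result is easily evaluated numerically. The sketch below (an added illustration with arbitrary frequencies and barrier height) compares (8.27) with the high-friction limit (8.28) and the low-friction (transition-state) limit (8.29).

```python
# Sketch: evaluate Kramers' rate (8.27) as a function of the friction gamma
# and compare with the limits (8.28) and (8.29). Parameters are arbitrary.
import numpy as np

omega_a, omega_star = 1.0, 1.0       # well and barrier frequencies
Ea_over_kBT = 5.0                    # barrier height U(0)/kBT

def kramers_rate(gamma):
    bracket = -gamma / 2 + np.sqrt(gamma**2 / 4 + omega_star**2)
    return omega_a / (2 * np.pi * omega_star) * np.exp(-Ea_over_kBT) * bracket

k_low = omega_a / (2 * np.pi) * np.exp(-Ea_over_kBT)          # low-friction limit (8.29)
for gamma in [0.01, 0.1, 1.0, 10.0, 100.0]:
    k_high = omega_a * omega_star / (2 * np.pi * gamma) * np.exp(-Ea_over_kBT)  # (8.28)
    print(gamma, kramers_rate(gamma), k_high, k_low)
```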
9 Dispersive Kinetics
In this chapter, we consider the decay of an optically excited state of a donor molecule in a fluctuating medium. The fluctuations are modeled by time-dependent decay rates k (electron transfer), k₋₁ (back reaction), k_da (deactivation by fluorescence or radiationless transitions), and k_cr (charge recombination to the ground state) (Fig. 9.1). The time evolution is described by the system of rate equations:
\[
\begin{aligned}
\frac{d}{dt} W(D^*) &= -k(t) W(D^*) + k_{-1}(t) W(D^+A^-) - k_{da} W(D^*), \\
\frac{d}{dt} W(D^+A^-) &= k(t) W(D^*) - k_{-1}(t) W(D^+A^-) - k_{cr} W(D^+A^-),
\end{aligned}
\tag{9.1}
\]
which has to be combined with suitable equations describing the dynamics of the environment. First, we will apply a very simple dichotomous model [33].
Fig. 9.1. Electron transfer in a fluctuating medium. The rates are time dependent
9.1 Dichotomous Model

The fluctuations of the rates are modeled by random jumps between two different configurations (±) of the environment which modulate the values of the rates. The probabilities of the two states are determined by the master equation:
\[
\frac{d}{dt} \begin{pmatrix} W(+) \\ W(-) \end{pmatrix}
= \begin{pmatrix} -\alpha & \beta \\ \alpha & -\beta \end{pmatrix}
\begin{pmatrix} W(+) \\ W(-) \end{pmatrix},
\tag{9.2}
\]
which has the general solution:
\[
W(+) = C_1 + C_2\, e^{-(\alpha+\beta)t}, \qquad
W(-) = \frac{\alpha}{\beta}\, C_1 - C_2\, e^{-(\alpha+\beta)t}.
\tag{9.3}
\]
Obviously, the equilibrium values are
\[
W_{eq}(+) = \frac{\beta}{\alpha+\beta}, \qquad W_{eq}(-) = \frac{\alpha}{\alpha+\beta}
\tag{9.4}
\]
and the correlation function is (with \(Q_\pm = \pm 1\))
\[
\begin{aligned}
\langle Q(t)Q(0)\rangle
&= W_{eq}(+)\left( P(+, t \mid +, 0) - P(-, t \mid +, 0) \right)
+ W_{eq}(-)\left( P(-, t \mid -, 0) - P(+, t \mid -, 0) \right) \\
&= \left( W_{eq}(+) - W_{eq}(-) \right)^2 + 4 W_{eq}(+) W_{eq}(-)\, e^{-(\alpha+\beta)t} \\
&= \langle Q\rangle^2 + \left( \langle Q^2\rangle - \langle Q\rangle^2 \right) e^{-(\alpha+\beta)t}.
\end{aligned}
\tag{9.5}
\]
Combination of the two systems of equations (9.1) and (9.2) gives the equation of motion
\[
\frac{d}{dt} \mathbf{W} = A \mathbf{W}
\tag{9.6}
\]
for the four-component state vector
\[
\mathbf{W} = \begin{pmatrix} W(D^*, +) \\ W(D^*, -) \\ W(D^+A^-, +) \\ W(D^+A^-, -) \end{pmatrix}
\tag{9.7}
\]
with the rate matrix
\[
A = \begin{pmatrix}
-\alpha - k^+ - k_{da} & \beta & k_{-1}^+ & 0 \\
\alpha & -\beta - k^- - k_{da} & 0 & k_{-1}^- \\
k^+ & 0 & -\alpha - k_{-1}^+ - k_{cr} & \beta \\
0 & k^- & \alpha & -\beta - k_{-1}^- - k_{cr}
\end{pmatrix}.
\tag{9.8}
\]
Generally, the solution of this equation can be expressed by using the left and right eigenvectors and the eigenvalues \(\lambda\) of the rate matrix which obey
\[
A R_\nu = \lambda_\nu R_\nu,
\tag{9.9}
\]
\[
L_\nu A = \lambda_\nu L_\nu.
\tag{9.10}
\]
Fig. 9.2. Gated electron transfer
For initial values \(\mathbf{W}(0)\), the solution is given by¹
\[
\mathbf{W}(t) = \sum_{\nu=1}^{4} \frac{(L_\nu \cdot \mathbf{W}(0))}{(L_\nu \cdot R_\nu)}\, R_\nu\, e^{\lambda_\nu t}.
\tag{9.11}
\]
In the following, we consider a simplified case of gated electron transfer with \(k_{da} = k_{cr} = k_{-1}^{\pm} = k^- = 0\) (Fig. 9.2). Then the rate matrix becomes
\[
A = \begin{pmatrix}
-\alpha - k^+ & \beta & 0 & 0 \\
\alpha & -\beta & 0 & 0 \\
k^+ & 0 & -\alpha & \beta \\
0 & 0 & \alpha & -\beta
\end{pmatrix}.
\tag{9.12}
\]
As initial values, we chose
\[
\mathbf{W}_0 = \begin{pmatrix} W_{eq}(+) \\ W_{eq}(-) \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} \frac{\beta}{\alpha+\beta} \\ \frac{\alpha}{\alpha+\beta} \\ 0 \\ 0 \end{pmatrix}.
\tag{9.13}
\]
There is one eigenvalue \(\lambda_1 = 0\) corresponding to the eigenvectors:
\[
R_1 = \begin{pmatrix} 0 \\ 0 \\ \beta \\ \alpha \end{pmatrix}, \qquad
L_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \end{pmatrix}.
\tag{9.14}
\]
This reflects simply conservation of \(\sum_{\nu=1}^{4} W_\nu\) in this special case. The contribution of the zero eigenvector is
\[
\frac{L_1 \cdot \mathbf{W}_0}{L_1 \cdot R_1}\, R_1
= \frac{1}{\alpha+\beta} \begin{pmatrix} 0 \\ 0 \\ \beta \\ \alpha \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ W_{eq}(+) \\ W_{eq}(-) \end{pmatrix}.
\tag{9.15}
\]
¹ In the case of degenerate eigenvalues, linear combinations of the corresponding vectors can be found such that \(L_\nu \cdot R_{\nu'} = 0\) for \(\nu \neq \nu'\).
A second eigenvalue \(\lambda_2 = -(\alpha+\beta)\) corresponds to the equilibrium in the final state \(D^+A^-\) where no further reactions take place
\[
R_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix}, \qquad
L_2 = \begin{pmatrix} \alpha & -\beta & \alpha & -\beta \end{pmatrix}.
\tag{9.16}
\]
The contribution of this eigenvalue is
\[
\frac{L_2 \cdot \mathbf{W}_0}{L_2 \cdot R_2}\, R_2 = 0
\tag{9.17}
\]
since we assumed equilibrium in the initial state. The remaining two eigenvalues are
\[
\lambda_{3,4} = -\frac{\alpha+\beta+k}{2} \pm \frac{1}{2} \sqrt{(\alpha+\beta+k)^2 - 4\beta k}
\tag{9.18}
\]
and the resulting decay will be in general biexponential. We consider two limits.

9.1.1 Fast Solvent Fluctuations

In the limit of small k, we expand the square root to find
\[
\lambda_{3,4} = -\frac{\alpha+\beta}{2} - \frac{k}{2} \pm \left( \frac{\alpha+\beta}{2} + \frac{k}{2}\,\frac{\alpha-\beta}{\alpha+\beta} \right) + \cdots.
\tag{9.19}
\]
One of the eigenvalues is
\[
\lambda_3 = -(\alpha+\beta) - \frac{\alpha}{\alpha+\beta}\, k + \cdots.
\tag{9.20}
\]
In the limit of \(k \rightarrow 0\), the corresponding eigenvectors are
\[
R_3 = \begin{pmatrix} 1 \\ -1 \\ -1 \\ 1 \end{pmatrix}, \qquad
L_3 = \begin{pmatrix} \alpha & -\beta & 0 & 0 \end{pmatrix}
\tag{9.21}
\]
and will not contribute significantly. The second eigenvalue
\[
\lambda_4 = -\frac{\beta}{\alpha+\beta}\, k + \cdots = -W_{eq}(+)\, k
\tag{9.22}
\]
is given by the average rate. The eigenvectors are
\[
R_4 = \begin{pmatrix} \beta \\ \alpha \\ -\beta \\ -\alpha \end{pmatrix}, \qquad
L_4 = \begin{pmatrix} 1 & 1 & 0 & 0 \end{pmatrix}
\tag{9.23}
\]
111
and the contribution to the dynamics is ⎛
(L4 · W0 ) R4 eλ4 t (L4 · R4 )
⎞ β ⎟ 1 ⎜ ⎜ α ⎟ e−λ4 t . = ⎝ α + β −β ⎠ −α
The total time dependence is approximately given by ⎞ ⎛ Weq (+)e−λ4 t ⎜ Weq (−)e−λ4 t ⎟ ⎟ W(t) = ⎜ ⎝ Weq (+)(1 − e−λ4 t ) ⎠ . Weq (−)(1 − e−λ4 t )
(9.24)
(9.25)
9.1.2 Slow Solvent Fluctuations In the opposite limit, we expand the square root for small k −1 to find λ3,4 = −
' α+β k 1& − ± k + (α − β) + 2αβk −1 + · · · , 2 2 2 αβ λ3 = −β + + ··· , k ⎛ ⎞ 0 ⎜ 1 ⎟ & ' ⎟ R3 = ⎜ ⎝ 0 ⎠ , L3 = α k 0 0 , −1 λ4 = −k − α + · · · , ⎞ k ⎜ −α ⎟ ⎟ R4 = ⎜ ⎝ −k ⎠ , α
(9.26) (9.27)
(9.28)
(9.29)
⎛
& ' L4 = 1 0 0 0
(9.30)
and the time evolution is approximately ⎛ ⎜ ⎜ W(t) = ⎜ ⎜ ⎝
α α+β
α α+β
β e−kt α+β
e−βt − βk e−kt
β (1 − e−kt ) α+β 1 − e−βt + βk e−kt
⎞ ⎟ ⎟ ⎟. ⎟
⎠
(9.31)
This corresponds to an inhomogeneous situation. One part of the ensemble is in a favorable environment and decays with the fast rate k. The rest has to wait for a suitable fluctuation which appears with a rate of β.
112
9 Dispersive Kinetics
9.1.3 Numerical Example (Fig. 9.3)
a
10
b
0
occupation of the initial state
10 –1
10 –2
10 –3
c
10
d
0
10 –1
10 –2
10 –3
0
5
10 0 time
5
10
Fig. 9.3. Nonexponential decay. Numerical solutions of (9.12) are shown for α = 0.1, β = 0.9, (a) k = 0.2, (b) k = 2, (c) k = 5, and (d) k = 10. Dotted curves show the two components of the initial state and solid curves show the total occupation of the initial state
9.2 Continuous Time Random Walk Processes Diffusive motion can be modeled by random walk processes along a onedimensional coordinate. 9.2.1 Formulation of the Model The fluctuations of the coordinate X(t) are described as random jumps [34, 35]. The time intervals between the jumps (waiting time) and the coordinate changes are random variables with independent distribution functions: ψ(tn+1 − tn )
and f (Xn+1 , Xn ).
(9.32)
The probability that no jump happened in the interval 0 . . . t is given by the survival function: t ∞ Ψ0 (t) = 1 − dt ψ(t ) = dt ψ(t ) (9.33) 0
t
and the probability of finding the walker at position X at time t is given by (Fig. 9.4)
9.2 Continuous Time Random Walk Processes
113
X X(t)
0
t1 t2
t3
t4
t
Fig. 9.4. Continuous time random walk
∞ P (X, t) = P (X, 0) ψ(t )dt t ∞ t + dt dX ψ(t − t )f (X, X )P (X , t ). 0
(9.34)
−∞
Two limiting cases are well known from the theory of collisions. The correlated process with f (X, X ) = f (X − X )
(9.35)
corresponds to weak collisions. It includes normal diffusion processes as a special case. For instance if we chose ψ(tn+1 − tn ) = δ(tn+1 − tn − Δt)
(9.36)
and f (X − X ) = pδ(X − X − ΔX) + (1 − p)δ(X − X + ΔX),
(9.37)
we have P (X, t + Δt) = pP (X − ΔX, t) + qP (X + ΔX, t),
p+q =1
(9.38)
and in the limit Δt → 0 and ΔX → 0, Taylor expansion gives P (X, t) +
∂P ∂ 2P ∂P ΔX 2 + · · · (9.39) Δt + · · · = P (X, t) + (q − p)ΔX + ∂t ∂X ∂X 2
The leading terms constitute a diffusion equation ∂P ΔX ∂P ΔX 2 ∂ 2 P P = (q − p) + ∂t Δt ∂X Δt ∂X 2
(9.40)
with drift velocity (q − p)(ΔX/Δt) and diffusion constant ΔX 2 /Δt. The uncorrelated process, on the other hand, with f (X, X ) = f (X)
(9.41)
114
9 Dispersive Kinetics
corresponds to strong collisions. This kind of process can be analyzed analytically and will be applied in the following. The (normalized) stationary distribution Φeq of the uncorrelated process obeys ∞ t ∞ ψ(t )dt + f (X) dt ψ(t − t ) dX φeq (X ) Φeq (X) = Φeq (X)
t
= Φeq (X)
−∞
0
∞
ψ(t )dt + f (X)
t
t
dt ψ(t ),
(9.42)
0
which shows that f (X) = Φeq (X).
(9.43)
9.2.2 Exponential Waiting Time Distribution Consider an exponential distribution of waiting times ∞ ψ(t) = τ −1 e−t/τ , Ψ0 (t) = τ −1 e−t /τ dt = e−t/τ .
(9.44)
t
It can be obtained from a Poisson process which corresponds to the master equation: dPn (9.45) = −τ −1 Pn + τ −1 Pn−1 , n = 0, 1, 2, . . . dt with the solution Pn (0) = δn,0 ,
Pn (t) =
(t/τ )n −t/τ e n!
(9.46)
if we identify the survival function with the probability to be in the initial state P0 Ψ (t) = P0 (t) = e−t/τ .
(9.47)
The general uncorrelated process (9.34) becomes for an exponential distribution t P (X, t) = P (X, 0)e−t/τ + dt τ −1 e−(t−t )/τ dX f (X, X )P (X , t ). 0
(9.48) Laplace transformation gives P˜ (X, s) = P (X, 0)
τ −1 1 + s + τ −1 s + τ −1
dX f (X, X )P˜ (X , s),
(9.49)
9.2 Continuous Time Random Walk Processes
115
which can be simplified (s + τ −1 )P˜ (X, s) = P (X, 0) + τ −1
dX f (X, X )P˜ (X , s).
Back transformation gives d + τ −1 P (X, t) = τ −1 dX f (X, X )P (X , t) dt
(9.50)
(9.51)
and finally 1 1 ∂ P (X, t) = − P (X, t) + ∂t τ τ
dX f (X, X )P (X , t),
(9.52)
which is obviously a Markovian process, since it involves only the time t. For the special case of an uncorrelated process with exponential waiting time distribution, the motion can be described by ∂ P (X, t) = LP (X, t), ∂t
(9.53)
1 LP (X, t) = − (P (X, t) − φeq (X)P (t)) . τ
(9.54)
9.2.3 Coupled Equations Coupling of motion along the coordinate X with the reactions gives the following system of equations [36, 37]: ' & ∂ P (X, t) = −k(X) + L1 − τ1−1 P (X, t) + k−1 (X)C(X, t), ∂t ' & ∂ C(X, t) = −k−1 (X) + L2 − τ2−1 C(X, t) + k(X)P (X, t), ∂t
(9.55)
where P (X, t)ΔX and C(X, t)ΔX are the probabilities of finding the system in the electronic state D∗ and D+ A− , respectively; L1,2 are operators describ−1 ing the motion in the two states and the rates τ1,2 account for depopulation via additional channels. For the uncorrelated Markovian process (9.54), the rate equations take the form ∂ P (X, t) ∂t C(X, t) k(X) + τ1−1 + τ −1 −k−1 (X) P (X, t) =− C(X, t) −k(X) k−1 (X) + τ2−1 + τ −1 P (t) φ1 (X) , (9.56) +τ −1 φ2 (X) C(t)
116
9 Dispersive Kinetics
which can be written in matrix notation as ∂ R(X, t) = −A(X)R(X, t) + τ −1 B(X)R(t). ∂t
(9.57)
R(X, t) = exp {−A(X)U(X, t)}
(9.58)
Substitution gives ∂ − A(X) R(X, t) + exp −A(X) U(X, t) ∂t = −A(X)R(X, t) + τ −1 B(X)R(t), ∂ U(X, t) = τ −1 exp {A(X)t} B(X)R(t). ∂t
(9.59) (9.60)
Integration gives U(X, t) = U(X, 0) + τ −1
t
exp {A(X)t } B(X)R(t )dt ,
(9.61)
R(X, t) = exp(−A(X)t)R(X, 0) t −1 +τ exp(A(X)(t − t))B(X)R(t )dt
(9.62)
0
0
and the total populations obey the integral equation: R(t) = exp(−At)R(0) t exp(A(t − t))BR(t )dt , + τ −1
(9.63)
0
which can be solved with the help of a Laplace transformation: ∞ ˜ R(s) = e−st R(t)dt,
(9.64)
0
∞
e−st exp(−At)dt = (s + A)−1 ,
(9.65)
0
∞
e 0
−st
t
dt
exp(A(t − t))BR(t )dt
0 −1
= (s + A)
˜ BR(s).
(9.66)
The Laplace transform of the integral equation ˜ ˜ R(s) = (s + A)−1 R(0) + τ −1 (s + A)−1 BR(s)
(9.67)
9.2 Continuous Time Random Walk Processes
117
is solved by 5−1 4 ˜ (s + A)−1 R(0). R(s) = 1 − τ −1 (s + A)−1 B
(9.68)
We assume that initially, the system is in the initial state D∗ and the motion is equilibrated: φ1 (X) R(X, 0) = . (9.69) 0 For simplicity, we treat here only the case of τ12 → ∞. Then we have k + τ −1 −k−1 , (9.70) A= −k k−1 + τ −1
(s + A)
−1
1 = −1 (s + τ )(s + τ −1 + k + k−1 )
s + τ −1 + k−1 k−1 k s + τ −1 + k
(9.71) and with the abbreviations α= 1+
−1 1 (k + k−1 ) s + τ −1
(9.72)
and f (X)1,2 =
(9.73)
φ1,2 (X)f (X)dX,
we find (s + A)−1 R(0) 1 −1 − (s+τ1−1 )2 αk1 φ1 α α (s + τ −1 ) − k s+τ −1 = = 1 k αk1 (s + τ −1 )2 (s+τ −1 )2 (9.74) as well as (s + A)−1 B 1 1 −1 − (s+τ −1 )2 αk1 = s+τ 1 αk1 (s+τ −1 )2
1 (s+τ −1 )2 αk−1 2 1 − (s+τ1−1 )2 αk−1 2 s+τ −1
(9.75)
and the final result becomes ˜ R(s) =
1 s(s2 + τ −1 (s + αk1 + αk−1 2 )) s(s + τ −1 ) − sαk1 + τ −1 αk−1 2 × . (s + τ −1 )αk1
(9.76)
118
9 Dispersive Kinetics
Let us discuss the special case of thermally activated electron transfer. Here αk1 , αk−1 2 1 (9.77) and the decay of the initial state is approximately given by P (s) =
s + K2 (s + τ −1 ) + τ −1 s−1 αk−1 2 = 2 2 −1 −1 s + τ s (1 + αk1 + αk−1 2 ) s + s(K1 + K2 ) (9.78)
with
K2 = τ −1
1 + s−1
k−1 (X) dXφ2 (X) 1 + s+τ1 −1 (k(X) + k−1 (X))
k−1 (X) , 1 + τ (k(X) + k−1 (X)) k(X) . K1 = dXφ1 (X) 1 + τ (k(X) + k−1 (X)) ≈
dXφ2 (X)
(9.79) (9.80) (9.81)
This can be visualized as the result of a simplified kinetic scheme d P = −K1 P + K2 C, dt d C = K1 P − K2 C dt
(9.82) (9.83)
with the Laplace transform ˜ sP˜ − P (0) = −K1 P˜ + K2 C, ˜ ˜ ˜ sC − C(0) = K1 P + K2 C,
(9.84) (9.85)
which has the solution P =
sP0 + K2 (P0 + C0 ) , s(s + K1 + K2 )
sC0 + K1 (C0 + P0 ) . s(s + K1 + K2 )
(9.86)
K1 (1 − e−(K1 +K2 )t ). K1 + K2
(9.87)
C=
In the time domain, we find P (t) =
K2 + K1 e−(K1 +K2 )t , K1 + K 2
C(t) =
Let us now consider the special case that the back reaction is negligible and k(X) = kΘ(X). Here, we have P˜ (s) =
s(s + τ −1 ) − sαk1 , s(s2 + τ −1 (s + αk1 ))
(9.88)
9.3 Power Time Law Kinetics
119
k(X)
τ
φ(X)
−1
X
k
Fig. 9.5. Slow solvent limit
αk1 =
dXφ1 (X)
s + τ −1 = bk , k + s + τ −1
k(X) 1+
k(X) s+τ −1
=
∞
b=
φ1 (X)dX,
s+τ (s + τ −1 − bk k+s+τ −1 )
(s2 + τ −1 (s + bk
s+τ −1 k+s+τ −1
))
=
k
dX 1 + s+τk −1 0 a = 1−b = φ1 (X)dX, (9.89) φ1 (X)
0
−∞
0 −1
P˜ (s) =
∞
s + τ −1 + k(1 − b) . s2 + s(τ −1 + k) + bkτ −1
(9.90)
Inverse Laplace transformation gives a biexponential behavior P (t) = with
(μ+ + k(1 − 2b))e−t(k+μ− )/2 − (μ− + k(1 − 2b))e−t(k+μ+ )/2 μ+ − μ− μ± = τ −1 ±
k 2 + τ −2 + 2kτ −1 (1 − 2b).
If the fluctuations are slow τ −1 k, then k 2 + τ −2 + 2kτ −1 (1 − 2b) = k + (1 − 2b)τ −1 + · · · , μ+ = k + 2(1 − b)τ −1 + · · · ,
μ− = −k + 2bτ −1 + · · · ,
(9.91)
(9.92)
(9.93) (9.94)
and the two time constants are approximately (Fig. 9.5) k + μ+ = k + ··· , 2
k + μ− = bτ −1 + · · · 2
(9.95)
9.3 Power Time Law Kinetics The last example can be generalized to describe the power time law as observed for CO rebinding in myoglobin at low temperatures. The protein motion is now modeled by a more general uncorrelated process.2 2
A much more detailed discussion is given in [37].
120
9 Dispersive Kinetics
We assume that the rate k is negligible for X < 0 and very large for X > 0. Consequently, only jumps X < 0 → X > 0 are considered. Then the probability obeys the equation P (X, t)|X<0
∞
= P (X, 0)
ψ(t )dt +
t
0
dX
−∞ t
t
dt ψ(t − t )f (X)P (X , t )
0
dt ψ(t − t )
= φeq (X)Ψ0 (t) + φeq (X) 0
∞
Ψ0 (t) =
ψ(t )dt ,
t
0
dX P (X , t )
(9.96)
−∞
˜ 1 − ψ(s) Ψ˜0 (s) = s
(9.97)
and the total occupation of inactive configurations is 0 t P< (t) = dXφeq (X) Ψ0 (t) + dt ψ(t − t )P< (t ) −∞
0
t = a Ψ0 (t) + dt ψ(t − t )P< (t ) .
(9.98)
0
Laplace transformation gives
˜ P˜< (s) P˜< (s) = a Ψ˜0 (s) + ψ(s) with
(9.99)
0
a= −∞0
dXφeq (X)
(9.100)
and the decay of the initial state is given by P˜< (s) =
1 aΨ˜0 (s) = . ˜ s + 1−a 1 − aψ(s) aΨ ˜(s)
(9.101)
0
For a simple Poisson process (9.44) with Ψ˜0 = this gives P˜< (s) =
1 , s + τ −1
a , s + (1 − a)τ −1
(9.102)
(9.103)
which reproduces the exponential decay found earlier in the slow solvent limit (9.95) P< (t) = ae−t (1−a)/τ . (9.104) The long-time behavior is given by the asymptotic behavior for s → 0. As P< (t) → 0 for t → ∞, this is also the case for P˜< (s) in the limit s → 0. Hence, the asymptotic behavior must be
9.3 Power Time Law Kinetics
aΨ˜0 (s) P˜< (s) ≈ → 0, s → 0, 1−a a Ψ0 (t), t → ∞. P< (t) → 1−a To describe a power time law at long times P< (t) → t−β , P˜< (s) → sβ−1 ,
121
(9.105) (9.106)
t → ∞,
(9.107)
s → 0,
(9.108)
the waiting time distribution has to be chosen as Ψ0 (t) ∼ which implies where z find
−1
1 , (zt)β
t → ∞,
Ψ˜0 (s) ∼ z −β sβ−1 ,
(9.109)
is the characteristic time for reaching the asymptotics. Finally, we P˜< (s) ∼
1 s+
1−a β 1−β a z s
=
1 . s(1 + (˜ z /s)β )
(9.110)
In the time domain, this corresponds to the Mittag–Leffler function3 P< (t) =
∞ (−1)l (˜ z t)βl l=0
Γ (βl + 1)
z t)β ) = Eβ (−(˜
(9.111)
which can be approximated by the simpler function: 1 . 1 + (t/τ )β
(9.112)
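The quality of this approximation can be checked numerically. The sketch below (an added illustration with arbitrary parameters) sums the series (9.111) directly, which is practical only for moderate arguments, and compares it with the simpler form (9.112).

```python
# Sketch: evaluate the Mittag-Leffler function E_beta(-(z*t)^beta) from its
# series (9.111) and compare it with the approximation 1/(1+(t/tau)^beta).
import numpy as np
from scipy.special import gamma

def mittag_leffler(x, beta, nterms=120):
    # sum_l (-x)^l / Gamma(beta*l + 1); converges for moderate |x|
    l = np.arange(nterms)
    return np.sum((-x) ** l / gamma(beta * l + 1))

beta, z = 0.5, 1.0
for t in [0.1, 1.0, 5.0, 10.0]:
    exact = mittag_leffler((z * t) ** beta, beta)
    approx = 1.0 / (1.0 + (z * t) ** beta)
    print(t, exact, approx)
```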
Problems 9.1. Dichotomous Model for Dispersive Kinetics k(X)
P(D*,X) α P(D*,−)
β
P(D*,+) X
k−
k+ α
P(D+A−,−) 3
β
P(D+A−,+)
which has also been discussed for nonexponential relaxation in inelastic solids and dipole relaxation processes corresponding to Cole–Cole spectra.
122
9 Dispersive Kinetics
Consider the following system of rate equations ⎞⎛ ⎛ ⎞ ⎛ ⎞ P (D∗ , +) P (D∗ , +) −k+ − α β 0 0 ∗ ∗ ⎜ ⎜ d ⎜ , −) ⎟ , −) ⎟ α −k− − β 0 0 ⎟ ⎟ ⎜ P (D ⎜ P (D ⎟=⎜ ⎟ + − + − ⎠ ⎝ ⎝ ⎠ ⎝ P (D A , +) ⎠ k+ 0 −α β dt P (D A , +) α −β 0 k− P (D+ A− , −) P (D+ A− , −). Determine the eigenvalues of the rate matrix M . Calculate the left and right eigenvectors approximately for the two limiting cases: (a) Fast fluctuations k± α, β. Show that the initial state decays with an average rate. (b) Slow fluctuations k± α, β. Show that the decay is nonexponential.
Part IV
Transport Processes
10 Nonequilibrium Thermodynamics
Biological systems are far from thermodynamic equilibrium. Concentration gradients and electrostatic potential differences are the driving forces for diffusive currents and chemical reactions (Fig. 10.1). energy JE
JA A+B
C
JC
JB
JS entropy
Fig. 10.1. Open nonequilibrium system. Energy and particles are exchanged with the environment. Chemical reactions change the concentrations and consume or release energy. Entropy is produced by irreversible processes like diffusion and heat transport
In this chapter, we consider a system composed of n species k = 1, . . . , n which can undergo r chemical reactions j = 1, . . . , r. We assume that the system is locally in equilibrium and that all thermodynamic quantities are locally well defined.
10.1 Continuity Equation for the Mass Density We introduce the partial mass densities mk k = N k = ck mk , V
(10.1)
126
10 Nonequilibrium Thermodynamics
the total density =
1 M mk Nk = , V V
(10.2)
mk mk Nk k Nk = . = M m N k k k
(10.3)
k =
k
k
and the mass fraction xk =
From the conservation of mass, we have d d ∂ Mk = mk Nk = k dV dt dt V ∂t r k vk dA + mk νkj rj dV =− (V )
j=1
(10.4)
V
which can also be expressed in the form of a continuity equation r ∂ mk νkj rj k = −div(k vk ) + ∂t j=1
= −div(mk Jk ) +
r
mk νkj rj
(10.5)
j=1
with the diffusion fluxes Jk . For the total mass density k , =
(10.6)
k
we have
∂ mk νkj rj . = −div k v k + ∂t j=1 r
(10.7)
k
Due to conservation of mass for each reaction, the last term vanishes mk νkj = 0 (10.8) k
and with the center of mass velocity 1 k v k ,
(10.9)
∂ = −div(v). ∂t
(10.10)
v=
k
we have
10.2 Energy Conservation
127
10.2 Energy Conservation We define the specific values of enthalpy, entropy, and volume h, s, −1 , respectively, by H = h V, S = s V, V = −1 V. The differential of the enthalpy is dH = T dS + V dp +
(10.11)
μk dNk ,
(10.12)
k
where μk is a generalized chemical potential, which also includes the potential energy of electrostatic or gravitational fields which are assumed to be timeindependent. For the specific quantities, we find V (hd + dh) = T V (sd + ds) + V dp V + μk (xk d + dxk ) mk
(10.13)
k
and if we combine all terms with d, μk dxk = dh − T ds − dp − mk
Ts +
k
k
xk μk − h d. mk
The right-hand side can be written as 1 μk Nk − H d, TS + V
(10.14)
(10.15)
k
which vanishes in equilibrium. Hence, we have for the specific quantities 1 dh = T ds + −1 dp + μk dxk . (10.16) mk k
In the following, we consider a resting solvent without convection. We neglect the center of mass motion and its contribution to the total energy. The diffusion currents are taken relative to the solvent.1 For constant pressure, we have ∂(h) ∂(s) μk ∂k =T + (10.17) ∂t ∂t mk ∂t k
and the enthalpy obeys the continuity equation ∂(h) = −div(Jh ) ∂t with the enthalpy flux Jh . 1
A more general discussion can be found in [38].
(10.18)
128
10 Nonequilibrium Thermodynamics
10.3 Entropy Production The entropy change
dS = d
s dV
(10.19)
V
has contributions from the local entropy production dSi and from exchange with the surroundings dSe : dS = dSi + dSe .
(10.20)
The local entropy production can only be positive: dSi ≥ 0,
(10.21)
whereas entropy exchange with the surroundings can be positive or negative (Fig. 10.2). We denote by σ the entropy production per volume: dSi = σ dV. (10.22) dt V The entropy exchange is connected with the total entropy flux: dSe =− Js dA. dt (V )
(10.23)
Together, we have ∂(s) = −div(Js ) + σ, ∂t σ ≥ 0.
(10.24) (10.25)
From (10.17) the time derivative of the entropy per volume is 1 ∂(h) 1 μk ∂k ∂(s) = − ∂t T ∂t T mk ∂t
(10.26)
k
dSe
dSi
dSe
Fig. 10.2. Entropy change. The local entropy production dSi is always positive, whereas entropy exchange with the environment dSe can have both signs
10.3 Entropy Production
129
and inserting (10.5) and (10.26) ⎛ ⎞ r 1 1 μk ⎝ ∂(s) −div(mk Jk ) + = − div(Jh ) − mk νkj rj ⎠ , (10.27) ∂t T T mk j=1 k
which can be written as
μk ∂(s) 1 1 = −div Jh + Jk + Jh grad ∂t T T T k μ 1 k Jk grad Aj rj + − T T j
(10.28)
k
with the chemical affinities Aj = −
(10.29)
μk νkj .
k
From comparison of (10.28) and (10.24), we find the entropy flux 1 Jh − Js = μk Jk T
(10.30)
k
and the entropy production μ 1 1 k Jk grad Aj rj σ = Jh grad − + T T T j k 1 1 1 − Jk grad(μk ) + = Jh − μk Jk grad Aj rj . T T T j k
k
(10.31) We define the heat flux as2 Jq = T Js .
(10.32)
The entropy production is a bilinear form σ = Kq Jq + Kk Jk + K j rj
(10.33)
j
k
of the fluxes Jq , 2
Jk ,
rj
(10.34)
The definition of the heat flux is not unique in the case of simultaneous diffusion and heat transport. Our definition is in analogy to the relation T dS = dH − V dp − k μk dNk for an isobaric system. With this convention, the diffusion flux does not depend on the temperature gradient explicitly.
130
10 Nonequilibrium Thermodynamics
and the thermodynamic forces Kq = grad
1 , T
1 Kk = − grad(μk ), T 1 Kj = Aj . T
(10.35)
Instead of entropy production, alternatively the rate of energy dissipation can be considered which follows from multiplication of (10.31) with T 1 T σ = − Jq grad(T ) − Jk grad(μk ) + Aj rj . T
(10.36)
j
k
10.4 Phenomenological Relations In equilibrium, all fluxes and thermodynamic forces vanish T = const,
μk = const,
Aj = 0.
(10.37)
The entropy production is also zero and entropy has its maximum value. If the deviation from equilibrium is small, the fluxes3 can be approximated as linear functions of the forces: Lα,β Kβ , (10.38) Jα = β
where the so-called Onsager coefficients Lα,β are material constants.4 As Onsager has shown, the matrix of coefficients is symmetric Lα,β = Lβ,α . The entropy production has the approximate form σ= J α Kα = Lα,β Kα Kβ . (10.39) α
α,β
According to the second law of thermodynamics, σ ≥ 0 and therefore the matrix of Onsager coefficients is positive definite.
10.5 Stationary States Under certain circumstances,5 stationary states are characterized by a minimum of entropy production, which is compatible with certain external conditions. Consider a system with n independent fluxes and thermodynamic 3 4 5
Only a set of independent fluxes should be used here. Fluxes with different tensorial character are not coupled. Especially the Onsager coefficients have to be constants.
10.5 Stationary States T
131
T+ΔT Jq
Fig. 10.3. Combined diffusion and heat transport
forces. If the forces K1 , . . . , Km are kept fixed and a stationary state is reached, then the fluxes corresponding to the remaining forces vanish: Jα = 0,
α = m + 1, . . . , n
(10.40)
and the entropy production is minimal since ∂σ = Jα , ∂Kα |K1 ,...,Km
α = m + 1, . . . , n.
(10.41)
A simple example is the coupling of diffusion and heat transport between two reservoirs with fixed temperatures T and T + ΔT (Fig. 10.3). In a stationary state, the diffusion current must vanish Jd = 0 since any diffusion would lead to time-dependent concentrations. The entropy production is σ = Jq Kq + Jd Kd . (10.42) If the force Kq = grad
1 ΔT ≈− 2 T T
(10.43)
is fixed by keeping the temperature distribution fixed, then in the stationary state there is only transport of heat but not of mass. The entropy production is σ = Lqq Kq2 + Ldd Kd2 + 2Lqd Kq Kd 2 Lqq Ldd − L2qd 2 Lqd = Kq + Ldd Kd + Kq , Ldd Ldd
(10.44)
where due to the positive definiteness of Lij Ldd > 0,
Lqq > 0,
LddLqq − L2qd > 0.
(10.45)
As a function of Kd , the entropy production is minimal for 0 = Kd +
Lqd 1 Kq = Jd Ldd Ldd
(10.46)
132
10 Nonequilibrium Thermodynamics
hence in the stationary state. Obviously, the chemical potential adjusts such that Lqd ΔT 1 , (10.47) Kd = − Δμ = T Ldd T 2 which leads to a concentration gradient6 : Lqd ΔT Δc ≈− . c Ldd kB T 2
(10.48)
Problems 10.1. Entropy Production by Chemical Reactions Consider an isolated homogeneous system with T = const and μk = const but nonzero chemical affinities μk νkj Aj = − k
and determine the rate of entropy increase.
6
This is connected to the Seebeck effect.
11 Simple Transport Processes
11.1 Heat Transport Consider a system without chemical reactions and diffusion and with constant chemical potentials.1 The thermodynamic force is 1 1 ≈ − 2 gradT Kq = grad (11.1) T To and the phenomenological relation is known as Fourier’s law: Jq = −κgradT.
(11.2)
The entropy flux is Js =
1 1 Jq = Jh T T
(11.3)
and the energy dissipation is κ 1 2 T σ = − Jq gradT = (gradT ) . T T From the continuity equation for the enthalpy, we find ∂(h) ∂T = cp = κΔT ∂t ∂t and hence the diffusion equation for heat transport κ ∂T = ΔT. ∂t cp
(11.4)
(11.5)
(11.6)
For stationary heat transport in one dimension, the temperature gradient is constant, hence also the heat flux. The entropy flux, however, is coordinate dependent due to the local entropy production (Fig. 11.1). divJs = −Jq 1
κ 1 grad(T ) = 2 (gradT )2 = σ. T2 T
We neglect the temperature dependence of the chemical potential here.
(11.7)
11 Simple Transport Processes Jq
T1
T2 JS
Fig. 11.1. Heat transport
11.2 Diffusion in an External Electric Field Consider an ionic solution at constant temperature without chemical reactions. The thermodynamic force is 1 Kk = − grad(μk ). T
(11.8)
For a dilute solution, we have2 μk = μ0k + kB T ln(ck ) + Zk eΦ
(11.9)
and hence the thermodynamic force Kk = −
kB 1 grad(ck ) − grad (Zk eΦ) . ck T
The entropy production is given by kB T Jk − grad ck − Zk e grad Φ . Tσ = ck
(11.10)
(11.11)
k
The phenomenological relations have the general form Jk = Lk,k Kk ,
(11.12)
k
where the interaction mobilities Lk,k vanish for very small ion concentrations, whereas the direct mobilities Lk,k are only weakly concentration dependent [39, 40]. Inserting the forces, we find Jk = −
k
Lk,k
1 kB gradck − grad(Φ) Lk,k Zk e, ck T
(11.13)
k
which for small ion concentrations can be written more simply by introducing the diffusion constant as Jk = −Dk grad ck − 2
ck Zk eDk grad Φ, kB T
The concentration (particles per volume) is ck = k /mk = xk /mk .
(11.14)
11.2 Diffusion in an External Electric Field
135
which is known as the Nernst–Planck equation. This equation can be understood in terms of motion in a viscous medium (7.1). For the motion of a mass point, we have mv˙ = F − mγv. (11.15) In a stationary state, the velocity is v=
1 F mγ
(11.16)
and the particle current J = cv =
c cD F= F, mγ kB T
(11.17)
where we used the Einstein relation D=
kB T . mγ
From comparison with the Nernst–Planck equation, we find kB T cZeD F=− Dgradc + grad Φ cD kB T kB T gradc − grad ZeΦ. =− c
(11.18)
(11.19)
The charge current is Zk eck Dk kB T gradck + gradZk eΦ Jel = Zk eJk = − kB T ck (Zk e)2 ck Dk = −Zk eDk grad ck − grad Φ. kB T
(11.20)
The prefactor of the potential gradient is connected to the electrical conductivity: (Zk e)2 ck Dk . (11.21) Gk = kB T Together with the continuity equation ∂ck = −div Jk , ∂t we arrive at a Smoluchowski-type equation ck Zk eD ∂ck = div Dk gradck + gradΦ . ∂t kB T
(11.22)
(11.23)
136
11 Simple Transport Processes
−
+
− grad(c)
− grad(Φ)
Fig. 11.2. Diffusion in an electric field
In equilibrium, we have μk = const
(11.24)
kB T ln ck + Zk eΦ = const
(11.25)
ck = c0k e−Zk eΦ/kB T .
(11.26)
and hence or
This is sometimes described as an equilibrium of the currents due to concentration gradient on the one side and to the electrical field on the other side (Fig. 11.2). The Nernst–Planck equation has to be solved together with the Poisson– Boltzmann equation for the potential (11.27) div(εgrad Φ) = − Zk eck . In one dimension, the Poisson–Nernst–Planck equations are dc ZecD dΦ − , dx kB T dx
(11.28)
d d Φ=− Zk eck . dx dx
(11.29)
J = −D
Problems 11.1. Synthesis of ATP from ADP In mitochondrial membranes, ATP is synthesized according to + ADP + POH + 2H+ out ATP + H2 O + 2Hin
11.2 Diffusion in an External Electric Field
137
where (in) and (out) refer to inside and outside of the mitochondrial membrane. Express the equilibrium constant K as a function of the potential difference Φin − Φout
(11.30)
and the proton concentration ratio + c(H+ in )/c(Hout ).
(11.31)
12 Ion Transport Through a Membrane
12.1 Diffusive Transport We can draw a close analogy between diffusion and reaction on the one side and electronic circuit theory on the other side. We regard the membrane as a resistance for the current of diffusion and the difference in chemical potential as the driving force (Fig. 12.1): μ = μ0 + kB T ln c,
(12.1)
μ − μ0 . (12.2) kB T Assuming linear variation of the chemical potential over the thickness a of the membrane, we have c = exp
x Δμ. a
(12.3)
Δμ ∂μ = −C ∂x a
(12.4)
μ(x) = μinside + The diffusion current is J = −C
and the constant is determined from the diffusion equation in the center of the membrane: ∂c ∂ ∂ kB T ∂c ∂ ∂ 0= = − (J) = − −C = D(x) c ∂t ∂x ∂x c ∂x ∂x ∂x →C =
D c kB T
(12.5)
with c = exp
μoutside + μinside √ μinside + (Δμ/2) = exp = cinside coutside . kB T 2kB T
(12.6)
140
12 Ion Transport Through a Membrane cinside
coutside
J
Fig. 12.1. Transport across a membrane J μinside
μoutside R μ0
μ0
Fig. 12.2. Equivalent circuit for the diffusion through a membrane
Finally, we have (Fig. 12.2) J =−
DcΔμ Δμ =− . akB T R
(12.7)
If we include a gradient of the electrostatic potential, then the equations become μ = μ0 + kB T ln c + ZeΦ, (12.8) c = exp
μ − μ0 − ZeΦ , kB T
∂μ Δμ kB T Δ ln c + ZeΔΦ = −C = −C , ∂x a a and the electric current is kB T ln c (Ze)2 Δ + Δφ I = ZeJ = − R Ze = −g(VNernst − V ) J = −C
(12.9) (12.10)
(12.11)
with the Nernst potential VNernst =
kB T coutside ln Ze cinside
(12.12)
and the membrane potential (Fig. 12.3) V = Φinside − Φoutside .
(12.13)
12.2 Goldman–Hodgkin–Katz Model
141
I inside g
VN V
outside
Fig. 12.3. Equivalent circuit for the electric current J μoutside
R
μinside
R Jm
C
μ0
μ0
Fig. 12.4. Equivalent circuit with capacity
The transport through biological membranes occurs often via the formation of pore–ligand complexes. The number of these complexes depends on the chemical potential of the ligands and needs time to build up. It may therefore be more realistic to associate also a capacity Cm with the membrane, which we define in analogy to the electronic capacity via the change of the potential (Fig. 12.4) dμm . dt
(12.14)
kB T dc dμm = dt c dt
(12.15)
Jm = Cm The change of the potential is
and the current is from the continuity equation − divJ = Hence, the capacity is Cm =
Jm dc = . a dt
(12.16)
ac . kB T
(12.17)
12.2 Goldman–Hodgkin–Katz Model In the rest state of a neuron, the potassium concentration is higher in the interior whereas there are more sodium and chlorine ions outside. The concentration gradients have to be produced by (energy-consuming) ion pumps.
142
12 Ion Transport Through a Membrane membrane outside
inside
JNa+ JK+ JCl−
charge density
Φm
potential
0
d
x
Fig. 12.5. Coupled ionic fluxes
Let us first consider only one type of ions and constant temperature. From (11.25), we have kB T (12.18) Φ+ ln ck = const. Zk e Usually, the potential is defined as zero on the outer side and Vk = Φinside − Φoutside =
kB T ck,outside ln Zk e ck,inside
(12.19)
is the Nernst potential (12.12) of the ion species k. We now want to calculate the potential for a steady state with more than one ionic species present. If we take the ionic species Na+ , K+ , and Cl− which are most important in nerve excitation that will define the so-called Goldman–Hodgkin–Katz model [41] (Fig. 12.5). We want to calculate the dependence of the fluxes Jk on the concentrations ck,in and ck,out . To that end, we multiply the Nernst–Planck (11.28) equation by eyk where Φ (12.20) y k = Zk e kB T to get dyk d dck (12.21) + ck = −Dk (ck eyk ). Jk eyk = −Dk eyk dx dx dx This can be integrated over the thickness d of the membrane
12.2 Goldman–Hodgkin–Katz Model
d
d
Jk eyk dx = −D 0
0
143
d (ck eyk )dx = −D ck,in eZk eΦm /kB T − ck,out . dx (12.22)
We assume a linear variation of the potential across the membrane Φ(x) =
x Φm . d
(12.23)
This is an approximation which is consistent with the condition of electroneutrality. On the boundary between liquid and membrane, the first derivative of Φ will be discontinuous corresponding to a surface charge distribution.1 Assuming a stationary state with ∂J/∂x = 0, we can evaluate the left-hand side of (12.22):
kB T d Zk eΦm /kB T − 1 = −D ck (d)eZk eΦm /kB T − ck (0) , (12.24) Jk e Zk eΦm which gives the current Jk = −
D Zk eΦm ck,in eZk eΦm /kB T − ck,out . d kB T eZk eΦm /kB T − 1
In a stationary state, the total charge current has to vanish Zk eJk = 0 I=
(12.25)
(12.26)
k
and we find e2 Φm ck,in eZk eΦm /kB T − ck,out = 0. Dk Zk2 kB T d eZk eΦm /kB T − 1
(12.27)
With ZNa+ = ZK+ = 1, ZCl− = −1 and ym =
eΦm , kB T
bm = ey m ,
(12.28)
we have cK+ ,in bm − cK+ ,out cNa,in bm − cNa,out + D K+ bm − 1 bm − 1 −1 cCl,in bm − cCl,out +DCl− = 0. b−1 m −1
DNa+
(12.29) (12.30)
Multiplication with (bm − 1) gives DNa+ (cNa,in bm − cNa,out ) + DK+ (cK+ ,in bm − cK,out ) −bm DCl− (CCl,in b−1 m − cCl,out ) = 0 1
We do not consider a possible variation of here.
(12.31) (12.32)
144
12 Ion Transport Through a Membrane
and we find DNa+ cNa,out + DK+ cK,out + DCl− cCl,in , DNa+ cNa,in + DK+ cK,in + DCl− cCl,out
(12.33)
kB T D + cNa,out + DK+ cK,out + DCl− cCl,in . ln Na e DNa+ cNa,in + DK+ cK,in + DCl− cCl,out
(12.34)
bm = and finally Φm =
This formula should be compared with the Nernst equation: Vk = Φinside − Φoutside =
ck,outside kB T ln . Zk e ck,inside
(12.35)
The ionic contributions appear weighted with their mobilities. The Nernst equation is obtained if the membrane is permeable for only one ionic species.
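As a numerical illustration of (12.34), the following sketch evaluates the Goldman–Hodgkin–Katz potential for typical textbook-style ion concentrations and permeability ratios (these concrete numbers are example values, not taken from this text) and compares it with the single-ion Nernst potentials (12.35).

```python
# Sketch: Goldman-Hodgkin-Katz potential (12.34) versus Nernst potentials
# (12.35). Concentrations in mM and permeability ratios are example values.
import numpy as np

kBT_over_e = 26.7e-3                 # V, at T ~ 310 K

P = {'K': 1.0, 'Na': 0.04, 'Cl': 0.45}            # relative permeabilities
c_in = {'K': 140.0, 'Na': 10.0, 'Cl': 10.0}
c_out = {'K': 5.0, 'Na': 145.0, 'Cl': 110.0}

num = P['Na'] * c_out['Na'] + P['K'] * c_out['K'] + P['Cl'] * c_in['Cl']
den = P['Na'] * c_in['Na'] + P['K'] * c_in['K'] + P['Cl'] * c_out['Cl']
phi_m = kBT_over_e * np.log(num / den)
print("GHK potential: %.1f mV" % (1e3 * phi_m))

for ion, Z in [('K', 1), ('Na', 1), ('Cl', -1)]:
    V_nernst = kBT_over_e / Z * np.log(c_out[ion] / c_in[ion])
    print("Nernst %-2s: %.1f mV" % (ion, 1e3 * V_nernst))
```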
12.3 Hodgkin–Huxley Model

In 1952, Hodgkin and Huxley published their quantitative description of the squid giant axon dynamics [42], for which they were later awarded the Nobel prize. They thought of the axon membrane as an electrical circuit. They assumed independent currents of sodium and potassium, a capacitive current, and a catch-all leak current. The total current is the sum of these (Fig. 12.6):
\[
I_{ext} = I_C + I_{Na} + I_K + I_L.
\]
(12.36)
The capacitive current is IC = Cm
dV . dt
(12.37)
For the ionic currents, we have (12.11) Ik = gk (V − Vk ),
(12.38)
where gk is the channel conductance which depends on the membrane potential and on time, and Vk is the specific Nernst potential (12.12). The macroscopic current relates to a large number of ion channels which are controlled Vinside Im
INa
IK
ILeak
Cm
Voutside
Fig. 12.6. Hodgkin–Huxley model
Iext
12.3 Hodgkin–Huxley Model
145
by gates which can be in either permissive or nonpermissive state. Hodgkin and Huxley considered a simple rate process for the transition between the two states with voltage-dependent rate constants α(V ) closed open. β(V )
(12.39)
The variable p which denotes the number of open gates obeys a first-order kinetics: dp = α(1 − p) − βp. (12.40) dt Hodgkin and Huxley introduced a gating particle n describing the potassium conductance (Fig. 12.7) and two gating particles for sodium (m being activating and h being inactivating). They used the specific equations gK = gk p4n ,
(12.41)
gNa = g Na p3m ph ,
(12.42) K+
membrane
membrane n−gates
inactive state
open state
probability
potential (mV)
Fig. 12.7. Potassium gates 40 0 – 40 – 80 1.0 0.5 0.0
0
100
200 300 time (ms)
400
500
Fig. 12.8. Hodgkin–Huxley model. Equation (12.43) is solved numerically [43]. Top: excitation by two short rectangular pulses Iext (t) (solid ) and membrane potential V (t) (dashed ). Bottom: fraction of open gates pm (full ), pn (dashed), and ph (dashdotted )
12 Ion Transport Through a Membrane
probability
potential (mV)
146
40 0 – 40 –80 1.0 0.5 0.0
0
100
200
300 time (ms)
400
500
Fig. 12.9. Refractoriness of the Hodgkin–Huxley model. A second pulse cannot excite the system during the refractory period. Details as in Fig. 12.8
which gives finally a highly nonlinear differential equation: Iext (t) = Cm
dV + g k p4n (V − VK ) + gNa p3m ph (V − VNa ) + g L (V − VL ). (12.43) dt
This equation has to be solved numerically. It describes many electrical properties of the neuron, such as threshold and refractory behavior, repetitive spiking, and temperature dependence (Figs. 12.8 and 12.9). Later, we will discuss a simplified model with similar properties in more detail (Sect. 13.3).
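A minimal integration scheme is sketched below. The rate functions α(V), β(V) and the conductance and reversal-potential values are the standard squid-axon parameterization in its modern sign convention; they are quoted here as an assumption, since the text does not list them explicitly.

```python
# Minimal sketch of a numerical solution of (12.43). The rate functions and
# parameter values below are the standard Hodgkin-Huxley parameterization
# (modern sign convention) and are assumptions, not taken from this text.
import numpy as np

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4                # mV

a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)
a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))

dt, T = 0.01, 50.0                                  # ms
V, n, m, h = -65.0, 0.32, 0.05, 0.6                 # approximate resting values
for step in range(int(T / dt)):
    t = step * dt
    I_ext = 10.0 if 5.0 <= t < 6.0 else 0.0         # short current pulse (uA/cm^2)
    I_ion = g_K * n**4 * (V - E_K) + g_Na * m**3 * h * (V - E_Na) + g_L * (V - E_L)
    V += dt * (I_ext - I_ion) / C_m
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)       # gating kinetics (12.40)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    if step % 500 == 0:
        print(f"t = {t:5.1f} ms   V = {V:7.2f} mV")
```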
13 Reaction–Diffusion Systems
Reaction–diffusion systems are described by nonlinear equations and show the formation of structures. Static as well as dynamic patterns found in biology can be very realistically simulated [44].
13.1 Derivation We consider coupling of diffusive motion and chemical reactions. For constant temperature and total density, we have the continuity equation ∂ ck = −div Jk + νkj rj ∂t
(13.1)
j
together with the phenomenological equation Lkj grad k ln cj = − Dkj grad cj , Jk = − j
(13.2)
j
which we combine to get ∂ ck = Dkj Δcj + νkj rj , ∂t j j
(13.3)
where the reaction rates are nonlinear functions of the concentrations: νkj rj = Fk ({ci }). (13.4) j
We assume for the diffusion coefficients the simplest case that the different species diffuse independently (but possibly with different diffusion constants)
13 Reaction–Diffusion Systems
and that the diffusion fluxes are parallel to the corresponding concentration gradients. We write the diffusion–reaction equation in matrix form ⎞ ⎛ ⎞ ⎛ ⎛ ⎞ ⎛ ⎞ D1 c1 F1 ({c}) c1 ∂ ⎜ . ⎟ ⎜ ⎟ ⎜ .. ⎟ ⎜ ⎟ .. .. (13.5) ⎠Δ⎝ . ⎠ + ⎝ ⎝ .. ⎠ = ⎝ ⎠ . . ∂t DN FN ({c}) cN cN or briefly ∂ c = DΔc + F(c). ∂t
(13.6)
13.2 Linearization Since a solution of the nonlinear equations is generally possible only numerically, we investigate small deviations from an equilibrium solution c0 1 with ∂ c0 = 0, ∂t
(13.7)
Δc0 = 0.
(13.8)
Obviously, the equilibrium corresponds to a root F(c0 ) = 0.
(13.9)
We linearize the equations by setting c = c0 + ξ
(13.10)
and expand around the equilibrium ∂ ∂F ξ = DΔξ + F(c0 + ξ) = DΔξ + ξ. ∂t ∂c c0
(13.11)
Denoting the matrix of derivatives by M0 , we can discuss several types of instabilities: Spatially constant solutions. For a spatially constant solution, we have ∂ ξ = M0 ξ ∂t
(13.12)
ξ = ξ 0 exp(M0 t).
(13.13)
with the formal solution Depending on the eigenvalues of M , there can be exponentially growing or decaying solutions, oscillating solutions, and exponentially growing or decaying solutions. 1
We assume tacitly that such a solution exists.
Plane waves. Plane waves are solutions of the linearized problem. Using the ansatz

ξ = ξ₀ e^{i(ωt−kx)}  (13.14)

gives

iωξ = −k²Dξ + M₀ξ = M_k ξ.  (13.15)

For a stable plane wave solution, λ = iω is an eigenvalue of

M_k = M₀ − k²D  (13.16)

with

ℜ(λ) ≤ 0.  (13.17)

If there are purely imaginary eigenvalues for some k, they correspond to stable solutions which are spatially inhomogeneous and lead to the formation of certain patterns.
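This criterion is easy to check numerically: one diagonalizes M_k = M₀ − k²D over a range of wavenumbers and looks for eigenvalues with positive real part at k > 0 while the k = 0 mode remains stable. A small Python sketch, with M₀ and D chosen as illustrative numbers (not values from the text):

import numpy as np

# Eigenvalues of M_k = M0 - k^2 D as a function of the wavenumber k (13.16).
# A plane-wave perturbation ~exp(i(wt - kx)) grows if some eigenvalue has a
# positive real part. M0 and D below are illustrative assumptions chosen so
# that a pattern-forming (Turing-type) band of unstable k appears.
M0 = np.array([[1.0, -1.0],
               [2.0, -1.5]])      # linearization of F at the equilibrium c0
D = np.diag([1.0, 10.0])          # diffusion constants of the two species

for k in np.linspace(0.0, 2.0, 21):
    lam = np.linalg.eigvals(M0 - k**2 * D)
    print(f"k = {k:4.2f}   max Re(lambda) = {lam.real.max():+.3f}")

With these numbers the k = 0 mode is stable, while a band of finite wavenumbers grows, i.e., a spatially inhomogeneous pattern forms.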
13.3 Fitzhugh–Nagumo Model

The Hodgkin–Huxley model (12.43) describes characteristic properties like the threshold behavior and the refractory period where the membrane is not excitable. Since the system of equations is rather complicated, a simpler model was developed by Fitzhugh (1961) and Nagumo (1962) which involves only two variables u, v (membrane potential and recovery variable):

u̇ = u − u³/3 − v + I(t),  (13.18)
v̇ = ε(u + a − bv).  (13.19)

The standard parameter values are

a = 0.7,  b = 0.8,  ε = 0.08.  (13.20)
Nagumo studied an electronic circuit with a tunnel diode which is described by rather similar equations (Fig. 13.1):

I = C V̇ + I_diode(V) + I_LR,  (13.21)
V = R I_LR + L İ_LR + E  →  İ_LR = (V − E − R I_LR)/L.  (13.22)

The Fitzhugh–Nagumo model can be used to describe the excitation propagation along the membrane. After rescaling the original equations and adding diffusive terms, we obtain the reaction–diffusion equations:

∂/∂t (u, v)ᵀ = ( u − u³/3 − v + I(t),  ε(u − bv + a) )ᵀ + diag(1, δ) Δ (u, v)ᵀ.  (13.23)
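A simple explicit finite-difference integration of (13.23) already shows propagating excitation pulses. The following Python sketch uses the standard parameters (13.20) on a one-dimensional chain with periodic boundaries; the grid spacing, time step, stimulus and the choice δ = 0 (only u diffuses) are assumptions made purely for illustration.

import numpy as np

# Explicit Euler / central-difference integration of the Fitzhugh-Nagumo
# reaction-diffusion equations (13.23) on a 1D chain (delta = 0, only u
# diffuses). Grid spacing, time step and stimulus are assumed values.
a, b, eps = 0.7, 0.8, 0.08           # standard parameters (13.20)
N, dx, dt = 200, 1.0, 0.05
u = -1.2 * np.ones(N)                # near the resting fixed point
v = (a + u) / b

for step in range(8000):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    I = np.zeros(N)
    if step * dt < 1.0:
        I[:5] = 1.0                  # brief stimulus at the left end
    u_new = u + dt * (u - u**3 / 3.0 - v + I + lap)
    v_new = v + dt * eps * (u - b * v + a)
    u, v = u_new, v_new
# u now contains an excitation pulse travelling along the chain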
Fig. 13.1. Nagumo circuit (tunnel diode in parallel with the capacitance C and a series R–L branch with battery E, driven by the current I)
Fig. 13.2. Nullclines of the Fitzhugh–Nagumo equations. The solutions of (13.24) and (13.25) are shown for different values of the current I = 0, 1, 2, 3
The evolution of the FN model can be easily represented in a two-dimensional u–v plot. The u-nullcline and the v-nullcline are defined by

u̇ = 0  →  v = u − u³/3 + I₀,  (13.24)
v̇ = 0  →  v = (a + u)/b.  (13.25)

The intersection of the nullclines is an equilibrium since here u̇ = v̇ = 0 (Fig. 13.2). Consider small deviations from the equilibrium values

u = u_eq + μ,  v = v_eq + η.  (13.26)

The linearized equations are

μ̇ = (1 − u_eq²)μ − η,  (13.27)
η̇ = ε(μ − bη).  (13.28)

Obviously, instability has to be expected for u_eq² < 1.
The matrix of derivatives

M₀ = [[1 − u_eq², −1], [ε, −εb]]  (13.29)
has the eigenvalues

λ = ½ [ 1 − u_eq² − εb ± √((1 − u_eq² + εb)² − 4ε) ].  (13.30)

The square root is zero for

u_eq = ±(1 + εb ± 2√ε)^{1/2}.  (13.31)

This defines a region around u_eq = ±1 where the eigenvalues have a nonzero imaginary part (Fig. 13.3). For standard parameters, we distinguish the regions

I:   |u_eq| > (1 + εb + 2√ε)^{1/2} = 1.277,
II:  0.706 < |u_eq| < 1.277,
III: |u_eq| < (1 + εb − 2√ε)^{1/2} = 0.706.  (13.32)

Within region II, the real part of both eigenvalues is (Fig. 13.4)

ℜλ = ½ (1 − u_eq² − εb)  (13.33)

and since

1 + εb − 2√ε < u_eq² < 1 + εb + 2√ε,  (13.34)

we have

1 − u_eq² − εb − 2√ε + 2εb < 0 < 1 − u_eq² − εb + 2√ε + 2εb,  (13.35)
Fig. 13.3. Imaginary part of the eigenvalues as a function of u_eq (regions I, II, III)
Fig. 13.4. Real part of the eigenvalues
ℜλ + εb − √ε < 0 < ℜλ + εb + √ε,  (13.36)
|ℜλ + εb| < √ε.  (13.37)

For standard parameters, −0.35 < ℜλ < 0.22. Oscillating instabilities appear in region II if

εb < √ε,  (13.38)

i.e., if

εb² < 1,  (13.39)

which is the case for standard parameters. Outside region II, instabilities appear if at least the larger of the two real eigenvalues is positive, i.e., if

√((1 − u_eq² + εb)² − 4ε) > u_eq² − 1 + εb.  (13.40)

This is the case if the right-hand side is negative, i.e., if

u_eq² < 1 − εb,  (13.41)

hence in the center of region III if εb < 1. For standard parameters 1 − εb = 0.936 and, according to (13.32), the whole region III is unstable. If the right-hand side of (13.40) is positive, we can take the square

(1 − u_eq² + εb)² − 4ε > (u_eq² − 1 + εb)²,  (13.42)

which simplifies to

u_eq² < 1 − 1/b.  (13.43)

This is false for b < 1 and hence for standard parameters the whole region I is stable.
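The classification into the regions I-III is easily reproduced numerically by diagonalizing M₀ from (13.29) for a range of equilibrium points. A short Python sketch for the standard parameters:

import numpy as np

# Eigenvalues of the Fitzhugh-Nagumo Jacobian (13.29) as a function of u_eq,
# for the standard parameters a = 0.7, b = 0.8, eps = 0.08.
b, eps = 0.8, 0.08

def eigenvalues(u_eq):
    M0 = np.array([[1.0 - u_eq**2, -1.0],
                   [eps, -eps * b]])
    return np.linalg.eigvals(M0)

print("region boundaries:",
      np.sqrt(1 + eps * b - 2 * np.sqrt(eps)),
      np.sqrt(1 + eps * b + 2 * np.sqrt(eps)))
for u_eq in (0.5, 0.9, 1.1, 1.5):
    lam = eigenvalues(u_eq)
    print(f"u_eq = {u_eq}: lambda = {lam}, unstable = {lam.real.max() > 0}")

The printed boundaries coincide with the values 0.706 and 1.277 of (13.32), and the sample points illustrate the unstable regions III and (partly) II as well as the stable regions around |u_eq| = 1.1 and in region I.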
Part V
Reaction Rate Theory
14 Equilibrium Reactions
In this chapter, we study chemical equilibrium reactions. In thermal equilibrium of forward and backward reactions, the overall reaction rate vanishes and the ratio of the rate constants gives the equilibrium constant, which usually shows an exponential dependence on the inverse temperature.1
14.1 Arrhenius Law

Reaction rate theory goes back to Arrhenius, who in 1889 investigated the temperature-dependent rates of the inversion of sugar in the presence of acids. Empirically, a temperature dependence is often observed of the form

k(T) = A e^{−E_a/k_B T}  (14.1)

with the activation energy E_a. Considering a chemical equilibrium (Fig. 14.1)

A ⇌ B  (forward rate constant k, backward rate constant k′),  (14.2)

this gives for the equilibrium constant

K = k/k′  (14.3)

and

ln K = ln k − ln k′ = ln A − ln A′ − (E_a − E′_a)/(k_B T).  (14.4)

¹ An overview of the development of rate theory during the past century is given in [45].
Fig. 14.1. Arrhenius law. Schematic energy profile along the reaction coordinate: reactants and products are separated by the activation energies E_a and E′_a; their difference is the reaction enthalpy ΔH⁰
In equilibrium, the thermodynamic forces vanish:

T = const,  (14.5)
A = Σ_k μ_k ν_k = 0.  (14.6)

For dilute solutions with

μ_k = μ⁰_k + k_B T ln c_k,  (14.7)

we have

Σ_k μ⁰_k ν_k + k_B T Σ_k ν_k ln c_k = 0,  (14.8)

which gives the van't Hoff relation for the equilibrium constant

ln(K_c) = Σ_k ν_k ln c_k = −Σ_k μ⁰_k ν_k/(k_B T) = −ΔG⁰/(k_B T).  (14.9)
The standard reaction free energy can be divided into an entropic and an energetic part:

−ΔG⁰/(k_B T) = −ΔH⁰/(k_B T) + ΔS⁰/k_B.  (14.10)

Since volume changes are not important at atmospheric pressure, the reaction enthalpy gives the activation energy difference

E_a − E′_a = ΔH⁰.  (14.11)
A catalyst can only change the activation energies but never the difference ΔH 0 .
14.2 Statistical Interpretation of the Equilibrium Constant

The chemical potential can be obtained as

μ_k = (∂F/∂N_k)_{T,V,N_k′} = −k_B T (∂ ln Z/∂N_k)_{T,V,N_k′}.  (14.12)

Using the approximation of the ideal gas, we have

Z = Π_k z_k^{N_k}/N_k!  (14.13)

and

ln Z ≈ Σ_k (N_k ln z_k − N_k ln N_k + N_k),  (14.14)

which gives the chemical potential

μ_k = −k_B T ln(z_k/N_k).  (14.15)
Let us consider a simple isomerization reaction

A ⇌ B.  (14.16)

The partition functions for the two species are (Fig. 14.2)

z_A = Σ_{n=0,1,…} e^{−ε_n(A)/k_B T},   z_B = Σ_{n=0,1,…} e^{−ε_n(B)/k_B T}.  (14.17)
In equilibrium, μB − μA = 0,
(14.18)
Fig. 14.2. Statistical interpretation of the equilibrium constant. The energy levels ε_n(A) and ε_n(B) of the two species are separated by the zero-point energy difference Δε₀
−k_B T ln(z_B/N_B) = −k_B T ln(z_A/N_A),  (14.19)

z_B/z_A = N_B/N_A = (N_B/V)(N_A/V)⁻¹ = K_c,  (14.20)

K_c = Σ_n e^{−ε_n(B)/k_B T} / Σ_n e^{−ε_n(A)/k_B T}
    = [Σ_n e^{−(ε_n(B)−ε_0(B))/k_B T} / Σ_n e^{−(ε_n(A)−ε_0(A))/k_B T}] e^{−Δε₀/k_B T}.  (14.21)
This is nothing but a thermal distribution over all energy states of the system.
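Equation (14.21) can be evaluated directly once the energy levels of the two species are known. The following Python sketch uses two arbitrary harmonic ladders purely for illustration; the level spacings and the zero-point shift Δε₀ are assumptions.

import numpy as np

# Equilibrium constant (14.21) as a ratio of partition functions.
# Level spacings and the zero-point shift are illustrative assumptions.
kBT = 200.0                              # thermal energy in cm^-1 (about 300 K)
eps_A = 500.0 * np.arange(20)            # levels of species A (cm^-1)
eps_B = 1500.0 + 300.0 * np.arange(20)   # species B, shifted by Delta_eps0 = 1500 cm^-1

zA = np.sum(np.exp(-eps_A / kBT))
zB = np.sum(np.exp(-eps_B / kBT))
Kc = zB / zA
print("Kc =", Kc, " compare exp(-Delta_eps0/kBT) =", np.exp(-1500.0 / kBT))

The ratio differs from the bare Boltzmann factor e^{−Δε₀/k_BT} only by the ratio of the internal sums over the level manifolds, as (14.21) states.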
15 Calculation of Reaction Rates
The activated behavior of the reaction rate can be understood from a simple model of two colliding atoms (Fig. 15.1). In this chapter, we discuss the connection to transition state theory, which takes into account the internal degrees of freedom of larger molecules and explains not only the activation energy, but also the prefactor of the Arrhenius law [27, 28, 46].
15.1 Collision Theory The Arrhenius expression consists of two factors. The exponential gives the number of reaction partners with enough energy and the prefactor gives the collision frequency times the efficiency. We consider the collision of two spherical particles A and B in a coordinate system, where B is fixed and A is moving with the relative velocity vr = vA − vB .
(15.1)
The two particles collide if the distance between their centers is smaller than the sum of the radii:

d = r_A + r_B.  (15.2)

During a time interval dt, the number of possible collision partners is

c_B πd² v_r dt  (15.3)

and the number of collisions within a volume V is

N_coll/V = c_A c_B π(r_A + r_B)² v_r dt.  (15.4)

Assuming independent Maxwell distributions for the velocities

f(v_a, v_b) d³v_a d³v_b = (m_a m_b)^{3/2}/(2πk_B T)³ exp(−m_a v_a²/2k_B T − m_b v_b²/2k_B T) d³v_a d³v_b,  (15.5)
Fig. 15.1. Collision of two particles
we have a Maxwell distribution also for the relative velocities

f(v_r) d³v_r = (μ/2πk_B T)^{3/2} exp(−μv_r²/2k_B T) d³v_r  (15.6)

with the reduced mass

μ = M_A M_B/(M_A + M_B).  (15.7)

The average relative velocity then is

⟨v_r⟩ = √(8k_B T/πμ)  (15.8)

and the collision frequency is

dN_coll/(V dt) = c_a c_b πd² √(8k_B T/πμ).  (15.9)

If both particles are of the same species, we have instead

dN_coll/(V dt) = ½ c_a² πd² √(16k_B T/πM),  (15.10)
since each collision involves two molecules. Since not every collision leads to a chemical reaction, we introduce the reaction cross section σ(E), which depends on the kinetic energy of the relative motion. As a simple approximation, we use the reaction cross section of hard spheres:

σ(E) = 0 for E < E₀,   σ(E) = π(r_a + r_b)² for E > E₀,  (15.11)

where E₀ is the minimum energy necessary for a reaction to occur (Fig. 15.2). The distribution of relative kinetic energy can be determined as follows. From the one-dimensional Maxwell distribution

f(v) dv = √(m/2πk_B T) e^{−mv²/2k_B T} dv,  (15.12)
Fig. 15.2. Hard-sphere approximation to the reaction cross section
we find the distribution of relative kinetic energy for one particle as¹

f(E_a) dE_a = 2 √(m_a/2πk_B T) e^{−E_a/k_B T} dE_a/√(2m_a E_a) = e^{−E_a/k_B T} dE_a/√(πk_B T E_a).  (15.13)

The joint distribution for the two particles is

f(E_a, E_b) dE_a dE_b = [e^{−E_a/k_B T}/√(πk_B T E_a)] dE_a [e^{−E_b/k_B T}/√(πk_B T E_b)] dE_b  (15.14)

and the distribution of the total kinetic energy is given by

f(E) = ∫₀^E f(E_a, E − E_a) dE_a
     = (1/πk_B T) ∫₀^E e^{−E_a/k_B T} e^{−(E−E_a)/k_B T} dE_a/√(E_a(E − E_a))
     = (e^{−E/k_B T}/πk_B T) ∫₀^E dE_a/√(E_a(E − E_a)).  (15.15)

This integral can be evaluated by the substitution E_a = E sin²θ, dE_a = 2E sinθ cosθ dθ,

∫₀^E dE_a/√(E_a(E − E_a)) = ∫₀^{π/2} 2E sinθ cosθ dθ/√(E sin²θ · E cos²θ) = ∫₀^{π/2} 2 dθ = π  (15.16)

and we find

f(E) = (1/k_B T) e^{−E/k_B T}.  (15.17)

Now

∫₀^∞ f(E) σ(E) dE = ∫_{E₀}^∞ f(E) πd² dE = πd² e^{−E₀/k_B T}  (15.18)

¹ There is a factor of 2 since v² = (−v)².
and we have finally

r = c_a c_b πd² √(8k_B T/πμ) e^{−E₀/k_B T}.  (15.19)

For nonspherical molecules, the reaction rate depends on the relative orientation. Therefore, a so-called steric factor p is introduced:

r = c_a c_b p πd² √(8k_B T/πμ) e^{−E₀/k_B T}.  (15.20)

There is no general way to calculate p and often it has to be estimated. Together, p and d² give an effective molecular diameter²

d²_eff = p d².  (15.21)
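The collision-theory rate (15.20) is easily evaluated numerically. In the Python sketch below, the masses, radii, steric factor and threshold energy are arbitrary illustrative values; the result is converted to the usual units of L mol⁻¹ s⁻¹.

import numpy as np

# Collision-theory rate constant (15.20): k = p * pi d^2 * <v_r> * exp(-E0/kBT).
# All molecular parameters are illustrative assumptions.
kB = 1.380649e-23          # J/K
NA = 6.02214076e23         # 1/mol
T = 300.0
mA = 30e-3 / NA            # kg, molecule of ~30 g/mol
mB = 50e-3 / NA            # kg, molecule of ~50 g/mol
rA, rB = 2.0e-10, 2.5e-10  # radii in m
p = 1e-2                   # steric factor (assumed)
E0 = 40e3 / NA             # threshold energy, 40 kJ/mol per molecule

mu = mA * mB / (mA + mB)
v_rel = np.sqrt(8 * kB * T / (np.pi * mu))                          # Eq. (15.8)
k_pair = p * np.pi * (rA + rB)**2 * v_rel * np.exp(-E0 / (kB * T))  # m^3/s per pair
k_molar = k_pair * NA * 1e3                                         # L mol^-1 s^-1
print(f"<v_r> = {v_rel:.0f} m/s,  k = {k_molar:.3e} L/(mol s)")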
15.2 Transition State Theory

According to transition state theory [47, 48], the reactants form an equilibrium amount of an activated complex which decomposes to yield the products (Figs. 15.3 and 15.4). The reaction path [49] leads over a saddle point called the transition state. According to TST, the reaction of two substances A and B is written

A + B  ⇌  [A−B]‡  →  C + D  (15.22)
(reactants)   (activated complex)   (products)

with the equilibrium constant K for the first step and the decomposition rate constant k₁ for the second.

Fig. 15.3. Transition state theory. The reactants are in chemical equilibrium with the activated complex, which can decompose into the products (free energy along the reaction coordinate)

² The quantity πd²/4 is equivalent to the collision cross section used in connection with nuclear reactions.
Fig. 15.4. Potential energy contour diagram. The potential energy for the reaction A+BC → [A−B−C]‡ → AB+C is shown schematically as a function of the distances RAB and RBC . In general, three coordinates are needed to give the relative positions of the nuclei, but if atom A approaches the molecule BC, the direction of minimum potential energy is along the line of centers for the reaction. Arrows indicate the reaction coordinate. The activated complex corresponds to a saddle point of the potential energy surface
If all molecules behave as ideal gases, the equilibrium constant K may be calculated as

K_c = c_{[A−B]‡}/(c_A c_B) = z_{[A−B]‡}/(z_A z_B) e^{−ΔE₀/k_B T},  (15.23)

where ΔE₀ is the difference in zero-point energies of the reactants and the activated complex. The activated complex may be considered a normal molecule, except that one of its vibrational modes is so loose that it allows immediate decomposition into the products. The corresponding contribution to the partition function is

z_{ω→0} = 1/(1 − e^{−ℏω/k_B T}) → k_B T/ℏω.  (15.24)
Thus, the equilibrium constant becomes

K = (k_B T/ℏω) z‡/(z_A z_B) e^{−ΔE₀/k_B T},  (15.25)

where z‡ differs from z_{[A−B]‡} by the contribution of the unique vibrational mode. If ω/2π is considered the decomposition frequency, then the rate of decomposition is

r = dc_C/dt = (ω/2π) c_{[A−B]‡} = (k_B T/2πℏ) z‡/(z_A z_B) e^{−ΔE₀/k_B T} c_A c_B = k₂ c_A c_B.  (15.26)
If not every activated complex decays into products, the expression for the rate constant has to be modified by a transmission coefficient κ:

k₂ = κ (k_B T/2πℏ) z‡/(z_A z_B) e^{−ΔE₀/k_B T}.  (15.27)

In most cases, κ is close to 1. Important exceptions are the formation of a diatomic molecule, where the excess energy can only be eliminated by third-body collisions or radiation, and reactions that involve a change from one type of electronic state to another – for instance, certain cis–trans isomerizations.
15.3 Comparison Between Collision Theory and Transition State Theory

For comparison, we apply transition state theory to the reaction of two spherical particles. For the reactants, only translational degrees of freedom are relevant, giving the partition function

z_T = (2πm k_B T)^{3/2}/h³.  (15.28)

For the activated complex, we consider the rotation around the center of mass. The moment of inertia is

I = (r_A + r_B)² m_A m_B/(m_A + m_B)  (15.29)

and the partition function is

z_R = 8π² I k_B T/h².  (15.30)

The rate expression from TST is

k = (k_B T/h) × [(2π(m_A + m_B)k_B T)^{3/2}/h³] × [8π² I k_B T/h²] × [h⁶/((2πk_B T)³ (m_A m_B)^{3/2})] e^{−ΔE₀/k_B T}
  = (r_A + r_B)² √(8πk_B T (m_A + m_B)/(m_A m_B)) e^{−ΔE₀/k_B T},  (15.31)
which is the same result as obtained from collision theory. We consider now a bimolecular reaction. Each reactant has three translational, three rotational, and 3N − 6 vibrational3 degrees of freedom. The 3
3N − 5 for a collinear molecule
partition functions are

z_A = z_T³ z_R³ z_V^{3N_A−6},
z_B = z_T³ z_R³ z_V^{3N_B−6},
z‡  = z_T³ z_R³ z_V^{3N_A+3N_B−7}.  (15.32)

The reaction rate from TST is

k_TST = (k_B T/h) z‡/(z_A z_B) e^{−E₀/k_B T} ≈ (k_B T/h) (z_V⁵/(z_T³ z_R³)) e^{−E₀/k_B T}.  (15.33)
For the rigid sphere model

z_A = z_B = z_T³,   z‡ = z_T³ z_R²,  (15.34)

hence the rate constant is

k_rigid = (k_B T/h) (z_R²/z_T³) e^{−E₀/k_B T}.  (15.35)

From comparison, we see that the steric factor is

p ≈ (z_V/z_R)⁵,  (15.36)

which is typically of the order of 10⁻⁵.
15.4 Thermodynamical Formulation of TST

Consider again the reaction

A + B ⇌ [A−B]‡ → products  (15.37)

with the equilibrium constant

K_c = c_{AB‡}/(c_A c_B).  (15.38)

The TST rate expression (15.25) gives⁴

k₂ = (k_B T/2πℏ) K_c  (15.39)

⁴ The activated complex is treated as a normal molecule with the exception of the special mode.
and with (14.9) we have for an ideal solution

k₂ = (k_B T/2πℏ) e^{−ΔG^{0‡}/k_B T} = (k_B T/2πℏ) e^{−ΔH^{0‡}/k_B T} e^{ΔS^{0‡}/k_B}.  (15.40)
The temperature dependence of the rate constant is

d ln k₂/dT = 1/T − ∂/∂T (ΔG^{0‡}/k_B T).  (15.41)

Now for ideal gases or solutions, the chemical potential has the form

μ = k_B T ln c − k_B T ln(z/V),  (15.42)

hence

ΔG^{0‡} = −k_B T Σ_i ν_i ln(z_i/V)  (15.43)

and

−∂/∂T (ΔG^{0‡}/k_B T) = Σ_i ν_i ∂(ln z_i)/∂T = Σ_i ν_i U_i/(k_B T²) = ΔU^{0‡}/(k_B T²).  (15.44)

Comparison with the Arrhenius law

d/dT (ln A − E_a/k_B T) = E_a/(k_B T²)  (15.45)

shows that the activation energy is

E_a = k_B T + ΔU^{0‡} ≈ k_B T + ΔH^{0‡}  (ideal solutions).  (15.46)
kB T ΔS 0‡ /kB −ΔH 0‡ /kB T kB T ΔS 0‡ /kB −(Ea −kB T )/kB T e e e = e h h
(15.47)
that the steric factor is determined by the entropy of formation of the activated complex.
15.5 Kinetic Isotope Effects There are two origins of the kinetic isotope effect. The first is quantum mechanical tunneling through the potential barrier. It is only important at very low temperatures and for reactions involving very light atoms. For the simplest model case of a square barrier, the tunneling probability is approximately $ # 2m(V0 − E) a . (15.48) P = exp −2 2
15.5 Kinetic Isotope Effects
167
ln(k) V0 E
tunneling
activated 1/kBT
a
Fig. 15.5. Tunneling through a rectangular barrier. Left: the tunneling probability depends on the height and width of the barrier. Right: at low temperatures, tunneling dominates (dashed ). At higher temperatures, activated behavior is observed (dashdotted ) activated complex
reactants
Eo(H) Eo(D)
ln(k)
Eo(H) Eo(D) Ea(H) Ea(D)
1/kBT
Fig. 15.6. Isotope dependence of the activation energy
The tunneling probability depends on the mass m but is independent of temperature (Fig. 15.5). The second origin of isotope effects is the difference in activation energy for reactions involving different isotopes. Vibrational frequencies and therefore vibrational zero-point energies depend on the reduced mass μ of the vibrating system: % ω =
2π
f . μ
(15.49)
Since the transition state is usually more loosely bound than the reactants, the vibrational levels are more closely spaced. Therefore, the activation energy is higher for the heavier isotopomer (a normal isotope effect). The maximum isotope effect is obtained when the bond involving the isotope is completely broken in the transition state. Then, the difference in activation energies is simply the difference in zero-point energies of the reactants (Fig. 15.6).
168
15 Calculation of Reaction Rates
15.6 General Rate Expressions The potential energy surface has a saddle point in the transition state region. The surface S ∗ , passing through that saddle point along the direction of steepest ascent, defines the border separating the reactants from the products in quite a natural way. We introduce the so-called intrinsic reaction coordinate (IRC) qr as the path of steepest descent from the cole connecting reactants and products. We set qr = 0 for the points on the surface S ∗ . The instantaneous rate r(t) is given by the flux of systems that cross the surface S ∗ at time t and are on the product side at t → ∞. This definition allows for multiple crossings of the surface S ∗ . In general, however, it will depend on how exactly the surface S ∗ is chosen. Only if the fluxes become stationary, this dependence vanishes. 15.6.1 The Flux Operator Classically, the flux is the product of the number of systems passing through the surface S ∗ and their velocity vr normal to that surface. As we defined the reaction coordinate qr to be normal to the surface S ∗ , the velocity is vr =
dqr dt
(15.50)
and the classical flux is pr F = dn−1 q dn p vr W (q, p, t) = dn q dn p δ(qr ) W (q, p, t). mr (15.51) Here, W (q, p, t) is the phase-space distribution and mr is the reduced mass corresponding to the coordinate qr . We may therefore define the classical flux operator pr (15.52) F (q, p) = δ(qr ) . mr In the quantum mechanical case, we have to modify this definition since p and q do not commute. It can be shown that we have to use the symmetrized product 1 1 F = (pr δ(qr ) + δ(qr )pr ) = {pr , δ(qr )} (15.53) 2mr 2mr with pr =
∂ . i ∂qr
(15.54)
The projection operator on to the systems on the product side is simply given by P = θ(qr ). (15.55)
15.6 General Rate Expressions
169
However, we need the projector P (t) on that states that at some time t in the future are on the product side. If H is the system Hamiltonian, then P (t) = eiHt/ θ(qr )eiHt/
(15.56)
and that part of the density matrix ρ(t) that will end in the product side in the far future is (15.57) lim P (t − t)ρ(t)P (t − t). t →∞
The rate is given by r(t) = lim tr (P (t − t)ρ(t)P (t − t)F ) . t →∞
(15.58)
The time dependence of the density matrix is given by the van Neumann equation dρ(t) = [H, ρ(t)]. (15.59) i dt In thermal equilibrium, ρeq = Q−1 exp (−βH) ,
Q = tr (exp(−βH))
(15.60)
is time independent as [exp(−βH), H] = 0. It can be shown that for t → ∞, the density matrix ρeq and the projector P (t) commute5 : lim P (t − t)ρeq P (t − t) = lim P (t )ρeq .
t →∞
t →∞
The rate will then be also independent of t:
r = Q−1 lim tr e−βH eiHt / θ(qr ) e−iHt / F . t →∞
(15.61)
(15.62)
This expression can be modified such that the limit operation does not appear explicitly any more. To that end, we note that the trace vanishes for t =06 and
t = ∞ −1 −βH iHt / −iHt / e θ(qr ) e F r = Q tr e t =0 ∞
d −βH iHt / −iHt / = Q−1 e θ(q ) e F dt . (15.63) e r dt 0 5
6
Asymptotically, the states on the product side will leave the reaction zone with positive momentum pr , so that we may replace θ(qr ) with θ(pr ). Again asymptotically, these states are eigenfunctions of the Hamiltonian, so that P (t = ∞) and exp(−βH) commute. This follows from time inversion symmetry: the trace has to be symmetric with respect to that operation. Time inversion changes pr → −pr . The operators ρ and qr are not affected. So the trace has to be symmetric and antisymmetric at the same time.
170
15 Calculation of Reaction Rates
For the derivative, we find i d iHt / θ(qr ) e−iHt / = eiHt / [H, θ(qr )] e−iHt / . e dt
(15.64)
Since only the kinetic energy p2r /2mr does not commute with θ(qr ), the commutator is - 2 . ∂ pr ∂ pr pr [H, θ(qr )] = , θ(qr ) = θ(qr ) − θ(qr ) 2mr i ∂qr 2mr ∂qr 2mr pr pr ∂ ∂ pr = δ(qr ) + θ(qr ) − θ(qr ) i 2mr 2mr ∂qr ∂qr 2mr pr pr = F. (15.65) δ(qr ) + δ(qr ) = i 2mr 2mr i Therefore, we can express the reaction rate as an integral over the flux–flux correlation function ∞ ∞ & ' r = Q−1 tr e−βH F (t)F (0) dt = F (t)F (0)dt. (15.66) 0
0
This expression for the reaction rate has been used extensively in numerical computations. It is quite general and may easily be extended to nonadiabatic reactions.
Problems 15.1. Transition State Theory v [AB] A+B δx
Instead of using a vibrational partition function to describe the motion of the activated complex over the reaction barrier, we can also use a translational partition function. We consider all complexes lying within a distance δx of the barrier to be activated complexes. Use the translational partition function for a particle of mass m in a box of length δx to obtain the TST rate constant. Assume that the average velocity of the particles moving over the barrier is7
1 2kB T ‡ v = . 2 πm‡ 7
The particle moves to the left or right side with equal probability.
15.6 General Rate Expressions
171
15.2. Harmonic Transition State Theory For systems such as solids, which are well described as harmonic oscillators around stationary points, the harmonic form of TST is often a good approximation, which can be used to evaluate the TST rate constant, which can then be written as the product of the probability of finding the system in the transition state and the average velocity at the transition state: kTST = vδ(x − x‡ ). For a one-dimensional model, assume that the transition state is an exit point from the parabola at some position x = x‡ with energy ΔE = mω 2 x‡2 /2 and evaluate the TST rate constant
ΔE x
x
16 Marcus Theory of Electron Transfer
In 1992, Rudolph Marcus received the Nobel Prize for chemistry. His theory is currently the dominant theory of electron transfer in chemistry. Originally, Marcus explained outer-sphere electron transfer, introducing reorganization energy and electronic coupling to relate the thermodynamic transition state to nonequilibrium fluctuations of the medium polarization [50–52].
16.1 Phenomenological Description of ET We want to describe the rate of electron transfer (ET) between two species, D and A, in solution. If D and A are neutral, this is a charge separation process: D + A → D+ + A − .
(16.1)
If the charge is transferred from one molecule to the other, this is a charge resonance process (16.2) D− + A → D + A− or D + A+ → D+ + A.
(16.3)
In the following, we want to treat all these kinds of processes together. Therefore from now on, the symbols D and A implicitly have the meaning DZD and AZA and the general reaction scheme DZD + AZA → DZD +1 + AZA −1
(16.4)
will be simply denoted as D + A → D+ + A − .
(16.5)
Marcus theory is a phenomenological theory, in which the assumption is made, that ET proceeds in the following three steps:
174
16 Marcus Theory of Electron Transfer
1. The two species approach each other by diffusive motion and form the so-called encounter complex: kdif ‡ D + A [D − A] . k−dif
(16.6)
2. In the activated complex, ET takes place [D − A]
‡
ket [D+ A− ]‡ . k−et
(16.7)
The actual ET is much faster than the motion of the nuclei. The nuclear conformation of the activated complex and the solvent polarization do not change. According to the Franck–Condon principle, the two states [D − A]‡ and [D+ A− ]‡ have to be isoenergetic. 3. After the transfer, the polarization returns to a new equilibrium and the products separate ksep + ‡ [D A− ] → D+ + A− . (16.8) The overall reaction rate is r=
d c − = ksep c[D+ A− ]‡ . dt A
(16.9)
We want to calculate the observed quenching rate r=−
d cD = kq cD cA . dt
(16.10)
We consider the stationary state 0=
d d ‡ = c c + − ‡. dt [D−A] dt [D A ]
(16.11)
We deduce that 0 = kdif cD cA + k−et c[D+ A− ]‡ − (k−dif + ket )c[D−A]‡ , 0 = ket c[D−A]‡ − (k−et + ksep )c[D+ A− ]‡ .
(16.12) (16.13)
Eliminating the concentration of the encounter complex from these two equations, we find ket c ‡ k−et + ksep [D−A] ket kdif = cD cA k−dif k−et + k−dif ksep + ket ksep
c[D+ A− ]‡ =
(16.14)
16.2 Simple Explanation of Marcus Theory
175
and the observed quenching rate is kq =
kdif 1+
k−dif ket
+
k−dif k−et ket ksep
(16.15)
.
The reciprocal quenching rate can be written as 1 k−dif k−dif k−et 1 = + + . kq kdif kdif ket kdif ket ksep
(16.16)
There are some interesting limiting cases: •
If ket k−dif and ksep k−et , then the observed quenching rate is given by the diffusion rate kq = kdif and the ET rate is limited by the formation of the activated complex. • If the probability of ET is very small ket k−dif and k−et kdif , then the quenching rate is given by kq = ket kdif /k−dif = ket Kdif . • If diffusion is not important as in solids or proteins and in the absence of further decay channels, the quenching rate is given by kq ≈ ket.
16.2 Simple Explanation of Marcus Theory The TST expression for the ET rate is ket =
‡ ωN κet e−ΔG /kB T , 2π
(16.17)
where κet describes the probability of a transition between the two electronic states DA and D+ A− , and ωN is the effective frequency of the nuclear motion. We consider a collective reaction coordinate Q which summarizes the polarization coordinates of the system (Fig. 16.1). Initially, the polarization is in equilibrium with the reactants and for small polarization fluctuations, series expansion of the Gibbs free energy gives a 2 Gr (Q) = G(0) r + Q . 2
Gr (Q)
(16.18)
Gp(Q) ER ΔG ΔG Q =0 Qs
Q1
Q
Fig. 16.1. Displaced oscillator model
176
16 Marcus Theory of Electron Transfer ER
−ΔG
activationless
activated
ln(k)
inverted
Fig. 16.2. Marcus parabola
If the polarization change induced by the ET is not too large (no dielectric saturation), the potential curve for the final state will have the same curvature but is shifted to a new equilibrium value Q1 : a a 2 (0) 2 Gp (Q) = G(0) p + (Q − Q1 ) = Gr + ΔG + (Q − Q1 ) . 2 2
(16.19)
In the new equilibrium, the final state is stabilized by the reorganization energy1 a ER = −(Gp (Q1 ) − Gp (0)) = Q21 . (16.20) 2 The transition between the two states is only possible if due to some thermal fluctuations the crossing point is reached which is at Qs =
a 2 2 Q1
+ ΔG ER + ΔG = √ . aQ1 2aER
(16.21)
The corresponding activation energy is ΔG‡ =
a 2 (ΔG + ER )2 Qs = 2 4ER
(16.22)
hence the rate expression becomes ket =
ωN (ΔG + ER )2 κet exp − . 2π 4ER kB T
(16.23)
If we plot the logarithm of the rate as a function of the driving force ΔG, we obtain the famous Marcus parabola which has a maximum at ΔG = −ER (Fig. 16.2). 1
Sometimes the reorganization energy is defined as the negative quantity Gp (Q1 )− Gp (0).
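The shape of this parabola follows directly from (16.23). The Python sketch below evaluates the activation factor for a fixed reorganization energy over a range of driving forces; the values of E_R and of the prefactor ω_N κ_et/2π are assumptions chosen only for illustration.

import numpy as np

# Marcus rate (16.23): k_et = (omega_N/2pi) kappa_et exp(-(dG + E_R)^2/(4 E_R kBT)).
# E_R and the prefactor are assumed numbers for illustration.
kBT = 0.025          # eV at room temperature
E_R = 0.5            # reorganization energy in eV (assumed)
prefactor = 1e13     # omega_N * kappa_et / 2pi in s^-1 (assumed)

for dG in (-1.5, -1.0, -0.5, -0.2, 0.0, 0.2):
    k = prefactor * np.exp(-(dG + E_R)**2 / (4 * E_R * kBT))
    if dG < -E_R:
        regime = "inverted"
    elif abs(dG + E_R) < 1e-9:
        regime = "activationless"
    else:
        regime = "normal"
    print(f"dG = {dG:+.1f} eV   k = {k:.2e} s^-1   ({regime})")

The rate is maximal at ΔG = −E_R and decreases again for more strongly exergonic reactions, which is the inverted region of the Marcus parabola.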
16.3 Free Energy Contribution of the Nonequilibrium Polarization
177
16.3 Free Energy Contribution of the Nonequilibrium Polarization In Marcus theory, the solvent is treated in a continuum approximation. The medium is characterized by its static dielectric constant εst and its optical dielectric constant εop = ε0 n2 . The displacement D is generated by the charge distribution : div D = . (16.24) In equilibrium, it is related to the electric field by D = ε0 E + P = εE.
(16.25)
Here P is the polarization, which is nothing else but the influenced dipole density. It consists of two parts [53]: 1. The optical or electronical polarization Pop . It is due to the induced dipole moments due to the response of the molecular electrons to the applied field. At optical frequencies, the electrons still move fast enough to follow the electric field adiabatically. 2. The inertial polarization Pin which is due to the reorganization of the solvent molecules (reorientation of the dipoles, changes of molecular geometry). This part of the polarization corresponds to the static dielectric constant. The nuclear motion of the solvent molecules cannot follow the rapid oscillations at optical frequencies. The total frequency-dependent polarization is P(ω) = Pop (ω) + Pin (ω) = D(ω) − ε0 E(ω) = (ε(ω) − ε0 )E(ω).
(16.26)
In the static limit, this becomes Pop + Pin = (εst − ε0 )E
(16.27)
Pop = (εop − ε0 )E.
(16.28)
and at optical frequencies
Hence the contribution of the inertial polarization is in the static limit Pin = (εst − εop )E.
(16.29)
In general, rot D = 0 and the calculation of the field for given charge distribution is difficult. Therefore, we use a model of spherical ions behaving as ideal conductors with the charge distributed over the surface. Then in our case, the displacement is normal to the surface and rot D = 0 and the displacement is the same as it would be generated by the charge distribution in vacuum. The corresponding electric field in vacuum would be ε−1 0 D and the field in
178
16 Marcus Theory of Electron Transfer
the medium can be expressed as 1 1 1 (D − P) = D − (Pop + Pin ) ε0 ε0 ε0 1 1 = (D − Pin ) − (εop − ε0 )E, ε0 ε0 1 (D − Pin ), E= εop E=
(16.30)
where we assumed that the optical polarization reacts instantaneously to changes of the charge distribution or to fluctuations of the inertial polarization. In equilibrium, (16.29) gives 1 (D − (εst − εop )E), εop 1 E= D. εst
E=
(16.31)
If the polarization is in equilibrium with the field (1/ε0 )D produced by the charge distribution of either AB or A+ B− , the free energy G of the two configurations will, in general, not be the same. We designate these two configurations as I and II, respectively. Marcus calculated the free energy of a nonequilibrium polarization for which the free energies of the reaction complexes [AB]‡ and [A+ B− ]‡ become equal. The contribution to the free energy is 1 (16.32) Gel = d3 r E dD = d3 r (D − Pin )dD, εop which can be divided into two contributions: 1. The energy without inertial polarization 1 D2 . d3 r Gel,0 = εop 2
(16.33)
2. The contribution of the inertial polarization 1 − d3 rPin dD. εop
(16.34)
In equilibrium, we have Pin
1 = (εst − εop )E = (εst − εop ) D = εop εst
1 1 − εop εst
and we can express the free energy as a function of Pin 2 Pin 1 eq 3 1 d r Gel = Gel,0 − 1 . 2 εop − ε1st εop
D
(16.35)
(16.36)
16.3 Free Energy Contribution of the Nonequilibrium Polarization
179
P (2)
P Peq (1)
DI/II D*I/II
D
Fig. 16.3. Calculation of the free energy change
Now, we have to calculate the free energies of the activated complexes. Consider a thermal fluctuation of the inertial polarization. We will do this in two steps (Fig. 16.3). In the first step, it is assumed that the polarization is always in equilibrium with the charge distribution. However, this charge density will be chosen such that the nonequilibrium polarization P‡in of the activated complex is generated. The electric fields are D∗ and E∗ : 1 1 1 ‡ Pin = − (16.37) D∗ . εop εop εst The corresponding free energy is ΔG1el =
1 εop
D∗
D dD −
d3 r D
= G∗el,0 − Gel,0 −
1 εop
1 −
1 εst
D∗ 1 d3 r Pin dD εop D P‡2 − P2 d3 r in 2 in . 2εop
(16.38)
In the second step, the electric field of the charge distribution will be reduced to the equilibrium value while keeping the nonequilibrium polarization fixed.2 Then this fixed polarization acts as an additional displacement: E=
1 1 (D − P‡in ) − Pop , ε0 ε0
(16.39)
where the optical polarization depends on the electric field according to (16.28) εop 1 ‡ − 1 E. (16.40) E = (D − Pin ) − ε0 ε0 2
‡ We tacitly assume that this is possible. In general, the polarization Pin will also modify the charge distribution on the reactants. If, however, the distance between the two ions is large in comparison with the radii, these changes can be neglected. Marcus calls this the point-charge approximation.
180
16 Marcus Theory of Electron Transfer
Hence, if we treat the optical polarization as following instantaneously, the slow fluctuations of the inertial polarization 1 (D − P‡in ), εop
E=
(16.41)
which means the inertial polarization is shielded by the optical polarization it creates. We may now calculate the change in free energy as 1 εop
D
(D − P‡in )dD 1 ‡ ∗ P (D − D∗ ). = Gel,0 − Gel,0 − d3 r εop in
ΔG2el =
d3 r
D∗
(16.42)
If we substitute D∗ from (16.37), we get 1 ‡ P (D − D∗ ) = εop in
1 εop
1 −
1 εst
P‡in εop
Pin − P‡in εop
(16.43)
and the total free energy change due to the polarization fluctuation is 1 2 G‡el − Geq el = ΔGel + ΔGel 1 P‡2 − P2 d3 r in 2 in =− 1 1 2εop εop − εst 1 1 − 1 d3 r 2 P‡in (Pin,eq − P‡in ) 1 εop εop − εst 2 1 P‡in − Pin,eq 3 1 . = 1 d r 1 2 εop εop − εst
(16.44)
16.4 Activation Energy The free energy G‡el is a functional of the polarization fluctuation δP = P‡in − Pin .
(16.45)
If we minimize the free energy δG‡el = 0, we find that δP = 0,
G‡el = Geq el .
(16.46)
(16.47)
16.4 Activation Energy
181
Marcus asks now, for which polarizations P‡in of the two states I and II become isoenergetic, i.e. ΔU ‡ = UII‡ − UI‡ = 0. (16.48) We want to rephrase the last condition in terms of the free energy
We conclude that
G = U − T S + pV ≈ U − T S.
(16.49)
ΔG‡ = G‡II − G‡I = −T ΔS ‡ .
(16.50)
Here (Fig. 16.4) G‡I
=
1 εop
G‡II =
1 εop
The quantity
1 − 1 −
1 εst
1 εst
1 2 1 2
3
d r
d3 r
δPI εop
2
δPII εop
eq ∞ + Geq el,I (R) − Gel,I (∞) + GI ,
2
eq ∞ + Geq el,II (R) − Gel,II (∞) + GII .
∞ ΔG∞ = G∞ II − GI = ΔGvac + ΔGsolv,∞
(16.51)
(16.52)
is the difference of free energies for an ideal solution of A and B on the one side and A+ and B− on the other side at infinite distance R → ∞. Usually, it is determined experimentally. It contains all the information on the internal structure of the ions. The entropy difference is the difference of internal entropies of the reactants since the contribution from the polarization drops out due to P‡in,I = P‡in,II . Usually, this entropy difference is small. The electrostatic energies Geq el,I/II are composed of the polarization contribution (the solvation free energy) and the mutual interaction of the reactants which will be considered later. The reaction free energy is eq ΔGeq = ΔG∞ + ΔGeq el (R) − ΔGel (∞).
(16.53)
It can be easily seen that the condition ΔG‡ = 0 will be fulfilled for infinitely many polarizations P‡in . We will therefore look for that one which minimizes simultaneously the free energy of the transition state G‡I,II . We introduce a Lagrange parameter m to impose the isoenergeticity condition: 6 7 δ G‡I (P‡in ) + m(G‡II (P‡in ) − G‡I (P‡in )) = 0. (16.54) The variation is with respect to P‡in or, equivalently with respect to δPI for fixed change of the inertial polarization: ΔPin = δPI − δPII = (P‡in − PI,in ) − (P‡in − PII,in ) = PII,in − PI,in . (16.55)
A+
A,II Gvac
AI A Gvac
B,II Gvac
B−
B,I B Gvac
ΔGvac
R=∞
∞
ΔGsolv,II
∞
∞ ΔGsolv,I
ΔG∞
B
− GB,II ∞ B
B,I
G∞
eq
eq Geq II (R) – GII (∞)
eq
GI (R) – GI (∞)
Fig. 16.4. Free energy differences
A+ GA,II ∞
AI A G∞
R=∞
A+
A
R
B−
ΔG0
B
Geq II (R)
eq
GI (R)
¹
ΔG#
¹
GII(R,P )
¹
¹
GI(R,P )
182 16 Marcus Theory of Electron Transfer
16.4 Activation Energy
183
G
eq GI
# GI/II
δ PI# GIIeq
δ PII #
Pin,I Pin
Pin,II
Pin
ΔPin m
1−m
Fig. 16.5. Variation of the nonequilibrium polarization
We have to solve 6 7 δ (1 − m)G‡I (PI,in + δPI ) + mG‡II (PII,in − ΔPin + δPI,in ) = 0.
(16.56)
Due to the quadratic dependence of the free energies, we find the condition [(1 − m)δPI + m(δPI − ΔPin )] = 0
(16.57)
and hence the solution (Fig. 16.5) δP‡I = mΔPin .
(16.58)
The Lagrange parameter m is determined by inserting this solution into the isoenergicity condition: − T ΔS = ΔG‡ (P‡in ) = G‡II (Pin,II + (m − 1)ΔPin ) − G‡I (Pin,I + mΔPin ) 2 1 ΔPin 1 3 d r = ΔGeq + ((m − 1)2 − m2 ) 1 1 2 εop εop − εst = ΔGeq + (1 − 2m)λ,
(16.59)
where λ = GII (Pin,II − ΔPin ) − GII (Pin,II ) 2 1 1 1 ΔPin 1 1 2 3 = 1 = − d r d3 r (ΔD) 1 2 ε ε ε 2 − op op st εop εst (16.60)
184
16 Marcus Theory of Electron Transfer
is the reorganization energy. Solving for m we find m=
ΔGeq + T ΔS ‡ + λ . 2λ
(16.61)
The free energy of activation is given by 2 ΔGa = G‡I − Geq I = m
= m2 λ =
1 εop
1 −
1 εst
1 2
d3 r
ΔPin εop
2
(ΔGeq + T ΔS ‡ + λ)2 . 4λ
Finally, we insert this into the TST rate expression to find ωN ωN (ΔGeq + T ΔS ‡ + λ)2 κet e−ΔGa /kB T = κet exp − . ket = 2π 2π 4λkB T
(16.62)
(16.63)
16.5 Simple Model Systems We will now calculate reorganization energy and free energy differences explicitly for a model system consisting of two spherically reactants. We assume that the charge distribution of the reactants is not influenced by mutual Coulombic interaction or changes of the medium polarization (we apply the point-charge approximation again) (Fig. 16.6). We have already calculated the solvation energy at large distances 43 1 Q2 Q2B 1 (1) Welec = − A − (16.64) − 8πaA 8πaB ε0 εst from which we find the polarization energy at large distances 2 QA 1 Q2B . Geq (R = ∞) = + el 8πεst aA aB
(16.65)
We now consider a finite distance R. If we take the origin of the coordinate system to coincide with the center of the reactant A, then the dielectric displacement is r r−R 1 (16.66) DI/II (r) = QA,I/II 3 + QB,I/II 4π r |r − R|3 aA
aB R
Fig. 16.6. Simple donor–acceptor geometry
16.5 Simple Model Systems
185
and the polarization free energy Geq el,I/II =
d3 r
D2 2εst
(16.67)
contains two types of integrals. The first is 3 d r . I1 = r4
(16.68)
The range of this integral is outside the spheres A and B. As the integrand falls off rapidly with distance, we may extend the integral over the volume of the other sphere. Introducing polar coordinates, we find ∞ dr 4π 4π 2 = I1 ≈ . (16.69) r a a The second kind of integral is I2 =
d3 r
r(r − R) . r3 |r − R|3
(16.70)
Again we extend the integration range and use polar coordinates π ∞ r2 − rR cos θ I2 ≈ 2π sin θ dθ r2 dr 3 2 . r (r + R2 − 2rR cos θ)3/2 0 0
(16.71)
The integral over r gives ∞ ∞ 1 r − R cos θ = 1 (16.72) dr = − √ 2 2 3/2 2 2 R (r + R − 2rR cos θ) r + R − 2rR cos θ 0 0 and the integral over θ then gives π 2π sin θ 4π I2 = dθ = . R R 0
(16.73)
Together, we have the polarization free energy 2 1 2QA QB QA 1 1 Q2B 3 2 + (16.74) d Geq = rD = + el 2εst 8πεst aA aB 8πεst R QA QB = Geq (∞) + 4πεst R and the difference Geq el,II
−
Geq el,I
e2 = 8πεst
1 + 2ZA 1 − 2ZB ZB − ZA − 1 + +2 aA aB R
.
(16.75)
186
16 Marcus Theory of Electron Transfer
Similarly, the reorganization energy is 1 1 1 − d3 r (ΔD)2 λ= 2 εop εst ΔQ2A 1 1 1 ΔQ2B 2ΔQA ΔQB . = − + + 8π εop εst aA aB R Now since ΔQA = −ΔQB = e, we can write this as 1 1 1 1 2 e2 . − + − λ= 8π εop εst aA aB R
(16.76)
(16.77)
We consider here the most important special cases. 16.5.1 Charge Separation If an ion pair is created during the ET reaction (QA,I = QB,I = 0, QA,II = e, QB,II = −e), we have Q2 2QA QB Q2A 0 (I) + B+ . (16.78) = 1 1 2 + − (II) aA aB R aA aB R The free energy is
Geq el,II
Geq el,I = 0, 1 e2 1 2 e2 = Geq = + − el,II (∞) − 8πεst aA aB R 4πεst R
(16.79)
and the activation energy becomes ΔGa =
(ΔG∞ + T ΔS ‡ + λ −
e2 2 4πεst R )
4λ
.
(16.80)
If the process is photoinduced, the free energy at large distance can be expressed in terms of the ionization potential of the donor, the electron affinity of the acceptor, and the energy of the optical transition (Fig. 16.7) ΔG∞ = EA(A) − (IP(D) − ω).
(16.81)
16.5.2 Charge Shift For a charge shift reaction of the type QA,I = QB,II = −e, QB,I = QA,II = 0, Geq I =
e2 , 8πεst aA
Geq II =
e2 , 8πεst aB
(16.82)
16.6 The Energy Gap as the Reaction Coordinate
187
ΔG D+ IP(D*) D* IP(D) hω
A EA(A) A−
D
Fig. 16.7. Photoinduced charge separation. At large distance, the energy difference ΔG∞ is given by EA(A) − IP(D∗ ) = EA(A) − (IP(D) − ω)
and for the second type QA,I = QB,II = 0, QB,I = QA,II = e, Geq I =
e2 , 8πεst aB
Geq II =
e2 . 8πεst aA
(16.83)
The activation energy now does not contain a Coulombic term ΔGa =
(ΔG∞ + T ΔS ‡ + λ)2 . 4λ
(16.84)
16.6 The Energy Gap as the Reaction Coordinate We want to construct one single reaction coordinate which summarizes the polarization fluctuations. To that end, we consider the energy gap ΔU ‡ = ΔG‡ + T ΔS ‡ ≈ G‡II (P‡in ) − G‡I (P‡in ) = G‡II (PII,in − ΔPin + δPI,in ) − G‡I (PI,in + δPI,in ) 2 1 1 (δPI,in − ΔPin ) 3 d r = ΔGeq + 1 1 2 εop εop − εst 2 1 δPI,in 1 d3 r − 1 1 2 εop − εop εst 1 ΔP2in − 2δPI,in ΔPin 1 = ΔGeq + 1 d3 r 1 2 ε2op εop − εst = ΔGeq + λ − δU (t)
(16.85)
with the thermally fluctuating energy δU (t) =
1 εop
1 −
1 εst
d3 r
δPI,in ΔPin . ε2op
(16.86)
188
16 Marcus Theory of Electron Transfer ΔU
λ I
Geq II
Geq
λ
ΔGeq 0
2λ
δU
Fig. 16.8. Energy gap as reaction coordinate
In the equilibrium of the reactants, δU = 0,
ΔU = ΔGeq + λ (I)
(16.87)
and in the equilibrium of the products (Fig. 16.8) δU = 2λ,
ΔU = ΔGeq − λ (II).
(16.88)
The free energies now take the form 1 (δU )2 , 4λ 1 GII = Geq (δU − 2λ)2 . II + 4λ GI = Geq I +
(16.89) (16.90)
The free energy fluctuations in the reactant state are given by GI − Geq I =
1 δU 2 . 4λ
(16.91)
If we identify this with the thermal average kB T /2 of a one-dimensional harmonic motion, we have δU 2 = 2kB T λ.
(16.92)
The fluctuations of the energy gap can for instance be investigated with molecular dynamics methods. They can be modeled by diffusive motion in a harmonic potential U (Q) = Q2 /4λ: 2 ∂ ∂U 1 ∂ ∂ W (Q, t) = −DE W − κδ(Q − λ)W, (16.93) W + ∂t ∂Q2 kB T ∂Q ∂Q where the diffusion constant is DE =
2kB T λ τL
and τL is the longitudinal relaxation time of the medium.
(16.94)
16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer
189
16.7 Inner-Shell Reorganization Intramolecular modes can also contribute to the reorganization. Low-frequency modes (ω < kB T ) which can be treated classically can be taken into account by adding an inner-shell contribution to the reorganization energies: λ = λouter + λinner .
(16.95)
Often C–C stretching modes at ω ≈ 1,400 cm−1 change their equilibrium positions during the ET process. They can accept the excess energy in the inverted region and reduce the activation energy. Often the Franck–Condon progression of one dominant mode ωv is used to analyze experimental data in the inverted region. Since ωv kB T , thermal occupation is negligible. For a harmonic oscillator, the relative intensity of the 0 → n transition3 is FC(0, n) =
gr2n −gr2 e n!
(16.96)
and the sum over all transitions gives the rate expression ∞ 2n ω (ΔG + ER + nωv )2 −g2 g k= e κel exp − , 2π n! 4ER kB T n=0
(16.97)
where g 2 ωv is the partial reorganization energy of the stretching mode (Figs. 16.9 and 16.10).
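Equation (16.97) is straightforward to evaluate numerically. The following Python sketch uses the values quoted for Fig. 16.10 (ℏω_v = 0.2 eV, g = 1, λ = E_R = 0.5 eV) and an arbitrary prefactor, so only relative rates are meaningful:

import numpy as np
from math import factorial

# Electron-transfer rate with one high-frequency accepting mode, Eq. (16.97),
# for the values of Fig. 16.10. The overall prefactor is set to 1 (arbitrary).
kBT = 0.025     # eV
hw_v = 0.2      # eV, energy of the accepting mode
g2 = 1.0        # g^2
lam = 0.5       # eV, classical reorganization energy

def k_rel(dG, n_max=20):
    s = 0.0
    for n in range(n_max):
        fc = g2**n / factorial(n) * np.exp(-g2)                 # FC factor (16.96)
        s += fc * np.exp(-(dG + lam + n * hw_v)**2 / (4 * lam * kBT))
    return s

for dG in (-0.3, -0.5, -1.0, -1.5, -2.0):
    print(f"dG = {dG:+.1f} eV   relative rate = {k_rel(dG):.3e}")

As in Fig. 16.10, the high-frequency progression flattens the fall-off of the rate in the inverted region, since quanta of the accepting mode take up the excess energy.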
16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer The transmission coefficient κ will be considered later in more detail. For adiabatic electron transfer, it approaches unity. For nonadiabatic transfer (small crossing probability), it depends on the electronic coupling matrix element Vet . The quantum mechanical rate expression for nonadiabatic transfer becomes in the high-temperature limit (ω kB T for all coupling modes) 2πVet2 e−(ΔG+λ) /4λkB T √ . k= 4πλkB T 2
3
A more detailed discussion follows later.
(16.98)
190
16 Marcus Theory of Electron Transfer activated region
inverted region n=0
3 2
3 2
1
1
n=0
n’ = 0
n’ = 0
Fig. 16.9. Accepting mode. In the inverted region, vibrations can accept the excess energy
electron transfer rate k
1
0.1
0.01
0
0.5
1 ΔG
1.5
2
Fig. 16.10. Inclusion of a high-frequency mode progression. Equation (16.97) is evaluated for typical values of ωv = 0.2 eV, g = 1, and λ = 0.5 eV (broken curve). The full curves show the contributions of different 0 → n transitions
Problems 16.1. Marcus Cross Relation (a) Calculate the activation energy for the self-exchange reaction kAA A− + A →
A + A−
in the harmonic model GR (Q) =
a 2 Q , 2
GP (Q) =
a (Q − Q1 )2 . 2
16.8 The Transmission Coefficient for Nonadiabatic Electron Transfer
191
(b) Show that for the cross reaction
A+D
k →
A− + D+
the reorganization energy is given by the average of the reorganization energies for the two self-exchange reactions: λ=
λAA + λDD 2
and the rate k can be expressed as k = kAA kDD Keq f where kAA and kDD are the rate constants of the self-exchange reactions, Keq is the equilibrium constant of the cross reaction and f is a factor, which is usually close to unity.
Part VI
Elementary Photophysics
17 Molecular States
Biomolecules have a large number of vibrational degrees of freedom, which are more or less strongly coupled to transitions between different electronic states. In this chapter, we discuss the vibronic states on the basis of the Born–Oppenheimer separation and the remaining nonadiabatic coupling of the adiabatic states. In the following, we do not consider translational or rotational motion which is essentially hindered for Biomolecules in the condensed phase.
17.1 Born–Oppenheimer Separation In molecular physics, usually the Born–Oppenheimer separation of electronic (r) and nuclear motion (Q) is used. The molecular Hamiltonian (without considering spin or relativistic effects) can be written as H = TN (Q) + Tel (r) + VN (Q) + VeN (Q, r) + Vee (r)
(17.1)
with kinetic energy TN =
−
j
2 ∂ 2 , 2mj ∂Q2j
Tel =
−
2 ∂ 2 2me ∂rk2
(17.2)
and Coulombic interaction V = VN (Q) + VeN (Q, r) + Vee (r) Zj Zj e2 Zj e2 e2 = − + . (17.3) 4πε|Rj − Rj | 4πε|Rj − rk | 4πε|rk − rk | j,j
j,k
k,k
The Born–Oppenheimer wave function is a product ψ(r, Q)χ(Q),
(17.4)
196
17 Molecular States
where the electronic part depends parametrically on the nuclear coordinates. The nuclear masses are much larger than the electronic mass. Therefore, the kinetic energy of the nuclei is neglected for the electronic motion. The electronic wave function is obtained approximately, from the eigenvalue problem, (Tel + V )ψs (r, Q) = Es (Q)ψs (r, Q),
(17.5)
which has to be solved separately for each set of nuclear coordinates. Using now the Born–Oppenheimer product ansatz, we have Hψs (r, Q)χs (Q) = TN ψs (r, Q)χs (Q) + Es (Q)ψs (r, Q)χs (Q) = ψs (r, Q)(TN + Es (Q))χs (Q) 2 ∂ 2 ψs (r, Q) ∂χs (Q) ∂ψs χs (Q) . (17.6) − + 2mj ∂Q2j ∂Qj ∂Qj j The sum constitutes the so-called nonadiabatic interaction Vnad . If it is neglected in lowest order, the nuclear wave function is a solution of the eigenvalue problem: (TN + Es (Q))χs (Q) = Eχs (Q). (17.7) Often the potential energy Es (Q) can be expanded around the equilibrium configuration Q0s 1 Es (Q) = Es (Q0s ) +
1 ∂2 (Qj − Qj0s )(Qj − Qj 0s ) Es + · · · (17.8) 2 ∂Qj ∂Qj j,j
Within the harmonic approximation, the matrix of mass-weighted second derivatives Djj = √ is diagonalized
∂2 1 mj mj ∂Qj ∂Qj
Djj urj = ωr2 urj
(17.9)
(17.10)
j
and the nuclear motion becomes a superposition of independent normal mode vibrations with amplitudes qr and frequencies ωr 2 1 Qj − Qj0s = √ qr urj . mj r
(17.11)
After transformation to normal mode coordinates, the eigenvalue problem (17.7) decouples according to 1 2
which will be different for different electronic states s in general. These quantities will be different for different electronic states s of course.
17.2 Nonadiabatic Interaction
Es + TN = Es(0) +
r
ωr2 2 2 ∂ 2 q − 2 r 2 ∂qr2
and the nuclear wave function factorizes χs = χs,r,n(r) (qr ).
197
(17.12)
(17.13)
r
The total energy of a molecular state is 1 . Es0 + ωr(s) n(r) + 2 r
(17.14)
The vibrations form a very dense manifold of states for biological molecules. This is often schematically represented in a Jablonsky diagram3 (Figs. 17.1– 17.4)
17.2 Nonadiabatic Interaction The nonadiabatic interaction couples the adiabatic electronic states.4 If we expand the wave function as a linear combination of the adiabatic wave functions, which form a complete basis at each configuration Q Ψ= Cs ψs (r, Q)χs (Q), (17.15) s
we have to consider the nonadiabatic matrix elements5
E
. . . . . .
S2
. . . T2
S1
. . . T1
. . . S0
Fig. 17.1. Jablonsky diagram
3 4 5
which usually also shows electronic states of higher multiplicity. Sometimes called channels. Without a magnetic field, the molecular wave functions can be chosen real-valued.
198
17 Molecular States CH3
O
CH3
H 1
CH3
2 I
N
H CH2 H CH2 O
O
N III
CH3
V
H O
H H
H N IV
CH3
phytyl
N
H
H
CH3
3 H II 4
O O CH3
Fig. 17.2. Bacteriopheophytine-b
number of normal modes
300
0.1 /cm–1
250 200 150 100 50 0
0
1000
2000
3000
4000
normal mode frequency (cm–1)
DOS (1/cm–1)
Fig. 17.3. Distribution of normal mode frequencies. Normal modes for bacteriopheophytine-b were calculated quantum chemically. The phytyl tail was replaced by a methyl group. The cumulative frequency distribution (solid ) is almost linear at small frequencies corresponding to a constant density of 0.1 cm−1 normal modes (dashed) 10
20
10
16
10
12
10
8
10
4
0
10 0
1000
2000 3000 frequency (cm–1)
4000
Fig. 17.4. Density of vibrational states. The density of vibrational states including overtones and combination tones was evaluated for bacteriopheophytine-b with a simple counting algorithm
17.2 Nonadiabatic Interaction
199
nad Vs,s =
dr dQ χs (Q)ψs (r, Q) Vnad ψs (r, Q)χs (Q)
2 ∂2 dQ χs (Q) χs (Q) dr ψs (r, Q) =− ψs (r, Q) 2mj ∂Q2j j 2 ∂ ∂ dQ χs (Q) χs (Q) dr ψs (r, Q) ψs (r, Q) . − 2mj ∂Qj ∂Qj (17.16)
The first term is generally small. If we evaluate the second at an equilibrium configuration Q(0) of the state s and neglect the dependence on Q6 we have 2 ∂ nad,el nad χs (Q) , (17.17) Vs,s dQ χs (Q) ≈ −Vs,s 2mj ∂Qj ∂ nad,el (0) (0) Vs,s = dr ψs (r, Q ) ψs (r, Q ) . (17.18) ∂Qj Within the harmonic approximation, the gradient operator can be expressed as
∂ ωr + 1 r ∂ r (b − br ) = uj = uj (17.19) √ mj ∂Qj ∂qr 2 r r r and the nonadiabatic matrix element becomes
2 ωr nad,el χs r n(r ) Vsnad dq1 dq2 · · · ,n(r ),s,n(r) = −Vs,s 2 2 r
⎛ ×⎝
⎞
χs,r ,n(r ) ⎠
n(r) + 1χs,r,n(r)+1 +
n(r)χs,r,n(r)−1 . (17.20)
r =r
This expression simplifies considerably if the mixing of normal modes in the state s can be neglected (the so-called parallel mode approximation). Then the overlap integral factorizes into a product of Franck–Condon factors: ⎞ ⎛
2 ωr ⎝ nad,el FCsr s (n , n)⎠ Vsnad ,n(r ),s,n (r) = −Vs,s 2 2 r =r
×
√
with
FCrs s (nr nr ) = 6
nr FCsr s (nr nr − 1)
(17.21)
dqr χs r,n (r) (qr ) χs,r,n(r)(qr ).
(17.22)
nr + 1FCsr s (nr , nr + 1) +
√
This is known as the Condon approximation.
18 Optical Transitions
We consider allowed optical transitions between two electronic states of a molecule, for instance the ground state (g) and an excited singlet state (e). Within the adiabatic approximation (17.13), we take into account transitions between the manifolds of Born–Oppenheimer vibronic states ψg (r; Q)χg (Q) → ψe (r; Q)χe (Q).
(18.1)
Assuming that the occupation of initial states is given by a canonical distribution P (χg ), the probability for dipole transitions in an oscillating external field E0 exp(iω0 t) (18.2) is given by the golden rule expression k=
2π 2 P (χg ) |ψg χg |E0 er| ψe χe | δ(ωeg − ω0 ). χ
(18.3)
g
18.1 Dipole Transitions in the Condon Approximation The matrix element of the dipole operator er can be simplified by performing the integration over the electronic coordinates: ∗ drψg∗ (r; Q)erψe (r; Q) dQχg (Q)χe (Q) =
dQχ∗g (Q)χe (Q) Mge (Q).
(18.4)
The dipole moment function Meg (Q) is expanded as a series with respect to the nuclear coordinates around the equilibrium configuration Meg (Q) = Meg (Qeq ) +
∂Meg (Q − Qeq ) + · · · ∂Q
(18.5)
202
18 Optical Transitions
If its equilibrium value does not vanish for symmetry reasons, the dipole moment can be approximated by neglecting all higher-order terms (this is known as Condon approximation) Meg (Q) ≈ μeg = Meg (Qeq ).
(18.6)
The transition rate becomes a product of an electronic factor and an overlap integral of the nuclear wave functions which is known as Franck–Condon factor (in fact, the Franck–Condon-weighted density of states) k=
2π 2 |E0 μeg |2 P (χg ) |χg (Q)|χe (Q)| δ(ωeg − ω0 ) χ ,χ
(18.7)
2π |E0 μeg |2 FCF(ω0 ).
(18.8)
g
=
e
18.2 Time Correlation Function Formalism In the following, we rewrite the transition rate within the time correlation function (TCF) formalism. The unperturbed molecular Hamiltonian is H0 = |ψe (He + ω00 )ψe | + |ψg Hg ψg |,
(18.9)
where we introduced the Hamiltonians for the nuclear motion in the two states: Hg(e) = TN + Eg(e) (Q)
(18.10)
with Hg(e) χg(e) = ωg(e) χg(e)
(18.11) 1
and ω00 is the pure electronic, so-called “0–0” transition energy. The energy of the transition |ψg χg → |ψe χe (18.12) is ωeg = ω00 + ωe − ωg .
(18.13)
The interaction between molecule and radiation field is within the Condon approximation2 H = |ψe (−E0 μeg )ψg | + h.c. (18.14) After replacing the δ-function by a Fourier integral ∞ 1 eiωt dt, δ(ω) = 2π −∞ 1 2
Including the zero-point energy difference. We consider here only the nondiagonal term.
(18.15)
18.2 Time Correlation Function Formalism
203
(18.7) becomes |E0 μeg |2 k= 2
dt
P (χg )|χg |χe |2 ei(ωe −ωg +ω00 −ω0 )t ,
(18.16)
χg χe
which can be written as a thermal average over the vibrations of the ground state e−βHg |E0 μeg |2 −iωg t dt e |χ k= χ eiωe t χe |χg ei(ω00 −ω0 )t g e 2 Q g χg χe 8 9 2 |E0 μeg | = (18.17) dt e−i(ω0 −ω00 )t e−itHg / eitHe / 2 g |E0 μeg |2 = (18.18) dt e−i(ω0 −ω00 )t F (t) 2 with the correlation function
9 8 F (t) = e−itHg / eitHe /
(18.19) g
which will be evaluated for some simple models in the following chapter. From comparison with (18.8), we find that the Franck–Condon-weighted density of states is given by 1 (18.20) FCF(ω) = dte−i(ω0 −ω0 0) F (t). 2π
Problems 18.1. Absorption Spectrum Within the Born–Oppenheimer approximation, the rate for optical transitions of a molecule from its ground state to an electronically excited state is proportional to Pi |i|μ|f |2 δ(+ωf − ωi − ω). α(ω) = i,f
Show that this golden rule expression can be formulated as the Fourier integral of the dipole moment correlation function 1 dt eiωt μ(0)μ(t) 2π and that within the Condon approximation, this reduces to the correlation function of the nuclear motion.
19 The Displaced Harmonic Oscillator Model
We would like now to discuss a more specific model for the transition between the vibrational manifolds. We apply the harmonic approximation for the nuclear motion which is described by Hamiltonians ωr(g,e) br(g,e)+ br(g,e) (19.1) Hg,e = r
and discuss the model of displaced oscillators.
19.1 The Time Correlation Function in the Displaced Harmonic Oscillator Approximation In a very simplified model, we neglect mixing of the normal modes (parallel mode approximation) and frequency changes (Fig. 19.1). This leads to the “displaced harmonic oscillator” model (DHO) with ωr b+ (19.2) Hg = r br , r
He =
ωr (b+ r + gr )(br + gr ) = Hg +
r
gr ωr (b†r + br ) +
r
gr2 ωr .
r
(19.3) In this approximation, the normal mode frequencies and eigenvectors are the same for both electronic states but the equilibrium positions are shifted relative to each other. The correlation function (18.19) 9
8 F (t) = e−(it/)Hg e(it/)He = Q−1 tr e−Hg /kB T e−(it/)Hg e(it/)He (19.4) g
with
Q = tr(e−Hg /kB T )
(19.5)
206
19 The Displaced Harmonic Oscillator Model
hω00
Fig. 19.1. Displaced oscillator model
factorizes in the parallel mode approximation: F (t) = Fr (t)
(19.6)
r
−ωr b†r br /kB T −itωr b†r br itωr (b†r +gr )(br +gr ) e e Fr (t) = Q−1 r tr e 9 8 † † = e−itωr br br eitωr (br +gr )(br +gr ) . As shown in the appendix, this can be evaluated as & ' Fr (t) = exp gr2 (eiωt − 1)(nr + 1) + (e−iωt − 1)nr
(19.7)
(19.8)
with the average phonon numbers nr =
1 eωr /kB t
−1
.
(19.9)
Expression (19.7) contains phonon absorption (positive frequencies) and emission processes (negative frequencies). We discuss two important limiting cases.
19.2 High-Frequency Modes In the limit ωr kT , the average phonon number 1
nr =
eω/kB T
−1
is small and the correlation function becomes & ' Fr (t) → exp gr2 (eiωt − 1) .
(19.10)
(19.11)
Expansion of Fr (t) as a power series of gr2 gives Fr (t) =
g 2j r
j
j!
e−gr ei(jωr )t 2
(19.12)
19.3 The Short-Time Approximation
a
207
b 0.80
f (hΔω)
0.60
0.40
0.20
0.00
– 400
0
400 – 400 Δhω (cm–1)
0
400
Fig. 19.2. Progression of a low-frequency mode. The Fourier transform of (19.8) is shown for typical values of ω = 50 cm−1 , 1, Er = 50 cm−1 and (a) kT = 10 cm−1 , (b) kT = 200 cm−1 . A small damping was introduced to obtain finite linewidths
which corresponds to a progression of transitions \(0 \to j\hbar\omega_r\) with Franck–Condon factors (Fig. 19.2)
\[ FC(0,j)=\frac{g_r^{2j}}{j!}\, e^{-g_r^2}. \quad (19.13) \]
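The Poisson form of the Franck–Condon factors (19.13) is easy to evaluate numerically. The following minimal sketch (not from the book; the value g_r² = 1 is only illustrative) prints the relative intensities of the progression:

```python
from math import factorial, exp

# Minimal sketch: Poisson Franck-Condon factors (19.13) for a single
# high-frequency mode with coupling g_r^2 = 1 (illustrative value).
g2 = 1.0
fc = [g2**j / factorial(j) * exp(-g2) for j in range(8)]
for j, f in enumerate(fc):
    print("0 -> %d : FC = %.4f" % (j, f))
print("sum = %.4f" % sum(fc))   # the factors sum to (almost) one
```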
19.3 The Short-Time Approximation

To obtain a smooth approximation to the spectrum
\[ f(\Delta\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}t\, e^{-i\Delta\omega t}\prod_r F_r(t), \quad (19.14) \]
we expand the oscillating functions
\[ e^{i\omega_r t}=1+i\omega_r t-\frac{\omega_r^2 t^2}{2}+\cdots \quad (19.15) \]
and find
\[ f(\Delta\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}t\, e^{-i\Delta\omega t}\exp\left\{\sum_r g_r^2\, i\omega_r t-\sum_r g_r^2(2\bar n_r+1)\frac{\omega_r^2 t^2}{2}\right\} \]
\[ =\frac{1}{2\pi}\sqrt{\frac{2\pi}{\sum_r g_r^2\omega_r^2(2\bar n_r+1)}}\, \exp\left\{-\frac{\left(\Delta\omega-\sum_r g_r^2\omega_r\right)^2}{2\sum_r g_r^2\omega_r^2(2\bar n_r+1)}\right\}. \quad (19.16) \]
Fig. 19.3. Gaussian envelope
With the definition of the reorganization energy
\[ E_r=\sum_r g_r^2\hbar\omega_r \quad (19.17) \]
and the width
\[ \Delta^2=\sum_r g_r^2(\hbar\omega_r)^2(2\bar n_r+1), \quad (19.18) \]
it takes the form
\[ f(\hbar\Delta\omega)=\frac{1}{\Delta\sqrt{2\pi}}\exp\left\{-\frac{(\hbar\Delta\omega-E_r)^2}{2\Delta^2}\right\}. \quad (19.19) \]
In the high-temperature limit (all modes can be treated classically), the width becomes (Fig. 19.3)
\[ \Delta^2\to \sum_r 2g_r^2\hbar\omega_r\, k_BT=2E_r k_BT \quad (19.20) \]
and
\[ f(\hbar\Delta\omega)\to \frac{1}{\sqrt{4\pi E_r k_BT}}\exp\left\{-\frac{(\hbar\Delta\omega-E_r)^2}{4E_r k_BT}\right\}. \quad (19.21) \]
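The quality of the short-time (Gaussian) approximation can be checked numerically. The sketch below (an assumption-laden illustration, not from the book: a single mode, ℏ = 1, energies in cm⁻¹, an artificial damping for finite linewidths) Fourier transforms the exact correlation function (19.8) and compares the resulting width with (19.17) and (19.18):

```python
import numpy as np

# Minimal sketch: lineshape of the one-mode DHO model from (19.8)/(19.14)
# versus the short-time Gaussian (19.19). hbar = 1, energies in cm^-1.
w_r, g2, kT = 50.0, 1.0, 200.0            # mode frequency, g_r^2, temperature
n_r = 1.0 / (np.exp(w_r / kT) - 1.0)      # average phonon number (19.9)
gamma = 5.0                               # small damping -> finite linewidths

t = np.linspace(-2.0, 2.0, 8192)          # time grid in units of 1/cm^-1
dt = t[1] - t[0]
F = np.exp(g2 * ((np.exp(1j * w_r * t) - 1.0) * (n_r + 1.0)
                 + (np.exp(-1j * w_r * t) - 1.0) * n_r)) * np.exp(-gamma * np.abs(t))

dw = np.arange(-400.0, 401.0, 1.0)        # energy axis of the lineshape
f = np.array([(np.exp(-1j * w * t) * F).sum().real * dt for w in dw]) / (2.0 * np.pi)

E_r = g2 * w_r                            # reorganization energy (19.17)
Delta2 = g2 * w_r**2 * (2.0 * n_r + 1.0)  # squared width (19.18)
f_gauss = np.exp(-(dw - E_r)**2 / (2.0 * Delta2)) / np.sqrt(2.0 * np.pi * Delta2)

print("E_r = %.1f cm^-1, Delta = %.1f cm^-1" % (E_r, Delta2**0.5))
```

At kT = 200 cm⁻¹ the exact spectrum is already close to the smooth envelope, while at low temperature the discrete progression of Fig. 19.2 reappears.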
20 Spectral Diffusion
Electronic excitation energies of a chromophore within a protein environment are not static quantities but fluctuate in time. This can be directly observed with the methods of single molecule spectroscopy. If instead an ensemble average is measured, then the relative time scales of measurement and fluctuations determine if an inhomogeneous distribution is observed or if the fluctuations lead to a homogeneous broadening. In this chapter, we discuss simple models [54–56], which are capable of describing the transition between these two limiting cases.
20.1 Dephasing

We study a semiclassical model of a two-state system. Due to interaction with the environment, the energies of the two states are fluctuating quantities. The system is described by a time-dependent Hamiltonian
\[ H_0=\begin{pmatrix} E_1(t) & 0\\ 0 & E_2(t)\end{pmatrix} \quad (20.1) \]
and the time evolution of the unperturbed system
\[ \psi(t)=\exp\left\{\frac{1}{i\hbar}\int_0^t H_0\,\mathrm{d}t'\right\}\psi(0)=U_0(t)\,\psi(0) \quad (20.2) \]
can be described with the help of a propagator
\[ U_0(t)=\exp\left\{\frac{1}{i\hbar}\int_0^t H_0\,\mathrm{d}t'\right\} =\begin{pmatrix} e^{(1/i\hbar)\int_0^t E_1(t')\mathrm{d}t'} & 0\\ 0 & e^{(1/i\hbar)\int_0^t E_2(t')\mathrm{d}t'}\end{pmatrix}. \quad (20.3) \]
Optical transitions are induced by the interaction operator
\[ H'=\begin{pmatrix} 0 & \mu e^{i\omega t}\\ \mu e^{-i\omega t} & 0\end{pmatrix}. \quad (20.4) \]
We make the following ansatz
\[ \psi(t)=U_0(t)\,\phi(t) \quad (20.5) \]
and find
\[ (H_0+H')\psi=i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\psi = H_0 U_0(t)\phi+i\hbar U_0(t)\frac{\mathrm{d}}{\mathrm{d}t}\phi. \quad (20.6) \]
Hence
\[ i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\phi=U_0(-t)\,H'\,U_0(t)\,\phi. \quad (20.7) \]
The product of the operators gives
\[ U_0(-t)H'U_0(t)=\begin{pmatrix} 0 & \mu\, e^{i\omega t-(i/\hbar)\int_0^t(E_2-E_1)\mathrm{d}t'}\\ \mu\, e^{-i\omega t+(i/\hbar)\int_0^t(E_2-E_1)\mathrm{d}t'} & 0\end{pmatrix} \quad (20.8) \]
and (20.7) can be written in the form
\[ i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\phi=\begin{pmatrix} 0 & \mu(t)\,e^{i\omega t}\\ \mu(t)^{*}\,e^{-i\omega t} & 0\end{pmatrix}\phi, \quad (20.9) \]
where the time-dependent dipole moment \(\mu(t)\) obeys the differential equation
\[ \frac{\mathrm{d}}{\mathrm{d}t}\mu(t)=\frac{1}{i\hbar}\,(E_2(t)-E_1(t))\,\mu(t) =-i\omega_{21}(t)\,\mu(t)=-i\,(\omega_{21}+\delta\omega(t))\,\mu(t). \quad (20.10) \]
Starting from the initial condition
\[ \phi(t=0)=\begin{pmatrix}1\\0\end{pmatrix}, \quad (20.11) \]
we find in lowest order
\[ \phi(t)=\begin{pmatrix}1\\ \frac{1}{i\hbar}\int_0^t \mathrm{d}t'\, e^{-i\omega t'}\,\mu^{*}(t')\end{pmatrix} \quad (20.12) \]
and the transition probability is given by
\[ P(t)=\frac{1}{\hbar^2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\, e^{i\omega(t'-t'')}\,\mu(t')\,\mu^{*}(t'') =\frac{|\mu_0|^2}{\hbar^2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\, e^{i\omega(t'-t'')}\, e^{-i\omega_{21}(t'-t'')}\, e^{-i\int_{t''}^{t'}\delta\omega(t''')\mathrm{d}t'''}. \quad (20.13) \]
The ensemble average gives for stationary fluctuations
\[ \langle P(t)\rangle=\frac{|\mu_0|^2}{\hbar^2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\, e^{i\omega(t'-t'')}\, e^{-i\omega_{21}(t'-t'')}\, F(t'-t'') \quad (20.14) \]
with the dephasing function
\[ F(t)=\left\langle \exp\left\{-i\int_0^t \delta\omega(t')\,\mathrm{d}t'\right\}\right\rangle. \quad (20.15) \]
With the help of the Fourier transformation
\[ F(t)=\int_{-\infty}^{\infty} e^{-i\omega' t}\,\hat F(\omega')\,\mathrm{d}\omega', \]
the transition probability becomes
\[ \langle P(t)\rangle=\frac{|\mu_0|^2}{\hbar^2}\int_{-\infty}^{\infty}\mathrm{d}\omega'\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\, e^{i(\omega-\omega')(t'-t'')}\, e^{-i\omega_{21}(t'-t'')}\,\hat F(\omega') =\frac{|\mu_0|^2}{\hbar^2}\int_{-\infty}^{\infty}\mathrm{d}\omega'\,\hat F(\omega')\,\frac{2\left(1-\cos((\omega-\omega'-\omega_{21})t)\right)}{(\omega-\omega'-\omega_{21})^2}, \quad (20.16) \]
where the quotient approximates a δ-function for longer times
\[ \frac{2\left(1-\cos((\omega-\omega'-\omega_{21})t)\right)}{(\omega-\omega'-\omega_{21})^2}\to 2\pi t\,\delta(\omega-\omega'-\omega_{21}) \quad (20.17) \]
and finally the golden rule expression is obtained in the form
\[ \frac{\langle P(t)\rangle}{t}\to \frac{2\pi|\mu_0|^2}{\hbar^2}\,\hat F(\omega-\omega_{21}). \quad (20.18) \]
20.2 Gaussian Fluctuations

For Gaussian fluctuations, the dephasing function can be approximated by a cumulant expansion. Consider the expression \(\ln\langle e^{i\lambda x}\rangle\). Series expansion with respect to λ gives
\[ \ln\langle e^{i\lambda x}\rangle=\lambda\,\frac{\langle i x\, e^{i\lambda x}\rangle}{\langle e^{i\lambda x}\rangle} +\frac{\lambda^2}{2}\,\frac{-\langle x^2 e^{i\lambda x}\rangle\,\langle e^{i\lambda x}\rangle-\langle i x\, e^{i\lambda x}\rangle^2}{\langle e^{i\lambda x}\rangle^2}+\cdots \quad (20.19) \]
Setting now λ = 1 gives
\[ \ln\langle e^{ix}\rangle=i\langle x\rangle-\frac{1}{2}\left(\langle x^2\rangle-\langle x\rangle^2\right) +\frac{1}{6}\left(-i\langle x^3\rangle+3i\langle x^2\rangle\langle x\rangle-2i\langle x\rangle^3\right)+\cdots, \quad (20.20) \]
which for a distribution with zero mean simplifies to
\[ \ln\langle e^{ix}\rangle=-\frac{1}{2}\langle x^2\rangle-\frac{i}{6}\langle x^3\rangle+\cdots \quad (20.21) \]
Especially for a Gaussian distribution
\[ W(x)=\frac{1}{\Delta\sqrt{2\pi}}\, e^{-x^2/2\Delta^2}, \quad (20.22) \]
all cumulants except the first and second order vanish. This can be seen from
\[ \langle e^{ix}\rangle=\int_{-\infty}^{\infty} e^{ix}\,W(x)\,\mathrm{d}x=e^{-\Delta^2/2}. \quad (20.23) \]
If, for instance, the frequency fluctuations are described by diffusive motion in a harmonic potential, then the distribution of the integral \(\int_0^t \delta\omega(t')\mathrm{d}t'\) is Gaussian. Finally, we find
\[ F(t)=\exp\left\{-\frac{1}{2}\left\langle\left(\int_0^t\delta\omega(t')\,\mathrm{d}t'\right)^2\right\rangle\right\} =\exp\left\{-\frac{1}{2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\,\langle\delta\omega(t')\,\delta\omega(t'')\rangle\right\} =\exp\left\{-\frac{1}{2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\,\langle\delta\omega(t'-t'')\,\delta\omega(0)\rangle\right\}. \quad (20.24) \]
We assume that in the simplest case, the correlation of the frequency fluctuations decays exponentially
\[ \langle\delta\omega(t)\,\delta\omega(0)\rangle=\Delta^2\, e^{-t/\tau_c}. \quad (20.25) \]
Then integration gives
\[ F(t)=\exp\left\{-\Delta^2\tau_c^2\left(e^{-|t|/\tau_c}-1+|t|/\tau_c\right)\right\}. \quad (20.26) \]
Let us look at the limiting cases.

20.2.1 Long Correlation Time

The limit of long correlation time corresponds to the inhomogeneous case where \(\langle\delta\omega(t)\delta\omega(0)\rangle=\Delta^2\) is constant. We expand the exponential for \(t\ll\tau_c\):
\[ e^{-t/\tau_c}-1+t/\tau_c=\frac{t^2}{2\tau_c^2}+\cdots, \quad (20.27) \]
\[ F(t)=e^{-\Delta^2 t^2/2} \quad (20.28) \]
and the transition probability is
\[ \langle P(t)\rangle=\frac{|\mu_0|^2}{\hbar^2}\int_0^t\mathrm{d}t'\int_0^t\mathrm{d}t''\, e^{i(t'-t'')(\omega-\omega_{21})}\, e^{-\Delta^2(t'-t'')^2/2}. \quad (20.29) \]
Fig. 20.1. Time correlation function of the Kubo model. F(t) from (20.26) is shown for Δ = 1 and several values of τc. For large correlation times, the Gaussian exp(−Δ²t²/2) is approximated whereas for short correlation times the correlation function becomes approximately exponential and the width increases with τc⁻¹. Correspondingly, the Fourier transform becomes sharp in this case (motional narrowing)
Since F(t) is symmetric, we find
\[ \langle P(t)\rangle=\frac{|\mu_0|^2}{4\hbar^2}\int_{-t}^{t}\mathrm{d}t'\int_{-t}^{t}\mathrm{d}t''\, e^{i(t'-t'')(\omega-\omega_{21})}\, e^{-\Delta^2(t'-t'')^2/2} \approx\frac{|\mu_0|^2}{2\hbar^2}\, t\int_{-\infty}^{\infty}\mathrm{d}t'\, e^{it'(\omega-\omega_{21})}\, e^{-\Delta^2 t'^2/2} \quad (20.30) \]
and the lineshape has the form of a Gaussian (Fig. 20.1)
\[ \lim \frac{\langle P(t)\rangle}{t}\sim \exp\left\{-\frac{(\omega-\omega_{21})^2}{2\Delta^2}\right\}. \quad (20.31) \]
20.2.2 Short Correlation Time

For very short correlation time \(\tau_c\ll t\), we approximate
\[ F(t)=\exp\left\{-\Delta^2\tau_c|t|\right\}=\exp\left(-|t|/T_2\right) \quad (20.32) \]
with the dephasing time
\[ T_2=(\Delta^2\tau_c)^{-1}. \quad (20.33) \]
The lineshape has now the form of a Lorentzian
\[ \int_{-\infty}^{\infty}\mathrm{d}t\, e^{it(\omega-\omega_{21})}\, e^{-|t|/T_2}=\frac{2\,T_2^{-1}}{T_2^{-2}+(\omega-\omega_{21})^2}. \quad (20.34) \]
This result shows the motional narrowing effect when the correlation time is short or the motion very fast.
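The crossover between the two limits is easy to reproduce numerically. The following minimal sketch (not from the book; Δ = 1 and the τc values are illustrative) Fourier transforms the Kubo correlation function (20.26) and prints the resulting linewidth, which shrinks from the inhomogeneous Gaussian value towards the motionally narrowed Lorentzian value:

```python
import numpy as np

# Minimal sketch: lineshape of the Kubo model from (20.26). Delta = 1, so
# slow modulation gives a Gaussian of width ~Delta and fast modulation a
# Lorentzian of width ~Delta^2*tau_c (motional narrowing).
Delta = 1.0
t = np.linspace(0.0, 200.0, 20000)
dt = t[1] - t[0]
omega = np.linspace(-4.0, 4.0, 401)

for tau_c in (10.0, 1.0, 0.1):
    F = np.exp(-Delta**2 * tau_c**2 * (np.exp(-t / tau_c) - 1.0 + t / tau_c))
    # F(-t) = F(t) here, so the spectrum is 2 Re \int_0^inf e^{i w t} F(t) dt
    I = 2.0 * np.array([(np.cos(w * t) * F).sum() * dt for w in omega])
    fwhm = omega[I > I.max() / 2.0]
    print("tau_c = %5.1f   FWHM = %.3f" % (tau_c, fwhm[-1] - fwhm[0]))
```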
20.3 Markovian Modulation

Another model which can be analytically investigated describes the frequency fluctuations by a Markovian random walk. We discuss the simplest case of a dichotomous process (9.2), that is, the oscillator frequency switches randomly between two values ω±. This is, for instance, relevant for NMR spectra of a species which undergoes a chemical reaction
\[ \mathrm{A}+\mathrm{B}\rightleftharpoons \mathrm{AB}, \quad (20.35) \]
where the NMR resonance frequencies depend on the chemical environment. For a dichotomous Markovian process switching between two states X = ±, we have for the conditional transition probability
\[ P(X,t+\tau|X_0,t_0)=P(X,t+\tau|+,t)\,P(+,t|X_0,t_0)+P(X,t+\tau|-,t)\,P(-,t|X_0,t_0). \quad (20.36) \]
For small time increment τ, linearization gives
\[ P(+,t+\tau|+,t)=1-\alpha\tau,\quad P(-,t+\tau|-,t)=1-\beta\tau,\quad P(-,t+\tau|+,t)=\alpha\tau,\quad P(+,t+\tau|-,t)=\beta\tau, \quad (20.37) \]
hence
\[ P(+,t+\tau|+,t_0)=P(+,t+\tau|+,t)\,P(+,t|+,t_0)+P(+,t+\tau|-,t)\,P(-,t|+,t_0) =(1-\alpha\tau)\,P(+,t|+,t_0)+\beta\tau\,P(-,t|+,t_0) \quad (20.38) \]
and we obtain the differential equation
\[ \frac{\partial}{\partial t}P(+,t|+,t_0)=-\alpha\, P(+,t|+,t_0)+\beta\, P(-,t|+,t_0). \quad (20.39) \]
Similarly, we derive
\[ \frac{\partial}{\partial t}P(-,t|-,t_0)=-\beta\, P(-,t|-,t_0)+\alpha\, P(+,t|-,t_0), \]
\[ \frac{\partial}{\partial t}P(-,t|+,t_0)=-\beta\, P(-,t|+,t_0)+\alpha\, P(+,t|+,t_0), \]
\[ \frac{\partial}{\partial t}P(+,t|-,t_0)=-\alpha\, P(+,t|-,t_0)+\beta\, P(-,t|-,t_0). \quad (20.40) \]
In a stationary system, the probabilities depend only on t − t₀ and the differential equations can be written as a matrix equation
\[ \frac{\partial}{\partial t}\mathbf{P}(t)=A\,\mathbf{P}(t) \quad (20.41) \]
with
\[ \mathbf{P}(t)=\begin{pmatrix} P(+,t|+,0) & P(+,t|-,0)\\ P(-,t|+,0) & P(-,t|-,0)\end{pmatrix} \quad (20.42) \]
and the rate matrix
\[ A=\begin{pmatrix} -\alpha & \beta\\ \alpha & -\beta\end{pmatrix}. \quad (20.43) \]
Let us define the quantity
\[ Q(X,t|X',0)=\left\langle e^{-i\int_0^t\omega(t')\mathrm{d}t'}\right\rangle_{X,X'}, \quad (20.44) \]
where the average is taken under the conditions that the system is in state X at time t and in state X′ at time 0. Then we find
\[ Q(X,t+\tau|X',0)=\left\langle e^{-i\int_0^t\omega(t')\mathrm{d}t'}\, e^{-i\omega(t)\tau}\right\rangle_{X,X'} =Q(X,t+\tau|+,t)\,Q(+,t|X',0)+Q(X,t+\tau|-,t)\,Q(-,t|X',0). \quad (20.45) \]
Expansion for small τ gives
\[ Q(+,t+\tau|+,t)=\langle e^{-i\omega_+\tau}\rangle_{+,+}=1-(i\omega_+ +\alpha)\tau,\qquad Q(-,t+\tau|-,t)=\langle e^{-i\omega_-\tau}\rangle_{-,-}=1-(i\omega_- +\beta)\tau, \]
\[ Q(-,t+\tau|+,t)=\langle e^{-i\omega_+\tau}\rangle_{-,+}=\alpha\tau,\qquad Q(+,t+\tau|-,t)=\langle e^{-i\omega_-\tau}\rangle_{+,-}=\beta\tau, \quad (20.46) \]
and for the time derivatives
\[ \frac{\partial}{\partial t}Q(+,t|+,0)=\lim_{\tau\to 0}\frac{1}{\tau}\left[(-i\omega_+-\alpha)\tau\, Q(+,t|+,0)+\beta\tau\, Q(-,t|+,0)\right] =-(i\omega_++\alpha)\,Q(+,t|+,0)+\beta\, Q(-,t|+,0), \]
\[ \frac{\partial}{\partial t}Q(-,t|-,0)=-(i\omega_-+\beta)\,Q(-,t|-,0)+\alpha\, Q(+,t|-,0), \]
\[ \frac{\partial}{\partial t}Q(+,t|-,0)=-(i\omega_++\alpha)\,Q(+,t|-,0)+\beta\, Q(-,t|-,0), \]
\[ \frac{\partial}{\partial t}Q(-,t|+,0)=-(i\omega_-+\beta)\,Q(-,t|+,0)+\alpha\, Q(+,t|+,0), \quad (20.47) \]
or in matrix notation
\[ \frac{\partial}{\partial t}Q=(-i\Omega+A)\,Q \quad (20.48) \]
with
\[ Q=\begin{pmatrix} Q(+,t|+,0) & Q(+,t|-,0)\\ Q(-,t|+,0) & Q(-,t|-,0)\end{pmatrix} \quad (20.49) \]
and
\[ \Omega=\begin{pmatrix}\omega_+ & \\ & \omega_-\end{pmatrix}. \quad (20.50) \]
Equation (20.48) is formally solved by
\[ Q(t)=\exp\left\{(-i\Omega+A)\,t\right\}. \quad (20.51) \]
For a steady-state system, we have to average over the initial state and to sum over the final states to obtain the dephasing function. This can be expressed as
\[ F(t)=(1,1)\, Q(t)\begin{pmatrix}\frac{\beta}{\alpha+\beta}\\ \frac{\alpha}{\alpha+\beta}\end{pmatrix} =(1,1)\,\exp\left\{(-i\Omega+A)\,t\right\}\begin{pmatrix}\frac{\beta}{\alpha+\beta}\\ \frac{\alpha}{\alpha+\beta}\end{pmatrix}. \quad (20.52) \]
Laplace transformation gives
\[ \tilde F(s)=\int_0^{\infty} e^{-st}\, F(t)\,\mathrm{d}t=(1,1)\,(s+i\Omega-A)^{-1}\begin{pmatrix}\frac{\beta}{\alpha+\beta}\\ \frac{\alpha}{\alpha+\beta}\end{pmatrix}, \quad (20.53) \]
which can be easily evaluated to give
\[ \tilde F(s)=\frac{1}{(s+i\omega_1)(s+i\omega_2)+(\alpha+\beta)s+i(\alpha\omega_2+\beta\omega_1)}\;(1,1)\begin{pmatrix} s+i\omega_2+\beta & \beta\\ \alpha & s+i\omega_1+\alpha\end{pmatrix}\begin{pmatrix}\frac{\beta}{\alpha+\beta}\\ \frac{\alpha}{\alpha+\beta}\end{pmatrix} =\frac{s+(\alpha+\beta)+i\,\frac{\beta\omega_2+\alpha\omega_1}{\alpha+\beta}}{(s+i\omega_1)(s+i\omega_2)+(\alpha+\beta)s+i(\alpha\omega_2+\beta\omega_1)}. \quad (20.54) \]
Since the dephasing function generally obeys the symmetry F(−t) = F(t)*, the lineshape is obtained from the real part
\[ 2\pi\hat F(\omega)=\int_{-\infty}^{\infty} e^{i\omega t}F(t)\,\mathrm{d}t=\int_0^{\infty} e^{i\omega t}F(t)\,\mathrm{d}t+\int_0^{\infty} e^{-i\omega t}F^{*}(t)\,\mathrm{d}t \]
\[ =2\,\Re\,\tilde F(-i\omega) =\frac{2\,\frac{\alpha\beta}{\alpha+\beta}\,(\omega_1-\omega_2)^2}{(\omega-\omega_1)^2(\omega-\omega_2)^2+(\alpha+\beta)^2\left(\omega-\frac{\alpha\omega_2+\beta\omega_1}{\alpha+\beta}\right)^2}. \quad (20.55) \]
Let us introduce the average frequency
\[ \overline\omega=\frac{\alpha\omega_2+\beta\omega_1}{\alpha+\beta} \quad (20.56) \]
Fig. 20.2. Motional narrowing. The lineshape (20.55) is evaluated for ω1 = 1.0, ω2 = 1.5 and α = β = (a) 0.02, (b) 0.18, and (c) 1.0
and the correlation time of the dichotomous process
\[ \tau_c=\omega_c^{-1}=(\alpha+\beta)^{-1}. \quad (20.57) \]
In the limit of slow fluctuations (ω_c → 0), two sharp resonances appear at ω = ω_{1,2} with relative weights given by the equilibrium probabilities
\[ P(+)=\beta/(\alpha+\beta),\qquad P(-)=\alpha/(\alpha+\beta). \quad (20.58) \]
With increasing ω_c, the two resonances become broader and finally merge into one line. For further increasing ω_c, the resonance, which is now centered at the average frequency ω̄, becomes very narrow (this is known as the motional narrowing effect) (Fig. 20.2).
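The transition from two resolved lines to a single narrow line is easily reproduced by evaluating (20.55) directly. The sketch below (not from the book; it uses the illustrative parameters of Fig. 20.2) prints the position of the spectral maximum for slow, intermediate, and fast exchange:

```python
import numpy as np

# Minimal sketch: dichotomous lineshape (20.55) for the parameters of Fig. 20.2.
def lineshape(omega, w1=1.0, w2=1.5, alpha=0.02, beta=0.02):
    wbar = (alpha * w2 + beta * w1) / (alpha + beta)          # average frequency (20.56)
    num = 2.0 * alpha * beta / (alpha + beta) * (w1 - w2) ** 2
    den = (omega - w1) ** 2 * (omega - w2) ** 2 + (alpha + beta) ** 2 * (omega - wbar) ** 2
    return num / den

omega = np.linspace(0.0, 2.5, 2501)
for rate in (0.02, 0.18, 1.0):
    I = lineshape(omega, alpha=rate, beta=rate)
    print("alpha = beta = %.2f  ->  maximum at omega = %.2f" % (rate, omega[np.argmax(I)]))
```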
Problems

20.1. Motional Narrowing
Discuss the poles of the lineshape function and the motional narrowing effect (20.55) for the symmetrical case α = β = (2τc)⁻¹ and ω_{1,2} = ω̄ ± Δω/2.
21 Crossing of Two Electronic States
21.1 Adiabatic and Diabatic States

We consider the avoided crossing of two electronic states along a nuclear coordinate Q. The two states are described by adiabatic wave functions
\[ \psi_{1,2}^{ad}(r,Q) \quad (21.1) \]
and the wave function
\[ \psi_1^{ad}(r,Q)\,\chi_1(Q)+\psi_2^{ad}(r,Q)\,\chi_2(Q) \quad (21.2) \]
will also be written in matrix notation [57] as
\[ \left(\psi_1^{ad}(r,Q)\;\;\psi_2^{ad}(r,Q)\right)\begin{pmatrix}\chi_1(Q)\\ \chi_2(Q)\end{pmatrix}. \quad (21.3) \]
We will assume that the electronic wave functions are chosen real-valued (no magnetic field). From the adiabatic energies ad dr ψsad (Tel + V )ψsad (21.4) = δs,s Es (Q) and the matrix elements of the nuclear kinetic energy 2 ∂ 2 ad ψsad (r, Q) − dr ψs (r, Q) 2m ∂Q2 ∂ 2 ∂ 2 2 nad 2 nad,(2) (Q) − =− Vs,s (Q) − Vs,s 2m ∂Q 2m 2m ∂Q2
(21.5)
with the first- and second-order nonadiabatic coupling matrix elements 2 nad 2 ∂ ad − (21.6) Vs,s (Q) = − dr ψsad (r, Q) ψs (r, Q) , 2m 2m ∂Q
−
2 nad,(2) 2 Vs,s (Q) = − 2m 2m
drψsad (r, Q)
∂ 2 ad ψ (r, Q) , ∂Q2 s
(21.7)
we construct the Hamiltonian for the nuclear wave function H(Q) = −
2 nad,(2) 2 ∂ 2 2 nad ∂ ad V − V + E (Q) − (Q) (Q) 2m ∂Q2 2m ∂Q 2m s,s
with the matrices - ad E1 (Q) E ad (Q) =
/
. ,
V
nad
nad nad (Q) V12 (Q) V11
(21.8)
0
, nad nad (Q) V22 (Q) V21 ⎤ ⎡ nad,(2) nad,(2) V (Q) V (Q) 11 12 ⎦. (21.9) V nad,(2) (Q) = ⎣ nad,(2) nad,(2) (Q) V22 (Q) V21 E2ad (Q)
(Q) =
The nuclear part of the wave function is determined by the eigenvalue equation: χ1 (Q) χ1 (Q) =E . (21.10) H(Q) χ2 (Q) χ2 (Q) Now for real-valued functions, we have ∂ ad nad (Q) = dr ψsad (r, Q) Vss ψ ∂Q s ∂ ∂ ad dr ψsad (r, Q)ψsad ψs ψsad = dr (r, Q) − ∂Q ∂Q & nad ' = − Vs s (Q) (21.11) and the matrix V nad is antisymmetric1 nad 0 V12 . V nad = nad −V12 0
(21.12)
We want to define a new basis of so-called diabatic electronic states for which the nonadiabatic interaction is simplified. Therefore, we introduce a transformation with a unitary matrix c11 c12 (21.13) C= c21 c22 and form the linear combinations & ' & ' ψ dia = ψ1dia (r, Q) ψ2dia (r, Q) = ψ1ad (r, Q) ψ2ad (r, Q) C = ψad C. (21.14) 1
nad nad ∗ For complex wave functions instead Vs,s = −(Vs ,s ) .
21.1 Adiabatic and Diabatic States
The nuclear kinetic energy transforms into ∂ 2 dia 2 dr ψ dia † (r, Q) ψ (Q) − 2m ∂Q2 2 ∂2 ∂ 2 ψ ad 2 ad ∂ C ad C + ψ + ψ C dr C † ψ ad † =− 2m ∂Q2 ∂Q2 ∂Q2 ad ad ∂ψ ∂ ∂ψ ∂C ∂C ∂ +2 C + 2ψ ad +2 ∂Q ∂Q ∂Q ∂Q ∂Q ∂Q 2 2 2 ∂ ∂ C ∂C C † V nad,(2) C + C † =− + + 2C † V nad 2 2 2m ∂Q ∂Q ∂Q ∂C ∂ ∂ + 2C † V nad C + 2C † . ∂Q ∂Q ∂Q
221
(21.15)
(21.16)
The gradient operator vanishes if we chose C to be a solution of ∂C = −V nad (Q)C. ∂Q This can be formally solved by
∞
C = exp
V nad (Q)dQ .
(21.17)
(21.18)
Q
For antisymmetric V nad , the exponential function can be easily evaluated2 to give ∞ cos ζ(Q) sin ζ(Q) nad V12 (Q)dQ. (21.19) C= , ζ(Q) = − sin ζ(Q) cos ζ(Q) Q Due to (21.17), the last two summands in (21.16) cancel and 2 ∂ 2 dia dr ψ dia † (r, Q) − ψ (Q) 2m ∂Q2 2 2 ∂2 † nad,(2) †∂ C † nad ∂C C V . (21.20) =− C+C + + 2C V 2m ∂Q2 ∂Q2 ∂Q Applying (21.17) once more, we find that ∂2C ∂C + 2V nad ∂Q2 ∂Q nad ∂C ∂C ∂V = V nad,(2) C − C − V nad + 2V nad ∂Q ∂Q ∂Q nad ∂V C − (V nad )2 C. = V nad,(2) C − ∂Q
V nad,(2) C +
2
for instance by a series expansion
(21.21)
The derivative of the nonadiabatic coupling matrix is 2 ∂ψ ∂ψ ∂ ψ ∂V nad = dr + dr ψ ∂Q ∂Q ∂Q ∂Q2 = (V nad )T V nad + V nad,(2) = −(V nad )2 + V nad,(2) and therefore (21.21) vanishes. Together, we have finally 2 ∂2 2 ∂ 2 dr ψ dia † (r, Q) 2 ψ dia (Q) = − − . 2m ∂Q 2m ∂Q2 From
dr ψ dia † (Tel + V )ψdia =
dr C † ψ ad † (Tel + V )ψ ad C,
(21.22)
(21.23)
(21.24)
we find the matrix elements of the electronic Hamiltonian = C † E ad (Q)C H cos2 ζE1ad + sin2 ζE2ad sin ζ cos ζ(E1ad − E2ad ) . = sin ζ cos ζ(E1ad − E2ad ) cos2 ζE2ad + sin2 ζE1ad
(21.25)
At the crossing point (cos2 ζ0 − sin2 ζ0 )(E1ad − E2ad ) = 0,
(21.26)
which implies 1 . (21.27) 2 Expanding the sine and cosine functions around the crossing point, we have E1 +E2 & ' + (E&2 − E1 )(ζ − ζ0 ) + · ·'· (E1 − E2 ) 12 + (ζ − ζ0 )2 + · · · 2 ≈ H . 2 (E1 − E2 ) 12 + (ζ − ζ0 )2 + · · · E1 +E − (E2 − E1 )(ζ − ζ0 ) + · · · 2 cos2 ζ0 = sin2 ζ0 =
(21.28) Expanding furthermore the matrix elements ¯ + Δ + g1 (Q − Q0 ) + · · · , E1ad = E 2 Δ E2ad = E − + g2 (Q − Q0 ) + · · · , 2 nad ζ − ζ0 = −(Q − Q0 )V12 (Q0 ),
(21.29) (21.30) (21.31)
where Δ is the splitting of the adiabatic energies at the crossing point, the interaction matrix becomes nad 2 g1 + g2 −(Q − Q0 )V12 (Q0 )Δ Δ + g1 −g (Q − Q0 ) 2 2 + ··· Q+ H =E+ Δ nad 2 + g1 −g (Q − Q0 ) (Q − Q0 )V12 (Q0 )Δ 2 2 2 (21.32) We see that in the diabatic basis, the interaction is given by half the splitting of the adiabatic energies at the crossing point (Fig. 21.1).
Fig. 21.1. Curve crossing
21.2 Semiclassical Treatment

Landau [58] and Zener [59] investigated the curve-crossing process treating the nuclear motion classically by introducing a trajectory
\[ Q(t)=Q_0+vt, \quad (21.33) \]
where they assumed a constant velocity in the vicinity of the crossing point. They investigated the time-dependent model Hamiltonian
\[ H(t)=\begin{pmatrix} E_1(Q(t)) & V\\ V & E_2(Q(t))\end{pmatrix} =\begin{pmatrix} -\frac{1}{2}\frac{\partial\Delta E}{\partial t}\,t & V\\ V & \frac{1}{2}\frac{\partial\Delta E}{\partial t}\,t\end{pmatrix} \quad (21.34) \]
and solved the time-dependent Schrödinger equation
\[ i\hbar\frac{\partial}{\partial t}\begin{pmatrix}c_1\\ c_2\end{pmatrix}=H(t)\begin{pmatrix}c_1\\ c_2\end{pmatrix}, \quad (21.35) \]
which reads explicitly
\[ i\hbar\frac{\partial c_1}{\partial t}=E_1(t)\,c_1+V c_2,\qquad i\hbar\frac{\partial c_2}{\partial t}=E_2(t)\,c_2+V c_1. \quad (21.36) \]
Substituting
\[ c_1=a_1\, e^{(1/i\hbar)\int E_1(t)\mathrm{d}t},\qquad c_2=a_2\, e^{(1/i\hbar)\int E_2(t)\mathrm{d}t}, \quad (21.37) \]
the equations simplify to
\[ i\hbar\frac{\partial a_1}{\partial t}=V\, e^{(1/i\hbar)\int (E_2(t)-E_1(t))\mathrm{d}t}\, a_2,\qquad i\hbar\frac{\partial a_2}{\partial t}=V\, e^{-(1/i\hbar)\int (E_2(t)-E_1(t))\mathrm{d}t}\, a_1. \quad (21.38) \]
Fig. 21.2. Velocity dependence of the transition. At low velocities, the electronic wave function follows the nuclear motion adiabatically, corresponding to a transition between the diabatic states. At high velocities, the probability for this transition (21.42) becomes small. Full: diabatic states; dashed: adiabatic states
Let us consider the limit of small V and calculate the transition probability in lowest order. From the initial condition a₁(−∞) = 1, a₂(−∞) = 0, we get
\[ \int_0^t \left(E_2(t')-E_1(t')\right)\mathrm{d}t'=\frac{\partial\Delta E}{\partial t}\,\frac{t^2}{2}, \quad (21.39) \]
\[ a_2(\infty)\approx\frac{V}{i\hbar}\int_{-\infty}^{\infty}\mathrm{d}t\,\exp\left\{-\frac{1}{i\hbar}\,\frac{\partial\Delta E}{\partial t}\,\frac{t^2}{2}\right\} =\frac{V}{i\hbar}\sqrt{\frac{2\pi\hbar}{-i\,(\partial\Delta E/\partial t)}} \quad (21.40) \]
and the transition probability is
\[ P_{12}=|a_2(\infty)|^2=\frac{2\pi V^2}{\hbar\,|\partial\Delta E/\partial t|}. \quad (21.41) \]
Landau and Zener calculated the transition probability for arbitrary coupling strength (Fig. 21.2)
\[ P_{12}^{LZ}=1-\exp\left\{-\frac{2\pi V^2}{\hbar\,|\partial\Delta E/\partial t|}\right\}. \quad (21.42) \]
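The dependence on the sweep velocity shown in Fig. 21.2 follows directly from (21.42). The following minimal sketch (not from the book; ℏ = 1, and the coupling V and the slope of the energy gap are purely illustrative numbers) evaluates the Landau–Zener probability for several velocities:

```python
import numpy as np

# Minimal sketch: Landau-Zener probability (21.42) as a function of the
# sweep rate |dDeltaE/dt| = slope * v. hbar = 1; V and slope are illustrative.
V = 0.01                      # diabatic coupling
slope = 1.0                   # |d(E2 - E1)/dQ| at the crossing point
for v in (0.001, 0.01, 0.1, 1.0):
    rate = slope * v          # |dDeltaE/dt|
    p_lz = 1.0 - np.exp(-2.0 * np.pi * V**2 / rate)
    print("v = %6.3f   P_LZ = %.4f" % (v, p_lz))
```

Slow passage (small v) approaches the adiabatic limit P ≈ 1, fast passage reduces to the perturbative result (21.41).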
21.3 Application to Diabatic ET

If we describe the diabatic potentials as displaced harmonic oscillators
\[ E_1(Q)=\frac{m\omega^2}{2}Q^2,\qquad E_2(Q)=\Delta G+\frac{m\omega^2}{2}(Q-Q_1)^2, \quad (21.43) \]
with
\[ Q_1=\sqrt{\frac{2\lambda}{m\omega^2}}, \quad (21.44) \]
the energy gap is
\[ E_2-E_1=\Delta G+\frac{m\omega^2}{2}\left(Q_1^2-2Q_1 Q\right)=\Delta G+\lambda-\omega\sqrt{2m\lambda}\,Q. \quad (21.45) \]
Fig. 21.3. Multiple crossing. The probability of crossing during one period of the oscillation is 2P₁₂(1 − P₁₂) if the two crossings are independent
The time derivative of the energy gap is
\[ \frac{\partial\left(E_2(Q)-E_1(Q)\right)}{\partial t}=-\omega\sqrt{2m\lambda}\,\frac{\partial Q}{\partial t} \quad (21.46) \]
and the average velocity is
\[ \left\langle\left|\frac{\partial Q}{\partial t}\right|\right\rangle=\frac{\int_{-\infty}^{\infty}|v|\,e^{-mv^2/2kT}\,\mathrm{d}v}{\int_{-\infty}^{\infty}e^{-mv^2/2kT}\,\mathrm{d}v}=\sqrt{\frac{2kT}{m\pi}}. \quad (21.47) \]
The probability of curve crossing during one period of the oscillation is given by 2P₁₂(1 − P₁₂) ≈ 2P₁₂ (see Fig. 21.3). Together, this gives a transmission coefficient
\[ \kappa_{el}=2\,\frac{2\pi V^2}{\hbar}\,\frac{1}{\omega\sqrt{2\lambda}\,\sqrt{2kT/\pi}}=\frac{2\pi}{\omega}\,\frac{2\pi V^2}{\hbar}\,\frac{1}{\sqrt{4\pi\lambda kT}} \quad (21.48) \]
and a rate of
\[ k=\frac{2\pi V^2}{\hbar}\,\frac{1}{\sqrt{4\pi\lambda kT}}\, e^{-\Delta G_a/kT}. \quad (21.49) \]
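A short numerical sketch makes the magnitude of (21.49) concrete. The parameters below are illustrative only, and the activation energy is taken in the standard Marcus form ΔG_a = (ΔG + λ)²/4λ, which is an assumption consistent with the displaced-oscillator picture (21.43) but not written out explicitly here:

```python
import numpy as np

# Minimal sketch: nonadiabatic electron-transfer rate (21.49) with the
# Marcus activation energy DGa = (DG + lambda)^2 / (4 lambda) (assumed form).
hbar = 6.582e-16           # eV s
kT = 0.025                 # eV (room temperature)
V = 0.001                  # electronic coupling, eV (illustrative)
lam = 0.8                  # reorganization energy, eV (illustrative)

for dG in (-0.2, -0.8, -1.4):                 # driving force, eV
    dGa = (dG + lam) ** 2 / (4.0 * lam)       # activation energy
    k = (2.0 * np.pi * V**2 / hbar) / np.sqrt(4.0 * np.pi * lam * kT) * np.exp(-dGa / kT)
    print("dG = %5.2f eV   k = %.3e 1/s" % (dG, k))
```

The rate is maximal for ΔG = −λ (activationless case) and decreases again in the inverted region.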
21.4 Crossing in More Dimensions The definition of a diabatic basis is not so straightforward in the general case of more nuclear degrees of freedom [60]. Generalization of (21.17) gives ∂C = −Vjnad (Q)C. ∂Qj
(21.50)
Now consider ∂ ∂ V nad V nad (Q) − (Q) ∂Qi ss ,j ∂Qj ss ,i ∂ ad ∂ ad = dr ψs (r, Q) ψs (r, Q) ∂Qi ∂Qj ∂ ad ∂ ad − dr ψs (r, Q) ψs (r, Q) . ∂Qj ∂Qi
(21.51)
We assume completeness of the states ψsad and insert ψsad (r)ψsad (r ) = δ(r − r )
(21.52)
s
to obtain
dr dr
s
−
∂ ad ∂ ad ψs (r, Q) ψsad (r)ψsad (r ) ψs (r , Q) ∂Qi ∂Qj
∂ ad ∂ ad ad ad ψ (r, Q) ψs (r)ψs (r ) ψ (r, Q) , ∂Qj s ∂Qi s
which can be written as &
' nad nad nad nad Vs,s ,i Vs ,s ,j − Vs,s ,j Vs ,s ,i .
(21.53)
(21.54)
s =s,s
If only two states are involved, then this sum vanishes ∂ ∂ V nad (Q) − V nad (Q) = 0 ∂Qi 1,2,j ∂Qj 1,2,i
(21.55)
and a potential function exists, which provides a solution to (21.50). For more than two states, an integrability condition like (21.55) is not fulfilled and only part of the nonadiabatic coupling can be eliminated.3 Often a crude diabatic basis is used, which refers to a fixed configuration near the crossing point and does not depend on the nuclear coordinates at all. In a diabatic representation, the Hamiltonian of a two-state system has the general form H11 (Qi ) H12 (Qi ) H= . (21.56) H12 (Qi )† H22 (Qi ) In one dimension a crossing of the corresponding adiabatic curves (the eigenvalues of H) is very unlikely, since it occurs only if the two conditions H11 (Q) − H22 (Q) = H12 (Q) = 0
(21.57)
are fulfilled simultaneously.⁴ In two dimensions, this is generally the case in a single point, which leads to a so-called conical intersection. This type of curve crossing is very important for ultrafast transitions. In more than two dimensions, crossings appear at higher-dimensional surfaces. The terminology of conical intersections is also used here (Fig. 21.4).

³ This is similar to a vector field v in three dimensions, which can be divided into a longitudinal or irrotational part with rot v = 0 and a rotational part with div v = 0.
⁴ If the two states are of different symmetry, then H₁₂ = 0 and a crossing is possible in one dimension.
Fig. 21.4. Conical intersection
Problems 21.1. Crude Adiabatic Model Consider the crossing of two electronic states along a coordinate Q. As basis functions, we use two coordinate-independent electronic wave functions which diagonalize the Born–Oppenheimer Hamiltonian at the crossing point Q0 : (Tel + V (Q0 ))ϕ1,2 = e1,2 ϕ1,2 . Use the following ansatz functions Ψ1 (r, Q) = (cos ζ(Q)ϕ1 (r) − sin ζ(Q)ϕ2 (r))χ1 (Q), Ψ2 (r, Q) = (sin ζ(Q)ϕ1 (r) + cos ζ(Q)ϕ2 (r))χ2 (Q), which can be written in more compact form c s χ1 . (Ψ1 , Ψ2 ) = (ϕ1 , ϕ2 ) −s c χ2 The Hamiltonian is partitioned as H = TN + Tel + V (r, Q0 ) + ΔV (r, Q). Calculate the matrix elements of the Hamiltonian H11 H12 H21 H22 † & 1 2' c s 2 ∂ 2 c −s ϕ1 = + T + V (Q ) + ΔV ϕ , ϕ − , el 0 s c −s c 2m ∂Q2 ϕ†2 (21.58) where χ(Q) and ζ(Q) depend on the coordinate Q whereas the basis functions ϕ1,2 do not. Chose ζ(Q) such that Tel + V is diagonalized. Evaluate the nonadiabatic interaction terms at the crossing point Q0 .
22 Dynamics of an Excited State
In this chapter, we would like to describe the dynamics of an excited state |s which is prepared, for example, by electronic excitation, due to absorption of radiation. This state is an eigenstate of the diabatic Hamiltonian with energy Es0 . Close in energy to |s is a manifold of other states {|l}, which are not populated during the short-time excitation, since from the ground state only the transition |0 → |s is optically allowed. The l-states are weakly coupled to a continuum of bath states1 and therefore have a finite lifetime (Fig. 22.1).
Fig. 22.1. Dynamics of an excited state. The excited state |s⟩ decays into a manifold of states |l⟩ which are weakly coupled to a continuum
The bath states will not be considered explicitly. Instead, we use a non-Hermitian Hamiltonian for the subspace spanned by |s⟩ and {|l⟩}. We assume that the Hamiltonian is already diagonal² with respect to the manifold {|l⟩}, which has complex energies³

¹ For instance, the field of electromagnetic radiation.
² For non-Hermitian operators, we have to distinguish left eigenvectors and right eigenvectors.
³ This is also known as the damping approximation.
Fig. 22.2. Integration contour for G±(t)
\[ E_l^0=\epsilon_l-i\frac{\Gamma_l}{2}. \quad (22.1) \]
This describes the exponential decay ∼ e^{−Γ_l t/ℏ} of the l-states into the continuum states. Thus, the model Hamiltonian takes the form
\[ H=H^0+V=\begin{pmatrix} E_s^0 & V_{s1} & \cdots & V_{sL}\\ V_{1s} & E_1^0 & & \\ \vdots & & \ddots & \\ V_{Ls} & & & E_L^0\end{pmatrix}. \quad (22.2) \]
22.1 Green’s Formalism The corresponding Green’s operator or resolvent [61] is defined as G(E) = (E − H)−1 =
n
|n
1 n|. E − En
(22.3)
For a Hermitian Hamiltonian, the poles of the Green’s operator are on the real axis and the time evolution operator (the so-called propagator) is defined by (Fig. 22.2) = G+ (t) − G− (t), G(t) ∞±i 1 G± (t) = e−(iE/)t G(E)dE. 2πi −∞±i
(22.4) (22.5)
G(t) is given by an integral, which encloses all the poles En . Integration between two poles does not contribute, since the integration path can be taken to be on the real axis and the two contributions vanish. The integration
Fig. 22.3. Integration contour for G(t) = G₊(t) − G₋(t)
Fig. 22.4. Deformation of the integration contour for G₊(t)
along a small circle around a pole gives 2πi times the residual value which is given by e−iEn t/ (Fig. 22.3). Hence, we find t = G(t) |ne−(iE/)t n| = exp H . (22.6) i For times t < 0, the integration path for G+ can be closed in the upper half of the complex plane where the integrand e−(iE/)t G(E) vanishes exponentially for large |E| (Fig. 22.4). Hence (22.7) G+ (t) = 0 for t < 0. We shift the integration path for times t > 0 into the lower half of the complex plane, where again the integrand vanishes exponentially and we have to sum over all residuals to find G+ (t) = G(t) for
t > 0.
(22.8)
is the time evolution operator for t > 0.4 There are Hence, G+ (t) = Θ(t)G(t) additional interactions for times t < 0 which prepare the initial state 4
Similarly, we find G− (t) = (Θ(t − 1))G(t) is the time evolution operator for negative times.
Fig. 22.5. Integration contour for a non-Hermitian Hamiltonian
ψ(t = 0) = |s.
(22.9)
The integration contour for a non-Hermitian Hamiltonian can be chosen as the real axis for G+ (t), which now becomes the Fourier transform of the resolvent (Fig. 22.5). Dividing the Hamiltonian in the diagonal part H 0 and the interaction V , we have (G0 )−1 = E − H 0 , (22.10) G−1 = E − H = (G0 )−1 − V.
(22.11)
Multiplication from the left with G0 and from the right with G gives the Dyson equation: (22.12) G = G0 + G0 V G. We iterate that once more G = G0 + G0 V (G0 + G0 V G) = G0 + G0 V G0 + G0 V G0 V G
(22.13)
and project on the state |s s|G|s = s|G0 |s + s|G0 V G0 |s + s|G0 V G0 V G|s.
(22.14)
Now, G0 is diagonal and V is nondiagonal. Therefore s|G0 V G0 |s = s|G0 |ss|V |ss|G0 |s = 0
(22.15)
and s|G0 V G0 V G|s = s|G0 |ss|V |ll|G0 |ll|V |ss|G|s, Gss = G0ss + G0ss Vsl G0ll Vls Gss ,
(22.16) (22.17)
l
Gss =
G0ss 1 = , 0 0 1 − Gss l Vsl Gll Vls E − Es0 − Rs
(22.18)
with the level shift operator Rs =
l
|Vsl |2 . E − l + i(Γl /2)
(22.19)
The poles of the Green's function G_ss are given by the implicit equation
\[ E_p=E_s^0+\sum_l\frac{|V_{sl}|^2}{E_p-E_l^0}. \quad (22.20) \]
Generally, the Green's function is meromorphic and can be represented as
\[ G_{ss}(E)=\sum_p\frac{A_p}{E-E_p}, \quad (22.21) \]
where the residues are defined by
\[ A_p=\lim_{E\to E_p} G_{ss}(E)\,(E-E_p). \quad (22.22) \]
The probability of finding the system still in the state |s⟩ at time t > 0 is
\[ P_s(t)=|\langle s|\tilde G(t)|s\rangle|^2=|\tilde G_{ss}(t)|^2, \quad (22.23) \]
where the propagator is the Fourier transform of the Green's function:
\[ \tilde G^{+}_{ss}(t)=\frac{1}{2\pi i}\int_{-\infty}^{\infty} e^{(E/i\hbar)t}\, G_{ss}(E)\,\mathrm{d}E=\theta(t)\sum_p A_p\, e^{(E_p/i\hbar)t}. \quad (22.24) \]
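Instead of locating the poles analytically, P_s(t) can also be obtained by brute-force diagonalization of the finite non-Hermitian matrix (22.2). The sketch below is a minimal numerical illustration (not from the book; ℏ = 1, the parameters are chosen so that the statistical limit discussed later applies) and compares the decay with the golden-rule rate 2πV²/ℏΔ:

```python
import numpy as np

# Minimal sketch: survival probability P_s(t) for the ladder model (22.2)
# with equally spaced levels (22.25), via diagonalization of the
# non-Hermitian Hamiltonian. hbar = 1; parameters are illustrative.
V, Delta, Gamma, L = 0.01, 0.01, 0.02, 401     # coupling, spacing, width, # of l-states
E_l = Delta * (np.arange(L) - L // 2) - 0.5j * Gamma
H = np.zeros((L + 1, L + 1), dtype=complex)
H[0, 1:], H[1:, 0] = V, V                      # s-state (index 0) couples to all l
H[np.arange(1, L + 1), np.arange(1, L + 1)] = E_l

evals, R = np.linalg.eig(H)                    # complex eigenvalues, right eigenvectors
Rinv = np.linalg.inv(R)
psi0 = np.zeros(L + 1, dtype=complex); psi0[0] = 1.0

k_golden = 2.0 * np.pi * V**2 / Delta          # statistical-limit rate (22.46)
for t in (0.0, 10.0, 20.0, 40.0):
    amp = R @ (np.exp(-1j * evals * t) * (Rinv @ psi0))   # exp(-iHt) psi0
    print("t = %5.1f   P_s = %.4f   exp(-kt) = %.4f"
          % (t, abs(amp[0])**2, np.exp(-k_golden * t)))
```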
22.2 Ladder Model

We now want to study a simplified model which can be solved analytically. The energy of |s⟩ is set to zero. The manifold {|l⟩} consists of infinitely many equally spaced states with equal width
\[ E_l^0=\alpha+l\Delta-i\frac{\Gamma}{2} \quad (22.25) \]
and the interaction V_{sl} = V is taken independent of l. With this simplification, the poles are solutions of
\[ E_p=\sum_{l=-\infty}^{\infty}\frac{V^2}{E_p-\alpha-l\Delta+i(\Gamma/2)}, \quad (22.26) \]
which can be written using the identity⁵
\[ \cot(z)=\sum_{l=-\infty}^{\infty}\frac{1}{z-l\pi} \quad (22.27) \]

⁵ cot(z) has single poles at z = lπ with residues lim_{z→lπ} cot(z)(z − lπ) = 1.
as Ep =
V 2π cot Δ
π Δ
Γ Ep − α + i . 2
(22.28)
For the following discussion, it is convenient to measure all energy quantities in units of π/Δ and define α = απ/Δ, p = Ep π/Δ, E to have
Γ = Γ π/Δ, V = V π/Δ,
$ # iΓp Γ r 2 = V cot Ep − α , Ep = Ep − +i 2 2
(22.29) (22.30)
(22.31)
which can be split into real and imaginary part: pr − α pr )) sin(2(E E = , *p − Γ) − cos(2(E r − α V 2 cosh(Γ )) p −
sinh(Γp − Γ) Γp = . *p − Γ) − cos(2(E r − α 2V 2 cosh(Γ )) p
(22.32)
(22.33)
We now have to distinguish two limiting cases. The Small Molecule Limit For small molecules, the density of state 1/Δ is small. If we keep the lifetime of the l-states fixed, then Γ is small in this limit. The poles are close to the real axis.6 We approximate the real part by solving the real equation (Fig. 22.6)
(0) − α p(0) = V 2 cot E (22.34) E p p(0) , and consider first the solution which is closest to zero. Expanding around E we find iΓ (0) 2 (0) + Ep + ξ = V cot Ep + ξ − α 2
2 i Γ 2 (0) 2 (0) p − α p − α − V 1 + cot E = V cot E ξ+ , 2 (22.35)
6
We assume that α = 0.
cot (x-1)
22.2 Ladder Model
235
0
–4
–2
0 x
2
4
Fig. 22.6. Small molecule limit. The figure shows the graphical solution of x = cot(x − 1)
which simplifies to
2 i Γ (0) −α ξ = −V 1 + cot E ξ+ p 2 2
(22.36)
and has the solution
(0)2 E V 2 1 + Vp 4 i i , ξ = − Γp = − Γ p(0)2 2 2 E 2 1+V 1 + V 4
(22.37)
p(0) gives which in the case V small but V > E Γp ≈ V 2 Γ,
(22.38)
V2 Γ. Δ2 The other poles are approximately undisturbed Γp ≈ π 2
pr = α E + lπ, *p = Γ Γ
V 2 +
r2 E p 2 V r2 E
1 + V 2 +
p 2 V
(22.39)
(22.40) → Γ
(22.41)
and the residuum of the Green’s function follows from 1 2 p + dE − α (Ep + dE) − V cot E + =
p + dE) − V 2 cot E p − α (E +
iΓ 2
iΓ 2
1 + cot(E p − α + V 2 dE(1 +
1 1 + V 2 1 + cot E p − α dE +
iΓ 2
2 =
2 iΓ 2 ) )
1 1 , 2 p dE V + E
+ ···
,
(22.42)
which shows that only contributions from poles close to zero are important. The Statistical Limit Consider now the limit Γp > 1 and V > 1. This condition is usually fulfilled in biomolecules (Fig. 22.7). If the poles are far away enough from the real axis, then on the real axis the cotangent is very close to −i and V 2π π V 2π Γ cot Ep − α + i ≈ −i . (22.43) Rs = Δ Δ 2 Δ Thus Gss (E) =
1 E + i(V 2 π/Δ)
4
4
2
2
0
0
−2
−2
−4
−4 −4
−4 −2
−2 0
0 Im(z)
2
2 4
4
Re(z)
(22.44)
−4
−4 −2
−2 Im(z)
0
0 2
2 4
Re(z)
4
Fig. 22.7. Complex cotangent function. Left: imaginary part and right: real part of cot(z + 3i) are shown together with (z) and (z), respectively
and the initial state decays as P (t) = |e(−iV with the rate k=
2
| = e−kt
π/Δ)t 2
(22.45)
2πV 2 2πV 2 = ρ, Δ
(22.46)
1 . Δ
(22.47)
where the density of states is ρ=
22.3 A More General Ladder Model We now consider a more general model where the spacing of the states is not constant and the matrix elements are different. The Hamiltonian has the form ⎞ ⎛ Es Vs1 · · · VsL ⎟ ⎜ V1s E1 ⎟ ⎜ (22.48) H=⎜ . ⎟. .. ⎠ ⎝ .. . EL
VLs
We take the energies relative to the lowest l-state (Fig. 22.8) Es = E0 + ΔE,
El = E0 + l .
(22.49)
We start from the golden rule expression 2πV 2
2πV 2
δ(ΔE − l ),
(22.50)
where the density of final states is given by δ(Es − El ) = δ(ΔE − l ). ρ(Es ) =
(22.51)
k=
l
sl
δ(Es − El ) =
l
l
sl
l
Fig. 22.8. Generalized ladder model
We represent the δ-function by a Fourier integral: ∞ 1 e−iωtdt, δ(ω) = 2π −∞ ∞ 1 E → δ(E) = ω= e(E/i)t dt, 2π −∞ V2 ∞ sl e(ΔE−l /i)t dt. k= 2 −∞
(22.52) (22.53) (22.54)
l
Interchanging integration and summation, we find ∞ 2 ΔE − l Vsl dt exp k= t . 2 i −∞
(22.55)
l
We introduce z=
V2
sl (il /)t
(22.56)
dt e−(i/)ΔE t+ln(z) .
(22.57)
2
l
and write the rate as
∞
k=
e
∞
This integral will now be evaluated approximately using the saddle point (p. 339) method.7 The saddle point equation for the ladder model becomes
From the derivative
d ln z(ts ) i = 0. − ΔE + dt
(22.58)
1 Vsl2 il itl / d ln z = e , dt z 2
(22.59)
l
we find ΔE =
1 Vsl2 itl / l e . z 2
(22.60)
l
Since ΔE is real, we look for a saddle point on the imaginary axis and write its = −β,
(22.61)
where the new variable β is real. The saddle point equation now has a quasithermodynamic meaning: ΔE =
1 Vsl2 −βl l e = l , z 2 l
7
Also known as method of steepest descent or Laplace method.
(22.62)
where β plays the role of 1/kB T and gl = Vsl2 /2 that of a degeneracy factor. The second derivative relates to the width of the energy distribution: 2 d2 z dz d dz d2 ln z 2 dt = = dt − dt 2 dt dt z z z 2 1 −2l −βl 1 il −β = gl 2 e − 2 gl e z z l l 2 8 9 2 =− . (22.63) + 2 We approximate now the integral by the contribution around the saddle point: % ∞ 2π2 −(i/)ΔE t+ln(z) βΔE k= dt e = z(ts )e 2 − 2 ∞ % V2 2π sl −β(l −ΔE) e = . (22.64) 2 − 2 l
For comparison, let us consider the simplified model with Vsl = V and l = lΔ. Then ∞ V2 V 2 −βlΔ 1 e = 2 . (22.65) z= 2 1 − e−βΔ l=0
The saddle point equation is ΔE = −
Δ ∂ ln z = βΔ ∂β e −1
(22.66)
which determines the “temperature”
1 Δ ln 1 + Δ ΔE 1 if Δ ΔE. ≈ ΔE
β=
Then l =
Δ
≈ ΔE, −1 ∂2 Δ2 eΔ/ΔE 2 ln z = 2l − 2 = '2 ≈ Δe , & ∂β 2 eΔ/ΔE − 1 z(ts ) =
eΔ/ΔE
V2 V 2 ΔE 1 , ≈ 2 1 − e−Δ/ΔE 2 Δ
(22.67) (22.68)
(22.69) (22.70) (22.71)
and the rate becomes
V 2 ΔE 1 2π2 2πV 2 1 1 k= 2 = e e , Δ Δe2 Δ 2π √ where the numerical value of e/ 2π = 1.08
(22.72)
22.4 Application to the Displaced Oscillator Model We now want to discuss a displaced oscillator model (p. 205) for the transition between the vibrational manifolds of two electronic states. 2π 2 V F CD(ΔE) V2 ∞ V2 ∞ dt e(−it/)ΔE e(it/)Hf e−(it/)Hi = 2 dt e(−it/)ΔE F (t), = 2 −∞ −∞ (22.73)
k=
where Hf is the Hamiltonian of the nuclear vibrations in the final state and the correlation function F (t) = exp(g(t)) for independent displaced oscillators is taken from (31.12). The rate becomes V2 ∞ dt e(−it/)ΔE k= 2 −∞ # $ 4 5 × exp gr2 (nr + 1)(eiωr t − 1) + nr (e−iωr t − 1) . r
(22.74) The short-time expansion gives
V 2 2π2 (ΔE − Er )2 . k= 2 exp − Δ2 2Δ2
(22.75)
If all modes can be treated classically ωr kB T , the phonon number is n ¯ r = kB T /ωr and Δ2 ≈ 2kB T gr2 ωr = 2kB T Er (22.76) r
which gives for the rate in the classical limit the Marcus expression
1 2πV 2 (ΔE − Er )2 k= . exp − 4πkB T Er 4Er kB T
(22.77)
The saddle point equation is 4 5 i ΔE = gr2 iωr (nr + 1)eiωr t − iωr nr e−iωr t r 4 5 gr2 ωr (nr + 1)e−iωr t − nr eiωr t , =i r
(22.78)
which we solve approximately by linearization gr2 ωr [1 − (2¯ nr + 1)iωr t + · · · ] iΔE = i r
= iEr − Δ2
t + ··· ,
ts = −i
(22.79)
ΔE − Er . Δ2
(22.80)
At the saddle point, we have −
its its ΔE + g(ts ) = − ΔE + gr2 [(nr + 1)(iωr ts ) − nr iωr ts + · · · ] r =−
Δ 2 t2 its gr2 ωr ts − 2 s ΔE + i 2 r
−(Er − ΔE)2 its (ΔE − Er ) − Δ2 2Δ4 (ΔE − Er )2 =− . 2Δ2 =−
(22.81)
The second derivative is −
r
4 5 Δ2 gr2 ωr2 (nr + 1)e−iωr t + nr eiωr t = − 2 + · · ·
and finally the rate again is
(ΔE − Er )2 V 2 2π2 exp − k= 2 . Δ2 2Δ2
(22.82)
(22.83)
At low temperatures kB T ω, the saddle point equation simplifies to ΔE = gr2 ωr eiωr t . (22.84) r
To solve this equation, we introduce an average frequency (major accepting modes) ω with gr2 ωr = gr2 ω = S ω, (22.85) r
ω =
r
2 r gr ωr
S
,
S=
gr2 ,
(22.86)
r
which, after insertion into the saddle point equation, gives the approximation ΔE = S ωeits ω
(22.87)
Fig. 22.9. Energy gap law. The rate (22.90) is shown as a function of the energy gap in units of the average frequency for S = 0.2, 0.5, 1, 2
with the solution its =
ΔE 1 ln . ω S ω
The second derivative is 1 − gr2 ωr2 eiωr ts ≈ −S ω 2 eln(ΔE/Sω) = − ΔE ω r
(22.88)
(22.89)
and the rate is
2π ΔE 1 ΔE ΔE exp − ln +S −S ΔE ω ω S ω S ω
. 2 V ΔE ΔE 2π = exp −S − ln −1 . ΔE ω ω S ω
k=
V2 2
The dependence on ΔE is known as the energy gap law (Fig. 22.9)
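The shape of the curves in Fig. 22.9 follows from the exponential factor in (22.90). The sketch below (not from the book; only the dimensionless exponential factor is evaluated, i.e. the prefactor is dropped) reproduces the strong, faster-than-exponential falloff with the energy gap:

```python
import numpy as np

# Minimal sketch: exponential factor of the energy gap law (22.90),
# exp{-S - (DE/w)[ln(DE/(S w)) - 1]}, for several coupling strengths S.
def energy_gap_factor(dE_over_w, S):
    x = dE_over_w
    return np.exp(-S - x * (np.log(x / S) - 1.0))

for S in (0.2, 0.5, 1.0, 2.0):
    vals = [energy_gap_factor(x, S) for x in (2.0, 5.0, 10.0, 15.0)]
    print("S = %3.1f : " % S + "  ".join("%.2e" % v for v in vals))
```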
Problems 22.1. Ladder Model Solve the time evolution for the ladder model approximately ⎛ ⎞ 0 V ··· V ⎜ V E1 ⎟ ⎜ ⎟ H =⎜ . ⎟ , Ej = α + (j − 1)Δ. .. ⎝ .. ⎠ . V En
(22.90)
First derive an integral equation for C0 (t) only by substitution. Then replace the sum by integration over ω = j(Δ/) and extend the integration over the whole real axis. Replace the integral by a δ-function and show that the initial state decays exponentially with a rate k=
2πV 2 . Δ
Part VII
Elementary Photoinduced Processes
23 Photophysics of Chlorophylls and Carotenoids
Chlorophylls and carotenoids are very important light receptors. Both classes of molecules have a large π-electron system which is delocalized over many conjugated bonds and is responsible for strong absorption bands in the visible region (Figs. 23.1 and 23.2).
V O O CH3
Fig. 23.1. Structure of bacteriochlorophyll-b [62]. This is the principal green pigment of the photosynthetic bacterium Rhodopseudomonas viridis. Dashed curves indicate the delocalized π-electron system. Chlorophylls have an additional double bond between positions 3 and 4. Variants have different side chains at positions 2 and 3
Fig. 23.2. Structure of β-carotene. The basic structure of the carotenoids is a conjugated chain made up of isoprene units. Variants have different end groups
23.1 MO Model for the Electronic States The electronic wave functions of larger molecules are usually described by introducing one-electron wave functions φ(r) and expanding the wave function in terms of one or more Slater determinants. Singlet ground states can be in most cases described quite sufficiently by one determinant representing a set of doubly occupied orbitals: |S0 = |φ1↑ φ1,↓ · · · φnocc,↑ φnocc,↓ | φ1↑ (r1 ) φ1↓ (r1 ) · · · φnocc,↓ (r1 ) φ1↑ (r2 ) φ1↓ (r2 ) · · · φnocc,↓ (r2 ) 1 .. .. . .. = . (23.1) . . (2Nocc )! φ1↑ (r2N ) φ1↓ (r2N ) φ (r ) nocc,↓ 2Nocc occ occ Excited states can be described as a linear combination of excited electronic configurations. The lowest excited state can often be reasonably approximated as the transition of one electron from the highest occupied orbital φnocc (HOMO) to the lowest unoccupied orbital φnocc+1 (LUMO). The singlet excitation is given by 1 |S1 = √ (|φ1↑ φ1,↓ · · · φnocc,↑ φnocc+1,↓ | − |φ1↑ φ1,↓ · · · φnocc,↓ φnocc+1,↑ |). 2 (23.2) The molecular orbitals can be determined with the help of more or less sophisticated methods (Fig. 23.3). energy
Fig. 23.3. HOMO–LUMO transition. The singlet ground state is approximated by a Slater determinant of doubly occupied molecular orbitals. The lowest singlet excitation can be described by promotion of an electron from the highest occupied to the lowest unoccupied orbital for most chromophores
23.2 The Free Electron Model for Polyenes We approximate a polyene with N double bonds by a one-dimensional box of length L = (2N + 1)d.1 The orbitals of the free electron model H=−
2 ∂ 2 + V (r) 2me ∂x2
(23.3)
have to fulfill the boundary condition φ(0) = φ(L) = 0. Therefore, they are given by
2 πsx sin (23.4) φs (x) = L L with energies (Fig. 23.4) Es =
2 L
L 0
2 1 2 πs 2 πsx 2 πs dx = . sin 2me L L 2me L
(23.5)
Since there are 2N π-electrons, the energy of the lowest excitation is estimated as ΔE = En+1 − En π 2 2 π 2 2 2 2 . = ((N + 1) − N ) = 2me d2 (2N + 1)2 2me d2 (2N + 1)
(23.6)
Fig. 23.4. Octatetraene. Top: geometry-optimized structure. Middle: potential energy and lowest two eigenfunctions of the free electron model. Bottom: linear model with equal bond lengths d
1
The end of the box is one bond length behind the last C-atom.
The transition dipole matrix element for the singlet–singlet transition is 1 (−er)| √ (|φnocc↑ φnocc+1↓ | − |φnocc↓ φnocc+1↑ |) μ = |φnocc↑ φnocc↓ | 2 √ = − 2e dr φnocc (r)rφnocc+1 (r) √ 2 L N πx (N + 1)πx = − 2e x sin dx sin L 0 L L √ √ 2 2e 4L2 N (N + 1) 8 2ed n(N + 1) , (23.7) = = L π 2 (2N + 1)2 π2 2N + 1 which grows with increasing length of the polyene. The absorption coefficient is proportional to N 2 (N + 1)2 , (23.8) α ∼ μ2 ΔE ∼ (2N + 1)3 which depends close to linear on the number of double bonds N .
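The scaling of the gap with chain length is quickly checked numerically. The following minimal sketch (not from the book; the bond length d = 1.40 Å is an assumed typical value) evaluates (23.6) and the corresponding absorption wavelength for several polyene lengths:

```python
import numpy as np

# Minimal sketch: HOMO-LUMO gap of the free electron model (23.6) for a
# polyene with N double bonds, box length L = (2N+1)d, assuming d = 1.40 A.
hbar = 1.0546e-34      # J s
m_e = 9.109e-31        # kg
e = 1.602e-19          # C
d = 1.40e-10           # bond length, m

for N in (2, 4, 6, 8, 11):
    # dE = pi^2 hbar^2 (2N+1) / (2 m d^2 (2N+1)^2), i.e. (N+1)^2 - N^2 over (2N+1)^2
    dE = np.pi**2 * hbar**2 / (2 * m_e * d**2) * (2 * N + 1) / (2 * N + 1)**2
    lam_nm = 2 * np.pi * hbar * 3e8 / dE * 1e9      # absorption wavelength
    print("N = %2d   dE = %.2f eV   lambda = %4.0f nm" % (N, dE / e, lam_nm))
```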
23.3 The LCAO Approximation The molecular orbitals are usually expanded in a basis of atomic orbitals Cs ϕs (r), (23.9) φ(r) = s
where the atomic orbitals are centered on the nuclei and the coefficients are determined from diagonalization of a certain one-electron Hamiltonian (for instance, Hartree–Fock, Kohn–Sham, semiempirical approximations such as AM1) Hψ = Eψ. (23.10) Inserting the LCAO wave function gives Cs Hϕs (r) = E Cs ϕs (r)
(23.11)
and projection on one of the atomic orbitals ϕs gives a generalized eigenvalue problem: Cs (H − E)ϕs (r) 0 = d3 r ϕs (r) =
s
s
Cs (Hs s − ESs s ).
(23.12)
23.4 Hückel Approximation β
C
β
C
C
β
C
β
β
C
C
β
C
251
β
C
Fig. 23.5. Hückel model for polyenes
23.4 Hückel Approximation The Hückel approximation [63] is a very simple LCAO model for the πelectrons. It makes the following approximations (Fig. 23.5): • • • •
The diagonal matrix elements Hss = α have the same value (Coulomb integral) for all carbon atoms. The overlap of different pz -orbitals is neglected Sss = δss . The interaction between bonded atoms is Hss = β (resonance integral) and has the same value for all bonds. The interaction between nonbonded atoms is zero Hss = 0. The Hückel matrix for a linear polyene has the form of a tridiagonal matrix ⎞ ⎛ α β ⎟ ⎜β α β ⎟ ⎜ ⎟ ⎜ .. .. .. (23.13) H =⎜ ⎟ . . . ⎟ ⎜ ⎠ ⎝ β α β β α
and can be easily diagonalized with the eigenvectors (Fig. 23.6)
φr =
2 rsπ ϕs , sin 2N + 1 s=1 2N + 1 2N
The eigenvalues are Er = α + 2β cos
r = 1, 2, . . . , 2N.
rπ . 2N + 1
(23.14)
(23.15)
The symmetry of the molecule is C2h (Fig. 23.7). All π-orbitals are antisymmetric with respect to the vertical reflection σh . With respect to the C2 rotation, they have alternating a- or b-symmetry. Since σh × C2 = i, the orbitals are of alternating au - and bg -symmetry. The lowest transition energy is in the Hückel model (N + 1)π Nπ ΔE = EN +1 − EN = 2β cos − cos , (23.16) 2N + 1 2N + 1
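The analytic Hückel spectrum (23.15) is easily verified by direct diagonalization. The sketch below (not from the book; α = 0 and β = −2.4 eV are illustrative values) builds the tridiagonal matrix (23.13) for octatetraene and compares the numerical eigenvalues with (23.15):

```python
import numpy as np

# Minimal sketch: Hueckel spectrum of a linear polyene with 2N atoms,
# numerical diagonalization of (23.13) versus the analytic result (23.15).
N = 4                                    # octatetraene: 2N = 8 carbon atoms
alpha, beta = 0.0, -2.4                  # illustrative Coulomb/resonance integrals (eV)
n = 2 * N
H = np.diag([alpha] * n) + np.diag([beta] * (n - 1), 1) + np.diag([beta] * (n - 1), -1)
evals = np.sort(np.linalg.eigvalsh(H))

analytic = np.sort(alpha + 2 * beta * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print("numerical :", np.round(evals, 3))
print("(23.15)   :", np.round(analytic, 3))
print("HOMO-LUMO gap = %.3f eV" % (evals[N] - evals[N - 1]))
```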
Fig. 23.6. Hückel orbitals for octatetraene
        E    C2    σ     i
  Ag    1     1    1     1
  Au    1     1   −1    −1
  Bg    1    −1   −1     1
  Bu    1    −1    1    −1
Fig. 23.7. Character table of the symmetry group C2h . All π-orbitals are antisymmetric with respect to the reflection σ and are therefore of au - or bg -symmetry
which can be simplified with the help of (x = π/(2N + 1)) cos(N + 1)x − cos N x 1 = (ei(N +1)x + e−i(N +1)x − eiN x − e−iN x ) 2 1 1 = ei(N +1/2)x (eix/2 − e−ix/2 ) + e−i(N +1/2)x (e−ix/2 − eix/2 ) 2 2 1 x = −2 sin N+ x sin (23.17) 2 2 to give2 1 π ∼ for large n. 4N + 2 N This approximation can be improved if the bond length alternation into account, by using two alternating β-values [64]. The resulting are Ek = α ± β 2 + β 2 + 2ββ cos k, ΔE = −4β sin
2
β is a negative quantity.
(23.18) is taken energies (23.19)
23.6 Cyclic Polyene as a Model for Porphyrins
Ag
Ag
Ag
Bu
S3
Ag
S2
Bu
S1
Ag
S0
Ag
253
Fig. 23.8. Simplified CI model for linear polyenes
where the k-values are solutions of β sin(N + 1)k + β sin N k = 0.
(23.20)
In the Hückel model, the lowest excited state has the symmetry au × bg = Bu and is strongly allowed. However, it has been well established that for longer polyenes, the lowest excited singlet is an optically forbidden totally symmetric Ag state [65,66]. Such a low-lying dark state has been observed experimentally also for carotenoids [67] and can be only understood if correlation effects are taken into account [68].
23.5 Simplified CI Model for Polyenes In a very simple model, Kohler [69] treats only transitions from the bg -HOMO into the au -LUMO and bg -LUMO+1 orbitals. The HOMO–LUMO+1 transition as well as the double HOMO–LUMO transition are both of Ag -symmetry and can therefore interact. If the interaction is strong enough, then the lowest excited state will be of Ag -symmetry and therefore optically forbidden (Fig. 23.8).
23.6 Cyclic Polyene as a Model for Porphyrins For a cyclic polyene with N carbon atoms ⎛ α β ⎜β α β ⎜ ⎜ H = ⎜ ... ... ⎜ ⎝ β β
the Hückel matrix has the form ⎞ β ⎟ ⎟ ⎟ .. (23.21) . ⎟ ⎟ ⎠ α β β α
with eigenvectors N 1 iks e ϕs , φk = √ N s=1
k = 0,
2π 2π 2π , 2 , . . . , (N − 1) N N N
(23.22)
254
23 Photophysics of Chlorophylls and Carotenoids 3
3
4
α
β
4
α
β
2− N
N 5 2
2 NH
HN
N
5
2+ Mg
N
6 1
1
6
N
N
δ
γ 8
δ
γ 8
7
7
Fig. 23.9. Free base porphin and Mg-porphin. The bond character was assigned on the basis of the bond lengths from HF-optimized structures
and eigenvalues Ek = α + 2β cos k.
(23.23)
This can be used as a model for the class of porphyrin molecules [70, 71] (Fig. 23.9). For the metal–porphyrin, there are in principle two possibilities for the assignment of the essential π-system, an inner ring with 16 atoms or an outer ring with 20 atoms, both reflecting the D4h -symmetry of the molecule. Since it is not possible to draw a unique chemical structure, we have to count the π-electrons. The free base porphin is composed of 14 H-atoms (of which the peripheral ones are not shown), 20 C-atoms and 4 N-atoms which provide a total of 14 + 4 × 20 + 5 × 4 = 114 valence electrons. There are 42 σbonds (including those to the peripheral H-atoms) and two lonely electron pairs involving a total of 88 electrons. Therefore, the number of π-electrons is 114 − 88 = 26. Usually, it is assumed that the outer double bonds are not so strongly coupled to the π-system and a model with 18 π-electrons distributed over the N = 16-ring is used. For the metal porphin, the number of H-atoms is only 12 but there are four lonely electron pairs instead of two. Therefore, the number of non-π valence electrons is the same as for the free base porphin. The total number of valence electrons is again 114 since the Mg atom formally donates two electrons.
23.7 The Four Orbital Model for Porphyrins Gouterman [72–74] introduced the four orbital model which considers only the doubly degenerate HOMO (k = ±4 × 2π/N ) and LUMO (k = ±5 × 2π/N ) orbitals. There are four HOMO–LUMO transitions (Fig. 23.10). Their intensities are given by
23.7 The Four Orbital Model for Porphyrins
255
9 −8
8
−7
7
−6
6
−5
5
−4
4
−3
3
−2
2
−1
1 k=0
Fig. 23.10. Porphin orbitals
μ=
1 −ik s iks ϕ∗s (r)(−er)ϕs (r)d3 r e e N
(23.24)
ss
which is approximately3 ⎛ & cos &s × 1 −ik s iks μ= e e δss (−eR) ⎝ sin s × N 0 ss
'⎞
2π N' π N
⎠,
(23.25)
where the z-component vanishes and the perpendicular components have circular polarization4 −eR i(k−k ±2π/N )s i −eR 1 μ √ , ±√ , 0 = √ e = √ δk−k ,±2π/N . (23.26) 2 2 2N s 2 The selection rule is
k − k = ±2π/N.
(23.27)
Hence, two of the HOMO–LUMO transitions are allowed and circularly polarized: 2π 2π →5 , 4 N N 2π 2π −4 → −5 . (23.28) N N If configuration interaction is taken into account, these four transitions split into two degenerate excited states of E-symmetry. The higher one carries most of the intensity and corresponds to the Soret band in the UV. The 3 4
Neglecting differential overlaps and assuming a perfect circular arrangement. For an average R = 3 Å, this gives a total intensity of 207 Debye2 which is comparable to the 290 Debye2 from a HF/CI calculation.
256
23 Photophysics of Chlorophylls and Carotenoids
a
b
c
H H H H
y x
eV 2.0D (E) 5
10D (E)
x x x,y x y
x y
H H H H
H H H H
y x
y y x
8.6D
8.1D 8.0D
x
1.0D
0.5D 4.0D
x
1.4D
y
7D
10D
4
3
0.9D (E)
2
Fig. 23.11. Electronic structure of porphins and porphin derivatives. 631G-HF/CI calculations were performed with GAMESS [132]. Numbers give transition dipoles in Debyes. Dashed lines indicate very weak or dipole-forbidden transitions. (a) Mgporphin has two allowed transitions of E-symmetry, corresponding to the weak Q-band at lower and the strong B-band at higher energy. (b) In Mg-chlorin (dihydroporphin), the double bond 1–2 is saturated similar to chlorophylls. The Q-band splits into the lower and stronger Qy -band and the weaker Qx -band. (c) In Mg-tetrahydroporphin, double bonds 1–2 and 5–6 are saturated similar to bacteriochlorophyll. The Qy -band is shifted to lower energies
lower one is very weak and corresponds to the Q-band in the visible [75]. As a result of ab initio calculations, the four orbitals of the Gouterman model stay approximately separated from the rest of the MO orbitals [76]. If the symmetry is disturbed as for chlorin, the Q-band splits into the lower Qy -band which gains large intensity and the higher weak Qx -band. Figure 23.11 shows calculated spectra for Mg-porphin, Mg-chlorin, and Mg-tetrahydroporphin.
23.8 Energy Transfer Processes Carotenoids and chlorophylls are very important for photosynthetic systems [77]. Chlorophyll molecules with different absorption maxima are used to harvest light energy and to direct it to the reaction center where the special pair dimer has the lowest absorption band of the chlorophylls. Carotenoids can act as additional light-harvesting pigments in the blue region of the spectrum [78, 79] and transfer energy to chlorophylls which absorb at longer wavelengths. Carotenoids are also important as triplet quenchers to prevent
23.8 Energy Transfer Processes Chlorophyll
Chlorophyll
B
257
Chlorophyll dimer
Q
S0
Fig. 23.12. Chlorophyll–chlorophyll energy transfer. Energy is transferred via a sequence of chromophores with decreasing absorption maxima Chlorophyll
Carotenoid
B Bu Ag
Q T
T
S0
S0
Fig. 23.13. Carotenoid–chlorophyll energy transfer. Carotenoids absorb light in the blue region of the spectrum and transfer energy to chlorophylls which absorb at longer wavelengths (solid arrows). Carotenoid triplet states are below the lowest chlorophyll triplet and therefore important triplet quenchers (dashed arrow )
the formation of triplet oxygen and for dissipation of excess energy [80] (Figs. 23.12 and 23.13).
Problems 23.1. Polyene with Bond Length Alternation
C
C C C
C C C
C
Consider a cyclic polyene with 2N -carbon atoms with alternating bond lengths.
258
23 Photophysics of Chlorophylls and Carotenoids
The Hückel matrix has the form ⎛
⎞ α β β ⎟ ⎜ β α β ⎟ ⎜ ⎟ ⎜ . . .. .. ⎟. ⎜ H=⎜ β ⎟ ⎟ ⎜ .. ⎝ . α β⎠ β β α
(a) Show that the eigenvectors can be written as c2n = eikn ,
c2n−1 = ei(kn+χ) .
(b) Determine the phase angle χ and the eigenvalues for β = β (c) We want now to find the eigenvectors of a linear polyene. Therefore, we use the real-valued functions c2n = sin kn,
c2n−1 = sin(kn + χ),
n = 1, . . . , N.
with the phase angle as in (b). We now add two further carbon atoms with indices 0 and 2N + 1. The first of these two obviously has no effect since c0 = sin(0 × k) = 0. For the atom 2N + 1, we demand that the wave function again vanishes which restricts the possible k-values: 0 = c2N +1 = sin((N + 1)k + χ) = (eiχ+i(N +1)k ). For these k-values, the cyclic polyene with 2N + 2 atoms is equivalent to the linear polyene with 2N -atoms as the off-diagonal interaction becomes irrelevant. Show that the k-values obey the equation β sin((N + 1)k) + β sin(N k) = 0. (d) Find a similar treatment for a linear polyene with odd number of C-atoms.
24 Incoherent Energy Transfer
We consider the transfer of energy from an excited donor molecule D∗ to an acceptor molecule A D ∗ + A → D + A∗ . (24.1)
24.1 Excited States We assume that the optical transitions of both molecules can be described by the transition between the highest occupied (HOMO) and lowest unoccupied (LUMO) molecular orbitals. The Hartree–Fock ground state of one molecule can be written as a Slater determinant of doubly occupied molecular orbitals: |HF = |φ1↑ φ1↓ · · · φHO↑ φHO↓ |.
(24.2)
Promotion of an electron from the HOMO to the LUMO creates a singlet or a triplet state both of which are linear combinations of the four determinants: | ↑↓ = |φ1↑ φ1↓ · · · φHO↑ φLU↓ |, | ↓↑ = |φ1↑ φ1↓ · · · φHO↓ φLU↑ |, | ↑↑ = |φ1↑ φ1↓ · · · φHO↑ φLU↑ |, | ↓↓ = |φ1↑ φ1↓ · · · φHO↓ φLU↓ |.
(24.3)
Obviously, the last two determinants are the components of the triplet state with Sz = ±1: |1, +1 = | ↑↑, |1, −1 = | ↓↓.
(24.4)
Linear combination of the first two determinants gives the triplet state with Sz = 0: 1 |1, 0 = √ (| ↑↓ + | ↓↑) (24.5) 2
260
24 Incoherent Energy Transfer
and the singlet state 1 (24.6) |0, 0 = √ (| ↑↓ − | ↓↑). 2 Let us now consider the states of the molecule pair DA. The singlet ground state is (24.7) |φ1↑ φ1↓ · · · φHO,D,↑ φHO,D,↓ φHO,A,↑ φHO,A,↓ |, which will simply be denoted as |1 DA = |D↑ D↓ A↑ A↓ |.
(24.8)
The excited singlet state of the donor is 1 |1 D∗ A = √ (|D∗↑ D↓ A↑ A↓ | − |D∗↓ D↑ A↑ A↓ |) 2
(24.9)
and the excited state of the acceptor is 1 |1 DA∗ = √ (|D↑ D↓ A∗↑ A↓ | − |D↑ D↓ A∗↓ A↑ |). 2
(24.10)
24.2 Interaction Matrix Element The interaction responsible for energy transfer is the Coulombic electron– electron interaction: e2 Vee = . (24.11) 4πε|r1 − r2 | With respect to the basis of molecular orbitals, its matrix elements are denoted as V (φ1 φ1 φ2 φ2 ) = d3 r1 d3 r2 φ∗1σ (r1 )φ∗2,σ (r2 )
e2 φ1 σ (r1 )φ2 ,σ (r2 ). 4πε|r1 − r2 |
(24.12)
The transfer interaction Vif = 1 D∗ A|Vee |1 DA∗ 1 = |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↑ A↓ | 2 1 − |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↓ A↑ 2 1 − |D∗↓ D↑ A↑ A↓ |Vee ||D↑ D↓ A∗↑ A↓ | 2 1 + |D∗↓ D↑ A↑ A↓ |Vee |D↑ D↓ A∗↓ A↑ | 2
(24.13)
24.2 Interaction Matrix Element excitonic
261
exchange
D*
A*
D*
A*
D
A
D
A
Fig. 24.1. Intramolecular energy transfer. Left: the excitonic or Förster mechanism [81, 82] depends on transition intensities and spectral overlap. Right: the exchange or Dexter mechanism [83] depends on the overlap of the wave functions
consists of four summands. The first one has two contributions 1 1 1 |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↑ A↓ | = V (D∗ DA A∗ ) − V (D∗ A∗ AD), (24.14) 2 2 2 where the first part is the excitonic interaction and the second part is the exchange interaction (Fig. 24.1). The second summand 1 1 − |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↓ A↑ | = V (D∗ DA A∗ ) 2 2
(24.15)
has no exchange contribution due to the spin orientations. The two remaining summands are just mirror images of the first two. Altogether, the interaction for singlet energy transfer is 1 D∗ A|Vee |1 DA∗ = 2V (D∗ DAA∗ ) − V (D∗ A∗ AD).
(24.16)
In the triplet case, Vif = 3 D∗ A|Vee |3 DA∗ 1 = |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↑ A↓ | 2 1 + |D∗↑ D↓ A↑ A↓ |Vee |D↑ D↓ A∗↓ A↑ 2 1 + |D∗↓ D↑ A↑ A↓ |Vee ||D↑ D↓ A∗↑ A↓ | 2 1 + |D∗↓ D↑ A↑ A↓ |Vee |D↑ D↓ A∗↓ A↑ | 2 = −V (D∗ A∗ AD).
(24.17)
Here, energy can only be transferred by the exchange coupling [83]. Since this involves the overlap of electronic wave functions, it is important at small distances. In the singlet state, the excitonic interaction [81, 82] allows for energy transfer also at larger distances.
262
24 Incoherent Energy Transfer ρ1
ρ R1
2
R2
r1
r2
D
A
Fig. 24.2. Multipole expansion. For large intermolecular distance R = |R1 − R2 |, the distance of the electrons r1 − r2 can be expanded as a Taylor series with respect to (r1 − R1 )/R and (r2 − R2 )/R
24.3 Multipole Expansion of the Excitonic Interaction We will now apply a multipole expansion to the excitonic matrix element (Fig. 24.2) Vexc = 2V (D∗ DAA∗ ) = 2 d3 r1 d3 r2 φ∗D∗ (r1 )φ∗A (r2 )
e2 φD (r1 )φA∗ (r2 ). (24.18) 4πε|r1 − r2 |
We take the position of the electrons relative to the centers of the molecules r1,2 = R1,2 + 1,2
(24.19)
and expand the Coulombic interaction 1 1 = |r1 − r2 | |R1 − R2 + 1 − 2 |
(24.20)
using the Taylor series 1 1 R 1 3|R|(R)2 − |R|3 2 1 = − + + ··· |R + | |R| |R|2 |R| 2 |R|6
(24.21)
for R = R1 − R2 ,
= 1 − 2 .
(24.22)
The zero-order term vanishes due to the orthogonality of the orbitals. The first-order term gives e2 −2 R d3 1 d3 2 φ∗D∗ (r1 )φD (r1 )(1 − 2 )φ∗A (r2 )φA∗ (r2 ) (24.23) 4πε|R|3 and also vanishes due to the orthogonality. The second-order term gives the leading contribution:
24.4 Energy Transfer Rate
e2 4πε|R|6
263
d3 1 d3 2 φ∗D∗ (r1 )φD (r1 )
×(3|R|(R1 − R2 )2 − |R|3 (1 − 2 )2 )φ∗A (r2 )φA∗ (r2 ).
(24.24)
Only the integrals over mixed products of 1 and 2 are not zero. They can be expressed with the help of the transition dipoles: √ μD = 1 D|er|1 D∗ = 2 d3 1 φ∗D∗ (1 )e1 φD (1 ), √ 3 1 1 ∗ (24.25) μA = A|er| A = 2 d 2 φ∗A (2 )e2 φA∗ (2 ). Finally1 the leading term of the excitonic interaction is the dipole–dipole term: Vexc =
& 2 ' e2 |R| μD μA − 3(RμD )(RμA ) . 4πε|R|5
(24.26)
This is often written with an orientation-dependent factor K as Vexc =
K |μD ||μA |. |R|3
(24.27)
24.4 Energy Transfer Rate We consider excitonic interaction of two molecules. The Hamilton operator is divided into the zero-order Hamiltonian H0 = |D∗ AHD∗ A D∗ A| + |DA∗ HDA∗ DA∗ |
(24.28)
and the interaction operator H = |D∗ AVexc DA∗ | + |DA∗ Vexc D∗ A|.
(24.29)
We neglect electron exchange between the two molecules and apply the Condon approximation. Then taking energies relative to the electronic ground state |DA, we have HD∗ A = ωD∗ + HD∗ + HA , HDA∗ = ωA∗ + HD + HA∗ ,
(24.30)
with electronic excitation energies ωD∗ (A∗ ) and nuclear Hamiltonians HD , HD∗ , HA , HA∗ . Now, consider once more Fermi’s golden rule for the transition between vibronic states |i → |f : 1
Local dielectric constant and local field correction [84, 85] will be taken into account implicitly by considering the transition dipoles as effective values.
264
24 Incoherent Energy Transfer
\[ k = \frac{2\pi}{\hbar^2}\sum_{i,f} P_i\,|\langle i|H'|f\rangle|^2\,\delta(\omega_f-\omega_i) = \frac{1}{\hbar^2}\sum_{i,f} P_i\int \langle i|H'|f\rangle\langle f|H'|i\rangle\, e^{i(\omega_f-\omega_i)t}\,dt \]
\[ = \frac{1}{\hbar^2}\int \sum_{i,f} P_i\,\langle i|H'|f\rangle e^{i\omega_f t}\langle f|H'\,e^{i\omega_i t}|i\rangle\,dt = \frac{1}{\hbar^2}\int \sum_i \langle i|Q^{-1}e^{-H_0/k_BT}\,H'\,e^{itH_0/\hbar}H'e^{-itH_0/\hbar}|i\rangle\,dt \tag{24.31} \]
\[ = \frac{1}{\hbar^2}\int dt\,\langle H'(0)H'(t)\rangle. \tag{24.32} \]
Initially, only the donor is excited. Then the average is restricted to the vibrational states of D*A:
\[ k = \frac{1}{\hbar^2}\int dt\,\langle H'(0)H'(t)\rangle_{D^*A}. \tag{24.33} \]
With the transition dipole operators
\[ \hat\mu_D = |D\rangle\mu_D\langle D^*| + |D^*\rangle\mu_D\langle D|, \qquad \hat\mu_A = |A\rangle\mu_A\langle A^*| + |A^*\rangle\mu_A\langle A|, \tag{24.34} \]
the rate becomes
\[ k = \frac{K^2}{\hbar^2|R|^6}\int dt\,\langle \hat\mu_D(0)\hat\mu_A(0)\hat\mu_D(t)\hat\mu_A(t)\rangle_{D^*A}. \]
Here, we assumed that the orientation does not change on the relevant time scale. Since each of the dipole operators acts only on one of the molecules, we have
\[ k = \frac{K^2}{\hbar^2|R|^6}\int dt\,\langle \hat\mu_D(0)\hat\mu_D(t)\,\hat\mu_A(0)\hat\mu_A(t)\rangle_{D^*A} = \frac{K^2}{\hbar^2|R|^6}\int dt\,\langle \hat\mu_D(0)\hat\mu_D(t)\rangle_{D^*A}\,\langle \hat\mu_A(0)\hat\mu_A(t)\rangle_{D^*A} = \frac{K^2}{\hbar^2|R|^6}\int dt\,\langle \hat\mu_D(0)\hat\mu_D(t)\rangle_{D^*}\,\langle \hat\mu_A(0)\hat\mu_A(t)\rangle_{A}. \tag{24.35} \]
24.5 Spectral Overlap The two factors are related to the acceptor absorption and donor fluorescence spectra. Consider optical transitions between the singlet states |1 D∗ → |1 D and |1 A →1 A∗ . The number of fluorescence photons per time is given by the
Einstein coefficient for spontaneous emission2 AD∗ D =
3 2ωD ∗D |μD |2 3ε0 hc3
with the donor transition dipole moment (24.25) √ 3 μD = 2 d r φD∗ (er)φD .
(24.36)
(24.37)
The frequency-resolved total fluorescence is then Ie (ω) = AD∗ D ge (ω) =
2ω 3 |μD |2 ge (ω) 3ε0 hc3
(24.38)
with the normalized lineshape function ge (ω). The absorption cross section can be expressed with the help of the Einstein ω coefficient for absorption B12 as [86] ω σa (ω) = BAA ∗ ωga (ω)/c.
(24.39)
The Einstein coefficients for absorption and emission are related by ω BAA ∗ =
π 2 c3 1 π A A∗ A = |μA |2 3 ωA∗ A 3 2 ε 0
and therefore (Fig. 24.3) σa (ω) =
1 π |μA |2 ωga (ω). 3 cε0
In the following, we discuss absorption and emission in terms of the modified spectra: 3cε0 σa (ω) = |ˆ μA |2 ga (ω) π ω = Pi |i|ˆ μA |f |2 δ(ωf − ωi − ω)
α(ω) =
i,f
1 = 2π 1 = 2π
2
dt e
−iωt
e−H/kB T μA e−iωi t |i i μ ˆA f eiωf t f |ˆ Q i
dt e
−iωt
ˆ μA (0)ˆ μA (t)A
A detailed discussion of the relationships between absorption cross section and Einstein coefficients is found in [86].
Fig. 24.3. Absorption and emission. Within the displaced oscillator model, the 0 → 0 transition energy is located halfway between the absorption (α) and emission maximum (η)
and3 3ε0 hc3 Ie (ω) = |μD |2 ge (ω) 2ω 3 = Pf |f |ˆ μD |i|2 δ(ωf − ωi − ω)
η(ω) =
i,f
=
1 2π
dt e−iωt
e−H/kB T eiωf t μ ˆ D i e−iωi t i|ˆ μD |f f Q f
1 μD (t)ˆ μD (0)D∗ = dt e−iωt ˆ 2π 1 dt eiωt ˆ = μD (0)ˆ μD (t)D∗ . 2π
(24.40)
Within the Condon approximation
8 9 μA (t)A = eiωA∗ t |μA |2 eitHA∗ / e−itHA / μˆA (0)ˆ
A
= eiωA∗ t |μA |2 FA (t)
(24.41)
and therefore the lineshape function is the Fourier transform of the correlation function: 1 ga (ω) = dt ei(ω−ωA∗ )t FA (t). (24.42) 2π Similarly, 8 9 μD (t)D∗ = e−iωD∗ t |μD |2 eitHD / e−itHD∗ / μˆD (0)ˆ D∗
= e−iωD∗ t |μD |2 FD∗ (t) 3
(24.43)
Note the sign change which appears since we now have to average over the excited state vibrations.
and
\[ g_e(\omega) = \frac{1}{2\pi}\int dt\, e^{i(\omega-\omega_{D^*})t}\,F_{D^*}(t). \tag{24.44} \]
With the inverse Fourier integrals
\[ \langle\hat\mu_A(0)\hat\mu_A(t)\rangle_A = \int d\omega\, e^{i\omega t}\alpha(\omega), \qquad \langle\hat\mu_D(0)\hat\mu_D(t)\rangle_{D^*} = \int d\omega\, e^{-i\omega t}\eta(\omega), \tag{24.45} \]
(24.35) becomes
\[ k = \frac{K^2}{\hbar^2|R|^6}\int dt\int d\omega\, e^{i\omega t}\eta(\omega)\int d\omega'\, e^{-i\omega' t}\alpha(\omega') = \frac{K^2}{\hbar^2|R|^6}\int d\omega\, d\omega'\,\eta(\omega)\alpha(\omega')\,2\pi\delta(\omega-\omega') \]
\[ = \frac{2\pi K^2}{\hbar^2|R|^6}\int d\omega\,\eta(\omega)\alpha(\omega) = \frac{2\pi K^2}{\hbar^2|R|^6}\,|\mu_A|^2|\mu_D|^2\int d\omega\, g_e(\omega)\,g_a(\omega). \tag{24.46} \]
This is the famous rate expression for Förster [81] energy transfer which involves the spectral overlap of donor emission and acceptor absorption [87, 88]. For optimum efficiency of energy transfer, the maximum of the acceptor absorption should be at longer wavelength than the maximum of the donor emission (Fig. 24.4).
Fig. 24.4. Energy transfer and spectral overlap. Efficient energy transfer requires good spectral overlap of donor emission and acceptor absorption. This is normally only the case for transfer between unlike molecules
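As a rough illustration of how the overlap integral in (24.46) is evaluated in practice, the sketch below uses normalized Gaussian model lineshapes for donor emission and acceptor absorption. The band positions, widths, and the lumped prefactor C are invented, so only the trend with donor–acceptor detuning is meaningful.

```python
import numpy as np

# Schematic evaluation of the spectral overlap in (24.46). The full prefactor
# 2*pi*K^2*|mu_D|^2*|mu_A|^2/(hbar^2 R^6) is lumped into a constant C.
def gaussian(w, w0, sigma):
    return np.exp(-(w - w0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

w = np.linspace(10000, 20000, 4000)          # wavenumber grid (cm^-1)
g_e = gaussian(w, 14500, 300.0)              # donor emission lineshape (illustrative)
g_a = gaussian(w, 14800, 350.0)              # acceptor absorption lineshape (illustrative)
overlap = np.trapz(g_e * g_a, w)             # integral dw g_e(w) g_a(w)

C = 1.0                                      # lumped coupling/orientation/distance prefactor
print("spectral overlap =", overlap, "=> k proportional to", C * overlap)
```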
24.6 Energy Transfer in the Triplet State Consider now energy transfer in the triplet state. Here, the transitions are optically not allowed. We consider a more general interaction operator which changes the electronic state of both molecules simultaneously and can be written as4 H = |D∗ |AVx A∗ |D| + h.c. (24.47) We start from the rate expression (24.32) 1 k = 2 dt H (0)H (t).
(24.48)
The thermal average has to be taken over the initially populated state |D∗ A 1 k = 2 dt H (0)H (t)D∗ A . In the static case (Vx = const), 9 8 1 k = 2 dt H eitHDA∗ / H e−itHD∗ A / D∗ A 8 9 1 = 2 dt ei(ωA∗ −ωD∗ )t H eit(HD +HA∗ )/ H e−it(HD∗ +HA )/ ∗ D A 8 9 8 9 2 V eitHA∗ / e−itHA / dt ei(ωA∗ −ωD∗ )t eitHD / e−itHD∗ / = x2 D∗ A 2 V dt eit(ωA∗ −ωD∗ ) FD∗ (t)FA (t) = x2 (24.49) with the correlation functions
8 9 FA (t) = eitHA∗ / e−itHA / , A 8 9 FD∗ (t) = eitHD / e−itHD∗ / . D∗
(24.50) (24.51)
Introducing lineshape functions (24.42) and (24.44) similar to the excitonic case, the rate becomes V2 k = x2 dt eit(ωA∗ −ωD∗ ) dω e−i(ω −ωD∗ )t ge (ω ) dωei(ω−ωA∗ )t ga (ω) Vx2 = 2 dωdω ge (ω )ga (ω)2πδ(ω − ω ) 2πVx2 dωga (ω)ge (ω), = (24.52) which is very similar to the Förster expression (24.46). The excitonic interaction is replaced by the exchange coupling matrix element and the overlap of the optical spectra is replaced by the overlap of the Franck–Condon-weighted densities of states. 4
We assume that the wave function of the pair can be factorized approximately.
25 Coherent Excitations in Photosynthetic Systems
Fig. 25.1. Energy transfer in bacterial photosynthesis
Photosynthetic units of plants and bacteria consist of antenna complexes and reaction centers. Rings of closely coupled chlorophyll chromophores form the light-harvesting complexes which transfer the incoming photons very efficiently and rapidly to the reaction center where the photon energy is used to create an ion pair. In this chapter, we concentrate on the properties of strongly coupled chromophore aggregates (Fig. 25.1).
25.1 Coherent Excitations If the excitonic coupling is large compared to fluctuations of the excitation energies, a coherent excitation of two or more molecules can be generated. 25.1.1 Strongly Coupled Dimers Let us consider a dimer consisting of two strongly coupled molecules A and B as in the reaction center of photosynthesis. The two excited states |A∗ B,
|AB∗
(25.1)
are mixed due to the excitonic interaction. The eigenstates are given by the eigenvectors of the matrix
\[ \begin{pmatrix} E_{A^*B} & V \\ V & E_{AB^*} \end{pmatrix}. \tag{25.2} \]
For a symmetric dimer, the diagonal energies have the same value and the eigenvectors can be characterized as symmetric or antisymmetric:
\[ \begin{pmatrix} E_{A^*B} & V \\ V & E_{A^*B} \end{pmatrix}\begin{pmatrix} \tfrac{1}{\sqrt2} \\ \mp\tfrac{1}{\sqrt2} \end{pmatrix} = (E_{A^*B} \mp V)\begin{pmatrix} \tfrac{1}{\sqrt2} \\ \mp\tfrac{1}{\sqrt2} \end{pmatrix}. \tag{25.3} \]
The two excitonic states are split by 2V.¹ The transition dipoles of the two dimer bands are given by
\[ \boldsymbol\mu_\pm = \frac{1}{\sqrt2}(\boldsymbol\mu_A \pm \boldsymbol\mu_B) \tag{25.4} \]
and the intensities are given by
\[ |\boldsymbol\mu_\pm|^2 = \frac{1}{2}\big(\mu_A^2 + \mu_B^2 \pm 2\boldsymbol\mu_A\boldsymbol\mu_B\big). \tag{25.5} \]
For a symmetric dimer μ2A = μ2B = μ2 and |μ± |2 = μ2 (1 ± cos α),
(25.6)
where α denotes the angle between μA and μB . In case of an approximately C2 -symmetric structure, the components of μA and μB are furthermore related by symmetry operations (Fig. 25.2). If we choose the C2 -axis along the z-axis, we have ⎛ ⎞ ⎛ ⎞ μBx −μAx ⎝ μBy ⎠ = ⎝ −μAy ⎠ (25.7) μBz μAz 1
This is known as Davydov splitting.
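A quick way to check the dimer formulas (25.2)–(25.6) numerically is to diagonalize the 2×2 Hamiltonian directly. In the sketch below the site energies, coupling, and dipole lengths are illustrative; only the 139° angle is borrowed from Problem 25.1, and none of the numbers describe a specific system.

```python
import numpy as np

# Exciton splitting and band intensities of a dimer, cf. (25.2)-(25.6).
E_A, E_B, V = 12500.0, 12500.0, 300.0          # site energies and coupling (cm^-1), illustrative
mu_A = np.array([1.0, 0.0, 0.0])
alpha = np.deg2rad(139.0)                      # angle between the two transition dipoles
mu_B = np.array([np.cos(alpha), np.sin(alpha), 0.0])

H = np.array([[E_A, V], [V, E_B]])
E, C = np.linalg.eigh(H)                       # eigenvalues and eigenvectors (columns)
for n in range(2):
    mu_n = C[0, n] * mu_A + C[1, n] * mu_B     # transition dipole of exciton state n
    print(f"E = {E[n]:.1f} cm^-1, intensity |mu|^2 = {mu_n @ mu_n:.3f}")
# For E_A = E_B the splitting is 2V and |mu_(+/-)|^2 = mu^2 (1 +/- cos(alpha)), cf. (25.6).
```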
Fig. 25.2. The “special pair” dimer. Top: the nearly C2 -symmetric molecular arrangement for the reaction center of Rhodopseudomonas viridis (molekel graphics [89]). The transition dipoles of the two molecules (arrows) are essentially antiparallel. Bottom: the lower exciton component carries most of the oscillator strength
and therefore
⎛
⎞ 0 1 1 √ (μA + μB ) = √ ⎝ 0 ⎠ , 2 2 2μ Az ⎞ ⎛ 2μ 1 1 ⎝ Ax ⎠ √ (μA − μB ) = √ 2μAy , 2 2 0
(25.8)
which shows that the transition to the state |+ is polarized along the symmetry axis whereas the transition to |− is polarized perpendicularly. For the special pair dimer, interaction with internal charge transfer states |A+ B− and |A− B+ has to be considered. In the simplest model, the following interaction matrix elements are important (Fig. 25.3). The local excitation A∗ B is coupled to the CT state A+ B− by transferring an electron between the two LUMOs
Fig. 25.3. Extended dimer model. The optical excitations A∗ B and AB∗ are coupled to the ionic states A+ B− and A− B+
A∗ B|H|A+ B− 1 = (A∗↑ A↓ − A∗↓ A↑ )B↑ B↓ H(B∗↑ A↓ − B∗↓ A↑ )B↑ B↓ 2 = HA∗ ,B∗ = UL
(25.9)
and to the CT state A− B+ by transferring an electron between the HOMOs A∗ B|H|A− B+ 1 = (A∗↑ A↓ − A∗↓ A↑ )B↑ B↓ H(A∗↑ B↓ − A∗↓ B↑ )A↑ A↓ 2 = −HA,B = UH .
(25.10)
Similarly, the second local excitation couples to the CT states by AB∗ |H|A+ B− 1 = (B∗↑ B↓ − B∗↓ B↑ )A↑ A↓ H(B∗↑ A↓ − B∗↓ A↑ )B↑ B↓ 2 = −HA,B = UH ,
(25.11)
AB∗ |H|A− B+ 1 = (B∗↑ B↓ − B∗↓ B↑ )A↑ A↓ H(A∗↑ B↓ − A∗↓ B↑ )A↑ A↓ 2 = HA∗ ,B∗ = UL .
(25.12)
The interaction of the four states is summarized by the matrix ⎛ ⎞ E A∗ B V UL UH ⎜ V EB∗ A UH UL ⎟ ⎟. H=⎜ ⎝ UL UH EA+ B− ⎠ UH UL EA− B+
(25.13)
Again for a symmetric dimer, EB∗ A = EAB∗ and EA+ B− = EA− B+ and the interaction matrix can be simplified by transforming to symmetrized basis functions with the transformation matrix
Fig. 25.4. Dimer states
⎛
√1 √1 2 2 ⎜ − √1 √1 ⎜ 2 2
S=⎜ ⎝
⎞ √1 √1 2 2 − √12 √12
⎟ ⎟ ⎟. ⎠
The transformation gives ⎛ ⎞ E∗ − V UL − UH ⎜ E∗ + V UL + UH ⎟ ⎟, S −1 H S = ⎜ ⎝ UL − UH ⎠ ECT U L + UH ECT where the states of different symmetry are decoupled (Fig. 25.4) E∗ + V UL + UH E∗ − V UL − UH , H− = . H+ = UL + UH ECT UL − UH ECT
(25.14)
(25.15)
(25.16)
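The decoupling into the blocks H± of (25.16) can be verified numerically. The sketch below uses invented values for E*, E_CT, V, U_L, and U_H and simply compares the spectrum of the full 4×4 matrix (25.13) with that of the two 2×2 blocks.

```python
import numpy as np

# Four-state special-pair model, cf. (25.13): local excitations A*B, AB* coupled by V and to
# the charge-transfer states A+B-, A-B+ via U_L and U_H. Energies (cm^-1) are illustrative.
E_ex, E_CT = 12000.0, 13500.0
V, U_L, U_H = 600.0, 500.0, 300.0

H = np.array([[E_ex, V,    U_L,  U_H ],
              [V,    E_ex, U_H,  U_L ],
              [U_L,  U_H,  E_CT, 0.0 ],
              [U_H,  U_L,  0.0,  E_CT]])
print("full 4x4:", np.round(np.linalg.eigvalsh(H), 1))

# symmetric/antisymmetric combinations decouple, cf. (25.16)
H_plus  = np.array([[E_ex + V, U_L + U_H], [U_L + U_H, E_CT]])
H_minus = np.array([[E_ex - V, U_L - U_H], [U_L - U_H, E_CT]])
print("H+ block:", np.round(np.linalg.eigvalsh(H_plus), 1),
      " H- block:", np.round(np.linalg.eigvalsh(H_minus), 1))
```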
25.1.2 Excitonic Structure of the Reaction Center The reaction center of bacterial photosynthesis consists of six chromophores. Two bacteriochlorophyll molecules (PL , PM ) form the special pair dimer. Another bacteriochlorophyll (BL ) and a bacteriopheophytine (HL ) act as the acceptors during the first electron transfer steps. Both have symmetry-related counterparts (BM , HM ) which are not directly involved in the charge separation process. Due to the short neighbor distances (11–13 Å), delocalization of the optical excitation has to be considered to understand the optical spectra. Whereas the dipole–dipole approximation is not applicable to the strong coupling of the dimer chromophores, it has been used to estimate the remaining excitonic interactions in the reaction center. The strongest couplings are expected for the pairs PL BL , BL HL , PM BM , BM HM due to favorable distances and orientational factors (Fig. 25.5; Table 25.1). Starting from a system with full C2 -symmetry the excitations again can be classified as symmetric or antisymmetric. The lowest dimer band interacts
Fig. 25.5. Transition dipoles of the reaction center Rps. viridis [90–93]. Arrows show the transition dipoles of the isolated chromophores. Center–center distances (as defined by the nitrogen atoms) are given in Å

Table 25.1. Excitonic couplings for the reaction center of Rps. viridis

        HM     BM     PM     PL     BL     HL
HM       –    160     33     −9    −11      6
BM     160      –   −168    −37     29    −11
PM      33   −168      –    770    −42     −7
PL      −9    −37    770      –   −189     35
BL     −11     29    −42   −189      –    167
HL       6    −11     −7     35    167      –

The matrix elements were calculated with the dipole–dipole approximation for μ² = 45 Debye² for bacteriochlorophyll and 30 Debye² for bacteriopheophytine [94]. All values in cm⁻¹
with the antisymmetric combinations of B and H excitations. Due to the differences of excitation energies, this leads only to a small amount of state mixing. For the symmetric states, the situation is quite different as the upper dimer band is close to the B excitation (Fig. 25.6). If the symmetry is disturbed by structural differences or interactions with the protein, excitations of different symmetry character interact. Qualitatively, we expect that the lowest excitation is essentially the lower dimer band, the highest band2 reflects absorption from the pheophytines and in the region of the B absorption, we expect mixtures of the B∗ excitations and the upper dimer band. 25.1.3 Circular Molecular Aggregates We consider now a circular aggregate of N -chromophores as it is found in the light-harvesting complexes of photosynthesis [96–98] (Figs. 25.7 and 25.8). We align the CN -symmetry axis along the z-axis. The position of the nth molecule is ⎛ ⎞ cos(2π n/N ) (25.17) Rn = R ⎝ sin(2π n/N ) ⎠ , n = 0, 1, . . . , N − 1, 0 2
in the Qy region
Fig. 25.6. Excitations of the reaction center. The absorption spectrum of Rps. viridis in the Qy region is assigned on the basis of symmetric excitonic excitations [95]. The charge transfer states P*_CT are very broad and cannot be observed directly

Fig. 25.7. Circular aggregate
which can also be written with the help of a rotation matrix ⎛ ⎞ cos(2π/N ) − sin(2π/N ) SN = ⎝ sin(2π/N ) cos(2π/N ) ⎠ 1 as n Rn = SN R0 ,
⎛ ⎞ 1 ⎝ R0 = 0 ⎠. 0
(25.18)
(25.19)
Similarly, the transition dipoles are given by n μ0 . μn = S N
(25.20)
The component parallel to the symmetry axis is the same for all monomers μn,z = μ
(25.21)
Fig. 25.8. Light-harvesting complex. The light-harvesting complex from Rhodopseudomonas acidophila (structure 1kzu [99–103] from the protein data bank [104, 105]) consists of a ring of 18 closely coupled (shown in red and blue) and another ring of 9 less strongly coupled bacteriochlorophyll molecules (green). Rasmol graphics [106]
whereas for the component in the perpendicular plane cos(n2π/N ) − sin(n2π/N ) μ0,x μn,x = . μn,y μ0,y sin(n2π/N ) cos(n2π/N )
(25.22)
We describe the orientation of μ0 in the x − y plane by the angle φ: cos(φ) μ0,x = μ⊥ . (25.23) μ0,y sin(φ) Then we have (Fig. 25.9) ⎞ ⎛ ⎞ ⎛ μ⊥ cos(φ + n2π/N ) μn,x ⎝ μn,y ⎠ = ⎝ μ⊥ cos(φ + n2π/N ) ⎠ . μ μn,z
(25.24)
Fig. 25.9. Orientation of the transition dipoles
We denote the local excitation of the nth molecule by |n = |A0 A2 · · · A∗n · · · AN −1 .
(25.25)
Due to the symmetry of the system, the excitonic interaction is invariant against the SN rotation and therefore m|V |n = m − n|V |0 = 0|V |n − m.
(25.26)
Without a magnetic field, the coupling matrix elements can be chosen real and depend only on |m − n|. Within the dipole–dipole approximation, we have furthermore3 V|m−n| = m|V |n 2
=
e 4πε|Rmn |5
&
(25.27) ' |Rmn |2 μm μn − 3(Rmn μm )(Rmn μn ) . (25.28)
The interaction matrix has the form (Table 25.2) ⎞ ⎛ E0 V1 V2 · · · V2 V1 ⎜ V1 E1 V1 · · · V3 V2 ⎟ ⎟ ⎜ ⎜ .. ⎟ ⎟ ⎜ V2 V1 . . . . ⎟. H =⎜ ⎟ ⎜ . . ⎜ .. .. V2 ⎟ ⎟ ⎜ ⎝ V2 V3 V1 ⎠ V1 V2 · · · V2 V1 EN −1
(25.29)
The excitonic wave functions are easily constructed as N −1 1 ikn |k = √ e |n N n=0
3
(25.30)
A more realistic description based on a semiempirical INDO/S method is given by [107].
Table 25.2. Excitonic couplings V|m−n| (cm⁻¹)

|m − n|    V|m−n|
   1      −352.5
   2       −43.4
   3       −12.6
   4        −5.1
   5        −2.6
   6        −1.5
   7        −0.9
   8        −0.7
   9        −0.6

The matrix elements are calculated with the dipole–dipole approximation (25.27) for R = 26.5 Å, φ₀ = 66°, and μ² = 37 Debye²
with k=
2π l, N
k |H|k =
l = 0, 1, . . . , N − 1,
(25.31)
N −1 N −1 1 ikn −ik n e e n |H|n N n=0 n =0
=
N −1 N −1 1 ikn −ik n e e H|n−n | . N n=0
(25.32)
n =0
We substitute
m = n − n
(25.33)
to get k |H|k =
N −1 n−N +1 1 i(k−k )n+ik m e H|m| N n=0 m=n
= δk,k
N −1
eik m H|m|
m=0
= δk,k (E0 + 2V1 cos k + 2V2 cos 2k + · · · ) .
(25.34)
For even N , the lowest and highest states are not degenerate whereas for all of the other states (Fig. 25.10) Ek = EN −k = E−k .
(25.35)
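For a concrete picture of the dispersion relation (25.34), the sketch below evaluates E_k for an 18-membered ring using the leading couplings of Table 25.2 and checks the result against direct diagonalization of the circulant matrix (25.29). The offset E0 is an arbitrary assumption.

```python
import numpy as np

# Exciton energies of a circular aggregate, cf. (25.34): E_k = E0 + 2 V1 cos k + 2 V2 cos 2k + ...
N = 18
E0 = 12500.0                                   # arbitrary site energy (cm^-1)
V = {1: -352.5, 2: -43.4, 3: -12.6}            # couplings roughly as in Table 25.2 (cm^-1)

k = 2 * np.pi * np.arange(N) / N
E_k = E0 + sum(2 * Vm * np.cos(m * k) for m, Vm in V.items())

# the same spectrum from direct diagonalization of the circulant matrix (25.29)
H = np.diag(np.full(N, E0))
for m, Vm in V.items():
    for n in range(N):
        H[n, (n + m) % N] = H[(n + m) % N, n] = Vm
print(np.allclose(np.sort(E_k), np.linalg.eigvalsh(H)))   # True
```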
Fig. 25.10. Exciton dispersion relation. The figure shows the case of negative excitonic interaction for which the k = 0 state is the lowest in energy
The transition dipoles of the k-states are given by N −1 N −1 1 ikn 1 ikn n μk = √ e μn = √ e SN μ0 N n=0 N n=0 ⎛ ⎞ N −1 μ cos(2π n/N ) + μy sin(2π n/N ) 1 ikn ⎝ x μy cos(2π n/N ) − μx sin(2π n/N ) ⎠ . = √ e N n=0 μz
(25.36)
For the z-component, we have (Fig. 25.11) N −1 √ 1 √ μz eikn = N μz δk,0 . N n=0
(25.37)
For the component in the x, y-plane, we introduce complex polarization vectors corresponding to circular polarized light:
±i 0 μk,± = √12 √ μk 2 ⎛ ⎞⎛ ⎞ cos( 2πn ) − sin( 2πn ) N N N −1 μx
1 ⎜ ⎟ =√ eikn √12 ± √i2 0 ⎝ sin( 2πn ) cos( 2πn ) ⎠ ⎝ μy ⎠ N N N n=0 μz 1 ⎛ ⎞ N −1 μ 1 ikn √1 ±i2nπ/N √i ±i2nπ/N ⎝ x ⎠ e ± e 0 μy =√ e 2 2 N n=0 μ N −1
1 ±i ei(k±2π/N ) √12 √ = √ 2 N n=0 √ = N δk,∓2π/N μ± with μ± =
μx μy
z
1 1 √ (μx ± iμy ) = √ μ⊥ e±iφ . (25.38) 2 2
Fig. 25.11. Allowed optical transitions. The figure shows the case of negative excitonic interaction where the lowest three exciton states carry intensity
Linear polarized light with a polarization vector
& ' 1 −iΘ √1 1 −i 0 √ cos Θ sin Θ 0 = √ eiΘ √12 √ + e 2 2 2 2
√i
2
0
(25.39)
couples to a linear combination of the k = ±2π/N states since
N μk,Θ = (25.40) μ⊥ ei(Φ−Θ) δk,−2π/N + e−i(Φ−Θ) δk,2π/N . 2 The absorption strength does not depend on the polarization angle Θ |μ−2π/N,Θ |2 + |μ2π/N,Θ |2 = N μ2⊥ .
(25.41)
25.1.4 Dimerized Systems of LH2 The light-harvesting complex LH2 of Rps. acidophila contains a ring of nine weakly coupled chlorophylls and another ring of nine stronger coupled chlorophyll dimers. The two units forming a dimer will be denoted as α, β and the number of dimers as N (Fig. 25.12). The transition dipole moments are '⎞ & ⎛ ⎞ ⎛ sin θα cos n 2π N − ν + φα sin θα ⎟ ⎜ n μn,α = μ ⎝ sin θα sin &n 2π − ν + φα ' ⎠ = μSN Rz (−ν + φα ) ⎝ 0 ⎠ , N cos θα cos θα & 2π '⎞ ⎛ ⎛ ⎞ sin θβ cos n N + ν + φβ sin θβ ⎜ ⎟ n μn,β = μ ⎝ sin θβ sin &n 2π + ν + φβ ' ⎠ = μSN Rz (+ν + φβ ) ⎝ 0 ⎠ . N cos θβ cos θβ (25.42)
Fig. 25.12. BChl arrangement for the LH2 complex of Rps. acidophila [99–103]. Arrows represent the transition dipole moments of the BChl molecules as calculated on the 631G/HF-CI level. Distances are with respect to the center of the four nitrogen atoms. Rα = 26.1 Å, Rβ = 26.9 Å, 2ν = 20.7◦ , φα = 70.5◦ , φβ = −117.7◦ , θα = 83.7◦ , and θβ = 80.3◦
For a circle with C2N -symmetrical positions (2ν = π/N, θα = θβ ) and alternating transition dipole directions (φβ = π + φα ), we find the relation
π + π μn,α . μn,β = Rz (2ν + φβ − φα )μn,α = Rz (25.43) N The experimental value of 2ν + φβ − φα = 208.9◦
(25.44)
is in fact very close to the value of 180◦ + 20◦ (or π + (π/N )). We have to distinguish the following interaction matrix elements between two monomers in different unit cells (Figs. 25.13; Table 25.3): Vα,α,|m| , Vα,β,m = Vβ,α,−m , Vα,β,−m = Vβ,α,m , Vβ,β,|m| . The interaction matrix of one dimer is Eα Vdim . Vdim Eβ
(25.45)
(25.46)
The wave function has to be generalized as N −1 1 ikn e (Csα |nα + Csβ |nβ), |k, s = √ N n=0
k s |H|ks =
N −1 N −1 1 i(kn−k n ) & e Cs α Csα Hαα|n−n | N n=0 n =0
(25.47)
' +Cs β Csβ Hββ|n−n | + Cs α Csβ Hαβ(n−n ) + Cs β Csα Hβα(n−n ) ,
Fig. 25.13. Couplings in the dimerized ring

Table 25.3. Excitonic couplings for the LH2 complex of Rps. acidophila

 n    Vαβ,n    Vβα,n    Vαα,n    Vββ,n
 0    356.1    356.1       –        –
 1     13.3    324.5    −49.8    −35.0
 2      2.8     11.8     −6.1     −4.0
 3      1.0      2.4     −1.8     −1.0
 4      0.7      0.9     −0.9     −0.4

The matrix elements were calculated within the dipole–dipole approximation for the structure shown in Fig. 25.12 for μ² = 37 Debye². All values in cm⁻¹
=
−1 N −1 ' N 1 & Hαα|m| Hαβm Csα Cs α Cs β ei(k−k )n+ik m , Hβαm Hββ|m| Csβ N n=0 m=0 &
= δk,k Cs α Cs β
'
N −1 m=0
eikm Hαα|m|
N −1
ikm Hβαm m=0 e
N −1 m=0
N −1
eikm Hαβm
ikm Hββm m=0 e
Csα Csβ
.
(25.48) The coefficients Csα and Csβ are determined by diagonalization of the matrix: Hk =
Eα + 2Vαα,1 cos k + · · · Vdim + eik Vαβ,1 + e−ik Vαβ,−1 + · · · −ik ik Eα + 2Vββ,1 cos k + · · · Vdim + e Vαβ,1 + e Vαβ,−1 + · · ·
.
(25.49)
If we consider only interactions between nearest neighbors, this simplifies to
\[ H_k = \begin{pmatrix} E_\alpha & V_{dim} + e^{-ik}W \\ V_{dim} + e^{ik}W & E_\beta \end{pmatrix} \quad\text{with } W = V_{\alpha\beta,-1}. \tag{25.50} \]
In the following, we discuss the limit \(E_\alpha = E_\beta\); the general case is discussed in the Problems section. The eigenvectors of
\[ H_k = \begin{pmatrix} E_\alpha & V_{dim} + e^{-ik}W \\ V_{dim} + e^{ik}W & E_\alpha \end{pmatrix} \]
are given by
\[ \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ \pm e^{i\chi} \end{pmatrix}, \tag{25.51} \]
where the angle χ is chosen such that
\[ V + e^{ik}W = U(k)\,e^{i\chi} \tag{25.52} \]
with \(U(k) = \mathrm{sign}(V)\,|V + e^{ik}W|\), and the eigenvalues are
\[ E_{k,\pm} = E_\alpha \pm U(k) = E_\alpha \pm \mathrm{sign}(V)\sqrt{V_{dim}^2 + W^2 + 2V_{dim}W\cos k}. \tag{25.53} \]
The transition dipoles follow from \(\mu_{k,\pm}\)
N −1 1 ikn e (μn,α ± eiχ μn,β ) 2N n=0 N −1
μ ikn n e SN 2N n=0 ⎛⎛ ⎞ ⎛ ⎞⎞ sin θα cos(−ν + φα ) sin θβ cos(ν + φβ ) × ⎝⎝ sin θα sin(−ν + φα ) ⎠ ± eiχ ⎝ sin θβ sin(ν + φβ ) ⎠⎠. (25.54) cos θα cos θβ
=√
Similar selection rules as for the simple ring system follow for the first factor &
'
0 0 1 μk,±
√1 √i 2 2
0 μk,±
N −1 ' μ ikn & cos θα ± eiχ cos θβ = √ e 2N n=0
N (cos θα ± cos θβ ) , = δk,0 μ 2 N −1 '& ' 1 i(k±2π/N )n & 1 i 0 μα,1 ± eiχ μβ,1 = √ e 2N n=0
(25.55)
=
−i √1 √ 2 2
N δk,∓2π/N μ sin θα ei(φα −ν) ± eiχ sin θβ ei(φ,Mβ +ν) , 2 (25.56)
N −1
'& ' 1 i(k±2π/N )n & 0 μk,± = √ 1 −i 0 μα,1 ± eiχ μβ,1 e 2N n=0
N δk,∓2π/N μ sin θα ei(φα −ν) ∓ eiχ sin θβ ei(φβ +ν) . 0= 2 (25.57)
The second factor determines the distribution of intensity among the + and − states. In the limit of 2N -equivalent molecules, V = W , 2ν +φβ −φα = π+(π/N ), and θα = θβ and we have V + eik W = (1 + eik )V = eik/2 (e−ik/2 + e+ik/2 )V = eik/2 2V cos k/2 (25.58) hence χ = k/2,
U (k) = 2V cos k/2,
(25.59)
and (25.56) becomes
1 i √ √ 0 2 2
μk,± =
N δk,∓2π/N μ sin θei(φα −ν) 1 ∓ ei(k/2+π/N ) , 2
which is zero for the upper case (+ states) and √ 2Nδk,2π/N μ sin θei(φα −ν)
(25.60)
for the (−) states. Similarly (25.57) becomes √ 1 √ (μk,x − iμk,y ) = 2Nδk,+2π/N μ sin θe−i(φα −ν) 2
(25.61)
for the − states and zero for the + states. (For positive V , the + states are higher in energy than the − states.) For the z-component, the selection rule of k = 0 implies (Fig. 25.14) √ (25.62) μz,+ = 2Nδk,0 μ cos θ, μz,− = 0. In the LH2 complex, the transition dipoles of the two dimer halves are nearly antiparallel. The oscillator strengths of the N molecules are concentrated in the degenerate next to lowest transitions k = ±2π/N . This means that the lifetime of the optically allowed states will be reduced compared to the radiative lifetime of a monomer. In the LH2 complex, the transition dipoles have only a very small z-component. Therefore in a perfectly
Fig. 25.14. Dispersion relation of the dimer ring
symmetric structure, the lowest (k = 0) state is almost forbidden and has a longer lifetime than the optically allowed k = ±2π/N states. Due to the degeneracy of the k = ±2π/N states, the absorption of photons coming along the symmetry axis does not depend on the polarization.
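The two branches of (25.53) are easily evaluated numerically. The sketch below uses V_dim = 356.1 cm⁻¹ and W = V_βα,1 = 324.5 cm⁻¹ from Table 25.3, while the site energy E_α is an arbitrary offset chosen only for illustration.

```python
import numpy as np

# Two exciton branches of the dimerized ring, cf. (25.53):
# E_{k,+/-} = E_alpha +/- sqrt(V_dim^2 + W^2 + 2 V_dim W cos k)
N = 9
E_alpha = 12300.0                      # arbitrary site energy (cm^-1)
V_dim, W = 356.1, 324.5                # couplings as in Table 25.3 (cm^-1)

k = 2 * np.pi * np.arange(N) / N
U = np.sqrt(V_dim**2 + W**2 + 2 * V_dim * W * np.cos(k))
E_plus, E_minus = E_alpha + U, E_alpha - U

for kk, em, ep in zip(k, E_minus, E_plus):
    print(f"k = {kk:5.2f}: E- = {em:8.1f}  E+ = {ep:8.1f} cm^-1")
# for positive V the optically allowed in-plane transitions (k = +/- 2*pi/N) lie in the lower (-) branch
```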
25.2 Influence of Disorder So far, we considered a perfectly symmetrical arrangement of the chromophores. In reality, there exist deviations due to the protein environment and to low-frequency nuclear motion which leads to variations of the site energies, the coupling matrix elements, and the transition dipoles. Experimentally, a pair of mutually perpendicular states has been observed which were assigned to the strong k = ±2π/N states [96, 108]. This can be only understood if the degeneracy is lifted by some symmetry-breaking perturbation which gives as zero-order eigenstates linear combinations of the k = ±2π/N states (25.40). In the following, we neglect the effects of dimerization and study a ring of N -equivalent chromophores. 25.2.1 Symmetry-Breaking Local Perturbation We consider perturbation of the symmetry by a specific local interaction, for instance due to an additional hydrogen bond at one site. We assume that the excitation energy at the site n0 = 0 is modified by a small amount δE. The Hamiltonian H=
N −1 n=0
|nE0 n| + |0δE0| +
N −1
N −1
n=0 n =0,n =n
|nVnn n |
(25.63)
is transformed to the basis of k-states (25.30) to give 1 −ik n +ikn e n |H|n N
k |H|k =
n,n
δE 1 i(k−k )n 1 −ik n +ikn e E0 + e Vnn = + N n N N n n =n
δE = δk,k Ek + . N
(25.64)
Obviously, the energy of all k-states is shifted by the same amount: k|H|k = Ek +
δE . N
(25.65)
The degeneracy of the pairs | ± k is removed due to the interaction k|H| − k =
δE . N
(25.66)
The zero-order eigenstates are the linear combinations 1 |k± = √ (|k ± | − k) 2
(25.67)
with zero-order energies (Fig. 25.15) δE N
E(k) = Ek +
(nondegenerate states),
δE (degenerate states). (25.68) N Only the symmetric states are affected by the perturbation. The interaction between degenerate pairs is given by the matrix elements E(k− ) = Ek ,
E(k+ ) = Ek + 2
Fig. 25.15. Local perturbation of the symmetry. This figure shows the case of δE > 0
25.2 Influence of Disorder k+ |H|k+ = δk,k Ek + 2 k− |H|k− = δk,k Ek , k+ |H|k− = 0,
287
δE , N (25.69)
and the coupling to the nondegenerate k = 0 state is √ 2δE 0|H|k+ = , N 0|H|k− = 0.
(25.70)
Optical transitions to the |1± states are linearly polarized with respect to perpendicular axes. First-order intensity is transferred from the |1+ state to the |0 and the other |k+ states and hence the |1− state absorbs stronger than the |1+ state which is approximately √ 2δE 2δE |0 + |2+ + · · · |1+ + (25.71) N (E0 − E1 ) N (E2 − E1 ) Its intensity is reduced by a factor of
1+
δE 2 N2
1 2 (E0 −E1 )2
+
4 (E2 −E1 )2
+ ···
.
(25.72)
25.2.2 Periodic Modulation The local perturbation (25.64) has Fourier components H (Δk) = k|H |k + Δk =
1 inΔk e δEn N n
(25.73)
for all possible values of Δk. In this section, we discuss the most important Fourier components separately. These are the Δk = ±4π/N components which mix the k = ±2π/N states and the Δk = ±2π/N components which couple the k = ±2π/N to the k = 0, ±4π/N states and thus redistribute the intensity of the allowed states. A general modulation of the diagonal energies with Fourier components Δk = ±2π/N is given by δEn = δE cos(χ0 + 2nπ/N )
(25.74)
and its matrix elements are H (Δk) =
eiχ0 δE e−iχ0 δE δΔk,−2π/N + δΔk,2π/N . 2 2
(25.75)
Transformation to linear combinations of the degenerate pairs ' 1 & k > 0, |k, + = √ eiχ0 | − k + e−iχ0 |k 2 ' 1 & k>0 |k, − = √ −e−iχ0 | − k + eiχ0 |k 2
(25.76)
leads to two decoupled sets of states with δE 0|H |k, + = √ cos(2χ0 )δk,2π/N , 2 0|H |k, − = 0, k, −|H |k , + = 0, δE cos χ0 (δk −k,2π/N + δk −k,−2π/N ), k, −|H |k , − = 2 δE k, +|H |k , + = cos(3χ0 )(δk −k,2π/N + δk −k,−2π/N ). 2 The |k = 2π/N, ± states are linearly polarized (Fig. 25.16) √ eiχ0 eiχ0 μ2π/N,+,Θ = √ μ−2π/N,Θ + √ μ2π/N,Θ = N μ⊥ cos(Φ + χ0 − Θ), 2 2 √ e−iχ0 eiχ0 μ2π/N,−,Θ = − √ μ−2π/N,Θ + √ μ2π/N,Θ = i Nμ⊥ sin(Φ + χ0 − Θ). 2 2 (25.77) Let us now consider a C2 -symmetric modulation [109] (25.78)
δEn = δE cos(χ0 + 4πn/N )
Fig. 25.16. C1 -symmetric perturbation. Modulation of the diagonal energies by the perturbation δEn = δE cos(χ0 + 2πn/N ) does not mix the |k, + and |k, − states. In first order, the allowed k = ±2π/N states split into two states |k = 2π/N, ± with mutually perpendicular polarization and intensity is transferred to the |k = 0 and |k = 4π/N, ± states
Fig. 25.17. C2 -symmetric perturbation. Modulation of the diagonal energies by the perturbation δEn = δE cos(χ0 + 4πn/N ) splits the |k = ±2π/N states in zero order into a pair with equal intensity and mutually perpendicular polarization
with matrix elements k|H |k =
' δE & iχ0 e δk −k,−4π/N + e−iχ0 δk −k,4π/N . 2
(25.79)
The |k = ±2π/N states are mixed to give the zero-order states ' 1 & |2π/N, + = √ | − 2π/N + eiχ0 |2π/N , 2 ' 1 & |2π/N, − = √ | − 2π/N − eiχ0 |2π/N , 2
(25.80)
which are again linearly polarized (Fig. 25.17). A perturbation of this kind could be due to an elliptical deformation of the ring [109], but also to interaction with a static electric field, which is given up to second order by 1 δEn = −pn E + Et αn E, 2
(25.81)
where the permanent dipole moments are given by an expression similar4 to (25.24) ⎛ ⎞ p⊥ cos(Φ + n2π/N ) n pn = SN p0 = ⎝ p⊥ sin(Φ + n2π/N ) ⎠ (25.82) p and the polarizabilities transform as −n n α0 S N . αn = S N
With the field vector
4
(25.83)
⎛
⎞ E⊥ cos ξ E = ⎝ E⊥ sin ξ ⎠ , E
The angle Φ is different for permanent and transition dipoles.
(25.84)
we find − pn E = −p⊥ E⊥ cos(Φ + n2π/N − ξ)
(25.85)
and 1 1 t −n n E αn E = (Et SN )α0 (SN E) 2 2 E2 + = ⊥ αxx cos2 (n2π/N − ξ) + αyy sin2 (n2π/N − ξ) 2 + 2αxy sin(n2π/N − ξ) cos(n2π/N − ξ)} + E| E⊥ {αxz cos(n2π/N − ξ) + αyz sin(n2π/N − ξ)} +
E 2 2
αzz .
(25.86)
The quadratic term has Fourier components Δk = ±4π/N and can therefore act as an elliptic perturbation. Let us assume that the polarizability is induced by the coupling to charge transfer states like in the special pair dimer. Due to the molecular arrangement, the polarizability will be zero along the CN -symmetry axis and will have a maximum along a direction in the xy-plane. We denote the angle of this direction by η. The polarizability α0 simplifies to ⎞ ⎛ α⊥ cos2 η α⊥ sin η cos η 0 (25.87) α0 = ⎝ α⊥ sin η cos η α⊥ sin2 η 0 ⎠ 0 0 0 and the second-order term 1 t E 2 α⊥ + E αn E = ⊥ 1 + cos(2η − 2ξ + n4π/N ) 2 4 , + cos(−2η − 2ξ + n4π/N )
(25.88)
now has only Fourier components Δk = 0, ±4π/N . 25.2.3 Diagonal Disorder Let us now consider a static distribution of site energies [110–112]. The Hamiltonian H=
N −1 n=0
|n(E0 + δEn )n| +
N −1
N −1
|nVnn n |
(25.89)
n=0 n =0,n =n
contains energy shifts δEn which are assumed to have a Gaussian distribution function: 1 P (δEn ) = √ exp(−δEn2 /Δ2 ). (25.90) Δ π
Transforming to the delocalized states, the Hamiltonian becomes k |H|k =
1 −ik n +ikn e n |H|n N n,n
1 −ik n +ikn 1 i(k−k )n e (E0 + δEn ) + e Vnn = N n N n n =n
1 i(k−k )n = δk,k Ek + e δEn N n
(25.91)
and due to the disorder the k-states are mixed. The energy change of a nondegenerate state (for instance k = 0) is in lowest-order perturbation theory given by the diagonal matrix element δEk =
1 δEn N n
(25.92)
as the average of the local energy fluctuations. Obviously, the average is zero δEk = δEn = 0 and δEk2 =
1 δEn δEn . N2
(25.93)
(25.94)
n,n
If the fluctuations of different sites are uncorrelated δEn δEn = δn,n δE 2 , the width of the k-state δEk2 =
(25.95)
δE 2 N
√ is smaller by a factor 1/ N .5 If the fluctuations are fully correlated, on the other hand, the k-states have the same width as the site energies. For the pairs of degenerate states ±k, we have to consider the secular matrix (Δk = k − k = 2k) inΔk 1 1 δEn n δEn ne N N H1 = , (25.96) −inΔk 1 1 δEn ne n δEn N N which has the eigenvalues δE±k =
% 1 1 inΔk δEn ± e δEn e−in Δk δEn . N n N n n
5
This is known as exchange or motional narrowing.
(25.97)
Obviously, the average is not affected δE+k + δE−k =0 2
(25.98)
and the width is given by 2 2 δE+k + δE−k 1 i(n−n )Δk 1 δEn δEn + 2 e δEn δEn = 2 2 N N nn
=2
nn
δE 2 . N
(25.99)
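The 1/√N narrowing of the k-state width can be illustrated by a small Monte Carlo experiment. Ring size, coupling, and disorder strength below are arbitrary, and the disorder is drawn as a Gaussian of standard deviation Δ (a convention that differs slightly from the normalization used in (25.90)).

```python
import numpy as np

# Monte Carlo check of exchange (motional) narrowing: with uncorrelated Gaussian site disorder
# of standard deviation Delta, a delocalized k-state acquires a width ~ Delta/sqrt(N).
rng = np.random.default_rng(1)
N, V, Delta = 18, -350.0, 100.0
k_vec = np.exp(1j * 2 * np.pi * np.arange(N) / N) / np.sqrt(N)   # k = 2*pi/N exciton state

E_k = []
for _ in range(5000):
    dE = rng.normal(0.0, Delta, N)                    # diagonal disorder realization
    H = np.diag(dE)
    for n in range(N):
        H[n, (n + 1) % N] = H[(n + 1) % N, n] = V
    E_k.append(np.real(k_vec.conj() @ H @ k_vec))     # first-order energy of the k-state
print(np.std(E_k), Delta / np.sqrt(N))                # both close to Delta/sqrt(N)
```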
25.2.4 Off-Diagonal Disorder Consider now fluctuations also of the coupling matrix elements, for instance due to fluctuations of orientation and distances. The Hamiltonian contains an additional perturbation Hkk =
1 −ik n +ikn e δVnn . N
(25.100)
nn
For uncorrelated fluctuations with the properties6 δVnn = 0,
(25.101)
2 δVnn δVmm = (δnm δn m + δnm δn m − δnm δn m δnn )δV|n−n | , (25.102)
it follows
Hkk = 0
(25.103)
1 ik(n−m)+ik (m −n ) e δVnn δVmm N2 n n m m 1 1 i(k+k )(n−n ) 2 2 = 2 δVnn δVnn e + N N2
2 |Hkk | =
nn
δE 2 1 2 = δV|m| [1 + cos (m(k + k ))] . (25.104) + N N m=0
If the dominant contributions come from the fluctuation of site energies and nearest neighbor couplings, this simplifies to 2 |Hkk | = 6
1 2 (δE 2 + 2δV±1 (1 + cos(k + k )). N
(25.105)
We assume here that the fluctuation amplitudes obey the CN -symmetry.
For the nondegenerate states, the width is δEk2 =
2 δE 2 + 4δV±1 N
(25.106)
and for the degenerate pairs the eigenvalues of the secular matrix become δE±k = Hkk ± |Hk,−k | . Again the average of the two is not affected δE+k + δE−k = 0, 2 whereas the width now is given by 2 2 δE+k + δE−k 2 2 + Hk,−k = Hkk 2 ' 2 & 2 = δE 2 + δV±1 (3 + cos(2k)) . N
(25.107)
(25.108)
(25.109)
Off-diagonal disorder has a similar effect as diagonal disorder, but it influences the optically allowed k = ±2π/N states stronger than the states in the center of the exciton band. More sophisticated investigations, including also partially correlated disorder and disorder of the transition dipoles, can be found in the literature [110–115].
Problems

25.1. Photosynthetic Reaction Center
The "special pair" in the photosynthetic reaction center of Rps. viridis is a dimer of two bacteriochlorophyll molecules whose centers of mass have a distance of 7 Å. The transition dipoles of the two molecules include an angle of 139°.
Calculate energies and intensities of the two dimer bands from a simple exciton model −Δ/2 V H= V Δ/2 as a function of the energy difference Δ and the interaction V . The Hamiltonian is represented here in a basis spanned by the two localized excited states |A∗ B and |B∗ A.
25.2. Light-Harvesting Complex The circular light-harvesting complex of the bacterium Rps. acidophila consists of nine bacteriochlorophyll dimers in a C9 -symmetric arrangement. The two subunits of a dimer are denoted as α and β. The exciton Hamiltonian with nearest-neighbor and next-to-nearest-neighbor interactions only is (with the index n taken as modulo 9) H=
9
{Eα |n; αn; α| + Eβ |n; βn; β|
n=1
+ Vdim (|n; αn; β| + |n; βn; α|) + Vβα,1 (|n; αn − 1; β| + |n; βn + 1; α|) + Vαα,1 (|n; αn + 1; α| + |n; αn − 1; α|) + Vββ,1 (|n; βn + 1; β| + |n; βn − 1; β|)} . Transform the Hamiltonian to delocalized states |k; α =
9 1 ikn e |n; α, 3 n=1
k = l 2π/9,
|k; β =
9 1 ikn e |n; β, 3 n=1
l = 0, ±1, ±2, ±3, ±4.
(a) Show that states with different k-values do not interact k , α(β)|H|k, α(β) = 0 if
k = k .
(b) Find the matrix elements Hαα (k) = k; α|H|k; α,
Hββ (k) = k; β|H|k; β,
Hαβ (k) = k; α|H|k; β.
(c) Solve the eigenvalue problem Hαα (k) Hαβ (k) Cα Cα = E . 1,2 (k) ∗ Hαβ (k) Hββ (k) Cβ Cβ (d) The transition dipole moments are given by ⎞ ⎛ sin θ cos(φα − ν + nφ) μn,α = μ ⎝ sin θ sin(φα − ν + nφ) ⎠ , cos θ ⎛ ⎞ sin θ cos(φβ + ν + nφ) μn,β = μ ⎝ sin θ sin(φβ + ν + nφ) ⎠ , cos θ ν = 10.3◦ , φα = −112.5◦, φβ = 63.2◦ , and θ = 84.9◦ . Determine the optically allowed transitions from the ground state and calculate the relative intensities.
25.3. Exchange Narrowing Consider excitons in a ring of chromophores with uncorrelated diagonal disorder. Show that in lowest order, the distribution function of Ek is Gaussian. Hint. Write the distribution function as δEn P (δEk = X) = dδE1 dδE2 · · · P (δE1 )P (δE2 ) · · · δ X − N and replace the δ-function with a Fourier integral.
26 Ultrafast Electron Transfer Processes in the Photosynthetic Reaction Center
In this chapter, we discuss a model for ultrafast electron transfer processes in biological systems where the density of vibrational states is very high. The vibrational modes which are coupled to the electronic transition¹ are treated within the model of displaced parallel (17.2) harmonic oscillators (Chap. 19). This gives the following approximation to the diabatic potential surfaces
\[ E_i = E_i^0 + \sum_r \frac{\omega_r^2}{2}\,q_r^2, \qquad E_f = E_f^0 + \sum_r \frac{\omega_r^2}{2}\,(q_r-\Delta q_r)^2, \tag{26.1} \]
and the energy gap
\[ E_f - E_i = \Delta E + \sum_r \frac{(\omega_r\Delta q_r)^2}{2} - \sum_r \omega_r^2\,\Delta q_r\, q_r. \tag{26.2} \]
The total reorganization energy is the sum of all partial reorganization energies
\[ E_R = \sum_r \frac{(\omega_r\Delta q_r)^2}{2} = \sum_r g_r^2\,\hbar\omega_r \tag{26.3} \]
with the vibronic coupling strength
\[ g_r = \sqrt{\frac{\omega_r}{2\hbar}}\,\Delta q_r. \tag{26.4} \]
The golden rule expression gives the rate for nonadiabatic electron transfer in analogy to (18.8) (Fig. 26.1)

¹ Which have different equilibrium positions in the two states |i⟩ and |f⟩.
Fig. 26.1. Electron transfer processes in the reaction center of bacterial photosynthesis. After photoexcitation of the special pair dimer (red), an electron is transferred to a bacteriopheophytine (blue) in a few picoseconds via another bacteriochlorophyll (green). The electron is then transferred on a longer time scale to the quinone QA and finally to QB via a nonheme ferrous ion. The electron hole in the special pair is filled by an electron from the hemes of the cytochrome c (orange). After a second excitation of the dimer, another electron is transferred via the same route to the semiquinone, forming the doubly reduced ubiquinol, which then diffuses freely within the membrane. This figure was created with Rasmol [106] based on the structure of Rps. viridis [90–93] from the protein data bank [104, 105]
\[ k_{ET} = \frac{2\pi V^2}{\hbar}\sum_{\{n_r\},\{n'_r\}} P(\{n_r\})\,\prod_r \big(FC_r(n_r,n'_r)\big)^2\,\delta\Big(\Delta E + \sum_r \hbar\omega_r(n_r-n'_r)\Big) = \frac{2\pi V^2}{\hbar}\,FCD(\Delta E), \tag{26.5} \]
where the Franck–Condon-weighted density of states FCD(ΔE) can be expressed with the help of the time correlation formalism (18.2) as
\[ FCD(\Delta E) = \frac{1}{2\pi\hbar}\int dt\, e^{-it\Delta E/\hbar}\,F(t) \tag{26.6} \]
with
\[ F(t) = \exp\Big\{\sum_r g_r^2\big[(e^{i\omega_r t}-1)(\bar n_r+1) + (e^{-i\omega_r t}-1)\bar n_r\big]\Big\}. \tag{26.7} \]
27 Proton Transfer in Biomolecules
Intraprotein proton transfer is perhaps the most fundamental flux in the biosphere [120]. It is essential for such important processes as photosynthesis, respiration, and ATP synthesis [121]. Within a protein, protons appear only in covalently bound states. Here, proton transfer is essentially the motion along a hydrogen bond, for instance peptide or DNA H-bonds (Fig. 27.1) or the more complicated pathways in proton-pumping proteins.
a
b O
C N
O
H
H
C N
H
O
C N
H
O
H
N
N
H N
C
N N
H
O
H
O
N
C N
H
O
C
N
N N
H
Fig. 27.1. Proton transfer in peptide H-bonds (a) and in DNA H-bonds (b)
The energy barrier for this reaction depends strongly on the conformation of the reaction complex, with respect to both its geometry and its protonation state. Due to the small mass of the proton, the proton transfer process has to be described on the basis of quantum mechanics.
27.1 The Proton Pump Bacteriorhodopsin Rhodopsin proteins collect the photon energy in the process of vision. Since the end of the 1950s, it has been recognized that the photoreceptor molecule retinal undergoes a structural change, the so-called cis–trans isomerization upon photoexcitation (G. Wald, R. Granit, H.K. Hartline, Nobel prize for
Fig. 27.2. X-ray structure of bR. The most important residues and structural waters are shown, together with possible pathways for proton transfer. Numbers show O–O and O–N distances in Å. Coordinates are from the structure 1c3w [122] in the protein data bank [104, 105]. Molekel graphics [89]
medicine 1961). Rhodopsin is not photostable and its spectroscopy remains difficult. Therefore, much more experimental work has been done on its bacterial analogue bacteriorhodopsin, which performs a closed photocycle (Figs. 27.2–27.4). Bacteriorhodopsin is the simplest known natural photosynthetic system. Retinal is covalently bound to a lysine residue, forming the so-called protonated Schiff base. This form of the protein absorbs sunlight very efficiently over a large spectral range (480–360 nm). After photoinduced isomerization, a proton is transferred from the Schiff base to the negatively charged ASP85 (L → M). This induces a large blue shift. In wild-type bR, under physiological conditions, a proton is released to the extracellular medium on the same time scale of 10 μs. The proton release group is believed to consist of GLU194, GLU204, and some structural waters. During the M state, a large rearrangement takes place on the cytoplasmic side, which is not seen spectroscopically and which subsequently makes it possible to reprotonate the Schiff base from ASP96 (M → N). ASP96 is then reprotonated from the cytoplasmic side and the retinal reisomerizes thermally to the all-trans configuration (N → O). Finally, a proton is transferred from ASP85 via ARG82 to the release group and the ground state bR is recovered.
Fig. 27.3. The photocycle of bacteriorhodopsin [123]. Different states are labeled by the corresponding absorption maximum (nm): BR570 → (light, ~4 ps) → J/K590 → (~1 μs) → L550 → (~10 μs) → M410 → (~5 ms) → N560 → (~5 ms) → O640 → (~5 ms) → BR570
Fig. 27.4. Photoisomerization of bR
27.2 Born–Oppenheimer Separation Protons move on a faster time scale than the heavy nuclei but are slower than the electrons. Therefore, we use a double Born–Oppenheimer approximation for the wave function1 1
The Born–Oppenheimer approximation and the nonadiabatic corrections to it are discussed more systematically in Sect. 17.1.
(27.1)
ψ(r, RP , Q)χ(Rp , Q)Φ(Q).
The protonic wave function χ depends parametrically on the coordinates Q of the heavy atoms and the electronic wave function ψ depends parametrically on all nuclear coordinates. The Hamiltonian consists of the kinetic energy contributions and a potential energy term: (27.2)
H = Tel + Tp + TN + V.
The Born–Oppenheimer approximation leads to the following hierarchy of equations. First, all nuclear coordinates (Rp , Q) are fixed and the electronic wave function of the state s is obtained from (Tel + V (r; Rp , Q))ψs (r; Rp , Q) = Eel,s (Rp , Q)ψs (r; Rp , Q).
(27.3)
In the second step, only the heavy atoms (Q) are fixed but the action of the kinetic energy of the proton on the electronic wave function is neglected. Then the wave function of proton state n obeys (Tp + Eel,s (Rp ; Q))χs,n (Rp ; Q) = εs,n (Q)χs,n (Rp ; Q).
(27.4)
The electronic energy Eel (Rp , Q) plays the role of a potential energy surface for the nuclear motion. It is shown schematically in Fig. 27.5 for one proton
Fig. 27.5. Proton transfer potential. The figure shows schematically the potential which, as a function of the proton coordinate x, has two local minima corresponding to the structures · · · N–H · · · O · · · and · · · N · · · H–O · · · and which is modulated by a coordinate y representing the heavy atoms
Fig. 27.6. Proton tunneling. The double well of the proton along the H-bond is shown schematically for different configurations of the reaction complex
coordinate (for instance, the O–H bond length) and one promoting mode of the heavy atoms which modulates the energy gap between the two minima. For fixed positions Q of the heavy nuclei, Eel (Rp , Q) has a double-well structure as shown in Fig. 27.6. The rate of proton tunneling depends on the configuration and is most efficient if the two localized states are in resonance. Finally, the wave function of the heavy nuclei in state α is the approximate solution of (27.5) (TN + εs,n (Q))Φs,n,α (Q) = Es,n,α Φs,n,α (Q).
27.3 Nonadiabatic Proton Transfer (Small Coupling) Initial and final states (s, n) and (s , m) are stabilized by the reorganization of the heavy nuclei. The transfer occurs whenever fluctuations bring the two states in resonance (Fig. 27.7). We use the harmonic approximation a (0) 2 εs,n (Q) = ε(0) s,n + (Q − Qs ) + · · · = εs,0 (Q) + (εs,n − εs,0 ) 2
(27.6)
similar to nonadiabatic electron transfer theory (Chap. 16). The activation free energy for the (s, 0) → (s , 0) transition is given by ΔG‡00 =
(ΔG0 + ER )2 4ER
(27.7)
and for the transition (s, n) → (s , m), we have approximately ΔG‡nm = ΔG‡0 + (εsm − εs0 ) − (εs n − εs 0 ) (ΔG0 + ER + εsm − εs n − εs0 + εs 0 )2 . = 4ER The partial rate knm is given in analogy to the ET rate (16.23) by ω P ΔG‡nm κnm exp − . knm = 2π kT
(27.8)
(27.9)
Fig. 27.7. Harmonic model for proton transfer
The interaction matrix element will be calculated using the BO approximation and treating the heavy nuclei classically Vsns m = dr dRp ψs (r, RP , Q)χsn (Rp , Q)V χs m (Rp , Q)ψs (r, RP , Q) dr ψs (r, RP , Q)V ψs (r, RP , Q) χsn (Rp , Q)χs m (Rp , Q) = dRp el = dRp Vss (27.10) (Rp , Q) χsn (Rp , Q)χs m (Rp , Q). The electronic matrix element el Vss ψs (r, Rp , Q)V ψs (r, Rp , Q)dr =
(27.11)
is expanded around the equilibrium position of the proton Rp∗ el el ∗ Vss (Rp , Q) ≈ Vss (RP , Q) + · · ·
(27.12)
and we have approximately el ∗ nm Vsns m ≈ Vss (RP , Q)Sss (Q)
(27.13)
with the overlap integral of the protonic wave functions (Franck–Condon factor) nm Sss (Q) = χsn χs m dRp . (27.14)
27.4 Strongly Bound Protons The frequencies of O–H or N–H bond vibrations are typically in the range of 3,000 cm−1 which is much larger than thermal energy. If, on the other hand, the barrier for proton transfer is on the same order, only the transition between the vibrational ground states is involved. For small electronic matrix element V el , the situation is very similar to electron transfer described by Marcus
27.4 Strongly Bound Protons
theory (Chap. 16) and the reaction rate is given by Borgis and Hynes [124] for symmetrical proton transfer (ER ΔG0 , ΔG‡00 = ER /4):
ER 1 2πV02 exp − . (27.15) k= 4πkT ER 4kT In the derivation of (27.15), the Condon approximation has been used which corresponds to application of (27.12) and approximation of the overlap integral (27.14) by a constant value. However, for certain modes (which influence the distance of the two potential minima), the modulation of the coupling matrix element can be much stronger than in the case of electron transfer processes due to the larger mass of the proton. The dependence on the transfer distance (0) (0) δQ = Qs − Qs is approximately exponential: Vs0s 0 ∼ e−αδQ ,
(27.16)
where typically α ≈ 25–35A−1 , whereas for electron transfer processes typical values are around α ≈ 0.5–1A−1 . Borgis and Hynes [124] found the following result for the rate when the low-frequency substrate mode with frequency Ω and reduced mass M is thermally excited
ΔGtot 1 2πV 2 exp − (27.17) k= χSC , 4πkT 4ΔGtot kT where the average coupling square can be expanded as # 2 $ α2 Ω 4kT Ω 2 2 V = V0 exp . + +O 2M Ω Ω 3kT kT
(27.18)
The activation energy is ΔGtot = ΔG‡ +
2 α2 8M
and the additional factor χSC is given by γSC kT χSC = exp γSC 1 − 4ΔGtot with γSC = where D=
αD , M Ω2
∂ΔH (Q0 ) ∂Q
(27.19)
(27.20)
(27.21)
(27.22)
measures the modulation of the energy difference by the promoting mode Ω.
27.5 Adiabatic Proton Transfer If V el is large and the tunnel factor approaches unity (for s = s ), then the application of the low-friction result (8.29) gives ad ω ΔG‡nm exp − , (27.23) knm = 2π kT where the activation energy has to be corrected by the tunnel splitting 1 ad = ΔG‡nm − Δεpnm . ΔG‡nm 2
(27.24)
Part VIII
Molecular Motor Models
28 Continuous Ratchet Models
A particular class of proteins (molecular motors) can move linearly along its designated track, against an external force, by using chemical energy. The movement of a single protein within a cell is subject to thermal agitation from its environment and is, therefore, a Brownian motion with drift. The following discussion is based on the pioneering work by Jülicher and Prost [125–128].
28.1 Transport Equations Treating the center of mass of the motor, as subject to Brownian motion within a periodic potential, we apply a Smoluchowski equation with periodic boundary conditions: kB T ∂ 2 1 ∂ ∂U ∂ W (x, t) = W (x, t) W (x, t) + ∂t mγ ∂x2 mγ ∂x ∂x 2 kB T ∂ 1 ∂ (F (x)W (x, t)) , (28.1) = W (x, t) − mγ ∂x2 mγ ∂x U (x + L) = U (x), (28.2) which can be written in terms of the probability flux ∂ ∂ W (x, t) = − S(x, t), ∂t ∂x S(x, t) = −
1 kB T ∂ W (x, t) + F (x)W (x, t). mγ ∂x mγ
(28.3) (28.4)
By comparison with the continuity equation for the density ∂ ρ = −div(ρv), ∂t
(28.5)
we find
∂ ∂ ∂W = − S = − (W v) ∂t ∂x ∂x hence the drift velocity at x is vd (x) =
(28.6)
S(x) W (x)
(28.7)
and the average velocity of systems in the interval 0, L is L L 0 W (x)vd (x)dx 0 S(x)dx v = = L . L 0 W (x)dx 0 W (x)dx
(28.8)
Equation (28.6) shows that for a stationary solution, the flux has to be constant. However, the stationary distribution is easily found by integration and has the form mγS x U (x )/kB T e dx . Wst = e−U (x)/kB T C − (28.9) kB T 0 From the periodic boundary condition, we find L mγ −U (0)/kB T U (x )/kB T C− S e dx Wst (L) = e kB T 0 = Wst (0) = e−U(0)/kB T C,
(28.10)
which shows that in a stationary state, there is zero flux (Fig. 28.1) S=0
(28.11)
v = 0.
(28.12)
and no average velocity If an external force is applied to the system U (x) → U (x) − xFext ,
(28.13)
Fig. 28.1. Periodic potential with external force
then the stationary solution is mγ S e(U (x)−xF )/kB T dx . W (x) = e(xF −U (x))/kB T C − kB T
(28.14)
Assuming periodicity for W (x) W (0) = W (L), Ce
−U (0)/kB T
(LF−U (0))/kB T
=e
mγ C− S kB T
L
e
(U (x)−xF )/kB T
dx ,
0
(28.15) there will be a net flux which is a function of the external force: S=
eLF/kB T − 1 kB T W (0) L = S(F ). mγ e(U (x)−U (0)−(x−L)F )/kB T dx
(28.16)
0
From Sst = −
kB T ∂ 1 ∂ Wst (x) − Wst (x) (U (x) − xFext ) , mγ ∂x mγ ∂x
we find mγS
1 ∂ = − (kB T ln W (x) + U + Uext), W (x) ∂x
(28.17)
(28.18)
which can be interpreted in terms of a chemical potential [129]
as
μ(x) = kB T ln W (x) + U (x) + Uext (x)
(28.19)
∂ mγS μ(x) = − = −mγvd (x), ∂x W (x)
(28.20)
where the right-hand side gives the energy which is dissipated by a particle moving in a viscous medium.
28.2 Chemical Transitions We assume that the chemistry of the motor can be described by a number m of discrete states or conformations i = 1, . . . , m. Many biochemical models focus on a model with four states (Fig. 28.2). Transitions between these states are fast compared to the motion of the motor and will be described by chemical kinetics kij i j. kji
(28.21)
Fig. 28.2. Chemical cycle of the four-state model
The motion in each conformation will be described by a Smoluchowski equation in a corresponding potential Ui (x) which is assumed to be periodic and asymmetric Ui (x + L) = Ui (x), Ui (x) = Ui (−x) (28.22) and where the potential of an external force may be added Ui → Ui (x) − xFext .
(28.23)
Together, we have the equations ∂ ∂ Wi (x, t) + Si (x, t) = (kji (x)Wi (x, t) − kij (x)Wj (x, t)) , ∂t ∂x
(28.24)
i=j
Si (x, t) = −
kB T ∂ 1 ∂ Wi (x, t) − Wi (x, t) Ui (x). mγ ∂x γm ∂x
(28.25)
28.3 The Two-State Model

We discuss in the following a simple two-state model which has only a small number of parameters but which is very useful to understand the general behavior. The model is a system of two coupled differential equations:
\[ \frac{\partial}{\partial t}W_1 + \frac{\partial}{\partial x}S_1 = -k_{12}W_1 + k_{21}W_2, \qquad \frac{\partial}{\partial t}W_2 + \frac{\partial}{\partial x}S_2 = -k_{21}W_2 + k_{12}W_1. \]
(28.26)
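The coupled equations (28.26) can be integrated directly on a grid. The following sketch propagates a crude flashing-ratchet caricature (sawtooth potential in state 1, flat potential in state 2, constant switching rates) with invented parameters; it is not the model analyzed in the text, only a minimal numerical illustration of how a nonzero drift arises once detailed balance is broken.

```python
import numpy as np

# Explicit finite-difference propagation of the two-state ratchet (28.26), periodic boundaries.
L, M = 1.0, 100
dx = L / M
x = np.arange(M) * dx
D = 1.0                                    # kT/(m*gamma), arbitrary units
a = 0.8                                    # sawtooth asymmetry
U1 = np.where(x < a, 5.0 * x / a, 5.0 * (L - x) / (L - a))
F1 = -(np.roll(U1, -1) - np.roll(U1, 1)) / (2 * dx)      # periodic force -dU1/dx
F2 = np.zeros(M)                                         # state 2 is flat
k12 = k21 = 100.0                          # constant switching rates (break detailed balance)

def dWdt(W, F):                            # transport part of (28.24)/(28.25), m*gamma = 1
    lap = (np.roll(W, -1) - 2 * W + np.roll(W, 1)) / dx**2
    drift = (np.roll(F * W, -1) - np.roll(F * W, 1)) / (2 * dx)
    return D * lap - drift

W1 = np.full(M, 0.5 / L); W2 = np.full(M, 0.5 / L)
dt = 2e-5                                  # respects the explicit stability limit dx^2/(2D)
for _ in range(100000):
    dW1 = dWdt(W1, F1) - k12 * W1 + k21 * W2
    dW2 = dWdt(W2, F2) - k21 * W2 + k12 * W1
    W1, W2 = W1 + dt * dW1, W2 + dt * dW2

S1 = -D * (np.roll(W1, -1) - np.roll(W1, 1)) / (2 * dx) + F1 * W1   # probability fluxes
S2 = -D * (np.roll(W2, -1) - np.roll(W2, 1)) / (2 * dx) + F2 * W2
print("mean drift velocity ~", dx * np.sum(S1 + S2))                # cf. (28.8), total W normalized
```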
28.3.1 The Chemical Cycle The energy source is the hydrolysis reaction ATP → ADP+P.
(28.27)
Fig. 28.3. Simplified chemical cycle
Fig. 28.4. Hierarchy of potentials
The motor protein undergoes a chemical cycle which can be divided into four states. In a simplified two-state model, this cycle has to be further divided into two substeps. One option is to combine ATP binding and hydrolysis (Fig. 28.3) α1 M+ATP M−ADP−P α2
(28.28)
as well as the release of ADP and P β1 M+ADP+P M−ADP−P. β2
(28.29)
Since each cycle consumes the energy from hydrolysis of one ATP, we consider a sequence of states characterized by the motor state and the number of total cycles (Fig. 28.4) α1 β2 α1 · · · M1,0 M2,1 M1,1 M2,2 · · · (28.30) α2 β1 α2 The potential energies Us,n and Us,n+m only differ by a vertical shift corresponding to the consumption of m energy units Δμ = μATP − μADP − μP
(28.31)
and therefore Us,n = Us − nΔμ.
(28.32)
Detailed balance of the chemical reactions implies α1 = e(U1,n −U2,n+1 )/kB T = e(U1 −U2 +Δμ)/kB T , α2 β1 = e(U1,n −U2,n )/kB T = e(U1 −U 2 )/kB T . β2
(28.33) (28.34)
The time evolution of the motor is described by a hierarchy of equations ∂ W1,n + ∂t ∂ W2,n + ∂t
∂ S1,n = −(α1 + β1 )W1,n + α2 W2,n+1 + β2 W2,n , ∂x ∂ S2,n = −(α2 + β2 )W2,n + β1 W1,n + α1 W1,n−1 . ∂x
Obviously, we can sum over the hierarchy Ws,n , Ss = Ss,n Ws = n
(28.35)
(28.36)
n
to obtain the two-state model: ∂ ∂ W1 + S1 = −(α1 + β1 )W1 + (α2 + β2 )W2 , ∂t ∂x ∂ ∂ W2 + S2 = −(α2 + β2 )W2 + (α1 + β1 )W1 , ∂t ∂x where the transition rates are superpositions: k21 = α2 + β2 , k12 = α1 + β1 = α2 e(U1 −U2 +Δμ)/kB T + β2 e(U 1 −U2 )/kB T .
(28.37) (28.38)
(28.39) (28.40)
If Δμ = 0, then these rates obey the detailed balance k12 (x) = e−(U2 −U1 )/kB T , k21 (x)
(28.41)
which indicates that the system is not chemically driven and is only subject to thermal fluctuations. The steady state is then again given by Wi = N e−Ui (x)/kB T
(28.42)
for which the flux is zero. For Δμ > 0, the system is chemically driven and spontaneous motion with v = 0 can occur. However, for a symmetric system with U1,2 (−x) = U1,2 (x), the steady-state distributions are also symmetric
Fig. 28.5. Symmetric system
and the velocity v = −
1 mγ
L 0
∂U1 ∂U2 + W2 dx W1 ∂x ∂x
vanishes, since the forces are antisymmetric functions (Fig. 28.5). The ATP hydrolysis rate L r= dx(α1 W1 − α2 W2 ),
(28.43)
(28.44)
0
on the other hand, is generally not zero in this case since α1,2 are symmetric functions. Therefore, two conditions have to be fulfilled for spontaneous motion to occur: detailed balance has to be broken (Δμ = 0) and the system must have polar symmetry. 28.3.2 The Fast Reaction Limit If the chemical reactions are faster than the diffusive motion, then there is always a local equilibrium k21 (x) W1 (x, t) = W2 (x, t) k12 (x)
(28.45)
and the total probability W = W1 + W2 obeys the equation 1 ∂ ∂ kB T ∂ 2 ∂ W+ W = (U1 W1 + U2 W2 ) ∂t mγ ∂x2 mγ ∂x ∂x 1 ∂ k12 kB T ∂ 2 ∂ k21 W+ U1 + U2 W , = mγ ∂x2 mγ ∂x ∂x k12 + k21 k12 + k21 (28.46) which describes motion in the average potential. 28.3.3 The Fast Diffusion Limit If the diffusive motion is faster than the chemical reactions, the motor can work as a Brownian ratchet. In the time between chemical reactions, equilibrium is established in both states
Fig. 28.6. Brownian ratchet
Fig. 28.7. Fast diffusion limit far from equilibrium
Wst0 = c1,2 e−U1,2 (x)/kB T .
(28.47)
If the barrier height is large and the equilibrium distribution is confined around the minima, the diffusion process can be approximated by a simpler rate process. Every ATP consumption corresponds to a transition to the second state. Then the system moves to one of the two neighboring minima with probabilities $p_2$ and $1-p_2$, respectively. From here it makes a transition back to the first state. Finally, it moves right or left with probability $p_1$ and $1-p_1$, respectively (Fig. 28.6). During this cycle, the system proceeds in a forward direction with probability (Fig. 28.7)

$$p_+ = p_1 p_2, \quad (28.48)$$

backwards with probability

$$p_- = (1-p_1)(1-p_2), \quad (28.49)$$

or it returns to its initial position with probability

$$p_0 = p_2(1-p_1) + p_1(1-p_2). \quad (28.50)$$

The average displacement per consumed ATP is

$$\langle x\rangle \approx L\,(p_+ - p_-) = L\,(p_1 + p_2 - 1) \quad (28.51)$$

and the average velocity is

$$v = r\,\langle x\rangle. \quad (28.52)$$

In the presence of an external force, the efficiency can thus be estimated as

$$\eta = \frac{-F_{ext}\langle x\rangle}{\Delta\mu} = \frac{-F_{ext}L}{\Delta\mu}\,(p_1 + p_2 - 1). \quad (28.53)$$
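The cycle statistics (28.48)-(28.53) are easy to check numerically. The following minimal Python sketch draws the two half-steps of each cycle and compares the simulated displacement per ATP with $L(p_1+p_2-1)$; the values of p1, p2, L, r, Fext and dmu are arbitrary examples, not parameters from the text:

import numpy as np

# Monte Carlo sketch of the fast-diffusion hopping cycle, Eqs. (28.48)-(28.53).
# p1, p2, L, r, Fext and dmu are arbitrary example values.
rng = np.random.default_rng(1)
p1, p2, L = 0.7, 0.8, 1.0
cycles = 200_000

# each cycle: half-step after excitation to state 2 (forward with probability p2),
# then half-step after the return to state 1 (forward with probability p1)
step2 = np.where(rng.random(cycles) < p2, L/2, -L/2)
step1 = np.where(rng.random(cycles) < p1, L/2, -L/2)
dx = step1 + step2                       # per-cycle displacement: +L, 0 or -L

print("simulated <x> per ATP :", dx.mean())
print("formula L(p1 + p2 - 1):", L*(p1 + p2 - 1))

r, Fext, dmu = 50.0, -0.1, 1.0           # hydrolysis rate, load force, chemical drive
print("v   =", r*dx.mean())              # Eq. (28.52)
print("eta =", -Fext*dx.mean()/dmu)      # Eq. (28.53)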
28.4 Operation Close to Thermal Equilibrium

For constant $T$ and $\mu_i$, the local heat dissipation is given by

$$T\sigma(x) = \sum_k\left(F_{ext} - \frac{\partial U_k}{\partial x}\right)S_k + r(x)\,\Delta\mu. \quad (28.54)$$

In a stationary state, the total heat dissipation is

$$\Pi = \int T\sigma(x)\,dx = F_{ext}\,v + r\,\Delta\mu, \quad (28.55)$$

since

$$\int S\,\frac{\partial U}{\partial x}\,dx = -\int U(x)\,\frac{\partial S}{\partial x}\,dx = 0. \quad (28.56)$$

Under the action of an external force $F_{ext}$, the velocity and the hydrolysis rate are functions of the external force and $\Delta\mu$:
$$v = v(F_{ext},\Delta\mu), \qquad r = r(F_{ext},\Delta\mu). \quad (28.57)$$

Close to equilibrium, linearization gives

$$v = \lambda_{11}F_{ext} + \lambda_{12}\Delta\mu, \qquad r = \lambda_{21}F_{ext} + \lambda_{22}\Delta\mu. \quad (28.58)$$

The rate of energy dissipation is given by mechanical plus chemical work

$$\Pi = F_{ext}v + r\Delta\mu = \lambda_{11}F_{ext}^2 + \lambda_{22}\Delta\mu^2 + (\lambda_{12}+\lambda_{21})F_{ext}\Delta\mu, \quad (28.59)$$
which must be always positive due to the second law of thermodynamics. This is the case if

$$\lambda_{11} > 0, \qquad \lambda_{22} > 0, \qquad \lambda_{11}\lambda_{22} - \lambda_{12}\lambda_{21} > 0. \quad (28.60)$$

The efficiency of the motor can be defined as

$$\eta = \frac{-F_{ext}v}{r\Delta\mu} = \frac{-\lambda_{11}F_{ext}^2 - \lambda_{12}F_{ext}\Delta\mu}{\lambda_{22}\Delta\mu^2 + \lambda_{21}F_{ext}\Delta\mu}$$

or shorter

$$\eta = -\frac{\lambda_{11}a^2 + \lambda_{12}a}{\lambda_{22} + \lambda_{21}a}, \qquad a = \frac{F_{ext}}{\Delta\mu}. \quad (28.61)$$

It vanishes for zero force but also for $v = 0$ and has a maximum at (Figs. 28.8 and 28.9)

$$a = \frac{\sqrt{1-\lambda_{12}\lambda_{21}/(\lambda_{11}\lambda_{22})}-1}{\lambda_{21}/\lambda_{22}}, \qquad \Lambda = \frac{\lambda_{12}\lambda_{21}}{\lambda_{11}\lambda_{22}}, \qquad \eta_{max} = \frac{\bigl(1-\sqrt{1-\Lambda}\bigr)^2}{\Lambda}. \quad (28.62)$$
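As a quick consistency check of (28.61) and (28.62), the following sketch scans $\eta(a)$ on a grid for one assumed set of Onsager coefficients (chosen only to satisfy (28.60)) and compares the grid maximum with the closed-form expressions:

import numpy as np

# Numerical check of the efficiency maximum (28.61)-(28.62); the lambda values
# are arbitrary example coefficients satisfying (28.60).
l11, l12, l21, l22 = 1.0, 0.6, 0.6, 1.0

def eta(a):                                   # Eq. (28.61), a = Fext/dmu
    return -(l11*a**2 + l12*a)/(l22 + l21*a)

a_grid = np.linspace(-0.99, 0.0, 100_001)     # between stall and zero force
i = np.argmax(eta(a_grid))

Lam = l12*l21/(l11*l22)                       # Eq. (28.62)
a_star = (np.sqrt(1.0 - Lam) - 1.0)/(l21/l22)
eta_max = (1.0 - np.sqrt(1.0 - Lam))**2/Lam

print("grid maximum:", a_grid[i], eta(a_grid[i]))
print("Eq. (28.62) :", a_star, eta_max)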
Fig. 28.8. Velocity–force relation
Fig. 28.9. Efficiency of the motor
Problems

28.1. Deviation from Equilibrium

Make use of the detailed balance conditions

$$\frac{\alpha_1}{\alpha_2} = e^{(\Delta\mu-\Delta U)/k_BT}, \qquad \frac{\beta_1}{\beta_2} = e^{-\Delta U/k_BT},$$

and calculate the quantity

$$\Omega = \frac{k_{12}}{k_{21}} - e^{-\Delta U/k_BT} = \frac{\alpha_1+\beta_1}{\alpha_2+\beta_2} - e^{-\Delta U/k_BT}.$$

Use the approximation

$$\Delta\mu = \Delta\mu^0 + k_BT\ln\frac{C(\mathrm{ATP})}{C(\mathrm{ADP})\,C(\mathrm{P})}$$

and express $\Omega$ in terms of the equilibrium constant

$$K_{eq} = e^{\Delta\mu^0/k_BT}.$$

Consider the limiting cases $\Delta\mu \neq 0$ but $\Delta\mu \ll k_BT$, and $\Delta\mu \gg k_BT$.
29 Discrete Ratchet Models
Simplified motor models describe the motion of the protein as a hopping process between positions $x_n$ combined with a cyclical transition between internal motor states:

$$\cdots\; M_{1,x_n} \rightleftharpoons M_{2,x_n} \rightleftharpoons \cdots \rightleftharpoons M_{N,x_n} \rightleftharpoons M_{1,x_{n+1}} \;\cdots \quad (29.1)$$

For example, the four-state model of the ATP hydrolysis can be easily translated into such a scheme (Fig. 29.1).

Fig. 29.1. Discrete motor model
29.1 Linear Model with Two Internal States

Fisher [130] considers a linear periodic process with two internal states:

$$\cdots\; M_{1,x_n} \underset{\alpha_2}{\overset{\alpha_1}{\rightleftharpoons}} M_{2,x_n} \underset{\beta_2}{\overset{\beta_1}{\rightleftharpoons}} M_{1,x_{n+1}} \;\cdots \quad (29.2)$$

The master equation is

$$\frac{d}{dt}P_{1n} = -(\alpha_1+\beta_2)P_{1n} + \alpha_2 P_{2n} + \beta_1 P_{2,n-1},$$
$$\frac{d}{dt}P_{2n} = -(\alpha_2+\beta_1)P_{2n} + \alpha_1 P_{1n} + \beta_2 P_{1,n+1}, \quad (29.3)$$

and the stationary solution corresponds to the zero eigenvalue of the matrix

$$\begin{pmatrix} -\alpha_1-\beta_2 & \alpha_2+\beta_1 \\ \alpha_1+\beta_2 & -\alpha_2-\beta_1 \end{pmatrix}. \quad (29.4)$$

With the help of the left and right eigenvectors

$$L_0 = \begin{pmatrix} 1 & 1 \end{pmatrix}, \qquad R_0 = \begin{pmatrix} \alpha_2+\beta_1 \\ \alpha_1+\beta_2 \end{pmatrix}, \quad (29.5)$$

we find

$$P_{1,st} = \frac{\alpha_2+\beta_1}{\alpha_1+\beta_2+\alpha_2+\beta_1}, \qquad P_{2,st} = \frac{\alpha_1+\beta_2}{\alpha_1+\beta_2+\alpha_2+\beta_1} \quad (29.6)$$
and the stationary current is S = α1 P1,st − α2 P2,st = With the definition of ω= and Γ =
α1 β1 − α2 β2 . α1 + β2 + α2 + β1
(29.7)
α2 β2 α1 + α2 + β1 + β2
(29.8)
α1 β1 = eΔμ/kB T , α2 β 2
(29.9)
the current can be written as S = (Γ − 1)ω.
(29.10)
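A short numerical sketch can verify (29.6)-(29.10): for arbitrary positive example rates it computes the stationary distribution from the zero eigenvalue of the matrix (29.4) and compares the current (29.7) with $(\Gamma-1)\omega$:

import numpy as np

# Check of Eqs. (29.6)-(29.10) for arbitrary positive example rates.
a1, a2, b1, b2 = 2.0, 0.5, 1.5, 0.4              # alpha_1, alpha_2, beta_1, beta_2

M = np.array([[-a1 - b2,  a2 + b1],
              [ a1 + b2, -a2 - b1]])             # matrix (29.4)
w, V = np.linalg.eig(M)
P = np.real(V[:, np.argmin(np.abs(w))])          # eigenvector of the zero eigenvalue
P = P/P.sum()                                    # stationary distribution (29.6)

S = a1*P[0] - a2*P[1]                            # stationary current (29.7)
Gamma = a1*b1/(a2*b2)                            # Eq. (29.9)
omega = a2*b2/(a1 + a2 + b1 + b2)                # Eq. (29.8)

print("P_st       :", P, " closed form:", np.array([a2 + b1, a1 + b2])/(a1 + a2 + b1 + b2))
print("current S  :", S)
print("(Gamma-1)w :", (Gamma - 1.0)*omega)       # equals S, Eq. (29.10)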
Consider now application of an external force corresponding to an additional potential energy

$$U_n = -n\,d\,F_{ext}. \quad (29.11)$$

Assuming that the motion corresponds to the transition $M_{2,n} \rightleftharpoons M_{1,n+1}$, the corresponding rates become dependent on the force (Fig. 29.2)

$$\beta_1 = \beta_1^0\,e^{\Theta Fd/k_BT}, \qquad \beta_2 = \beta_2^0\,e^{-(1-\Theta)Fd/k_BT}. \quad (29.12)$$
Fig. 29.2. Influence of the external force. The external force changes the activation barriers which become approximately ΔG1 (F ) = ΔG1 (0) − x1 F and ΔG2 (F ) = ΔG2 (0) + x2 F
Fig. 29.3. Velocity–force relation
$\Theta$ is the so-called splitting parameter. The force-dependent current is

$$S = \left(e^{\Delta\mu/k_BT + Fd/k_BT} - 1\right)\frac{\alpha_2\beta_2^0\,e^{-(1-\Theta)Fd/k_BT}}{\alpha_1+\alpha_2+\beta_1^0 e^{\Theta Fd/k_BT}+\beta_2^0 e^{-(1-\Theta)Fd/k_BT}}, \quad (29.13)$$

which vanishes for the stalling force:

$$F_{stal} = -\frac{\Delta\mu}{d}. \quad (29.14)$$

Close to equilibrium, we expand (Fig. 29.3)

$$\Gamma - 1 = e^{\Delta\mu/k_BT + Fd/k_BT} - 1 = e^{(1-F/F_{stal})\Delta\mu/k_BT} - 1 \approx \left(1-\frac{F}{F_{stal}}\right)\frac{\Delta\mu}{k_BT}, \quad (29.15)$$

$$\omega \approx \omega_0\left(1 - \frac{Fd}{k_BT}\,\frac{\beta_1^0 + (\alpha_1+\alpha_2)(1-\Theta)}{\alpha_1+\alpha_2+\beta_1^0+\beta_2^0}\right). \quad (29.16)$$
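The force dependence (29.13) and the stalling condition (29.14) can be illustrated with a few lines of Python; the rate constants below are assumed example values chosen to be consistent with (29.9):

import numpy as np

# Force dependence of the current, Eq. (29.13), with assumed rate constants.
kT, d, dmu, Theta = 1.0, 1.0, 2.0, 0.5
a1, a2, b10, b20 = 1.0, np.exp(-dmu), 1.0, 1.0   # chosen so that a1*b10/(a2*b20) = e^{dmu/kT}

def S(F):
    b1 = b10*np.exp(Theta*F*d/kT)
    b2 = b20*np.exp(-(1.0 - Theta)*F*d/kT)
    return (np.exp((dmu + F*d)/kT) - 1.0)*a2*b2/(a1 + a2 + b1 + b2)

F_stall = -dmu/d                                  # Eq. (29.14)
print("S(0)         =", S(0.0))
print("S(F_stall/2) =", S(F_stall/2))
print("S(F_stall)   =", S(F_stall))               # vanishes at the stalling force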
Part IX
Appendix
A The Grand Canonical Ensemble
Consider an ensemble of $M$ systems which can exchange energy as well as particles with a reservoir. For large $M$, the total number of particles $N_{tot} = M\overline{N}$ and the total energy $E_{tot} = M\overline{E}$ have well-defined values since the relative widths decrease as

$$\frac{\sqrt{\overline{N_{tot}^2}-\overline{N}_{tot}^{\,2}}}{\overline{N}_{tot}} \sim \frac{1}{\sqrt{M}}, \qquad \frac{\sqrt{\overline{E_{tot}^2}-\overline{E}_{tot}^{\,2}}}{\overline{E}_{tot}} \sim \frac{1}{\sqrt{M}}. \quad (A.1)$$

Hence also the average number and energy can be assumed to have well-defined values (Fig. A.1).
Fig. A.1. Ensemble of systems exchanging energy and particles with a reservoir
A.1 Grand Canonical Distribution

We distinguish different microstates $j$ which are characterized by the number of particles $N_j$ and the energy $E_j$. The number of systems in a certain microstate $j$ will be denoted by $n_j$. The total number of particles and the total energy of the ensemble in a macrostate with $n_j$ systems in the state $j$ are

$$N_{tot} = \sum_j n_j N_j, \qquad E_{tot} = \sum_j n_j E_j \quad (A.2)$$

and the number of systems is

$$M = \sum_j n_j. \quad (A.3)$$

The number of possible representations is given by a multinomial coefficient:

$$W(\{n_j\}) = \frac{M!}{\prod_j n_j!}. \quad (A.4)$$

From Stirling's formula, we have

$$\ln W \approx \ln(M!) - \sum_j \bigl(n_j\ln n_j - n_j\bigr). \quad (A.5)$$
We search for the maximum of (A.5) under the restraints imposed by (A.2) and (A.3). To this end, we use the method of undetermined factors (Lagrange method) and consider the variation

$$\delta\left[\ln W - \alpha\Bigl(\sum_j n_j - M\Bigr) - \beta\Bigl(\sum_j n_j E_j - E_{tot}\Bigr) - \gamma\Bigl(\sum_j n_j N_j - N_{tot}\Bigr)\right]$$
$$= \sum_j \delta n_j\bigl(-\ln n_j - \alpha - \beta E_j - \gamma N_j\bigr) = 0. \quad (A.6)$$
Since the $n_j$ now can be varied independently, we find

$$n_j = \exp(-\alpha - \beta E_j - \gamma N_j). \quad (A.7)$$

The unknown factors $\alpha$, $\beta$, and $\gamma$ have to be determined from the restraints. First we have

$$\sum_j n_j = e^{-\alpha}\sum_j e^{-\beta E_j - \gamma N_j} = M. \quad (A.8)$$

With the grand canonical partition function

$$\Xi = \sum_j e^{-\beta E_j - \gamma N_j}, \quad (A.9)$$

the probability of a certain microstate is given by

$$P(E_j, N_j) = \frac{n_j}{\sum_j n_j} = \frac{e^{-\beta E_j - \gamma N_j}}{\Xi} \quad (A.10)$$

and further

$$e^{-\alpha} = \frac{M}{\Xi}. \quad (A.11)$$
From

$$\sum_j n_j E_j = \frac{M}{\Xi}\sum_j E_j\,e^{-\beta E_j - \gamma N_j} = E_{tot}, \quad (A.12)$$

we find the average energy per system

$$E = \frac{\sum_j n_j E_j}{M} = -\frac{\partial}{\partial\beta}\ln\Xi \quad (A.13)$$
and similarly from

$$\sum_j n_j N_j = \frac{M}{\Xi}\sum_j N_j\,e^{-\beta E_j - \gamma N_j} = N_{tot} \quad (A.14)$$

the average particle number of a system

$$N = \frac{\sum_j n_j N_j}{M} = -\frac{\partial}{\partial\gamma}\ln\Xi. \quad (A.15)$$
Equations (A.13) and (A.15) determine the parameters β, γ implicitly and then α follows from (A.11).
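The relations (A.13) and (A.15) are easily verified numerically for a small model system. The sketch below uses an assumed toy spectrum (up to n_max particles in a single level of energy eps, which is not a system discussed in the text) and compares numerical derivatives of ln Ξ with direct averages over the probabilities (A.10):

import numpy as np

# Toy check of (A.9), (A.13) and (A.15); eps, beta, gamma, n_max are example values.
eps, beta, gamma, n_max = 1.0, 0.8, -0.5, 40

N = np.arange(n_max + 1)           # particle numbers of the microstates
E = eps*N                          # energies E_j = eps*N_j

def lnXi(b, g):
    return np.log(np.sum(np.exp(-b*E - g*N)))

h = 1e-6                           # numerical derivatives of ln Xi
E_avg = -(lnXi(beta + h, gamma) - lnXi(beta - h, gamma))/(2*h)
N_avg = -(lnXi(beta, gamma + h) - lnXi(beta, gamma - h))/(2*h)

P = np.exp(-beta*E - gamma*N); P = P/P.sum()     # probabilities (A.10)
print("<E>:", E_avg, " direct:", np.sum(P*E))
print("<N>:", N_avg, " direct:", np.sum(P*N))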
A.2 Connection to Thermodynamics

Entropy is given by

$$S = -k\sum P(E_j,N_j)\ln P(E_j,N_j) = -k\sum P(E_j,N_j)\bigl(-\beta E_j - \gamma N_j - \ln\Xi\bigr)$$
$$= k\beta E + k\gamma N + k\ln\Xi. \quad (A.16)$$

From thermodynamics, the Duhem-Gibbs relation is known which states for the free enthalpy

$$G = U - TS + pV = \mu N,$$

where $U = E$ and $S, V, N$ are the thermodynamic averages. Solving for the entropy, we have

$$S = \frac{U + pV - \mu N}{T} \quad (A.17)$$

and comparison with (A.16) shows that

$$\beta = \frac{1}{k_BT}, \qquad \gamma = -\frac{\mu}{k_BT}, \quad (A.18)$$
$$k_BT\ln\Xi = pV. \quad (A.19)$$
The summation over microstates $j$ with energy $E_j$ and $N_j$ particles can be replaced by a double sum over energies $E_i(N)$ and particle number $N$ to give

$$\Xi = \sum_{E,N} e^{-\beta(E(N)-\mu N)},$$

which can be written as a sum over canonical partition functions with different particle numbers

$$\Xi = \sum_N e^{\beta\mu N}\sum_E e^{-\beta E(N)} = \sum_N e^{\beta\mu N}\,Q(N). \quad (A.20)$$
B Time Correlation Function of the Displaced Harmonic Oscillator Model
In the following, we evaluate the time correlation function of the displaced harmonic oscillator model (19.7)

$$f_r(t) = \left\langle e^{-it\omega_r b_r^\dagger b_r}\, e^{it\omega_r (b_r^\dagger+g_r)(b_r+g_r)}\right\rangle. \quad (B.1)$$
B.1 Evaluation of the Time Correlation Function

To proceed, we need some theorems which will be derived afterwards.

Theorem 1. A displacement of the oscillator potential energy minimum can be formulated as a canonical transformation

$$\omega_r(b_r^\dagger+g_r)(b_r+g_r) = e^{-g_r(b_r^\dagger-b_r)}\,\omega_r b_r^\dagger b_r\, e^{g_r(b_r^\dagger-b_r)}. \quad (B.2)$$

With the help of this relation, the single mode correlation function becomes

$$F_r(t) = \left\langle e^{-it\omega_r b_r^\dagger b_r}\, e^{-g_r(b_r^\dagger-b_r)}\, e^{it\omega_r b_r^\dagger b_r}\, e^{g_r(b_r^\dagger-b_r)}\right\rangle. \quad (B.3)$$

The first three factors can be interpreted as another canonical transformation. To this end, we apply the following theorem.

Theorem 2. The time-dependent boson operators are given by
$$e^{-i\omega t\, b^\dagger b}\, b^\dagger\, e^{i\omega t\, b^\dagger b} = b^\dagger e^{-i\omega t}, \qquad e^{-i\omega t\, b^\dagger b}\, b\, e^{i\omega t\, b^\dagger b} = b\, e^{i\omega t}. \quad (B.4)$$

We find

$$F_r(t) = \left\langle \exp\bigl[-g_r(b_r^\dagger e^{-i\omega_r t} - b_r e^{i\omega_r t})\bigr]\exp\bigl[g_r(b_r^\dagger - b_r)\bigr]\right\rangle. \quad (B.5)$$

The two exponentials can be combined due to the following theorem.
Theorem 3. If the commutator of two operators $A$ and $B$ is a c-number then

$$e^{A+B} = e^A e^B e^{-(1/2)[A,B]}. \quad (B.6)$$

The commutator is

$$-g_r^2\bigl[b_r^\dagger e^{-i\omega t} - b_r e^{i\omega t},\, b_r^\dagger - b_r\bigr] = -g_r^2\bigl(e^{-i\omega t} - e^{i\omega t}\bigr) \quad (B.7)$$

and we have

$$F_r(t) = \exp\left[-\frac{1}{2}g_r^2\bigl(e^{-i\omega t}-e^{i\omega t}\bigr)\right]\left\langle \exp\bigl[-g_r b_r^\dagger(e^{-i\omega t}-1) + g_r b_r(e^{i\omega t}-1)\bigr]\right\rangle. \quad (B.8)$$
The remaining average is easily evaluated due to the following theorem.

Theorem 4. For a linear combination of $b_r$ and $b_r^\dagger$, the second-order cumulant expansion is exact

$$\left\langle e^{\mu b^\dagger + \tau b}\right\rangle = e^{\frac{1}{2}\left\langle(\mu b^\dagger + \tau b)^2\right\rangle} = e^{\mu\tau(\langle b^\dagger b\rangle + 1/2)}. \quad (B.9)$$

The average square is

$$\left\langle\Bigl[g_r\bigl(b_r^\dagger(e^{-i\omega t}-1) - b_r(e^{i\omega t}-1)\bigr)\Bigr]^2\right\rangle = -g_r^2\bigl(2 - e^{i\omega t} - e^{-i\omega t}\bigr)\left\langle b_r^\dagger b_r + b_r b_r^\dagger\right\rangle$$
$$= -g_r^2\bigl(2 - e^{i\omega t} - e^{-i\omega t}\bigr)(2n_r+1) \quad (B.10)$$

with the average phonon number (in the following, we use the abbreviation $\beta = 1/k_BT$)

$$n_r = \frac{1}{e^{\beta\omega_r}-1}. \quad (B.11)$$

Finally we have

$$F_r(t) = \exp\left[-\frac{1}{2}g_r^2\bigl(2 - e^{i\omega t} - e^{-i\omega t}\bigr)(2n_r+1) - \frac{1}{2}g_r^2\bigl(e^{-i\omega t}-e^{i\omega t}\bigr)\right]$$
$$= \exp\Bigl[g_r^2\bigl((e^{i\omega t}-1)(n_r+1) + (e^{-i\omega t}-1)n_r\bigr)\Bigr]. \quad (B.12)$$
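Equation (B.12) can be checked numerically by evaluating the thermal average (B.1) directly in a truncated Fock space. In the sketch below the cutoff n_max and the parameters g, w, beta are arbitrary test values, and the truncation is assumed large enough that its error is negligible:

import numpy as np

# Check of (B.12) against a direct evaluation in a truncated Fock space.
n_max, g, w, beta = 60, 0.4, 1.0, 0.7

n = np.arange(n_max)
b = np.diag(np.sqrt(n[1:]), 1)                    # annihilation operator
bd = b.conj().T                                   # creation operator
I = np.eye(n_max)

H0 = w*(bd @ b)                                   # omega b'b
H1 = w*(bd + g*I) @ (b + g*I)                     # omega (b'+g)(b+g)
rho = np.diag(np.exp(-beta*w*n)); rho = rho/np.trace(rho)   # thermal state of H0

evals, U = np.linalg.eigh(H1)
def F_direct(t):
    U1 = U @ np.diag(np.exp(1j*t*evals)) @ U.conj().T       # exp(i t H1)
    U0 = np.diag(np.exp(-1j*t*w*n))                         # exp(-i t H0)
    return np.trace(rho @ U0 @ U1)

nbar = 1.0/(np.exp(beta*w) - 1.0)
def F_closed(t):                                  # Eq. (B.12)
    return np.exp(g**2*((np.exp(1j*w*t) - 1)*(nbar + 1) + (np.exp(-1j*w*t) - 1)*nbar))

for t in (0.5, 1.5, 3.0):
    print(t, F_direct(t), F_closed(t))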
B.2 Boson Algebra

B.2.1 Derivation of Theorem 1

Consider the following unitary transformation

$$A = e^{-g(b^\dagger-b)}\, b^\dagger b\, e^{g(b^\dagger-b)} \quad (B.13)$$
and make a series expansion

$$A = A(0) + g\left.\frac{dA}{dg}\right|_{g=0} + \frac{1}{2}g^2\left.\frac{d^2A}{dg^2}\right|_{g=0} + \cdots \quad (B.14)$$

The derivatives are

$$\left.\frac{dA}{dg}\right|_{g=0} = [b^\dagger b,\, b^\dagger - b] = b^\dagger[b,b^\dagger] + [b^\dagger,-b]\,b = (b + b^\dagger),$$
$$\left.\frac{d^2A}{dg^2}\right|_{g=0} = \bigl[[b^\dagger b,\, b^\dagger - b],\, b^\dagger - b\bigr] = [b+b^\dagger,\, b^\dagger - b] = 2,$$
$$\left.\frac{d^nA}{dg^n}\right|_{g=0} = 0 \quad \text{for } n\ge 3, \quad (B.15)$$

and the series is finite

$$A = b^\dagger b + g(b^\dagger + b) + g^2 = (b^\dagger + g)(b + g). \quad (B.16)$$
Hence for any of the normal modes

$$\omega_r(b_r^\dagger + g_r)(b_r + g_r) = e^{-g_r(b_r^\dagger - b_r)}\,\omega_r b_r^\dagger b_r\, e^{g_r(b_r^\dagger - b_r)}. \quad (B.17)$$
B.2.2 Derivation of Theorem 2

Consider

$$A = e^{\tau b^\dagger b}\, b\, e^{-\tau b^\dagger b} \quad \text{with} \quad \tau = -i\omega t. \quad (B.18)$$
Make again a series expansion

$$\frac{dA}{d\tau} = [b^\dagger b,\, b] = -b, \quad (B.19)$$
$$\frac{d^2A}{d\tau^2} = [b^\dagger b,\, -b] = b, \quad \text{etc.}, \quad (B.20)$$
$$A = b\left(1 - \tau + \frac{\tau^2}{2} - \cdots\right) = b\left(1 + i\omega t + \frac{(i\omega t)^2}{2} + \cdots\right) = b\,e^{i\omega t}. \quad (B.21)$$
Hermitian conjugation gives

$$e^{-i\omega t\, b^\dagger b}\, b^\dagger\, e^{i\omega t\, b^\dagger b} = b^\dagger e^{-i\omega t}. \quad (B.22)$$
B.2.3 Derivation of Theorem 3

Consider the operator

$$f(\tau) = e^{-B\tau}e^{-A\tau}e^{(A+B)\tau} \quad (B.23)$$

as a function of the c-number $\tau$. Differentiation gives

$$\frac{df(\tau)}{d\tau} = e^{-B\tau}(-A-B)e^{-A\tau}e^{(A+B)\tau} + e^{-B\tau}e^{-A\tau}(A+B)e^{(A+B)\tau} = e^{-B\tau}\bigl[e^{-A\tau},B\bigr]e^{(A+B)\tau}. \quad (B.24)$$

Now if the commutator $[A,B]$ is a c-number, then

$$[A^n,B] = A[A^{n-1},B] + [A,B]A^{n-1} = \cdots = n[A,B]A^{n-1} \quad (B.25)$$

and therefore

$$\bigl[e^{-A\tau},B\bigr] = \sum_{n=0}^{\infty}\frac{(-\tau)^n}{n!}[A^n,B] = \sum_n\frac{(-\tau)^n}{(n-1)!}A^{n-1}[A,B] = -\tau[A,B]\,e^{-A\tau} \quad (B.26)$$

and (B.24) gives

$$\frac{df(\tau)}{d\tau} = -\tau[A,B]\,e^{-B\tau}e^{-A\tau}e^{(A+B)\tau} = -\tau[A,B]\,f(\tau), \quad (B.27)$$

which is for the initial condition $f(0)=1$ solved by

$$f(\tau) = \exp\left(-\frac{\tau^2}{2}[A,B]\right). \quad (B.28)$$

Substituting $\tau = 1$ finally gives

$$e^{-B}e^{-A}e^{(A+B)} = e^{-(1/2)[A,B]}. \quad (B.29)$$
B.2.4 Derivation of Theorem 4

This derivation is based on [131]. For one single oscillator, consider the linear combination

$$\mu b^\dagger + \tau b = A + B, \quad (B.30)$$
$$[A,B] = -\mu\tau. \quad (B.31)$$

Application of (B.29) gives

$$\left\langle e^{\mu b^\dagger + \tau b}\right\rangle = \left\langle e^{\mu b^\dagger}e^{\tau b}\right\rangle e^{\mu\tau/2} \quad (B.32)$$

and after exchange of $A$ and $B$

$$\left\langle e^{\mu b^\dagger + \tau b}\right\rangle = \left\langle e^{\tau b}e^{\mu b^\dagger}\right\rangle e^{-\mu\tau/2}. \quad (B.33)$$

Combination of the last two equations gives

$$\left\langle e^{\tau b}e^{\mu b^\dagger}\right\rangle = \left\langle e^{\mu b^\dagger}e^{\tau b}\right\rangle e^{\mu\tau}. \quad (B.34)$$

Using the explicit form of the averages, we find

$$Q^{-1}\,\mathrm{tr}\bigl(e^{-\beta\omega b^\dagger b}e^{\tau b}e^{\mu b^\dagger}\bigr) = Q^{-1}\,\mathrm{tr}\bigl(e^{-\beta\omega b^\dagger b}e^{\mu b^\dagger}e^{\tau b}\bigr)e^{\mu\tau} \quad (B.35)$$

and due to the cyclic invariance of the trace operation, the right-hand side becomes

$$Q^{-1}\,\mathrm{tr}\bigl(e^{\tau b}e^{-\beta\omega b^\dagger b}e^{\mu b^\dagger}\bigr)e^{\mu\tau} \quad (B.36)$$

which can be written as

$$Q^{-1}\,\mathrm{tr}\bigl(e^{-\beta\omega b^\dagger b}e^{+\beta\omega b^\dagger b}e^{\tau b}e^{-\beta\omega b^\dagger b}e^{\mu b^\dagger}\bigr)e^{\mu\tau} = \left\langle e^{+\beta\omega b^\dagger b}e^{\tau b}e^{-\beta\omega b^\dagger b}e^{\mu b^\dagger}\right\rangle e^{\mu\tau}. \quad (B.37)$$

Application of (B.4) finally gives the relation

$$\left\langle\exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle\exp\bigl(\tau e^{-\beta\omega}b\bigr)\exp\bigl(\mu b^\dagger\bigr)\right\rangle e^{\mu\tau} \quad (B.38)$$

which can be iterated to give

$$\left\langle\exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle\exp\bigl(\tau e^{-2\beta\omega}b\bigr)\exp\bigl(\mu b^\dagger\bigr)\right\rangle e^{\mu\tau(1+e^{-\beta\omega})} = \cdots$$
$$= \left\langle\exp\bigl(\tau e^{-(n+1)\beta\omega}b\bigr)\exp\bigl(\mu b^\dagger\bigr)\right\rangle e^{\mu\tau(1+e^{-\beta\omega}+\cdots+e^{-n\beta\omega})} \quad (B.39)$$

and in the limit $n\to\infty$

$$\left\langle\exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \left\langle\exp(\mu b^\dagger)\right\rangle\exp\left(\frac{\mu\tau}{1-e^{-\beta\omega}}\right) = \exp\left(\frac{\mu\tau}{1-e^{-\beta\omega}}\right) \quad (B.40)$$

since only the zero-order term of the expansion of the exponential gives a nonzero contribution to the average. With the average number of vibrations

$$\tilde{n} = \frac{1}{e^{\beta\omega}-1}, \quad (B.41)$$

we have

$$\left\langle\exp(\tau b)\exp(\mu b^\dagger)\right\rangle = \exp\bigl(\mu\tau(\tilde{n}+1)\bigr) \quad (B.42)$$

and finally

$$\left\langle e^{\mu b^\dagger + \tau b}\right\rangle = e^{\mu\tau(\tilde{n}+1/2)}. \quad (B.43)$$

The average of the square is

$$\left\langle(\mu b^\dagger + \tau b)^2\right\rangle = \mu\tau\left\langle b^\dagger b + b\,b^\dagger\right\rangle = \mu\tau(2\tilde{n}+1),$$

which shows the validity of the theorem.
C The Saddle Point Method
The saddle point method is an asymptotic method to calculate integrals of the type

$$\int_{-\infty}^{\infty} e^{n\phi(x)}\,dx \quad (C.1)$$

for large $n$. If the function $\phi(x)$ has a maximum at $x_0$, then the integrand also has a maximum there which becomes very sharp for large $n$. Then the integral can be approximated by expanding the exponent around $x_0$

$$n\phi(x) = n\phi(x_0) + \frac{1}{2}\left.\frac{d^2\bigl(n\phi(x)\bigr)}{dx^2}\right|_{x_0}(x-x_0)^2 + \cdots \quad (C.2)$$

as a Gaussian integral

$$\int_{-\infty}^{\infty} e^{n\phi(x)}\,dx \approx e^{n\phi(x_0)}\sqrt{\frac{2\pi}{n\,|\phi''(x_0)|}}. \quad (C.3)$$
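As an illustration of (C.3), the sketch below applies the saddle point formula to $n! = n^{n+1}\int_0^\infty e^{n(\ln u - u)}du$, i.e. $\phi(u)=\ln u - u$ with $u_0=1$ and $\phi''(u_0)=-1$, and compares the result (Stirling's formula) with the exact factorial; the integration range and grid are arbitrary numerical choices:

import numpy as np
from math import factorial

# Saddle point estimate (C.3) for n! = n^(n+1) * Int_0^inf exp(n(ln u - u)) du,
# with phi(u) = ln u - u, u0 = 1, phi(u0) = -1, phi''(u0) = -1.
u = np.linspace(1e-6, 20.0, 400_001)
du = u[1] - u[0]
for nn in (5, 10, 20):
    exact = np.sum(np.exp(nn*(np.log(u) - u)))*du * nn**(nn + 1)
    saddle = np.exp(-nn)*np.sqrt(2*np.pi/nn) * nn**(nn + 1)   # e^{n phi(u0)} sqrt(2 pi/(n |phi''|))
    print(nn, factorial(nn), exact, saddle)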
The method can be extended to integrals in the complex plane

$$\int_C e^{n\phi(z)}\,dz = \int_C e^{n\,\Re\phi(z)}\,e^{in\,\Im\phi(z)}\,dz. \quad (C.4)$$

If the integration contour is deformed such that the imaginary part is constant, then the Laplace method is applicable and gives

$$\int_C e^{n\phi(z)}\,dz = e^{in\,\Im\phi(z_0)}\int_C e^{n\,\Re\phi(z)}\,dz. \quad (C.5)$$

The contour $C$ and the expansion point are determined from

$$\phi'(z_0) = 0, \quad (C.6)$$
$$\phi(z) = u(z) + iv(z) = u(z) + iv(z_0). \quad (C.7)$$
Now consider the imaginary part as a function in $\mathbb{R}^2$. The gradient is

$$\nabla v(x,y) = \left(\frac{\partial v}{\partial x},\,\frac{\partial v}{\partial y}\right). \quad (C.8)$$

For an analytic function $\phi(x+iy)$, we have

$$u\bigl(x+dx+i(y+dy)\bigr) + iv\bigl(x+dx+i(y+dy)\bigr) - \bigl(u(x+iy)+iv(x+iy)\bigr) = \frac{\partial(u+iv)}{\partial x}dx + \frac{\partial(u+iv)}{\partial y}dy, \quad (C.9)$$

which can be written as

$$\frac{d\phi}{dz}\,dz = \frac{d\phi}{dz}(dx + i\,dy) \quad (C.10)$$

only if

$$i\,\frac{\partial(u+iv)}{\partial x} = \frac{\partial(u+iv)}{\partial y} \quad (C.11)$$

or

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \quad (C.12)$$

Hence the gradient of the imaginary part

$$\nabla v(x,y) = \left(-\frac{\partial u}{\partial y},\,\frac{\partial u}{\partial x}\right) \quad (C.13)$$

is perpendicular to the gradient of the real part

$$\nabla u(x,y) = \left(\frac{\partial u}{\partial x},\,\frac{\partial u}{\partial y}\right), \quad (C.14)$$
which gives the direction of steepest descent. The method is known as saddle point method since a maximum of the real part always is a saddle point, which can be seen from the expansion

$$\phi(z) = \phi(z_0) + \frac{1}{2}\phi''(z_0)(z-z_0)^2 + \cdots \quad (C.15)$$

We substitute

$$dz^2 = dx^2 - dy^2 + 2i\,dx\,dy \quad (C.16)$$

and find for the real part

$$\Re\phi(z) = \Re\phi(z_0) + \frac{1}{2}\Bigl[\Re\phi''(z_0)\,(dx^2-dy^2) - 2\,\Im\phi''(z_0)\,dx\,dy\Bigr]$$
$$= \Re\phi(z_0) - \frac{1}{2}\,(dx,\,dy)\begin{pmatrix} -\Re\phi'' & \Im\phi'' \\ \Im\phi'' & \Re\phi'' \end{pmatrix}\begin{pmatrix} dx \\ dy \end{pmatrix}. \quad (C.17)$$

The eigenvalues of the matrix are

$$\pm\sqrt{(\Re\phi'')^2 + (\Im\phi'')^2} = \pm|\phi''|. \quad (C.18)$$
Solutions
Problems of Chap. 1

1.1. Gaussian Polymer Model

(a)

$$\Delta\mathbf{r}_i = \mathbf{r}_i - \mathbf{r}_{i-1}, \quad i = 1,\ldots,N,$$

$$P(\Delta\mathbf{r}_i) = \frac{1}{b^3}\sqrt{\frac{27}{8\pi^3}}\exp\left(-\frac{3(\Delta\mathbf{r}_i)^2}{2b^2}\right)$$

is the normalized probability function with

$$\int_{-\infty}^{\infty}P(\Delta\mathbf{r}_i)\,d^3\Delta r_i = 1, \qquad \int_{-\infty}^{\infty}\Delta\mathbf{r}_i^2\,P(\Delta\mathbf{r}_i)\,d^3\Delta r_i = b^2.$$

(b)

$$\mathbf{r}_N - \mathbf{r}_0 = \sum_{i=1}^{N}\Delta\mathbf{r}_i,$$

$$P\left(\sum_{i=1}^{N}\Delta\mathbf{r}_i = \mathbf{R}\right) = \int_{-\infty}^{\infty}d^3\Delta r_1\cdots\int_{-\infty}^{\infty}d^3\Delta r_N\,\prod_i P(\Delta\mathbf{r}_i)\,\delta\left(\mathbf{R}-\sum_{i=1}^{N}\Delta\mathbf{r}_i\right)$$
$$= \frac{1}{(2\pi)^3}\int d^3k\,e^{i\mathbf{k}\mathbf{R}}\left[\frac{1}{b^3}\sqrt{\frac{27}{8\pi^3}}\int_{-\infty}^{\infty}d^3\Delta r\,\exp\left(-\frac{3\Delta\mathbf{r}^2}{2b^2}-i\mathbf{k}\Delta\mathbf{r}\right)\right]^N$$
$$= \frac{1}{(2\pi)^3}\int d^3k\,e^{i\mathbf{k}\mathbf{R}}\exp\left(-\frac{Nb^2}{6}k^2\right) = \frac{1}{b^3}\sqrt{\frac{27}{8\pi^3N^3}}\exp\left(-\frac{3R^2}{2Nb^2}\right).$$

(c)

$$\exp\left(-\frac{1}{k_BT}\sum_i\left(\frac{f}{2}\Delta\mathbf{r}_i^2 - \kappa\,\Delta r_i\right)\right) = \prod_i\exp\left(-\frac{1}{k_BT}\left(\frac{f}{2}\Delta\mathbf{r}_i^2 - \kappa\,\Delta r_i\right)\right),$$

$$\frac{f}{2k_BT} = \frac{3}{2b^2} \;\rightarrow\; f = \frac{3k_BT}{b^2}.$$

(d)

$$\langle x_N - x_0\rangle = \frac{N\kappa}{f}, \qquad \langle y_N - y_0\rangle = \langle z_N - z_0\rangle = 0,$$
$$L = \langle x_N - x_0\rangle = \frac{N\kappa b^2}{3k_BT}.$$
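The Gaussian chain statistics derived above can be checked with a short Monte Carlo sketch: bond vectors are drawn from the segment distribution, the end-to-end vector is accumulated, and the force-induced extension is estimated by reweighting; N, b, kappa and the sample size are arbitrary example values:

import numpy as np

# Monte Carlo check of <R^2> = N b^2 and of the extension N kappa b^2/(3 kB T).
rng = np.random.default_rng(0)
N, b, samples = 100, 1.0, 20_000

dr = rng.normal(0.0, b/np.sqrt(3.0), size=(samples, N, 3))   # segment vectors, <dr^2> = b^2
R = dr.sum(axis=1)                                           # end-to-end vectors r_N - r_0

print("<R^2> simulated:", np.mean(np.sum(R**2, axis=1)), " expected N b^2:", N*b**2)

kT, kappa = 1.0, 0.1
w = np.exp(kappa*R[:, 0]/kT)                                 # reweighting with the stretching term
print("<x_N - x_0> reweighted:", np.average(R[:, 0], weights=w),
      " formula N kappa b^2/(3 kB T):", N*kappa*b**2/(3.0*kT))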
1.2. Three-Dimensional Polymer Model (a) N b2 . 1 + x 2x(x2 − 1) 2 with x = cos θ + (b) b N 1−x (1 − x)2 1 + cos x . ≈ N b2 1 − cos x (1 + cos θ1 )(1 + cos θ2 ) . (c) N b2 1 − cos θ1 cos θ2 (d) N a/b with the coherence length 1 + cos x a=b . (1 − cos x) cos(x/2) 8 4 4 2 2 −1 −b − 4 . (e) N b θ2 θ2 θ 1.3. Two-Component Model 1 M lα − L ln (a) κ = −kB T lα − lβ L − M lβ 1 lα − lβ lα − lβ 1 . −kB T + + − 2M lα − 2L 2M lβ − L 12(L − M lβ )2 12(M lα − L)2
Solutions
343
The exact solution can be written with the digamma function Ψ which is well known by algebra programs as L − M lβ 1 M lα − L −Ψ . κ = −kB T +1 +Ψ lα − lβ lα − l β lα − l β The error of the asymptotic expansion is largest for L ≈ M lα or L ≈ M lβ . The following table compares the relative errors of the Stirling’s approximation and the higher-order asymptotic expansion for M = 1,000 and lβ /lα = 2: L/lα 1,000.2 1,000.5 1,001 1,005
(b)
Stirling 0.18 0.11 0.065 0.019
Asympt. expansion 0.13 0.009 0.00094 2.5×10−6
M Z(κ, M, T ) = eκlα /kB T + eκlβ /kB T , lα eκlα /kB T + lβ eκlβ /kB T , eκlα /kB T + eκlβ /kB T 2 lα − lβ 2 L2 = L + M eκ(lα +lβ )/kB T , eκlα /kB T + eκlβ /kB T 2 lα − lβ 2 κ(lα +lβ )/kB T , σ = Me eκlα /kB T + eκlβ /kB T L=M
1 σ ∼√ , L N ∂σ =0 ∂κ
for (lα + lβ ) = 2
lα eκlα /kB T + lβ eκlβ /kB T , eκlα /kB T + eκlβ /kB T
hence for κ=0 M ∂ 2 σ2 (κ = 0) = − (lα − lβ )2 < 0 → maximum ∂κ2 k(kB T )2 also a maximum of σ since the square root is monotonous.
344
Solutions
Problems of Chap. 2 2.1. Osmotic Pressure of a Polymer Solution 1 0 2 φβ + χφβ , μα (P, T ) − μα (P, T ) = kB T ln(1 − φβ ) + 1 − M ∂μ0α (P, T ) , ∂P −1 0 1 ∂μα (P, T ) φβ + χφ2β . Π=− kB T ln(1 − φβ ) + 1 − ∂P M
μ0α (P , T ) − μ0α (P, T ) = μα (P, T ) − μ0α (P, T ) = −Π
For the pure solvent, μ0α =
G , Nα
dG = −S dT + V dP + μ0α (P, T )dN, ∂μ0α V = , ∂P T,Nα Nα 1 2 1 3 1 Nα 2 Π=− kB T −φβ − φβ − φβ + · · · + 1 − φβ + χφβ V 2 3 M 1 1 Nα kB T φβ + − χ φ2β + · · · , = V M 2 χ=
χ0 T 0 , T
high T : 1 − χ > 0, 2 low T:
Π > 0 good solvent
Π < 0 bad solvent, possibly phase separation. 2.2. Polymer Mixture φ2 φ1 ln φ1 + ln φ2 + χφ1 φ2 , ΔF = N kB T M1 M2 φ2,c =
χc =
1 , 1 + M2 /M1
1 M1 + M2 2
1 √
M2 M1
+
1 √
M1 M2
,
Solutions
symmetric case φc =
1 2
χc =
2 can be small, demixing possible. M
Problems of Chap. 4 4.1. Membrane Potential W κx , ΦI = Beκx , ΦII = B 1 + M B=
ΦIII = V − Be−κ(x−L) ,
V , 2 + (w /M )κL
Q/A = W κB per area A, C/A =
Q/A W κ 1 = = . V 2 + (W /M )κL (2/w κ) + (L/M )
4.2. Ion Activity kB T ln γ c = kB T ln
Z 2 e2 κ a =− , c 8π 1 + κR
1 Z 2 e2 κ , kB T 8π 1 + κR κ 1 Z 2 e2 κ c , =− + ln γ± kB T 16π 1 + κR+ 1 + κR−
c c ln γ+ = ln γ− =−
κ → 0 for dilute solution, c ln γ± →−
1 Z 2 e2 κ . kB T 8π
Problems of Chap. 5 5.1. Abnormal Titration Curve ΔG(B, B) = 0, ΔG(BH+ , B) = ΔG1,int , ΔG(B, BH+ ) = ΔG2,int , ΔG(BH+ , BH+ ) = ΔG1,int + ΔG2,int + W1,2 ,
345
346
Solutions
Ξ = 1 + e−β(ΔG1,int −μ) + e−β(ΔG2,int−μ) + e−β(ΔG1,int +ΔG2,int −W +2μ) , e−β(ΔG1,int −μ) + e−β(ΔG1,int +ΔG2,int −W +2μ) , s1 = Ξ e−β(ΔG2,int −μ) + e−β(ΔG1,int +ΔG2,int −W +2μ) . s2 = Ξ
Problems of Chap. 6 6.1. pH Dependence of Enzyme Activity r rmax K=
=
1 , 1 + (1 + (cH+ /K))(KM /cS )
cH+ cS− , cHS
cs = cS− + cHS .
6.2. Polymerization at the End of a Polymer ciM = KcM c(i−1)M = · · · =
(KcM )i , K
∞ i(KcM )i 1 = for KcM < 1. i = i=1 ∞ i 1 − KcM (Kc ) M i=1 6.3. Primary Salt Effect r = k1 cX ,
cX ZA ZB e2 κ K= , exp − cA cB 4πkB T ZA ZB e2 κ cX = KcA cB exp . 4πkB T
Problems of Chap. 7 7.1. Smoluchowski Equation P (t + Δt, x) = eΔt(∂/∂t) P (t, x), w± (x ± Δx)P (t, x ± Δx) = e±Δx(∂/∂x) w± (x)P (t, x), eΔt(∂/∂t) P (t, x) = eΔx(∂/∂x) w+ (x)P (t, x) + e−Δx(∂/∂x) w− (x)P (t, x), P (t, x) + Δt +Δx
∂ P (t, x) + · · · = (w+ (x) + w− (x))P (x, t) ∂t
∂ Δx2 ∂ 2 (w+ (x)−w− (x))P (x, t)+ (w+ (x) + w− (x))P (x, t)+ · · · , ∂x 2 ∂x2
Solutions
347
Δx ∂ Δx2 ∂ 2 ∂ P (t, x) = (w+ (x) − w− (x))P (x, t) + P (x, t) + · · · , ∂t Δt ∂x Δt ∂x2 D=
Δx2 , Δt
K(x) = −
kB T + (w (x) − w− (x)). Δx
7.2. Eigenvalue Solution to the Smoluchowski Equation kB T −U/kB T ∂ U/kB T e e − W mγ ∂x 1 ∂U kB T −U/kB T U/kB T ∂W U/kB T W e e +e =− mγ ∂x kB T ∂x =−
1 ∂U kB T ∂W − W = S, mγ ∂x mγ ∂x
∂ ∂ kB T −U/kB T LFP W = − S = e ∂x ∂x mγ LFP =
∂ U/kB T W e ∂x
,
kB T ∂ −U/kB T ∂ U/kB T e e , mγ ∂x ∂x kB T ∂ −U/kB T ∂ U/2kB T e e , mγ ∂x ∂x ∂ kB T U/2kB T = L, − e ∂x mγ
L = eU/2kB T LFP e−U/2kB T = eU/2kB T LH = eU/2kB T
∂ − e−U/kB T ∂x
since kB T = constant. mγ For an eigenfunction ψ of L, we have λψ = Lψ = eU/2kB T LFP e−U/2kB T ψ. Hence
LFP e−U/2kB T ψ = λ e−U/2kB T ψ
gives an eigenfunction of the Fokker–Planck operator to the same eigenvalue λ. A solution of the Smoluchowski equation is then given by W (x, t) = eλt e−U/2kB T ψ(x). The Hermitian operator is very similar to the harmonic oscillator in second quantization ∂ −U/2kB T ∂ U/2kB T kB T e−U/2kB T L= eU/2kB T e e mγ ∂x ∂x kB T 1 ∂U 1 ∂U ∂ ∂ = − + mγ ∂x 2kB T ∂x ∂x 2kB T ∂x
348
Solutions
mω 2 mω 2 ∂ ∂ − x + x ∂x 2kB T ∂x 2kB T ⎞ ⎛ ⎞ ⎛ % % 2 2 2 ω ⎝ kB T ∂ 1 mω ⎠ ⎝ kB T ∂ 1 mω ⎠ =− − x + x γ mω 2 ∂x 2 kB T mω 2 ∂x 2 kB T
kB T = mγ
=−
ω2 γ
1 ∂ − ξ ∂ξ 2
1 ∂ + ξ ∂ξ 2
=−
ω2 + b b γ
with Boson operators b+ b − bb+ = 1. From comparison with the harmonic oscillator, we know the eigenvalues λn = −
ω2 n, γ
n = 0, 1, 2, . . .
The ground state obeys ∂ 1 ∂U aψ0 = + ψ0 = 0 ∂x 2kB T ∂x with the solution ψ0 = e−U (x)/2kB T . This corresponds to the stationary solution of the Smoluchowski equation: % mω 2 −U (x)/kB T e W = . 2πkB T 7.3. Diffusion Through a Membrane kAB = kA + kB , 0=
M M N dPN dN = N =− kAB M N PN + (kAB − 2km )N 2 PN dt dt N =1
+
M
kAB M N PN −1 −
N =2
+
M−1
N =1 M
N =1
kAB (N − 1)N PN −1
N =2
2km N (N + 1)PN +1
N =1
≈ −kAB M N + (kAB − 2km )N 2 + kAB M (1 + N ) − kAB (N 2 + N ) +2km (N 2 − N )
Solutions
349
= kAB M − kAB N − 2km N , N =M
kA + kB , kA + kB + 2km
M N dN 2 2 dPN 2 = N = − kAB M N PN + (kAB − 2km )N 3 PN 0= dt dt N =1
M
+
kAB M N PN −1 − 2
N =2
+
M−1
M
N =1
kAB (N − 1)N 2 PN −1
N =2
2km N 2 (N + 1)PN +1
N =1
≈ −kAB M N 2 + (kAB − 2km )N 3 + kAB M (N 2 + 2N + 1) −kAB (N 3 + 2N 2 + N ) + 2km (N 3 − 2N 2 + N ) = kAB M (2N + 1) − kAB (2N 2 + N ) + 2km (−2N 2 + N ) = kAB M + N(2kAB M − kAB + 2km ) − N 2 (2kAB + 4km ) AB kAB M + (2kAB M − kAB + 2km )M kABk+2k m
N2 = =
2kAB + 4km
2 2kAB km kAB M+ M 2, 2 (kAB + 2km ) (kAB + 2km )2
and the variance is 2
N2 − N =
2km kAB 2km 2 M= N . 2 (kAB + 2km ) kAB M
The diffusion current from A → B is dNB dNA − = (−kA (M − N )PN + km N PN ) J= dt dt N
−
(−kB (M − N )PN + km N PN )
N
=
N
(kB − kA )(M − N )PN = (kB − kA )(M − N ).
350
Solutions
Problems of Chap. 9 9.1. Dichotomous Model λ1 = 0, L1 = ( 1 1 1 1),
⎛ ⎞ 0 ⎜0⎟ ⎟ R1 = ⎜ ⎝β ⎠, α
λ2 = −(α + β),
1 (L1 P0 ) = , (L1 R1 ) α+β
⎛
⎞ 0 ⎜ 0 ⎟ ⎟ L2 = ( α −β α −β), R2 = ⎜ ⎝ −1 ⎠ , 1
(L2 P0 ) = 0, (L2 R2 )
k− + k+ + α + β 2 1 (α + β)2 + (k+ − k− )2 + 2(β − α)(k− − k+ ), ± 2
λ3,4 = −
Fast fluctuations: α β λ3 = − k− − k+ + O(k 2 ), α+β α+β ⎛ ⎞ β ⎜ α ⎟ ⎟ L3 ≈ (1, 1, 0, 0), R3 ≈ ⎜ ⎝ −β ⎠ , −α (L3 P0 ) 1 ≈ , (L3 R3 ) α+β α β k+ − k− + O(k 2 ), α+β α+β ⎛ ⎞ 1 ⎜ −1 ⎟ (L4 P0 ) ⎟ L4 ≈ (−α, β, 0, 0), R4 ≈ ⎜ ⎝ 1 ⎠ , (L4 R1 ) ≈ 0, −1 ⎞ ⎛ ⎞ ⎛ β 0 α+β ⎟ ⎜ α ⎜ 0 ⎟ ⎜ α+β ⎟ λ3 t ⎟ → P (D∗ ) = eλ3 t . + P(t) ≈ ⎜ ⎜ β ⎠ β ⎟e ⎝ α+β ⎠ ⎝ − α+β α α − α+β α+β λ4 = −(α + β) −
Solutions
351
Slow fluctuations: λ3 ≈ −k+ − α, L3 ≈ (k+ − k− , −β, 0, 0),
⎛
⎞ k+ − k− ⎜ ⎟ −α ⎟ R3 ≈ ⎜ ⎝ −(k+ − k− ) ⎠ , α
L3 P0 1 β ≈ , L3 R3 α + β k+ − k− λ4 ≈ −k− − β,
⎛
⎞ β ⎜ k+ − k− ⎟ L4 P0 1 α ⎟, L4 ≈ (α, k+ − k− , 0, 0), R4 ≈ ⎜ ≈ , ⎝ ⎠ −β L4 R4 α + β k+ −k− −(k+ − k− ) ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ 0 1 0 ⎜ 0 ⎟ ⎟ ⎜ ⎟ β ⎜ ⎜ 0 ⎟ −(k+ +α)t + α ⎜ 1 ⎟ e−(k− +β)t , P(t) ≈ ⎜ β ⎟ ⎝ α+β ⎠ + α + β ⎝ −1 ⎠ e α+β ⎝ 0 ⎠ α 0 −1 α+β P (D∗ ) ≈
β α −(k− +β)t . e−(k+ +α)t + e α+β α+β
Problems of Chap. 10 10.1. Entropy Production 0 = dH = T dS + V dp +
μk dNk ,
k
T dS = −
μk dNk = −
k
Aj dS = rj . dt T j
Problems of Chap. 11 11.1. ATP Synthesis At chemical equilibrium, 0=A=− νk μk
j
k
μk νkj dξj =
j
Aj dξj ,
352
Solutions
= μ0 (ADP) + kB T ln c(ADP) + μ0 (POH) + kB T ln c(POH) + ) + 2eΦout +2kB T ln c(Hout
−μ0 (ATP) − kB T ln c(ATP) − μ0 (H2 O) − kB T ln c(H2 O) + −2kB T ln c(Hin ) − 2eΦin ,
kB T ln K = −ΔG0 = μ0 (ADP) − μ0 (ATP) + μ0 (POH) − μ0 (H2 O) = kB T ln
+ c(ATP)c(H2 O)c2 (Hin ) + 2e(Φin − Φout ) + c(ADP)c(POH)c2 (Hout )
= kB T ln
+ c(ATP)c(H2 O) c(Hin ) + 2kB T ln + 2e(Φin − Φout ). + c(ADP)c(POH) c(Hout )
Problems of Chap. 15 15.1. Transition State Theory K‡ k‡ ‡ A+B [AB] → P, k = k‡ c[AB]‡ = k ‡ K ‡ cA cB , ‡ q[AB] ‡ c[AB]‡ q[AB]‡ −ΔH ‡ /kB T ‡ = e = qx e−ΔH /kB T , cA cB qA qB qA qB √ 2πmkB T qx = δx, h
v‡ kB T 1 k‡ = = , δx δx 2πm
√ ‡ 1 kB T 2πmkB T q[AB]‡ −ΔH ‡ /kB T δx e cA cB k= δx 2πm h qA qB
K‡ =
=
‡ kB T q[AB]‡ −ΔH ‡ /kB T e cA cB . h qA qB
15.2. Harmonic Transition State Theory ∞ −mω2 x2 /2k T
B δ(x − x‡ ) kB T −∞ e ‡ ∞ k = vδ(x − x ) = 2 2 2πm e−mω x /2kB T −∞
2 ‡2 kB T e−mω x /2kB T = 2πm 2πkB T /mω 2 ω −ΔE/kB T = . e 2π
Solutions
353
Problems of Chap. 16 16.1. Marcus Cross Relation A+A− → A− +A,
λA = 2ΔE(Aeq → A− eq ),
D+D+ → D+ +D,
+ λD = 2ΔE(Deq → Deq ),
+ A+D → A− +D+ , λAD = ΔE(Aeq → A− eq ) + ΔE(Deq → Deq ) =
ωA −λA /4kB T e , 2π ωB −λB /4kB T , e kB = 2π kA =
KAD = e−ΔG/kB T , (λAD + ΔG)2 ωAD exp − kAD = 2π 4λAD kB T ωAD λA + λD ΔG ΔG2 = exp − − − 2π 8kB T 2kB T 4λAD kB T % 2 ωAD ΔG2 . = kA kB KAD exp − ωA ω D 4λAD kB T
Problems of Chap. 18 18.1. Absorption Spectrum −βH e 1 dt e−iωi t μf eiωf t f μi α= eiωt i| 2π Q i,f
=
1 2π
=
1 2π
≈
|μeg |2 2π
dt eiωt e−iHt/ μeiHt/ μ
dt eiωt μ(0)μ(t)
dt eiωt e−iHg t/ eiHe t/ g .
λA +λD , 2
354
Solutions
Problems of Chap. 20

20.1. Motional Narrowing

$$(s+i\omega_1)(s+i\omega_2) + (\alpha+\beta)(s+i\bar\omega) = -\left(\Omega+\frac{\Delta\omega}{2}\right)\left(\Omega-\frac{\Delta\omega}{2}\right) - i\omega_c\,\Omega, \qquad \Omega = \omega - \bar\omega,$$

$$\Omega^2 - \frac{\Delta\omega^2}{4} + i\omega_c\,\Omega = 0, \qquad \Omega_\pm = -\frac{i\omega_c}{2} \pm \sqrt{\frac{\Delta\omega^2}{4}-\frac{\omega_c^2}{4}}.$$

For $\omega_c \ll \Delta\omega$, the poles are approximately at

$$\Omega_p = \pm\frac{\Delta\omega}{2} - \frac{i\omega_c}{2}$$

and two lines are observed centered at the unperturbed frequencies $\bar\omega \pm \Delta\omega/2$ and with their width determined by $\omega_c$. For $\omega_c = \Delta\omega$, the two poles coincide at

$$\Omega_p = -\frac{i\omega_c}{2}$$

and a single line at the average frequency $\bar\omega$ appears. For $\omega_c \gg \Delta\omega$, one pole approaches zero according to

$$\Omega_p = -i\,\frac{\Delta\omega^2}{4\omega_c},$$
which corresponds to a sharp line at the average frequency ω. The other pole approaches infinity as Ωp = −iωc . It contributes a broad line at ω which vanishes in the limit of large ωc (Fig. 32.1).
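The behavior of the two poles as a function of the ratio between exchange rate and splitting can be tabulated directly; the sketch below uses arbitrary example values and reproduces the three regimes discussed above:

import numpy as np

# Poles Omega_p = -i wc/2 +- sqrt(dw^2/4 - wc^2/4) of the exchange lineshape for
# several ratios wc/dw (example values), illustrating motional narrowing.
dw = 1.0
for wc in (0.1*dw, 1.0*dw, 10.0*dw):
    root = np.sqrt(complex(dw**2/4 - wc**2/4))
    for p in (-1j*wc/2 + root, -1j*wc/2 - root):
        print(f"wc/dw = {wc/dw:5.2f}  pole = {p:.4f}")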
Problems of Chap. 21 21.1. Crude Adiabatic Model ∂C ∂ζ −s c 0 1 ∂ζ † ∂C = , C = , −c −s ∂Q −1 0 ∂Q ∂Q ∂Q 2 2 ∂ 2C ∂ ζ ∂ζ −s c = −C , −c −s ∂Q2 ∂Q2 ∂Q
Solutions
355
Im
−Δω/2
Δω/2 Re
Fig. 32.1. Poles of the lineshape function
C†
†
∂ 2C = ∂Q2
0 1 −1 0
∂ 2ζ − ∂Q2
∂ζ ∂Q
2 ,
2 ∂2 ∂2 †∂ C † ∂C ∂ ΦC = C + 2C + ∂Q2 ∂Q2 ∂Q ∂Q ∂Q2 2 ∂2 ∂ζ 0 1 ∂2ζ 0 1 ∂ζ ∂ = + − + 2 , −1 0 ∂Q2 −1 0 ∂Q ∂Q ∂Q2 ∂Q
dr C † Φ†
dr C † Φ† (Tel + V0 + ΔV )ΦC
= C EC + C
†
†
dr Φ ΔV Φ,
C=C
†
E(Q) − ΔE(Q) V (Q) 2 V (Q) E(Q) + ΔE(Q) 2
C,
dr C † Φ† H Φ C 2 ∂ 2 =− + C† 2m ∂Q2
E(Q) − ΔE(Q) V (Q) 2 V (Q) E(Q)+ ΔE(Q) 2 ∂2ζ 2 ∂ζ ∂ 0 1 , − +2 2m −1 0 ∂Q2 ∂Q ∂Q
V (Q) cos ζ sin ζ = , 4V (Q)2 + ΔE(Q)2 cos ζ 2 − sin ζ 2 =
ΔE(Q) 4V (Q)2 + ΔE(Q)2
,
∂ζ ∂ζ ∂ 2V ΔE (cs)2 = 2cs(c2 − s2 ) = , ∂Q ∂Q 4V 2 + ΔE 2 ∂Q
C+
2 2m
∂ζ ∂Q
2
356
Solutions
V2 ∂ ∂ (cs)2 = ∂Q ∂Q 4V 2 + ΔE 2 ∂V V2 2V − = 4V 2 + ΔE 2 ∂Q (4V 2 + ΔE 2 )2
∂ΔE ∂V 2ΔE + 8V ∂Q ∂Q
,
∂V ∂ΔE ∂ζ ΔE V 1 ∂V = − ≈ , ∂Q 4V 2 + ΔE 2 ∂Q 4V 2 + ΔE 2 ∂Q ΔE ∂Q 2 2 ΔE ∂ΔE ∂V ΔE ∂∂QV2 − V ∂∂Q 2 ∂ΔE 2ΔE ∂Q + 8V ∂Q ∂V ∂2ζ − V = − ΔE ∂Q2 4V 2 + ΔE 2 ∂Q ∂Q (4V 2 + Δe2 )2
1 ∂2V 2 ∂ΔE ∂V , − 2 ΔE ∂Q ΔE 2 ∂Q ∂Q √ 2 2 2 2 ≈ − ∂ + E − 4V + ΔE √ H E + 4V 2 + ΔE 2 2m ∂Q2 2 2 1 ∂V + 2m ΔE 2 ∂Q 2 ∂V ∂ 1 ∂ 2V 2 2 ∂ΔE ∂V 0 1 + . − − 2m −1 0 ΔE ∂Q2 ΔE 2 ∂Q ∂Q ΔE ∂Q ∂Q ≈
Problems of Chap. 22 22.1. Ladder Model n iC˙ 0 = V Cj , j=1
iC˙ j = Ej Cj + V C0 , Cj = uj e(Ej /i)t , iu˙ j e(Ej /i)t = V C0 , t V uj = e−(Ej /i)t C0 (t )dt , i 0 t V ei(Ej /)(t−t ) C0 (t )dt , Cj = i 0 Ej = α + j ∗ Δω, n V V 2 t i(jΔω+α/)(t−t ) Cj = − 2 e C0 (t )dt , C˙ 0 = i j=1 0
Solutions
ω = jΔω + ∞
α ,
ei(jΔω+α/)(t−t ) Δj →
j=−∞ 2
∞
eiω(t−t )
−∞
357
2π dω = δ(t − t ), Δω Δω
2
2πV 2πV C˙ 0 = − C0 = − ρ(E)C0 , Δω 1 1 ρ(E) = = . Δω ΔE
Problems of Chap. 23 23.1. Hückel Model with Alternating Bonds
(a) αeikn + βei(kn+χ) + β ei(kn+k+χ) = eikn α + βeiχ + β ei(k+χ) ,
αei(kn+χ) + β ei(kn−k) + βeikn = ei(kn+χ) α + β e−i(k+χ) + βe−iχ . (b)
βeiχ + β ei(k+χ) = β e−i(k+χ) + βe−iχ , e2iχ
2 −ik/2 ik/2 β e−ik/2 + βeik/2 β e + βe β e−ik + β , = ik = e−ik 2 = e−ik ik/2 β + β 2 + 2ββ cos k β e +β βe + βe−ik/2
eiχ = ±e−ik/2
β e−ik/2 + βeik/2 β 2 + β 2 + 2ββ cos k
λ = α + βeiχ + β ei(k+χ) = α ± =α±
(c)
,
ββ e−ik + β 2 + β 2 + ββ eik β 2 + β 2 + 2ββ cos k
β 2 + β 2 + 2ββ cos k.
β e−ik/2 + βeik/2 iχ+i(N +1)k −ik/2 i(N +1)k 0= e e = ±e β 2 + β 2 + 2ββ cos k β eiN k + βei(N +1)k , = ± β 2 + β 2 + 2ββ cos k 0 = β sin(N k) + β sin(N + 1)k.
(d) For a linear polyene with 2N − 1 carbon atoms, use again eigenfunctions c2n = sin(kn) = (eikn ), c2n−1 = sin(kn + χ) = (ei(kn+χ) ), and chose the k-values such that (eiN k ) = sin(N k) = 0.
358
Solutions
Problems of Chap. 25 25.1. Special Pair Dimer c −s −Δ/2 V c s s c V Δ/2 −s c is diagonalized if (c2 − s2 )V = csΔ or with c = cos χ, s = sin χ tan(2χ) =
2V , Δ
c2 − s2 = cos 2χ = 2cs = sin(2χ) =
1 1 + (4V 2 /Δ2 ) 1
1 + (Δ2 /4V 2 )
≥ 0,
≥ 0.
The eigenvalues are (Fig. 32.2) Δ 2 2 E± = ± (c − s ) + 2csV 2 ⎛ Δ 1 1 = ±⎝ 1 +V 1 2 1 + 4V 2 1+ Δ2
⎞ Δ2 4V 2
⎠ = ±1 2
Δ2 + 4V 2 .
The transition dipoles are μ+ = sμa + cμb , μ− = cμa − sμb , 6.0
relative energies E / 2V
4.0 2.0 0.0 – 2.0 – 4.0 – 6.0
0
2
4 6 disorder Δ /2V
8
10
Fig. 32.2. Energy splitting of the two dimer bands
Solutions
relative intensities
2.0
1.0
0.0
0
2
4 6 disorder Δ/2V
8
10
Fig. 32.3. Intensities of the two dimer bands
and the intensities (for |μa | = |μb | = μ)
cos α
1± 1 + (Δ2 /4V 2 )
|μ± |2 = μ2 (1 ± 2cs cos α) = μ2 with (Fig. 32.3) cos α = −0.755. 25.2. LH2 |n; α =
1 −ikn e |k; α, 3 k
9
Eα |n; αn; α| =
n=1
9
Eα
n=1
1 −i(k−k )n e |k; αk ; α| 9 k,k
= δk,k Eα |k; αk; α|, 9 n=1
Eα |n; βn; β|
9
Eα
n=1
1 −i(k−k )n e |k; βk ; β| 9 k,k
= δk,k Eα |k; βk; β|, 9 n=1
Vdim |n; αn; β| =
9 n=1
= δk,k Vdim |k; αk; β|,
Vdim
1 −i(k−k )n e |k; αk ; β| 9 k,k
359
360
Solutions 9
Vβα,1 |n; αn − 1; β| =
n=1
9
Vβα,1 e−ik
n=1
1 −i(k−k )n e |k; αk ; β| 9 k,k
= δk,k Vβα,1 e−ik |k; αk; β|, 9
Vβα,1 |n; βn + 1; α| =
n=1
9
Vβα,1 eik
n=1
1 −i(k−k )n e |k; βk ; α| 9 k,k
= δk,k Vβα,1 eik |k; βk; α|, 9
Vαα,1 (|n; αn + 1; α| + h.c.)
n=1
=
9
Vαα,1 eik
n=1
1 −i(k−k )n e |k; αk ; α| + h.c. 9 k,k
= δk,k 2Vαα,1 cos k|k; αk; α|, Hαα (k) = Eα + 2Vαα1 cos k, Hββ (k) = Eβ + 2Vββ1 cos k, Hαβ (k) = Vdim + e−ik Vβα,1 , Hβα (k) = Vdim + eik Vβα,1 , Eα + 2Vαα1 cos k V + e−ik W H(k) = V + eik W Eβ + 2Vββ1 cos k −Δk /2 V + e−ik W = Ek + . V + eik W Δk /2 Perform a canonical transformation with c −se−iχ . S= seiχ c S † H S becomes diagonal if c2 (V + eik W ) − s2 e2iχ (V + e−ik W ) + cseiχ Δk = 0. Chose χ such that V + eik W = sign V |V + eik W |eiχ = U (k)eiχ and solve (c2 − s2 )U + csΔk = 0
Solutions
361
k=0 +
Eα+Eβ 2
k=0
Fig. 32.4. Energy levels of LH2
by1
c2 − s2 = −sign
U Δ
1 , 1 + (4U 2 /Δ2 )
cs =
|U/Δ| 1 + (4U 2 /Δ2 )
.
The eigenvalues are (Fig. 32.4) 1 2 Δ + 4U 2 E± (k) = E k ± sign V 2 Eα + Eβ + (Vαα1 + Vββ1 ) cos k 2 % 2 Eα − Eβ + 2(Vαα1 − Vββ1) cos k ± sign V + V 2 + W 2 + 2V W cos k 2 =
μk,+ = c =c
1 ikn 1 ikn e μnα + seiχ e μnβ 3 n 3 n
1 ikn n 1 ikn n e S9 μ0α + seiχ e S9 Rz (2ν + φβ − φα )μ0,α 3 n 3 n ⎛
=
2π cos 2π 9 n − sin 9 n
1 ikn ⎜ e ⎝ sin 2π n cos 2π n 3 n 9 9
⎟ ⎠
1 ⎞⎞
⎛ ⎞ cos − sin sin θ × ⎝c + seiχ ⎝ sin cos ⎠⎠ μ0 ⎝ 0 ⎠ . 1 cos θ ⎛
1
⎛
⎞
√ The sign is chosen such that for |Δ| |U |, the solution becomes c = s = 1/ 2.
362
Solutions
The sum over n again gives the selection rules k = 0 z-polarization, k=±
2π 9
circular xy-polarization.
The second factor gives for z-polarization μ = 3μ0 (c + seiχ ) cos θ, |μ|2 = 9μ20 (1 + 2cs cos χ) cos2 θ, with cos2 θ ≈ 0.008 and for polarization in the xy-plane c + seiχ cos , μ = 3μ0 sin θ seiχ sin |μ|2 = 9μ20 sin2 θ(1 + 2 cos cs cos χ), with sin2 θ ≈ 0.99,
cos ≈ −0.952.
The intensities of the (k, −) states are |μz |2 = 9μ20 (1 − 2cs cos χ) cos2 θ, |μ⊥ |2 = 9μ20 sin2 θ(1 − 2 cos cs cos χ). 25.3. Exchange Narrowing δEn P (δEk = X) = dδE1 dδE2 · · · P (δE1 )P (δE2 ) · · · δ X − , N 2 2 1 1 dt δE1 dδE2 · · · √ e−δE1 /Δ · · · eit(X− δEn /N ) P (δEk = X) = 2π Δ π N 1 1 itX −δE12 /Δ2 −itδE1 /N √ dt e dt δE1 e = 2π Δ π 2 2 1 dt eitX e−Δ t /4N = 2π √ N −X 2 N/Δ2 e =√ . πΔ
(32.19)
Solutions
363
Problems of Chap. 28 28.1. Deviation from Equilibrium Ω = e−ΔU/kB T (1 − eΔμ/kB T ) (Δμ0 −ΔU (x))/kB T
Ω(x) = e
α2 , α2 + β2
α2 (x) α2 (x) + β2 (x)
Keq −
C(ATP) C(ADP)C(P)
.
For Δμ = 0 but Δμ kB T , Ω becomes a linear function of Δμ Ω(x) → −e−ΔU (x)/kB T
α2 (x) Δμ , α2 (x) + β2 (x) kB T
whereas in the opposite limit Δμ kB T it becomes proportional to the concentration ratio: 0 α2 (x) C(ATP) Ω(x) → −e−ΔU (x)/kB T eΔμ /kB T . α2 (x) + β2 (x) C(ADP)C(P)
References
1. T.L. Hill, An Introduction to Statistical Thermodynamics (Dover, New York, 1986) 2. S. Cocco, J.F. Marko, R. Monasson, arXiv:cond-mat/0206238v1 3. C. Storm, P.C. Nelson, Phys. Rev. E 67, 51906 (2003) 4. K.K. Mueller-Niedebock, H.L. Frisch, Polymer 44, 3151 (2003) 5. C. Leubner, Eur. J. Phys. 6, 299 (1985) 6. P.J. Flory, J. Chem. Phys. 10, 51 (1942) 7. M.L. Huggins, J. Phys. Chem. 46, 151 (1942) 8. M. Feig, C.L. Brooks III, Curr. Opin. Struct. Biol. 14, 217 (2004) 9. B. Roux, T. Simonson, Biophys. Chem. 78, 1 (1999) 10. A. Onufriev, Annu. Rep. Comput. Chem. 4, 125 (2008) 11. M. Born, Z. Phys. 1, 45 (1920) 12. D. Bashford, D. Case, Annu. Rev. Phys. Chem. 51, 29 (2000) 13. W.C. Still, A. Tempczyk, R.C. Hawley, T. Hendrickson, JACS 112, 6127 (1990) 14. G.D. Hawkins, C.J. Cramer, D.G. Truhlar, Chem. Phys. Lett. 246, 122 (1995) 15. M. Schaefer, M. Karplus, J. Phys. Chem. 100, 1578 (1996) 16. I.M. Wonpil, M.S. Lee, C.L. Brooks III, J. Comput. Chem. 24, 1691 (2003) 17. F. Fogolari, A. Brigo, H. Molinari, J. Mol. Recognit. 15, 377 (2002) 18. A.I. Shestakov, J.L. Milovich, A. Noy, J. Colloid Interface Sci. 247, 62 (2002) 19. B. Lu, D. Zhang, J.A. McCammon, J. Chem. Phys. 122, 214102 (2005) 20. P. Koehl, Curr. Opin. Struct. Biol. 16, 142 (2006) 21. P. Debye, E. Hueckel, Phys. Z. 24, 305 (1923) 22. G. Goüy, Comt. Rend. 149, 654 (1909) 23. G. Goüy, J. Phys. 9, 457 (1910) 24. D.L. Chapman, Philos. Mag. 25, 475 (1913) 25. O. Stern, Z. Elektrochem. 30, 508 (1924) 26. A.-S. Yang, M. Gunner, R. Sampogna, K. Sharp, B. Honig, Proteins 15, 252 (1993) 27. P.W. Atkins, Physical Chemistry (Freeman, New York, 2006) 28. W.J. Moore, Basic Physical Chemistry (Prentice-Hall, New York, 1983) 29. M. Schaefer, M. Sommer, M. Karplus, J. Phys. Chem. B 101, 1663 (1997) 30. R.A. Raupp-Kossmann, C. Scharnagl, Chem. Phys. Lett. 336, 177 (2001) 31. H. Risken, The Fokker–Planck Equation (Springer, Berlin, 1989) 32. H.A. Kramers, Physica 7, 284 (1941)
366 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43.
44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. 69.
References P.O.J. Scherer, Chem. Phys. Lett. 214, 149 (1993) E.W. Montroll, H. Scher, J. Stat. Phys. 9, 101 (1973) E.W. Montroll, G.H. Weiss, J. Math. Phys. 6, 167 (1965) A.A. Zharikov, P.O.J. Scherer, S.F. Fischer, J. Phys. Chem. 98, 3424 (1994) A.A. Zharikov, S.F. Fischer, Chem. Phys. Lett. 249, 459 (1995) S.R. de Groot, P. Mazur, Irreversible Thermodynamics (Dover, New York, 1984) D.G. Miller, Faraday Discuss. Chem. Soc. 64, 295 (1977) R. Paterson, Faraday Discuss. Chem. Soc. 64, 304 (1977) D.E. Goldman, J. Gen. Physiol. 27, 37 (1943) A.L. Hodgkin, A.F. Huxley, J. Physiol. 117, 500 (1952) J. Kenyon, How to solve and program the Hodgkin–Huxley equations (http://134.197.54.225/department/Faculty/kenyon/Hodgkin&Huxley/ pdfs/HH.Program.pdf) J. Vreeken, A friendly introduction to reaction–diffusion systems, Internship Paper, AILab, Zurich, July 2002 E. Pollak, P. Talkner, Chaos 15, 026116 (2005) F.T. Gucker, R.L. Seifert, Physical Chemistry (W.W. Norton, New York, 1966) S. Glasstone, K.J. Laidler, H. Eyring, The Theory of Rate Processes (McGrawHill, New York, 1941) K.J. Laidler, Chemical Kinetics, 3rd edn. (Harper & Row, New York, 1987) G.A. Natanson, J. Chem. Phys. 94, 7875 (1991) R.A. Marcus, Annu. Rev. Phys. Chem. 15, 155 (1964) R.A. Marcus, N. Sutin, Biochim. Biophys. Acta 811, 265 (1985) R.A. Marcus, Angew. Chem. Int. Ed. Engl. 32, 1111 (1993) A.M. Kuznetsov, J. Ulstrup, Electron Transfer in Chemistry and Biology (Wiley, Chichester, 1998), p. 49 P.W. Anderson, J. Phys. Soc. Jpn. 9, 316 (1954) R. Kubo, J. Phys. Soc. Jpn. 6, 935 (1954) R. Kubo, in Fluctuations, Relaxations and Resonance in Magnetic Systems, ed. by D. ter Haar (Plenum, New York, 1962) T.G. Heil, A. Dalgarno, J. Phys. B 12, 557 (1979) L.D. Landau, Phys. Z. Sowjetun. 1, 88 (1932) C. Zener, Proc. R. Soc. Lond. A 137, 696 (1932) E.S. Kryachko, A.J.C. Varandas, Int. J. Quant. Chem. 89, 255 (2001) E.N. Economou, Green’s Functions in Quantum Physics (Springer, Berlin, 1978) H. Scheer, W.A. Svec, B.T. Cope, M.H. Studler, R.G. Scott, J.J. Katz, JACS 29, 3714 (1974) A. Streitwieser, Molecular Orbital Theory for Organic Chemists (Wiley, New York, 1961) J.E. Lennard-Jones, Proc. R. Soc. Lond. A 158, 280 (1937) B.S. Hudson, B.E. Kohler, K. Schulten, in Excited States, ed. by E.C. Lin (Academic, New York, 1982), pp. 1–95 B.E. Kohler, C. Spangler, C. Westerfield, J. Chem. Phys. 89, 5422 (1988) T. Polivka, J.L. Herek, D. Zigmantas, H.-E. Akerlund, V. Sundstrom, Proc. Natl. Acad. Sci. USA 96, 4914 (1999) B. Hudson, B. Kohler, Synth. Met. 9, 241 (1984) B.E. Kohler, J. Chem. Phys. 93, 5838 (1990)
References 70. 71. 72. 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. 83. 84. 85. 86. 87. 88. 89. 90. 91. 92. 93. 94. 95. 96. 97. 98. 99. 100. 101. 102. 103. 104.
367
W.T. Simpson, J. Chem. Phys. 17, 1218 (1949) H. Kuhn, J. Chem. Phys. 17, 1198 (1949) M. Gouterman, J. Mol. Spectrosc. 6, 138 (1961) M. Gouterman, J. Chem. Phys. 30, 1139 (1959) M. Gouterman, G.H. Wagniere, L.C. Snyder, J. Mol. Spectrosc. 11, 108 (1963) C. Weiss, The Porphyrins, vol. III (Academic, New York, 1978), p. 211 D. Spangler, G.M. Maggiora, L.L. Shipman, R.E. Christofferson, J. Am. Chem. Soc. 99, 7470 (1977) B.R. Green, D.G. Durnford, Annu. Rev. Plant Physiol. Plant Mol. Biol. 47, 685 (1996) H.A. Frank et al., Pure Appl. Chem. 69, 2117 (1997) R.J. Cogdell et al., Pure Appl. Chem. 66, 1041 (1994) H. van Amerongen, R. van Grondelle, J. Phys. Chem. B 105, 604 (2001) T. Förster, Ann. Phys. 2, 55 (1948) T. Förster, Disc. Faraday Trans. 27, 7 (1965) D.L. Dexter, J. Chem. Phys. 21, 836 (1953) F.J. Kleima, M. Wendling, E. Hofmann, E.J.G. Peterman, R. van Grondelle, H. van Amerongen, Biochemistry 39, 5184 (2000) T. Pullerits, M. Chachisvilis, V. Sundström, J. Phys. Chem. 100, 10787 (1996) R.C. Hilborn, Am. J. Phys. 50, 982 (1982), revised 2002 S.H. Lin, Proc. R. Soc. Lond. A 335, 51 (1973) S.H. Lin, W.Z. Xiao, W. Dietz, Phys. Rev. E 47, 3698 (1993) MOLEKEL 4.0, P. Fluekiger, H.P. Luethi, S. Portmann, J. Weber, Swiss National Supercomputing Centre CSCS, Manno Switzerland, 2000 J. Deisenhofer, H. Michel, Science 245, 1463 (1989) J. Deisenhofer, O. Epp, K. Miki, R. Huber, H. Michel, Nature 318, 618 (1985) J. Deisenhofer, O. Epp, K. Miki, R. Huber, H. Michel, J. Mol. Biol. 180, 385 (1984) H. Michel, J. Mol. Biol. 158, 567 (1982) E.W. Knapp, P.O.J. Scherer, S.F. Fischer, BBA 852, 295 (1986) P.O.J. Scherer, S.F. Fischer, in Chlorophylls, ed. by H. Scheer (CRC, Boca Raton, 1991), pp. 1079–1093 R.J. Cogdell, A. Gall, J. Koehler, Q. Rev. Biophys. 39, 227 (2006) M. Ketelaars et al., Biophys. J. 80, 1591 (2001) M. Matsushita et al., Biophys. J. 80, 1604 (2001) K. Sauer, R.J. Cogdell, S.M. Prince, A. Freer, N.W. Isaacs, H. Scheer, Photochem. Photobiol. 64, 564 (1996) A. Freer, S. Prince, K. Sauer, M. Papitz, A. Hawthorntwaite-Lawless, G. McDermott, R. Cogdell, N.W. Isaacs, Structure 4, 449 (1996) M.Z. Papiz, S.M. Prince, A. Hawthorntwaite-Lawless, G. McDermott, A. Freer, N.W. Isaacs, R.J. Cogdell, Trends Plant Sci. 1, 198 (1996) G. McDermott, S.M. Prince, A. Freer, A. Hawthorntwaite-Lawless, M. Papitz, R. Cogdell, Nature 374, 517 (1995) N.W. Isaacs, R.J. Cogdell, A. Freer, S.M. Prince, Curr. Opin. Struct. Biol. 5, 794 (1995) E.E. Abola, F.C. Bernstein, S.H. Bryant, T.F. Koetzle, J. Weng, in Crystallographic Databases – Information Content, Software Systems, Scientific Applications, ed. by F.H. Allen, G. Bergerhoff, R. Sievers (Data Commission of the International Union of Crystallography, Cambridge, 1987), p. 107
368
References
105. F.C. Bernstein, T.F. Koetzle, G.J.B. Williams, E.F. Meyer Jr., M.D. Brice, J.R. Rodgers, O. Kennard, T. Shimanouchi, M. Tasumi, J. Mol. Biol. 112, 535 (1977) 106. R. Sayle, E.J. Milner-White, Trends Biochem. Sci. 20, 374 (1995) 107. Y. Zhao, M.-F. Ng, G.H. Chen, Phys. Rev. E 69, 032902 (2004) 108. A.M. van Oijen, M. Ketelaars, J. Köhler, T.J. Aartsma, J. Schmidt, Science 285, 400 (1999) 109. C. Hofmann, T.J. Aartsma, J. Köhler, Chem. Phys. Lett. 395, 373 (2004) 110. S.E. Dempster, S. Jang, R.J. Silbey, J. Chem. Phys. 114, 10015 (2001) 111. S. Jang, R.J. Silbey, J. Chem. Phys. 118, 9324 (2003) 112. K. Mukai, S. Abe, Chem. Phys. Lett. 336, 445 (2001) 113. R.G. Alden, E. Johnson, V. Nagarajan, W.W. Parson, C.J. Law, R.G. Cogdell, J. Phys. Chem. B 101, 4667 (1997) 114. V. Novoderezhkin, R. Monshouwer, R. van Grondelle, Biophys. J. 77, 666 (1999) 115. M.K. Sener, K. Schulten, Phys. Rev. E 65, 31916 (2002) 116. A. Warshel, S. Creighton, W.W. Parson, J. Phys. Chem. 92, 2696 (1988) 117. M. Plato, C.J. Winscom, in The Photosynthetic Bacterial Reaction Center, ed. by J. Breton, A. Vermeglio (Plenum, New York, 1988), p. 421 118. P.O.J. Scherer, S.F. Fischer, Chem. Phys. 131, 115 (1989) 119. L.Y. Zhang, R.A. Friesner, Proc. Natl. Acad. Sci. USA 95, 13603 (1998) 120. M. Gutman, Structure 12, 1123 (2004) 121. P. Mitchell, Biol. Rev. Camb. Philos. Soc. 41, 445 (1966) 122. H. Luecke, H.-T. Richter, J.K. Lanyi, Science 280, 1934 (1998) 123. R. Neutze et al., BBA 1565, 144 (2002) 124. D. Borgis, J.T. Hynes, J. Chem. Phys. 94, 3619 (1991) 125. F. Juelicher, in Transport and Structure: Their Competitive Roles in Biophysics and Chemistry, ed. by S.C. Müller, J. Parisi, W. Zimmermann. Lecture Notes in Physics (Springer, Berlin, 1999) 126. A. Parmeggiani, F. Juelicher, A. Ajdari, J. Prost, Phys. Rev. E 60, 2127 (1999) 127. F. Jülicher, A. Ajdari, J. Prost, Rev. Mod. Phys. 69, 1269 (1997) 128. F. Jülicher, J. Prost, Progr. Theor. Phys. Suppl. 130, 9 (1998) 129. H. Qian, J. Math. Chem. 27, 219 (2000) 130. M.E. Fisher, A.B. Kolomeisky, arXiv:cond-mat/9903308v1 131. N.D. Mermin, J. Math. Phys. 7, 1038 (1966) 132. M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.J. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S. Su, T.L. Windus, M. Dupuis, J.A. Montgomery J. Comput. Chem. 14, 1347 (1993)
Index
activation, 155 Arrhenius, 155 ATP, 314 bacteriorhodopsin, 301 binary mixture, 21 binodal, 32 Boltzmann, 46 Born, 40, 49 Born energy, 43, 44 Born radius, 43 Born–Oppenheimer, 195, 201, 303 Brownian motion, 87, 93, 311 Brownian ratchet, 318 carotenoids, 247 Chapman, 52 charge separation, 173 charged cylinder, 49 charged sphere, 48 chemical potential, 127 chlorophylls, 247 collision, 159 common tangent, 32 Condon, 201 contribution, 127 coordination number, 25 correlated process, 113 critical coupling, 28 ctrw, 112 cumulant expansion, 211 Debye, 45 Debye length, 46
dephasing, 209 dephasing function, 211 detailed balance, 316 diabatic states, 219 dichotomous, 107 dielectric continuum, 38 diffusion, 90 dimer, 270 dipole, 43 dipole moment, 201 disorder, 285 disorder entropy, 24 dispersive kinetics, 107 displaced harmonic oscillator, 205 displaced oscillator model, 240 double layer, 52, 57 double well, 305 Einstein coefficient, 265 electrolyte, 45 electron transfer rate, 176, 184, 189 elementary reactions, 75 energy gap law, 242 energy transfer, 259 energy transfer rate, 267 entropic elasticity, 4 entropy production, 128–130 enzymatic catalysis, 80 equilibrium constant, 155, 157, 163 excited state, 229 excitonic interaction, 261 external force, 6
370
Index
Fitzhugh–Nagumo, 149 Flory, 19 Flory parameter, 21, 26 Fokker–Planck, 87 force–extension relation, 8, 10, 15 Franck–Condon, 189 Franck-Condon factor, 202, 207 free electron model, 249 freely jointed chain, 3 Förster, 267 Gaussian, 213 Goüy, 52 heat of mixing, 21 Henderson–Hasselbalch, 64 Huggins, 19 Hückel, 45, 251 ideal gas, 30 implicit model, 37 interaction, 11 interaction energy, 20, 25 ion pair, 40 isotope effect, 166 Juelicher, 311 Klein–Kramers equation, 97 Kramers, 101 Kramers–Moyal expansion, 96 ladder model, 233, 237 Langevin function, 8 lattice model, 19 LCAO, 250 level shift, 232 LH2, 280 Lorentzian, 213 Marcus, 173 master equation, 98, 107 maximum term, 7, 15 Maxwell, 94, 159 mean force, 37, 38 membrane potential , 140 metastable, 28 Michaelis Menten, 82 mixing entropy, 19, 25
molecular aggregates, 274 molecular motors, 311 motional narrowing, 213 multipole expansion, 41, 262 Nagumo, 150 Nernst equation, 144 Nernst potential, 140, 142 Nernst–Planck equation, 135 nonadiabatic interaction, 196, 197 nonequilibrium, 125 octatetraene, 252 Onsager, 130 optical transitions, 201 pKa , 66 parallel mode approximation, 199 phase diagram, 30, 33 point, 222 Poisson, 114 Poisson–Boltzmann equation, 46 polarization, 177 polyenes, 249 polymer solutions, 19 power time law, 119 Prost, 311 protonation equilibria, 61 ratchet, 311 reaction order, 76 reaction potential, 38 reaction rate, 75 reaction variable, 75 reorganization energy, 186, 189, 208 saddle point, 339 Schiff base, 302 Slater determinant, 248, 259 Smoluchowski, 98, 311 solvation energy, 43, 49 spectral diffusion, 209 spinodal, 31 stability criterion, 26, 28 state crossing, 219 stationary, 130 Stern, 57 time correlation function, 202, 205 titration, 62, 63, 65, 70
Index transition state, 162 tunneling, 305 two-component model, 9
van Laar, 21 van’t Hoff, 156
uncorrelated process, 113
waiting time, 112